Simplifying Method Calls — Optimize¶
12 cases where the refactor is correct but introduces a perf cost.
Optimize 1 — Builder allocation in hot path (Java)¶
for (Request r : requests) {
HttpResponse resp = client.send(HttpRequest.builder()
.url(r.url())
.header("X", "Y")
.build());
}
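The loop above rebuilds an identical request on every iteration. A hedged sketch of the reuse idea, hoisting the constant parts out of the loop (`HttpRequest` here is a minimal stand-in record, not the real client type):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ReusableRequest {
    // Stand-in for the real request type: an immutable record with a cheap copy.
    record HttpRequest(String url, Map<String, String> headers) {
        HttpRequest withUrl(String u) { return new HttpRequest(u, headers); }
    }

    // Build the invariant part once; per call, copy only what varies.
    static List<HttpRequest> buildAll(List<String> urls) {
        Map<String, String> shared = Map.of("X", "Y"); // allocated once, not per request
        HttpRequest template = new HttpRequest("", shared);
        List<HttpRequest> out = new ArrayList<>(urls.size());
        for (String url : urls) {
            out.add(template.withUrl(url)); // one small record; no builder, no new map
        }
        return out;
    }

    public static void main(String[] args) {
        List<HttpRequest> sent = buildAll(List.of("https://a.example", "https://b.example"));
        // Every request shares the same header-map instance.
        System.out.println(sent.get(0).headers() == sent.get(1).headers()); // true
    }
}
```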
Cost & Fix
Each iteration allocates a Builder, an internal Map for the headers, and varargs arrays. At 10K req/s with 5 headers each, that is significant GC pressure. **Fix options:**
1. **Reusable request:** if the request is mostly the same, build it once and tweak only what varies per call.
2. **Skip the builder:** provide a one-shot factory for the common case.
3. **Profile first.** Modern JVMs eliminate many builder allocations via escape analysis.
Optimize 2 — Introduce Parameter Object adds allocation (Java)¶
In a hot path scanning 1M intervals: 1M DateRange allocations.
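The shape in question, as a minimal sketch (`DateRange` is a hypothetical record standing in for the extracted parameter object, alongside the primitive fallback discussed below):

```java
public class IntervalScan {
    // The refactored form: a parameter object allocated per check.
    record DateRange(long start, long end) {
        boolean contains(long t) { return t >= start && t < end; }
    }

    // Hot path: one DateRange per interval unless escape analysis elides it.
    static int countContaining(long[] starts, long[] ends, long t) {
        int n = 0;
        for (int i = 0; i < starts.length; i++) {
            if (new DateRange(starts[i], ends[i]).contains(t)) n++;
        }
        return n;
    }

    // Fallback if EA fails: pass the primitives directly, zero allocation.
    static int countContainingPrimitive(long[] starts, long[] ends, long t) {
        int n = 0;
        for (int i = 0; i < starts.length; i++) {
            if (t >= starts[i] && t < ends[i]) n++;
        }
        return n;
    }

    public static void main(String[] args) {
        long[] s = {0, 10, 20}, e = {5, 15, 25};
        System.out.println(countContaining(s, e, 12));          // 1
        System.out.println(countContainingPrimitive(s, e, 12)); // 1
    }
}
```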
Cost & Fix
Records are short-lived; escape analysis usually eliminates the allocation. Verify with `-XX:+PrintEliminateAllocations`. If EA fails:
1. Pass primitives directly.
2. Use object pools (rarely worth it).
3. Wait for Project Valhalla's value types.
For typical cases: zero observable cost.
Optimize 3 — Replace Exception with Test for Map.get (Java)¶
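A sketch of the before/after (the cache map and default value are hypothetical):

```java
import java.util.Map;

public class LookupOnce {
    // After a naive Replace Exception with Test: two lookups per hit.
    static String doubleLookup(Map<String, String> cache, String key) {
        if (cache.containsKey(key)) {   // lookup #1
            return cache.get(key);      // lookup #2
        }
        return "default";
    }

    // Single lookup: Map.get returns null for a missing key.
    // (Assumes the map never stores null values; use getOrDefault otherwise.)
    static String singleLookup(Map<String, String> cache, String key) {
        String v = cache.get(key);
        return v != null ? v : "default";
    }

    public static void main(String[] args) {
        Map<String, String> cache = Map.of("a", "1");
        System.out.println(singleLookup(cache, "a")); // 1
        System.out.println(singleLookup(cache, "b")); // default
    }
}
```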
Cost & Fix
Two map lookups instead of one. For 1M req/sec, 2× the hash function calls. **Fix:** Use `Map.get` directly — it returns null for a missing key, so the original never threw an exception anyway. Lesson: Replace Exception with Test should not double-lookup; use APIs that return an Optional or null.
Optimize 4 — Factory method allocation when caller wanted reuse (Java)¶
public static Money zero() { return new Money(0, USD); }
Money total = items.stream().map(Item::price).reduce(Money.zero(), Money::plus);
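A cached-constant variant, sketched with simplified fields (a `long` amount and a currency string; the real Money type presumably differs):

```java
public class Money {
    private static final Money ZERO_USD = new Money(0, "USD"); // built once

    private final long amount;
    private final String currency;

    private Money(long amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }

    // Returns the shared instance instead of allocating on each call.
    public static Money zero() { return ZERO_USD; }

    public Money plus(Money other) { return new Money(amount + other.amount, currency); }

    public long amount() { return amount; }

    public static void main(String[] args) {
        // Same instance every time: no per-call allocation.
        System.out.println(Money.zero() == Money.zero()); // true
    }
}
```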
Cost & Fix
`Money.zero()` allocates a new Money on each call. The reduce starts with only one allocation, but if `Money.zero()` is called in many places, it adds up. **Fix:** cache the instance in a `static final` constant. For currency-parameterized zeros, `Money.zero("USD")` can consult a per-currency cache.
Optimize 5 — Encapsulate Downcast doesn't help in tight loop (Java)¶
public Reading lastReading() { return (Reading) readings.last(); }
for (int i = 0; i < N; i++) {
Reading r = station.lastReading(); // checkcast in the helper
process(r);
}
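The generics version of the helper above, sketched (`Reading` and the comparator are hypothetical):

```java
import java.util.SortedSet;
import java.util.TreeSet;

public class Station {
    record Reading(long timestamp, double value) {}

    // Strongly typed: SortedSet<Reading> instead of a raw SortedSet.
    private final SortedSet<Reading> readings =
            new TreeSet<>((a, b) -> Long.compare(a.timestamp(), b.timestamp()));

    public void add(Reading r) { readings.add(r); }

    // No checkcast anywhere: last() is already a Reading.
    public Reading lastReading() { return readings.last(); }

    public static void main(String[] args) {
        Station s = new Station();
        s.add(new Reading(1, 10.0));
        s.add(new Reading(2, 20.0));
        System.out.println(s.lastReading().value()); // 20.0
    }
}
```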
Cost & Fix
Each call goes through `lastReading`, which performs the cast. The JIT typically inlines the helper and may eliminate the cast. If inlining doesn't happen (e.g., `lastReading` grows large), the cast is paid on every iteration. **Fix:** strongly type the underlying collection (e.g., `SortedSet<Reading>` instead of a raw set), so there is no cast at all. Lesson: Encapsulate Downcast is a stop-gap; generics are the proper fix.
Optimize 6 — Varargs logging in hot path (Java)¶
For 1M debug log calls/sec (when debug is enabled): array allocations dominate.
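The mechanism, sketched with a toy logger (SLF4J's real `Logger` has the same shape: dedicated one- and two-argument overloads, with an `Object[]` allocated per call only for the varargs form):

```java
public class LogOverloads {
    // Varargs form: the compiler allocates an Object[] at every call site.
    static String debug(String fmt, Object... args) {
        return format(fmt, args);
    }

    // Dedicated overloads: no array for the 1- and 2-argument hot paths.
    static String debug(String fmt, Object a) {
        return fmt.replaceFirst("\\{\\}", String.valueOf(a));
    }

    static String debug(String fmt, Object a, Object b) {
        return debug(debug(fmt, a), b);
    }

    // Substitutes each {} placeholder in order, like SLF4J's message format.
    static String format(String fmt, Object[] args) {
        String out = fmt;
        for (Object arg : args) out = out.replaceFirst("\\{\\}", String.valueOf(arg));
        return out;
    }

    public static void main(String[] args) {
        // Resolves to the (String, Object, Object) overload: zero array allocations.
        System.out.println(debug("user {} did {}", "alice", "login"));
    }
}
```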
Cost & Fix
SLF4J provides parameterized overloads for one and two arguments (Log4j2's API offers more); beyond that, a varargs `Object[]` is allocated per call. **Fix:** check that you're using the parameterized form, not string concatenation. SLF4J / Log4j handle the rest.
Optimize 7 — Functional Options in Go hot path (Go)¶
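The pattern and the clone-a-template fix discussed below, sketched (the `Server` fields and options are hypothetical):

```go
package main

import "fmt"

// Functional options as usually written.
type Server struct {
	Host string
	Port int
}

type Option func(*Server)

func WithPort(p int) Option { return func(s *Server) { s.Port = p } }

// NewServer allocates the struct and runs every option closure.
func NewServer(opts ...Option) *Server {
	s := &Server{Host: "localhost", Port: 8080}
	for _, o := range opts {
		o(s)
	}
	return s
}

func main() {
	// Hot-path fix: run the options once to build a configured template...
	template := NewServer(WithPort(9090))
	// ...then copy the struct per use; no option closures are re-executed.
	for i := 0; i < 3; i++ {
		srv := *template // plain value copy
		srv.Port += i
		fmt.Println(srv.Port)
	}
}
```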
Cost & Fix
Each `NewServer` call allocates the Server struct and executes each `Option` closure. Closures in Go are cheap (the allocations are typically elided), but if you're constructing thousands of servers per second, it adds up. **Fix:** build a "template" Server once and clone it, or use a builder pattern that mutates an existing struct. For most cases: just measure. Go's escape analysis eliminates many such allocations.
Optimize 8 — Replace Constructor with Factory hides slow path (Java)¶
public static User from(UserDto dto) {
return new User(
dto.id,
validateEmail(dto.email), // expensive
loadPreferences(dto.id) // database hit
);
}
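One way to keep a factory like the one above cheap is to defer the expensive parts. A hedged sketch (`fromShallow` and the lazy `preferences()` accessor are illustrative names; the Supplier stands in for the database call):

```java
import java.util.Map;
import java.util.function.Supplier;

public class LazyUser {
    private final String id;
    private final String email;
    private final Supplier<Map<String, String>> prefsLoader;
    private Map<String, String> prefs; // loaded on first access

    private LazyUser(String id, String email, Supplier<Map<String, String>> loader) {
        this.id = id;
        this.email = email;
        this.prefsLoader = loader;
    }

    // Cheap: no validation round-trips, no database hit.
    public static LazyUser fromShallow(String id, String email,
                                       Supplier<Map<String, String>> loader) {
        return new LazyUser(id, email, loader);
    }

    // The expensive load happens here, once, only if actually needed.
    public Map<String, String> preferences() {
        if (prefs == null) prefs = prefsLoader.get();
        return prefs;
    }

    public static void main(String[] args) {
        LazyUser u = LazyUser.fromShallow("42", "a@example.com",
                () -> Map.of("theme", "dark")); // stands in for the DB call
        System.out.println(u.preferences().get("theme")); // dark
    }
}
```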
Cost & Fix
Naive factories can hide expensive work, and a caller may invoke them in a loop, multiplying the cost. **Fix:** make the cost visible:
1. Document it: "May hit the database."
2. Provide a fast variant: `User.fromShallow(dto)` that doesn't load preferences, with a separate `loadFull()`.
3. Use lazy loading: `user.preferences()` loads on first call.
Optimize 9 — Hide Method prevents JIT specialization (Java)¶
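The two shapes being compared, sketched (the method names and values are hypothetical; amounts are in basis points to keep the arithmetic exact):

```java
public class Pricing {
    // Before Hide Method: final — callable more widely, but not overridable.
    protected final int baseDiscountBps() { return 500; }

    // After Hide Method: private — equally monomorphic from the JIT's view.
    private int loyaltyDiscountBps() { return 200; }

    public int discountBps() { return baseDiscountBps() + loyaltyDiscountBps(); }

    public static void main(String[] args) {
        System.out.println(new Pricing().discountBps()); // 700
    }
}
```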
Cost & Fix
Both forms are monomorphic (final OR private — no override is possible), so the JIT inlines either. **No perf difference.** Prefer `private` for encapsulation, and don't worry about performance here.
Optimize 10 — Parameterize Method introduces branch in hot loop (Java)¶
double raise(double percentage) {
if (percentage > 0.10) auditLargeRaise(percentage); // ❌
return salary * (1 + percentage);
}
vs. the old:
double tenPercentRaise() { return salary * 1.10; }
double fifteenPercentRaise() { return salary * 1.15; }
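The split suggested below keeps the hot path branch-free while preserving the audit for callers who need it. A sketch (class shape and audit body are hypothetical):

```java
public class Employee {
    private final double salary;

    Employee(double salary) { this.salary = salary; }

    // Hot path: no branch, no audit check.
    double raise(double percentage) {
        return salary * (1 + percentage);
    }

    // Rare path: callers who need the audit opt in explicitly.
    double largeRaise(double percentage) {
        auditLargeRaise(percentage);
        return raise(percentage);
    }

    private void auditLargeRaise(double percentage) {
        System.out.println("audit: raise of " + percentage);
    }

    public static void main(String[] args) {
        Employee e = new Employee(100.0);
        System.out.println(e.raise(0.5));       // 150.0
        System.out.println(e.largeRaise(0.25)); // logs the audit, then 125.0
    }
}
```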
Cost & Fix
The parameterized version adds a branch per call. For 10M calls/sec, the branch cost adds up (~1 ns/call), though branch prediction usually handles it. **Fix:** if the audit applies only to rare, known cases, split it into a separate method: most hot callers use `raise`, and auditors call `largeRaise`.
Optimize 11 — Replace Parameter with Method Call doubles work (Java)¶
double total() {
return base() + tax(base()); // base() called twice
}
private double base() { return computeExpensive(); }
vs.
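The cached variant, sketched with a call counter standing in for `computeExpensive()` so the single evaluation is observable:

```java
public class Invoice {
    private int calls = 0; // counts expensive computations, for illustration

    double base() {
        calls++;           // stands in for computeExpensive()
        return 100.0;
    }

    double tax(double base) { return base * 0.25; }

    // Cached: base() is evaluated exactly once per total().
    double total() {
        double base = base();
        return base + tax(base);
    }

    int expensiveCalls() { return calls; }

    public static void main(String[] args) {
        Invoice inv = new Invoice();
        System.out.println(inv.total());          // 125.0
        System.out.println(inv.expensiveCalls()); // 1
    }
}
```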
Cost & Fix
If `base()` is expensive, calling it twice doubles the cost. The JIT may common-subexpression-eliminate calls it can prove pure, but not anything with side effects or non-trivial operations. **Fix:** cache the value in a local variable once and pass it along. Lesson: Replace Parameter with Method Call is fine for cheap pure expressions; cache for expensive ones.
Optimize 12 — Throw + catch in tight loop (Java)¶
for (String s : input) {
try {
result.add(Integer.parseInt(s));
} catch (NumberFormatException e) {
// skip invalid
}
}
For input where 50% are invalid: 50% of iterations pay the throw cost (~5-50µs each).
Cost & Fix
For 1M items × 50% invalid × 25µs = 12.5 seconds of throw overhead. **Fix:** Use a non-throwing parse:
for (String s : input) {
var maybe = tryParse(s);
if (maybe.isPresent()) result.add(maybe.get());
}
private static Optional<Integer> tryParse(String s) {
    if (s == null || s.isEmpty()) return Optional.empty();
    int i = (s.charAt(0) == '-') ? 1 : 0;
    if (i == s.length()) return Optional.empty(); // "-" alone is not a number
    long value = 0; // accumulate in a long so overflow stays detectable
    for (; i < s.length(); i++) {
        char c = s.charAt(i);
        if (c < '0' || c > '9') return Optional.empty();
        value = value * 10 + (c - '0');
        if (value > 2147483648L) return Optional.empty(); // beyond int range
    }
    if (s.charAt(0) == '-') value = -value;
    if (value < Integer.MIN_VALUE || value > Integer.MAX_VALUE) return Optional.empty();
    return Optional.of((int) value);
}
Patterns¶
| Refactor | Cost |
|---|---|
| Builder per call | GC pressure |
| Parameter Object | Allocation if EA fails |
| Two map lookups | 2× hash work |
| Factory not cached | Repeated construction |
| Encapsulate Downcast | Cast in loop |
| Varargs in logging | Array per call |
| Functional options | Closure allocations |
| Hidden expensive factory | Repeated DB hits |
| Parameterized branch | Branch per call |
| Method call instead of cached temp | Double work |
| Exceptions in hot loop | µs/throw |