Dependency Injection — Professional Level¶
Table of Contents¶
- Introduction
- How wire Generates Code
- How dig Resolves at Runtime
- How fx Builds on dig
- Performance Trade-offs
- Escape Analysis Around DI Containers
- Cold-Start Cost in Practice
- Reflection vs Codegen vs Manual: Numbers
- Memory and Allocation Patterns
- Diagnostic Tools
- Operational Playbook
- Summary
Introduction¶
The professional treatment of DI is not "how to use a framework" but "what each framework actually does at the runtime, allocator, and binary-layout level." Two frameworks may both be called "DI containers" and yet differ by orders of magnitude in startup cost, allocation pressure, and binary size.
This file is for engineers who write, profile, or operate Go services where the wiring layer itself shows up in profiles — typically because the binary starts thousands of times per day (serverless, CLIs), or because the service has hundreds of components and startup is on the critical path of a deploy.
After reading this you will:
- Know what `cmd/wire` does mechanically when it generates an injector.
- Trace how `dig` resolves a graph using `reflect`.
- Understand where `fx` adds cost on top of `dig`.
- Predict cold-start behaviour by reading the wiring layer.
- Know which tools answer "did my DI container leak this allocation?"
How wire Generates Code¶
google/wire ships two things:
- The runtime package `github.com/google/wire` — tiny; it exports placeholder types like `wire.NewSet` and `wire.Bind` that exist only so injector skeletons compile.
- The CLI tool `cmd/wire` — the actual code generator.
The pipeline¶
When you run `wire ./...`, the tool:
- Parses the package with `go/packages` to load Go ASTs and type information for every file under the `wireinject` build constraint.
- Locates injector skeletons — functions whose body is a single `panic(wire.Build(...))` call.
- Resolves the provider set statically. Each `wire.NewSet` argument is collected; bindings are checked for ambiguity, missing providers, and cycles.
- Topologically sorts the providers needed to produce the injector's return type.
- Emits a Go file (`wire_gen.go`) with the build constraint `!wireinject`. The body of the injector is now real code: a sequence of provider calls, error returns, and cleanup composition.
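For reference, the input to this pipeline — the injector skeleton — is a stub under the `wireinject` tag whose body never executes; provider names here (`LoadConfig`, `OpenDB`, and so on) are hypothetical:

```go
//go:build wireinject

package wire

import "github.com/google/wire"

// InitializeApp is an injector skeleton. The panic never runs:
// wire reads the wire.Build call statically and emits the real
// body into wire_gen.go under the !wireinject constraint.
func InitializeApp() (*UserService, func(), error) {
	panic(wire.Build(LoadConfig, OpenDB, NewUserRepo, NewUserService))
}
```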
What the generated code looks like¶
For a graph Config -> DB -> UserRepo -> UserService:
// Code generated by Wire. DO NOT EDIT.
//go:build !wireinject
package wire
func InitializeApp() (*UserService, func(), error) {
cfg, err := LoadConfig()
if err != nil {
return nil, nil, err
}
db, cleanupDB, err := OpenDB(cfg)
if err != nil {
return nil, nil, err
}
repo := NewUserRepo(db)
svc := NewUserService(repo)
return svc, func() {
cleanupDB()
}, nil
}
This is exactly what hand-written wiring would produce — no reflection, no maps, no interface boxing for graph machinery. wire's only runtime cost is the cost of the constructors themselves.
Cleanup composition¶
When two providers each return a cleanup function, wire composes them in reverse order of construction:
This matches "release in reverse order of acquisition" — the LIFO discipline you would write by hand.
Compile-time guarantees¶
wire's analyser fails the build on:
- Missing provider for a required type.
- Ambiguous binding (two providers for the same type without explicit
wire.Bind). - Cyclic dependencies.
- A provider declared but unreachable.
These checks happen before any code runs, which is the entire point of choosing wire over a runtime container.
How dig Resolves at Runtime¶
go.uber.org/dig is the runtime container fx builds on. Reading its source clarifies what reflection-based DI actually does.
Key data structures¶
- `Container` — an in-memory registry of providers and resolved values, keyed by `reflect.Type`.
- `provider` — wraps a Go function value plus its parameter and result types extracted via `reflect`.
- `paramList` — the list of `reflect.Type`s the provider expects, computed once at registration.
- `resultList` — the list of `reflect.Type`s the provider returns.
Resolution algorithm¶
When you call `container.Invoke(fn)`:
- `dig` reflects on `fn` to find each parameter type.
- For each parameter, it looks up a provider in the registry by `reflect.Type`.
- If that provider has unresolved dependencies of its own, recurse.
- Once all dependencies are resolved, call the provider with the resolved values via `reflect.Value.Call`.
- Cache the resolved value in a singleton-scoped map.
- Finally, call `fn` itself with the assembled parameters.

Each `reflect.Value.Call` allocates a slice for arguments and a slice for return values. For a graph of N providers, you pay N reflection calls plus all their allocations.
Container groups, optional, named¶
dig's expressive features (dig.In structs, dig.Out structs, named values, optional values, value groups) are implemented as additional metadata on the parameter / result lists. Each adds a tag-parsing pass at registration time and a small per-call check at resolve time. None changes the asymptotic cost.
Where errors surface¶
dig resolves lazily, at `Invoke` time. A provider can be registered and never called; if its dependencies are missing, you find out only when something `Invoke`s it. This is the trade-off compared to wire: error discovery is deferred to runtime in exchange for runtime flexibility.
How fx Builds on dig¶
fx is a thin layer of opinions on top of dig:
- `fx.Module` — bundles a set of providers and lifecycle hooks.
- `fx.Lifecycle` — exposes `Append(fx.Hook)` for ordered startup/shutdown hooks.
- `fx.Invoke` — registers a function to be called once during startup.
- `fx.New` — accepts modules, providers, and invocations, and produces an `*fx.App`.
When App.Run() is called:
- `fx` builds the underlying `dig` container by `Provide`-ing each function.
- It walks the registered `Invoke` calls, asking the container to resolve each one. This triggers the provider chain.
- Lifecycle hooks captured during `Invoke` are sorted by registration order, and `OnStart` hooks run sequentially.
- The process blocks waiting for a signal; on shutdown it runs `OnStop` hooks in reverse order.
fx's cost on top of dig: tag parsing, hook bookkeeping, and the lifecycle event loop. None of these are large individually. The dominant cost is still the reflection in dig.
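Assembled, the pieces above look like this minimal fx application — the `Server` type is hypothetical, the `fx` calls are the library's real API:

```go
package main

import (
	"context"
	"fmt"

	"go.uber.org/fx"
)

type Server struct{}

func NewServer() *Server { return &Server{} }

func main() {
	app := fx.New(
		fx.Provide(NewServer), // registered with the dig container underneath
		fx.Invoke(func(lc fx.Lifecycle, s *Server) {
			// Invoke forces resolution of *Server and lets us hang hooks.
			lc.Append(fx.Hook{
				OnStart: func(ctx context.Context) error {
					fmt.Println("server starting")
					return nil
				},
				OnStop: func(ctx context.Context) error {
					fmt.Println("server stopping")
					return nil
				},
			})
		}),
	)
	app.Run() // blocks until a signal; then OnStop hooks run in reverse order
}
```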
Performance Trade-offs¶
Manual¶
- Startup: zero overhead beyond constructors.
- Runtime call: zero overhead beyond constructors.
- Binary size: smallest — no DI machinery linked in.
wire¶
- Build time: adds a `wire` code-generation step (~hundreds of milliseconds for medium graphs).
- Startup: identical to manual; the generated code calls each constructor directly.
- Runtime call: identical to manual.
- Binary size: identical to manual.
dig / fx¶
- Build time: unchanged.
- Startup: linear in the number of providers; each adds reflection calls and a few allocations. For 200 providers, expect 5–50 ms (varies by hardware and graph shape) plus the cost of the constructors themselves.
- Runtime call: zero (after startup, you hold real Go values).
- Binary size: `dig` and `fx` together add a few hundred KB to the linked binary.
Asymptotic summary¶
| Tool | Startup cost | Per-call cost | Binary size cost |
|---|---|---|---|
| Manual | O(constructors) | O(constructors) | minimal |
| wire | O(constructors) | O(constructors) | minimal |
| dig/fx | O(constructors) + reflection | O(constructors) | ~100s of KB |
For most services, the constant factor on startup is invisible. For serverless / CLI / high-deploy workloads, it can be the dominant deploy lag.
Escape Analysis Around DI Containers¶
A specific technical concern: containers that hold any-typed values force their inputs onto the heap.
Why¶
reflect.Value.Call accepts []reflect.Value. A reflect.Value is itself an interface boxing the underlying value. When a constructor returns a struct that the container then stores in a map[reflect.Type]any, the value escapes to the heap — even if your constructor wrote it on the stack.
In manual / wire code, the compiler can often prove a returned struct never escapes (caller takes its address only briefly), or that a small struct fits in registers and never allocates. With dig / fx, every constructed value passes through interface boxing at least once.
Practical impact¶
For singletons created once and held forever, this is irrelevant — one extra heap allocation per type at startup. For transient values (constructors called in a hot path) it would matter, but in well-designed DI graphs, transient construction is rare; the container is for singletons.
Verifying the claim¶
Build a small benchmark with the same constructors wired manually and through fx, and compare with -gcflags="-m" to inspect escape decisions:
You will typically see dig's reflection paths cause constructor results to be classified as escaping; manual wiring rarely does.
Cold-Start Cost in Practice¶
For a service with ~150 providers (typical of a mid-size microservice), measured cold-start contribution of the DI layer alone:
| Approach | Median DI startup time | Allocations during startup |
|---|---|---|
| Manual | < 1 ms | low |
| wire | < 1 ms | low |
| fx (with reflection) | 10–50 ms | thousands |
The numbers depend heavily on your provider count, hardware, and Go version. The shape is consistent: fx is one to two orders of magnitude slower at startup than the alternatives.
For a long-running server this is invisible. For a CLI invoked thousands of times in CI, or a serverless function with cold starts, this matters.
Profiling startup¶
import (
	"os"
	"runtime/pprof"

	"go.uber.org/fx"
)

func main() {
	// Check the error: a nil *os.File makes the profile silently empty.
	f, err := os.Create("startup.pprof")
	if err != nil {
		panic(err)
	}
	pprof.StartCPUProfile(f)
	app := fx.New( /* modules, providers, invocations */ )
	pprof.StopCPUProfile()
	f.Close()
	app.Run()
}
A CPU profile during startup makes the cost concrete: dig.(*Container).provide, reflect.Value.Call, and friends usually dominate.
Reflection vs Codegen vs Manual: Numbers¶
A reasonable micro-benchmark on a recent Apple Silicon machine, Go 1.22, 100-provider graph (numbers are illustrative; run your own to confirm):
BenchmarkManualBuild-10 1500 760 µs/op 24 KB allocs
BenchmarkWireBuild-10 1450 790 µs/op 24 KB allocs
BenchmarkFxBuild-10 30 41,000 µs/op 2.1 MB allocs
fx does ~50× the allocations and ~50× the wall-time of manual / wire for the same graph. Most of the difference is reflection.
Always measure your own graph. The shape and number of providers matter more than any rule of thumb.
Memory and Allocation Patterns¶
Manual / wire¶
A graph of n singletons allocates approximately n heap objects (one per provider's returned value, or fewer if the compiler can prove non-escape). After startup, no further allocations occur from the wiring layer.
dig / fx¶
In addition to the n singletons:
- The container holds a registry: typically O(n) map entries keyed by `reflect.Type`.
- Each provider call allocates an arguments slice and a return-values slice for `reflect.Value.Call`.
- `dig.In` / `dig.Out` structs go through reflection-driven copying.
- `fx.Lifecycle` keeps a slice of hook closures; each closure captures its environment.
Total: ~2–4 KB per provider during startup, freed after if not retained. Long-lived overhead is small (a few KB for the registry) but visible in heap profiles.
Diagnostic Tools¶
runtime/pprof¶
- CPU profile during startup reveals which providers dominate, and whether reflection is on the hot path.
- Heap profile after startup shows `dig` / `fx` registry retention.
go tool trace¶
A trace captured during startup shows the exact call sequence of providers. Useful for spotting unexpected serial work.
go test -bench with -benchmem¶
Compare alternative wiring strategies head-to-head. The numbers convince stakeholders better than arguments.
dig's own visualisation¶
go.uber.org/dig exposes Visualize to write a Graphviz dot file representing the resolved graph. Pipe to dot -Tsvg > graph.svg for a picture of what your container thinks your graph is.
wire diagnostics¶
`wire diff` shows how the checked-in `wire_gen.go` differs from what `wire` would generate now. Useful in CI to ensure nobody forgot to regenerate.
Operational Playbook¶
When startup time is "too slow"¶
- Profile. Confirm `dig` / `fx` is on the hot path, not (e.g.) `sql.Open` waiting for a slow database.
- Move slow constructors out of startup — open the DB after `app.Start()` returns, in a background goroutine, with a readiness flag.
- If the wiring layer itself is the cost, migrate from `fx` to `wire` or to manual wiring. The migration is mechanical: each `fx.Provide` becomes a normal Go call.
When binary size is a concern¶
fx and dig together add ~300 KB to the linked binary. For embedded targets or memory-constrained runtimes (some serverless) this matters. The same migration applies.
When debugging "why is this not injected?"¶
- `wire`: the build error tells you exactly what is missing. Read it.
- `dig` / `fx`: the runtime panic includes a graph trace. Re-read it; the type names are all there.
- Manual: it does not compile. The error is on a specific line.
When operating multiple environments¶
wire cannot easily switch implementations at runtime. Common pattern: keep the bulk of the graph in wire, but inject the environment-specific decision at the seam (a single function in main that picks between two adapters and feeds the choice into wire.InitializeApp).
Summary¶
DI frameworks differ on a single axis that matters most: when does the wiring run? wire runs at build time and produces ordinary Go code, paying nothing at runtime. dig/fx run at process startup and pay reflection cost proportional to the graph size. Manual is build time implicitly: it is just code.
The professional view is that the choice has measurable cost. Manual and wire are nearly identical at runtime — both are just sequences of constructor calls. fx adds reflection cost that is invisible for long-running services and meaningful for short-lived ones. Profiles reveal it; measurements decide it.
For most production Go services the answer is "manual or wire, decide on team taste." For services where startup time is on the critical path of deployment, the answer is "manual or wire, period." For polyglot service frameworks where uniformity across many services trumps per-service startup, fx earns its place.
Either way, knowing what each tool does at the metal is what allows the staff engineer to defend the choice — instead of arguing about taste.