Dependency Injection — Professional Level

Table of Contents

  1. Introduction
  2. How wire Generates Code
  3. How dig Resolves at Runtime
  4. How fx Builds on dig
  5. Performance Trade-offs
  6. Escape Analysis Around DI Containers
  7. Cold-Start Cost in Practice
  8. Reflection vs Codegen vs Manual: Numbers
  9. Memory and Allocation Patterns
  10. Diagnostic Tools
  11. Operational Playbook
  12. Summary

Introduction

The professional treatment of DI is not "how to use a framework" but "what each framework actually does at the runtime, allocator, and binary-layout level." Two frameworks may both be called "DI containers" and yet differ by orders of magnitude in startup cost, allocation pressure, and binary size.

This file is for engineers who write, profile, or operate Go services where the wiring layer itself shows up in profiles — typically because the binary starts thousands of times per day (serverless, CLIs), or because the service has hundreds of components and startup is on the critical path of a deploy.

After reading this you will:

  • Know what cmd/wire does mechanically when it generates an injector.
  • Trace how dig resolves a graph using reflect.
  • Understand where fx adds cost on top of dig.
  • Predict cold-start behaviour by reading the wiring layer.
  • Know which tools answer "did my DI container leak this allocation?"

How wire Generates Code

google/wire ships two things:

  1. The runtime package github.com/google/wire — tiny; it exports placeholder types like wire.NewSet and wire.Bind that exist only so injector skeletons compile.
  2. The CLI tool cmd/wire — the actual code generator.

The pipeline

When you run wire ./..., the tool:

  1. Parses the package with go/packages to load Go ASTs and type information for every file guarded by the wireinject build tag.
  2. Locates injector skeletons — functions whose body is a single panic(wire.Build(...)) call.
  3. Resolves the provider set statically. Each wire.NewSet argument is collected; bindings are checked for ambiguity, missing providers, and cycles.
  4. Topologically sorts the providers needed to produce the injector's return type.
  5. Emits a Go file (wire_gen.go) with the build constraint !wireinject. The body of the injector is now real code: a sequence of provider calls, error returns, and cleanup composition.

What the generated code looks like

For a graph Config -> DB -> UserRepo -> UserService:

// Code generated by Wire. DO NOT EDIT.
//go:build !wireinject

package wire

func InitializeApp() (*UserService, func(), error) {
    cfg, err := LoadConfig()
    if err != nil {
        return nil, nil, err
    }
    db, cleanupDB, err := OpenDB(cfg)
    if err != nil {
        return nil, nil, err
    }
    repo := NewUserRepo(db)
    svc := NewUserService(repo)
    return svc, func() {
        cleanupDB()
    }, nil
}

This is exactly what hand-written wiring would produce — no reflection, no maps, no interface boxing for graph machinery. wire's only runtime cost is the cost of the constructors themselves.

Cleanup composition

When two providers each return a cleanup function, wire composes them in reverse order of construction:

return svc, func() {
    cleanupRepo()
    cleanupDB()
}, nil

This matches "release in reverse order of acquisition" — the LIFO discipline you would write by hand.

Compile-time guarantees

wire's analyser fails the build on:

  • Missing provider for a required type.
  • Ambiguous binding (two providers for the same type without explicit wire.Bind).
  • Cyclic dependencies.
  • A provider declared but unreachable.

These checks happen before any code runs, which is the entire point of choosing wire over a runtime container.


How dig Resolves at Runtime

go.uber.org/dig is the runtime container fx builds on. Reading its source clarifies what reflection-based DI actually does.

Key data structures

  • Container — an in-memory registry of providers and resolved values, keyed by reflect.Type.
  • provider — wraps a Go function value plus its parameter and result types extracted via reflect.
  • paramList — the list of reflect.Types the provider expects, computed once at registration.
  • resultList — the list of reflect.Types the provider returns.

Resolution algorithm

When you call container.Invoke(fn):

  1. dig reflects on fn to find each parameter type.
  2. For each parameter, it looks up a provider in the registry by reflect.Type.
  3. If the parameter type's provider has unresolved dependencies, recurse.
  4. Once all dependencies are resolved, Call the provider with the resolved values via reflect.Value.Call.
  5. Cache the resolved value in a singleton-scoped map.
  6. Finally, call fn itself with the assembled parameters.

Each reflect.Value.Call allocates a slice for arguments and a slice for return values. For a graph of N providers, you pay N reflection calls plus all their allocations.

Container groups, optional, named

dig's expressive features (dig.In structs, dig.Out structs, named values, optional values, value groups) are implemented as additional metadata on the parameter / result lists. Each adds a tag-parsing pass at registration time and a small per-call check at resolve time. None changes the asymptotic cost.

Where errors surface

dig resolves lazily: nothing is constructed until Invoke. A provider can be registered and never called; if its dependencies are missing, you find out only when an Invoke needs them. This is the trade-off compared to wire: error discovery is deferred to runtime in exchange for runtime flexibility.


How fx Builds on dig

fx is a thin layer of opinions on top of dig:

  • fx.Module — bundles a set of providers and lifecycle hooks.
  • fx.Lifecycle — a struct with Append(fx.Hook) for ordered startup/shutdown.
  • fx.Invoke — registers a function to be called once during startup.
  • fx.New — accepts modules / providers / invocations and produces an *fx.App.

When App.Run() is called:

  1. fx builds the underlying dig container by Provide-ing each function.
  2. It walks the registered Invoke calls, asking the container to resolve each one. This triggers the provider chain.
  3. Lifecycle hooks appended during those invocations run in registration order; each OnStart hook completes before the next begins.
  4. The process blocks waiting for a signal; on shutdown it runs OnStop in reverse order.

fx's cost on top of dig: tag parsing, hook bookkeeping, and the lifecycle event loop. None of these are large individually. The dominant cost is still the reflection in dig.


Performance Trade-offs

Manual

  • Startup: zero overhead beyond constructors.
  • Runtime call: zero overhead beyond constructors.
  • Binary size: smallest — no DI machinery linked in.

wire

  • Build time: adds a wire generate step (~hundreds of milliseconds for medium graphs).
  • Startup: identical to manual; the generated code calls each constructor directly.
  • Runtime call: identical to manual.
  • Binary size: identical to manual.

dig / fx

  • Build time: unchanged.
  • Startup: linear in the number of providers; each adds reflection calls and a few allocations. For 200 providers, expect 5–50 ms (varies by hardware and graph shape) plus the cost of the constructors themselves.
  • Runtime call: zero (after startup, you hold real Go values).
  • Binary size: dig and fx together add a few hundred KB to the linked binary.

Asymptotic summary

Tool      Startup cost                   Per-call cost     Binary size cost
Manual    O(constructors)                O(constructors)   minimal
wire      O(constructors)                O(constructors)   minimal
dig/fx    O(constructors) + reflection   O(constructors)   ~100s of KB

For most services, the constant factor on startup is invisible. For serverless / CLI / high-deploy workloads, it can be the dominant deploy lag.


Escape Analysis Around DI Containers

A specific technical concern: containers that hold any-typed values force their inputs onto the heap.

Why

reflect.Value.Call accepts []reflect.Value, and reflect.ValueOf boxes its argument into an interface before wrapping it. When a constructor's result then lands in a map[reflect.Type]any, the value escapes to the heap — even if your constructor could otherwise have kept it on the stack.

In manual / wire code, the compiler can often prove a returned struct never escapes (caller takes its address only briefly), or that a small struct fits in registers and never allocates. With dig / fx, every constructed value passes through interface boxing at least once.

Practical impact

For singletons created once and held forever, this is irrelevant — one extra heap allocation per type at startup. For transient values (constructors called in a hot path) it would matter, but in well-designed DI graphs, transient construction is rare; the container is for singletons.

The verifying technique

Build a small benchmark with the same constructors wired manually and through fx, and compare with -gcflags="-m" to inspect escape decisions:

go build -gcflags="-m=2" ./... 2>&1 | grep "escapes to heap"

You will typically see dig's reflection paths cause constructor results to be classified as escaping; manual wiring rarely does.


Cold-Start Cost in Practice

For a service with ~150 providers (typical of a mid-size microservice), measured cold-start contribution of the DI layer alone:

Approach               Median DI startup time   Allocations during startup
Manual                 < 1 ms                   low
wire                   < 1 ms                   low
fx (with reflection)   10–50 ms                 thousands

The numbers depend heavily on your provider count, hardware, and Go version. The shape is consistent: fx is one to two orders of magnitude slower at startup than the alternatives.

For a long-running server this is invisible. For a CLI invoked thousands of times in CI, or a serverless function with cold starts, this matters.

Profiling startup

import (
    "log"
    "os"
    "runtime/pprof"

    "go.uber.org/fx"
)

func main() {
    f, err := os.Create("startup.pprof")
    if err != nil {
        log.Fatal(err)
    }
    pprof.StartCPUProfile(f)
    app := fx.New(...) // modules, providers, and invocations go here
    pprof.StopCPUProfile()
    f.Close()
    app.Run()
}

A CPU profile during startup makes the cost concrete: dig.(*Container).provide, reflect.Value.Call, and friends usually dominate.


Reflection vs Codegen vs Manual: Numbers

A reasonable micro-benchmark on a recent Apple Silicon machine, Go 1.22, 100-provider graph (numbers are illustrative; run your own to confirm):

BenchmarkManualBuild-10           1500    760 µs/op    24 KB allocs
BenchmarkWireBuild-10             1450    790 µs/op    24 KB allocs
BenchmarkFxBuild-10                 30  41,000 µs/op   2.1 MB allocs

fx does roughly 50× the wall-time and closer to 90× the allocations of manual / wire for the same graph. Most of the difference is reflection.

Always measure your own graph. The shape and number of providers matter more than any rule of thumb.


Memory and Allocation Patterns

Manual / wire

A graph of n singletons allocates approximately n heap objects (one per provider's returned value, or fewer if the compiler can prove non-escape). After startup, no further allocations occur from the wiring layer.

dig / fx

In addition to the n singletons:

  • The container holds a registry: typically O(n) map entries keyed by reflect.Type.
  • Each provider call allocates an arguments slice and a return-values slice for reflect.Value.Call.
  • dig.In / dig.Out structs go through reflection-driven copying.
  • fx.Lifecycle keeps a slice of hook closures; each closure captures its environment.

Total: roughly 2–4 KB per provider during startup, freed afterwards if not retained. The long-lived overhead is small (a few KB for the registry) but visible in heap profiles.


Diagnostic Tools

runtime/pprof

  • CPU profile during startup reveals which providers dominate, and whether reflection is on the hot path.
  • Heap profile after startup shows dig/fx registry retention.

go tool trace

A trace captured during startup shows the exact call sequence of providers. Useful for spotting unexpected serial work.

go test -bench with -benchmem

Compare alternative wiring strategies head-to-head. The numbers convince stakeholders better than arguments.

dig's own visualisation

go.uber.org/dig exposes Visualize to write a Graphviz dot file representing the resolved graph. Pipe to dot -Tsvg > graph.svg for a picture of what your container thinks your graph is.

wire diagnostics

wire diff prints the difference between the wire_gen.go on disk and what the tool would generate now. Useful in CI to ensure nobody forgot to regenerate.


Operational Playbook

When startup time is "too slow"

  1. Profile. Confirm dig/fx is on the hot path, not (e.g.) sql.Open waiting for a slow database.
  2. Move slow constructors out of startup — open the DB after app.Start() returns, in a background goroutine, with a readiness flag.
  3. If the wiring layer itself is the cost, migrate from fx to wire or to manual. The migration is mechanical: each fx.Provide becomes a normal Go call.

When binary size is a concern

fx and dig together add ~300 KB to the linked binary. For embedded targets or memory-constrained runtimes (some serverless) this matters. The same migration applies.

When debugging "why is this not injected?"

  • wire: the build error tells you exactly what is missing. Read it.
  • dig/fx: the runtime panic includes a graph trace. Re-read it; the type names are all there.
  • Manual: it does not compile. The error is on a specific line.

When operating multiple environments

wire cannot easily switch implementations at runtime. Common pattern: keep the bulk of the graph in wire, but inject the environment-specific decision at the seam (a single function in main that picks between two adapters and feeds the choice into wire.InitializeApp).


Summary

DI frameworks differ on the single axis that matters most: when does the wiring run? wire runs at build time and produces ordinary Go code, paying nothing at runtime. dig/fx run at process startup and pay reflection cost proportional to the graph size. Manual wiring is implicitly build-time: it is just code.

The professional view is that the choice has measurable cost. Manual and wire are nearly identical at runtime — both are just sequences of constructor calls. fx adds reflection cost that is invisible for long-running services and meaningful for short-lived ones. Profiles reveal it; measurements decide it.

For most production Go services the answer is "manual or wire, decide on team taste." For services where startup time is on the critical path of deployment, the answer is "manual or wire, period." For polyglot service frameworks where uniformity across many services trumps per-service startup, fx earns its place.

Either way, knowing what each tool does at the metal is what allows the staff engineer to defend the choice — instead of arguing about taste.