Unlimited Goroutines — Specification¶
Table of Contents¶
- Introduction
- Formal Anti-Pattern Definition
- The go Statement (Language Spec)
- Runtime Behaviour of Goroutine Creation
- The GMP Model
- Runtime Guarantees
- Runtime Non-Guarantees
- Memory Model Interactions
- Resource Limits
- GOMAXPROCS Interactions
- Stack Growth Specification
- Channel Backpressure Specification
- Semaphore Semantics
- errgroup.SetLimit Specification
- Goroutine Lifetime
- Invariants of a Bounded Fan-Out
- Failure Modes
- Detection Specification
- GODEBUG Knobs
- Links to Go Runtime Source
- References
Introduction¶
This document specifies, as precisely as possible, the runtime behaviour relevant to the "unlimited goroutines" anti-pattern. It distinguishes:
- Normative behaviour (what Go guarantees): the Language Specification, the Memory Model.
- Implementation behaviour (what the current Go runtime does): scheduler, runtime, GC.
- Documented behaviour (what packages and tools promise):
runtime, sync, golang.org/x/sync.
The anti-pattern itself is not in any Go specification. It is a pattern, not a language construct. This document defines it formally and links it to the underlying mechanisms.
Formal Anti-Pattern Definition¶
Definition. A Go program exhibits the unlimited goroutines anti-pattern when, at some statement S, the count of go invocations executed since program start is unbounded as a function of an input I to which S is reachable.
Formally, let count(S, input) be the number of times S is executed when the program is fed input. The program exhibits the anti-pattern at S if there exists no constant K such that for all valid input, count(S, input) ≤ K.
Equivalently (informally): if some input can drive the number of executions of S arbitrarily high, so that the spawn count grows with the input rather than being fixed by the program text, then S exhibits the anti-pattern.
Note that this definition is about count, not concurrent liveness. A loop that spawns N goroutines sequentially, where each terminates before the next starts, is technically unbounded by count but bounded in concurrency. In practice the anti-pattern always involves concurrent liveness — the spawned goroutines overlap.
Refined definition (with concurrency). A program exhibits the unbounded-concurrent-fan-out anti-pattern if, for some bounded resource R consumed by spawned goroutines, the simultaneous count of goroutines holding R is unbounded as a function of the input.
R is typically:
- Memory (each goroutine consumes stack + heap).
- File descriptors.
- Connections (database, HTTP).
- Tokens at a downstream service.
When R is exceeded, the program fails.
The go Statement (Language Spec)¶
From the Go Programming Language Specification:
A "go" statement starts the execution of a function call as an independent concurrent thread of control, or goroutine, within the same address space.
GoStmt = "go" Expression .
The expression must be a function or method call; as with defer statements, parentheses around a built-in function call are illegal. Function value and parameters are evaluated as usual in the calling goroutine, but unlike with a regular call, program execution does not wait for the invoked function to complete. Instead, the function begins executing independently in a new goroutine. When the function terminates, its goroutine also terminates. If the function has any return values, they are discarded when the function completes.
Key formal points:
- go is a statement, not an expression.
- The argument to go must be a function or method call, not a function value.
- Function values and arguments are evaluated in the calling goroutine, before the new goroutine starts.
- Return values are discarded.
The specification places no bound on how many goroutines a program can have. It also places no bound on what go does at runtime — the implementation is free to manage scheduling, stack allocation, and so on.
This permissive specification is what enables the anti-pattern: there is no language-level prohibition on writing for { go ... }. The bound is the programmer's responsibility.
Runtime Behaviour of Goroutine Creation¶
When go f() executes, the Go runtime performs the following (per runtime.newproc in src/runtime/proc.go):
- Allocate or reuse a g struct. The runtime maintains a free list (gFree). If a free g is available, it is reused; otherwise a new one is allocated (~1 KB plus stack).
- Allocate the initial stack. The default initial stack is 2 KB (the runtime's _StackMin is 2048 bytes); the stack is allocated from the heap.
- Copy the function arguments. The arguments to f are copied onto the new goroutine's stack. This copy happens eagerly, before the goroutine is ever scheduled.
- Initialize the g struct. The status is set to _Grunnable. The PC is set to point at f. The stack pointer is set.
- Enqueue on the local P's run queue. The new goroutine is placed on the calling P's local run queue. If the queue is full (256 entries), half is moved to the global queue.
- Possibly wake a parked M. If no M is currently looking for work and the local P has work, the runtime may wake a parked M to handle it.
Total cost: typically 200-500 ns under uncontended conditions, dominated by allocation. Under heavy fan-out, the cost rises because of run queue rebalancing and lock contention on the global queue.
Implications for the anti-pattern¶
- The cost of go is non-zero. At 1 million go statements per second, the runtime spends ~500 ms/s just in newproc.
- Stack allocation creates GC pressure proportional to the spawn rate.
- Local run queue saturation moves work to the global queue, which is a contention point.
g0 and scheduling stack¶
Each M has a g0, a special goroutine with a large stack used for scheduling decisions. When an M is running goroutine code, the user g is active. When it is making scheduling decisions, it switches to g0.
g0's stack is typically 8 KB. It is not counted in goroutine totals; it is per-M.
The GMP Model¶
The Go runtime scheduler maps:
- G (goroutine): a unit of execution. Many per program.
- M (machine, an OS thread): runs goroutines. As many as needed.
- P (processor, a scheduling context): owns a local run queue. GOMAXPROCS of them.
Relationships:
- Every running G is bound to an M.
- Every running M is bound to a P (otherwise it cannot run user code).
- A P with no work tries to steal from another P, then sleeps.
- An M with no P (e.g., blocked on syscall) cannot run user code.
The state diagram:
G states:
_Gidle (initial)
_Grunnable (queued, can be scheduled)
_Grunning (currently executing on an M)
_Gsyscall (executing a syscall)
_Gwaiting (blocked on a channel, mutex, etc.)
_Gdead (finished, available for reuse)
M states:
Running (bound to P, running G)
Spinning (bound to P, looking for work)
Idle (no P, no work, parked)
P states:
Running (bound to M, has G to run)
Idle (no M)
Scheduling decisions¶
When an M's bound P has no goroutine to run, the M (via findRunnable):
- Checks its local P's run queue.
- Checks the global run queue (every 61st iteration to prevent starvation).
- Checks the net poller for ready I/O.
- Tries to steal half of another P's local queue.
- If still empty, parks (the M becomes idle).
The constant 61 is prime, chosen to avoid falling into lockstep with other periodic events in the scheduler.
newproc and goready¶
When a goroutine is created (newproc) or unblocked (goready), it is placed on the local P's run queue (if there's capacity) or the global queue (if local is full).
The wakep function may wake a sleeping M if there is one and there is excess work.
Runtime Guarantees¶
The following are guarantees from the Go specification and runtime documentation:
- go starts a new concurrent thread. The goroutine is logically independent from the caller; the caller may proceed to the next statement.
- Arguments are evaluated in the caller. Before the new goroutine starts, its function arguments are fully evaluated; the values the goroutine sees are those captured by the caller at the go statement.
- Goroutines are preemptible (since Go 1.14). A goroutine in a tight loop can be preempted by the runtime via async preemption.
- main exits → program exits. When func main returns, the program terminates immediately, including all other goroutines.
- Channel operations follow happens-before. A receive from a channel happens-after the corresponding send (formally specified in the Memory Model).
- sync.Mutex.Unlock happens-before subsequent Locks of the same mutex.
- Garbage collection does not lose pointers. A goroutine holding a pointer to an object keeps that object live.
- Stack growth is invisible to user code. The runtime may copy a stack to grow it; references to stack variables are updated. (Except for unsafe.Pointer aliasing, which is documented as undefined.)
Runtime Non-Guarantees¶
The following are explicitly not guaranteed:
- The order in which goroutines run. The scheduler may run goroutines in any order on any M.
- Fairness. A starved goroutine is not guaranteed to eventually run. In practice, the runtime provides reasonable fairness, but the spec does not promise it.
- The exact moment of preemption. The runtime decides when to preempt, based on heuristics.
- The exact stack size. A goroutine's stack may grow or shrink at any GC. Code that relies on a specific stack size is incorrect.
- runtime.NumGoroutine() precision. The count may include or exclude goroutines in transition states.
- Goroutine identity. Goroutines have no public identity. The g struct has a goid, but it is not part of the API.
- Goroutine local storage. There is none. Use context.Context.
- Maximum goroutine count. There is no documented maximum. The practical limit is memory.
- The cost of go. Documented as "lightweight," but the exact cost is not specified.
- GOMAXPROCS=0 semantics. A non-positive value of the environment variable is ignored; the runtime falls back to the default (the CPU count). Passing 0 to runtime.GOMAXPROCS queries the current value without changing it.
These non-guarantees matter for the anti-pattern because relying on any of them (e.g., "the runtime will starve the goroutines I don't want to run") is incorrect.
Memory Model Interactions¶
The Go Memory Model (go.dev/ref/mem) defines when one goroutine's reads observe another's writes. Key statements relevant to the anti-pattern:
- go f() happens-before f's execution starts. Whatever the caller did before the go statement is observable to f on entry.
- f's exit happens-before any synchronised observation of its completion. If goroutine A spawned f and later observes f's completion (via a channel close, a waitgroup signal, etc.), A observes everything f did.
- Channels synchronise. A send ch <- v synchronises with the corresponding receive <-ch (or vice versa, on a buffered channel).
- Mutex.Unlock synchronises with the next Lock.
- sync/atomic operations are sequentially consistent.
Implication for unbounded fan-out¶
If the parent spawns N goroutines and does not wait for them, the parent has no guarantee that any of them has executed when the parent proceeds. The lifetime of the spawned goroutines is not bound to the parent's; only an explicit synchronisation makes it so.
This is why structured concurrency (errgroup.Wait) is important: it provides the synchronisation point that makes the lifetimes nested.
Resource Limits¶
The following resources can be exhausted by unlimited goroutines:
Heap memory¶
Each goroutine starts with a 2 KB stack but typically grows to 8-64 KB under realistic call chains. Heap allocations made during execution add to this. There is no language-level limit; the practical limit is the host's memory.
In a container with a memory limit, the kernel kills the process at the limit (OOMKilled).
Stack¶
A goroutine's stack can grow up to runtime/debug.SetMaxStack (default 1 GB). Reaching this triggers panic runtime: goroutine stack exceeds 1000000000-byte limit.
This rarely happens for unbounded fan-out — the limit is per goroutine, not aggregate. It's typically reached by runaway recursion.
Thread (OS) count¶
When a goroutine performs a blocking syscall, an M is dedicated to it. If many goroutines block in syscalls simultaneously, many Ms are created. runtime/debug.SetMaxThreads (default 10 000) caps this.
Exceeding it: runtime: program exceeds 10000-thread limit.
File descriptors¶
Each TCP connection, file open, etc. consumes an FD. The OS ulimit (typically 65 535 in production) caps the total.
Exceeding it: syscall errors EMFILE (this process) or ENFILE (system-wide).
Database connection pool¶
Application-level limit, typically 10-200, configured via db.SetMaxOpenConns. When the pool is exhausted, new queries block waiting for a connection.
TCP ephemeral ports¶
Outbound connections use ports 32 768 - 60 999 (Linux default). About 28 000 ports.
Exceeded: bind: address already in use or connect: cannot assign requested address.
Goroutine count itself¶
There is no hard runtime limit, but practical limits emerge:
- GC pauses grow with goroutine count.
- Scheduler overhead grows.
- Stack scanning takes proportional time.
At millions of goroutines, the runtime works but performance is degraded.
GOMAXPROCS Interactions¶
GOMAXPROCS is the maximum number of OS threads that can simultaneously execute Go code. Defaults to runtime.NumCPU().
Effect on the anti-pattern¶
GOMAXPROCS does not limit goroutine count. It limits the parallelism of execution. With GOMAXPROCS=4 and 10 000 goroutines:
- Only 4 execute at any instant.
- 9 996 are runnable (queued) or waiting.
The 9 996 still consume memory.
When to lower GOMAXPROCS¶
In a container with CPU limit < host CPU count, default GOMAXPROCS over-allocates Ps. Use go.uber.org/automaxprocs or set explicitly.
When to raise GOMAXPROCS¶
Rarely. The default is usually correct. Raising beyond CPU count causes scheduler thrash without throughput gain.
Setting at runtime¶
runtime.GOMAXPROCS(n) sets the limit and returns the previous value. If n < 1, the setting is not changed.
Reading¶
runtime.GOMAXPROCS(0) returns the current value without changing it.
Stack Growth Specification¶
A goroutine's stack starts at _StackMin (currently 2048 bytes). When the goroutine's call depth exceeds the stack, the runtime:
- Detects the overflow via a stack guard check at every function prologue.
- Allocates a new stack twice the size.
- Copies the old stack to the new.
- Updates all pointers on the stack to reference the new locations.
- Frees the old stack.
Cost: linear in the stack size. For deep recursion, growth is amortised O(1) per call.
The stack can also shrink during GC, if usage is < 1/4 of the allocated size.
Implication for the anti-pattern¶
If a goroutine's stack grows to 64 KB and 1 million goroutines exist, total stack memory = 64 GB. The runtime cannot do this; OOM occurs.
Bounded fan-out keeps the count low, keeping aggregate stack memory predictable.
Channel Backpressure Specification¶
A buffered channel of capacity N:
- Holds up to N values in a ring buffer.
- ch <- v blocks if the buffer is full and no receiver is waiting.
- <-ch blocks if the buffer is empty and no sender is waiting.

An unbuffered channel:
- Has capacity 0.
- ch <- v blocks until a receiver is ready.
- <-ch blocks until a sender is ready.

A closed channel:
- Sends panic.
- Receives drain any buffered values first, then return the zero value (with ok = false in the two-value form).
FIFO order¶
Channels are FIFO for waiters:
- Senders block in the order they arrive.
- Receivers block in the order they arrive.
- When a value becomes available, the first waiter gets it.
This FIFO behaviour is the basis for using channels as semaphores: a chan struct{} of capacity N is a FIFO counting semaphore.
select semantics¶
A select with multiple ready cases chooses uniformly at random. With default, the default runs immediately if no case is ready.
Semaphore Semantics¶
golang.org/x/sync/semaphore.Weighted semantics:
NewWeighted(n int64) *Weighted¶
Creates a semaphore with total capacity n. n must be non-negative.
Acquire(ctx context.Context, n int64) error¶
Attempts to acquire n units. If cur + n <= capacity, succeeds immediately. Otherwise blocks until either:
- Capacity becomes available (returns nil).
- ctx is cancelled (returns ctx.Err()).
n may exceed the semaphore's capacity; such an Acquire can never succeed, and blocks until ctx is cancelled.
Release(n int64)¶
Releases n units. Releasing more units than are currently held panics (semaphore: released more than held).
After a release, waiters are signalled from the front of the FIFO queue for as long as each waiter's request fits in the available capacity; the scan stops at the first waiter that does not fit.
TryAcquire(n int64) bool¶
Non-blocking acquire. Returns true if n units were acquired; false otherwise.
Invariants¶
- After construction, cur = 0.
- After Acquire(n) succeeds, cur += n.
- After Release(n), cur -= n.
- cur remains in [0, capacity] when used correctly.
Release enforces the lower bound: releasing more than was acquired panics rather than driving cur negative. The semaphore cannot detect leaked units (an Acquire never matched by a Release).
FIFO¶
Waiters are queued in FIFO order. A waiter of size N cannot be passed over by a waiter of size 1 that arrived later, even if the size-1 request would fit.
This prevents starvation of large requests but means small requests may wait longer than necessary. Designers wanting non-FIFO behaviour should not use this semaphore.
errgroup.SetLimit Specification¶
golang.org/x/sync/errgroup.Group.SetLimit(n int):
Semantics¶
- n < 0: no limit.
- n == 0: every Go call blocks forever (effectively unusable; do not call SetLimit(0)).
- n > 0: at most n goroutines run concurrently in this group.
Go(f func() error)¶
When SetLimit(n) is set with n > 0:
- If fewer than n goroutines are currently running in this group, Go starts f immediately.
- If n goroutines are running, Go blocks until one finishes, then starts f.
TryGo(f func() error) bool¶
- If fewer than n goroutines are running, starts f and returns true.
- If n goroutines are running, returns false without starting f.
Wait() error¶
Blocks until all Go-started goroutines complete. Returns the first non-nil error returned by any of them: the first error in completion order is kept, and the rest are discarded.
Context interaction¶
If created with WithContext, the group has a derived context gctx. If any goroutine returns a non-nil error, gctx is cancelled (and the error is recorded). Other goroutines that observe gctx.Done() should exit.
Wait cancels gctx after all goroutines complete, regardless of error.
Restrictions¶
- SetLimit must not be called while any goroutines in the group are active; doing so panics.
- SetLimit(0) makes Go block forever; do not use it.
- Go may block; TryGo does not.
Implementation detail¶
The limit is implemented internally as a chan token of capacity n. Go sends to the channel before spawning; the spawned goroutine receives from it on exit. This is a chan struct{} semaphore.
Goroutine Lifetime¶
A goroutine's lifetime begins when go f() executes. It ends when:
- f returns normally.
- f panics (an unrecovered panic terminates the entire program, not just the goroutine).
- main returns (the runtime terminates the process, ending all goroutines).
- runtime.Goexit() is called (the goroutine terminates after running deferred functions).
Goroutine reuse¶
Internally, the runtime reuses g structs from gFree. But each g only runs one goroutine at a time; a "reuse" happens after termination.
Liveness and GC¶
A goroutine is live (not GC-eligible) if it is in any state other than _Gdead. The GC traces from every live goroutine's stack and finds reachable heap objects.
Implication for the anti-pattern¶
Leaked goroutines (waiting forever on a channel, for example) are live. They are not GC'd. Their stacks consume memory indefinitely.
Invariants of a Bounded Fan-Out¶
A correctly bounded fan-out maintains these invariants:
Invariant 1: At-most-N concurrent¶
At any instant, at most N goroutines spawned by this fan-out are alive.
Enforcement: semaphore, channel, or errgroup.SetLimit.
Invariant 2: Eventual termination¶
Every spawned goroutine eventually terminates.
Enforcement: bounded work per goroutine, context cancellation.
Invariant 3: No-leak¶
After the fan-out's parent function returns, no spawned goroutine remains alive.
Enforcement: wg.Wait(), g.Wait(), or equivalent join.
Invariant 4: Error propagation¶
If any goroutine errors, the parent observes the error (or at least one error if multiple occur).
Enforcement: errgroup error aggregation, or a custom result channel.
Invariant 5: Cancellation propagation¶
If the parent's context is cancelled, all spawned goroutines observe the cancellation.
Enforcement: passing the context to every goroutine, using errgroup.WithContext.
Invariant 6: Resource release¶
Every resource acquired by a goroutine is released before the goroutine terminates.
Enforcement: defer statements for every acquire.
Violating any of these invariants makes the fan-out incorrect or leak-prone.
Failure Modes¶
Formal failure modes of unbounded fan-out:
Failure mode 1: Heap exhaustion¶
- Goroutine count × per-goroutine memory > available heap.
- Symptom: out-of-memory panic, OOMKill, slow GC.
- Detection: heap profile, OOM logs.
Failure mode 2: Stack memory exhaustion¶
- Sum of goroutine stack sizes > available memory.
- Special case of heap exhaustion when stacks dominate.
- Detection: pprof goroutine count, per-goroutine stack sizes.
Failure mode 3: FD exhaustion¶
- Open connections > ulimit -n.
- Symptom: EMFILE errors, accept: too many open files.
- Detection: count of open FDs vs ulimit.
Failure mode 4: Connection pool exhaustion¶
- Goroutines waiting for connections > pool size.
- Symptom: latency climbs; eventually timeouts.
- Detection: db.Stats().WaitCount increasing.
Failure mode 5: Scheduler thrash¶
- Runnable count >> GOMAXPROCS.
- Symptom: CPU saturated but throughput low; high findRunnable time.
- Detection: go tool trace shows long scheduling delays.
Failure mode 6: GC pause amplification¶
- Stack scan time × goroutines per GC > acceptable pause.
- Symptom: p99 latency spikes correlate with GC.
- Detection: GODEBUG=gctrace=1.
Failure mode 7: Downstream cascade¶
- Service A's unbounded fan-out into B causes B to fail; A's retries amplify; recovery is slow.
- Symptom: B unavailable; A's queue grows; A eventually fails too.
- Detection: downstream-side rate limiting metrics.
Each failure mode has a unique detection signal. A complete monitoring setup covers all of them.
Detection Specification¶
To detect the anti-pattern at runtime or build time:
Runtime detection¶
- Goroutine count. runtime.NumGoroutine() as a Prometheus metric. Alert on growth or absolute value.
- Allocation rate. runtime.MemStats.Mallocs rate. Spikes correlate with fan-out events.
- GC pause. runtime.MemStats.PauseTotalNs rate. Growing means GC is busy.
- Stack count. runtime.MemStats.StackInuse. Approximates stack memory of all goroutines.
- Pprof. /debug/pprof/goroutine shows stacks; high counts on a single stack are the smoking gun.
Build-time detection¶
- Static analysis. A custom AST analyser detects go statements inside for range over slices/channels.
- Test harness. Unit/integration tests that assert runtime.NumGoroutine() stays bounded.
- Goleak. go.uber.org/goleak fails tests if goroutines remain at suite end.
Production detection¶
- Continuous profiling. Pyroscope/Polar Signals capture profiles every 10 seconds; the count of goroutines per stack is queryable.
- Alerting. Prometheus rules: alert on derivative > threshold.
- Health endpoint. A custom /health that returns degraded if NumGoroutine is excessive.
GODEBUG Knobs¶
Environment variable GODEBUG toggles runtime debug behaviour. Relevant for this anti-pattern:
GODEBUG=schedtrace=N¶
Every N milliseconds, prints scheduler state to stderr. Useful to see run queue lengths.
GODEBUG=scheddetail=1¶
Combined with schedtrace, prints per-P and per-M detail.
GODEBUG=gctrace=1¶
Prints each GC cycle's stats: phase durations, heap sizes, goal.
GODEBUG=allocfreetrace=1¶
Logs every allocation and free. Very slow but useful for finding allocation hot spots.
GODEBUG=netdns=go¶
Forces Go's pure-DNS resolver. Bypasses libc resolver. Relevant when many goroutines do DNS.
GODEBUG=asyncpreemptoff=1¶
Disables async preemption (Go 1.14+). Useful when debugging preemption-sensitive code.
GODEBUG=cgocheck=2¶
Stricter cgo pointer checks. Slow; used for diagnostics.
These knobs don't change normal program behaviour; they add diagnostic output or change debug paths.
Links to Go Runtime Source¶
For the engineer who wants to read the source:
- src/runtime/proc.go: scheduler core. Functions newproc, findRunnable, schedule, gopark, goready.
- src/runtime/runtime2.go: data structures (g, m, p, sched, etc.).
- src/runtime/chan.go: channel implementation. chansend, chanrecv, closechan.
- src/runtime/sema.go: low-level semaphore (used by sync.Mutex).
- src/runtime/lock_futex.go / lock_sema.go: low-level mutex.
- src/runtime/select.go: select implementation.
- src/runtime/mgc.go: GC.
- src/runtime/mheap.go, mcache.go, mcentral.go: heap allocator.
- src/runtime/preempt.go: preemption logic.
- src/runtime/netpoll.go, netpoll_*: network poller.
These files are written in Go (some assembly stubs for the lowest level). Each is a few hundred to a few thousand lines. They are readable; they include detailed comments.
The full runtime runs to tens of thousands of lines of Go, plus assembly for the lowest-level pieces. It is approachable for a determined reader.
References¶
Normative¶
- The Go Programming Language Specification: https://go.dev/ref/spec
- The Go Memory Model: https://go.dev/ref/mem
- Effective Go: https://go.dev/doc/effective_go
- runtime package documentation: https://pkg.go.dev/runtime
- sync package documentation: https://pkg.go.dev/sync
- context package documentation: https://pkg.go.dev/context
Supplementary¶
- golang.org/x/sync documentation: https://pkg.go.dev/golang.org/x/sync
- golang.org/x/sync/errgroup: https://pkg.go.dev/golang.org/x/sync/errgroup
- golang.org/x/sync/semaphore: https://pkg.go.dev/golang.org/x/sync/semaphore
- go.uber.org/goleak: https://pkg.go.dev/go.uber.org/goleak
- go.uber.org/automaxprocs: https://pkg.go.dev/go.uber.org/automaxprocs
Runtime source¶
- Go runtime: https://github.com/golang/go/tree/master/src/runtime
- proc.go: https://github.com/golang/go/blob/master/src/runtime/proc.go
- chan.go: https://github.com/golang/go/blob/master/src/runtime/chan.go
- runtime2.go: https://github.com/golang/go/blob/master/src/runtime/runtime2.go
Blog posts and talks¶
- Rob Pike, "Concurrency is not Parallelism" (2012)
- Rob Pike, "Go Concurrency Patterns" (2012)
- Sameer Ajmani, "Advanced Go Concurrency Patterns" (2013)
- Dmitry Vyukov, "Go Scheduler: Implementing Language with Lightweight Concurrency" (2014)
- Kavya Joshi, "Understanding Channels" (GopherCon 2017)
External resources¶
- Bryan C. Mills, "Rethinking Classical Concurrency Patterns" (GopherCon 2018)
- Mat Ryer's posts on Pace.dev about concurrency patterns
- Dave Cheney's blog: dave.cheney.net (especially the post "Never start a goroutine without knowing how it will stop")
These references constitute the authoritative literature on Go concurrency. Reading them in sequence builds a foundation that this document presupposes.
Conclusion¶
The unlimited goroutines anti-pattern arises from the Go language's permissive specification: go can be invoked anywhere, any number of times, with no bound. The runtime accepts whatever the program requests, up to physical resource limits.
The cure is to bound by construction: every fan-out call site is wrapped in a primitive (semaphore, errgroup, pool) that enforces an upper limit. This is not a language feature; it is a discipline.
This specification has defined the anti-pattern formally, enumerated runtime behaviours that matter, listed guarantees and non-guarantees, and pointed to the source. Future engineers reading this file should be able to look up any specific behaviour, find an authoritative answer, and act on it.
End of Specification file.