When to Use a Pool — Find the Bug

12 snippets where the pool choice (or a closely related concurrency choice) is the bug. Read each, find the bug, propose a fix.
Each snippet is followed by a hidden answer; try to spot the bug yourself before reading on.
Bug 1: The over-engineered CLI
A CLI tool that resizes 50 JPEG files.
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sync"
	"time"

	"github.com/panjf2000/ants/v2"
)

func main() {
	files, _ := filepath.Glob("*.jpg")
	pool, err := ants.NewPool(100,
		ants.WithMaxBlockingTasks(1000),
		ants.WithPanicHandler(func(p any) {
			fmt.Println("panic:", p)
		}),
		ants.WithNonblocking(true),
		ants.WithExpiryDuration(time.Hour),
	)
	if err != nil {
		os.Exit(1)
	}
	defer pool.Release()

	var wg sync.WaitGroup
	for _, f := range files {
		f := f
		wg.Add(1)
		pool.Submit(func() {
			defer wg.Done()
			resize(f)
		})
	}
	wg.Wait()
}
```
What's the bug?
Answer 1
The bug is **using ants for 50 one-shot tasks**. This is a CLI tool that runs once, processes 50 files, and exits. There is:

- No warm state to reuse.
- No high spawn rate (50 goroutines is trivial).
- No need for the elaborate options (`WithMaxBlockingTasks(1000)` for 50 tasks?).
- No need for a panic handler (a CLI can just crash).
- No need for an hour-long expiry on a process that exits in seconds.

This is cargo-cult adoption. The right tool is `errgroup` with `SetLimit(runtime.NumCPU())`: half the code, no extra dependency, propagates errors, idiomatic.
Bug 2: The wrong K for HTTP

```go
pool, _ := ants.NewPool(runtime.NumCPU())
defer pool.Release()

for _, url := range urls { // 1000 URLs
	url := url
	pool.Submit(func() {
		resp, _ := http.Get(url)
		// ...
	})
}
```
What's the bug?
Answer 2
K = `runtime.NumCPU()` is wrong for **I/O-bound work**. HTTP calls spend most of their time waiting on the network, not the CPU. With 8 cores and 1000 URLs, K=8 means at most 8 in-flight HTTP calls — the rest queue. If each URL takes 100 ms, total time = 1000/8 × 0.1 s = 12.5 seconds. With K=100, total = 1000/100 × 0.1 s = 1 second. Fix: size K from throughput × latency, or just pick a reasonable number for HTTP (say, 100). Also note: the original code ignores both the `Submit` error and the `http.Get` error. Both should be handled.
Bug 3: The blocking pool for non-blocking work

```go
pool := tunny.NewFunc(100, func(payload any) any {
	url := payload.(string)
	resp, _ := http.Get(url)
	return resp.StatusCode
})
defer pool.Close()

statuses := make([]int, len(urls))
for i, url := range urls {
	statuses[i] = pool.Process(url).(int) // !
}
```
What's the bug?
Answer 3
`pool.Process` is **synchronous** — it blocks the caller until the worker returns, so the loop processes URLs one at a time, sequentially. The 100-worker pool never runs anything in parallel. Fix: spawn each `Process` call in its own goroutine, or switch to ants. But also: tunny is the wrong tool here — `http.Get` has no warm per-worker state to justify it. Use errgroup.
Bug 4: The unbounded queue

```go
pool, _ := ants.NewPool(50)
defer pool.Release()

for msg := range incomingMessages { // unbounded stream
	msg := msg
	pool.Submit(func() { handle(msg) })
}
```
What's the bug?
Answer 4
By default, `ants.NewPool` is blocking with **no `MaxBlockingTasks` bound**. When all 50 workers are busy and submissions keep coming, `Submit` blocks the submitter: the consumer of `incomingMessages` backs up. That back-pressure may be exactly what you want — but it should be a documented decision, not an accident of defaults. The inverse failure mode is just as dangerous: a queue that buffers instead of blocking, with no bound anywhere upstream, grows until it OOMs. Fix: choose an explicit policy — bounded blocking (`WithMaxBlockingTasks`), or non-blocking (`WithNonblocking(true)`) with explicit drop handling on the `Submit` error.
Bug 5: The pool per request

```go
func handler(w http.ResponseWriter, r *http.Request) {
	pool, _ := ants.NewPool(10)
	defer pool.Release()
	for _, x := range items {
		x := x
		pool.Submit(func() { process(x) })
	}
}
```
What's the bug?
Answer 5
The pool is **constructed per request**. Each handler call creates 10 workers, then releases them, so the setup/teardown cost is paid on every request — the whole point of a pool, worker reuse, is defeated. Fix: hold one long-lived pool at package or struct level. Or don't use a pool here at all: if `items` is a small fixed slice, errgroup or raw goroutines suffice.
Bug 6: The serialised pool

```go
pool, _ := ants.NewPool(100)
defer pool.Release()

var mu sync.Mutex
for _, x := range items {
	x := x
	pool.Submit(func() {
		mu.Lock()
		defer mu.Unlock()
		// ... non-trivial work
	})
}
```
What's the bug?
Answer 6
100 workers, but every task holds the **same mutex**, so effective concurrency is 1 and the pool is wasted. Fix: refactor so the work doesn't require the lock. Options:

- Per-worker state instead of shared state.
- Sharded state with per-shard locks.
- Lock-free data structures.
- Or just one goroutine and no pool.

The pool is a symptom, not the disease. The disease is the shared mutex.
Bug 7: The lifecycle leak

```go
func processItems(items []Item) {
	pool, _ := ants.NewPool(10)
	// no defer Release!
	for _, x := range items {
		x := x
		pool.Submit(func() { work(x) })
	}
}
```
What's the bug?
Answer 7
No `defer pool.Release()`. The pool's 10 workers live on after the function returns — a goroutine leak. If `processItems` is called many times, the leaked workers accumulate. Fix: add the `defer`, or better, make the pool long-lived (at struct level) rather than per-call. Also missing: a WaitGroup or similar to wait for the tasks — as written, the function returns before they complete.
Bug 8: The errgroup without SetLimit

```go
g, _ := errgroup.WithContext(ctx)
for _, url := range urls { // 10,000 URLs
	url := url
	g.Go(func() error {
		return fetch(ctx, url)
	})
}
return g.Wait()
```
What's the bug?
Answer 8
No `g.SetLimit(K)`. The errgroup is **unbounded**: all 10,000 fetches start at once. Memory blow-up, downstream rejection (429s), file-descriptor exhaustion. Fix: call `SetLimit` before the loop. The classic mistake is thinking `errgroup` bounds concurrency by default. It does not — `SetLimit` is opt-in.
Bug 9: The submit error ignored

```go
pool, _ := ants.NewPool(100, ants.WithNonblocking(true))
defer pool.Release()

for msg := range messages {
	msg := msg
	pool.Submit(func() { process(msg) }) // !
}
```
What's the bug?
Answer 9
`pool.Submit` returns an error. With `WithNonblocking(true)`, the pool returns `ErrPoolOverload` when full; the code ignores it, so messages are **silently dropped** under load. Fix: always handle the `Submit` error in non-blocking mode.
Bug 10: The pool for one item

```go
pool, _ := ants.NewPool(50)
defer pool.Release()

pool.Submit(func() { processOne(item) })
// continues immediately, no wait
```
What's the bug?
Answer 10
A 50-worker pool for **one task** — and no wait. Submitting without waiting means `processOne` runs in the background after the function returns; if it writes shared state the caller then reads, that's a race. Pool-plus-return is fire-and-forget, which is bizarre for a single task. Fix: just call `processOne(item)` directly; no pool needed. If you genuinely want fire-and-forget background work, `go processOne(item)` (a raw goroutine) is enough.
Bug 11: The pool with a captured ctx

```go
func handler(w http.ResponseWriter, r *http.Request) {
	for _, item := range items {
		item := item
		pool.Submit(func() {
			// uses r.Context() — but it's already cancelled once the handler returns!
			fetch(r.Context(), item)
		})
	}
	w.WriteHeader(200)
}
```
What's the bug?
Answer 11
`r.Context()` is the **request context**: it is cancelled when the handler returns. The pool's tasks run asynchronously, so they see an already-cancelled ctx as soon as the handler returns. If the tasks are meant to outlive the handler, give them a separate, detached context. If they're meant to match the request lifecycle, wait for them (a WaitGroup or similar) before returning.
Bug 12: The errgroup deadlock

```go
g, _ := errgroup.WithContext(ctx)
g.SetLimit(2)
g.Go(func() error {
	g.Go(func() error { // !
		return doSomething(ctx)
	})
	return waitForInner(ctx)
})
return g.Wait()
```
What's the bug?