Go Maps — Professional Level¶
Table of Contents¶
- hmap: The Runtime Header Struct
- bmap: Bucket Memory Layout in Detail
- tophash: Fast Slot Discrimination
- Hash Function Selection and AES-NI
- Bucket Count: B Field and Power-of-Two Sizing
- Load Factor: 6.5 and Why
- Two Growth Modes: Overflow vs Overload
- Evacuation Protocol: nevacuate and oldbuckets
- mapaccess1 — Full Algorithm Trace
- mapassign — Full Algorithm Trace
- mapdelete — Slot Clearing
- Iterator Invalidation and the flags Field
- Concurrent Write Detection (hashWriting)
- Memory Allocator Interaction
- Assembly Output for Common Map Operations
- Escape Analysis: When Maps Go to Heap
1. hmap: The Runtime Header Struct¶
The complete hmap definition from runtime/map.go:
// src/runtime/map.go
type hmap struct {
count int // # live cells == size of map. Must be first (used by len() builtin)
flags uint8 // Various flags (see below)
B uint8 // log_2 of # of buckets (can hold up to loadFactor * 2^B items)
noverflow uint16 // approximate number of overflow buckets; see incrnoverflow
hash0 uint32 // hash seed
buckets unsafe.Pointer // array of 2^B Buckets. may be nil if count==0.
oldbuckets unsafe.Pointer // previous bucket array of half the size, non-nil only when growing
nevacuate uintptr // progress counter for evacuation (buckets less than this # have been evacuated)
extra *mapextra // optional fields
}
type mapextra struct {
// If both key and elem do not contain pointers and are inline, then we mark bucket
// type as containing no pointers. This avoids scanning such maps during GC.
overflow *[]*bmap // overflow buckets for hmap.buckets
oldoverflow *[]*bmap // overflow buckets for hmap.oldbuckets
nextOverflow *bmap // a pointer to a free overflow bucket
}
// flags values:
const (
iterator = 1 // there may be an iterator using buckets
oldIterator = 2 // there may be an iterator using oldbuckets
hashWriting = 4 // a goroutine is writing to the map
sameSizeGrow = 8 // the current map growth is to a same-size bucket array
)
Memory layout of hmap (64-bit system):
Offset Size Field
0 8 count (int)
8 1 flags (uint8)
9 1 B (uint8)
10 2 noverflow (uint16)
12 4 hash0 (uint32)
16 8 buckets (unsafe.Pointer)
24 8 oldbuckets (unsafe.Pointer)
32 8 nevacuate (uintptr)
40 8 extra (*mapextra)
Total: 48 bytes
2. bmap: Bucket Memory Layout in Detail¶
Each bucket is a bmap. The struct is generated by the compiler for each map type:
// Generic bmap (runtime uses unsafe layout tricks)
// For map[string]int, the compiler generates a type like:
type bmap_string_int struct {
tophash [8]uint8 // 8 bytes
keys [8]string // 8 * 16 = 128 bytes (string = ptr + len)
values [8]int // 8 * 8 = 64 bytes
overflow *bmap // 8 bytes
} // Total: 208 bytes
Key/value layout optimization:
===============================
Go stores ALL keys together, then ALL values together (NOT interleaved)
Layout: [tophash[8]][key0][key1]...[key7][val0][val1]...[val7][overflow*]
WHY? Alignment. If K=bool and V=int64:
Interleaved: bool(1) + pad(7) + int64(8) = 16 bytes per pair * 8 = 128 bytes
Grouped: [8 bools][8 int64s] = 8 + 64 = 72 bytes, with no per-pair padding at all
The compiler calculates the exact offsets using:
keyOffset = unsafe.Offsetof(bmap.tophash) + bucketCnt (= 8)
valueOffset = keyOffset + bucketCnt * sizeof(K)
package main
import (
"fmt"
"unsafe"
)
// Inspect bucket layout using unsafe (educational only)
func inspectMap() {
// A map variable is just a *hmap
m := make(map[string]int)
m["hello"] = 42
// The map header pointer (hmap):
// We can access count (first field) via unsafe
type hmap struct {
count int
// ... more fields
}
// Extract hmap pointer
hmapPtr := *(*uintptr)(unsafe.Pointer(&m))
if hmapPtr != 0 {
hdr := (*hmap)(unsafe.Pointer(hmapPtr))
fmt.Printf("count from hmap: %d\n", hdr.count) // 1
}
}
func main() {
inspectMap()
	fmt.Println("Bucket size for map[string]int: ~208 bytes")
	fmt.Println("Bucket size for map[int]int: ~144 bytes")
	// [8]uint8(8) + [8]int64(64) + [8]int64(64) + ptr(8) = 144
}
3. tophash: Fast Slot Discrimination¶
tophash stores the top 8 bits of a key's hash for fast rejection:
// runtime/map.go
const (
emptyRest = 0 // this cell is empty and there are no more non-empty cells following it
emptyOne = 1 // this cell is empty
evacuatedX = 2 // key/elem is valid. Entry has been evacuated to first half of larger table.
evacuatedY = 3 // same as above, but evacuated to second half of larger table.
evacuatedEmpty = 4 // cell is empty, bucket is evacuated.
minTopHash = 5 // minimum tophash for a normal filled cell.
)
// tophash computes the hash top-byte used for quick comparison.
func tophash(hash uintptr) uint8 {
top := uint8(hash >> (goarch.PtrSize*8 - 8))
if top < minTopHash {
top += minTopHash // ensure value >= 5 (not a special sentinel)
}
return top
}
Lookup flow with tophash:
=========================
hash = hashfn(key, seed)
bucket_idx = hash & (2^B - 1) // low bits select bucket
top = uint8(hash >> 56) // high 8 bits (on 64-bit)
if top < 5: top += 5 // avoid special values
for each slot i in [0..7]:
if tophash[i] == emptyRest: STOP (rest of chain is empty)
if tophash[i] != top: CONTINUE // fast reject — no key comparison!
if keys[i] == key: FOUND // full key comparison only here
This means a typical lookup with no collision performs at most 8 one-byte tophash comparisons before finding (or ruling out) the key, and the 8-byte tophash array fits comfortably in a single cache line.
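The index and tophash arithmetic above can be replayed directly; the sample hash value here is arbitrary, and the helper name is ours:

```go
package main

import "fmt"

const minTopHash = 5 // values 0-4 are reserved sentinels

// tophashOf mirrors the runtime's tophash on a 64-bit hash:
// take the top byte and bump it past the sentinel range.
func tophashOf(hash uint64) uint8 {
	top := uint8(hash >> 56)
	if top < minTopHash {
		top += minTopHash
	}
	return top
}

func main() {
	const B = 4 // 2^4 = 16 buckets
	hash := uint64(0x03A2_0000_0000_0007)
	fmt.Println(hash & (1<<B - 1)) // 7: low bits select the bucket
	fmt.Println(tophashOf(hash))   // 8: top byte 0x03 bumped by minTopHash
}
```

Because bucket selection uses the low bits and tophash uses the high bits, the two are independent: keys colliding in one bucket still usually differ in tophash.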
4. Hash Function Selection and AES-NI¶
Go selects the hash function at runtime based on CPU capabilities:
// runtime/alg.go (simplified)
// Go uses AES-NI instructions when available:
// - Intel: AES-NI (aesenc, aesenclast instructions)
// - ARM: ARMv8 AES
// Fall back to wyhash when AES hardware not available
// For strings: runtime.memhash (uses AES or wyhash)
// For int64: runtime.memhash64
// For int32: runtime.memhash32
// etc.
// The hash function is stored in the type's alg structure:
// type typeAlg struct {
// hash func(unsafe.Pointer, uintptr) uintptr
// equal func(unsafe.Pointer, unsafe.Pointer) bool
// }
AES hash performance (approximate, modern hardware):
String up to 16 bytes: ~1ns
String up to 32 bytes: ~2ns
String up to 64 bytes: ~3ns
Hash quality ensures uniform bucket distribution.
Bad distribution → more overflow chains → O(n) worst case.
package main
import (
"fmt"
"runtime"
)
func main() {
// Check if AES is available (affects map performance)
fmt.Println("GOARCH:", runtime.GOARCH)
// On amd64 and arm64, AES-NI hash is used
// On other architectures, pure-Go hash is used (slightly slower)
// Demonstration: same-prefix strings still distribute well
m := make(map[string]int, 100)
for i := 0; i < 100; i++ {
key := fmt.Sprintf("user-session-token-%04d", i)
m[key] = i
}
	fmt.Println(len(m)) // 100 distinct keys; a good hash spreads them evenly across buckets
}
5. Bucket Count: B Field and Power-of-Two Sizing¶
// Number of buckets = 2^B
// B=0: 1 bucket (holds ~6 entries)
// B=1: 2 buckets (holds ~13 entries)
// B=2: 4 buckets (holds ~26 entries)
// B=10: 1024 buckets (holds ~6656 entries)
//
// Initial B for make(map[K]V, hint):
// find smallest B such that loadFactor * 2^B >= hint
// loadFactor = 6.5 (approximately 13/2)
// runtime/map.go
func overLoadFactor(count int, B uint8) bool {
return count > bucketCnt && // > 8 items
uintptr(count) > loadFactorNum*(bucketShift(B)/loadFactorDen)
// loadFactorNum = 13, loadFactorDen = 2 → 6.5
}
package main
import "fmt"
func initialB(hint int) uint8 {
// Approximate: find B where 6.5 * 2^B >= hint
var b uint8
for b < 62 {
if float64(hint) <= 6.5*float64(uint(1)<<b) {
return b
}
b++
}
return b
}
func main() {
for _, hint := range []int{0, 1, 6, 7, 8, 13, 14, 26, 27, 100, 1000, 10000} {
b := initialB(hint)
buckets := uint(1) << b
fmt.Printf("hint=%5d → B=%d, buckets=%4d, capacity≈%.0f\n",
hint, b, buckets, float64(buckets)*6.5)
}
}
6. Load Factor: 6.5 and Why¶
The Go team chose 6.5 based on benchmarking:
Tradeoffs:
- Lower load factor (e.g., 4.0): more buckets, better locality, more memory
- Higher load factor (e.g., 8.0): fewer buckets, more overflow chains, slower
Load factor 6.5 means:
Average bucket occupancy at growth time: 6.5/8 = 81%
Overflow probability per bucket: ~10% (empirically measured)
Comparison:
Java HashMap: load factor 0.75 (max 75% full before resize)
Python dict: 2/3 (~67% before resize, compact dicts since 3.6)
Go map: 6.5/8 = ~81% (higher! because per-bucket tophash helps avoid scanning)
package main
import "fmt"
func main() {
// Demonstrate load factor effect on growth:
// A map with hint=13 starts with B=1 (2 buckets, capacity≈13)
m := make(map[int]int, 13)
for i := 0; i < 13; i++ {
m[i] = i
}
fmt.Println("At 13 items:", len(m)) // 13, growth not yet triggered
// Adding item 14 likely triggers growth to B=2
m[14] = 14
fmt.Println("At 14 items:", len(m)) // 14, B may have grown to 2
}
7. Two Growth Modes: Overflow vs Overload¶
// runtime/map.go: two conditions trigger growth
// Mode 1: OVERLOAD growth (most common)
// Triggered when: count > 6.5 * 2^B
// Action: B++ (double bucket count)
// Result: better distribution, fewer overflow chains
// Mode 2: SAME-SIZE growth (reorganization)
// Triggered when: noverflow >= 2^(B&15)
// This means: too many overflow buckets even at acceptable count
// This happens when: many deletes followed by inserts
// (Swiss cheese pattern: many empty slots scattered in overflow chains)
// Action: B stays the same, but buckets are reorganized
// Result: eliminates overflow chains, better cache behavior
func shouldGrow(h *hmap) bool {
// overloadGrowth
if overLoadFactor(h.count+1, h.B) {
return true
}
// tooManyOverflowBuckets
if tooManyOverflowBuckets(h.noverflow, h.B) {
return true
}
return false
}
package main
import "fmt"
func demonstrateSameSizeGrowth() {
// Pattern that triggers same-size growth:
// 1. Fill map to near-capacity
// 2. Delete most entries (creates many empty slots in buckets)
// 3. Insert new entries (overflow chains build up)
// 4. Runtime detects too many overflows → same-size reorganization
m := make(map[int]int)
// Fill
for i := 0; i < 1000; i++ {
m[i] = i
}
// Delete most (creates Swiss cheese pattern)
for i := 0; i < 990; i++ {
delete(m, i)
}
fmt.Printf("After delete: count=%d\n", len(m)) // 10
// Now re-insert — may trigger same-size growth
for i := 1000; i < 2000; i++ {
m[i] = i
}
fmt.Printf("After re-insert: count=%d\n", len(m)) // 1010
}
func main() {
demonstrateSameSizeGrowth()
}
8. Evacuation Protocol: nevacuate and oldbuckets¶
Growth Sequence:
================
1. hashGrow() called:
- oldbuckets = buckets
- buckets = new array (2x or same size)
- nevacuate = 0
- if the iterator flag was set, it is moved to oldIterator
  (an active iterator now refers to the old array)
2. Every subsequent mapassign or mapdelete calls growWork():
- evacuate the old bucket the operation is about to touch
- evacuate one more old bucket, advancing nevacuate
3. When nevacuate == len(oldbuckets):
- oldbuckets = nil, extra.oldoverflow = nil (growth complete)
- flags &^= sameSizeGrow
Evacuation of one old bucket:
For each slot in old bucket:
hash = hash(key)
if overload growth: new_bucket = hash & (2^(B+1) - 1) // uses new B
if same-size growth: new_bucket = hash & (2^B - 1)
copy key+value to new bucket; the OLD slot's tophash is set to evacuatedX or evacuatedY
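For a doubling growth, the X-or-Y decision reduces to a single hash bit: the bit that becomes significant once B increases. A small sketch (the helper name is ours) shows where entries from one old bucket land:

```go
package main

import "fmt"

// splitDest reports where a key from an old bucket lands after a doubling
// growth: the X half keeps the old index, the Y half adds the old bucket count.
func splitDest(hash uint64, oldB uint8) (newIdx uint64, useY bool) {
	newbit := uint64(1) << oldB // the hash bit that becomes significant
	oldIdx := hash & (newbit - 1)
	useY = hash&newbit != 0
	newIdx = oldIdx
	if useY {
		newIdx += newbit
	}
	return
}

func main() {
	// With oldB=2 (4 old buckets), all three hashes map to old bucket 1,
	// but bit 2 of the hash splits them between new buckets 1 and 5.
	for _, hash := range []uint64{0b0001, 0b0101, 0b1101} {
		idx, y := splitDest(hash, 2)
		fmt.Printf("hash=%04b new=%d Y=%v\n", hash, idx, y)
	}
}
```

In a same-size growth there is no new bit, so every entry keeps its bucket index and only overflow chains are compacted.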
package main
import "fmt"
func main() {
// During growth, lookups must check both oldbuckets and buckets
// The runtime handles this transparently:
m := make(map[int]int)
for i := 0; i < 10; i++ {
m[i] = i
}
// At this point, growth may have occurred
// The map is fully usable throughout — no pause
// Concurrent iteration during growth:
// Iterator captures oldbuckets if growth is in progress
// Keys in oldbuckets but not yet evacuated: read from old
// Keys already evacuated: read from new
// Result: no key is missed or double-visited during iteration
sum := 0
for _, v := range m {
sum += v
}
fmt.Println("Sum:", sum) // 45 (0+1+...+9)
}
9. mapaccess1 — Full Algorithm Trace¶
// runtime/map.go — mapaccess1 (simplified)
func mapaccess1(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer {
// 1. Safety checks
if h == nil || h.count == 0 {
return unsafe.Pointer(&zeroVal[0]) // return pointer to zero value
}
if h.flags&hashWriting != 0 {
throw("concurrent map read and map write")
}
// 2. Hash the key
hash := t.hasher(key, uintptr(h.hash0))
// 3. Find bucket
m := bucketMask(h.B) // 2^B - 1
b := (*bmap)(add(h.buckets, (hash&m)*uintptr(t.bucketsize)))
// 4. If growing, check old buckets
if c := h.oldbuckets; c != nil {
if !h.sameSizeGrow() {
m >>= 1 // old mask has one fewer bit
}
oldb := (*bmap)(add(c, (hash&m)*uintptr(t.bucketsize)))
if !evacuated(oldb) {
b = oldb // use old bucket if not yet evacuated
}
}
// 5. Compute tophash
top := tophash(hash)
// 6. Walk bucket chain
for ; b != nil; b = b.overflow(t) {
for i := uintptr(0); i < bucketCnt; i++ {
if b.tophash[i] != top {
if b.tophash[i] == emptyRest {
goto miss // early exit: no more entries
}
continue
}
k := add(unsafe.Pointer(b), dataOffset+i*uintptr(t.keysize))
if t.key.equal(key, k) {
e := add(unsafe.Pointer(b), dataOffset+bucketCnt*uintptr(t.keysize)+i*uintptr(t.elemsize))
return e // found!
}
}
}
miss:
return unsafe.Pointer(&zeroVal[0])
}
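The miss path in step 1 and at the `miss` label is visible from ordinary Go: a missing key yields the element type's zero value (mapaccess1 returns a pointer into a shared zero buffer), and only the two-result form distinguishes absence:

```go
package main

import "fmt"

func main() {
	m := map[string]int{"a": 1}
	// miss path: mapaccess1 returns a pointer to a shared zero value
	fmt.Println(m["missing"]) // 0
	// mapaccess2 adds the ok result, distinguishing "absent" from "stored zero"
	m["zero"] = 0
	_, ok1 := m["zero"]
	_, ok2 := m["missing"]
	fmt.Println(ok1, ok2) // true false
}
```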
10. mapassign — Full Algorithm Trace¶
// runtime/map.go — mapassign (simplified)
func mapassign(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer {
// 1. Nil check
if h == nil {
panic(plainError("assignment to entry in nil map"))
}
// 2. Race detection
if h.flags&hashWriting != 0 {
throw("concurrent map writes")
}
h.flags ^= hashWriting // set the writing flag
// 3. Hash key
hash := t.hasher(key, uintptr(h.hash0))
	// 4. Trigger growth if needed (checked before committing to a slot)
	if !h.growing() && (overLoadFactor(h.count+1, h.B) || tooManyOverflowBuckets(h.noverflow, h.B)) {
		hashGrow(t, h)
		// the real code jumps back to an `again:` label here; in this
		// simplified ordering, the bucket computation below already
		// sees the new, larger array
	}
// 5. Find bucket (post-growth uses new buckets)
// ... bucket computation ...
// 6. Walk bucket chain, find empty slot or existing key
// Two passes:
// a. Look for existing key (update in place)
// b. Remember first empty slot (for insertion)
// 7. Insert key/value into slot
// 8. Update tophash, increment count
// 9. Clear writing flag
h.flags &^= hashWriting
// Return pointer to value slot (caller writes the value)
return val
}
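The nil check in step 1 raises an ordinary runtime panic, unlike the unrecoverable throw for concurrent writes, so it can be observed with recover:

```go
package main

import "fmt"

func main() {
	defer func() {
		// Unlike throw("concurrent map writes"), the nil-map panic is a
		// regular runtime panic and can be recovered.
		fmt.Println("recovered:", recover())
	}()
	var m map[string]int // nil map: reads are fine, writes panic
	_ = m["ok"]          // lookup on a nil map returns the zero value
	m["boom"] = 1        // mapassign panics here
}
```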
11. mapdelete — Slot Clearing¶
// runtime/map.go — mapdelete (simplified)
func mapdelete(t *maptype, h *hmap, key unsafe.Pointer) {
// 1. Nil/empty check — delete from nil map is a no-op
if h == nil || h.count == 0 {
return
}
// 2. Set writing flag
if h.flags&hashWriting != 0 {
throw("concurrent map writes")
}
h.flags ^= hashWriting
// 3. Hash, find bucket (same as access)
// ... bucket computation, old bucket check ...
// 4. Find matching slot
// 5. If found:
// - Zero out key memory (for GC)
// - Zero out value memory (for GC)
// - Set tophash[i] = emptyOne
// - If this is last non-empty slot in chain:
// set tophash[i] = emptyRest (allows early termination in lookups)
// - Decrement h.count
// - Potentially shrink? No! Maps never shrink automatically.
// 6. Clear writing flag
h.flags &^= hashWriting
}
package main
import "fmt"
func main() {
m := map[string]int{"a": 1, "b": 2, "c": 3}
// After delete, the slot is zeroed and marked emptyOne
// The bucket itself remains allocated
delete(m, "b")
// Memory is NOT freed to the OS — it stays in the Go heap
// for potential future assignments
fmt.Println(m) // map[a:1 c:3]
// To release memory, replace the map:
newM := make(map[string]int, len(m))
for k, v := range m {
newM[k] = v
}
m = newM // old map GC-eligible
fmt.Println(m)
}
12. Iterator Invalidation and the flags Field¶
// When a range loop starts, Go captures the current state:
// 1. Sets flags |= iterator|oldIterator (both, so neither bucket array is reused while iteration is active)
// 2. Records h.B, h.buckets at iteration start
// 3. Uses a random start bucket (prevents ordering dependence)
// During iteration, the map may grow:
// - iterator checks if oldbuckets != nil
// - if yes, reads non-evacuated keys from oldbuckets
// - evacuated keys are read from new buckets
// - result: every key is visited exactly once (barring concurrent modification)
// flags field tracking:
// iterator: a goroutine is ranging over h.buckets
// oldIterator: a goroutine is ranging over h.oldbuckets
// These prevent bucket freeing while iteration is active
package main
import "fmt"
func main() {
m := make(map[int]int)
for i := 0; i < 20; i++ {
m[i] = i
}
count := 0
for k, v := range m {
_ = k
_ = v
count++
// Adding during iteration: the runtime is safe here
// New keys may or may not appear in THIS iteration
if count == 5 {
m[100] = 100
m[101] = 101
}
}
fmt.Println("visited:", count, "final len:", len(m))
// count may be 20, 21, or 22 depending on timing of new key insertion
}
13. Concurrent Write Detection (hashWriting)¶
// The hashWriting detection works as follows:
//
// mapassign sets: h.flags ^= hashWriting (XOR to set bit)
// mapassign clears: h.flags &^= hashWriting (AND NOT to clear)
//
// At the start of mapassign/mapdelete:
//   if h.flags & hashWriting != 0: throw("concurrent map writes")
// At the start of mapaccess/mapiternext:
//   if h.flags & hashWriting != 0: throw("concurrent map read and map write")
// throw is NOT a panic; it cannot be recovered!
//
// This is a BEST-EFFORT detection (not a full memory barrier).
// The race detector (-race) provides accurate detection.
// The hashWriting flag catches the most obvious concurrent cases.
// runtime.throw() vs panic():
// panic: can be recovered with defer/recover
// throw: terminates the program immediately (like abort())
package main
import "fmt"
func main() {
// The -race flag enables the full race detector which uses
// shadow memory to detect ALL concurrent accesses.
// hashWriting catches some without -race for faster feedback.
// Example: this WILL be caught even without -race in some cases:
// (but don't actually run concurrent writes without synchronization!)
// Safe concurrent usage:
type SafeMap struct {
data map[string]int
// Use -race to verify, then rely on logic for correctness
}
sm := SafeMap{data: make(map[string]int)}
sm.data["key"] = 1 // safe: single goroutine
fmt.Println(sm.data["key"])
}
14. Memory Allocator Interaction¶
// Map allocation flow:
// 1. makemap_small: for make(map[K]V) with no hint, or a hint the
//    compiler knows is <= 8. Allocates just the hmap; the first
//    bucket is allocated lazily on the first write.
// 2. makemap: for make(map[K]V, n) with larger n
//    Allocates hmap + 2^B buckets in one allocation (with some
//    preallocated overflow buckets when B >= 4)
// 3. Overflow buckets: allocated lazily via newoverflow()
//    Uses mapextra.nextOverflow free list first
// Bucket sizes are calculated at compile time:
// bucketsize = 8 + 8*sizeof(K) + 8*sizeof(V) + 8 (overflow ptr)
// This is embedded in the maptype descriptor
// GC interaction:
// If K and V contain no pointers: bucket is marked as noscan
// → GC never scans bucket contents (only the bucket pointer array)
// If K or V contain pointers: GC must scan every live slot
package main
import (
"fmt"
"runtime"
)
func gcPressureTest() {
var ms1, ms2 runtime.MemStats
runtime.GC()
runtime.ReadMemStats(&ms1)
// Map with pointer values — GC must scan every slot
mPtr := make(map[int]*[64]byte, 10000)
for i := 0; i < 10000; i++ {
arr := [64]byte{}
mPtr[i] = &arr
}
runtime.GC()
runtime.ReadMemStats(&ms2)
	fmt.Printf("GC pause delta with pointer values: %d ns\n", ms2.PauseTotalNs-ms1.PauseTotalNs)
_ = mPtr
runtime.GC()
runtime.ReadMemStats(&ms1)
// Map with scalar values — GC only scans bucket array header
mScalar := make(map[int][8]int64, 10000)
for i := 0; i < 10000; i++ {
mScalar[i] = [8]int64{}
}
runtime.GC()
runtime.ReadMemStats(&ms2)
	fmt.Printf("GC pause delta with scalar values: %d ns\n", ms2.PauseTotalNs-ms1.PauseTotalNs)
// Scalar values should show lower GC pause time
_ = mScalar
}
func main() {
gcPressureTest()
}
15. Assembly Output for Common Map Operations¶
// To inspect assembly:
// go build -gcflags="-S" main.go 2>&1
// For map[string]int:
// v := m["key"]
// → CALL runtime.mapaccess1_faststr(SB)
//
// v, ok := m["key"]
// → CALL runtime.mapaccess2_faststr(SB)
//
// m["key"] = v
// → CALL runtime.mapassign_faststr(SB)
//
// delete(m, "key")
// → CALL runtime.mapdelete_faststr(SB)
// For map[int64]int64 (fast path):
// v := m[42]
// → CALL runtime.mapaccess1_fast64(SB)
// For map[uint32]int (fast path):
// → CALL runtime.mapaccess1_fast32(SB)
// For map[interface{}]int (slow path):
// → CALL runtime.mapaccess1(SB) // uses type's hash/equal functions
// fast* variants inline the bucket walk for common key sizes
// They avoid interface dispatch, saving ~10-20ns per operation
package main
import "fmt"
// Verify which runtime function is called by inspecting assembly:
// go tool compile -S demo.go | grep "runtime.map"
func stringMap() {
m := map[string]int{"hello": 1}
_ = m["hello"] // mapaccess1_faststr
_, _ = m["hello"] // mapaccess2_faststr
m["world"] = 2 // mapassign_faststr
	delete(m, "world") // mapdelete_faststr
fmt.Println(len(m))
}
func intMap() {
m := map[int]int{1: 1}
_ = m[1] // mapaccess1_fast64 (on 64-bit)
m[2] = 2 // mapassign_fast64
	delete(m, 1) // mapdelete_fast64
fmt.Println(len(m))
}
func main() {
stringMap()
intMap()
}
16. Escape Analysis: When Maps Go to Heap¶
// Maps almost always escape to the heap because:
// 1. They are reference types (variables are pointers)
// 2. The hmap struct is heap-allocated
// 3. Buckets are heap-allocated
//
// Exception: when escape analysis proves a map never escapes and its
// size is small and known, the compiler can place the hmap (and even
// the initial bucket) on the goroutine's stack.
//
// Check with:
// go build -gcflags="-m" main.go
package main
import "fmt"
func stackOrHeap() map[string]int {
// This map escapes to heap — returned from function
m := make(map[string]int) // "make(map[string]int) escapes to heap"
m["key"] = 1
return m
}
func localMap() {
// This may not escape if not returned or stored globally
m := make(map[string]int)
m["key"] = 1
fmt.Println(m) // still escapes: passed to fmt.Println (interface{})
}
func main() {
m := stackOrHeap()
fmt.Println(m)
localMap()
// Conclusion:
// - Maps are always heap-allocated in practice
// - This is fine: maps are designed for use cases where the heap cost
// is justified by the lookup/insert benefit
// - If you need stack allocation for key-value pairs, use a struct
}
Appendix: Key Runtime Constants¶
// From runtime/map.go:
const (
bucketCntBits = 3
bucketCnt = 1 << bucketCntBits // 8 slots per bucket
loadFactorNum = 13
loadFactorDen = 2
// loadFactor = 13/2 = 6.5
maxKeySize = 128 // keys larger than this are stored as pointers
maxElemSize = 128 // values larger than this are stored as pointers
dataOffset = unsafe.Offsetof(struct {
b bmap
v int64
}{}.v) // byte offset of first key in bmap
emptyRest = 0
emptyOne = 1
evacuatedX = 2
evacuatedY = 3
evacuatedEmpty = 4
minTopHash = 5
iterator = 1
oldIterator = 2
hashWriting = 4
sameSizeGrow = 8
noCheck = 1<<(8*goarch.PtrSize) - 1
)
Summary: Professional Map Knowledge
=====================================
Architecture: hmap header → buckets array → bmap slots → overflow chain
Slot capacity: 8 per bucket (bucketCnt = 8)
Load factor: 6.5 (= 13/2), chosen empirically for speed/memory balance
Growth modes: Overload (B++) and Same-size (reorganization)
Evacuation: Incremental, 2 buckets per write/delete during growth
Hash functions: AES-NI on supported hardware, wyhash fallback
tophash: Top 8 bits of hash, values 0-4 reserved as sentinels
Concurrency: hashWriting flag for detection, throw() on violation
Key/value layout: Grouped (all keys then all values) for better alignment
GC interaction: Noscan optimization when K,V contain no pointers
Assembly: fast32/fast64/faststr variants for common key types
Memory release: Manual only (m = nil or rebuild) — maps never auto-shrink