Memento — Optimize¶
Each section presents a Memento that works but is wasteful. Profile, optimize, measure.
Table of Contents¶
- Optimization 1: Diff-based Mementos for large state
- Optimization 2: Persistent data structures for free Mementos
- Optimization 3: Bound undo history
- Optimization 4: Coarse-grain Mementos for typing
- Optimization 5: Compress persisted Mementos
- Optimization 6: Snapshot every N events for fast replay
- Optimization 7: Lock-free snapshot via AtomicReference
- Optimization 8: Replace JSON with Protobuf for persistent Mementos
- Optimization 9: Off-heap Mementos for high-volume
- Optimization 10: Drop Memento for trivial state
- Optimization Tips
Optimization 1: Diff-based Mementos for large state¶
Before¶
Full snapshots: 50 undo levels × 100 MB each = 5 GB. OutOfMemoryError.
After¶
public DiffMemento save(int position, String oldChar, String newChar) {
    return new DiffMemento(position, oldChar, newChar); // few bytes
}
Each diff is tiny.
Measurement. Memory: 5GB → ~5KB for 50 small edits.
Trade-off. Restore must replay diffs in reverse — slightly slower.
Lesson: For large state with small changes, diff Mementos beat full snapshots by orders of magnitude.
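The replay-in-reverse restore mentioned in the trade-off can be sketched in a few lines. This is a minimal Python illustration; the DiffMemento fields mirror the Java signature above, but Document and its methods are assumptions for the sketch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DiffMemento:
    """Records one single-character replacement: position, old, new."""
    position: int
    old_char: str
    new_char: str

class Document:
    def __init__(self, text: str):
        self.text = text
        self._history: list[DiffMemento] = []

    def replace_char(self, position: int, new_char: str) -> None:
        # Store only the diff, not a full snapshot of self.text.
        self._history.append(DiffMemento(position, self.text[position], new_char))
        self.text = self.text[:position] + new_char + self.text[position + 1:]

    def undo(self) -> None:
        # Restore by applying the most recent diff in reverse.
        m = self._history.pop()
        self.text = self.text[:m.position] + m.old_char + self.text[m.position + 1:]
```

Undoing N edits means popping and reversing N diffs, which is the "slightly slower restore" the trade-off refers to.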
Optimization 2: Persistent data structures for free Mementos¶
Before (Java with mutable HashMap)¶
HashMap<String, String> state = new HashMap<>();
// every snapshot = deep copy
HashMap<String, String> snapshot = new HashMap<>(state);
Per-snapshot allocation: O(N).
After (Vavr / PCollections / Clojure-style)¶
import io.vavr.collection.HashMap;
HashMap<String, String> state = HashMap.empty();
state = state.put("k", "v");
// snapshot is just the reference; structural sharing
HashMap<String, String> snapshot = state;
Per-snapshot allocation: O(log N) for the modification, snapshot itself = pointer.
Measurement. 1M Mementos with mutable maps: hundreds of MB. With persistent: a few MB (shared structure).
Lesson: Persistent data structures give Mementos for free via structural sharing.
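Python's standard library has no persistent map, but the sharing idea can be shown with a toy association-chain sketch. This is purely illustrative (O(N) lookup); real persistent maps like Vavr's or pyrsistent's use tries for O(log N) access:

```python
class PMap:
    """Toy persistent map: put() returns a new map sharing the old chain."""
    __slots__ = ("key", "value", "rest")

    def __init__(self, key=None, value=None, rest=None):
        self.key, self.value, self.rest = key, value, rest

    def put(self, key, value) -> "PMap":
        # O(1) allocation: the new node points at the existing map unchanged.
        return PMap(key, value, self)

    def get(self, key):
        node = self
        while node is not None and node.key is not None:
            if node.key == key:
                return node.value
            node = node.rest
        raise KeyError(key)

EMPTY = PMap()
state = EMPTY.put("k", "v")
snapshot = state                 # Memento = the reference itself
state = state.put("k", "v2")
# snapshot still sees the old value; nothing was copied.
```

The key property carries over exactly: taking a snapshot allocates nothing, because old versions are immutable and shared.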
Optimization 3: Bound undo history¶
Before¶
Memory grows forever.
After¶
from collections import deque

class History:
    def __init__(self, max_size=1000):
        self._stack: deque = deque(maxlen=max_size)

    def push(self, m):
        self._stack.append(m)
Memory bounded; oldest auto-evicted.
Measurement. Memory steady regardless of session length.
Lesson: Always bound histories. deque(maxlen=N) is the simplest pattern.
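The auto-eviction is worth seeing once: when a bounded deque is full, append() silently drops the oldest entry instead of raising, so undo depth is capped with no extra bookkeeping:

```python
from collections import deque

history = deque(maxlen=3)            # keep at most 3 undo levels
for memento in ["s1", "s2", "s3", "s4"]:
    history.append(memento)

# "s1" was evicted automatically when "s4" arrived.
print(list(history))                 # ['s2', 's3', 's4']
```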
Optimization 4: Coarse-grain Mementos for typing¶
Before¶
class Editor {
    public void type(char c) {
        history.push(this.save()); // Memento per character
        content += c;
    }
}
For "hello world" — 11 Mementos.
After (coalesce typing within 1 second)¶
class Editor {
    private long lastTypeTs;
    private Memento pending;

    public void type(char c) {
        long now = System.currentTimeMillis();
        if (pending == null || now - lastTypeTs > 1000) {
            pending = this.save();
            history.push(pending);
        }
        lastTypeTs = now;
        content += c;
    }
}
11 keystrokes → 1 Memento. One Ctrl+Z removes "hello world."
Measurement. Memento count drops 10×; user gets sane undo granularity.
Lesson: Match Memento granularity to user-perceived actions, not implementation primitives.
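The same coalescing logic in Python, with the clock injected so the one-second window is testable. This Editor is a hypothetical minimal counterpart to the Java class, not a port of a real one:

```python
import time

class Editor:
    COALESCE_WINDOW = 1.0            # seconds

    def __init__(self, clock=time.monotonic):
        self.content = ""
        self.history: list[str] = []  # full-content snapshots
        self._clock = clock
        self._last_type_ts = float("-inf")

    def type(self, c: str) -> None:
        now = self._clock()
        # Start a new undo step only if the last keystroke is stale.
        if now - self._last_type_ts > self.COALESCE_WINDOW:
            self.history.append(self.content)
        self._last_type_ts = now
        self.content += c

    def undo(self) -> None:
        if self.history:
            self.content = self.history.pop()
```

Injecting the clock keeps the coalescing rule deterministic under test; in production the default time.monotonic is used unchanged.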
Optimization 5: Compress persisted Mementos¶
Before¶
After 1000 saves, 1 GB on disk.
After¶
String json = mapper.writeValueAsString(memento);
byte[] compressed = zstd.compress(json.getBytes());
fs.write(compressed); // ~100 KB per save
Measurement. ~10× compression for typical document JSON.
Trade-off. Save/load CPU adds tens of µs. Rarely noticeable for interactive apps.
Lesson: Compress persistent Mementos. zstd is the modern default.
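A runnable sketch of the same idea using Python's stdlib zlib (zstd itself needs a third-party binding such as python-zstandard; the exact ratio depends entirely on the data, and repetitive document text compresses especially well):

```python
import json
import zlib

# A hypothetical document memento with highly repetitive content.
memento = {"content": "lorem ipsum " * 5000, "cursor": 42}

raw = json.dumps(memento).encode("utf-8")
compressed = zlib.compress(raw, level=6)

# Round-trip: decompress, then parse, to restore the memento.
restored = json.loads(zlib.decompress(compressed))
```

Compression and decompression sit entirely at the persistence boundary; the in-memory Memento type is untouched.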
Optimization 6: Snapshot every N events for fast replay¶
Before¶
def load_aggregate(id):
    events = event_store.find_all(id)  # 100K events
    agg = Aggregate()
    for e in events:
        agg.apply(e)
    return agg
Load takes seconds.
After¶
def load_aggregate(id):
    snap = snapshot_store.find_latest(id)
    if snap:
        agg = Aggregate.from_snapshot(snap)
        events = event_store.find_after(id, snap.sequence)
    else:
        agg = Aggregate()
        events = event_store.find_all(id)
    for e in events:
        agg.apply(e)
    return agg
Save snapshot every 1000 events.
Measurement. Load time: seconds → milliseconds. Worst-case replay: 1000 events.
Lesson: Event-sourced systems need snapshots. Tune frequency by load latency.
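To make the replay saving concrete, here is a self-contained sketch with an in-memory event list and one precomputed snapshot standing in for the hypothetical event_store and snapshot_store (Aggregate here just sums integer events):

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    sequence: int                    # last event folded into this snapshot
    count: int

class Aggregate:
    def __init__(self, count=0):
        self.count = count
        self.applied = 0             # events replayed to build this instance

    @classmethod
    def from_snapshot(cls, snap: Snapshot) -> "Aggregate":
        return cls(count=snap.count)

    def apply(self, event: int) -> None:
        self.count += event
        self.applied += 1

events = list(range(2500))                            # full history
snapshot = Snapshot(sequence=2000, count=sum(range(2000)))

def load_aggregate() -> Aggregate:
    agg = Aggregate.from_snapshot(snapshot)
    # Replay only the tail after the snapshot: 500 events, not 2500.
    for e in events[snapshot.sequence:]:
        agg.apply(e)
    return agg

agg = load_aggregate()
```

The final state is identical to a full replay; only the number of applied events shrinks, which is exactly what the load-time measurement reflects.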
Optimization 7: Lock-free snapshot via AtomicReference¶
Before¶
public synchronized Memento save() {
    return new Memento(count, label);
}

public synchronized void increment() {
    count++;
}
All snapshots and writes serialize.
After¶
record State(int count, String label) {}

private final AtomicReference<State> state = new AtomicReference<>(new State(0, ""));

public State save() { return state.get(); }

public void increment() {
    state.updateAndGet(s -> new State(s.count() + 1, s.label()));
}
Measurement. Lock removed; concurrent snapshots are pointer reads.
Lesson: Immutable atomic state is lock-free for Mementos. CAS handles concurrent updates.
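A rough Python analogue of the immutable-snapshot half of this idea. Python has no AtomicReference/CAS, so this sketch keeps a lock for writers and only shows that save() becomes a plain reference read of an immutable state object:

```python
import threading
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    count: int
    label: str

class Counter:
    def __init__(self):
        self._state = State(0, "")
        self._lock = threading.Lock()    # writers still coordinate

    def save(self) -> State:
        # Snapshot = grabbing the current immutable reference. No lock,
        # no copy: the State can never change after construction.
        return self._state

    def increment(self) -> None:
        with self._lock:
            s = self._state
            self._state = State(s.count + 1, s.label)
```

Because State is immutable, a snapshot taken before an increment keeps its old values even while new writes proceed.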
Optimization 8: Replace JSON with Protobuf for persistent Mementos¶
Before¶
JSON: text-heavy, slow.
After¶
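Protobuf needs a .proto schema and generated classes, so a stdlib struct sketch stands in here. It shows the core of the win: a fixed binary layout versus text JSON for the same memento (the field layout is an assumption for the example):

```python
import json
import struct

# Hypothetical memento: count (int32), cursor (int32), label (<= 16 bytes).
memento = {"count": 123456, "cursor": 42, "label": "draft"}

as_json = json.dumps(memento).encode("utf-8")

# "<ii16s": little-endian int32, int32, 16-byte null-padded string.
as_binary = struct.pack(
    "<ii16s", memento["count"], memento["cursor"], memento["label"].encode("utf-8")
)

count, cursor, label = struct.unpack("<ii16s", as_binary)
```

Binary drops the field names and digits-as-text entirely; Protobuf adds schema evolution and varint encoding on top of the same principle.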
Measurement. ~10× faster, ~3× smaller.
Trade-off. Schema management; less inspectable.
Lesson: For high-volume persistent Mementos, Protobuf / Avro / Kryo. JSON for low-volume and human inspection.
Optimization 9: Off-heap Mementos for high-volume¶
Before¶
GC pauses noticeable under load.
After¶
ByteBuffer offHeap = ByteBuffer.allocateDirect(SIZE);
// serialize Memento bytes into offHeap; address by offset
Or memory-mapped file:
FileChannel ch = FileChannel.open(path, READ, WRITE);
MappedByteBuffer mmap = ch.map(MapMode.READ_WRITE, 0, size);
Measurement. GC pressure drops; pauses shorter. Trade-off: manual lifecycle management.
Lesson: For millions of Mementos, off-heap or memory-mapped storage avoids GC pressure. Common in databases, search engines.
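The memory-mapped variant has a direct Python analogue in the stdlib mmap module. A sketch with fixed-size slots (the slot layout is an assumed convention, one int64 memento per slot):

```python
import mmap
import os
import struct
import tempfile

SLOT_SIZE = 8                        # one int64 memento per slot
NUM_SLOTS = 1024

path = os.path.join(tempfile.mkdtemp(), "mementos.bin")
with open(path, "wb") as f:
    f.truncate(SLOT_SIZE * NUM_SLOTS)    # preallocate the backing file

with open(path, "r+b") as f:
    buf = mmap.mmap(f.fileno(), SLOT_SIZE * NUM_SLOTS)
    # Write memento #7 straight into the mapped region: no heap objects,
    # nothing for the garbage collector to trace.
    struct.pack_into("<q", buf, 7 * SLOT_SIZE, 123456789)
    (value,) = struct.unpack_from("<q", buf, 7 * SLOT_SIZE)
    buf.close()
```

The trade-off from the measurement applies here too: slot allocation, offsets, and file lifetime are now yours to manage.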
Optimization 10: Drop Memento for trivial state¶
Before¶
public final class IntCounter {
    private int value;

    public IntCounterMemento save() { return new IntCounterMemento(value); }
    public void restore(IntCounterMemento m) { this.value = m.value; }
}

public final class IntCounterMemento {
    final int value;
    IntCounterMemento(int v) { this.value = v; }
}
Memento class for one int.
After¶
public final class IntCounter {
    private int value;

    public int save() { return value; }
    public void restore(int v) { this.value = v; }
}
Measurement. Less code; no allocation.
Lesson: Memento earns its weight when state is non-trivial AND opacity matters. For one int, just return the value.
Optimization Tips¶
- Diff Mementos for large state with small changes. Memory savings huge.
- Persistent data structures = free Mementos. Embrace them when possible.
- Always bound histories. deque(maxlen=N) or a fixed-size circular buffer.
- Match Memento granularity to user actions. Coalesce typing.
- Compress persistent Mementos. zstd or similar.
- Snapshot event-sourced aggregates. Replay performance.
- Lock-free with AtomicReference + immutable record. Concurrent friendly.
- Protobuf for high-volume persistent. ~10× faster than JSON.
- Off-heap for high-volume. Avoids GC pressure.
- Drop Memento when overkill. Trivial state doesn't need the pattern.
- Profile before optimizing. Memento overhead is rarely the bottleneck — until it is.