Best Practices

AI Rules for Memory Management

AI creates memory leaks with uncleared intervals, unremoved event listeners, and growing arrays. Rules for cleanup patterns, WeakRef, memory profiling, and leak prevention.

7 min read·August 15, 2024

setInterval without clearInterval — the most common AI memory leak

Cleanup on unmount, WeakRef for caches, bounded collections, and heap profiling

AI Creates Memory Leaks in Every Language

Memory leaks are the silent killer of long-running applications. AI generates code that leaks in predictable ways: setInterval without clearInterval (accumulates callbacks), addEventListener without removeEventListener (accumulates listeners), arrays that grow without bounds (accumulates data), closures that capture large objects (prevents garbage collection), and React components that set state after unmount (the classic async leak).

In garbage-collected languages (JavaScript, Python, Go, Java), memory leaks come from references that prevent garbage collection — not from missing free() calls. The garbage collector cannot free an object that something still references. AI creates these references and never cleans them up — the memory grows until the process crashes or the browser tab becomes unresponsive.

These rules cover: cleanup patterns for timers and listeners, WeakRef for caches, collection size limits, and how to find leaks with profiling tools. They apply to any garbage-collected language with emphasis on JavaScript/TypeScript (the most common AI-generated leak source).

Rule 1: Clean Up Everything on Unmount/Dispose

The rule: 'Every setup must have a corresponding cleanup. setInterval → clearInterval. addEventListener → removeEventListener. new WebSocket() → socket.close(). IntersectionObserver.observe → IntersectionObserver.disconnect. In React: useEffect returns a cleanup function. In Vue: onUnmounted. In Angular: ngOnDestroy. In plain JS: a dispose() or destroy() method. No exceptions — every resource acquisition has a release.'

For React: 'useEffect(() => { const interval = setInterval(tick, 1000); return () => clearInterval(interval); }, []). The return function runs on unmount — it clears the interval. Without the cleanup, navigating away from the page leaves the interval running: accumulating callbacks, calling setState on unmounted components, and leaking memory.'

AI generates useEffect(() => { setInterval(tick, 1000); }, []) — no cleanup, no return. The interval runs forever. After 10 page navigations, 10 intervals are running simultaneously. After 100 navigations, the page is unresponsive. One return statement prevents the entire leak.

  • setInterval → clearInterval on unmount — timers never self-clean
  • addEventListener → removeEventListener — named functions, not inline arrows
  • WebSocket/EventSource → close on unmount — connections never self-close
  • Observer (Intersection, Mutation, Resize) → disconnect on unmount
  • React: useEffect return cleanup — Vue: onUnmounted — Angular: ngOnDestroy
⚠️ One Return Prevents the Leak

useEffect(() => { const id = setInterval(tick, 1000); return () => clearInterval(id); }, []). Without the return, navigating away leaves the interval running. After 100 navigations, 100 intervals run simultaneously. One return statement prevents it.
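The same "every setup has a cleanup" discipline works outside any framework. A minimal sketch of a plain-JS dispose pattern, as the rule suggests; the `Disposable` class name and its API are illustrative, not from any library:

```typescript
// Sketch: register each setup's teardown at the moment of setup,
// then release everything in one dispose() call.
class Disposable {
  private cleanups: Array<() => void> = [];

  // Pair every resource acquisition with its release.
  onDispose(cleanup: () => void): void {
    this.cleanups.push(cleanup);
  }

  startPolling(tick: () => void, ms: number): void {
    const id = setInterval(tick, ms);
    this.onDispose(() => clearInterval(id)); // setInterval -> clearInterval
  }

  dispose(): void {
    // Run teardowns in reverse order of setup, then drop the references
    // so the closures (and anything they capture) can be collected.
    for (const cleanup of this.cleanups.reverse()) cleanup();
    this.cleanups = [];
  }
}
```

React's useEffect return function, Vue's onUnmounted, and Angular's ngOnDestroy are framework-provided hooks for exactly this pattern.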

Rule 2: WeakRef and WeakMap for Caches

The rule: 'Use WeakMap for caches keyed by objects: const cache = new WeakMap<HTMLElement, CSSStyleDeclaration>(). When the key object is garbage collected, the cache entry is automatically removed — no manual cleanup needed. Use WeakRef for references that should not prevent garbage collection: const ref = new WeakRef(largeObject); const obj = ref.deref(); // undefined if GC collected it.'

For Map vs WeakMap: 'Map keeps strong references to keys — the key object can never be garbage collected while in the Map. WeakMap keeps weak references — when nothing else references the key, both the key and value are collected. Use Map for: data you control and explicitly delete. Use WeakMap for: metadata attached to objects whose lifecycle you do not control (DOM elements, third-party objects).'

AI uses Map for everything — including caches that grow indefinitely. A Map cache with 10,000 entries for DOM elements that have been removed from the page is a memory leak. WeakMap automatically releases entries when the DOM elements are garbage collected — the cache shrinks itself.
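A minimal sketch of a WeakMap-backed memoizer in this spirit; the `summarize` function and its return shape are made up for illustration:

```typescript
// Per-object cache that cannot leak: WeakMap entries are released
// automatically once nothing else references the key object.
type Summary = { area: number };

const summaryCache = new WeakMap<object, Summary>();

function summarize(rect: { width: number; height: number }): Summary {
  const hit = summaryCache.get(rect);
  if (hit) return hit; // same object key: return the cached result
  const summary = { area: rect.width * rect.height };
  summaryCache.set(rect, summary);
  return summary;
}
```

Note that WeakMap keys must be objects (not primitives), and WeakMaps are not iterable, precisely because the GC may remove entries at any time.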

Rule 3: Bound All Growing Collections

The rule: 'Every array, Map, Set, or queue that grows over time must have a size limit. Log buffer: keep last 1000 entries, evict oldest. Event history: keep last 100 events. Cache: LRU with max 500 entries. Undo stack: limit to 50 steps. Without limits, these collections grow until the process runs out of memory. Use a circular buffer or LRU cache — never an unbounded array with push().'

For implementation: 'Simple bounded array: if (buffer.length > MAX) buffer.shift(). LRU cache: use the lru-cache package, or a Map with manual eviction (Map preserves insertion order: delete the first key when size exceeds the limit, and re-insert entries on access so reads count as recently used; without the re-insert it is FIFO, not LRU). Circular buffer: overwrite at index % MAX_SIZE. Choose based on access pattern: FIFO (shift oldest), LRU (evict least recently used), or circular (overwrite in place).'

AI generates: const logs = []; function addLog(entry) { logs.push(entry); } — after 10 million log entries, the process crashes. One line (if (logs.length > 1000) logs.shift()) prevents the crash. Every push without a corresponding eviction is a potential memory bomb.
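Two of these eviction strategies, sketched minimally (the names `addLog`, `MAX_LOGS`, and `LruCache` are illustrative, and the LRU relies on Map's documented insertion-order iteration):

```typescript
// FIFO-bounded buffer: one conditional after every push.
const MAX_LOGS = 1000;
const logs: string[] = [];
function addLog(entry: string): void {
  logs.push(entry);
  if (logs.length > MAX_LOGS) logs.shift(); // evict oldest
}

// LRU cache on a plain Map: re-inserting on read keeps recently
// used keys at the back of the iteration order.
class LruCache<K, V> {
  private map = new Map<K, V>();
  constructor(private maxSize: number) {}

  get(key: K): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      this.map.delete(key); // re-insert to mark as recently used
      this.map.set(key, value);
    }
    return value;
  }

  set(key: K, value: V): void {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // First key in iteration order is the least recently used.
      this.map.delete(this.map.keys().next().value as K);
    }
  }
}
```

Note that shift() on a very large array is O(n); for high-throughput buffers a circular buffer (overwrite at index % MAX_SIZE) avoids that cost.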

  • Every growing collection has a max size — logs, events, cache, undo stack
  • LRU cache for key-value: evict least recently used when full
  • Circular buffer for append-only: overwrite at index % MAX_SIZE
  • logs.push(entry) + if (logs.length > MAX) logs.shift() — bounded
  • Unbounded push() = memory bomb — bounded push = predictable memory
💡 Every push() Needs a Limit

logs.push(entry) without a limit is a memory bomb — after 10M entries, the process crashes. logs.push(entry); if (logs.length > 1000) logs.shift(); — bounded, predictable, safe. One conditional after every push.

Rule 4: Closure and Reference Leaks

The rule: 'Closures capture their enclosing scope — including large objects you may not intend to keep. If a closure references a DOM element, that element cannot be garbage collected while the closure exists. If a callback registered on a global object captures local state, that state lives as long as the global object. Minimize what closures capture — extract the specific values you need, not the entire scope.'

For common patterns: 'Event emitter leak: emitter.on("event", () => { this.largeData... }) — the listener keeps this alive. Fix: emitter.removeListener on cleanup. Timer leak: setTimeout(() => { this.process(this.bigBuffer) }, 60000) — the callback keeps this alive for 60 seconds even if the object is no longer needed. Fix: clearTimeout on cleanup, or use WeakRef.'

For React: 'Stale closure is not a memory leak but a bug: useCallback with stale dependencies captures old state. The fix: correct dependency arrays or use refs. Actual leaks: subscribing to a store in useEffect without unsubscribing: useEffect(() => { const unsub = store.subscribe(handler); return () => unsub(); }, []).'
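The emitter pattern above can be sketched concretely. A minimal example, assuming Node's EventEmitter; the `MetricsView` class and its "sample" event are made up for illustration:

```typescript
import { EventEmitter } from "node:events";

// Sketch: keep a reference to the named handler so it can be removed,
// and capture only the value the handler needs instead of `this`.
class MetricsView {
  // Pretend this is large; the handler must not keep it alive.
  private bigBuffer = new Array(1_000_000).fill(0);
  private handler: (n: number) => void;

  constructor(private emitter: EventEmitter) {
    let total = 0; // extracted state: the closure captures a number, not `this`
    this.handler = (n: number) => { total += n; };
    emitter.on("sample", this.handler);
  }

  destroy(): void {
    // Without this, the emitter's listener list keeps the closure
    // (and anything it captures) alive for the emitter's lifetime.
    this.emitter.removeListener("sample", this.handler);
  }
}
```

An inline arrow passed directly to emitter.on() could never be removed, because removeListener requires the same function reference; storing the handler is what makes the cleanup possible.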

Rule 5: Memory Profiling and Leak Detection

The rule: 'Profile memory in development before production: Chrome DevTools Memory tab → take heap snapshot → perform the suspected leaking action → take another snapshot → compare. Growing objects that should have been collected are the leak. Use the Allocation timeline for real-time tracking. Use performance.measureUserAgentSpecificMemory() for programmatic monitoring.'

For the leak detection pattern: '1) Open Memory tab in DevTools. 2) Take a heap snapshot (baseline). 3) Perform the action (navigate to page and back, open and close modal). 4) Force garbage collection (click the trash can icon). 5) Take another snapshot. 6) Compare: if retained objects grew, something is leaking. Filter by "Objects allocated between Snapshot 1 and Snapshot 2" to find the leaked objects.'

For Node.js: 'Use the --inspect flag and Chrome DevTools for heap snapshots. Use process.memoryUsage() for programmatic monitoring. Use Clinic.js for automated heap profiling: clinic heapprofiler -- node app.js. Monitor RSS (Resident Set Size) in production — if it grows continuously without plateauing, you have a memory leak.'
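The "continuous growth without plateau" heuristic can be sketched as a check over periodic RSS samples. The function name and the strict-monotonic criterion here are illustrative simplifications, not a production-grade detector:

```typescript
// Flags the leak signature: every sample strictly above the previous one.
// A healthy process plateaus or dips after garbage collection.
function looksLikeLeak(rssSamples: number[], minGrowthBytes = 0): boolean {
  if (rssSamples.length < 2) return false;
  for (let i = 1; i < rssSamples.length; i++) {
    if (rssSamples[i] - rssSamples[i - 1] <= minGrowthBytes) return false;
  }
  return true;
}

// In Node, feed it real samples taken on an interval:
const samples: number[] = [];
function sampleMemory(): void {
  samples.push(process.memoryUsage().rss);
}
```

In practice the sampling interval should be long enough (minutes, not seconds) that normal GC churn does not mask or mimic growth, and alerting should require growth sustained over many samples.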

  • Chrome DevTools Memory tab: heap snapshots + comparison
  • Pattern: snapshot → action → GC → snapshot → compare — find growth
  • Allocation timeline: real-time tracking of growing objects
  • Node.js: --inspect + DevTools, process.memoryUsage(), clinic.js
  • Production: monitor RSS — continuous growth without plateau = leak
ℹ️ Snapshot → Action → Snapshot

Chrome DevTools Memory: take snapshot, perform the suspected leaking action, force GC, take another snapshot, compare. Objects that grew but should have been collected are the leak. The comparison view shows exactly what is retained.

Complete Memory Management Rules Template

Consolidated rules for memory management.

  • Every setup has a cleanup: interval→clear, listener→remove, socket→close, observer→disconnect
  • React: useEffect returns cleanup — Vue: onUnmounted — Angular: ngOnDestroy
  • WeakMap for object-keyed caches — entries auto-collected when key is GC'd
  • Bound all growing collections: max size + eviction (LRU, FIFO, circular)
  • Minimize closure captures — extract specific values, not entire scope
  • Unsubscribe from stores, emitters, observables on unmount
  • Profile: DevTools heap snapshots + comparison — find retained objects
  • Production: monitor RSS — alert on continuous growth without plateau