
AI Rules for Concurrency and Parallelism

AI runs async operations sequentially or creates uncontrolled parallel floods. Rules for Promise.all, concurrency limits, race condition prevention, and async patterns.

October 1, 2024

10 sequential awaits take 10 seconds — Promise.all takes 1 second

Parallel I/O, concurrency limits, race condition prevention, and async error isolation

AI Either Serializes Everything or Parallelizes Without Limits

AI generates async code at two extremes: sequential (await each operation one-by-one, taking 10 seconds for 10 independent API calls that could run in parallel in 1 second) or uncontrolled parallel (Promise.all with 10,000 concurrent requests, overwhelming the target service and getting rate limited or banned). Both waste time — sequential by waiting unnecessarily, parallel by flooding without limits.

The correct approach depends on the operations: independent I/O operations (API calls, database queries) should run in parallel with a concurrency limit. Dependent operations (step B needs the result of step A) must run sequentially. CPU-intensive operations should use worker threads or processes — not the main event loop.

These rules cover: when to parallelize, how to limit concurrency, race condition prevention, and patterns for different async scenarios (I/O-bound, CPU-bound, mixed).

Rule 1: Promise.all for Independent Operations

The rule: 'When operations are independent (do not depend on each other), run them in parallel with Promise.all: const [users, orders, stats] = await Promise.all([getUsers(), getOrders(), getStats()]). Three independent database queries run simultaneously — total time = the slowest query, not the sum of all three. Never await each one sequentially when they are independent.'

For error handling: 'Promise.all rejects on the first failure — all other results are lost. Use Promise.allSettled when you need all results regardless of failures: const results = await Promise.allSettled([...]); results.forEach(r => r.status === "fulfilled" ? use(r.value) : log(r.reason)). Use Promise.all when all operations must succeed. Use Promise.allSettled when partial success is acceptable.'
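A minimal sketch of the allSettled pattern above. The `getUser`, `getOrders`, and `getStats` functions are hypothetical stand-ins for real I/O calls, with one deliberately failing:

```javascript
// Promise.allSettled keeps every result, even when some operations fail.
// These three functions are illustrative stand-ins for real API calls.
const getUser = async () => ({ id: 1 });
const getOrders = async () => { throw new Error("orders service down"); };
const getStats = async () => ({ visits: 42 });

async function loadDashboard() {
  const results = await Promise.allSettled([getUser(), getOrders(), getStats()]);
  const values = [];
  const errors = [];
  for (const r of results) {
    if (r.status === "fulfilled") values.push(r.value);
    else errors.push(r.reason.message);
  }
  // Partial success: two values arrive, one error is logged, nothing is lost.
  return { values, errors };
}
```

With Promise.all, the `getOrders` rejection would have discarded the other two results; allSettled returns all three outcomes for individual inspection.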

AI generates: const users = await getUsers(); const orders = await getOrders(); const stats = await getStats(); — sequential. If each takes 1 second, total = 3 seconds. Promise.all runs all three simultaneously: total = 1 second. For 10 independent operations, the difference is 10 seconds vs 1 second.
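The timing difference can be demonstrated directly. This sketch uses a hypothetical `sleep` helper to simulate three independent 100 ms I/O operations:

```javascript
// sleep simulates a 100 ms I/O operation (hypothetical helper).
const sleep = (ms, value) => new Promise(res => setTimeout(() => res(value), ms));

async function sequential() {
  const start = Date.now();
  const a = await sleep(100, "users");   // waits 100 ms
  const b = await sleep(100, "orders");  // then another 100 ms
  const c = await sleep(100, "stats");   // then another 100 ms
  return { results: [a, b, c], elapsed: Date.now() - start }; // ~300 ms total
}

async function parallel() {
  const start = Date.now();
  const results = await Promise.all([
    sleep(100, "users"),
    sleep(100, "orders"),
    sleep(100, "stats"),
  ]);
  return { results, elapsed: Date.now() - start }; // ~100 ms, the slowest call
}
```

Same three operations, same results; only the total wall-clock time changes.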

  • Promise.all for independent operations — runs simultaneously, total = slowest
  • Sequential only when B depends on the result of A
  • Promise.allSettled when partial success is acceptable — no early rejection
  • 3 sequential queries at 1s each = 3s. 3 parallel = 1s. 10x difference at scale.
  • Promise.race for first-to-complete — timeout patterns, fallback sources
💡 3 Seconds → 1 Second

Three independent queries at 1s each: sequential = 3s, Promise.all = 1s (the slowest query). For 10 independent operations: 10s vs 1s. One line (Promise.all) provides a 10x speedup with zero complexity.

Rule 2: Bounded Concurrency — Never Unlimited Parallel

The rule: 'Never run unlimited parallel operations: Promise.all(thousandUrls.map(fetch)) fires 1000 simultaneous requests — the target service rate limits you, your connection pool exhausts, or your memory spikes. Use a concurrency limiter: p-limit (Node.js), asyncio.Semaphore (Python), or errgroup with limit (Go). Limit to 5-20 concurrent operations depending on the target service.'

For p-limit: 'import pLimit from "p-limit"; const limit = pLimit(10); const results = await Promise.all(urls.map(url => limit(() => fetch(url)))); — maximum 10 requests in flight at any time. When one completes, the next starts. Total throughput: a rolling window of 10 concurrent requests, not fixed batches, and no flooding.'
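To see what a limiter like p-limit does under the hood, here is a minimal hand-rolled sketch (not p-limit's actual implementation): a wrapper that allows at most `max` tasks in flight and starts the next queued task as each one finishes.

```javascript
// Minimal concurrency limiter, roughly what p-limit provides.
// Returns a function that wraps a task; at most `max` tasks run at once.
function createLimiter(max) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= max || queue.length === 0) return;
    active++;
    const { task, resolve, reject } = queue.shift();
    task()
      .then(resolve, reject)
      .finally(() => { active--; next(); }); // a finished task frees a slot
  };
  return task => new Promise((resolve, reject) => {
    queue.push({ task, resolve, reject });
    next();
  });
}
```

Usage mirrors the p-limit pattern: `const limit = createLimiter(10); await Promise.all(urls.map(url => limit(() => fetch(url))));` — in production code, prefer the battle-tested library.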

For batch processing: 'For very large datasets (100K+ items), process in batches: for (let i = 0; i < items.length; i += BATCH_SIZE) { const batch = items.slice(i, i + BATCH_SIZE); await Promise.all(batch.map(process)); }. Each batch runs in parallel, batches run sequentially. This bounds both: memory (one batch in memory) and concurrency (BATCH_SIZE parallel).'
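The batch loop above, packaged as a reusable sketch. `process` is a hypothetical per-item async operation supplied by the caller:

```javascript
// Batch processing: each batch runs in parallel, batches run sequentially.
// Bounds memory (one batch of results in flight) and concurrency (batchSize).
async function processInBatches(items, batchSize, process) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    results.push(...await Promise.all(batch.map(process)));
  }
  return results;
}
```

One trade-off worth knowing: each batch waits for its slowest item before the next batch starts, which is why a rolling limiter like p-limit often yields better throughput for uneven workloads.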

⚠️ 1000 Parallel = Banned

Promise.all(thousandUrls.map(fetch)) fires 1000 requests simultaneously. The target rate-limits you, your connection pool exhausts, or your memory spikes. p-limit(10) caps at 10 concurrent — full throughput without flooding.

Rule 3: Prevent Race Conditions

The rule: 'Race conditions occur when: two async operations read-then-write the same data, and the interleaving produces a wrong result. Classic: read balance → subtract amount → write balance. If two withdrawals run simultaneously, both read 100, both subtract 50, both write 50 — the balance should be 0 but is 50. Prevent with: database transactions (SERIALIZABLE), optimistic locking (version field), or mutex/semaphore (in-memory coordination).'

For database: 'Use transactions with appropriate isolation: BEGIN; SELECT balance FROM accounts WHERE id = 1 FOR UPDATE; UPDATE accounts SET balance = balance - 50 WHERE id = 1; COMMIT. FOR UPDATE locks the row — the second transaction waits until the first commits. This serializes access to the specific row without locking the entire table.'

For in-memory: 'Use a mutex for shared in-memory state: const mutex = new Mutex(); await mutex.runExclusive(async () => { const value = await read(); await write(value + 1); }). Only one execution enters the critical section at a time. Use async-mutex (npm) for JavaScript. Use threading.Lock for Python. Use sync.Mutex for Go.'
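A minimal sketch of the mutex idea, built from a promise chain rather than the async-mutex library (which is the better choice in real code). The read/write/withdraw helpers are hypothetical, modeling the balance example from the rule:

```javascript
// Minimal promise-chain mutex, similar in spirit to async-mutex's
// runExclusive: only one critical section runs at a time.
function createMutex() {
  let chain = Promise.resolve();
  return fn => {
    const run = chain.then(() => fn());
    chain = run.catch(() => {}); // keep the chain alive after failures
    return run;
  };
}

// Shared state and hypothetical async read/write, as in the balance example.
let balance = 100;
const read = async () => balance;
const write = async v => { balance = v; };

const runExclusive = createMutex();
const withdraw = amount => runExclusive(async () => {
  const current = await read();   // without the mutex, two withdrawals
  await write(current - amount);  // both read 100 and both write 50
});
```

Running `await Promise.all([withdraw(50), withdraw(50)])` serializes the two critical sections: the second withdrawal reads 50, not 100, and the balance ends at 0 instead of the lost-update result of 50.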

  • Race condition: two operations read-then-write same data with wrong interleaving
  • Database: transactions + FOR UPDATE for row-level locking
  • In-memory: mutex/semaphore for critical sections — async-mutex for JS
  • Optimistic locking: version field — retry if version changed between read and write
  • Test with concurrent load — race conditions only appear under concurrency
ℹ️ Race Conditions Need Load

Race conditions only appear under concurrent load — your single-user test passes perfectly. Two withdrawals reading balance=100 simultaneously both write 50 instead of 0. Test with concurrent requests. Use FOR UPDATE in SQL to serialize access.

Rule 4: Sequential vs Parallel Decision Framework

The rule: 'Decision framework: Are the operations independent? Yes → parallel (Promise.all). No → sequential (await). Is the parallelism bounded? Yes (5-20 items) → Promise.all directly. No (100+ items) → p-limit or batch processing. Is the operation I/O-bound or CPU-bound? I/O → async (event loop handles it). CPU → worker thread/process (event loop would block).'

For I/O-bound: 'Network requests, database queries, file reads — async/await with Promise.all. The event loop handles thousands of concurrent I/O operations because it delegates to the OS. One thread, many concurrent I/O operations — this is what async/await is designed for.'

For CPU-bound: 'Image processing, data transformation, hashing, compression — worker threads (Node.js), multiprocessing (Python), goroutines (Go). CPU work blocks the event loop — even inside async functions. Use Worker Threads (Node) or Web Workers (browser) for operations longer than ~50ms that would block the UI or prevent handling other requests.'

Rule 5: Async Error Handling Patterns

The rule: 'Unhandled Promise rejections crash Node.js (v15+) or silently fail (v14-). Always: catch errors on every Promise, add a .catch to fire-and-forget promises, and use try/catch around await. For Promise.all: one rejection rejects the entire batch — use Promise.allSettled if partial results are acceptable. For background tasks: catch at the top level and log/alert — never let a rejection go unhandled.'

For fire-and-forget: 'If you intentionally do not await a Promise (background logging, analytics), add a .catch: sendAnalytics(data).catch(err => logger.warn("Analytics failed", err)). Without .catch, a rejection creates an unhandled rejection — which crashes Node.js in production. Even non-critical operations need error handling.'
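A minimal sketch of the fire-and-forget pattern. `sendAnalytics` and `logger` are hypothetical stand-ins; the point is that the call is not awaited, yet its rejection is still handled:

```javascript
// Hypothetical stand-ins: an analytics call that fails, and a logger.
const logger = {
  warnings: [],
  warn(msg, err) { this.warnings.push(msg); },
};
const sendAnalytics = async data => {
  throw new Error("analytics endpoint unreachable");
};

function trackPageView(data) {
  // Intentionally not awaited — the page flow should not wait on analytics.
  // The .catch turns a would-be unhandled rejection into a logged warning.
  sendAnalytics(data).catch(err => logger.warn("Analytics failed", err));
}
```

Without that `.catch`, the rejection would surface as an unhandled rejection and crash Node.js v15+.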

For concurrent error isolation: 'When running multiple operations in parallel, isolate errors: each operation has its own try/catch or .catch handler. One failing operation should not prevent others from completing. Use Promise.allSettled for this: every operation runs to completion, you inspect results individually.'
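Error isolation can also be done with an individual `.catch` per operation that maps failures to a fallback value, so Promise.all never sees a rejection. `fetchFeed` is a hypothetical stand-in with one deliberately failing source:

```javascript
// Hypothetical feed fetcher: the "ads" source fails, the others succeed.
const fetchFeed = async name => {
  if (name === "ads") throw new Error("ads service down");
  return { name, items: [1, 2, 3] };
};

async function loadFeeds(names) {
  return Promise.all(
    names.map(name =>
      // Each operation handles its own error and returns a fallback,
      // so one failing feed never rejects the whole batch.
      fetchFeed(name).catch(err => ({ name, items: [], error: err.message }))
    )
  );
}
```

The caller always receives one entry per feed: successful feeds carry their items, failed feeds carry an empty list plus an error message to log or display.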

  • Every Promise: await with try/catch OR .catch handler — never unhandled
  • Fire-and-forget: .catch(logError) — unhandled rejection crashes Node.js v15+
  • Promise.all: one rejection rejects all — use allSettled for partial results
  • Background tasks: catch at top level, log, alert — never silent failure
  • Concurrent errors: isolate with individual .catch — one failure does not block others

Complete Concurrency Rules Template

Consolidated rules for concurrency and parallelism.

  • Promise.all for independent operations — sequential only when dependent
  • p-limit for bounded concurrency (5-20) — never unlimited Promise.all on large arrays
  • Batch processing for 100K+ items: parallel within batch, sequential between batches
  • Race conditions: transactions + FOR UPDATE, mutex for in-memory, optimistic locking
  • Decision: independent→parallel, dependent→sequential, CPU→worker threads
  • Every Promise has error handling: try/catch or .catch — never unhandled
  • Fire-and-forget: .catch(logError) — unhandled rejections crash Node.js
  • Promise.allSettled for partial success — Promise.all for all-or-nothing