
CLAUDE.md for Cloudflare Workers

Workers run V8 isolates at the edge — not Node.js. AI-generated code reaches for Node APIs that don't exist there. Rules for Web Standard APIs, KV, D1, R2, Durable Objects, and the wrangler workflow.

8 min read·June 23, 2025

Workers run V8 at 300+ edge locations — AI generates Node.js that crashes immediately

Web Standard APIs, KV/D1/R2 bindings, Durable Objects, and wrangler workflow

Why Cloudflare Workers Need Edge-Native Rules

Cloudflare Workers run V8 isolates — not Node.js. There's no fs, no child_process, no net, no path, no Buffer (use Uint8Array), and no require (use ESM import). AI trained on Node.js generates code that crashes on the first line. Workers also have unique constraints: 128MB memory limit, 30-second CPU time (not wall clock), and no persistent state between requests (use KV, D1, or Durable Objects).

Workers' superpower is running at the edge — your code executes in the Cloudflare data center closest to the user (300+ locations worldwide). Cold starts are sub-millisecond (V8 isolates, not containers). But the edge runtime is restricted: Web Standard APIs only, no native modules, no filesystem. AI must generate edge-compatible code.

These rules target Workers with Hono, Remix, or vanilla fetch handlers. They cover the runtime constraints, Cloudflare bindings (KV, D1, R2, Queues, Durable Objects), and the wrangler development workflow.

Rule 1: Web Standard APIs — No Node.js

The rule: 'Workers use Web Standard APIs exclusively: fetch, Request, Response, Headers, URL, URLSearchParams, crypto.subtle, TextEncoder, TextDecoder, ReadableStream, WritableStream, caches (Cache API). Some Node.js APIs are available with the nodejs_compat flag: Buffer, crypto, stream, util. Enable only what you need — don't enable nodejs_compat by default.'

For the handler: 'Workers export a default fetch handler: export default { async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> { ... } }. The env parameter contains your bindings (KV, D1, R2). ctx provides waitUntil() for background work that shouldn't block the response. Return a Response object — same as the Fetch API.'

AI generates require('http') and process.env in Workers — neither exists. The fetch handler is a Worker's interface to the world: Request in, Response out, env for bindings. One handler pattern, no framework overhead (unless you choose Hono or Remix).

  • Web Standard APIs: fetch, Request, Response, URL, crypto.subtle, Cache API
  • nodejs_compat flag for Node shims — enable only when necessary
  • No: fs, child_process, net, path, require — ESM imports only
  • Handler: export default { fetch(request, env, ctx) → Response }
  • env for bindings (KV, D1, R2) — ctx.waitUntil for background work
⚠️ Not Node.js

require('http'), process.env, fs.readFile — none exist in Workers. V8 isolates run Web Standard APIs only. One Node.js import crashes the Worker. Enable nodejs_compat only for specific shims you need.
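The handler shape above can be sketched end to end. A minimal, hedged example — the GREETING_KV binding is hypothetical, and the Env and ExecutionContext interfaces are trimmed to just the members used here (the real runtime supplies fuller types):

```typescript
// Minimal fetch-handler sketch. GREETING_KV is a hypothetical KV binding;
// these interfaces cover only what this example touches.
interface Env {
  GREETING_KV: { get(key: string): Promise<string | null> };
}
interface ExecutionContext {
  waitUntil(promise: Promise<unknown>): void;
}

const worker = {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/hello") {
      // Read from the KV binding; fall back to a default on a miss.
      const greeting = (await env.GREETING_KV.get("greeting")) ?? "Hello, Workers!";
      // Background work that must not block the response.
      ctx.waitUntil(Promise.resolve(console.log("served /hello")));
      return new Response(greeting, { headers: { "content-type": "text/plain" } });
    }
    return new Response("Not found", { status: 404 });
  },
};

export default worker;
```

Note the shape: everything the Worker needs from the outside world arrives through request, env, and ctx — there is no ambient process.env or filesystem to fall back on.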

Rule 2: KV, D1, and R2 Bindings

The rule: 'Use Cloudflare KV for key-value storage (cache, sessions, config): await env.MY_KV.get("key"), await env.MY_KV.put("key", value, { expirationTtl: 3600 }). Use D1 for relational data (SQLite at the edge): const results = await env.DB.prepare("SELECT * FROM users WHERE id = ?").bind(id).first(). Use R2 for object storage (files, images, backups): await env.BUCKET.put("file.jpg", data).'

For KV: 'KV is eventually consistent (reads may be slightly stale) and optimized for reads (low-latency globally). Use for: cached API responses, feature flags, session data, configuration. Don't use for: rapidly changing data (writes are slow to propagate) or relational data (use D1).'

For D1: 'D1 is SQLite at the edge — full SQL support, transactions, and joins. Use for: user data, content, orders — anything relational. D1 is single-region primary with global read replicas. Use prepared statements with .bind() for parameterized queries — never string interpolation.'

  • KV: key-value, eventually consistent, global reads — cache, sessions, config
  • D1: SQLite at edge, full SQL, transactions — relational data, user records
  • R2: S3-compatible object storage — files, images, backups — no egress fees
  • Bind in wrangler.toml: [[kv_namespaces]], [[d1_databases]], [[r2_buckets]]
  • Access via env: env.MY_KV, env.DB, env.BUCKET — typed in Env interface
ℹ️ D1 = SQLite at the Edge

D1 gives you full SQL (joins, transactions, indexes) running at the edge — globally distributed read replicas, single-region writes. Use .prepare().bind() for parameterized queries. Never string interpolation in SQL.
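The KV and D1 rules combine naturally into a cache-aside read: try KV first, fall back to D1, then cache the row. A sketch — the binding names and the users table are assumptions, and the binding interfaces below are trimmed to only the methods this example calls:

```typescript
// Cache-aside lookup across KV and D1. Interfaces cover only what's used here.
interface KVNamespace {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}
interface D1PreparedStatement {
  bind(...values: unknown[]): D1PreparedStatement;
  first<T>(): Promise<T | null>;
}
interface D1Database {
  prepare(sql: string): D1PreparedStatement;
}

async function getUser(id: string, kv: KVNamespace, db: D1Database) {
  // KV first: eventually consistent, but globally fast — fine for read-heavy data.
  const cached = await kv.get(`user:${id}`);
  if (cached) return JSON.parse(cached);

  // D1 fallback: parameterized query — never interpolate id into the SQL string.
  const user = await db
    .prepare("SELECT * FROM users WHERE id = ?")
    .bind(id)
    .first<{ id: string; name: string }>();

  if (user) {
    // Cache the row for an hour.
    await kv.put(`user:${id}`, JSON.stringify(user), { expirationTtl: 3600 });
  }
  return user;
}
```

The split mirrors the rule: D1 holds the relational source of truth, KV absorbs the read traffic.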

Rule 3: Durable Objects for Stateful Logic

The rule: 'Use Durable Objects for: coordinated state (counters, rate limiters), real-time collaboration (document editing, chat rooms), and WebSocket servers. Each Durable Object instance has: a unique ID, private storage (transactional key-value), and a single-threaded execution environment. Access from Workers: const id = env.COUNTER.idFromName("global"); const obj = env.COUNTER.get(id); const response = await obj.fetch(request).'

For when to use: 'Workers are stateless — each request is independent. KV is eventually consistent — not suitable for counters or coordination. Durable Objects are strongly consistent and single-threaded — perfect for: atomically incrementing counters, managing WebSocket connections, coordinating access to shared resources, and implementing rate limiters.'

AI generates in-memory state in Workers (lost between requests) or uses KV for counters (race conditions due to eventual consistency). Durable Objects are the correct primitive for any state that needs consistency across requests.

💡 Durable = Consistent

KV is eventually consistent — incrementing a counter with KV has race conditions. Durable Objects are strongly consistent and single-threaded — atomically correct for counters, rate limiters, and coordination.
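The counter case above can be sketched as a Durable Object class. DurableObjectState is supplied by the runtime; the shape declared here is trimmed to the storage methods used. Because each object instance is single-threaded, the read-modify-write below is atomic per object without any locking:

```typescript
// A strongly consistent counter as a Durable Object (sketch).
// The runtime provides the real DurableObjectState; this is a minimal shape.
interface DurableObjectState {
  storage: {
    get<T>(key: string): Promise<T | undefined>;
    put<T>(key: string, value: T): Promise<void>;
  };
}

export class Counter {
  private state: DurableObjectState;

  constructor(state: DurableObjectState) {
    this.state = state;
  }

  // Single-threaded execution makes this read-increment-write atomic
  // per object instance — no race, unlike a KV-based counter.
  async fetch(_request: Request): Promise<Response> {
    const current = (await this.state.storage.get<number>("count")) ?? 0;
    const next = current + 1;
    await this.state.storage.put("count", next);
    return new Response(String(next));
  }
}
```

From a Worker, you'd reach it as the rule shows: env.COUNTER.idFromName("global"), then get(id), then fetch(request) — every request for the same name routes to the same instance.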

Rule 4: Wrangler Development Workflow

The rule: 'Use wrangler for all development and deployment: wrangler dev for local development (runs a local Workers runtime), wrangler deploy for production deployment, wrangler tail for live log streaming. Configure in wrangler.toml: name, main entry point, compatibility_date, bindings (KV, D1, R2), and routes. Never deploy through the Cloudflare dashboard — wrangler.toml is the source of truth.'

For environments: 'Use wrangler environments for staging/production: [env.staging] in wrangler.toml with separate bindings. Deploy with: wrangler deploy --env staging. Each environment gets its own KV namespaces, D1 databases, and R2 buckets — isolated from production.'

For secrets: 'Use wrangler secret put SECRET_NAME for secrets — they're encrypted and available as env.SECRET_NAME. Never put secrets in wrangler.toml (committed to git). Use .dev.vars for local development secrets (.gitignored). Secrets are per-environment — set separately for staging and production.'

  • wrangler dev: local runtime — wrangler deploy: production — wrangler tail: live logs
  • wrangler.toml: name, main, compatibility_date, bindings, routes
  • Environments: [env.staging] for isolated staging — separate bindings per env
  • wrangler secret put for encrypted secrets — .dev.vars for local development
  • Never deploy via dashboard — wrangler.toml is the source of truth
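The pieces above fit together in one wrangler.toml. A sketch with placeholder names and IDs — substitute your own, and note that secrets never appear here:

```toml
# Sketch of a wrangler.toml — all names and IDs are placeholders.
name = "my-worker"
main = "src/index.ts"
compatibility_date = "2025-06-23"

[[kv_namespaces]]
binding = "MY_KV"
id = "<production-kv-id>"

[[d1_databases]]
binding = "DB"
database_name = "my-db"
database_id = "<production-d1-id>"

[[r2_buckets]]
binding = "BUCKET"
bucket_name = "my-bucket"

# Staging gets its own isolated bindings; deploy with: wrangler deploy --env staging
[env.staging]
[[env.staging.kv_namespaces]]
binding = "MY_KV"
id = "<staging-kv-id>"
```

The binding names (MY_KV, DB, BUCKET) are what appear on env in your handler, so they should match your typed Env interface exactly.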

Rule 5: Workers-Specific Patterns

The rule: 'Use ctx.waitUntil() for background work that shouldn't block the response: ctx.waitUntil(logToAnalytics(data)). The response returns immediately; the background work continues. Use the Cache API for response caching: const cache = caches.default; let response = await cache.match(request); if (!response) { response = await fetchOrigin(request); ctx.waitUntil(cache.put(request, response.clone())); }.'
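The cache-first pattern in the rule above, sketched as a helper so the flow is explicit. The Cache and ExecutionContext shapes are trimmed to what's used, and fetchOrigin is passed in as a stand-in for your origin fetch:

```typescript
// Cache-first response serving (sketch). caches.default and ctx come from the
// Workers runtime; minimal shapes are declared here for self-containment.
interface EdgeCache {
  match(request: Request): Promise<Response | undefined>;
  put(request: Request, response: Response): Promise<void>;
}
interface ExecutionContext {
  waitUntil(promise: Promise<unknown>): void;
}

async function cachedFetch(
  request: Request,
  cache: EdgeCache,
  ctx: ExecutionContext,
  fetchOrigin: (req: Request) => Promise<Response>
): Promise<Response> {
  // Serve from the edge cache when possible.
  const hit = await cache.match(request);
  if (hit) return hit;

  // Miss: fetch from origin, return immediately, populate the cache
  // in the background so the caller never waits on cache.put.
  const response = await fetchOrigin(request);
  ctx.waitUntil(cache.put(request, response.clone()));
  return response;
}
```

The clone() matters: a Response body is a one-shot stream, so the cache gets its own copy while the original goes back to the client.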

For Queues: 'Use Cloudflare Queues for async processing: send messages from Workers, process in a queue consumer. await env.MY_QUEUE.send({ type: "email", to: user.email }). Consumer: export default { async queue(batch, env) { for (const message of batch.messages) { ... message.ack(); } } }. Use for: email sending, image processing, analytics batching.'
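The per-message loop from the consumer above can be factored into a helper so it's testable outside the runtime. A sketch — the EmailJob shape and the injected send function are assumptions, and the batch interfaces are trimmed to what the loop uses:

```typescript
// Queue-consumer batch loop (sketch). Message shapes are minimal stand-ins
// for the runtime's MessageBatch; EmailJob and send() are assumptions.
interface QueueMessage<T> {
  body: T;
  ack(): void;
  retry(): void;
}
interface MessageBatch<T> {
  messages: QueueMessage<T>[];
}
type EmailJob = { type: "email"; to: string };

async function handleBatch(
  batch: MessageBatch<EmailJob>,
  send: (to: string) => Promise<void>
): Promise<void> {
  for (const message of batch.messages) {
    try {
      await send(message.body.to);
      message.ack(); // done — don't redeliver this message
    } catch {
      message.retry(); // transient failure — ask for redelivery
    }
  }
}
```

In a real Worker this loop lives inside the queue(batch, env) handler from the rule; acking per message means one bad job doesn't force the whole batch to redeliver.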

For Pages Functions: 'Cloudflare Pages supports Workers as functions: functions/api/users.ts. File-based routing like Next.js API routes. Use for: SSR, API endpoints, and server-side logic in Cloudflare Pages projects. Pages Functions share the same runtime as Workers — same constraints, same bindings.'

Complete Cloudflare Workers Rules Template

Consolidated rules for Cloudflare Workers projects.

  • Web Standard APIs only — nodejs_compat only when needed — no require, no fs
  • fetch handler: Request → Response — env for bindings — ctx.waitUntil for background
  • KV for cache/sessions — D1 for relational — R2 for storage — typed Env interface
  • Durable Objects for stateful: counters, WebSocket, coordination — not in-memory state
  • wrangler.toml for config — wrangler dev/deploy/tail — environments for staging/prod
  • wrangler secret for encrypted secrets — .dev.vars for local — never in wrangler.toml
  • Cache API for response caching — Queues for async processing
  • compatibility_date pinned — 128MB memory — 30s CPU time — sub-ms cold starts