
CLAUDE.md for AWS Lambda Functions

AI generates Express-style code for Lambda. Rules for event-driven handlers, cold start optimization, connection reuse, layers, and proper IAM scoping.

7 min read · April 7, 2025

Lambda is event-driven — AI generates Express servers inside it

Handler design, cold start optimization, connection reuse, layers, and IAM scoping

Why Lambda Needs Serverless-Native Rules

AWS Lambda is event-driven — functions receive an event, process it, and return a response. There's no server, no long-running process, and no persistent state between invocations. AI assistants generate Express/Fastify server code inside Lambda (wrapping a server in a handler), establish database connections inside the handler (new connection per invocation), and ignore cold start optimization — all patterns that waste money and add latency.

Lambda's execution model: the runtime creates a container, initializes your code (cold start), then reuses the container for subsequent invocations (warm start). Code outside the handler runs once per cold start — it's where connections, SDK clients, and heavy initialization belong. Code inside the handler runs on every invocation — it should be fast and lightweight.

These rules target Lambda with Node.js, Python, or Go runtimes. The patterns are universal across languages — only the syntax differs.

Rule 1: Event-Driven Handler Design

The rule: 'Handlers receive an event and return a response — no Express, no Fastify, no HTTP server inside Lambda. export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => { const body = JSON.parse(event.body || "{}"); ... return { statusCode: 200, body: JSON.stringify(result) }; }. Use typed event objects for each trigger type: APIGatewayProxyEvent, SQSEvent, S3Event, ScheduledEvent.'

For API Gateway: 'API Gateway passes HTTP requests as APIGatewayProxyEvent. Parse event.body for POST data, event.pathParameters for path params, event.queryStringParameters for query params. Return { statusCode, headers, body }. Never use express-like routing inside Lambda — API Gateway handles routing, Lambda handles logic.'

AI wraps Express inside Lambda using serverless-express or similar adapters. This works, but it adds cold start time (Express initialization), increases bundle size (Express plus middleware), and defeats Lambda's model. If you need Express-style routing, use a framework designed for Lambda (SST, Serverless Framework) that maps routes to individual functions.

  • Event in → response out — no Express, no HTTP server inside Lambda
  • Typed events: APIGatewayProxyEvent, SQSEvent, S3Event, ScheduledEvent
  • Parse event.body, event.pathParameters, event.queryStringParameters
  • Return { statusCode, headers, body } — API Gateway handles HTTP
  • One function per route/trigger — never Express router inside Lambda
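A minimal sketch of the handler shape the rule describes. The event and result interfaces below are simplified local stand-ins for the real types from the `aws-lambda` package, so the sketch is self-contained; the `userId` path parameter is illustrative:

```typescript
// Simplified stand-ins for the aws-lambda package's types.
interface APIGatewayProxyEvent {
  body: string | null;
  pathParameters: Record<string, string> | null;
  queryStringParameters: Record<string, string> | null;
}

interface APIGatewayProxyResult {
  statusCode: number;
  headers?: Record<string, string>;
  body: string;
}

// Event in -> response out: no Express, no HTTP server.
// API Gateway does the routing; this function does the logic.
export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const payload = JSON.parse(event.body ?? "{}");
  const userId = event.pathParameters?.userId;

  if (!userId) {
    return {
      statusCode: 400,
      body: JSON.stringify({ error: "userId is required" }),
    };
  }

  // Request-specific logic only; heavy initialization belongs outside the handler.
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ userId, received: payload }),
  };
};
```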
⚠️ No Express Inside Lambda

AI wraps Express inside Lambda with serverless-express. This adds cold start time, bundle size, and defeats Lambda's model. API Gateway handles routing — Lambda handles logic. One function per route, not one Express app for all routes.

Rule 2: Cold Start Optimization

The rule: 'Initialize expensive resources OUTSIDE the handler — they persist across warm invocations. Database connections, SDK clients, configuration loading — all outside the handler. Inside the handler: only request-specific logic. const db = new DynamoDBClient({}); // outside — runs once per cold start. export const handler = async (event) => { const result = await db.send(new GetCommand(...)); // inside — reuses the client }.'

For bundle size: 'Smaller bundles = faster cold starts. Use esbuild or webpack for bundling and tree-shaking. Exclude AWS SDK v3 clients you don't use. Move heavy dependencies (puppeteer, sharp, ffmpeg) to Lambda Layers. Target: under 5MB zipped for sub-second cold starts. Use provisioned concurrency for latency-critical functions (eliminates cold starts entirely).'

For runtime selection: 'Node.js and Python have the fastest cold starts (~200-400ms). Java and .NET have the slowest (~1-3s without SnapStart). Use Lambda SnapStart for Java — it snapshots the initialized JVM, reducing cold starts to <200ms. Arm64 (Graviton2) is about 20% cheaper and often faster than x86.'

💡 Outside = Once, Inside = Every Time

Code outside the handler runs once per cold start. Code inside runs every invocation. DB connections, SDK clients, config — all outside. Request-specific logic — inside. This single pattern eliminates most Lambda performance issues.
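The outside-vs-inside split can be made observable with a stand-in client. `FakeDbClient` is hypothetical (standing in for something expensive like `DynamoDBClient`); the constructor counter shows that module-scope code runs once while handler code runs per invocation:

```typescript
// Counts constructions so the cold-start-once behavior is observable.
let initCount = 0;

// Stand-in for an expensive client (DB connection, SDK client, ...).
class FakeDbClient {
  constructor() {
    initCount++; // expensive work: TLS handshake, credential resolution, ...
  }
  async get(key: string): Promise<string> {
    return `value-for-${key}`;
  }
}

// OUTSIDE the handler: runs once per cold start, reused on warm invocations.
const db = new FakeDbClient();

// INSIDE the handler: request-specific logic only.
export const handler = async (event: { key: string }) => {
  const value = await db.get(event.key);
  return { statusCode: 200, body: JSON.stringify({ value, initCount }) };
};
```

However many times the handler runs in a warm container, `initCount` stays at 1.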

Rule 3: Connection Reuse and External Services

The rule: 'Create database connections, HTTP clients, and SDK clients OUTSIDE the handler for reuse across warm invocations. For RDS: use RDS Proxy to pool connections (Lambda can exhaust connection limits with direct connections). For DynamoDB: create DynamoDBClient once outside the handler. For HTTP: create an axios/fetch instance outside the handler with keep-alive enabled.'

For database connections: 'Direct RDS connections from Lambda exhaust the database connection limit — 1000 concurrent Lambda invocations = 1000 database connections. Use RDS Proxy: it pools connections and shares them across invocations. For DynamoDB: no connection limit concerns — it's an HTTP API. For Redis: use a connection pool with a max limit.'

AI creates new database connections inside every handler invocation — a fresh connection per request. With connection reuse outside the handler, warm invocations skip the connection overhead entirely. With RDS Proxy, even cold starts share pooled connections.

  • SDK clients outside handler: const client = new DynamoDBClient({}) — reused on warm
  • RDS Proxy for PostgreSQL/MySQL — never direct connections from Lambda
  • HTTP clients with keep-alive — reuse TCP connections across invocations
  • DynamoDB: no connection concerns — HTTP API, no pooling needed
  • New connection per invocation = added latency on every request + connection exhaustion
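A sketch of the keep-alive pattern using Node's built-in `https.Agent`. The host `api.example.com` and the `apiRequest` helper are placeholders; the point is that the agent lives at module scope and every invocation routes through it:

```typescript
import * as https from "node:https";

// OUTSIDE the handler: a shared agent with keep-alive enabled, so warm
// invocations reuse pooled TCP connections instead of opening one per request.
const keepAliveAgent = new https.Agent({ keepAlive: true, maxSockets: 50 });

// Hypothetical helper that sends every outbound call through the shared agent.
function apiRequest(path: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const req = https.request(
      { host: "api.example.com", path, agent: keepAliveAgent },
      (res) => {
        let data = "";
        res.on("data", (chunk: Buffer) => (data += chunk));
        res.on("end", () => resolve(data));
      }
    );
    req.on("error", reject);
    req.end();
  });
}

export const handler = async (event: { path: string }) => {
  // Warm invocations skip the TCP/TLS handshake via the agent's socket pool.
  const data = await apiRequest(event.path);
  return { statusCode: 200, body: data };
};
```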
ℹ️ RDS Proxy Prevents Exhaustion

1000 concurrent Lambda invocations = 1000 database connections without a proxy. RDS Proxy pools connections — Lambda shares them. Without it, you'll hit the database connection limit under load and functions start failing with connection errors.

Rule 4: Lambda Layers and Packaging

The rule: 'Use Lambda Layers for: shared dependencies across functions, heavy libraries (puppeteer, ffmpeg, sharp), and custom runtimes. Layers are deployed separately from function code — updating your function doesn't re-upload layers. Keep function code small (business logic only), put dependencies in layers. Maximum 5 layers per function, 250MB unzipped total.'

For packaging: 'Use esbuild for TypeScript/JavaScript: bundle, tree-shake, and minify into a single file. For Python: use poetry or pip with --target for packaging deps. For Go: compile to a single binary — no layers needed. Use .zip deployment for direct upload or SAM/CDK for infrastructure-as-code deployment.'

For the deployment pipeline: 'Use SAM (sam build && sam deploy) or CDK (cdk deploy) — never manually zip and upload through the console. Define Lambda functions as infrastructure code. Pin runtime versions. Use aliases (prod, staging) for traffic management. Use canary deployments (CodeDeploy) for safe rollouts.'
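A sketch of bundling options that might be passed to esbuild's `build()` API (as in `await esbuild.build(buildOptions)`). The entry point, target, and `external` list are illustrative; esbuild itself is not imported here so the sketch stays self-contained:

```typescript
// Options for esbuild's build() API: one minified, tree-shaken file,
// with heavy or layer-provided packages kept out of the bundle.
const buildOptions = {
  entryPoints: ["src/handler.ts"],
  outfile: "dist/handler.js",
  bundle: true,       // inline dependencies into a single file
  minify: true,
  treeShaking: true,  // drop unused exports
  platform: "node",
  target: "node20",
  format: "cjs",
  // AWS SDK v3 ships with the Node runtime; sharp belongs in a layer.
  external: ["@aws-sdk/*", "sharp"],
};

export default buildOptions;
```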

Rule 5: IAM and Observability

The rule: 'Every Lambda function gets its own IAM role with least-privilege permissions. Never use a shared role with broad permissions. Specify exact actions and resources: dynamodb:GetItem on arn:aws:dynamodb:*:*:table/users — not dynamodb:* on *. Use IAM policy conditions for additional restrictions: aws:SourceVpc, aws:PrincipalOrgID.'
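What least privilege looks like as a policy document, written here as a TypeScript constant for readability. The table name, region, and account ID are placeholders:

```typescript
// Least-privilege policy for a single function: exact actions on an
// exact table ARN, never dynamodb:* on *. All resource details illustrative.
const policy = {
  Version: "2012-10-17",
  Statement: [
    {
      Effect: "Allow",
      Action: ["dynamodb:GetItem", "dynamodb:PutItem"],
      Resource: "arn:aws:dynamodb:us-east-1:123456789012:table/users",
    },
  ],
};

export default policy;
```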

For logging: 'Lambda writes to CloudWatch Logs automatically. Use structured JSON logging: console.log(JSON.stringify({ level: "info", message: "User created", userId, requestId: context.awsRequestId })). Include the request ID (context.awsRequestId) in every log — it correlates log entries for a single invocation. Use Lambda Powertools for structured logging, tracing, and metrics.'

For monitoring: 'Monitor: invocation count, duration (P50, P95, P99), error rate, throttles, and concurrent executions. Set CloudWatch alarms: error rate > 1%, P99 duration > timeout * 0.8, throttle count > 0. Use X-Ray for distributed tracing across Lambda → API Gateway → DynamoDB. Use Lambda Insights for enhanced monitoring of memory and CPU usage.'

  • One IAM role per function — least privilege: exact actions, exact resources
  • Structured JSON logging — include context.awsRequestId in every log
  • Lambda Powertools for logging, tracing, metrics — one library, three capabilities
  • CloudWatch alarms: error rate, P99 duration, throttles
  • X-Ray for distributed tracing — Lambda Insights for resource monitoring
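A minimal structured-logging sketch in the spirit of what Lambda Powertools provides (use Powertools itself in real projects). The `makeLogger` helper and the event shape are hypothetical; `LambdaContext` is a subset of the real `Context` type:

```typescript
// Subset of the real Lambda Context type: only what the logger needs.
interface LambdaContext {
  awsRequestId: string;
}

// One JSON object per line, always carrying the invocation's requestId
// so CloudWatch log entries for a single invocation can be correlated.
function makeLogger(context: LambdaContext) {
  const log = (level: string, message: string, extra: Record<string, unknown> = {}) => {
    const entry = JSON.stringify({
      level,
      message,
      requestId: context.awsRequestId,
      timestamp: new Date().toISOString(),
      ...extra,
    });
    console.log(entry); // Lambda ships stdout lines to CloudWatch Logs
    return entry;
  };
  return {
    info: (msg: string, extra?: Record<string, unknown>) => log("info", msg, extra),
    error: (msg: string, extra?: Record<string, unknown>) => log("error", msg, extra),
  };
}

export const handler = async (event: { userId: string }, context: LambdaContext) => {
  const logger = makeLogger(context);
  logger.info("User created", { userId: event.userId });
  return { statusCode: 201 };
};
```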

Complete AWS Lambda Rules Template

Consolidated rules for AWS Lambda functions.

  • Event-driven handlers — no Express/Fastify inside Lambda — typed events per trigger
  • Initialize outside handler: DB connections, SDK clients, config — reused on warm starts
  • RDS Proxy for SQL databases — never direct connections — DynamoDB needs no pooling
  • Bundle < 5MB: esbuild/webpack — Lambda Layers for heavy deps — Arm64 for cost/speed
  • One IAM role per function: exact actions, exact resources — never shared broad roles
  • Structured JSON logging with requestId — Lambda Powertools for logging/tracing/metrics
  • SAM or CDK for deployment — aliases for traffic — canary for safe rollouts
  • CloudWatch alarms: error rate, P99 duration, throttles — X-Ray for tracing