Comparisons

Edge vs Server: AI Rules for Each Runtime

Edge runtime runs at CDN edge locations with Web APIs. Server runtime runs on Node.js with full access. Each has different API constraints, database access patterns, and deployment characteristics that AI rules must specify.

6 min read·June 24, 2025

pg TCP driver at the edge = runtime crash. The build succeeds. The function deploys. The first request fails.

Web APIs vs Node.js, HTTP vs TCP database, middleware vs routes, and when edge latency actually matters

Two Runtimes, Different Capabilities

Edge runtime: your code runs in V8 isolates at 300+ CDN edge locations worldwide. Sub-millisecond cold start (V8 isolate, not a container). Web Standard APIs only: fetch, Request, Response, URL, TextEncoder, crypto.subtle. No Node.js APIs: no fs, no path, no Buffer, no child_process, no net (TCP). The trade-off: globally distributed with strict API limitations. Used by: Cloudflare Workers, Vercel Edge Functions, Deno Deploy, and Next.js middleware.

Server runtime: your code runs on Node.js in one or a few regions. Full Node.js API access: fs, path, Buffer, child_process, net, crypto, and every npm package. TCP connections to databases (Postgres, Redis, MongoDB) via standard drivers. The trade-off: region-bound with unlimited API access. Used by: Vercel serverless functions (default), AWS Lambda, traditional servers, and Next.js API routes (default).

Without runtime rules, the AI generates Node.js code for edge functions (import fs from 'fs' at the edge = runtime error), uses TCP database drivers at the edge (the @neondatabase/serverless HTTP driver is needed, not the pg TCP driver), or deploys heavy computation to the edge (edge functions have execution time and memory limits). The runtime determines which APIs are available, which database drivers work, and which npm packages can be imported.

API Constraints: Web Standards vs Node.js

Edge API constraints: Web Standard APIs only. Available: fetch (HTTP requests), Request/Response (Web API), URL (parsing), TextEncoder/TextDecoder (string encoding), crypto.subtle (Web Crypto), ReadableStream/WritableStream (streaming), Headers, FormData, and AbortController. Not available: fs (no filesystem), path (no file paths), Buffer (use Uint8Array), child_process (no shell), net (no TCP sockets), and process (limited). AI rule: 'Edge: Web APIs only. No fs, path, Buffer, child_process, net. Use Uint8Array not Buffer. Use fetch not http module.'

Server API constraints: none. Full Node.js access: fs (file system), path (file paths), Buffer (binary data), child_process (shell commands), net (TCP sockets), crypto (Node crypto + Web Crypto), process (env vars, exit, memory), and every npm package that uses Node.js APIs. AI rule: 'Server: full Node.js APIs. Use any npm package. TCP database connections (pg, mysql2, ioredis). File system access for uploads, temp files, and logs.'

The constraint rule prevents the AI from importing Buffer in an edge function (use Uint8Array), using pg (TCP driver) at the edge (use the @neondatabase/serverless HTTP driver), reading files with fs.readFileSync at the edge (no filesystem), or spawning processes with child_process at the edge (no shell). Each of these produces a runtime error that is not caught at build time. The rule prevents the most common edge runtime failures.

  • Edge: fetch, Request/Response, URL, crypto.subtle, TextEncoder, streams — Web Standards only
  • Server: fs, path, Buffer, child_process, net, process — full Node.js, any npm package
  • Edge forbidden: fs, Buffer, child_process, net, TCP sockets — runtime errors if used
  • Replace: Buffer → Uint8Array. pg TCP → @neondatabase/serverless HTTP. fs → fetch from KV/R2
  • Build time: no error. Runtime: crash. The constraint is invisible until the code runs at the edge
⚠️ Invisible Until Runtime

import fs from 'fs' at the edge: builds successfully, deploys successfully, crashes on the first request. The constraint is invisible at build time. Runtime errors from wrong API usage are the most common edge deployment failure. One rule about which APIs are available prevents this entire class of crashes — silent at build time, fatal on the first request.

Database Access: HTTP Drivers vs TCP Connections

Edge database access: HTTP-based drivers. @neondatabase/serverless: sends SQL over HTTP (each query is an HTTP request, no persistent TCP connection). PlanetScale serverless driver: same HTTP approach. Cloudflare D1: edge-native SQLite. Turso/libSQL: HTTP-based SQLite. These drivers work at the edge because HTTP, via fetch, is a Web Standard API. The trade-off: slightly higher per-query latency (HTTP overhead) but no connection pool management and no TCP socket requirement. AI rule: 'Edge: HTTP database drivers only. Neon: @neondatabase/serverless. PlanetScale: @planetscale/database. No pg, no mysql2, no ioredis (TCP-based).'

Server database access: TCP-based drivers (the standard approach). pg: Postgres via TCP connection pool. mysql2: MySQL via TCP. ioredis: Redis via TCP. mongoose: MongoDB via TCP. These drivers maintain persistent connections (connection pools with 5-20 connections), offer the lowest latency (TCP beats HTTP for sequential queries), and support transactions, prepared statements, and streaming results. AI rule: 'Server: TCP database drivers. pg for Postgres. mysql2 for MySQL. ioredis for Redis. Connection pool: min 2, max 10. Close connections on shutdown.'

The database rule addresses the most common edge runtime error source. The AI generating const pool = new Pool() (pg TCP driver) in an edge function fails at runtime because TCP sockets are not available. The fix: import { neon } from '@neondatabase/serverless' (HTTP driver). Same SQL queries, different transport. The AI must know which runtime the code runs in to select the correct database driver. One rule prevents every database connection failure at the edge.

  • Edge: HTTP drivers (@neondatabase/serverless, @planetscale/database). No TCP connections
  • Server: TCP drivers (pg, mysql2, ioredis, mongoose). Connection pools, persistent connections
  • Edge trade-off: HTTP overhead per query. Server trade-off: connection pool management
  • Same SQL: both execute the same queries. Different transport: HTTP vs TCP
  • Most common edge error: pg TCP driver at the edge = runtime crash. HTTP driver fixes it
💡 Same SQL, Different Transport

Edge: const sql = neon(DATABASE_URL); await sql`SELECT * FROM users`. Server: const pool = new Pool(); await pool.query('SELECT * FROM users'). Same SQL query. Different driver (HTTP vs TCP). Same results. The AI must know the runtime to select the correct driver. One import change: HTTP for edge, TCP for server.

Runtime Selection: Middleware vs API Routes

Next.js middleware (always edge): middleware.ts runs at the edge on every matching request. Use for: auth checks (verify JWT, redirect unauthenticated), redirects (old URL to new URL), geo-routing (serve different content by country), A/B testing (assign variant by cookie), and request rewriting (rewrite URL based on headers). Middleware is lightweight, fast (sub-millisecond), and runs before the page renders. AI rule: 'middleware.ts: always edge runtime. No Node.js APIs. No TCP database. Use: JWT verification (jose library), cookie manipulation, URL rewriting, and header inspection.'

Next.js API routes (configurable): route handlers in app/api/ default to Node.js runtime. Add export const runtime = 'edge' to opt into edge runtime. Server runtime: for routes that need database access (TCP), file operations, or heavy computation. Edge runtime: for routes that need global low latency and use only Web APIs. AI rule: 'API routes: default Node.js. Add runtime = "edge" only for routes that: need global low latency, use HTTP-only database drivers, and do not need Node.js APIs. Most routes: keep Node.js default.'

The runtime selection rule: middleware = always edge (no choice, it is the Next.js design). API routes = default server, opt into edge when appropriate. The AI should never add runtime = 'edge' to a route that uses pg, fs, or Buffer (it will crash), and middleware needs no runtime export (it is edge by design). The selection is per-route: different routes in the same project can use different runtimes.

  • Middleware: always edge. Auth, redirects, geo-routing, A/B testing. No Node.js APIs
  • API routes: default Node.js (full access). Opt into edge with runtime = 'edge' per route
  • Edge routes: global latency, HTTP database only, Web APIs. Server routes: TCP database, full Node.js
  • Never edge: routes using pg, fs, Buffer, child_process. Always edge: middleware.ts
  • Per-route decision: different routes in the same project can use different runtimes
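The middleware constraint — Web APIs only, no Node.js — can be sketched with standard Request/Response objects. This shows the shape of an edge auth gate, not the actual Next.js middleware signature (real middleware imports NextRequest/NextResponse from next/server; the paths here are illustrative):

```typescript
// Edge-style auth gate using only Web Standard globals (Request, Response, URL).
// No fs, no Buffer, no TCP — safe in any edge runtime, also runs on Node 18+.
function authGate(req: Request): Response | null {
  const url = new URL(req.url);
  if (url.pathname.startsWith("/dashboard")) {
    const cookie = req.headers.get("cookie") ?? "";
    if (!cookie.includes("session=")) {
      // Unauthenticated: redirect to the login page before the route renders
      return Response.redirect(new URL("/login", url), 307);
    }
  }
  return null; // no opinion — let the request continue to the route
}
```

In a real project the same logic lives in middleware.ts and returns NextResponse.redirect(...); an API route, by contrast, opts into the edge runtime explicitly with export const runtime = 'edge'.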

When to Choose Each Runtime

Choose edge runtime when: global low latency matters (the function runs at the edge nearest the user — 10ms vs 200ms), the function is lightweight (JWT verification, cookie manipulation, URL rewriting — not heavy computation), the database supports HTTP access (@neondatabase/serverless, PlanetScale serverless), or the function is middleware, a redirect, or a simple API response. Edge is the latency-optimized choice for lightweight global operations.

Choose server runtime when: the function needs Node.js APIs (file system, TCP connections, child_process), the function does heavy computation (image processing, PDF generation, data transformation), the function uses TCP database drivers (pg connection pool, Redis TCP), or the function has complex dependencies (npm packages that use Node.js internals). Server is the capability-maximized choice for full-featured backend operations.

The default recommendation: server runtime for most API routes (full Node.js, no constraints). Edge runtime for middleware (latency-sensitive, lightweight, runs on every request). Edge runtime for select API routes where global latency matters and the function uses only Web APIs. Do not optimize for edge prematurely: most API routes do not benefit from edge (the database is still in one region — moving the function to the edge does not move the database).

  • Edge: global latency, lightweight functions, middleware, HTTP-only database access
  • Server: Node.js APIs, TCP database, heavy computation, complex npm packages
  • Default: server for API routes, edge for middleware. Opt into edge only when latency matters
  • Do not over-optimize: edge function + single-region database = the database is the bottleneck, not the function
  • Per-route: different routes use different runtimes based on their specific requirements
ℹ️ Edge Function + Single-Region DB = Marginal Gain

Moving the function to the edge (300+ locations) while the database stays in us-east-1: the function runs 10ms closer to the user, but the database query still crosses the globe. The latency bottleneck is the database, not the function. Edge matters when the database is also distributed (Neon replicas, PlanetScale global).
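The callout's point can be made with arithmetic. The latency figures below are illustrative assumptions, not measurements: an edge function still pays the round trip back to a single-region database on every query, so sequential queries make the edge placement worse, not better.

```typescript
// Illustrative latency budget in milliseconds (assumed values, not benchmarks).
const USER_TO_EDGE = 10;    // user to nearest edge location
const USER_TO_REGION = 120; // user across the globe to us-east-1
const EDGE_TO_DB = 110;     // edge location back to the us-east-1 database
const LOCAL_TO_DB = 1;      // server function co-located with the database

// Total request latency for n sequential database queries:
const edgeTotal = (n: number) => USER_TO_EDGE + n * EDGE_TO_DB;
const serverTotal = (n: number) => USER_TO_REGION + n * LOCAL_TO_DB;

// One query: roughly a wash. Three queries: edge loses badly.
// edgeTotal(1) = 120, serverTotal(1) = 121
// edgeTotal(3) = 340, serverTotal(3) = 123
```

This is why the rule says edge pays off only when the data is also distributed — the model changes only when EDGE_TO_DB shrinks toward LOCAL_TO_DB.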

Runtime Rule Summary

Summary of edge vs server runtime AI rules.

  • Edge: V8 isolates, Web APIs only, 300+ locations, sub-ms cold start, no TCP/fs/Buffer
  • Server: Node.js, full APIs, 1-few regions, container/serverless cold start, unlimited access
  • Database: edge = HTTP drivers (@neondatabase/serverless). Server = TCP drivers (pg, ioredis)
  • Middleware: always edge. API routes: default server, opt into edge per route
  • Most common error: pg TCP driver at edge = runtime crash. Use HTTP driver instead
  • Edge for: middleware, auth checks, redirects, lightweight global functions
  • Server for: database operations, file handling, heavy computation, complex dependencies
  • Do not over-optimize: single-region database = edge function latency gain is marginal