AI Polls GraphQL Queries in a Loop
AI generates real-time GraphQL data with: polling (useQuery with pollInterval: 1000, a full query execution every second), wasted bandwidth (re-fetching the entire query result when nothing has changed), server load proportional to connected clients (1000 clients polling = 1000 query executions per second), no instant delivery (changes visible after a delay of up to 1 second), and no selective updates (the entire query result is replaced, even if one field changed). Polling GraphQL is REST polling with extra steps.
GraphQL subscriptions solve this with: WebSocket transport (persistent connection, server pushes data when it changes), event-driven delivery (data sent only when a mutation triggers a publish), selective payloads (only the changed data is sent, not the entire query result), zero-interval updates (data arrives within milliseconds of the mutation), and efficient scaling (server load proportional to events, not connected clients). AI generates none of these.
These rules cover: WebSocket subscription transport, resolver implementation with async iterators, Redis pub/sub as the backend, subscriber-level event filtering, connection lifecycle management, and horizontal scaling strategies.
Rule 1: WebSocket Transport for Subscriptions
The rule: 'Use the graphql-ws protocol for GraphQL subscriptions over WebSocket. Server: import { useServer } from "graphql-ws/lib/use/ws"; const wsServer = new WebSocketServer({ server: httpServer, path: "/graphql" }); useServer({ schema }, wsServer). Client (Apollo Client): import { GraphQLWsLink } from "@apollo/client/link/subscriptions"; import { createClient } from "graphql-ws"; const wsLink = new GraphQLWsLink(createClient({ url: "ws://localhost:4000/graphql" })). The graphql-ws protocol replaces the deprecated subscriptions-transport-ws.'
For Apollo Client split link: 'Use split to route queries/mutations over HTTP and subscriptions over WebSocket: const link = split(({ query }) => { const def = getMainDefinition(query); return def.kind === "OperationDefinition" && def.operation === "subscription"; }, wsLink, httpLink). Queries and mutations: HTTP (stateless, cacheable, load-balanced). Subscriptions: WebSocket (persistent, server-push, event-driven). The split function routes each operation to the correct transport automatically.'
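The routing decision inside split is a pure predicate over the operation's main definition. A minimal sketch of that predicate, using a simplified stand-in for the AST node that getMainDefinition returns (the real graphql-js node has many more fields):

```typescript
// Simplified shape of what getMainDefinition(query) returns.
// Assumption: trimmed down for illustration; not the full graphql-js AST type.
interface MainDefinition {
  kind: string;
  operation?: "query" | "mutation" | "subscription";
}

// The split-link predicate: true routes over WebSocket, false over HTTP.
function routeToWebSocket(def: MainDefinition): boolean {
  return def.kind === "OperationDefinition" && def.operation === "subscription";
}
```

Queries and mutations fail the predicate and take the HTTP link; only subscriptions open the WebSocket path.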
AI generates: useQuery with pollInterval: 1000 for real-time data. 1000 connected clients: 1000 HTTP requests per second, each executing a full GraphQL query, each returning the same data if nothing changed. WebSocket subscriptions: 1000 persistent connections, zero requests when nothing changes, instant push when data changes. 1000 requests/second reduced to events/second (typically 1-10). Same real-time experience, 100-1000x less server load.
- graphql-ws protocol: replaces deprecated subscriptions-transport-ws
- Split link: HTTP for queries/mutations, WebSocket for subscriptions
- Persistent connection: zero polling, server pushes on data change
- 1000 clients polling = 1000 req/s. 1000 subscriptions = events/s (1-10 typically)
- Millisecond delivery vs up to 1-second polling delay
1000 clients polling every second = 1000 query executions/second, most returning unchanged data. WebSocket subscriptions: 1000 persistent connections, zero requests when nothing changes, instant push on mutation. Same real-time experience, 100-1000x less server load.
Rule 2: Subscription Resolvers with Async Iterators
The rule: 'Subscription resolvers return an async iterator that yields events. Schema: type Subscription { orderUpdated(orderId: ID!): Order! }. Resolver: orderUpdated: { subscribe: (_, { orderId }) => pubsub.asyncIterator(["ORDER_UPDATED_" + orderId]), resolve: (payload) => payload.order }. When a mutation updates an order: pubsub.publish("ORDER_UPDATED_" + orderId, { order: updatedOrder }). All clients subscribed to that order ID receive the update instantly.'
For the resolver structure: 'A subscription resolver has two parts: subscribe (returns the async iterator that determines which events this client receives) and resolve (transforms the raw event payload into the GraphQL response shape). The subscribe function is called once when the client subscribes. The resolve function is called for each event. This separation means: the pub/sub channel can carry raw data, and the resolver transforms it into the correct GraphQL type before sending to the client.'
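The subscribe/resolve split can be illustrated with a toy pub/sub built on async iterators. TinyPubSub and its channel bookkeeping are illustrative stand-ins, not the graphql-subscriptions API:

```typescript
// TinyPubSub is an illustrative stand-in for graphql-subscriptions' PubSub,
// built on plain async iterators. Not the real library.
type Handler = (payload: unknown) => void;

class TinyPubSub {
  private channels = new Map<string, Set<Handler>>();

  publish(channel: string, payload: unknown): void {
    for (const handler of this.channels.get(channel) ?? []) handler(payload);
  }

  // Returns an async iterator yielding every payload published to the channel.
  asyncIterator(channel: string): AsyncIterableIterator<unknown> {
    const queue: unknown[] = [];
    let wake: (() => void) | null = null;
    const handler: Handler = (payload) => {
      queue.push(payload);
      wake?.();
      wake = null;
    };
    const handlers = this.channels.get(channel) ?? new Set<Handler>();
    handlers.add(handler);
    this.channels.set(channel, handlers);
    return {
      [Symbol.asyncIterator]() {
        return this;
      },
      async next() {
        while (queue.length === 0) await new Promise<void>((r) => (wake = r));
        return { value: queue.shift(), done: false };
      },
      async return() {
        handlers.delete(handler); // unsubscribe when the client disconnects
        return { value: undefined, done: true };
      },
    } as AsyncIterableIterator<unknown>;
  }
}

const pubsub = new TinyPubSub();

// The two-part resolver shape from the rule: subscribe picks the channel,
// resolve shapes the raw payload into the GraphQL type.
const orderUpdated = {
  subscribe: (_root: unknown, args: { orderId: string }) =>
    pubsub.asyncIterator("ORDER_UPDATED_" + args.orderId),
  resolve: (payload: { order: unknown }) => payload.order,
};
```

A mutation-side pubsub.publish("ORDER_UPDATED_123", { order }) then flows through the iterator returned by subscribe, and resolve strips the envelope before the client sees it.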
AI generates: a subscription that returns all order updates to all clients (no filtering by order ID). Client watching order-123 receives updates for orders 456, 789, and every other order, filtering them client-side. With per-resource channels (ORDER_UPDATED_{orderId}): each client subscribes to exactly the events they need. Zero irrelevant events delivered. The pub/sub channel is the server-side filter.
Rule 3: Redis Pub/Sub as Subscription Backend
The rule: 'For multi-server deployments, use Redis pub/sub as the subscription backend instead of in-memory pub/sub. In-memory: events published on server A are not received by clients connected to server B (because pub/sub state is per-process). Redis pub/sub: events published on any server are received by subscribers on all servers. Library: graphql-redis-subscriptions (RedisPubSub). Configuration: const pubsub = new RedisPubSub({ connection: redisUrl }). The pub/sub backend is swappable: the resolver code is identical.'
For why in-memory fails at scale: 'With in-memory pub/sub and 3 server instances behind a load balancer: a mutation hits server A and publishes ORDER_UPDATED. Clients connected to server A receive the event. Clients connected to servers B and C do not: the event was published to server A memory only. With Redis: server A publishes to Redis, Redis broadcasts to all subscribers including servers B and C, all clients receive the event. Redis is the shared event bus across all server instances.'
AI generates: const pubsub = new PubSub(), which is in-memory and works perfectly in development (one server). Deploy to production with 3 instances: 67% of clients do not receive events (they are connected to different servers than the publisher). The bug is intermittent and load-balancer-dependent. Redis pub/sub: one line change (new RedisPubSub instead of new PubSub), works correctly across any number of server instances.
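The failure mode can be reproduced without any network at all. A minimal sketch, using a simplified synchronous stand-in for PubSub (not the real graphql-subscriptions class), with two "instances" in one process:

```typescript
// InMemoryPubSub is a simplified synchronous stand-in for PubSub, used here
// only to demonstrate the multi-instance failure mode in a single process.
class InMemoryPubSub {
  private subs = new Map<string, Array<(payload: unknown) => void>>();

  subscribe(channel: string, fn: (payload: unknown) => void): void {
    this.subs.set(channel, [...(this.subs.get(channel) ?? []), fn]);
  }

  publish(channel: string, payload: unknown): void {
    for (const fn of this.subs.get(channel) ?? []) fn(payload);
  }
}

// Two "server instances", each with its own process-local pub/sub state.
const serverA = new InMemoryPubSub();
const serverB = new InMemoryPubSub();

const received: string[] = [];
serverA.subscribe("ORDER_UPDATED", () => received.push("client-on-A"));
serverB.subscribe("ORDER_UPDATED", () => received.push("client-on-B"));

// A mutation lands on server A and publishes there. Only A's clients hear it;
// the client connected to server B silently misses the event.
serverA.publish("ORDER_UPDATED", { orderId: "123" });
// received: ["client-on-A"]
```

RedisPubSub fixes this by routing every publish through a shared Redis channel, so the broker, not the process, owns the subscriber list.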
- Redis pub/sub for multi-server: events broadcast across all instances
- In-memory pub/sub: works for a single server only; fails silently in production
- graphql-redis-subscriptions: drop-in replacement, same resolver code
- One line change: new RedisPubSub({ connection }) instead of new PubSub()
- Test with multiple server instances in staging to catch the in-memory bug before production
In-memory PubSub with 3 server instances: mutation on server A publishes to A memory only. Clients on B and C never receive the event. Intermittent, load-balancer-dependent bug. Redis pub/sub: one line change, events broadcast to all instances. 100% of clients receive every event.
Rule 4: Subscriber-Level Event Filtering
The rule: 'Filter events at the server level, not the client level. Use withFilter from graphql-subscriptions: subscribe: withFilter(() => pubsub.asyncIterator("ORDER_UPDATED"), (payload, variables) => payload.order.id === variables.orderId). The filter function runs on the server for each event. If it returns false: the event is not sent to that client. This means: publish one event to the ORDER_UPDATED channel, withFilter delivers it only to clients watching that specific order. Fewer channels needed, server-side filtering, zero irrelevant events to clients.'
For filter complexity: 'Simple filters: match by ID (orderId === variables.orderId). Complex filters: match by multiple criteria (order.status === variables.status AND order.region === variables.region). Dynamic filters: user permissions (only send if the subscriber has access to this order). Performance consideration: the filter function runs for every event for every subscriber. Keep filters fast (O(1) lookups, no database queries). For complex authorization: pre-compute permissions at subscription time, not per event.'
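The mechanics of withFilter can be sketched with plain async generators. This is a simplified stand-in, not the graphql-subscriptions implementation (the real helper also threads resolver arguments and handles teardown):

```typescript
// Simplified sketch of the withFilter idea: wrap a source iterator and yield
// only the events the filter accepts. Non-matching events never leave the server.
async function* withFilter<T, V>(
  source: AsyncIterable<T>,
  filter: (payload: T, variables: V) => boolean,
  variables: V,
): AsyncIterableIterator<T> {
  for await (const payload of source) {
    if (filter(payload, variables)) yield payload;
  }
}

// Example source: three order events published on one shared channel.
type OrderEvent = { order: { id: string; status: string } };
async function* orderEvents(): AsyncIterableIterator<OrderEvent> {
  yield { order: { id: "123", status: "shipped" } };
  yield { order: { id: "456", status: "packed" } };
  yield { order: { id: "123", status: "delivered" } };
}

// A subscriber watching order 123 receives only that order's events.
async function collect(): Promise<string[]> {
  const seen: string[] = [];
  const filtered = withFilter(
    orderEvents(),
    (payload, variables) => payload.order.id === variables.orderId,
    { orderId: "123" },
  );
  for await (const event of filtered) seen.push(event.order.status);
  return seen;
}
// collect() resolves to ["shipped", "delivered"]
```

The filter is an O(1) field comparison, matching the performance guidance above: it runs once per event per subscriber, so it must stay cheap.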
AI generates: subscribe to all order events, filter on the client. 10,000 order updates per minute, client cares about 1 order: 9,999 events sent over the WebSocket and discarded by the client. Wasted bandwidth and processing. Server-side withFilter: 1 event sent. The server evaluates the filter and sends only matching events. The client receives exactly what it needs.
Subscribe to all orders, filter client-side: 10,000 events/minute over WebSocket, client discards 9,999. Server-side withFilter: 1 event delivered. The server evaluates the filter and sends only matching events. Zero wasted bandwidth.
Rule 5: Connection Lifecycle and Scaling
The rule: 'Manage WebSocket connection lifecycle: onConnect (authenticate the user, reject unauthorized connections), onSubscribe (validate the subscription, check permissions for the requested resource), onComplete (clean up per-subscription state), onDisconnect (clean up per-connection state, remove from active subscriber tracking). Authentication: validate the JWT or session token in onConnect and reject before any subscription is processed. Do not authenticate per-subscription; that is too late (the connection is already open).'
For horizontal scaling: 'WebSocket connections are stateful: the client is connected to a specific server instance. Scaling strategy: sticky sessions (load balancer routes the client to the same server for the WebSocket duration) + Redis pub/sub (events flow across all servers). Health checks: monitor active connection count per server. Rebalancing: when a new server is added, new connections go to it (existing connections stay on their current server until they disconnect and reconnect). Connection limits: set a max per server (10,000 connections typical for a Node.js server) and scale horizontally.'
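The lifecycle hooks map naturally onto a small connection registry. Everything below is a hypothetical sketch: the class, the token check, and the cap are illustrative, not part of the graphql-ws API, but each method shows what the corresponding hook should do:

```typescript
// Hypothetical sketch of lifecycle bookkeeping for a subscription server.
interface Conn {
  userToken: string;
  subscriptions: Set<string>;
}

class ConnectionRegistry {
  private conns = new Map<string, Conn>();
  constructor(private maxConnections = 10_000) {}

  // onConnect: authenticate and enforce the per-server cap before anything else.
  connect(connId: string, token: string | null): boolean {
    if (!token || this.conns.size >= this.maxConnections) return false; // reject
    // Assumption: real code would verify a JWT or session here, not just presence.
    this.conns.set(connId, { userToken: token, subscriptions: new Set() });
    return true;
  }

  // onSubscribe: track per-subscription state (permission checks belong here).
  subscribe(connId: string, subscriptionId: string): boolean {
    const conn = this.conns.get(connId);
    if (!conn) return false;
    conn.subscriptions.add(subscriptionId);
    return true;
  }

  // onComplete: clean up per-subscription state.
  complete(connId: string, subscriptionId: string): void {
    this.conns.get(connId)?.subscriptions.delete(subscriptionId);
  }

  // onDisconnect: drop all per-connection state so nothing leaks.
  disconnect(connId: string): void {
    this.conns.delete(connId);
  }

  // For health checks and rebalancing decisions.
  get activeConnections(): number {
    return this.conns.size;
  }
}
```

Wiring each method into the matching graphql-ws hook gives the properties the rule asks for: unauthorized connections rejected before any subscription runs, and state released on disconnect.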
AI generates: no connection authentication (any WebSocket client can subscribe to any data), no lifecycle hooks (connections leak, state accumulates), and no scaling strategy (one server handles all WebSocket connections until it crashes). With lifecycle management: unauthorized connections rejected immediately, state cleaned up on disconnect, and horizontal scaling via sticky sessions + Redis pub/sub. Secure, clean, and scalable.
Complete GraphQL Subscriptions Rules Template
Consolidated rules for GraphQL subscriptions.
- graphql-ws protocol over WebSocket: replaces subscriptions-transport-ws
- Split link: HTTP for queries/mutations, WebSocket for subscriptions
- Async iterator resolvers: subscribe returns iterator, resolve transforms payload
- Redis pub/sub backend: events broadcast across all server instances
- withFilter for server-side filtering: deliver only matching events per subscriber
- Connection lifecycle: onConnect (auth), onSubscribe (permissions), onDisconnect (cleanup)
- Horizontal scaling: sticky sessions + Redis pub/sub + connection limits per server
- Publish per-resource channels: ORDER_UPDATED_{orderId} for targeted delivery