AI Does Not Measure What It Ships
AI generates applications with: no performance measurement (metrics are unknown until users complain), no LCP optimization (hero image loads after JavaScript, 5+ seconds on slow connections), no layout stability (content shifts as images and fonts load — CLS score of 0.5+), no interaction responsiveness (click handlers blocked by long tasks — INP 500ms+), and no performance budgets (each PR adds 10KB unchecked, bundle grows linearly). Google uses Core Web Vitals as a ranking signal — poor vitals hurt SEO directly.
Modern web performance is: measured (web-vitals library in production, Lighthouse in CI), LCP-optimized (largest contentful paint under 2.5 seconds), layout-stable (cumulative layout shift under 0.1), interaction-responsive (interaction to next paint under 200ms), and budget-enforced (CI fails if metrics regress). AI generates none of these.
These rules cover: the three Core Web Vitals (LCP, INP, CLS), measurement and monitoring, optimization techniques for each metric, and CI performance budgets.
Rule 1: LCP Under 2.5 Seconds
The rule: 'Largest Contentful Paint (LCP) measures when the largest visible element (typically hero image or heading) finishes rendering. Target: under 2.5 seconds. Optimizations: (1) preload the LCP image (<link rel="preload" as="image">), (2) use fetchpriority="high" on the LCP element, (3) eliminate render-blocking resources (defer non-critical CSS/JS), (4) use a CDN for fast asset delivery, (5) optimize server response time (TTFB under 800ms).'
For identifying the LCP element: 'Use Chrome DevTools > Performance panel > Timings > LCP. Or web-vitals library: onLCP((metric) => { console.log(metric.entries[0].element); }). The LCP element varies by page: hero image on landing pages, first heading on text-heavy pages, featured product image on e-commerce. Once identified, prioritize that specific element.'
AI generates: a hero image loaded after React hydration — the browser downloads JavaScript, parses it, executes it, then requests the image. LCP is JavaScript parse time + image download time = 5+ seconds. Server-rendered hero image with preload: the browser requests the image during HTML parsing, before JavaScript. LCP drops from 5s to 1.5s.
- Target: LCP under 2.5 seconds — Google 'good' threshold
- Preload LCP image: <link rel='preload' as='image' href='hero.webp'>
- fetchpriority='high' on LCP element — browser prioritizes this resource
- Server-render above-fold content — no waiting for JavaScript hydration
- TTFB under 800ms — server response time is the floor for LCP
Same image, different loading strategy: preloaded in the HTML, the browser fetches it during parsing instead of after hydration, and LCP drops from 5s to 1.5s.
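The preload and priority hints above can be sketched in markup; a minimal example assuming a hero image (the filename and dimensions are placeholders):

```html
<head>
  <!-- Request the LCP image during HTML parsing, before any JavaScript runs. -->
  <link rel="preload" as="image" href="/hero.webp" fetchpriority="high">
</head>
<body>
  <!-- Server-rendered hero: explicit dimensions also prevent layout shift,
       and fetchpriority bumps this request ahead of other images. -->
  <img src="/hero.webp" width="1200" height="600" alt="Hero" fetchpriority="high">
</body>
```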
Rule 2: INP Under 200ms
The rule: 'Interaction to Next Paint (INP) measures the delay between user interaction (click, tap, keypress) and the next visual update. Target: under 200ms. Optimizations: (1) break long tasks into smaller chunks (yield to the main thread with scheduler.yield() or setTimeout), (2) move heavy computation to Web Workers, (3) reduce JavaScript bundle size (less code to parse and execute), (4) debounce rapid interactions, (5) avoid forced synchronous layouts.'
For long task identification: 'Use Chrome DevTools > Performance > Long Tasks (red rectangles). Or the Long Tasks API: new PerformanceObserver((list) => { for (const entry of list.getEntries()) { if (entry.duration > 50) console.warn("Long task:", entry); } }).observe({ type: "longtask" }). Tasks over 50ms block the main thread — user interactions during this time are delayed. Break long tasks into 50ms chunks.'
AI generates: click handlers that perform heavy computation synchronously: onClick={() => { processLargeDataset(); updateChart(); recalculateLayout(); }}. Three expensive operations blocking the main thread = 300ms INP. Break into chunks: process data, yield, update chart, yield, recalculate layout. With each chunk under 50ms, the browser can paint between chunks, so the user sees a response after the first chunk instead of after all 300ms of work.
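One way to restructure a handler like that is a generic chunking helper. A minimal sketch (`processInChunks` and `yieldToMain` are illustrative names, not library APIs; `scheduler.yield()` is used where available, with a `setTimeout` fallback):

```javascript
// Yield to the main thread so pending input events and paints can run.
// Falls back to setTimeout(0) where scheduler.yield() is unavailable.
function yieldToMain() {
  if (globalThis.scheduler && typeof globalThis.scheduler.yield === 'function') {
    return globalThis.scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process items in batches, yielding between batches so no single task
// monopolizes the main thread for longer than one batch's worth of work.
async function processInChunks(items, processItem, batchSize = 100) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    for (const item of items.slice(i, i + batchSize)) {
      results.push(processItem(item));
    }
    if (i + batchSize < items.length) await yieldToMain();
  }
  return results;
}
```

Tune `batchSize` so each batch stays under roughly 50ms on your slowest target device.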
Rule 3: CLS Under 0.1
The rule: 'Cumulative Layout Shift (CLS) measures visual stability — how much the page content moves unexpectedly during loading. Target: under 0.1. Causes: (1) images without dimensions (content shifts when image loads), (2) dynamically injected content (ads, banners, cookie notices push content down), (3) web fonts causing FOUT/FOIT (text reflows when font loads), (4) dynamic content above existing content (notifications, alerts inserted above the fold).'
For fixes: 'Images: always include width and height or CSS aspect-ratio. Fonts: font-display: swap with size-adjust to minimize reflow. Dynamic content: reserve space with min-height or placeholder. Ads: reserve the ad slot size with CSS before the ad loads. Cookie banners: overlay (position: fixed) instead of pushing content down. Each fix reserves space before the content loads — eliminating the shift.'
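The fixes above, sketched in markup and CSS (class names, sizes, and the font file are illustrative placeholders):

```html
<!-- Images: explicit dimensions let the browser reserve space before the file loads. -->
<img src="photo.jpg" width="800" height="600" alt="Product photo">

<style>
  /* Fonts: show fallback text immediately, swap when the web font arrives. */
  @font-face {
    font-family: "Body";
    src: url("/fonts/body.woff2") format("woff2");
    font-display: swap;
  }
  /* Ads and dynamic content: reserve the slot's height before it loads. */
  .ad-slot { min-height: 250px; }
  /* Cookie banner: overlay instead of pushing page content down. */
  .cookie-banner { position: fixed; bottom: 0; left: 0; right: 0; }
</style>
```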
AI generates: <img src='photo.jpg' /> (no dimensions — layout shifts on load), @font-face with no font-display (invisible text until the font loads, then reflow), and dynamically injected banners that push the page down. Three CLS violations on one page. A CLS of 0.5+ means the user tries to click a button, the page shifts, and they click the wrong thing. A few attributes and CSS properties prevent this entirely.
- Target: CLS under 0.1 — Google 'good' threshold
- Images: always width + height or aspect-ratio — prevent shift on load
- Fonts: font-display: swap + size-adjust — minimize text reflow
- Dynamic content: reserve space with min-height or placeholder
- Ads and banners: reserve slot size in CSS, overlay instead of push
CLS 0.5: the user tries to click a button, the page shifts (image loads, font swaps, banner inserts), they click the wrong thing. Three fixes: width/height on images, font-display: swap on fonts, min-height on dynamic content containers.
Rule 4: Real-User Monitoring with web-vitals
The rule: 'Install the web-vitals library and report metrics from real users: import { onLCP, onINP, onCLS } from "web-vitals"; onLCP(sendToAnalytics); onINP(sendToAnalytics); onCLS(sendToAnalytics). Send to your analytics service (Google Analytics, Datadog, custom endpoint). Lab metrics (Lighthouse) test controlled conditions. Field metrics (web-vitals) measure real users on real devices and networks — what Google actually uses for ranking.'
For percentile targets: 'Target the 75th percentile (p75) — what 75% of your users experience. Google uses p75 for Core Web Vitals assessment. If your p75 LCP is 3.5s, 25% of users experience even worse. Do not optimize for the median (p50) — it hides the experience of users on slower devices and connections. The p75 is the metric that determines your green/yellow/red status in Google Search Console.'
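The p75 statistic is simple to compute from collected field samples. A minimal nearest-rank sketch (the sample values are made up for illustration) showing how the median can look "good" while the p75 fails:

```javascript
// Nearest-rank percentile over collected metric samples.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

// Illustrative LCP samples in milliseconds from real-user reports.
const lcpSamples = [1200, 1500, 1800, 2100, 2300, 2600, 3400, 4000];
const p50 = percentile(lcpSamples, 50); // 2100 ms — median looks "good"
const p75 = percentile(lcpSamples, 75); // 2600 ms — over the 2.5 s threshold
```

This is why optimizing for the median misleads: here p50 passes while p75, the number Google assesses, does not.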
AI generates: no monitoring. Performance issues are discovered when: users complain (reactive), bounce rate increases (weeks later), or search rankings drop (months later). web-vitals library: 2KB, reports real performance data continuously. You see the regression in the dashboard the day the code ships, not the month the ranking drops.
Rule 5: CI Performance Budgets
The rule: 'Set performance budgets in CI that fail the build on regression. Lighthouse CI: lhci autorun with assertions: { "categories:performance": ["error", { "minScore": 0.9 }], "first-contentful-paint": ["error", { "maxNumericValue": 2000 }] }. size-limit for bundle size: if the JavaScript bundle exceeds 200KB gzipped, the build fails. Budgets catch regressions at PR time — before they reach production.'
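The assertions above can live in a `lighthouserc.json` at the repository root; a sketch, with the URL and thresholds as placeholder values to replace with your own baseline:

```json
{
  "ci": {
    "collect": {
      "url": ["http://localhost:3000/"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.9 }],
        "first-contentful-paint": ["error", { "maxNumericValue": 2000 }],
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }]
      }
    }
  }
}
```

Running `lhci autorun` in CI then fails the build when any assertion regresses.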
For budget evolution: 'Start with current baseline: measure your existing metrics, set budgets 10% above current values (allow some headroom). Tighten budgets quarterly as you optimize: current LCP is 2.8s, budget at 3.0s. After optimizing to 2.2s, tighten budget to 2.5s. Budgets ratchet toward the target — they prevent regression while the team optimizes forward.'
AI generates: no performance budget, no CI check. Each PR adds 5-10KB unchecked. 50 PRs later: 250-500KB of bundle creep, LCP degraded by 2 seconds, CLS up by 0.3. Nobody notices because there is no measurement. CI budgets turn performance from an invisible quality into a build gate.
Without CI budgets: 50 PRs add 5KB each = 250KB of bundle creep, LCP degrades by 2 seconds. Nobody notices. With Lighthouse CI: the PR that adds the 201st KB fails the build. Performance becomes as visible and enforceable as test coverage.
Complete Web Vitals Rules Template
Consolidated rules for Web Vitals and Core Metrics.
- LCP under 2.5s: preload LCP image, fetchpriority='high', server-render above fold
- INP under 200ms: break long tasks, yield to main thread, Web Workers for computation
- CLS under 0.1: dimensions on images, font-display: swap, reserve space for dynamic content
- web-vitals library: 2KB, real-user monitoring, send to analytics
- Target p75 (75th percentile) — what Google uses for ranking assessment
- Lighthouse CI: fail build if performance score drops below threshold
- size-limit: fail build if bundle exceeds byte budget
- Tighten budgets quarterly: prevent regression, optimize forward