Best Practices

AI Rules for Performance Optimization

AI optimizes prematurely or not at all. Rules for measuring before optimizing, the performance budget, lazy loading, code splitting, and avoiding premature optimization.

8 min read·July 3, 2024

useMemo on every component is not optimization — it is a cargo cult

Measure first, performance budgets, lazy loading, bundle analysis, and profile before fixing

AI Optimizes the Wrong Things (Or Nothing At All)

AI generates code at two performance extremes: ignoring performance entirely (loading entire libraries for one function, rendering everything synchronously, no lazy loading, no code splitting) or optimizing prematurely (memoizing every component, caching every function call, using complex algorithms for 10-item arrays). Both are costly: the first creates real performance problems for users; the second burns developer time on complexity that solves nothing.

The cardinal rule of performance optimization: measure first, optimize second. Profile your application, identify the actual bottleneck, fix that bottleneck, and measure again. AI skips measurement entirely — it applies textbook optimizations without knowing if they address the real problem. Memoizing a component that renders once is wasted code. Not memoizing a component that renders 1000 times per second is a real problem.

These rules enforce: measure-first optimization, performance budgets, the optimizations that matter most (lazy loading, code splitting, image optimization), and the discipline to avoid premature optimization.

Rule 1: Measure Before Optimizing — Always

The rule: 'Never optimize without measuring. Before: run Lighthouse, check bundle analyzer, profile with Chrome DevTools Performance tab. Identify the actual bottleneck — is it: bundle size (slow initial load), render performance (janky UI), API latency (slow data), or memory (growing heap)? Optimize THAT specific bottleneck. After: measure again to verify the optimization actually helped. If it did not measurably improve the metric, revert — you added complexity for nothing.'

For the tools: 'Lighthouse for overall page performance (LCP, INP, CLS, bundle size). Chrome DevTools Performance tab for runtime profiling (where is time spent?). Bundle analyzer (rollup-plugin-visualizer, @next/bundle-analyzer) for bundle composition (which packages are large?). React DevTools Profiler for component render timing. Network tab for API waterfall (which requests are slow?).'

AI generates useMemo, useCallback, and React.memo on everything — a cargo cult of optimization that adds code complexity without measured benefit. The React team explicitly says: most apps do not need these. Profile first, memoize only the measured bottleneck.

  • Profile first: Lighthouse, DevTools, bundle analyzer — find the real bottleneck
  • Optimize the specific bottleneck — not a generic optimization pass
  • Measure after: verify the optimization helped — revert if it did not
  • useMemo/useCallback only when profiling shows a render bottleneck
  • Premature optimization adds complexity without measurable benefit

💡 Profile, Then Memoize

The React team says: most apps do not need useMemo or useCallback. Profile with React DevTools Profiler first. If a component renders 1000x/second, memoize it. If it renders once, the memo adds complexity for zero benefit.
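
The same discipline applies outside React: memoize only what profiling showed is hot. A minimal sketch in plain JavaScript, where the hypothetical `expensiveLayout` stands in for a measured bottleneck (the caching pattern is the same idea as `useMemo` with a dependency array):

```javascript
// Hypothetical hot path -- assume profiling already flagged this function.
let computeCount = 0;
function expensiveLayout(width) {
  computeCount += 1;
  // Stand-in for real work (layout math, large-array transforms, etc.).
  let total = 0;
  for (let i = 0; i < 1_000_000; i++) total += (i * width) % 7;
  return total;
}

// Cache results per argument -- memoize the measured bottleneck, nothing else.
function memoize(fn) {
  const cache = new Map();
  return (arg) => {
    if (!cache.has(arg)) cache.set(arg, fn(arg));
    return cache.get(arg);
  };
}

const memoizedLayout = memoize(expensiveLayout);
memoizedLayout(320); // computed once
memoizedLayout(320); // cache hit -- expensiveLayout is not called again
console.log(computeCount); // 1
```

If `expensiveLayout` ran once per page load, this cache would be pure overhead — which is exactly the point of measuring first.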

Rule 2: Performance Budgets

The rule: 'Set performance budgets and enforce in CI: total JS bundle < 200KB (gzipped), individual page JS < 100KB, LCP < 2.5s, INP < 200ms, CLS < 0.1. Fail the build if any budget is exceeded. This prevents: one new dependency adding 500KB, one unoptimized image adding 5MB, and one synchronous API call adding 3 seconds to LCP. Budgets catch regressions before they reach production.'

For enforcement: 'Use Lighthouse CI in your pipeline: lhci autorun --collect.url=https://preview-url --assert.preset=lighthouse:recommended. Set custom assertions: assertions: { "total-byte-weight": ["error", { maxNumericValue: 500000 }] }. Bundle size: use bundlesize or size-limit in CI — fail on budget violation.'
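
Wired into CI, the budget above might look like this `lighthouserc.js` sketch (the preview URL and thresholds are placeholders to adapt to your pipeline):

```javascript
// lighthouserc.js -- consumed by `lhci autorun`.
module.exports = {
  ci: {
    collect: {
      // Placeholder: point at your deploy preview.
      url: ['https://preview-url.example.com/'],
      numberOfRuns: 3,
    },
    assert: {
      preset: 'lighthouse:recommended',
      assertions: {
        // ~500KB total transfer weight; fail the build above that.
        'total-byte-weight': ['error', { maxNumericValue: 500000 }],
        // LCP < 2.5s, CLS < 0.1 -- the budgets from the rule above.
        'largest-contentful-paint': ['error', { maxNumericValue: 2500 }],
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
      },
    },
  },
};
```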

AI adds dependencies without considering their size impact. lodash adds 70KB. moment adds 300KB. A date picker component adds 150KB. Without a budget, the bundle grows silently until the page takes 10 seconds to load. With a budget, the build fails at 200KB — forcing the developer to: use a smaller alternative, lazy-load the heavy dependency, or justify the size increase.

⚠️ Budgets Catch Regressions

Without a budget, the bundle grows silently: lodash +70KB, moment +300KB, a date picker +150KB. With a 200KB budget, the build fails — forcing the developer to choose a smaller alternative or justify the size. Prevention beats remediation.

Rule 3: Lazy Loading and Code Splitting

The rule: 'Lazy-load everything not needed for the initial render: below-the-fold images (loading="lazy"), route-level code splits (React.lazy, Next.js dynamic), heavy components (modals, charts, rich text editors), and third-party scripts (analytics, chat widgets). The initial page load should contain: the HTML structure, above-the-fold content, and critical CSS — nothing else.'

For route-level splitting: 'Every route is a separate code split: const Profile = lazy(() => import("./pages/Profile")). In Next.js: automatic per-page splitting. In Vite/React: React.lazy + Suspense. The user downloads code for the current page only — navigating to a new page downloads that page code on demand. Never bundle all pages into one file.'

For component-level splitting: 'Heavy components loaded on interaction: const Editor = dynamic(() => import("./RichTextEditor"), { ssr: false }). Charts, maps, code editors, video players — all lazy-loaded. The initial bundle contains none of these. They download when the user opens the modal, scrolls to the chart, or clicks the edit button.'
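
React.lazy and Next.js dynamic() both sit on top of the dynamic `import()` primitive, which is what tells the bundler to emit a separate chunk. A minimal sketch of the underlying pattern, using Node's built-in `node:path` as a stand-in for a heavy module:

```javascript
// Cache the in-flight promise so concurrent callers share one download.
let heavyModulePromise = null;

function loadHeavyModule() {
  // import() returns a promise; bundlers split this target into its own chunk.
  // node:path is a stand-in here for a real heavy dependency (editor, chart lib).
  heavyModulePromise ??= import('node:path');
  return heavyModulePromise;
}

// First call triggers the load; later calls reuse the same promise.
loadHeavyModule().then((mod) => {
  console.log(typeof mod.join); // "function"
});
```

This is what fires when the user clicks "edit" or scrolls to the chart — nothing downloads until then.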

  • Route-level: one code split per route — download current page only
  • Component-level: lazy-load modals, charts, editors — on interaction
  • Images: loading='lazy' on below-the-fold — eager on above-the-fold
  • Third-party: analytics/chat with async/defer — never block initial render
  • Goal: initial bundle = HTML + above-the-fold content + critical CSS

Rule 4: Bundle Size Optimization

The rule: 'Analyze your bundle with rollup-plugin-visualizer or @next/bundle-analyzer — see exactly which packages contribute to bundle size. Tree-shake: import { format } from "date-fns" (4KB) not import * as dateFns (200KB). Replace heavy dependencies: moment → date-fns or dayjs (90% smaller), lodash → native array methods (0KB), uuid → crypto.randomUUID() (0KB, built-in).'

For common replacements: 'moment (300KB) → dayjs (7KB) or date-fns (tree-shakeable). lodash (70KB) → lodash-es (tree-shakeable) or native JS. axios (30KB) → fetch (0KB, built-in). classnames (1KB) → clsx (0.5KB). uuid (10KB) → crypto.randomUUID() (0KB). Every replacement: smaller bundle, fewer dependencies, less maintenance.'

AI imports entire libraries for single functions: import _ from "lodash" for _.get (70KB for one function). import { get } from "lodash-es" tree-shakes to the one function used (~1KB). Or use optional chaining: obj?.nested?.value — 0KB, built into the language.

ℹ️ moment→dayjs = -293KB

moment (300KB) → dayjs (7KB): same API, 97% smaller. lodash (70KB) → native JS: array methods are built-in. axios (30KB) → fetch: built into every browser. Three replacements, 400KB saved, zero functionality lost.

Rule 5: Avoid Premature Optimization

The rule: 'Do not optimize unless: you have measured a problem, the problem affects users (not just a Lighthouse score), and the optimization addresses the measured problem specifically. Do not: memoize every component (React re-renders are fast), cache every function (function calls are nanoseconds), use complex algorithms for small data (O(n) vs O(log n) does not matter for 100 items), or pre-optimize database queries (optimize when the query is actually slow, not before).'

For when to optimize: 'Optimize when: users report slowness (real problem), Lighthouse score drops below 70 (measurable degradation), bundle size exceeds budget (automated catch), P95 API latency exceeds SLA (monitored threshold), or INP exceeds 200ms on real devices (Core Web Vital failure). These are measurable, user-affecting problems — not theoretical concerns.'

For the optimization mindset: 'Write clear, correct code first. Ship it. Measure performance with real users and real data. If a metric is bad, profile to find the cause. Fix the cause. Measure again. This cycle — write → ship → measure → fix — produces better performance than speculative optimization because you optimize what actually matters, not what might matter.'

  • Optimize only measured bottlenecks — not theoretical concerns
  • Write clear code first — premature optimization obscures intent
  • useMemo/useCallback: only when profiling shows render bottleneck
  • O(n) vs O(log n) irrelevant for 100 items — matters for 1M items
  • Cycle: write → ship → measure → profile → fix → measure again

Complete Performance Rules Template

Consolidated rules for performance optimization.

  • Measure first: Lighthouse, DevTools profiler, bundle analyzer — find the real bottleneck
  • Performance budgets: JS <200KB, LCP <2.5s, INP <200ms — enforced in CI
  • Lazy load: route splits, component-level (modals, charts), images below fold
  • Bundle: tree-shake imports, replace heavy deps, analyze with visualizer
  • moment→dayjs, lodash→native, axios→fetch, uuid→crypto.randomUUID
  • No premature optimization: useMemo only when measured, simple code first
  • Optimize when: users report, Lighthouse <70, budget exceeded, P95 > SLA
  • Cycle: write → ship → measure → profile → fix → verify improvement