Comparisons

Before and After AI Rules: Real Code Examples

See the concrete difference AI rules make. Side-by-side code examples showing what the AI generates without rules vs with rules: component patterns, API endpoints, state management, error handling, and test files.

8 min read · June 7, 2025

Without rules: 40-60% rewriting. With rules: 5-10% rewriting. 7-10x reduction in convention corrections.

Five real code examples (components, APIs, state, errors, tests) before and after CLAUDE.md rules

Seeing Is Believing: The Rules Impact in Code

AI rules are abstract guidance in a markdown file; their impact is concrete code that matches your conventions. This article shows five real before/after code examples. Each example presents the same task without rules (generic AI output) and with rules (convention-matching AI output). The difference is not subtle. Without rules, the AI generates technically correct code that does not match your project. With rules, it generates code that looks like your team wrote it.

The examples cover the five most common AI coding tasks: component creation, API endpoints, state management, error handling, and test files. Each task shows the prompt given to the AI, the output without rules, the output with rules, and the specific rules that caused the difference. The goal is to make the abstract value of rule files concrete and measurable. If one example matches a pattern your team struggles with, that one rule would save hours of code review corrections.

These examples use the RuleSync project's own conventions (Next.js App Router, Drizzle ORM, Zustand, Tailwind, Vitest) as the target. Your project's conventions differ, but the pattern is the same: without rules, the AI generates generic patterns; with rules that match YOUR conventions, it generates YOUR patterns. The specific conventions change; the improvement does not.

Example 1: React Component

Prompt: 'Create a user profile card component that shows name, email, avatar, and a settings link.' Without rules, the AI generates a Client Component ('use client' at the top), useState for loading state, useEffect to fetch user data via fetch('/api/user'), inline styles or CSS modules, and a div-based layout with onClick for navigation. The code works, but it is client-side rendered, fetches data in a useEffect waterfall, uses patterns the project does not follow, and ships unnecessary client JavaScript.

With CLAUDE.md rules ('Server Components default. Tailwind CSS with cn(). Use Link from next/link for navigation. Data fetching: async component, no useEffect. Image: next/image.'), the AI generates a Server Component (no 'use client'), an async function that fetches data directly (const user = await getUser()), Tailwind className with cn() for conditional styles, a Link component for the settings navigation, an Image component for the avatar with width/height, and zero client JavaScript. The code matches the project conventions, uses Server Components for performance, and requires zero convention-related code review corrections.

The rules that made the difference: 'Server Components default' (prevented 'use client' + useEffect). 'Tailwind CSS with cn()' (prevented CSS modules or inline styles). 'Use Link from next/link' (prevented div onClick navigation). Three rules in CLAUDE.md transformed the output from generic React to project-specific Next.js App Router code. The time saved: 5-10 minutes of manual refactoring per component; over 50 AI-generated components, 4-8 hours.
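The cn() helper these rules refer to is conventionally built from clsx plus tailwind-merge; since neither library can be assumed here, the sketch below uses a minimal hand-rolled stand-in to show the conditional-class pattern the 'Tailwind CSS with cn()' rule produces:

```typescript
// Minimal stand-in for the cn() helper (normally clsx + tailwind-merge).
// It joins truthy class values and drops false/null/undefined ones, which is
// the conditional-styling pattern the "Tailwind CSS with cn()" rule enforces.
type ClassValue = string | false | null | undefined;

function cn(...values: ClassValue[]): string {
  return values.filter((v): v is string => Boolean(v)).join(" ");
}

// Usage, as it might appear in the profile card (isActive is illustrative):
const isActive = true;
const cardClass = cn(
  "rounded-lg border p-4",           // base styles
  isActive && "ring-2 ring-blue-500" // conditional styles
);
// cardClass === "rounded-lg border p-4 ring-2 ring-blue-500"
```

The real cn() additionally merges conflicting Tailwind classes (e.g. p-2 vs p-4 keeps the later one); this stand-in only covers the truthy-filtering part.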

  • Without rules: use client + useEffect + fetch + CSS modules + div onClick = generic React
  • With rules: Server Component + async data + Tailwind cn() + Link + Image = project-specific Next.js
  • Three rules caused the difference: Server Components, Tailwind cn(), and Link component
  • Time saved per component: 5-10 minutes of refactoring. 50 components: 4-8 hours total
  • The AI generated correct code both times, but only with rules did it match the project
💡 Three Rules Transform a Component

'Server Components default' + 'Tailwind cn()' + 'Link from next/link': three rules transform AI output from generic React (use client + useEffect + CSS modules + div onClick) to project-specific Next.js (async Server Component + Tailwind + Link). Three rules, 5-10 minutes saved per component.

Example 2: API Endpoint

Prompt: 'Create a GET endpoint that returns paginated users filtered by role.' Without rules: an Express-style handler (req, res) with manual query string parsing, a raw SQL query (SELECT * FROM users WHERE role = $1 LIMIT $2 OFFSET $3), offset-based pagination (page number in query params), no input validation, and a bare JSON response (res.json(users)). The code uses Express patterns in a Next.js project, offset pagination where the project uses cursors, no Zod validation, and no structured response format.

With CLAUDE.md rules ('Next.js Route Handlers. Drizzle ORM: db.select().from(). Zod validation on every API input. Cursor-based pagination. Response format: { data, pagination }.'), the AI generates: a Next.js Route Handler (export async function GET(request: Request)), Zod schema for query params (z.object({ role: z.enum(['admin', 'editor', 'viewer']), cursor: z.string().optional(), limit: z.number().default(20) })), Drizzle query (db.select().from(users).where(eq(users.role, role)).limit(limit)), cursor-based pagination (WHERE id > cursor ORDER BY id), and structured response ({ data: users, pagination: { cursor: lastUser.id, hasNext } }).

The rules that made the difference: 'Next.js Route Handlers' (prevented Express patterns). 'Drizzle ORM' (prevented raw SQL). 'Zod validation on every API input' (added input validation). 'Cursor-based pagination' (prevented offset). 'Response format: { data, pagination }' (structured the response). Five rules transformed a generic Express endpoint into a project-specific Next.js API route with validation, correct ORM usage, cursor pagination, and structured responses.
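The cursor pagination and response shape described above can be sketched with plain functions; the real endpoint would run this logic through Drizzle (db.select().from(users).where(...)), but an in-memory array stands in here so the example runs without a database. The paginateUsers name is illustrative, not from the article:

```typescript
// Self-contained sketch of cursor-based pagination with the
// { data, pagination } response format the rule file mandates.
interface User {
  id: number;
  role: "admin" | "editor" | "viewer";
}

function paginateUsers(
  users: User[],
  role: User["role"],
  cursor?: number,
  limit = 20
) {
  // Equivalent of: WHERE role = ? AND id > cursor ORDER BY id LIMIT limit + 1
  const page = users
    .filter((u) => u.role === role && (cursor === undefined || u.id > cursor))
    .sort((a, b) => a.id - b.id)
    .slice(0, limit + 1); // fetch one extra row to detect whether a next page exists

  const hasNext = page.length > limit;
  const data = hasNext ? page.slice(0, limit) : page;

  return {
    data,
    pagination: {
      cursor: data.length > 0 ? data[data.length - 1].id : null, // resume point
      hasNext,
    },
  };
}
```

The design point the rule encodes: offset pagination scans and discards every skipped row and drifts under concurrent inserts, while the cursor form stays O(limit) per page and is stable, which is why 'Cursor-based pagination' appears in CLAUDE.md.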

Example 3: State Management

Prompt: 'Add theme toggle functionality (light/dark/system) with persistence.' Without rules: Redux Toolkit with createSlice (dispatch, reducer, selectors; 30+ lines of boilerplate), localStorage read/write inside the reducer (impure; Redux reducers should be pure), and a ThemeProvider with React Context wrapping the app. The code uses Redux (the project uses Zustand), mixes side effects into reducers, and adds unnecessary Provider wrapper complexity.

With CLAUDE.md rules ('Zustand for global client state. No Redux. Persist: zustand/middleware persist. No Provider needed.'), the AI generates a Zustand store with persist middleware (const useThemeStore = create(persist((set) => ({ theme: 'system', setTheme: (theme) => set({ theme }) }), { name: 'theme' }))). Usage: const theme = useThemeStore(s => s.theme). No Provider, no reducer, no dispatch, no action types. The code is 8 lines instead of 30+, uses the correct library, persists correctly with the Zustand middleware, and requires zero Provider wrapping.

The rule that made the difference: 'Zustand for global client state. No Redux.' Two sentences. The negative rule ('No Redux') is as important as the positive ('Zustand'): without 'No Redux', the AI may still default to Redux (it has more Redux training data). The explicit prohibition prevents the AI from falling back to its most common training pattern. The time saved: 20 minutes refactoring from Redux to Zustand, plus every future state management task uses Zustand from the start.
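To show the ~8-line store shape without assuming zustand is installed, the sketch below uses hand-rolled stand-ins for create() and persist() (createPersistedStore and the Map-backed storage are assumptions for illustration; real code imports create from 'zustand' and persist from 'zustand/middleware' and persists to localStorage):

```typescript
// Stand-in storage (real code: localStorage via zustand's persist middleware).
const storage = new Map<string, string>();

// Simplified stand-in for create(persist(...)): updates state, writes it to
// storage on every set, and rehydrates persisted fields on creation.
function createPersistedStore<T extends object>(
  name: string,
  init: (set: (partial: Partial<T>) => void) => T
) {
  const set = (partial: Partial<T>) => {
    state = { ...state, ...partial };
    storage.set(name, JSON.stringify(state)); // functions are dropped by JSON.stringify
  };
  let state = init(set);
  const saved = storage.get(name);
  if (saved) state = { ...state, ...JSON.parse(saved) }; // rehydrate persisted fields
  return { getState: () => state };
}

// The store itself: the shape the 'Zustand for global client state' rule produces.
type Theme = "light" | "dark" | "system";
interface ThemeState {
  theme: Theme;
  setTheme: (theme: Theme) => void;
}

const useThemeStore = createPersistedStore<ThemeState>("theme", (set) => ({
  theme: "system",
  setTheme: (theme) => set({ theme }),
}));

useThemeStore.getState().setTheme("dark"); // persists {"theme":"dark"} under "theme"
```

No Provider, no dispatch, no action types: the store is created once and read anywhere, which is the complexity reduction the rule buys.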

  • Without rules: Redux Toolkit, 30+ lines, createSlice + dispatch + Provider wrapper
  • With rules: Zustand, 8 lines, create() + persist middleware, no Provider needed
  • Key rule: 'Zustand for state. No Redux.' Negative rule prevents Redux fallback
  • AI defaults to Redux (more training data) unless explicitly told to use Zustand
  • Time saved: 20 min per refactor + every future state task correct from the start
โš ๏ธ The Negative Rule Is as Important as the Positive

'Zustand for state. No Redux.' Without 'No Redux', the AI defaults to Redux (more training data); the explicit prohibition prevents the fallback. Without rules: Redux, 30+ lines. With two sentences: Zustand, 8 lines. The negative rule is the guard against the AI's statistical default.

Example 4: Error Handling and Example 5: Test File

Error handling without rules: try/catch with console.error and a generic res.status(500).json({ error: 'Something went wrong' }). No structured error format, no error code, no request ID, no distinction between client and server errors. With rules ('Structured errors: { error: { code, message, details } }. Never expose stack traces. Log with request context.'), the AI generates a structured error response with an error code (VALIDATION_ERROR), a user-friendly message, field-level details for validation errors, a request ID for log correlation, and no exposed stack trace. The rule file transforms generic error handling into a professional API error system.
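The structured format described above can be sketched as a plain type plus a small builder; the apiError helper and the req_ request IDs are illustrative assumptions, not from any specific library:

```typescript
// The { error: { code, message, details } } shape the rule file mandates,
// plus a request ID for log correlation. No stack trace ever leaves the server.
interface ApiError {
  error: {
    code: string;                      // machine-readable, e.g. "VALIDATION_ERROR"
    message: string;                   // user-friendly, no internals leaked
    details?: Record<string, string>;  // field-level validation messages
  };
  requestId: string;                   // ties the response to server-side logs
}

function apiError(
  code: string,
  message: string,
  requestId: string,
  details?: Record<string, string>
): ApiError {
  return { error: { code, message, details }, requestId };
}

// 4xx client error: field-level details are safe to expose.
const invalid = apiError(
  "VALIDATION_ERROR",
  "Request validation failed",
  "req_123",
  { email: "must be a valid email address" }
);

// 5xx server error: generic message outward; the stack trace goes only
// to the server log, keyed by the same request ID.
const internal = apiError("INTERNAL_ERROR", "Something went wrong on our end", "req_456");
```

The error code gives clients something to branch on, and the request ID lets support correlate a user report with the exact log line, which is what makes this a system rather than a string.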

Test file without rules: Jest syntax (describe, it, expect from Jest globals), shallow rendering with enzyme (deprecated), mocking with jest.mock, and only happy-path testing. With rules ('Vitest + React Testing Library. Test files: *.test.ts alongside source. Test user behavior, not implementation. Include edge cases: null, empty, and error states.'): the AI generates Vitest syntax (import { describe, it, expect } from 'vitest'), React Testing Library (render, screen, userEvent), co-located test file (Button.test.tsx next to Button.tsx), behavior-based tests (click the button, verify the text changes), and edge case tests (null input, empty array, error state).
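The testing methodology the rules demand (behavior-focused, with null, empty, and error-state coverage) can be sketched without assuming Vitest is installed; bare assertions stand in for describe/it/expect here, and formatUserNames is a hypothetical function under test invented for illustration:

```typescript
// Hypothetical function under test (not from the article).
function formatUserNames(names: (string | null)[] | null): string {
  if (names === null) throw new Error("names must not be null");
  return names.filter((n): n is string => n !== null && n !== "").join(", ");
}

// Edge-case coverage in the style the rules demand. A real co-located
// Button.test.tsx would use Vitest's describe/it/expect and React Testing
// Library; plain assertions are used so this sketch runs anywhere.

// happy path
if (formatUserNames(["Ada", "Grace"]) !== "Ada, Grace") throw new Error("happy path");
// empty array
if (formatUserNames([]) !== "") throw new Error("empty array");
// null and empty entries are skipped
if (formatUserNames([null, "Ada", ""]) !== "Ada") throw new Error("null entries");
// error state: null input throws
let threw = false;
try {
  formatUserNames(null);
} catch {
  threw = true;
}
if (!threw) throw new Error("error state");
```

The structure, not the assertion library, is the point: one happy path plus the null, empty, and error cases the 'Include edge cases' rule forces the AI to generate.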

Both examples show that the AI generates technically correct code regardless of rules, but the code matches different conventions. Without rules, it matches the AI's training data distribution (Jest is more common than Vitest, Redux more common than Zustand, Express more common than Next.js Route Handlers). With rules, the code matches YOUR project conventions. The rule file shifts the AI output from the statistical average (generic) to your specific project standard (targeted).

Measuring the Rule File Impact

Across the five examples, AI-generated code without rules needed 40-60% convention-based rewriting (correct functionality, wrong patterns); with rules, 5-10% (correct functionality, matching patterns, minor adjustments). Dropping from 40-60% to 5-10% is a 7-10x reduction in convention-related code review corrections. For a team generating 50 AI-assisted files per week, the rule file saves 10-20 hours of refactoring per week.

The highest-impact rules from the examples: (1) 'Server Components default' (prevents use client + useEffect on every component). (2) 'Zustand, no Redux' (prevents 30-line Redux boilerplate on every state task). (3) 'Drizzle ORM' (prevents raw SQL and wrong ORM usage on every database query). (4) 'Vitest, not Jest' (prevents the wrong test runner on every test file). (5) 'Cursor-based pagination' (prevents offset pagination on every paginated endpoint). These five rules cover the five most frequent AI generation patterns; they are the Tier 1 rules that produce 80% of the rule file value.

The takeaway: a well-written CLAUDE.md with 10-15 specific rules produces AI-generated code that matches your project conventions 90-95% of the time. The rewrite rate drops from hours per week to minutes. Code review focuses on logic and architecture, not style and convention corrections. The rule file is the highest-ROI investment in AI-assisted coding: 30 minutes to write, thousands of hours saved over the project lifetime.

โ„น๏ธ 30 Minutes Saves 10-20 Hours Per Week

A well-written CLAUDE.md takes 30 minutes to write. The impact: 40-60% rewriting drops to 5-10%. A team generating 50 AI files/week saves 10-20 hours of convention corrections. The rule file is the highest-ROI investment in AI-assisted coding; the payback period is less than one day.

Before and After Summary

Summary of before and after AI rules across five examples.

  • Component: use client + useEffect → Server Component + async data. 3 rules, 5-10 min/component saved
  • API endpoint: Express + raw SQL + offset → Route Handler + Drizzle + cursor. 5 rules, complete convention match
  • State: Redux 30+ lines → Zustand 8 lines. 1 rule with negative ('No Redux'), 20 min saved per task
  • Error handling: generic 500 → structured { error: { code, message, details } }. 1 rule, professional API errors
  • Tests: Jest + enzyme + happy path → Vitest + RTL + edge cases. 3 rules, correct runner + methodology
  • Rewrite rate: 40-60% without rules → 5-10% with rules. 7-10x reduction in convention corrections
  • Highest-impact rules: RSC default, Zustand no Redux, Drizzle, Vitest, cursor pagination
  • 30 minutes to write CLAUDE.md: saves 10-20 hours per week for a team generating 50 AI files/week