Why Benchmark: Track Quality, Not Just Adoption
Adoption metrics: tell you how many teams use AI rules. Quality metrics: tell you whether the rules actually improve AI output. 100% adoption with poor-quality rules: worse than 50% adoption with excellent rules. Benchmarking: measures the quality of AI-generated code with and without rules, tracks whether quality improves as rules evolve, and identifies specific rules that have the most impact on output quality.
The benchmarking approach: define quality criteria (what makes AI output good?), create benchmark prompts (standardized prompts that exercise the rules), score the output (rate each quality dimension), and track over time (compare scores across rule versions). The benchmark: runs periodically (after each rule change or quarterly) and produces a quality score that complements adoption and satisfaction metrics.
What benchmarking reveals: 'Rule v2.3 produces 85% convention-compliant code. Rule v2.4 (after adding the error handling rule): produces 92% convention-compliant code. The error handling rule: increased quality by 7 percentage points.' This attribution: impossible without benchmarking. With benchmarking: you know exactly which rule changes improve quality and by how much.
Step 1: Define Quality Criteria
Convention compliance: does the output follow the project's naming conventions, error handling patterns, testing standards, and framework patterns? Score: 1 (none followed) to 5 (all followed). This is the most direct measure of rule effectiveness. If the rules encode your conventions and the AI follows them: convention compliance is high.
Code correctness: does the output work? Does it handle edge cases? Does it contain logic errors? Score: 1 (broken) to 5 (correct with edge cases handled). Correctness is not directly affected by rules (rules do not fix logic bugs). But: well-structured code (from following rules) tends to have fewer bugs because the patterns encode defensive programming practices.
Completeness: does the output include everything needed? Tests alongside the feature? Error handling for all paths? Documentation for public APIs? Input validation? Score: 1 (only the happy path) to 5 (complete with tests, errors, and docs). Rules that specify 'every endpoint includes validation, error handling, and tests': directly improve completeness scores.
Readability: is the output easy to understand? Well-named variables? Clear structure? Appropriate comments? Consistent formatting? Score: 1 (confusing) to 5 (self-documenting). Rules that enforce naming conventions and structural patterns: improve readability. The overall quality score: average of the four dimensions (convention, correctness, completeness, readability). AI rule: 'Four dimensions: convention compliance, correctness, completeness, readability. Score each 1-5. Average = overall quality. Track over time.'
Convention compliance alone: tells you if the AI follows rules but not if the code works. Correctness alone: tells you if the code works but not if it follows team patterns. Completeness alone: tells you if all parts are present but not if they are good. Readability alone: tells you if the code is clear but not if it is complete. All four together: a complete picture. A high score in all four: the AI generates code that is correct, consistent, complete, and readable.
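The four-dimension rubric can be sketched as a small value object. This is a hypothetical sketch, not part of any library: the class and field names are ours, and the validation simply enforces the 1-5 range described above.

```python
from dataclasses import dataclass, fields


@dataclass(frozen=True)
class QualityScore:
    """One scored benchmark run: four dimensions, each rated 1-5."""
    convention: float    # convention compliance
    correctness: float
    completeness: float
    readability: float

    def __post_init__(self):
        # Reject scores outside the 1-5 rubric.
        for f in fields(self):
            value = getattr(self, f.name)
            if not 1 <= value <= 5:
                raise ValueError(f"{f.name} must be 1-5, got {value}")

    @property
    def overall(self) -> float:
        # Overall quality = plain average of the four dimensions.
        return (self.convention + self.correctness
                + self.completeness + self.readability) / 4


score = QualityScore(convention=5, correctness=4, completeness=4, readability=3)
print(score.overall)  # 4.0
```

A frozen dataclass keeps a recorded score immutable, which matters once scores become historical benchmark data.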
Step 2: Design Benchmark Prompts
Benchmark prompts: standardized prompts that exercise the most important rules. Use the same prompts every time: this allows comparison across rule versions. The 5-prompt benchmark suite: (1) API endpoint with validation and error handling, (2) database query with joins and error handling, (3) React component with data fetching and loading states, (4) test suite for a function with edge cases, (5) refactoring a legacy function to current conventions.
Prompt stability: never change the benchmark prompts. If you change the prompt: the score change might be from the prompt change, not from rule improvements. Add new prompts: when you want to benchmark a new area. But keep the original 5 for trend tracking. After 12 months: you have 12 data points on the same 5 prompts and a clear quality trend. AI rule: 'Benchmark prompts are fixed. Add new ones. Never modify existing ones. Consistency enables trend comparison.'
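One way to make prompt stability enforceable rather than aspirational: fingerprint the fixed suite and check the fingerprint before every run. This is a hypothetical guard, and the prompt texts below are abbreviated placeholders standing in for the 5 real prompts.

```python
import hashlib

# Placeholder texts for the 5 fixed benchmark prompts.
BENCHMARK_PROMPTS = (
    "Implement an API endpoint with validation and error handling.",
    "Write a database query with joins and error handling.",
    "Build a React component with data fetching and loading states.",
    "Write a test suite for a function, covering edge cases.",
    "Refactor a legacy function to current conventions.",
)


def suite_fingerprint(prompts=BENCHMARK_PROMPTS) -> str:
    """Hash the prompt texts; any edit to any prompt changes the digest."""
    h = hashlib.sha256()
    for p in prompts:
        h.update(p.encode("utf-8"))
    return h.hexdigest()


# Record this once, when the suite is created; compare before every run.
EXPECTED_FINGERPRINT = suite_fingerprint()


def check_suite_unchanged() -> bool:
    """Refuse to trend-compare if a prompt was silently modified."""
    return suite_fingerprint() == EXPECTED_FINGERPRINT
```

Adding a new prompt appends to the tuple and produces a new fingerprint for the extended suite, while the original 5 keep their recorded digest for trend comparison.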
Running the benchmark: for each prompt: run it 3 times (AI output varies slightly each run). Score each run. Average the 3 runs per prompt. Average the 5 prompts for the overall score. The 3 runs per prompt: account for AI output variability. The 5-prompt average: provides a comprehensive quality score. Total benchmark time: 30-45 minutes (manual scoring) or 5-10 minutes (automated scoring with a script).
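The averaging described above (3 runs per prompt, then 5 prompts averaged) can be sketched in a few lines. The function and prompt keys are illustrative, and the scores below are invented example data.

```python
def overall_score(runs_by_prompt: dict[str, list[float]]) -> float:
    """Average the runs within each prompt, then average across prompts."""
    per_prompt = [sum(runs) / len(runs) for runs in runs_by_prompt.values()]
    return sum(per_prompt) / len(per_prompt)


# 3 runs per prompt, scored 1-5 (invented example values).
runs = {
    "api_endpoint": [4.0, 4.5, 4.0],
    "db_query": [3.5, 4.0, 3.5],
    "react_component": [4.0, 4.0, 4.5],
    "test_suite": [4.5, 4.5, 4.0],
    "refactor": [3.5, 3.5, 4.0],
}
print(overall_score(runs))  # 4.0
```

Averaging per prompt first (rather than pooling all 15 runs) keeps each prompt's weight equal even if a prompt later gets a different number of runs.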
The temptation: improve a benchmark prompt after 3 months ('the prompt was not specific enough'). The problem: the quality score changes. But: is the change from the improved prompt or from rule improvements made during those 3 months? Impossible to separate. The rule: never modify existing benchmark prompts. Add new ones if you need to benchmark a new area. The original 5 prompts: the consistent baseline for trend comparison. Any prompt modification: invalidates all historical data.
Step 3: Tracking Quality Trends
Run the benchmark: after each significant rule change (new rules added, existing rules updated) and quarterly (even if rules have not changed, to detect AI model behavior changes). Record: the date, the rule version, the per-prompt scores, and the overall score. The dataset: grows over time, showing whether rules are improving quality.
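Recording each run can be as simple as appending a CSV row. A hypothetical sketch; the file layout (date, rule version, per-prompt scores, overall) and column order are ours, not a standard.

```python
import csv
import datetime


def record_run(path: str, rule_version: str, prompt_scores: dict[str, float]) -> float:
    """Append one benchmark run to the history file and return the overall score."""
    overall = sum(prompt_scores.values()) / len(prompt_scores)
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.date.today().isoformat(), rule_version]
            + [f"{s:.2f}" for s in prompt_scores.values()]
            + [f"{overall:.2f}"]
        )
    return overall
```

Appending (mode `"a"`) rather than rewriting means the history file only ever grows, which is exactly the property the trend analysis depends on.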
Trend analysis: plot the overall quality score over time (X axis: date, Y axis: score 1-5). The trend: should be gradually upward (rules are improving quality). A flat trend: the rules are not improving (add more rules or revise existing ones). A downward trend: a rule change degraded quality (investigate and rollback the problematic change). Each data point: annotated with the rule version, creating a visible connection between rule changes and quality changes.
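Beyond plotting, the rollback guidance above (a downward step points at the rule change that caused it) can be automated: scan consecutive data points and flag the version where the score dropped. A sketch with an assumed history format and an arbitrary regression threshold; all values below are invented.

```python
def find_regressions(history: list[tuple[str, str, float]],
                     threshold: float = 0.1) -> list[str]:
    """history: (date, rule_version, overall_score) tuples, oldest first.
    Returns the rule versions whose score fell by more than `threshold`
    relative to the previous data point."""
    regressions = []
    for (_, _, prev), (_, version, curr) in zip(history, history[1:]):
        if prev - curr > threshold:
            regressions.append(version)
    return regressions


history = [
    ("2024-01-15", "v2.2", 3.6),
    ("2024-04-02", "v2.3", 3.8),
    ("2024-07-10", "v2.4", 4.1),
    ("2024-10-05", "v2.5", 3.7),  # a rule change that degraded quality
]
print(find_regressions(history))  # ['v2.5']
```

Because each data point carries its rule version, the flagged entry is directly the change to investigate and roll back.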
Attribution: when the quality score changes after a rule change, the delta can be attributed to that change. 'Rule v2.4 (added error handling rule): quality score improved from 3.8 to 4.1 (+0.3). The improvement: concentrated in convention compliance (+0.5) and completeness (+0.4), with smaller gains in correctness and readability.' This attribution: tells the platform team exactly which rules are working and which are not. AI rule: 'Quality benchmarks with attribution: the most rigorous way to evaluate AI rules. It answers: which rules improve output quality, by how much, and in which dimensions.'
Without attribution: 'Quality improved from 3.8 to 4.1 this quarter.' Useful but vague: many rule changes happened this quarter. With attribution: 'Quality improved from 3.8 to 4.1. The error handling rule (added in v2.4): +0.5 on convention compliance and +0.4 on completeness. The naming convention update (v2.4.1): +0.1 on readability.' Attribution: tells the platform team exactly which investments paid off. High-impact rules: keep and protect. Low-impact rules: evaluate whether they are worth the complexity.
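The per-dimension breakdown is a straightforward subtraction of the scores recorded before and after a rule change. A sketch with invented numbers loosely mirroring the v2.4 example above (the convention and completeness deltas); the function name is ours.

```python
def attribute(before: dict[str, float], after: dict[str, float]) -> dict[str, float]:
    """Per-dimension score delta between two rule versions."""
    return {dim: round(after[dim] - before[dim], 2) for dim in before}


# Illustrative per-dimension scores before and after a rule change.
v2_3 = {"convention": 3.6, "correctness": 4.0, "completeness": 3.6, "readability": 4.0}
v2_4 = {"convention": 4.1, "correctness": 4.0, "completeness": 4.0, "readability": 4.0}
print(attribute(v2_3, v2_4))
# {'convention': 0.5, 'correctness': 0.0, 'completeness': 0.4, 'readability': 0.0}
```

This is the report format the text describes: a delta per dimension, attributable to the one rule change made between the two measurements.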
Benchmarking Summary
Summary of benchmarking AI output quality.
- Purpose: measure whether rules improve AI output quality, not just adoption
- Quality criteria: convention compliance, correctness, completeness, readability. Score 1-5 each
- Benchmark prompts: 5 standardized prompts. Fixed: never change existing ones, for trend consistency
- Execution: 3 runs per prompt (average for variability). 5 prompts averaged for overall score
- Frequency: after each significant rule change + quarterly baseline
- Trend tracking: plot overall score over time. Annotate with rule version. Upward = improving
- Attribution: quality delta after a rule change = that rule's impact. Per-dimension breakdown
- Time investment: 30-45 min manual, 5-10 min automated. The most rigorous rule effectiveness measure