Why Your Rules Need an Audit
AI rules decay like all documentation. The codebase evolves: new libraries are adopted, patterns change, team conventions shift. But the rule file stays the same. After 3-6 months, some rules reference deprecated patterns, gaps have appeared (new conventions that were never encoded), and some rules are too rigid (causing frequent overrides). An audit identifies what is stale, what is missing, what is too rigid, and what is working well.
When to audit: after 3 months of initial use (the first audit catches early issues), quarterly thereafter (aligned with the quarterly rule review), after a major technology change (framework upgrade, new library adoption), and when override rates increase (a signal that rules are not matching reality). The audit takes 1-2 hours and produces a list of rules to update, add, and remove.
The audit mindset: rules are not sacred. Every rule should earn its place by improving code quality or developer productivity. Rules that do not contribute should be removed without guilt. A lean rule file with 25 effective rules beats a bloated file with 80 rules where half are ignored.
Step 1: Staleness Check (20 Minutes)
Go through each rule and ask: does this still match what the codebase actually does? Check for: library references (does the rule mention a library the project no longer uses?), pattern references (does the rule describe a pattern the team has moved away from?), version references (does the rule specify a version that is no longer current?), and tool references (does the rule mention a tool that has been replaced?). AI rule: 'A stale rule is worse than no rule — the AI generates outdated code that must be manually corrected.'
Quick staleness test: for each rule that references a specific library, function, or pattern — search the codebase. If the reference is not found, the rule may be stale. Example: the rule says 'use styled-components for styling.' Search for styled-components in the code. If all new code uses Tailwind, the rule is stale. Update or remove it.
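The quick staleness test above can be scripted. A minimal sketch in Python, assuming a hand-maintained mapping from rule summaries to the identifiers they reference (the RULE_REFERENCES names and the file extensions here are illustrative, not from any real tool):

```python
from pathlib import Path

# Hypothetical mapping: rule summary -> identifiers the rule references.
# Populate this from your own rule file.
RULE_REFERENCES = {
    "Use styled-components for styling": ["styled-components"],
    "Fetch data with react-query": ["useQuery", "react-query"],
}

def find_stale_rules(src_dir, rule_refs=RULE_REFERENCES, exts=(".ts", ".tsx")):
    """Return rules whose referenced identifiers never appear under src_dir."""
    # Concatenate all source files into one searchable corpus.
    corpus = "\n".join(
        p.read_text(errors="ignore")
        for p in Path(src_dir).rglob("*")
        if p.suffix in exts
    )
    # A rule is flagged only if NONE of its reference terms are found.
    return [
        rule for rule, terms in rule_refs.items()
        if not any(term in corpus for term in terms)
    ]
```

A substring search produces false negatives for rules about patterns rather than identifiers, so treat the output as candidates to inspect, not verdicts.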
Common staleness patterns: rules that reference the old routing system (pages/ instead of app/ in Next.js), rules that specify old package managers (npm when the team switched to pnpm), rules that reference removed utility functions, and rules about deprecated testing patterns (enzyme when the team uses testing-library). AI rule: 'Staleness accumulates fastest after technology migrations. Post-migration: audit every rule that references the old technology.'
The rule says: 'Use styled-components for styling.' The team migrated to Tailwind 3 months ago. The AI reads the rule and generates styled-components code. The developer manually rewrites it to Tailwind. Every. Single. Time. A stale rule does not just fail to help — it actively generates wrong code that requires rework. Stale rules cost more time than no rules. The staleness check is the most important part of the audit.
Step 2: Coverage Gap Analysis (20 Minutes)
Identify conventions the team follows that are not in the rules. Methods: review the last month of code review comments (any recurring convention comments? Each one is a missing rule), ask the team (what conventions do you follow that are not in the rule file?), and compare the rule file against a new PR (read a recent PR and note every convention the code follows that the rule file does not mention).
Common coverage gaps: error handling patterns that evolved since the rules were written, new component patterns adopted from a recent framework upgrade, database query patterns introduced with a new ORM feature, API response patterns standardized in a recent architecture decision, and testing patterns that the team converged on through practice but never formalized. AI rule: 'Every convention that exists in practice but not in the rules is at risk of being violated by AI-generated code. The AI only follows what is written.'
Prioritize gaps by impact: which missing rules cause the most review comments? Which affect the most code? Which prevent the most common bugs? Add the top 5 gaps to the rule file. Save the rest for the next quarterly review. AI rule: 'Do not add all gaps at once. Add 5, use them for a month, and verify they work. Then add 5 more. Incremental additions are tested; bulk additions are not.'
The reviewer writes: 'Please use the Result pattern instead of throwing.' For the third time this month. From three different developers. The convention exists in the team's practice but not in the CLAUDE.md. The AI does not know about it. Three review comments = one missing rule. Add it. The comment never needs to be made again. Track your recurring review comments — they are your coverage gap backlog, pre-prioritized by frequency.
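Counting recurring review comments can be mechanized. A minimal sketch, assuming comments have been exported from your review tool as plain strings; the normalization here is deliberately crude (lowercase and strip), and real comments will need fuzzier grouping:

```python
from collections import Counter

def gap_backlog(review_comments, min_occurrences=3):
    """Tally recurring review comments; each recurring one is a rule gap.

    Returns (comment, count) pairs sorted by frequency, which is the
    coverage-gap backlog pre-prioritized by how often the comment recurs.
    """
    # Lowercase and strip so trivially different wordings group together.
    counts = Counter(c.strip().lower() for c in review_comments)
    return [
        (comment, n) for comment, n in counts.most_common()
        if n >= min_occurrences
    ]
```

The default of three occurrences matches the heuristic above: three review comments equal one missing rule.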
Step 3: Specificity and Effectiveness Assessment (20 Minutes)
Specificity check: for each rule, assess whether it is: too vague (the AI interprets it differently each time — 'write clean error handling' is vague), too rigid (developers override it frequently — 'always use try-catch with exactly this format' may not fit all cases), or just right (the AI generates correct code 80%+ of the time with occasional appropriate overrides). Revise vague rules by adding specific examples. Relax rigid rules by adding exception guidance.
Override rate analysis: if you track rule overrides (or can estimate them), rules with a >20% override rate are candidates for revision. They are either too rigid (the rule does not account for legitimate use cases) or unclear (developers misunderstand the rule and override unnecessarily). For each high-override rule, investigate why developers override it. If the reason is consistent, revise the rule. If reasons vary, the rule may need clearer documentation.
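The 20% threshold is easy to automate if you keep a tally of applications and overrides per rule. A sketch, assuming a simple (applied, overridden) pair per rule; collecting that data is left to your workflow:

```python
def flag_high_override_rules(stats, threshold=0.20):
    """Flag rules whose override rate exceeds the revision threshold.

    stats: dict mapping rule name -> (times_applied, times_overridden).
    Returns [(rule, rate)] for rules over the threshold, worst first.
    """
    flagged = []
    for rule, (applied, overridden) in stats.items():
        total = applied + overridden
        if total == 0:
            continue  # no data for this rule yet; nothing to conclude
        rate = overridden / total
        if rate > threshold:
            flagged.append((rule, rate))
    # Sort worst offenders first so the audit starts with them.
    return sorted(flagged, key=lambda x: -x[1])
```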
Effectiveness measurement: for rules you can measure (test coverage rules, naming convention rules, security rules): compare the rule's target against actual performance. Example: rule says 'minimum 80% test coverage.' Actual: 60% of PRs meet this. The rule is not effective — either it needs CI enforcement or the target needs adjustment. AI rule: 'A rule that is universally ignored is not a rule — it is a wish. Either enforce it (CI), revise it (make it realistic), or remove it (acknowledge it does not work).'
The rule says: 'Minimum 80% test coverage.' Actual coverage: 55%. No one enforces it. No CI check. Developers skip it because the deadline is tight. The rule has been there for 6 months and has never been met. This is not a rule — it is an aspiration. Three options: enforce it (add a CI check), revise it (lower to 70%, which is achievable), or remove it (acknowledge the team values speed over coverage for now). Keeping an ignored rule erodes trust in all rules.
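The three options map onto a simple verdict function. A sketch, assuming per-PR measurements are available; the 80% and 40% compliance cutoffs separating keep, enforce, and revise are assumed values, not prescribed by the audit method:

```python
def effectiveness_verdict(target, actual_values):
    """Compare a measurable rule's target against observed performance.

    target: the rule's stated threshold (e.g. 0.80 coverage).
    actual_values: per-PR measurements (e.g. coverage of recent PRs).
    Returns one of the three audit outcomes:
      'keep'    - most PRs meet the target; the rule works
      'enforce' - compliance is mixed; add a CI gate
      'revise'  - the target is rarely met; lower it or remove the rule
    """
    # Fraction of PRs that meet the rule's target.
    met = sum(1 for v in actual_values if v >= target) / len(actual_values)
    if met >= 0.80:
        return "keep"
    if met >= 0.40:
        return "enforce"
    return "revise"
```

Running this over the scenario above (target 0.80, actual coverage hovering near 0.55) returns 'revise', matching the text's diagnosis of an aspiration rather than a rule.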
Audit Output: The Action Plan
The audit produces a categorized action list. Rules to update (stale references, vague wording, rigid constraints): typically 3-5 rules per audit. Rules to add (coverage gaps identified from review comments and team feedback): typically 3-5 rules per audit. Rules to remove (ineffective, universally ignored, or no longer relevant): typically 1-3 rules per audit. Rules to keep (working well, no changes needed): the majority. AI rule: 'A healthy audit: updates 20%, adds 10%, removes 5%, and keeps 65%. If every rule needs changing, the initial rule file was poorly written. If no rules need changing, the audit was too superficial.'
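The healthy-audit heuristic doubles as a sanity check on the action plan itself. A sketch; the churn thresholds and the decision to measure churn against the pre-audit rule count are assumptions, since the heuristic does not specify a denominator:

```python
def audit_health(updated, added, removed, kept):
    """Classify an audit's action plan against the healthy-audit heuristic.

    The heuristic: roughly 20% updated, 10% added, 5% removed, 65% kept.
    Here churn = (updated + removed) / original rule count; additions are
    reported but excluded from the denominator (an assumed convention).
    """
    total = updated + removed + kept  # rules that existed before the audit
    churn = (updated + removed) / total
    if churn == 0:
        return "too superficial: no rules changed"
    if churn > 0.50:
        return "suspect: over half the rules changed"
    return f"healthy: {updated} updated, {added} added, {removed} removed, {kept} kept"
```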
- Step 1: staleness check. Search codebase for rule references. Flag rules citing unused libraries or patterns
- Step 2: coverage gaps. Review last month's PR comments. Ask the team. Compare rules vs actual conventions
- Step 3: specificity. Vague rules: add examples. Rigid rules: add exception guidance. Ignored rules: enforce or remove
- Override analysis: >20% override rate = candidate for revision. Investigate the why before changing
- Effectiveness: compare rule targets vs actual performance. Adjust targets or add enforcement
- Action plan: update 3-5, add 3-5, remove 1-3. Implement within 2 weeks of the audit
- Cadence: first audit at 3 months, then quarterly. Post-migration audits for technology changes
- Time investment: 1-2 hours per audit. The highest-ROI maintenance activity for AI rules