Tutorials

How to Sunset Unused AI Rules

Rules accumulate. Some stop being useful. Sunsetting: the process of identifying rules that no longer contribute, verifying they can be removed, and cleaning the rule file without losing institutional knowledge.

4 min read·July 5, 2025

35 rules. 3 are dead weight. The sunset process: identify, verify removal safety, remove, document in changelog, and communicate.

Dead rule identification, removal test, team dependency check, clean deletion, and changelog preservation

Rule Files Grow. Some Rules Stop Contributing.

The rule lifecycle: a rule is written to address a specific need, used for months or years, and eventually: the need disappears (the technology it covers is deprecated), the rule becomes redundant (another rule covers the same convention better), or the rule is never followed (the AI ignores it — the wording is too vague or conflicting). These unused rules: consume context window space, clutter the rule file (making it harder to read and maintain), and may confuse the AI (conflicting with active rules or referencing deprecated patterns).

The sunset principle: every rule should earn its place. A rule that is not contributing: should be removed. But removal must be: verified (confirm the rule is not silently preventing issues), documented (record in the changelog why the rule was removed), and communicated (notify the team so nobody is surprised). The sunset process: ensures removal is safe and informed, not hasty and risky.

When to sunset: during the quarterly rule review (a standing cleanup pass), when the impact score falls below the threshold (from the scoring tutorial — rules with composite score below 10 out of 125), when the technology the rule covers is fully deprecated (no code in the codebase uses the old pattern), and when the rule has been overridden by a more specific rule (the general rule is now redundant).
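The sunset triggers above can be codified. A minimal sketch, assuming a hypothetical `Rule` record whose fields (`impact_score`, `ai_compliance`, `covers_deprecated`, `superseded_by`) you would populate from your own tracking; none of these names come from a real tool:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    name: str
    impact_score: int          # composite score, 1-125 (from the scoring tutorial)
    ai_compliance: float       # fraction of test prompts that reflect the rule
    covers_deprecated: bool    # the technology it covers is gone from the codebase
    superseded_by: Optional[str] = None  # a more specific rule, if any

def sunset_triggers(rule: Rule, score_threshold: int = 10) -> list:
    """Return every sunset trigger that applies; an empty list means keep."""
    triggers = []
    if rule.impact_score < score_threshold:
        triggers.append("impact score below threshold")
    if rule.ai_compliance == 0:
        triggers.append("zero AI compliance")
    if rule.covers_deprecated:
        triggers.append("covers deprecated technology")
    if rule.superseded_by:
        triggers.append("superseded by " + rule.superseded_by)
    return triggers
```

A rule like `Rule("lodash utilities", 4, 0.0, True)` fires three triggers at once; any non-empty result means investigate, not auto-delete.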

Step 1: Identify Sunset Candidates (15 Minutes)

Signals that a rule is a sunset candidate: zero AI compliance (the AI never generates code that shows evidence of this rule — tested with a prompt), high override rate (developers consistently override — the rule does not match practice), references deprecated technology (mentions a library, framework version, or pattern no longer used), redundancy (another rule covers the same convention with more specificity), and zero impact (the rule addresses a concern that no longer exists in the codebase).

The quick scan: read each rule and ask: is this rule active? Can I find evidence of this rule in recent AI-generated code or recent code reviews? If the answer is 'no' for both: the rule is a sunset candidate. For a 35-rule file: this scan takes 15 minutes. The result: 2-5 sunset candidates per quarterly review (a healthy rule file has 5-15% candidates). Zero candidates: either the file is perfect (unlikely) or the scan was too superficial.

Automated identification: use the impact scoring system (Reach × Compliance × Value). Rules with composite scores below 10 (out of 125): automatic sunset candidates. Alternatively: check the AI's response to a comprehensive prompt and note which rules are NOT reflected in the output. The unreflected rules: candidates for sunset (the AI does not follow them). AI rule: 'Impact score below 10 = sunset candidate. Zero AI compliance = sunset candidate. References deprecated tech = sunset candidate. Any of these: triggers investigation.'
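The composite threshold works out as follows. A sketch assuming each dimension is scored 1-5 as in the scoring tutorial, so the product ranges 1-125; the function names are illustrative, not from any tool:

```python
def composite_score(reach: int, compliance: int, value: int) -> int:
    """Reach × Compliance × Value, each scored 1-5, so the composite is 1-125."""
    for dimension in (reach, compliance, value):
        if not 1 <= dimension <= 5:
            raise ValueError("each dimension is scored 1-5")
    return reach * compliance * value

def is_sunset_candidate(reach: int, compliance: int, value: int,
                        threshold: int = 10) -> bool:
    """A composite score below the threshold flags the rule for investigation."""
    return composite_score(reach, compliance, value) < threshold
```

A rule scored reach 3, compliance 1, value 2 has a composite of 6: below 10, so it is flagged. A rule scored 4, 3, 4 has a composite of 48: safely above the line.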

💡 2-5 Sunset Candidates per Quarterly Review Is Healthy

Zero candidates: either the file is perfect (possible but rare at 35+ rules) or the scan was too superficial (more likely). 10+ candidates: the file was not maintained — many rules drifted into irrelevance. 2-5 candidates: a healthy rule file where most rules are active and a few naturally aged out. If you consistently find 2-5: the quarterly review is working. If you consistently find 0: scan more carefully.

Step 2: Verify Removal Safety

Before removing: verify the rule is not silently preventing issues. The removal test (from the impact tracking tutorial): temporarily remove the rule. Run 3 test prompts that would have been affected. Compare the output: with the rule (previous output) vs without the rule (new output). If the output is identical: the rule was not affecting AI behavior (safe to remove). If the output is worse: the rule was contributing (do not remove — instead, investigate why it scored low).
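The removal test can be scripted. A minimal sketch: `generate(prompt, rules)` is a placeholder for whatever call produces AI output from a prompt plus a rule set, and outputs are compared verbatim (in practice a looser diff may be more useful):

```python
def removal_test(rule, remaining_rules, prompts, generate):
    """Compare AI output with and without a rule across test prompts.

    `generate(prompt, rules)` stands in for your actual AI call; it is a
    placeholder, not a real API.
    """
    changed = [
        p for p in prompts
        if generate(p, remaining_rules + [rule]) != generate(p, remaining_rules)
    ]
    if not changed:
        return "identical"  # the rule was not affecting behavior: safe to remove
    return f"changed on {len(changed)} of {len(prompts)} prompts: investigate"
```

Note the asymmetry: "identical" supports removal, but "changed" does not automatically mean keep; the changed outputs still need a human judgment on whether they are worse.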

The team check: ask the team: 'We are considering removing [rule description]. Does anyone rely on this rule or know of a reason to keep it?' The question: surfaces hidden dependencies. A developer might say: 'That rule prevents a specific bug in the payment processing module — I added it after an incident.' Without the team check: the rule is removed. The bug: recurs. With the team check: the dependency is identified and the rule is kept (or the dependency is addressed and the rule is safely removed).

AI rule: 'Two checks before removal: the removal test (does the AI output change?) and the team check (does anyone depend on this rule?). Both must pass. If the AI output changes for the worse: the rule is contributing — do not remove. If a team member identifies a dependency: investigate before removing.'

⚠️ The Removal Test Catches Rules That Silently Prevent Bugs

The rule looks unused: zero compliance, zero review comments. But: the rule prevents the AI from generating a specific anti-pattern. When the rule is removed: the AI generates the anti-pattern (the rule was suppressing bad output, not visibly producing good output). The test output: worse than before. The rule: was silently contributing by preventing bad code, even though it never visibly generated good code. The removal test: catches this invisible contribution.

Step 3: Remove, Document, and Communicate

Clean removal: delete the rule entirely from the rule file. Do not leave a comment ('removed rule about lodash' — this clutters the file). Do not strike through (deprecated rules use strikethrough; removed rules are deleted). The rule file: should only contain active rules. Removed rules: preserved in the changelog, not in the rule file.

Changelog documentation: 'v2.6.0 — Removed: lodash utility function rule. Reason: all lodash usage has been migrated to native alternatives. The rule has been unused for 6+ months (zero AI compliance, zero review comments). Impact: none expected — the native alternatives rule (added in v2.3) covers the same area.' The changelog entry: preserves the institutional knowledge (why the rule existed, why it was removed, and what replaced it). A developer who encounters old lodash code: reads the changelog and understands the migration history.
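If removals are frequent, the entry format can be templated so no field is forgotten. A sketch using a plain-hyphen variant of the format above; the function name is illustrative:

```python
def removal_changelog_entry(version, rule, reason, replacement=None):
    """Format a changelog entry that preserves why a rule was removed.

    Every entry records what was removed, why, and what (if anything)
    replaced it, so the institutional knowledge outlives the rule.
    """
    entry = f"v{version} - Removed: {rule}. Reason: {reason}."
    if replacement:
        entry += f" Impact: none expected - {replacement} covers the same area."
    else:
        entry += " Impact: none expected - the concern no longer exists."
    return entry
```

The `replacement` field is the one most often skipped when entries are written by hand, and it is exactly the field a future developer reading old code will want.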

Communication: Slack message: 'In the next rule update (v2.6.0): removing the lodash utility rule. Reason: all lodash usage has been migrated. If you have concerns: reply by Friday.' The communication: gives the team a final opportunity to object before the removal is deployed. Most of the time: no objections (the rule was already inactive). Occasionally: a team member identifies a use case that was missed. AI rule: 'Communicate removals before deploying. A 2-day objection window: costs nothing and catches the rare case where a rule has a hidden dependency.'

ℹ️ The Changelog Preserves What the Rule File Forgets

The rule file: only contains active rules. Removed rules: gone from the file. But: the knowledge of why the rule existed and why it was removed is valuable. A developer encounters old lodash code and wonders: 'Did we ever have a rule about this? Why was it removed?' The changelog: 'v2.6.0: Removed lodash rule. Reason: migrated to native alternatives. The native alternatives rule (v2.3) covers this area.' The developer: has the complete history without the rule file being cluttered with historical entries.

Rule Sunset Summary

Summary of sunsetting unused AI rules.

  • Signals: zero AI compliance, high override rate, deprecated tech references, redundancy, zero impact
  • Quick scan: read each rule and ask 'is this active?' 15 minutes for a 35-rule file. 2-5 candidates typical
  • Automated: impact score below 10/125 = candidate. Zero AI compliance = candidate
  • Removal test: temporarily remove. Run 3 prompts. Output identical = safe. Output worse = keep the rule
  • Team check: ask if anyone depends on the rule. Hidden dependencies surface through this question
  • Clean removal: delete entirely. No comments, no strikethrough. Only active rules in the file
  • Changelog: record what was removed, why, and what replaced it. Preserves institutional knowledge
  • Communication: Slack message with 2-day objection window before deploying the removal