Lint Rules Like You Lint Code
You lint your code to catch unused variables, inconsistent formatting, and potential bugs. Why not lint your rules? Common rule quality issues: vague rules ('handle errors properly' — how?), prohibitions without alternatives ('do not use any' — use what instead?), missing rationale (the rule exists but nobody knows why), conflicting rules (two rules that give contradictory guidance), and structural inconsistencies (some rules use bullet format, others use paragraphs, headings are inconsistent). A rule linter: catches these issues automatically.
The rule linting approach: define quality checks (what makes a good rule?), run them against the rule file (automated or semi-automated), and report issues (with suggestions for fixing them). The linter: runs as part of the rule authoring workflow (before publishing) and periodically (during quarterly reviews). It catches: issues that human reviewers miss and issues that accumulate over time as rules are added by different authors.
The value: a linted rule file is more effective than an unlinted one. Vague rules: the AI interprets them differently each time (inconsistent output). Prohibitions without alternatives: the AI avoids the bad pattern but may choose another bad pattern (it does not know what to use instead). Conflicting rules: the AI picks one arbitrarily (unpredictable output). The linter: catches all three before they affect AI behavior.
Step 1: Define Quality Checks
Check 1 — Vagueness detection: flag rules that use vague words without specifics. Vague indicators: 'properly,' 'appropriately,' 'correctly,' 'as needed,' 'when possible,' 'cleanly.' Example: 'Handle errors properly' → flag: 'Rule uses vague word "properly" without specifying the error handling pattern.' Suggested fix: 'Replace with specific error handling convention (Result pattern, try-catch with specific error classes, etc.).' AI rule: 'Vague words: the most common rule quality issue. The linter flags them. The author: replaces with specifics.'
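A minimal sketch of this check in Node.js (the word list and the shape of the warning objects are assumptions; extend both for your team):

```javascript
// Flag rules containing vague words without specifics.
// The word list is an assumption: add your team's own vague indicators.
const VAGUE_WORDS = /\b(properly|appropriately|correctly|as needed|when possible|cleanly)\b/i;

function checkVagueness(lines) {
  const warnings = [];
  lines.forEach((line, i) => {
    const match = line.match(VAGUE_WORDS);
    if (match) {
      warnings.push({
        line: i + 1,
        issue: `Vague word "${match[1]}" without specifics`,
        fix: 'Replace with the specific convention or pattern',
      });
    }
  });
  return warnings;
}
```

Run against 'Handle errors properly' it flags line 1; run against a rule that names the actual error handling pattern it stays silent.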
Check 2 — Prohibition without alternative: flag rules that say 'do not' or 'never' without providing what to do instead. Example: 'Never use any type' → flag: 'Prohibition without alternative. What should be used instead of any?' Suggested fix: 'Add alternative: use unknown with type guards, or explicit types.' This check: ensures every prohibition pairs with a positive alternative. AI rule: 'Prohibitions without alternatives: the second most common issue. The linter catches them. The author adds the alternative.'
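A sketch of this check, with one deliberate deviation from the naive keyword list: 'use' alone is too noisy as an alternative marker, because the prohibited pattern itself often reads 'never use X'. This version requires 'instead' or 'prefer' within the same line or the two lines after it:

```javascript
// Flag prohibition rules ("never", "do not") with no alternative nearby.
// ALTERNATIVE omits the bare word "use": "Never use any" would otherwise
// mark itself as already having an alternative.
const PROHIBITION = /\b(never|do not|don't)\b/i;
const ALTERNATIVE = /\b(instead|prefer)\b/i;

function checkProhibitions(lines) {
  const warnings = [];
  lines.forEach((line, i) => {
    if (!PROHIBITION.test(line)) return;
    // Look for an alternative on this line or the next two.
    const window = lines.slice(i, i + 3).join(' ');
    if (!ALTERNATIVE.test(window)) {
      warnings.push({
        line: i + 1,
        issue: 'Prohibition without alternative: what should be used instead?',
      });
    }
  });
  return warnings;
}
```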
Check 3 — Potential conflicts: flag rules that use opposing keywords in the same file. Pairs to check: class/function, throw/return, sync/async, default export/named export, any/unknown. When both appear: flag for manual review. Not every pair is a conflict (class for NestJS + function for React is a scoped difference, not a conflict). But the flag: prompts the author to verify. AI rule: 'Conflict detection: automated flagging of opposing keywords. Human review: determines if the flag is a true conflict or a scoped difference.'
Prompt: 'Review this CLAUDE.md. Flag any rules you would find ambiguous when generating code.' The AI: flags rules like 'handle errors properly' (ambiguous — which error handling pattern?) and 'use appropriate naming' (ambiguous — what is the naming convention?). The AI is the consumer of the rules. Its confusion: reveals exactly the rules that will produce inconsistent output. The AI-assisted lint: zero code, zero development, and remarkably effective.
Step 2: Implement the Linter
Simple script approach: a Node.js script that reads the CLAUDE.md, runs regex-based checks, and outputs warnings. The script: 50-100 lines of code. Checks: vague word detection (regex for 'properly|appropriately|correctly|as needed'), prohibition without alternative (regex for 'never|do not|don't' not followed by 'instead|use|prefer' within 2 lines), and structural consistency (every ## heading has at least one bullet point underneath). Output: list of warnings with line numbers and suggested fixes.
AI-assisted approach: ask the AI to lint its own rules. Prompt: 'Review this CLAUDE.md for quality issues: vague rules, prohibitions without alternatives, potential conflicts, and inconsistent structure. For each issue: identify the line, describe the problem, and suggest a fix.' The AI: remarkably effective at identifying quality issues in rules because it is the consumer of those rules. It knows what is ambiguous (because it would interpret it ambiguously). This approach: zero script development needed.
CI integration: add the linter to the CI pipeline for the rules repository (or any repo containing a CLAUDE.md). The linter: runs on every PR that modifies the rule file. Warnings: displayed in the PR checks. Errors (critical issues like prohibitions without alternatives): block the PR. This ensures: no quality issues are introduced in new rule additions. AI rule: 'The AI-assisted approach is the quickest to set up (zero code). The script approach is more consistent (same checks every time). Both work. Choose based on team preference.'
The word 'properly': appears in more rule files than any other vague indicator. 'Handle errors properly.' 'Validate inputs properly.' 'Document code properly.' Each one: the AI interprets differently each time. 'Properly': means something different to every person and every AI prompt. Replace with specifics: 'Handle errors: catch with AppError class, return structured { success, error } response, log the full error server-side.' Now: the AI generates the exact pattern. 'Properly': eliminated.
Step 3: Advanced Quality Checks
Rationale coverage: check that rules in the Critical Rules and Coding Conventions sections have a 'Why:' or rationale line. Rules in these sections: should explain their reasoning. Formatting rules: may not need rationale (self-evident). The check: flags rules without rationale in high-impact sections. AI rule: 'Rationale coverage: not required for every rule. Required for: security rules, error handling rules, architectural patterns. Optional for: naming conventions, formatting preferences.'
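One way this check might be sketched. The section names and rationale markers ('Why:', 'because') are assumptions, and a 'Why:' placed on the line after a bullet would need a slightly smarter scan:

```javascript
// Flag bullet rules in high-impact sections that carry no rationale.
const HIGH_IMPACT = /^## (Critical Rules|Coding Conventions)/;
const RATIONALE = /\bwhy:|\bbecause\b/i;

function checkRationale(lines) {
  const warnings = [];
  let inHighImpact = false;
  lines.forEach((line, i) => {
    // Track which section we are in; only high-impact sections require rationale.
    if (line.startsWith('## ')) inHighImpact = HIGH_IMPACT.test(line);
    if (inHighImpact && line.trimStart().startsWith('- ') && !RATIONALE.test(line)) {
      warnings.push({ line: i + 1, issue: 'High-impact rule without rationale' });
    }
  });
  return warnings;
}
```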
Specificity scoring: rate each rule's specificity on a simple numeric scale (for example, 0-3). High specificity: 'Use Zod schemas for input validation on all API route handlers. Schema defined in the same file as the handler. Validate body, query params, and path params.' Low specificity: 'Validate inputs.' The score: flags rules below a specificity threshold for improvement. Implementation: word count (longer rules tend to be more specific), presence of examples (rules with examples are more specific), and scope indicators (rules that specify where they apply are more specific).
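A crude scoring function built from those three signals. The thresholds and keyword lists are assumptions to tune against your own rule file:

```javascript
// Score a rule 0-3 from the three specificity signals.
function specificityScore(rule) {
  let score = 0;
  if (rule.split(/\s+/).length >= 12) score += 1; // length signal: longer rules tend to be specific
  if (/e\.g\.|for example|`/.test(rule)) score += 1; // example signal
  if (/\b(all|every)\b.+\b(handlers?|components?|routes?|files?|modules?)\b/i.test(rule)) {
    score += 1; // scope signal: the rule says where it applies
  }
  return score; // 0 = vague, flag for improvement; 3 = highly specific
}
```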
Freshness check: flag rules that reference specific library versions or framework versions that may be outdated. Pattern: match 'v[0-9]', 'version [0-9]', specific package names against the project's package.json. If the rule references TypeScript 5.3 but the project uses TypeScript 5.5: flag as potentially stale. AI rule: 'The freshness check: automated staleness detection. It does not mean the rule is wrong — just that it references a version worth verifying. Run after every major dependency update.'
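A sketch of the version comparison. The list of tracked names is a hypothetical example; map the display names your rules actually use to their npm package names:

```javascript
// Compare version references in rule text against package.json.
// TRACKED maps display names in prose to npm package names (assumption).
const TRACKED = { typescript: 'typescript', react: 'react' };

function checkFreshness(ruleText, packageJson) {
  const deps = { ...packageJson.dependencies, ...packageJson.devDependencies };
  const warnings = [];
  for (const [, name, version] of ruleText.matchAll(/\b(typescript|react)\s+v?(\d+\.\d+)/gi)) {
    const installed = deps[TRACKED[name.toLowerCase()]];
    if (installed && !installed.includes(version)) {
      warnings.push({
        issue: `Rule references ${name} ${version}; project uses ${installed}. Verify the rule.`,
      });
    }
  }
  return warnings;
}
```

A flag here does not mean the rule is wrong, only that it references a version worth verifying, as noted above.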
The linter flags: 'class' and 'function' both appear in the rules. Is this a conflict? Not necessarily: 'NestJS controllers: class-based with decorators. React components: functional with hooks.' The keywords are opposing but the rules are scoped — no conflict. The linter: flags for human review. The human: determines if it is a conflict (same scope, contradictory) or a scoped difference (different scope, complementary). Automated flagging + human judgment = effective conflict detection.
Rule Linting Summary
Summary of linting AI rules for consistency.
- Concept: lint rules like code. Catch vague, conflicting, and incomplete rules automatically
- Check 1: vagueness — flag 'properly,' 'appropriately,' 'as needed.' Suggest specifics
- Check 2: prohibition without alternative — flag 'never/don't' without 'instead/use'
- Check 3: potential conflicts — flag opposing keywords (class/function, throw/return)
- Implementation: simple script (50-100 lines) or AI-assisted (zero code, prompt-based)
- CI integration: run on PRs that modify rule files. Warnings for minor issues. Block for critical
- Advanced: rationale coverage, specificity scoring, freshness check against package.json versions
- AI-assisted linting: ask the AI to review its own rules. It knows what is ambiguous — it is the consumer