Using AI to Write Rules That Guide AI
The meta-level: using Claude Code to write the CLAUDE.md that will guide Claude Code. This is not circular; it is practical. The AI: understands what makes a good rule (it processes rules daily), can analyze your codebase to identify conventions (it reads code fluently), and can generate well-structured rule text (it writes markdown natively). The human: provides the intent (which conventions to encode), validates the output (do the rules match the team's practice?), and makes the judgment calls (which conventions to prioritize). In short: the AI handles the mechanical writing; the human handles the strategic decisions.
AI-assisted rule authoring is faster than manual writing for: the initial rule set (the AI generates a complete draft in 2 minutes; the human refines it in 15), codebase extraction (the AI identifies patterns across files faster than manual reading), rule refinement (the AI suggests more specific wording when a rule is too vague), and gap identification (the AI identifies areas not covered by existing rules). For all of these: the AI produces a draft. The human: validates, refines, and approves.
The workflow: provide context to the AI (your tech stack, your conventions, or your existing codebase files) → prompt the AI to generate rules → review the AI's draft → refine based on your team's specific needs → test the rules with benchmark prompts → deploy. The AI: saves 60-80% of the authoring time. The human: supplies the remaining 20-40% as judgment, validation, and team-specific customization. AI rule: 'AI writes the draft. You write the decisions. The combination: faster and better than either alone.'
Step 1: Prompts for Generating Rule Drafts
Prompt for a complete rule set: 'I am setting up AI coding rules for a [Next.js 16 App Router / NestJS / Go] project. The project uses [list your key technologies: Drizzle ORM, Tailwind CSS, Vitest, etc.]. Generate a CLAUDE.md with: project context, coding conventions (naming, error handling, imports, async patterns), testing standards, and security rules. Use the what-why-when format for each rule. Include 20-25 rules.' The AI: generates a complete draft in the correct format. You: review each rule against your team's actual practice.
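A draft generated from this prompt might contain rules like the following. This is an illustrative sketch, not output from a real run; the specific conventions are placeholders to be replaced with your team's actual practice:

```markdown
## Naming

**What:** React components in PascalCase, hooks prefixed with `use`,
utility files in kebab-case.
**Why:** Matches React community conventions and keeps imports predictable.
**When:** All new files. Do not rename existing files outside a dedicated
refactor.
```

Each rule follows the what-why-when format the prompt requested, which is what makes the draft reviewable: you can check each "what" against your codebase and each "why" against your team's actual reasoning.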
Prompt for codebase-aware rules: 'Here are 5 files from our codebase: [paste or reference files]. Analyze the coding conventions these files share. Write AI rules that encode these conventions. Format: CLAUDE.md with sections for naming, error handling, testing, and architecture. Each rule: what to do, why (based on the pattern in the code), and when it applies.' The AI: extracts conventions from your actual code and writes rules that match. The rules: immediately relevant because they are derived from your codebase.
Prompt for filling gaps: 'Here is our current CLAUDE.md: [paste content]. Analyze it for gaps: conventions that are common in [Next.js / NestJS / Go] projects but not covered in our rules. Suggest 5-10 additional rules that would improve AI output quality for our tech stack.' The AI: identifies missing rules based on its knowledge of the tech stack's best practices. You: evaluate each suggestion — adopt the ones that match your team's practice, skip the ones that do not. AI rule: 'The gap-filling prompt: the most underused AI-assisted authoring technique. The AI knows what conventions your tech stack typically has. It identifies what your rules are missing.'
'Here are our current rules. What conventions common in Next.js projects are we missing?' The AI: 'You do not have rules for: Server Component vs Client Component decision criteria, metadata/SEO conventions, image optimization with next/image, and route group organization.' Four gaps: identified in 10 seconds. Each gap: would take minutes to discover by manually comparing your rules against the Next.js documentation. The gap-filling prompt: surfaces missing rules you did not know you needed.
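A rule adopted from the first identified gap might look like this. The wording below is a hypothetical draft; validate it against how your team actually splits Server and Client Components before deploying it:

```markdown
## Server vs Client Components

**What:** Default to Server Components. Add `'use client'` only when the
component needs state, effects, event handlers, or browser APIs.
**Why:** Server Components ship no JavaScript to the client and can fetch
data directly.
**When:** All components under `app/`. Interactive leaf components are the
usual exception.
```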
Step 2: Prompts for Refining Existing Rules
Prompt for increasing specificity: 'This rule is too vague: "Handle errors properly." Rewrite it to be specific enough that an AI can generate the correct error handling pattern. Our project uses: [TypeScript, Result pattern from @/lib/result, AppError class for domain errors]. Include a code example.' The AI: generates a specific, actionable rule with example. The vague rule: replaced with a concrete, AI-followable convention. This technique: fixes the most common rule quality issue (vagueness) in 30 seconds per rule.
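The AI's rewrite of the vague rule might look like the following. The `@/lib/result` path and `AppError` class come from the prompt above; the `ok`/`err` helper names, the `User` type, and the Drizzle query are illustrative assumptions about what such a codebase might contain:

```markdown
## Error handling

**What:** Service functions return `Result<T, AppError>` from
`@/lib/result`; never throw for expected failures.
**Why:** The Result pattern forces callers to handle the failure branch;
the compiler catches missed cases.
**When:** All service-layer functions. Unexpected failures (bugs) may
still throw.

Example:

    import { ok, err, type Result } from '@/lib/result';
    import { AppError } from '@/lib/errors';

    async function getUser(id: string): Promise<Result<User, AppError>> {
      // Expected failure (missing row) becomes an err value, not a throw.
      const user = await db.query.users.findFirst({ where: eq(users.id, id) });
      if (!user) return err(new AppError('NOT_FOUND', `User ${id} not found`));
      return ok(user);
    }
```

The difference from 'Handle errors properly': an AI given this rule knows the return type, the import path, and the throw-vs-return boundary without guessing.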
Prompt for detecting conflicts: 'Here is our CLAUDE.md: [paste content]. Identify any rules that contradict each other or could cause confusion about which pattern to follow. For each conflict: describe the two rules, explain why they conflict, and suggest a resolution (scope narrowing, priority ordering, or merge).' The AI: identifies conflicts that the human author missed (the AI processes all rules simultaneously, while the human reads them sequentially and may miss cross-references).
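The AI's conflict report might look like this. The two rules and the suggested resolution are fabricated for illustration, not output from a real run:

```markdown
**Conflict:** "All data fetching happens in Server Components"
(Architecture) vs. "Use SWR for data fetching" (Conventions).
**Why it conflicts:** The first rule implies server-side fetching; SWR is
a client-side hook. An AI generating a data-fetching component cannot
satisfy both.
**Resolution:** Narrow the second rule's scope: "Use SWR only in Client
Components that need revalidation or polling."
```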
Prompt for improving rationale: 'This rule lacks rationale: "Use named exports instead of default exports." Add a Why section explaining: the technical reason for this convention, what problem it solves, and when it applies (scope). Keep the rationale to 1-2 sentences.' The AI: adds a specific, defensible rationale. The rule: transforms from an arbitrary mandate into a justified decision. This technique: adds rationale to rules that were written without it. AI rule: 'For each refinement prompt: the AI provides a draft. You: accept, modify, or reject. The AI saves writing time. You maintain quality control.'
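The rewritten rule might come back like this. The Why wording and the Next.js exemption are illustrative; your team's actual rationale may differ and should replace it:

```markdown
## Exports

**What:** Use named exports instead of default exports.
**Why:** Named exports keep the identifier consistent across the codebase,
so renames and auto-imports stay reliable; a default export can be
imported under any name.
**When:** All modules. Framework files that require a default export
(e.g. Next.js `page.tsx`) are exempt.
```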
A useful self-test during refinement: the AI generates a rule. You then test: 'Using this rule, create a function that handles database errors.' If the AI follows the rule it just wrote: the rule is well-worded (clear enough to act on). If the AI does NOT follow the rule it just wrote: the rule is ambiguous (even its author cannot follow it consistently). This self-test: the fastest way to validate rule quality. If the author cannot follow it: no AI will.
Step 3: Validate AI-Generated Rules
The validation loop: after the AI generates or refines a rule, test it immediately. Prompt: 'Using the rules in this CLAUDE.md, create a [relevant code artifact — an API endpoint, a component, a test].' Evaluate: does the AI follow the newly generated rule? If yes: the rule is well-written (the AI that wrote it can also follow it). If no: the rule's wording is ambiguous (even the AI that wrote it cannot follow it consistently). Refine and re-test.
Team review of AI-generated rules: the AI's draft is a starting point, not the final product. Share the draft with the team: 'The AI generated these rules based on our codebase. Do they accurately describe our conventions? Is anything missing? Is anything incorrect?' The team: validates against their experience. The AI: may have identified patterns that are coincidental (not conventions) or missed patterns that are obvious to the team but not visible in 5 files.
The human-AI partnership: the AI excels at: generating structured text quickly, identifying patterns across files, suggesting improvements based on best practices, and detecting conflicts between rules. The human excels at: deciding which conventions to encode, validating against team practice, making judgment calls about edge cases, and ensuring rules align with the team's values and direction. Together: a rule set that is comprehensive (AI's breadth) and correct (human's judgment). AI rule: 'The AI is the author. The human is the editor. The best rules: come from this partnership.'
The AI generates 25 rules in 2 minutes. 20 are excellent. 3 encode patterns that are coincidental (not intentional conventions). 2 reference libraries the team does not use (the AI assumed based on the tech stack). Without team validation: all 25 are deployed, including the 5 incorrect rules. With team validation: the 5 incorrect rules are caught and removed. The AI saves 80% of authoring time. The team's 20% of validation: ensures correctness.