The "It's Not Working" Problem
You've written a CLAUDE.md or .cursorrules file. You've read the best practices. You've added specific, actionable rules. But when you start coding, the AI seems to completely ignore them. It generates default exports when you said named exports. It uses Prisma when you specified Drizzle. It writes class components when you explicitly said functional components only.
Before you blame the AI, know this: in 90% of cases, the rules are being read — they're just not being applied effectively. The issue is almost always one of five fixable problems: wrong file location, rules too vague, file too long, contradictory instructions, or a tool-specific parsing quirk.
This guide walks through each cause systematically. Check them in order — the most common cause is listed first, and each check takes under 60 seconds.
Check 1: File Location and Naming
The single most common reason rules don't work is that the file is in the wrong place or has the wrong name. Each AI assistant looks for a specific file in a specific location, and there's zero tolerance for variation.
Claude Code reads CLAUDE.md (capital letters, no dot prefix) from the repository root. Not claude.md (lowercase), not .claude.md (dotfile), not docs/CLAUDE.md (subdirectory). The root of the repo — the same directory as your package.json or go.mod.
Cursor reads .cursorrules (lowercase, with the leading dot) from the project root. Not cursorrules (no dot), not .cursor-rules (hyphenated), not .cursorrules.md (with extension). The dot matters — without it, Cursor won't detect the file at all.
GitHub Copilot reads .github/copilot-instructions.md from the .github directory. Not the repo root, not a copilot/ directory. It must be inside .github/ alongside your workflows.
1. Verify the exact file name (case-sensitive, dot prefix where required)
2. Verify the file is in the correct directory (root for Claude/Cursor, .github/ for Copilot)
3. Open the file in your editor and confirm it's not empty
4. If using a monorepo, make sure the file is at the workspace root, not a package root
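These checks are easy to script. A minimal sketch in Python — the file names and locations are the documented defaults described above; the function name and report format are my own:

```python
import os

# Expected rule-file names and locations, relative to the repository root.
# Names are case-sensitive, and the leading dot matters.
RULE_FILES = {
    "Claude Code": "CLAUDE.md",
    "Cursor": ".cursorrules",
    "GitHub Copilot": os.path.join(".github", "copilot-instructions.md"),
}

def check_rule_files(repo_root):
    """Report whether each tool's rule file is present and non-empty."""
    report = {}
    for tool, rel_path in RULE_FILES.items():
        path = os.path.join(repo_root, rel_path)
        if not os.path.isfile(path):
            report[tool] = "missing"
        elif os.path.getsize(path) == 0:
            report[tool] = "empty"
        else:
            report[tool] = "ok"
    return report
```

Run it from your repository root; anything reported as "missing" or "empty" is your likely culprit.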
Wrong file name or location accounts for ~40% of 'rules not working' issues. CLAUDE.md (no dot, capitals) in root. .cursorrules (with dot, lowercase) in root. copilot-instructions.md in .github/ folder.
Check 2: Rule Specificity
Vague rules produce vague compliance. When the AI 'ignores' a rule, it often means the rule was too general for the AI to apply consistently. Rules like 'write clean code' or 'follow best practices' don't change AI behavior because they don't describe a specific, measurable action.
Test your rules against this standard: could a junior developer read this rule and know exactly what to do without asking a follow-up question? If not, the rule needs to be more specific.
Before: 'Use modern JavaScript patterns.' This means nothing actionable — every JS pattern from the last 10 years is 'modern.' After: 'Use async/await for all asynchronous operations. Never use .then() chains or callbacks. For parallel operations, use Promise.all() with named variables.' Now the AI has three unambiguous decisions to make.
Rewrite your top 5 most important rules to be maximally specific. You'll likely see immediate improvement in AI output quality.
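A first pass at finding vague rules can be mechanical. The sketch below flags lines that lean on subjective wording; the phrase list is an illustrative assumption, not an exhaustive standard, and a human still makes the final call:

```python
# Subjective words that tend to appear in rules the AI can't act on.
# This list is a starting point, not a complete inventory.
VAGUE_PHRASES = ["clean code", "best practices", "modern", "properly", "appropriate"]

def flag_vague_rules(rules_text):
    """Return rule lines that rely on subjective wording instead of concrete actions."""
    flagged = []
    for line in rules_text.splitlines():
        lowered = line.lower()
        if any(phrase in lowered for phrase in VAGUE_PHRASES):
            flagged.append(line.strip())
    return flagged
```

Anything this flags is a candidate for the junior-developer test above: rewrite it until it names a specific, checkable action.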
Check 3: File Length and Attention Limits
AI models have practical attention limits. A 500-line rule file means the model is processing a large block of instructions before it even starts working on your code. Rules near the end of a very long file may receive less weight than rules at the beginning.
The sweet spot for most projects is 50-200 lines. Under 50 lines and you're probably missing important context. Over 200 lines and you're likely including rules that don't meaningfully affect AI output — verbose descriptions, unnecessary commentary, or rules that duplicate linter/formatter behavior.
If your rules genuinely exceed 200 lines, restructure instead of trimming. Claude Code supports subdirectory CLAUDE.md files — move framework-specific rules into frontend/CLAUDE.md and api/CLAUDE.md. For Cursor, consider the Cursor Rules directory format in newer versions.
A quick test: remove the bottom third of your rule file and see if AI output quality changes. If it doesn't, those rules weren't being applied effectively anyway.
The sweet spot is 50-200 lines. Under 50 misses context. Over 200 dilutes attention. Quick test: remove the bottom third — if output quality doesn't change, those rules weren't being applied.
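The line-count check can be automated too. A sketch, with the 50/200 thresholds taken from the guidance above and the verdict strings being my own labels:

```python
def rule_length_check(rules_text, low=50, high=200):
    """Count non-empty lines and apply the 50-200 line rule of thumb."""
    count = len([ln for ln in rules_text.splitlines() if ln.strip()])
    if count < low:
        return count, "may be missing context"
    if count > high:
        return count, "likely diluting attention"
    return count, "within the sweet spot"
```

Blank lines are excluded from the count, since they add no instructions for the model to weigh.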
Check 4: Conflicting Instructions
Contradictory rules are more common than you'd think, especially in files that have been edited by multiple people over time. When the AI receives conflicting instructions, its behavior becomes unpredictable — it might follow one rule in one context and the opposite rule in another.
Common conflicts: 'Use default exports for components' later contradicted by 'Always use named exports.' 'Keep functions under 20 lines' contradicted by 'Don't create helper functions for one-time operations.' 'Use inline styles for components' contradicted by 'Use Tailwind utility classes exclusively.'
Read your rule file from top to bottom and look for pairs of rules that could be interpreted as conflicting. When in doubt, remove one. A smaller set of consistent rules beats a larger set with internal contradictions.
- Search for 'always' and 'never' rules — do any contradict each other?
- Check formatting rules — are you specifying two different formatting approaches?
- Check import rules — default vs named exports is a common conflict
- Check testing rules — are mock and integration preferences consistent?
- Ask a teammate to read the file — fresh eyes catch conflicts you've gone blind to
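The first item on that list — cross-checking 'always' and 'never' rules — can be partially scripted. This sketch groups such rules by topic keyword; the keyword list is something you supply, and a flagged pair is only a candidate for conflict, not proof of one:

```python
from collections import defaultdict

def conflict_candidates(rules_text, keywords):
    """Group 'always'/'never' rules by keyword; any keyword hit twice deserves a manual read."""
    hits = defaultdict(list)
    for line in rules_text.splitlines():
        lowered = line.lower()
        if "always" in lowered or "never" in lowered:
            for kw in keywords:
                if kw in lowered:
                    hits[kw].append(line.strip())
    return {kw: lines for kw, lines in hits.items() if len(lines) > 1}
```

For example, passing keywords like "export", "format", or "mock" surfaces the exact rule pairs a teammate should read side by side.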
Check 5: Tool-Specific Parsing Quirks
Each AI assistant parses rule files slightly differently, and some formats that work in one tool are silently ignored by another.
Claude Code handles full markdown including tables, code blocks, and nested lists. But very long code blocks (50+ lines) in your CLAUDE.md may be treated as examples rather than rules — keep code snippets short (under 10 lines) and pair them with explicit instructions.
Cursor's .cursorrules works best with direct imperative statements. Long explanatory paragraphs are less effective than short bullet points. If you migrated rules from CLAUDE.md to .cursorrules without reformatting, try converting paragraphs into single-line imperatives.
GitHub Copilot's instructions file has the most limited parsing. Complex conditional rules ('When working in the /api directory, use Express patterns') may not be applied consistently. Stick to universal, unconditional rules for Copilot.
A Systematic Debugging Approach
When rules aren't working and you've checked all five causes above, use this systematic approach to isolate the problem.
Step 1: Create a minimal test. Write a rule file with exactly one rule: 'Always add a comment at the top of every new file that says: RULE TEST.' Ask the AI to create a new file. If the comment appears, the file is being read. If not, revisit Check 1 (file location).
Step 2: Escalate complexity. Add your rules back one section at a time. After each addition, test whether the AI follows the new rules. The section where compliance drops is where the problem lies — usually a vague rule, a conflict, or a length threshold.
Step 3: Compare outputs. Generate the same code with and without your rule file (rename it temporarily to disable it). If the outputs are identical, the file isn't being read at all. If they differ but not in the way you expect, the rules are being read but not interpreted as you intended — revisit Check 2 (specificity).
1. Test with a single obvious rule to confirm the file is being read
2. Add rules back one section at a time — test after each addition
3. Compare AI output with rules vs. without rules (rename file to disable)
4. If specific rules aren't working, rewrite them as shorter, more imperative statements
5. If nothing works, check the tool's documentation for recent format changes
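Step 3's rename-to-disable trick is easy to fumble by hand (it's simple to forget the rename back). A small helper that guarantees the rule file is restored even if your test errors out — the `.disabled` suffix is an arbitrary choice:

```python
import os
from contextlib import contextmanager

@contextmanager
def rules_disabled(path):
    """Temporarily rename a rule file so the assistant can't see it, then restore it."""
    disabled = path + ".disabled"  # arbitrary suffix; anything the tool won't match works
    os.rename(path, disabled)
    try:
        yield
    finally:
        os.rename(disabled, path)
```

Usage: wrap your "generate without rules" run in `with rules_disabled("CLAUDE.md"):` and compare the output against a run with the file in place.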
Add a single obvious rule like 'Add a comment // RULE TEST to the top of every new file.' If it appears, your file is being read. If not, it's a file location problem.