Setup and Getting Started FAQ
Q: How do I set up AI rules for my project? A: The rules file (CLAUDE.md for Claude Code, .cursorrules for Cursor, .github/copilot-instructions.md for Copilot) goes in the root of your repository. Copy the template from the shared rules repository (github.com/org/ai-rules) and add any team-specific rules. The AI reads this file automatically when generating code.
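A rules file is ordinary markdown or plain text. A minimal sketch of what the first few entries might look like (the rules shown are illustrative, not the organization's actual template):

```markdown
# AI Coding Rules (illustrative excerpt)

- Use async/await for all asynchronous code; avoid raw callbacks.
- Handle errors explicitly; never swallow exceptions silently.
- Log through the team's structured logger, not bare print/console calls.
- Every new module gets a colocated test file.
```

Short, imperative statements like these are easiest for the AI to follow and for reviewers to audit.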
Q: Do I need to install anything special? A: No additional installation beyond your AI coding tool. Claude Code reads CLAUDE.md automatically. Cursor reads .cursorrules automatically. The rules file is a plain text file; no plugins, extensions, or configuration are needed. Just ensure the file is in the repo root and your AI tool is configured to use it.
Q: How long does setup take? A: 5-10 minutes. (1) Copy the organization's rule template to your repo root. (2) Add any team-specific rules. (3) Make a test prompt to verify the AI follows the rules. (4) Commit the rule file. If you encounter issues: check the troubleshooting section or ask in the #ai-standards Slack channel.
Q: Can I use AI rules with multiple AI tools? A: Yes. Create the rule file for each tool you use: CLAUDE.md for Claude Code, .cursorrules for Cursor, .github/copilot-instructions.md for Copilot. The content can be identical or tool-specific. Some teams maintain one source file and generate the tool-specific files from it.
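The "one source file" approach mentioned above can be automated with a small script. A minimal sketch in Python, assuming a canonical file named AI_RULES.md (that name is an assumption, not a standard) and the tool-specific filenames listed in this FAQ:

```python
"""Copy one canonical rules file to each tool-specific filename."""
from pathlib import Path

SOURCE = Path("AI_RULES.md")  # canonical rules file (assumed name)
TARGETS = [
    Path("CLAUDE.md"),                        # Claude Code
    Path(".cursorrules"),                     # Cursor
    Path(".github/copilot-instructions.md"),  # GitHub Copilot
]

def sync_rules(source: Path = SOURCE, targets=TARGETS) -> list[str]:
    """Read the canonical file and write its content to every target."""
    text = source.read_text(encoding="utf-8")
    for target in targets:
        target.parent.mkdir(parents=True, exist_ok=True)  # e.g. .github/
        target.write_text(text, encoding="utf-8")
    return [str(t) for t in targets]
```

Running `sync_rules()` as a pre-commit hook keeps the tool-specific copies from drifting apart.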
Daily Usage FAQ
Q: Do I need to do anything different when coding with rules? A: No. Code normally. The AI reads the rules automatically and applies them to all generated code. You may notice: more consistent naming, better error handling, and code that follows your team's patterns without explicit prompting. Review AI output as you normally would.
Q: The AI is not following a rule. What should I do? A: First: verify the rule file exists in the repo root and is correctly formatted. Second: check if the rule is clear and specific enough (vague rules produce vague results). Third: try rephrasing your prompt to reference the convention explicitly. If the issue persists: report it in the #ai-standards channel with: the rule, the prompt, and the AI's output.
Q: When should I override an AI rule? A: Override when the specific context makes the rule inappropriate. Examples: the rule says 'use async/await' but you are working in a callback-based legacy module that cannot be refactored. The rule says 'use the structured logger' but you are in a test helper where console is appropriate. When you override: add a brief comment explaining why. If you find yourself overriding the same rule frequently: propose a rule update.
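The "brief comment explaining why" can be as simple as this Python sketch (the function, rule name, and module situation are hypothetical):

```python
# Rule override: this legacy module's public API is callback-based and
# every caller depends on it, so the "use async/await" rule does not
# apply here. Tracked for a future refactor, not fixed in this change.
def fetch_user(user_id, callback):
    """Look up a user and hand the result to `callback` (legacy style)."""
    user = {"id": user_id, "name": "example"}  # stand-in for a real lookup
    callback(user)

results = []
fetch_user(42, results.append)
```

The comment gives reviewers (and the AI, on later edits) the context for why the rule was deliberately not followed.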
Q: How do I propose a change to a rule? A: Submit a PR to the rules repository (or your team's rule file for team rules). Include: what you want to change, why (the problem the current rule causes), and the proposed new rule text. For organization-wide rules: the governance board reviews proposals every two weeks. For team rules: the tech lead reviews.
When the AI generates unexpected code: do not just fix it; ask why. Prompt: 'Why did you use a class instead of a function for this service?' The AI explains its reasoning, often referencing a specific rule. If the reasoning is correct but you disagree: the rule may need updating. If the AI misinterpreted the rule: the rule wording is ambiguous. Asking 'why' turns every unexpected output into an opportunity to improve the rules.
Troubleshooting FAQ
Q: The AI generates code in a different style than my rules specify. A: Check: (1) Is the rule file in the correct location (repo root)? (2) Is the file named correctly for your AI tool? (3) Is the rule clear and unambiguous? (4) Are you working in a subdirectory that might have its own rules file? (5) Has the AI tool been restarted since the rule file was added? Most style issues are resolved by making the rule more specific.
Q: The AI ignores my rules entirely. A: This usually means the AI tool is not reading the file. Verify: the file exists at the repo root, the filename matches your tool's convention (CLAUDE.md, .cursorrules, etc.), and the file is not empty or malformed. For Claude Code: run 'claude --print-system-prompt' to verify the rules are loaded. For other tools: check the tool's documentation for rule file verification.
Q: A rule works for some files but not others. A: This can happen when: the rule is language-specific but the file is in a different language, the rule references a specific directory but you are working elsewhere, or the AI's context window does not include the full rule file for large files. Fix: make the rule's scope explicit ('For TypeScript files in src/: ...') and keep the total rule file under the tool's context limit.
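A sketch of what explicitly scoped rules might look like in the rule file (the paths and conventions are illustrative):

```markdown
## Scoped rules

- For TypeScript files in src/: prefer named exports over default exports.
- For Python files in scripts/: use argparse for command-line parsing.
- For test files (*_test.*): console output is acceptable; skip the
  structured-logger rule.
```

Stating the scope in the rule itself removes the ambiguity that causes a rule to apply in some files and not others.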
Q: How do I debug why the AI made a specific choice? A: Ask the AI directly: 'Why did you use X pattern instead of Y?' The AI can explain its reasoning, referencing which rule it followed. If the AI's reasoning is based on a rule you disagree with: the rule may need revision. If the AI misinterpreted the rule: the rule may need clearer wording.
New teams: start with 15-20 rules covering the biggest conventions. Starting too big (100 rules on day 1): developers do not read them all, many are untested, and the rule file is intimidating. Starting too small (3 rules): the AI still generates inconsistent code for uncovered patterns. 15-20 rules cover naming, error handling, testing, security, and the top framework patterns. Expand after the initial set is proven effective.
FAQ Quick Reference
Quick reference for the most common AI standards questions.
- Setup: copy template to repo root. No installation needed. 5-10 minutes
- Multiple tools: maintain tool-specific rule files (CLAUDE.md, .cursorrules) with same content
- Not following rules: check file location, naming, specificity. Report persistent issues to #ai-standards
- Override: when context makes the rule inappropriate. Add a comment. Propose update if frequent
- Propose changes: PR to rules repo with what, why, and proposed text. Team rules: tech lead reviews
- Rule count: start with 15-20. Most projects stabilize at 30-50. Readable in 5-10 minutes
- AI ignores rules: verify file exists, correct name, not empty. Check tool-specific verification command
- Debug AI choices: ask the AI 'why did you use X?' It explains its reasoning from the rules