Rule Optimization Tips (1-5)
Tip 1: Start your CLAUDE.md with the 3 most impactful conventions. Not all rules are equal. The top 3 that eliminate the most review comments: error handling pattern, import/export convention, and testing framework choice. Write these 3 first. The remaining rules: add incrementally based on review feedback. Tip 2: Include one code example per major convention. 'Use the Result pattern' — ambiguous. 'Use the Result pattern: const result = await getUser(id); if (!result.ok) return result; const user = result.data;' — unambiguous. The example: eliminates interpretation differences between the AI and your intent.
Tip 3: Organize rules by frequency of use. Put the most commonly needed conventions at the top of CLAUDE.md (the AI gives more weight to content it encounters first). Error handling, file structure, and naming conventions: top of file. Edge cases like deployment scripts and migration patterns: bottom of file. Tip 4: Use negative rules for common AI mistakes. 'Never use try-catch for control flow.' 'Never use default exports.' 'Never use any type.' Negative rules: prevent specific AI tendencies that you have observed. Positive rules tell the AI what TO do. Negative rules tell the AI what NOT to do. Both are necessary.
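Putting Tips 1-4 together, a hypothetical CLAUDE.md excerpt might look like this (the specific rules are illustrative, not prescriptive):

```markdown
## Error handling (most used — keep first)
Use the Result pattern, never try-catch for control flow:
`const result = await getUser(id); if (!result.ok) return result;`

## Imports/exports
Use named exports only. Never use default exports.

## Types
Never use the `any` type. Prefer explicit interfaces.
```

Note the ordering by frequency, the inline example, and the negative rules alongside the positive ones.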
Tip 5: Review and update rules every 2 weeks. After each sprint: check if any code review comments could have been prevented by a rule. If yes: add the rule. If a rule consistently produces awkward code: remove or revise it. The rules file: a living document that improves with every sprint. Two weeks: enough time to observe patterns without letting issues accumulate. AI rule: 'Rule optimization is the highest-leverage productivity activity in AI-assisted development. One hour spent improving rules: saves 10+ hours of review time in the next sprint. The investment: compounding returns.'
Prompt and Interaction Tips (6-10)
Tip 6: Describe the outcome, not the steps. Instead of: 'Create a form with email input, password input, submit button, and validation.' Try: 'Create a login form that validates email format and password strength, shows inline errors, and submits to /api/auth/login.' The outcome-focused prompt: lets the AI use your rules to decide the implementation. The step-by-step prompt: bypasses your rules because you already specified the implementation.
Tip 7: Use the AI to review its own output. After the AI generates code, ask: 'Review this code against our CLAUDE.md rules. List any violations.' The AI: finds its own inconsistencies. Common catches: missed error handling, wrong import pattern, or a convention it overlooked. Self-review: adds 30 seconds, catches 15-20% of issues before your review. Tip 8: Break complex features into sequential prompts. Instead of: 'Build the entire checkout flow.' Try: '1. Create the cart summary component. 2. Add the payment form component. 3. Create the order confirmation page. 4. Wire them together with the checkout flow.' Sequential prompts: produce higher-quality code because each step has focused context.
Tip 9: Re-prompt with specific feedback, not vague corrections. Instead of: 'This is wrong, try again.' Try: 'The error handling should use our Result pattern instead of try-catch. The import should be a named export, not default.' Specific feedback: produces targeted improvements. Vague feedback: produces random variations. Tip 10: Save effective prompts as templates. If you find a prompt pattern that consistently produces good results: save it. 'Create a new API endpoint for [resource] with CRUD operations, input validation, and error handling' — a reusable template that works for any resource. AI rule: 'Prompt effectiveness multiplies rule effectiveness. Good rules + good prompts = consistently excellent code. Good rules + poor prompts = inconsistent results. The combination: matters more than either individually.'
After the AI generates code, add one prompt: 'Review this code against our CLAUDE.md rules. List any violations.' Time: 30 seconds. The AI re-reads its own output against the rules and flags inconsistencies: a missed error handling pattern, a wrong import style, or a convention it overlooked on the first pass. Catch rate: 15-20% of issues that would have been flagged in code review. The 30-second investment: prevents a full review round-trip (typically 2-4 hours). One extra prompt: high-leverage quality gate.
Workflow Integration Tips (11-15)
Tip 11: Generate tests first, then implementation. Prompt: 'Write tests for a user registration function that validates email uniqueness, hashes the password, and returns the created user.' Then: 'Implement the function that makes these tests pass.' The AI: generates implementation that is test-driven by design. The tests: serve as the specification. The implementation: guaranteed to match the specification. Tip 12: Use AI for code review preparation. Before submitting a PR, ask the AI: 'Review this diff for potential issues, missing edge cases, and convention violations.' The AI: catches issues that would have been flagged in review. The PR: cleaner on first submission. Review rounds: reduced from 2-3 to 1.
Tip 13: Batch similar tasks for AI generation. Need to create 5 similar API endpoints? Generate them in sequence, not intermixed with other work. The AI: maintains context across similar tasks, producing more consistent results. Mixed context (endpoint, then UI component, then endpoint): produces less consistent output because the AI keeps switching conventions. Tip 14: Use AI to generate documentation alongside code. After generating a feature, prompt: 'Generate JSDoc comments for the public API of this module.' The AI: knows the code (it just generated it) and produces accurate documentation. Documentation written separately: often diverges from the actual implementation.
Tip 15: Automate rule validation in CI. Add a CI step that checks generated code against your rules (linting, type checking, test coverage requirements). If a developer's AI-generated code violates a rule that the AI should have followed: the CI catches it. The developer: updates their prompt or the rules. The codebase: stays convention-compliant even when the AI occasionally misses a rule. AI rule: 'Workflow integration tips share a theme: making AI assistance seamless rather than a separate step. The goal: AI assistance woven into your existing workflow (write, test, review, document) rather than a parallel process you switch to.'
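Tip 15's CI gate might look like this hypothetical GitHub Actions job. The workflow name and npm script names are assumptions; substitute whatever lint, type-check, and coverage commands encode your rules:

```yaml
# .github/workflows/rule-validation.yml (illustrative)
name: rule-validation
on: [pull_request]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint            # convention rules encoded as lint rules
      - run: npm run typecheck       # e.g. tsc --noEmit; catches `any` leaks
      - run: npm test -- --coverage  # enforce coverage thresholds here
```

The key design choice: rules that can be mechanically checked belong in CI, so the prompt-level rules only carry what tooling cannot enforce.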
Code-first: 'Write a user registration function.' Then: 'Write tests for it.' The tests: test what the AI built, not what you wanted. Test-first: 'Write tests for user registration: validates email uniqueness, hashes password, returns created user.' Then: 'Implement the function that makes these tests pass.' The implementation: matches the specification (the tests). The tests: serve as documentation. The code: guaranteed to handle the specified cases. Test-first with AI: transforms the AI from code generator to specification implementer.
Measurement and Improvement Tips (16-20)
Tip 16: Track PR review time before and after rule adoption. The single most impactful metric. Measure: average time from PR opened to PR approved. Before rules: baseline (typically 4-6 hours). After rules: target 30% reduction. If no improvement: the rules are not addressing the right conventions. Tip 17: Count convention-related review comments. Before rules: count how many review comments are about conventions vs. design/logic. The ratio: typically 40-60% conventions. After rules: the convention comments should drop to near zero. The remaining comments: high-value design feedback.
Tip 18: Measure new developer time-to-first-PR. How long from a new developer's first day to their first merged PR? Before AI rules: typically 2-3 weeks. After AI rules: target 3-5 days. The AI with rules: teaches conventions through generated code. The new developer: learns by seeing convention-compliant examples in every AI interaction. Tip 19: Track rule update frequency. Healthy frequency: 2-4 rule changes per sprint (the team is actively learning and improving). Zero changes: the rules are stale (the team stopped investing). More than 10 changes: the rules are unstable (too many changes confuse the AI and the team).
Tip 20: Run a quarterly rule effectiveness review. Every quarter: review all rules. For each rule: ask 'Does this rule still produce the code we want?' Remove rules that no longer apply (deprecated patterns). Update rules that need refinement. Add rules for new patterns that have emerged. The quarterly review: ensures the rules evolve with the project. Without it: rules drift from the actual codebase over 3-6 months. AI rule: 'Measurement tips ensure your AI coding practice improves over time. Without measurement: you cannot distinguish between good rules and useless rules. With measurement: every sprint produces data that makes the next sprint better. The improvement: compounding.'
A team writes 20 rules. Which ones actually improve code quality? Without measurement: impossible to know. Some rules: prevent 50 review comments per sprint. Other rules: prevent zero (because the AI already handles that convention by default). Tip 16: track PR review time. Tip 17: count convention comments. The data: reveals which rules are load-bearing and which are redundant. After one sprint of measurement: remove useless rules (reduce noise), strengthen effective rules (amplify impact), and add missing rules (fill gaps). Measurement: transforms rule writing from guesswork into engineering.
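The two metrics from Tips 16-17 are simple to compute once PR data is exported. This sketch assumes you can pull opened/approved timestamps and labeled review comments from your host; the `PrRecord` field names are hypothetical:

```typescript
interface PrRecord {
  openedAt: number;           // epoch ms when the PR was opened
  approvedAt: number;         // epoch ms when the PR was approved
  conventionComments: number; // review comments about conventions
  totalComments: number;      // all review comments on the PR
}

// Tip 16: average hours from PR opened to PR approved.
function avgReviewHours(prs: PrRecord[]): number {
  const totalMs = prs.reduce((sum, pr) => sum + (pr.approvedAt - pr.openedAt), 0);
  return totalMs / prs.length / 3_600_000;
}

// Tip 17: share of review comments that are about conventions.
function conventionCommentRatio(prs: PrRecord[]): number {
  const convention = prs.reduce((s, pr) => s + pr.conventionComments, 0);
  const total = prs.reduce((s, pr) => s + pr.totalComments, 0);
  return total === 0 ? 0 : convention / total;
}
```

Comparing these two numbers sprint over sprint is what turns the rule file into measurable engineering rather than guesswork.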
Productivity Tips Quick Reference
Quick reference of 20 AI coding productivity tips.
- Tips 1-5: Rule optimization — start with top 3 conventions, include examples, organize by frequency, use negative rules, update biweekly
- Tip 6: Describe outcomes, not steps — let the AI + rules decide implementation
- Tip 7: AI self-review — ask the AI to check its own output against rules
- Tip 8: Sequential prompts for complex features — focused context produces better code
- Tips 9-10: Specific feedback and prompt templates — save patterns that work
- Tips 11-12: Test-first generation and AI review prep — cleaner PRs on first submission
- Tips 13-15: Batch similar tasks, generate docs alongside code, validate rules in CI
- Tips 16-20: Track review time, convention comments, onboarding speed, rule updates, run quarterly reviews