The Tech Lead as Rules Architect
The tech lead is the author and maintainer of AI rules. The CTO sets direction, the VP Engineering manages adoption, the engineering manager ensures team compliance, but the tech lead writes the actual rules. Writing effective AI rules requires: deep codebase knowledge (which patterns matter), pragmatism (which conventions are worth enforcing), communication skills (rules must be clear enough for AI to follow), and architectural vision (rules should guide the codebase toward the desired architecture).
The tech lead's rules responsibilities: author the initial rule set (based on team conventions and architectural decisions), evolve rules as the codebase changes (new frameworks, new patterns, deprecated approaches), resolve rule conflicts (when team conventions disagree with organizational standards), and mentor the team on rule usage (how to get the most from AI-generated code).
The tech lead writes rules that are: specific enough to generate correct code (not vague platitudes like 'write good code'), flexible enough to handle edge cases (not so rigid that developers constantly override), and current (updated when the codebase evolves, not stale references to deprecated patterns).
How to Write Effective AI Rules
Rule structure: each rule should answer three questions. What (the convention): 'Use async/await for all asynchronous operations.' Why (the reason): 'Consistent async pattern reduces cognitive load and makes error handling uniform.' When (the scope): 'All new code. Refactor existing callbacks when modifying those files.' AI rule: 'Rules that explain the why are followed more consistently than rules that only state the what. Developers (and AI) make better decisions when they understand the intent behind the rule.'
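In a rules file, the three-question structure might look like this (the file format and heading style are illustrative; adapt to whatever rules file your tooling reads):

```markdown
## Async operations

- **What:** Use async/await for all asynchronous operations.
- **Why:** A consistent async pattern reduces cognitive load and makes
  error handling uniform (try/catch works the same way everywhere).
- **When:** All new code. Refactor existing callbacks when modifying
  those files; do not refactor untouched files just to satisfy this rule.
```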
Specificity level: too vague ('write clean code') — the AI cannot act on it. Too specific ('use Array.map instead of for loops for all array transformations') — too rigid for edge cases. Right level ('prefer functional array methods (map, filter, reduce) for data transformations. Use for loops when performance is critical or when break/continue is needed.'). AI rule: 'Rules should be specific enough to generate correct code in 80% of cases and flexible enough to allow override in the remaining 20%.'
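The "right level" rule above might generate code like this (the data and function names are hypothetical):

```typescript
// Preferred: functional array methods for data transformations.
const orders = [
  { id: 1, total: 40 },
  { id: 2, total: 120 },
  { id: 3, total: 75 },
];
const largeOrderIds = orders
  .filter((o) => o.total > 50)
  .map((o) => o.id); // [2, 3]

// Sanctioned deviation: a for loop when early exit (break/return) is needed.
function firstOverBudget(
  items: { id: number; total: number }[],
  budget: number
): number | null {
  let running = 0;
  for (const item of items) {
    running += item.total;
    if (running > budget) return item.id; // early exit; no clean map/filter equivalent
  }
  return null;
}
```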
Anti-patterns to encode: some of the most valuable rules are prohibitions. 'Do not use any type — use unknown for truly unknown types.' 'Do not use console.log for production logging — use the structured logger.' 'Do not use synchronous file system operations — use async/promises.' AI rule: 'Prohibition rules prevent the most common mistakes. List the anti-pattern and the preferred alternative. The AI avoids the anti-pattern and generates the alternative.'
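A sketch of the `unknown`-over-`any` prohibition in practice (the payload shape and function names are illustrative):

```typescript
// Anti-pattern (prohibited): `any` disables type checking entirely.
// function getName(payload: any) { return payload.user.name; } // compiles, may crash at runtime

// Preferred alternative: `unknown` forces explicit narrowing before use.
interface Payload {
  user: { name: string };
}

function isPayload(value: unknown): value is Payload {
  if (typeof value !== "object" || value === null) return false;
  const user = (value as { user?: unknown }).user;
  if (typeof user !== "object" || user === null) return false;
  return typeof (user as { name?: unknown }).name === "string";
}

function getName(payload: unknown): string {
  if (!isPayload(payload)) throw new TypeError("invalid payload");
  return payload.user.name; // safely narrowed to string
}
```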
You write the same PR comment for the third time: 'Use the structured logger, not console.log.' That comment should be a rule: 'Logging: use the structured logger (import { logger } from @/lib/logger). Never use console.log in production code.' The comment never needs to be made again. The AI generates correct logging from the start. Track your most frequent review comments — they are your rule writing backlog.
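A minimal sketch of what such a structured logger might look like. The `@/lib/logger` module in the rule is project-specific, so the implementation below is purely illustrative (a real logger would write to stdout or a transport instead of an in-memory sink):

```typescript
type Level = "debug" | "info" | "warn" | "error";

// Stand-in for a real transport (stdout, file, HTTP collector).
const sink: string[] = [];

function emit(level: Level, message: string, context: Record<string, unknown> = {}): string {
  // One JSON object per entry: machine-parseable, unlike free-form console.log.
  const entry = JSON.stringify({
    timestamp: new Date().toISOString(),
    level,
    message,
    ...context,
  });
  sink.push(entry);
  return entry;
}

const logger = {
  info: (msg: string, ctx?: Record<string, unknown>) => emit("info", msg, ctx),
  error: (msg: string, ctx?: Record<string, unknown>) => emit("error", msg, ctx),
};

// Usage the rule prescribes instead of console.log:
// logger.info("order created", { orderId: 123, userId: 456 });
```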
Evolving Rules with the Codebase
When to add rules: after a bug caused by an inconsistency (reactive), after adopting a new framework or library (proactive), after identifying a recurring code review comment (pattern recognition), and after an architecture decision (encoding the decision). AI rule: 'Every recurring code review comment is a rule waiting to be written. If reviewers comment on the same pattern 3+ times: write a rule. The comment should never need to be made again.'
When to update rules: when a dependency is upgraded (new API patterns), when the team adopts a better approach (replace the old pattern), and when a rule is consistently overridden (the rule may be too restrictive). AI rule: 'Track which rules are frequently overridden. If developers add // eslint-disable or ignore the AI's suggestion for the same rule repeatedly: the rule needs revision, not stricter enforcement.'
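One way to track overridden rules is to tally `eslint-disable` comments across the codebase. A minimal sketch (the function names and the 3+ threshold are illustrative):

```typescript
// Tally eslint-disable-next-line comments per rule so the
// most-overridden rules surface as candidates for revision.
function countDisabledRules(sources: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  const pattern = /eslint-disable-next-line[ \t]+([-\w@\/]+)/g;
  for (const source of sources) {
    for (const match of source.matchAll(pattern)) {
      const rule = match[1];
      counts.set(rule, (counts.get(rule) ?? 0) + 1);
    }
  }
  return counts;
}

// Rules disabled 3+ times need revision, not stricter enforcement.
function revisionCandidates(counts: Map<string, number>, threshold = 3): string[] {
  return [...counts.entries()]
    .filter(([, n]) => n >= threshold)
    .map(([rule]) => rule);
}
```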
When to remove rules: when the pattern is no longer relevant (deprecated library), when the rule conflicts with a better approach, and when the rule adds friction without measurable quality benefit. AI rule: 'Rules have a lifecycle: proposed → active → deprecated → removed. Deprecated rules are marked with a replacement suggestion. Removed rules are deleted (not commented out). Keep the rule file lean — every rule should earn its place.'
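A deprecated rule entry, marked with its replacement before eventual deletion, might look like this (the format and rule names are illustrative):

```markdown
## Logging helpers (DEPRECATED)

- Replacement: use the "Structured logging" rule below.
- Status: deprecated since the logging migration; will be deleted
  (not commented out) once the last legacy call sites are gone.
```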
If developers consistently override a rule (adding // eslint-disable, ignoring the AI's suggestion, or working around the constraint): the rule is probably wrong. It may be too restrictive for the real-world use cases developers encounter. Investigate: why are developers overriding this rule? Is the rule too rigid? Does it fail to account for a common edge case? Revise the rule to be practical. Doubling down on enforcement of an impractical rule creates resentment without improving code quality.
Balancing Specificity and Flexibility
The 80/20 rule for AI rules: write rules that guide 80% of cases correctly and allow developer judgment for the remaining 20%. Overly prescriptive rules (every error must use the ErrorResponse class with exactly these fields) break when edge cases arise. Overly permissive rules (handle errors appropriately) provide no guidance. AI rule: 'The sweet spot: describe the pattern, provide an example, and note when to deviate. Example: Errors: return structured ErrorResponse with code, message, and optional details. Deviation: internal service errors may use a simplified format for performance.'
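The ErrorResponse example from the rule might be sketched as follows (the class shape is an assumption extrapolated from the rule's text, not a real library API):

```typescript
// Structured error pattern: code, message, and optional details.
interface ErrorDetails {
  [key: string]: unknown;
}

class ErrorResponse {
  constructor(
    public readonly code: string,
    public readonly message: string,
    public readonly details?: ErrorDetails
  ) {}

  toJSON(): object {
    // Omit `details` when absent so payloads stay minimal.
    return this.details
      ? { code: this.code, message: this.message, details: this.details }
      : { code: this.code, message: this.message };
  }
}

const userError = new ErrorResponse("VALIDATION_FAILED", "email is required", {
  field: "email",
});

// Sanctioned deviation: internal service errors may use a simplified format.
const internalError = { code: "UPSTREAM_TIMEOUT" };
```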
Layered specificity: organization rules are general (always validate input). Technology rules are moderate (use Zod for TypeScript input validation). Team rules are specific (validation schemas in src/schemas/ with naming convention schema_{entity}.ts). Each layer adds precision without the higher layers being overly prescriptive. AI rule: 'Write rules at the right layer. General principles: organization level. Technology patterns: technology level. Project-specific patterns: team level.'
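The three layers might live in separate rule files; a hypothetical layout (file names are illustrative, and some tools use a single rules file instead of a directory):

```text
rules/
├── org.md          # "Always validate input at service boundaries."
├── typescript.md   # "Use Zod for TypeScript input validation."
└── team.md         # "Schemas live in src/schemas/, named schema_{entity}.ts."
```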
The tech lead's judgment: not everything should be a rule. Some patterns are too context-dependent, too subjective, or too infrequently encountered to warrant a rule. AI rule: 'Add a rule when the benefit (consistency, bug prevention, onboarding speed) outweighs the cost (maintenance, rigidity, developer friction). If the rule would require more time to explain than it saves: it should not be a rule. Use code review for judgment calls and rules for clear patterns.'
A rule for every decision: paralysis. The rule file becomes 50 pages. Developers stop reading it. The AI gets conflicting guidance. Better: rules for clear patterns that have a single right answer (error handling structure, naming conventions, import ordering). Code review for judgment calls that depend on context (algorithm choice, abstraction level, feature decomposition). The tech lead's wisdom: knowing the difference between rule-worthy patterns and review-worthy decisions.
Tech Lead Action Items
Summary of the tech lead's guide to writing and maintaining AI coding standards.
- Rule structure: what (convention) + why (reason) + when (scope). The why is the most important part
- Specificity: specific enough for 80% of cases, flexible enough for 20% edge cases
- Anti-patterns: prohibitions with preferred alternatives. Prevent the most common mistakes
- Add rules: after bugs, new frameworks, recurring review comments, architecture decisions
- Update rules: when dependencies change, better approaches emerge, or rules are consistently overridden
- Remove rules: when no longer relevant, conflicts with better approach, or adds friction without benefit
- Lifecycle: proposed → active → deprecated → removed. Every rule earns its place
- Judgment: not everything should be a rule. Rules for clear patterns. Code review for judgment calls