Disagreements Are Healthy — Unresolved Disagreements Are Not
Developers disagree about: error handling patterns (Result vs try-catch), naming conventions (camelCase vs snake_case for API responses), testing approaches (mocking vs real database), and architectural patterns (classes vs functions for services). These disagreements: reflect diverse experience. Two senior developers who disagree about error handling: both have valid reasons based on different past experiences. The goal: not to eliminate disagreement but to resolve it into a team decision that everyone can support.
Unresolved disagreements: become recurring debates. Every PR: the same discussion about error handling. Every code review: the same argument about naming. The team: fatigued by repetition. AI rules: encode the resolution so the debate happens once and the decision persists. The resolved rule: 'Use Result pattern for service functions. Use try-catch for Express middleware.' Both sides heard. Decision made. Rule encoded. Debate: permanently resolved.
The resolution principle: resolve on merit (which approach produces better code?), not on authority (the tech lead prefers X). Resolve with evidence (test data, PR metrics, A/B results), not with opinions (I think X is better). And: resolve on a timeline (if no resolution in 2 weeks: the facilitator makes the call). Unresolved debates: more damaging than any specific choice. AI rule: 'A resolved decision that 60% of the team agrees with: better than an unresolved debate that 100% of the team is tired of.'
Step 1: Resolution Techniques (From Fastest to Most Rigorous)
Technique 1 — Scoping (5 minutes): both approaches survive in different contexts. 'Use classes for NestJS services (framework convention). Use functions for utilities and helpers (simpler).' This technique: works when both approaches are valid for different situations. Neither side loses. The rule: more precise than either individual preference. Scoping: the fastest resolution and the one that leaves both sides most satisfied. Try scoping first before any other technique.
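A scoped rule of this kind might look like the following sketch. The class and function names are hypothetical; the class stands in for a NestJS-style service (which the framework's dependency injection instantiates), and the function stands in for a stateless utility.

```typescript
// Scoped rule: classes where the framework expects them,
// plain functions for stateless utilities.

// Framework-style service: a class, because DI containers construct classes.
class InvoiceService {
  constructor(private readonly taxRate: number) {}

  // Compute an invoice total including tax.
  total(subtotal: number): number {
    return subtotal * (1 + this.taxRate);
  }
}

// Utility: a plain function. No state, no framework involvement,
// so a class would add ceremony without benefit.
function formatCents(amountCents: number): string {
  return `$${(amountCents / 100).toFixed(2)}`;
}
```

Neither side lost: the class advocates keep classes where the framework rewards them, and the function advocates keep functions where simplicity wins.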
Technique 2 — Evidence-based discussion (15-30 minutes): each side presents: their approach, the rationale (specific problems it solves or prevents), and evidence (code examples, bug history, industry best practices). The team: evaluates on merit. A vote: majority wins. The key: evidence, not opinion. 'I prefer X' is an opinion. 'X prevented 3 bugs last quarter while Y caused 2 bugs' is evidence. The facilitator: steers the discussion toward evidence and away from preference.
Technique 3 — A/B test (2-4 weeks): when evidence is unavailable or equally strong on both sides. Deploy both approaches to comparable teams. Measure after 2 weeks. The data: resolves the dispute objectively. This technique: the most rigorous but the most time-consuming. Use: only for significant disputes where the outcome affects substantial AI output. Do not A/B test: trivial preferences (the cost of the experiment exceeds the value of the resolution). AI rule: 'Scoping: 5 minutes, try first. Evidence-based: 30 minutes, when scoping does not work. A/B test: 2+ weeks, for significant disputes only.'
Most disagreements: about which approach is better. But both approaches: often valid in different contexts. Classes vs functions: both valid (classes for framework services, functions for utilities). Try-catch vs Result: both valid (try-catch for middleware, Result for business logic). Scoping: the 5-minute resolution where both sides win. Try it before spending 30 minutes on evidence-based discussion or 2 weeks on an A/B test. If scoping works: you saved time and produced a more precise rule.
Step 2: Facilitation and Time-Boxing
The facilitator's role: the EM, tech lead, or a neutral party. The facilitator: ensures both sides are heard equally (no one dominates the discussion), steers toward evidence (redirects 'I prefer' to 'the evidence shows'), proposes resolutions (scoping, voting, or A/B testing), and enforces the timeline (the decision must be made by the deadline). The facilitator: does NOT impose their own preference. They manage the process, not the outcome.
Time-boxing: every disagreement has a deadline. Minor disputes (naming convention preference): resolved in one meeting (30 minutes). Moderate disputes (error handling pattern): resolved within 1 week (discussion + evidence gathering + vote). Major disputes (architectural pattern change): resolved within 2 weeks (A/B test or structured debate with evidence). After the deadline: the facilitator makes the call (based on the evidence presented). The deadline: prevents indefinite debates.
The 'disagree and commit' principle: after the team decides (by vote, by evidence, or by facilitator's call), everyone commits to the decision. A developer who preferred the losing approach: follows the winning rule. They may: propose revisiting at the next quarterly review (with new evidence). They may not: override the rule in their PRs or argue against it in every code review. The commitment: essential for the rule to be effective. AI rule: 'Disagree and commit. The decision is made. Follow the rule. Propose revisiting with new evidence at the quarterly review — not in every PR comment.'
A cautionary example: the team debates error handling for 3 months. Every PR: the same discussion. Every code review: the same argument. Developer frustration: high. AI output: inconsistent (no rule encodes the decision). After 3 months: the team is exhausted but still undecided. Any resolution: better than no resolution. A 60-40 vote for Result pattern: encoded as a rule. The 40% who preferred try-catch: follow the rule. Reviews: no more error handling debates. AI output: consistent. The resolved imperfect decision: infinitely better than the perfect unresolved debate.
Step 3: Escalation Path for Persistent Disagreements
Escalation level 1 — Team vote: the team votes after evidence-based discussion. Majority wins. Most disagreements: resolved here. The team: owns the decision because they voted. The minority: can propose revisiting at the quarterly review.
Escalation level 2 — Tech lead decision: if the team vote is tied (5-5) or too close to be decisive (6-4 with strong dissent), the tech lead makes the call. The tech lead: considers the team's input, the evidence, and the architectural direction. The decision: documented with rationale. The team: commits to the tech lead's decision. This is not the tech lead overruling the team — it is the tech lead breaking a tie that the team could not resolve.
Escalation level 3 — Staff/principal engineer or architecture board: for disputes that cross team boundaries (the decision affects multiple teams) or have architectural significance (the decision shapes the long-term direction). The staff engineer: evaluates from a broader perspective (cross-team consistency, organizational direction). The architecture board: for disputes that affect organization-level rules. This level: rare. Most disagreements are resolved at level 1 or 2. AI rule: 'Level 1 for most. Level 2 for ties. Level 3 for cross-team or architectural. The escalation: structured, not ad-hoc. Everyone knows the path before the disagreement starts.'
The team voted 7-3 for the Result pattern. You voted against. The rule: 'Use Result pattern for service functions.' You: follow the rule in your code. You do NOT: override the rule, argue against it in code reviews, or generate try-catch code deliberately. You CAN: at the next quarterly review, present new evidence for try-catch ('since we adopted Result, we have encountered these issues...'). The quarterly review: the designated time to revisit decisions. PR comments: not the place.
Disagreement Resolution Summary
Summary of handling team disagreements about AI rules.
- Principle: disagreements are healthy. Unresolved disagreements are not. Resolve once, encode in rules
- Technique 1 — Scoping: both approaches survive in different contexts. Try first. 5 minutes
- Technique 2 — Evidence-based: present rationale + evidence. Team votes. 30 minutes
- Technique 3 — A/B test: deploy both, measure, adopt winner. 2-4 weeks. For significant disputes only
- Facilitation: neutral party ensures both sides heard, steers toward evidence, enforces timeline
- Time-boxing: minor (30 min), moderate (1 week), major (2 weeks). Deadline prevents infinite debate
- Disagree and commit: follow the decided rule. Propose revisiting at quarterly review with new evidence
- Escalation: team vote (level 1) → tech lead (level 2) → staff engineer/ARB (level 3)