Rules Are a Team Product, Not an Individual's Preference
AI rules authored by one person reflect that person's preferences and blind spots. Rules authored by the team reflect team consensus, cover more edge cases, and see higher adoption because everyone contributed. The collaboration model: the team writes rules together, reviews changes like code PRs, and evolves rules through structured feedback. Nobody owns the rules exclusively; everyone owns them collectively.
The collaboration principles: consensus over authority (rules reflect what the team agrees on, not what the tech lead prefers), transparency (all rule changes are visible and reviewable), inclusivity (every team member can propose rule changes, not just senior engineers), and iteration (rules evolve based on real-world experience, not theoretical best practices). These principles produce rules that the team follows willingly, because the team shaped the rules themselves.
The workflow: team authoring session (initial rules) → PR-based changes (ongoing evolution) → quarterly review (systematic assessment) → feedback integration (continuous improvement). Each step involves the whole team. The result: rules that are trusted, followed, and continuously improved.
Step 2: PR-Based Rule Changes
After the initial session, all rule changes go through a PR workflow. The developer creates a branch, edits CLAUDE.md (or the rules source file in the central repo), and opens a PR. The PR includes the rule change, the reason for the change, and any test prompts that demonstrate the improved AI behavior. The team reviews the PR like any code change: if approved, the rule is merged and distributed; if rejected, the proposer revises based on feedback.
PR review criteria for rules: does the new rule address a real problem (not a theoretical one)? Is the rule specific enough for the AI to follow? Does it conflict with any existing rule? Is the reason documented in the PR description? Was the rule tested with a prompt before being proposed? These criteria keep rule quality high and prevent rule bloat (rules added for hypothetical problems that never occur). AI rule: 'Rule PRs are reviewed with the same rigor as code PRs. A bad rule affects every AI-generated line of code. It deserves careful review.'
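The review criteria above lend themselves to a lightweight pre-review check. A minimal sketch, assuming a hypothetical `RuleProposal` shape and an illustrative vague-word list; neither is standard tooling:

```python
from dataclasses import dataclass, field

# Vague adverbs that give the AI nothing concrete to follow (illustrative list).
VAGUE_WORDS = {"properly", "correctly", "appropriately", "nicely", "cleanly"}

@dataclass
class RuleProposal:
    text: str                       # the proposed rule text
    reason: str = ""                # the real problem, from the PR description
    test_prompt: str = ""           # prompt that demonstrated the improvement
    existing_rules: list[str] = field(default_factory=list)

def review_checklist(p: RuleProposal) -> list[str]:
    """Return human-readable problems; an empty list means the criteria pass."""
    problems = []
    if not p.reason.strip():
        problems.append("no reason documented: describe the real problem")
    if not p.test_prompt.strip():
        problems.append("untested: attach a prompt showing the improved behavior")
    if VAGUE_WORDS & set(p.text.lower().split()):
        problems.append("too vague for the AI to follow")
    if p.text.strip().lower() in (r.strip().lower() for r in p.existing_rules):
        problems.append("duplicates an existing rule")
    return problems
```

A check like this does not replace human review; it catches the mechanical failures (no reason, no test prompt, vague wording) before a reviewer spends time on the substance.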
Small changes, fast merges: most rule changes are small (updating a library reference, adding a missing convention, clarifying vague wording) and should be reviewed and merged quickly, ideally the same day. Large changes (a new section, a fundamental pattern change) warrant more discussion: bring them to the team meeting or an async discussion before the PR. AI rule: 'Small rule changes: fast merge (same day). Large rule changes: discuss first, then PR. Match the review effort to the change scope.'
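The small-versus-large routing can be made mechanical, for example in a CI step that labels rule PRs. A sketch; the 20-line threshold is an assumed heuristic, not a standard:

```python
# Route a rule PR to a review track by change scope (threshold is assumed).
def review_track(lines_changed: int, adds_new_section: bool = False) -> str:
    """Small edits merge same day; structural changes get discussed first."""
    if adds_new_section or lines_changed > 20:
        return "discuss-first"   # team meeting or async thread before the PR
    return "fast-merge"          # review and merge same day
```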
A code PR that introduces a bug affects one feature. A rule PR that introduces a bad rule affects every AI-generated line of code for every developer. The blast radius is much larger, so review rule PRs with proportional rigor. Check: does the rule address a real problem? Is it specific enough? Does it conflict with existing rules? Was it tested? A hastily merged rule causes more damage than a hastily merged function.
Step 3: Collecting and Integrating Feedback
Continuous feedback channels: a Slack thread or channel for rule feedback (quick observations: 'The AI keeps generating X pattern; should we add a rule?'), code review comments that reference rules (a reviewer says 'This would be prevented if we had a rule for Y'), and override annotations (when a developer overrides a rule, they add a comment explaining why; each override is potential feedback). These channels surface real-world rule issues as they occur.
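The override-annotation channel can be mined automatically. A minimal sketch, assuming a hypothetical `rule-override: <rule-name> -- <reason>` comment convention (the format is an illustration, not an established standard):

```python
import re
from collections import Counter

# Matches comments like: rule-override: error-handling -- middleware needs next(err)
OVERRIDE = re.compile(r"rule-override:\s*(?P<rule>[\w-]+)\s*--\s*(?P<why>.+)")

def collect_overrides(source_text: str) -> Counter:
    """Count overrides per rule; frequently overridden rules are revision candidates."""
    return Counter(m.group("rule") for m in OVERRIDE.finditer(source_text))
```

Running this across the codebase before the quarterly review turns scattered annotations into a ranked list of rules that friction is already voting against.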
Structured feedback collection: at the quarterly review, run a short survey covering which rules are most helpful (keep and protect), which rules cause friction (revise or add exceptions), which rules are missing (add to the backlog), and overall satisfaction with the rules (a trending indicator). The survey is five questions and takes two minutes; the results drive the quarterly rule update. AI rule: 'Continuous channels catch urgent issues (the AI generates a bug because a rule is wrong). Quarterly surveys provide comprehensive assessment (which rules are working, which are not).'
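Turning the survey results into a quarterly action list can be sketched as follows; the response shape and vote thresholds are assumptions for illustration:

```python
from collections import Counter

def quarterly_actions(responses: list[dict]) -> dict:
    """Bucket surveyed rules into protect / revise / backlog actions."""
    helpful, friction, missing = Counter(), Counter(), Counter()
    for r in responses:
        helpful.update(r.get("helpful", []))    # rules to keep and protect
        friction.update(r.get("friction", []))  # rules to revise or add exceptions to
        missing.update(r.get("missing", []))    # rules to add to the backlog
    majority = len(responses) / 2
    return {
        "protect": sorted(rule for rule, n in helpful.items() if n > majority),
        "revise": sorted(rule for rule, n in friction.items() if n > majority),
        "backlog": sorted(rule for rule, n in missing.items() if n >= 2),
    }
```

The output maps directly onto the quarterly update: every "revise" and "backlog" entry becomes a candidate rule PR.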
Integrating feedback into rules: after collecting feedback, the tech lead (or whoever maintains the rules) creates PRs for the proposed changes. Each PR references the specific feedback that motivated the change ('Based on three team members reporting that the error handling rule is too rigid: adding an exception for Express middleware.'). This traceability shows the team that their feedback leads to action. When feedback leads to visible changes, the team provides more feedback, and the improvement cycle becomes self-sustaining. AI rule: 'Close the feedback loop. Every piece of feedback either results in a rule change or is explicitly acknowledged with a reason for keeping the current rule. Feedback that disappears into a void discourages future feedback.'
A developer submits feedback: 'The error handling rule is too rigid for Express middleware.' Three weeks later: no response, no rule change, no acknowledgment. The developer stops giving feedback ('Why bother? Nothing changes.'). Now close the loop instead: 'Thanks for the feedback. We added an Express middleware exception to the error handling rule in PR #42. It will be in the next rule version.' The developer sees their feedback in action, gives more feedback, and the cycle continues.
Team Collaboration Summary
Summary of collaborative AI rule development.
- Principle: rules are a team product, not an individual's preference. Consensus over authority
- Initial session: 1-2 hours. Async preparation + synchronous discussion. 15-25 rules as output
- Disagreements: discuss reasoning, vote or reach consensus. No tech lead dictates
- PR workflow: all changes through PRs. Review criteria: real problem, specific, no conflicts, tested
- Small changes: merge same-day. Large changes: discuss first, then PR. Match effort to scope
- Feedback channels: Slack, code review comments, override annotations. Continuous collection
- Quarterly survey: which rules help, which cause friction, what is missing. 5 questions, 2 minutes
- Close the loop: every feedback item → rule change or explicit acknowledgment. No void