How to Collaborate on AI Rules as a Team

AI rules are a team artifact: authored collaboratively, reviewed like code, and evolved through feedback. This tutorial covers the collaborative workflow that produces rules the whole team trusts.

5 min read·July 5, 2025

Rules authored by the team are followed by the team. Collaboration produces rules everyone trusts and nobody resists.

Team authoring sessions, PR-based changes, feedback integration, and the self-sustaining improvement cycle

Rules Are a Team Product, Not an Individual's Preference

AI rules authored by one person: reflect that person's preferences and blind spots. Rules authored by the team: reflect team consensus, cover more edge cases, and have higher adoption because everyone contributed. The collaboration model: the team writes rules together, reviews changes like code PRs, and evolves rules through structured feedback. Nobody owns the rules exclusively. Everyone owns them collectively.

The collaboration principles: consensus over authority (rules reflect what the team agrees on, not what the tech lead prefers), transparency (all rule changes are visible and reviewable), inclusivity (every team member can propose rule changes, not just senior engineers), and iteration (rules evolve based on real-world experience, not theoretical best practices). These principles: produce rules that the team follows willingly because they shaped the rules themselves.

The workflow: team authoring session (initial rules) → PR-based changes (ongoing evolution) → quarterly review (systematic assessment) → feedback integration (continuous improvement). Each step: involves the whole team. The result: rules that are trusted, followed, and continuously improved.

Step 1: Team Authoring Sessions

The initial authoring session: a 1-2 hour meeting where the team writes the first version of the rules together. Preparation: each team member lists their top 5 conventions (the patterns they care most about). The facilitator (usually the tech lead or EM): collects the lists beforehand and sorts them into three buckets. Agreements (conventions most people listed) become rules immediately. Disagreements (conventions where people differ) need discussion. Gaps (conventions nobody listed but the codebase follows) are identified by reviewing the codebase.

Session structure: (1) Present the agreements (10 min — quick wins, no debate needed). (2) Discuss the disagreements (30 min — each side presents their reasoning, the team votes or reaches consensus). (3) Review gaps (15 min — look at 5-10 recent PRs and identify conventions not yet listed). (4) Write the rules (30 min — the team writes the actual rule text, not just the convention name). The output: a CLAUDE.md with 15-25 rules that the entire team contributed to.
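
The facilitator's pre-session sorting can be automated once the convention lists are collected. A minimal sketch in Python, assuming each member's conventions are normalized strings and that "listed by at least half the team" counts as agreement (the threshold is an assumption for illustration, not from this tutorial):

```python
from collections import Counter

def classify_conventions(lists_by_member, agree_threshold=0.5):
    """Sort proposed conventions into agreements and disagreements.

    lists_by_member: dict mapping a team member's name to their list of
    proposed conventions (normalized strings). Conventions listed by at
    least `agree_threshold` of the team count as agreements; the rest go
    to the discussion agenda. Gaps still require manual codebase review.
    """
    # Deduplicate within each member's list so nobody double-votes.
    counts = Counter(
        c for conventions in lists_by_member.values() for c in set(conventions)
    )
    team_size = len(lists_by_member)
    agreements = sorted(c for c, n in counts.items() if n / team_size >= agree_threshold)
    disagreements = sorted(c for c, n in counts.items() if n / team_size < agree_threshold)
    return agreements, disagreements
```

Collecting the lists into a dict keyed by member also makes it easy to present conventions anonymously in the session: discuss the merged lists, not the names.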

Remote-friendly approach: for distributed teams, use a shared document (Google Doc, Notion, HackMD) where team members add their conventions asynchronously (over 2-3 days). The synchronous session (via video call): resolves disagreements, reviews gaps, and finalizes the rules. The async preparation: gives introverted team members equal voice (they write their conventions without speaking up in a meeting). AI rule: 'Async preparation + synchronous resolution: produces better rules with more inclusive participation than a purely synchronous session.'

💡 Async Preparation Gives Introverted Team Members Equal Voice

In a live session: the loudest voices dominate. The senior engineer's preference wins because they speak most confidently. Async preparation: every team member writes their top 5 conventions before the session. The facilitator collects them anonymously. In the session: the conventions are discussed on their merit, not on who proposed them. A junior developer's convention: adopted if the team agrees it is the best approach. The async step: equalizes participation.

Step 2: PR-Based Rule Changes

After the initial session: all rule changes go through a PR workflow. The developer: creates a branch, edits the CLAUDE.md (or the rules source file in the central repo), and opens a PR. The PR: includes the rule change, the reason for the change, and any test prompts that demonstrate the improved AI behavior. The team: reviews the PR like any code change. Approved: the rule is merged and distributed. Rejected: the proposer revises based on feedback.
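
One lightweight way to standardize these PRs is a pull request template. The fields below are an illustrative sketch of what such a template might contain, not a prescribed format:

```markdown
## Rule change

**Rule added/changed:** <paste the rule text>

**Reason:** <the real problem this addresses (link the review comment or incident where it surfaced)>

**Conflicts checked:** <existing rules reviewed for conflicts, or "none found">

**Test prompt:** <the prompt used to verify the AI follows the new rule, with before/after behavior>
```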

PR review criteria for rules: does the new rule address a real problem (not a theoretical one)? Is the rule specific enough for the AI to follow? Does it conflict with any existing rule? Is the reason documented in the PR description? Was the rule tested with a prompt before proposing? These criteria: keep rule quality high and prevent rule bloat (rules added for hypothetical problems that never occur). AI rule: 'Rule PRs are reviewed with the same rigor as code PRs. A bad rule: affects every AI-generated line of code. It deserves careful review.'
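
The judgment calls in these criteria (is the problem real? was it tested?) stay with human reviewers, but the mechanical parts can be pre-checked before review. A sketch, assuming a simple word-list heuristic for vague wording; `review_rule` and the word list are illustrative, not an established tool:

```python
# Words that usually make a rule too vague for the AI to follow.
# This list is illustrative, not exhaustive.
VAGUE_TERMS = {"properly", "appropriately", "clean", "good", "best", "correctly"}

def review_rule(rule_text, existing_rules, pr_description):
    """Apply the mechanical parts of the rule-PR review criteria.

    Returns a list of issues; an empty list means the rule passes the
    automatable checks. Humans still judge 'real problem' and 'tested'.
    """
    issues = []
    words = {w.lower().strip(".,;:") for w in rule_text.split()}
    vague = words & VAGUE_TERMS
    if vague:
        issues.append(f"vague wording: {', '.join(sorted(vague))}")
    if rule_text.lower() in (r.lower() for r in existing_rules):
        issues.append("duplicate of an existing rule")
    desc = pr_description.lower()
    if "because" not in desc and "reason" not in desc:
        issues.append("PR description does not document the reason")
    return issues
```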

Small changes, fast merges: most rule changes are small (updating a library reference, adding a missing convention, clarifying vague wording). These: should be reviewed and merged quickly (same day, ideally). Large changes (new section, fundamental pattern change): warrant more discussion (bring to the team meeting or async discussion before the PR). AI rule: 'Small rule changes: fast merge (same day). Large rule changes: discuss first, then PR. Match the review effort to the change scope.'
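
The routing decision can be made explicit so nobody debates it per PR. A minimal sketch; the 20-line threshold and the `touches_new_section` flag are assumptions for illustration:

```python
def review_path(lines_changed, touches_new_section):
    """Decide the review path for a rule change.

    Small edits (library reference, wording tweak) merge same day;
    new sections or fundamental pattern changes get discussed first.
    """
    if touches_new_section or lines_changed > 20:
        return "discuss-first"  # team meeting or async thread before the PR
    return "fast-merge"         # review and merge same day
```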

⚠️ Rule PRs Deserve the Same Rigor as Code PRs

A code PR that introduces a bug: affects one feature. A rule PR that introduces a bad rule: affects every AI-generated line of code for every developer. The blast radius: much larger. Review rule PRs with proportional rigor. Check: does the rule address a real problem? Is it specific enough? Does it conflict with existing rules? Was it tested? A hastily merged rule: causes more damage than a hastily merged function.

Step 3: Collecting and Integrating Feedback

Continuous feedback channels: a Slack thread or channel for rule feedback (quick observations: 'The AI keeps generating X pattern — should we add a rule?'), code review comments that reference rules (a reviewer says 'This would be prevented if we had a rule for Y'), and override annotations (when a developer overrides a rule, they add a comment explaining why — each override is potential feedback). These channels: surface real-world rule issues as they occur.
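
Override annotations are only useful if someone aggregates them. A sketch that scans source lines for a hypothetical `ai-rule-override(rule-name): reason` comment format (the annotation syntax is an assumption for illustration, not a convention from this tutorial):

```python
import re
from collections import Counter

# Assumed annotation format, e.g.:
#   // ai-rule-override(error-handling): middleware rethrows upstream
OVERRIDE_RE = re.compile(r"ai-rule-override\((?P<rule>[\w-]+)\):\s*(?P<reason>.+)")

def collect_overrides(lines):
    """Count rule overrides found in source lines.

    Frequently overridden rules are revision candidates for the
    quarterly review; the recorded reasons explain why.
    """
    counts = Counter()
    reasons = {}
    for line in lines:
        m = OVERRIDE_RE.search(line)
        if m:
            counts[m.group("rule")] += 1
            reasons.setdefault(m.group("rule"), []).append(m.group("reason").strip())
    return counts, reasons
```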

Structured feedback collection: at the quarterly review, collect formal feedback through a survey: which rules are most helpful (keep and protect), which rules cause friction (revise or add exceptions), which rules are missing (add to the backlog), and overall satisfaction with the rules (trending indicator). The survey: 5 questions, takes 2 minutes. The results: drive the quarterly rule update. AI rule: 'Continuous channels catch urgent issues (the AI generates a bug because a rule is wrong). Quarterly surveys provide comprehensive assessment (which rules are working, which are not).'
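
Tallying the survey can be scripted. A sketch, assuming each response is a dict with `helpful`, `friction`, and `missing` lists plus a 1-5 `satisfaction` score (the field names are an assumed survey shape, not from this tutorial):

```python
from collections import Counter

def tally_survey(responses):
    """Aggregate quarterly survey answers into action buckets.

    Returns rules to keep (most helpful first), rules to revise
    (most friction first), a backlog of missing rules, and the
    average satisfaction score as a trend indicator.
    """
    helpful, friction, missing = Counter(), Counter(), Counter()
    for r in responses:
        helpful.update(r.get("helpful", []))
        friction.update(r.get("friction", []))
        missing.update(r.get("missing", []))
    scores = [r["satisfaction"] for r in responses if "satisfaction" in r]
    return {
        "keep": helpful.most_common(),
        "revise": friction.most_common(),
        "backlog": missing.most_common(),
        "satisfaction": sum(scores) / len(scores) if scores else None,
    }
```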

Integrating feedback into rules: after collecting feedback, the tech lead (or whoever maintains the rules) creates PRs for the proposed changes. Each PR: references the specific feedback that motivated the change ('Based on 3 team members reporting that the error handling rule is too rigid — adding an exception for Express middleware.'). This traceability: shows the team that their feedback leads to action. When feedback leads to visible changes: the team provides more feedback. The improvement cycle: becomes self-sustaining. AI rule: 'Close the feedback loop. Every piece of feedback: either results in a rule change or is explicitly acknowledged with a reason for keeping the current rule. Feedback that disappears into a void: discourages future feedback.'
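
Closing the loop is easier to enforce if every feedback item carries an explicit status. A minimal sketch; the `FeedbackItem` shape and status values are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class FeedbackItem:
    text: str
    status: str = "open"   # open -> "changed" (rule PR) or "acknowledged" (kept, with reason)
    resolution: str = ""   # PR reference, or the documented reason for keeping the rule

def unresolved(items):
    """Return feedback that has disappeared into the void: neither
    turned into a rule change nor explicitly acknowledged."""
    return [i for i in items if i.status == "open"]
```

Running `unresolved` before the quarterly review gives the maintainer a concrete to-do list: every item on it gets either a rule-change PR or a written acknowledgment.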

ℹ️ Close the Feedback Loop — Or the Team Stops Giving Feedback

Developer submits feedback: 'The error handling rule is too rigid for Express middleware.' Three weeks later: no response. No rule change. No acknowledgment. The developer: stops giving feedback ('Why bother? Nothing changes.'). Close the loop: 'Thanks for the feedback. We added an Express middleware exception to the error handling rule in PR #42. It will be in the next rule version.' The developer: sees their feedback in action. They give more feedback. The cycle continues.

Team Collaboration Summary

Summary of collaborative AI rule development.

  • Principle: rules are a team product, not an individual's preference. Consensus over authority
  • Initial session: 1-2 hours. Async preparation + synchronous discussion. 15-25 rules as output
  • Disagreements: discuss reasoning, vote or reach consensus. No tech lead dictates
  • PR workflow: all changes through PRs. Review criteria: real problem, specific, no conflicts, tested
  • Small changes: merge same-day. Large changes: discuss first, then PR. Match effort to scope
  • Feedback channels: Slack, code review comments, override annotations. Continuous collection
  • Quarterly survey: which rules help, which cause friction, what is missing. 5 questions, 2 minutes
  • Close the loop: every feedback item → rule change or explicit acknowledgment. No void