30 Minutes to Turn Deployment into Adoption
Deploying rules to a repository takes 2 minutes. Getting the team to actually use and benefit from them takes a training session. Without training, developers are vaguely aware the rules exist but do not understand how to verify their AI is using them, what the rules do (they have not read the file), or how the rules change their workflow (they code the same way as before). With a 30-minute training session, every developer leaves with a verified setup, an understanding of what the rules do, and hands-on experience seeing the rules in action.
The session structure: setup verification (5 min: confirm everyone's AI tool is reading the rules), live demo (10 min: the facilitator shows the AI generating code with and without rules), hands-on exercise (10 min: each developer runs a prompt and evaluates the output), and Q&A (5 min: address questions and concerns). Total: 30 minutes. The investment: 30 minutes per developer, times team size. The return: every developer produces better AI-generated code from the next prompt onward.
When to run the session: after initial rule deployment (the first session for the team), after major rule changes (a refresh session covering what changed), and for new team members (an onboarding session within their first week). Most teams need one initial session plus quarterly refreshers; new hires get individual or small-group sessions during onboarding.
Step 1: Setup Verification + Live Demo (15 Minutes)
Setup verification (5 min): every attendee opens their AI tool (Claude Code, Cursor, Copilot) in the project repository and runs a verification prompt: 'What coding conventions does this project follow?' The AI should reference the rules from CLAUDE.md (or the tool's equivalent). If it does, setup is confirmed. If it does not, troubleshoot (file not found, wrong directory, AI tool not configured) and fix the issue before proceeding. AI rule: 'Do not skip setup verification. A developer whose tools are not configured gains nothing from the rest of the session. Verify first.'
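Before the session, the facilitator can run a quick file-presence check from the repository root. This is only a partial check, and the list of file locations is an assumption based on common tool conventions; it does not prove the tool actually loads the rules, so the in-session verification prompt is still required.

```shell
# Pre-session sanity check, run from the repository root: confirm a rules
# file exists where common AI tools look for one (assumed locations).
# File presence does not prove the tool loads it; the in-session
# verification prompt still matters.
found=""
for f in CLAUDE.md .cursorrules .github/copilot-instructions.md; do
  if [ -f "$f" ]; then
    echo "found rules file: $f"
    found="yes"
  fi
done
if [ -z "$found" ]; then
  echo "WARNING: no rules file found; deploy the rules before the session" >&2
fi
```

This catches the most common failure mode (rules never deployed, or deployed to the wrong directory) before anyone sits down for the session.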
Live demo, without rules (3 min): the facilitator temporarily removes or renames CLAUDE.md, then prompts the AI: 'Create a new API endpoint that validates input and handles errors.' The output shows generic patterns, default error handling, and no project-specific conventions. The audience sees what the AI produces without guidance.
Live demo, with rules (7 min): the facilitator restores CLAUDE.md and runs the same prompt. The output now shows project-specific patterns, correct error handling, and proper naming. The facilitator highlights the differences: 'Notice: with rules, the AI uses our Result pattern instead of try-catch. It uses our Zod validation schema. It follows our naming convention.' This contrast is the most convincing part of the training: developers see the value with their own eyes. AI rule: 'The before/after demo is more convincing than any slide or explanation. Show the difference on the team's actual codebase. Real code beats sample code.'
In concrete terms: without rules, the AI generates generic error handling (try-catch with console.error); with rules, it generates the team's Result pattern with structured error codes. The transition happens in real time: remove the rules file, run the prompt, show the generic output; restore the rules file, run the same prompt, show the project-specific output. The contrast is undeniable, and the audience is convinced not by words but by what they see. This demo is worth more than any slide deck.
Step 2: Hands-On Exercise (10 Minutes)
The exercise: every developer runs the same prompt on their own machine. Prompt (provided by the facilitator): 'Create a function that fetches a list of users from the database, filters by role, and returns a paginated result with total count.' Each developer runs the prompt, reviews the output, and verifies: does it follow the naming convention? Does it use the correct ORM pattern? Does it handle errors correctly? Is the return type correct? This verification teaches developers what to look for in AI-generated code.
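One shape a rules-conforming answer to the exercise prompt might take is sketched below. The types and names are illustrative, and an in-memory array stands in for the database/ORM layer; the real exercise checks the output against the team's actual naming, ORM, and error conventions.

```typescript
interface User {
  id: string;
  name: string;
  role: "admin" | "member" | "viewer";
}

interface Paginated<T> {
  items: T[];
  total: number; // count of matches BEFORE pagination is applied
  page: number;
  pageSize: number;
}

// In-memory stand-in for the database/ORM query the prompt asks for.
function fetchUsersByRole(
  users: User[],
  role: User["role"],
  page: number,
  pageSize: number
): Paginated<User> {
  const filtered = users.filter((u) => u.role === role);
  const start = (page - 1) * pageSize;
  return {
    items: filtered.slice(start, start + pageSize),
    total: filtered.length,
    page,
    pageSize,
  };
}
```

A common gap this exercise surfaces: outputs that return only the page of items without the pre-pagination `total`, which the prompt explicitly asks for.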
Comparison: after 5 minutes, 2-3 developers share their output. The facilitator highlights the consistency ('everyone got the same naming pattern, the same error handling, the same return type, because the rules guided the AI'). If outputs differ, investigate which rule was not followed and why (usually a vague rule that needs refinement). The comparison demonstrates consistency across the team and identifies rule gaps in real time.
The takeaway prompt: 'From now on, every time you accept AI-generated code, spend 5 seconds checking: does it follow the rules? If it does not, the rule may be missing or vague. Report it in #ai-standards and we will fix it.' This framing turns every developer into a rule quality tester, and their feedback drives continuous improvement. AI rule: 'The exercise teaches the skill (evaluate AI output against rules). The takeaway prompt builds the habit (check every output, report gaps). Both are needed for lasting adoption.'
A cautionary scenario: the facilitator demos beautiful AI output following all rules, while a developer in the audience runs the same prompt and gets generic output because their AI tool is not reading the rules (file in the wrong location, tool not configured). Without verification, this developer sits through the training confused and disengaged. With verification at the start, the issue is caught and fixed in 2 minutes and the developer participates fully. 5 minutes of verification prevents 25 minutes of wasted training for any misconfigured attendee.
Step 3: Q&A and Follow-Up (5 Minutes)
Common questions: 'What if the AI ignores a rule?' (Debug it using the how-to-debug-ai-output tutorial; it is usually a missing or vague rule.) 'Can I add my own rules?' (Yes: propose them via PR, or add them to the team rules section.) 'What if I disagree with a rule?' (Propose a change with your reasoning; rules are team-owned, not dictated.) 'How often do rules change?' (Minor changes monthly, major changes quarterly; you will be notified via Slack.) These four questions cover 80% of what developers ask. Have answers ready.
Post-session resources: send a follow-up message with: a link to the rules file (so developers can read it at their own pace), a link to the changelog (so they understand the rules' history), the verification prompt (so they can re-verify setup anytime), and the #ai-standards Slack channel (for questions and feedback). The follow-up: ensures developers have everything they need to succeed after the session ends.
Measuring training effectiveness: after 2 weeks, compare the team's AI-generated code quality (are they following the rules more consistently?). After 1 month, survey the team (did the training help? Are the rules useful?). If effectiveness is low, the rules themselves may need improvement, not the training. If effectiveness is high, the rules are working. AI rule: 'Training effectiveness is measured by code quality improvement, not by training satisfaction scores. Developers who find the rules useful produce better code. That is the metric that matters.'
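One way to make the 2-week quality check concrete is to sample AI-assisted changes, tally rule violations per sample, and compare adherence before and after training. The sampling process and the `ReviewSample` shape below are assumptions for illustration, not a prescribed process:

```typescript
interface ReviewSample {
  rulesChecked: number; // rules applicable to the sampled change
  rulesViolated: number; // of those, how many were not followed
}

// Fraction of applicable rules that were followed across all samples.
function adherenceRate(samples: ReviewSample[]): number {
  const checked = samples.reduce((n, s) => n + s.rulesChecked, 0);
  const violated = samples.reduce((n, s) => n + s.rulesViolated, 0);
  return checked === 0 ? 0 : (checked - violated) / checked;
}

// Effectiveness = adherence delta between pre- and post-training samples,
// not satisfaction scores.
function trainingDelta(before: ReviewSample[], after: ReviewSample[]): number {
  return adherenceRate(after) - adherenceRate(before);
}
```

Even a rough version of this metric keeps the discussion anchored to code quality: a positive delta means the rules plus training are changing what ships.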
'Every time you accept AI output, spend 5 seconds checking whether it follows the rules. If it does not, report it in #ai-standards.' This single instruction turns every developer into a continuous rule quality tester. Over the next month: 10 developers × 20 AI interactions per day = 200 five-second quality checks per day, roughly 1,000 per work week. Issues surface rapidly, and missing rules are identified within days, not months. The training's lasting impact is not the session itself but the habit it installs.
Team Training Summary
Summary of the 30-minute AI rules team training session.
- Duration: 30 minutes. Setup (5 min) + demo (10 min) + exercise (10 min) + Q&A (5 min)
- Setup verification: every attendee confirms their AI tool reads the rules. Fix issues first
- Demo: before/after contrast. Without rules: generic. With rules: project-specific. On real code
- Exercise: same prompt for everyone. Compare outputs. Highlight consistency. Identify rule gaps
- Takeaway: 'Check AI output against rules. Report gaps in #ai-standards.' Build the feedback habit
- Q&A: prepare for 4 common questions (ignoring rules, adding rules, disagreeing, change frequency)
- Follow-up: rules link, changelog link, verification prompt, Slack channel. Sent after session
- Measure: code quality improvement after 2 weeks. Survey after 1 month. Effectiveness = quality delta