Monday: Rule Review and Sprint Alignment
Monday morning: review CLAUDE.md against the sprint goals (15 minutes). This sprint: building the notification system. The rules should include notification patterns (real-time vs. polling, notification channel conventions, message template patterns). If the rules do not cover the sprint's domain: add the missing conventions before the team starts generating code. The Monday rule review: proactive (add rules BEFORE they are needed) rather than reactive (add rules AFTER a review catches a violation).
Monday sprint planning: when estimating stories, factor in AI-assisted generation. A task that takes 8 hours manually may take 3-4 hours with AI generation plus review. The estimate accounts for the generation-review cycle, not just raw coding time. AI-assisted estimates: typically 40-60% of manual estimates for feature implementation, 80-90% for bug fixes (debugging still requires human understanding), and 20-30% for pure boilerplate tasks (CRUD endpoints, form components).
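The multiplier idea can be expressed as a small estimation helper. A minimal sketch: the 0.40-0.60 feature multiplier is the figure quoted above, the helper and category names are illustrative, and a real team would add entries calibrated from its own data.

```python
# Sketch: convert a manual estimate into an AI-assisted range.
# The 0.40-0.60 feature multiplier is the figure quoted above; add
# entries for other task categories from your own team's data.
MULTIPLIERS = {
    "feature": (0.40, 0.60),  # feature implementation: 40-60% of manual
}

def ai_assisted_estimate(manual_hours: float, category: str) -> tuple[float, float]:
    """Return (low, high) hours for an AI-assisted version of the task."""
    low, high = MULTIPLIERS[category]
    return (manual_hours * low, manual_hours * high)

# An 8-hour manual feature maps to roughly 3.2-4.8 AI-assisted hours,
# in line with the 3-4 hour figure above.
print(ai_assisted_estimate(8, "feature"))
```

The point of the helper is not precision but forcing the estimate conversation to name the task category before quoting a number.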
Monday team alignment: ensure all team members have updated their AI tool rules. If CLAUDE.md was updated since last sprint: verify everyone has pulled the latest version. If the team uses multiple AI tools: verify rule synchronization (CLAUDE.md matches .cursorrules matches copilot-instructions.md). The Monday sync: prevents a week of inconsistent AI-generated code from different team members using different rule versions. AI rule: 'Monday sets the foundation for the week. Updated rules + sprint alignment + team sync = consistent AI-generated code from day one. Skip the Monday routine, and you spend Wednesday fixing convention drift.'
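The Monday sync check can be automated. A minimal sketch, assuming the three rule files named above live at the repository root and are kept byte-identical; a team that maintains tool-specific sections would compare a shared core section instead of whole files.

```python
# Sketch: flag rule files that have drifted apart. Assumes CLAUDE.md,
# .cursorrules, and copilot-instructions.md are kept byte-identical.
import hashlib
from pathlib import Path

RULE_FILES = ["CLAUDE.md", ".cursorrules", "copilot-instructions.md"]

def rule_files_in_sync(root: str = ".") -> bool:
    """Return True when every rule file exists and has the same content."""
    digests = set()
    for name in RULE_FILES:
        path = Path(root) / name
        if not path.exists():
            print(f"missing rule file: {name}")
            return False
        digests.add(hashlib.sha256(path.read_bytes()).hexdigest())
    return len(digests) == 1  # one unique digest means all files match
```

Run as part of Monday's sync (or as a CI check) so drift is caught before anyone generates code against a stale rule set.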
Tuesday-Thursday: Generation, Review, and Iteration
The midweek generation rhythm: each day follows the daily workflow (morning generation, afternoon review). The weekly arc: Tuesday is the most productive generation day (fresh from Monday planning, full sprint ahead). Wednesday: generation slows as review and collaboration increase (mid-sprint checkpoint). Thursday: focused on completing features started earlier in the week and preparing for end-of-sprint activities. The rhythm: build momentum Tuesday, collaborate Wednesday, close Thursday.
Wednesday team sync (30 minutes): the mid-sprint AI coding sync. Agenda: 1) Share any rule improvements discovered during the week (5 min). 2) Review any convention conflicts in open PRs (10 min). 3) Identify any missing rules for the current sprint's domain (10 min). 4) Brief demo of any effective AI prompts or workflow improvements (5 min). The sync: surfaces issues before they multiply. A convention conflict caught Wednesday: affects 2 PRs. Caught Friday: affects 8 PRs.
Thursday completion focus: prioritize closing open features over starting new ones. Use AI tools for targeted completion: 'Add the missing edge case handling to the notification endpoint.' 'Generate the remaining test cases for the payment flow.' 'Add JSDoc documentation to the public API.' Thursday AI usage: surgical and specific. The goal: all features from this sprint's generation sessions are complete, tested, and ready for review by Thursday end of day. AI rule: 'The midweek rhythm mirrors sprint energy: high generation Tuesday, collaborative Wednesday, completion-focused Thursday. Each day has a different relationship with AI tools: broad generation, then collaborative refinement, then surgical completion.'
Without a mid-week sync: a convention conflict in a PR goes unnoticed until Friday. By then: 4 more PRs use the same wrong pattern. Total rework: 5 PRs. With the Wednesday sync: the conflict is caught when only 1 PR is affected. Total rework: 1 PR. The 30-minute Wednesday sync: prevents 4x the rework. Over 10 sprints: the sync prevents 40 unnecessary PR revisions. The ROI of 30 minutes per week: measured in days of saved rework per quarter.
Friday: Metrics, Retro, and Rule Updates
Friday metrics review (15 minutes): check the numbers. How many PRs were submitted this week? What was the average review time? How many convention-related review comments were made? Compare to last week. The trend: tells you whether the AI coding practice is improving. If convention comments increased: the rules need updating. If review time increased: the AI-generated code quality may have declined (check for stale rules or vague prompts).
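The Friday comparison can be scripted. A minimal sketch: the metric names and dictionaries are illustrative stand-ins for whatever your code host's API or PR export actually provides.

```python
# Sketch: compare this week's review metrics to last week's and flag
# the two regressions called out above. Metric names are illustrative.
def metrics_trend(last_week: dict, this_week: dict) -> list[str]:
    """Return warnings for metrics that moved in the wrong direction."""
    warnings = []
    if this_week["convention_comments"] > last_week["convention_comments"]:
        warnings.append("convention comments up: update the rules")
    if this_week["avg_review_minutes"] > last_week["avg_review_minutes"]:
        warnings.append("review time up: check for stale rules or vague prompts")
    return warnings

print(metrics_trend(
    {"convention_comments": 6, "avg_review_minutes": 22},
    {"convention_comments": 11, "avg_review_minutes": 30},
))
```

The output feeds directly into the Friday rule update session: each warning names the corrective action.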
Friday rule update session (30 minutes): the most impactful 30 minutes of the week. Review all code review comments from the week. For each convention-related comment: write a rule to prevent it next week. For each rule that produced awkward code: revise or remove it. For each new pattern the team adopted: add a rule to codify it. The Friday update: ensures next week starts with better rules than this week. After 10 Fridays: the rules have been refined 10 times, comprehensive and battle-tested.
Friday sprint retro (AI section): during the sprint retrospective, include AI coding as a discussion topic. What worked well? (specific prompts, workflow patterns, rule improvements). What did not work? (convention violations, confusing AI output, missing rules). What should we try next sprint? (new prompt patterns, rule restructuring, tool changes). The retro: ensures the team's AI coding practice improves at the sprint level, not just the individual level. AI rule: 'Friday closes the weekly improvement loop. Metrics reveal the state. Rule updates fix the gaps. The retro captures team learning. Each week: the AI coding practice gets measurably better because Friday invests 45 minutes in improvement.'
Friday 30-minute rule update: review the week's code review comments. Find 3 convention-related comments that a rule would prevent. Write 3 rules (10 min each). Next week: those 3 convention comments do not recur. Across the team: 3 comments x 5 developers x 2 occurrences each = 30 convention comments prevented. Time saved: 30 review comments x 5 minutes each = 150 minutes of review time saved next week. ROI: 30 minutes invested, 150 minutes saved. 5x return in the first week alone. Every subsequent week: the same 150 minutes saved again.
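The ROI arithmetic above, written out. All inputs are the figures quoted in the text; adjust them for your own team size and comment rates.

```python
# Sketch of the Friday rule-update ROI arithmetic, using the figures above.
rules_written = 3
minutes_per_rule = 10
developers = 5
occurrences_per_dev = 2           # how often each prevented comment recurred
minutes_per_review_comment = 5

invested = rules_written * minutes_per_rule                            # 30 min
comments_prevented = rules_written * developers * occurrences_per_dev  # 30
saved = comments_prevented * minutes_per_review_comment                # 150 min

print(f"invested {invested} min, saved {saved} min -> {saved // invested}x return")
```

Because the rules persist, `invested` is paid once while `saved` recurs every subsequent week, which is why the return compounds.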
Sprint Milestones for AI Coding Teams
Sprint start (Day 1): major rule review. Align rules with sprint goals. Verify team rule synchronization. This is the Monday routine amplified: not just checking rules, but strategically adding conventions for the sprint's domain. The sprint-start rule review: the highest-impact rule activity because it shapes 2 weeks of AI-generated code.
Mid-sprint (Day 5): rule adjustment checkpoint. Are the rules from sprint start working? Any unexpected convention conflicts? Any new patterns that emerged during implementation? The mid-sprint check: catches issues at the halfway point when there is still time to correct course. A rule discovered to be problematic at mid-sprint: affects 5 days of code. Discovered at sprint end: affects 10 days of code.
Sprint end (Day 10): comprehensive rule audit. Update the published rule count in the rule system documentation. Document all rule changes made during the sprint. Prepare the rule set for the next sprint. The sprint-end audit: ensures the rule system is clean, documented, and ready for the next iteration. The rule system: improves with every sprint cycle, just like the codebase improves with every feature cycle. AI rule: 'Sprint milestones add structure to rule improvement. Start: strategic alignment. Middle: course correction. End: cleanup and preparation. Each sprint: the rules get better because the improvement is planned, not ad-hoc.'
Sprint starts without rule alignment. The team builds a notification system. The rules: have no notification conventions. Day 1: Developer A generates notifications with WebSockets. Day 2: Developer B generates notifications with Server-Sent Events. Day 3: the code review reveals the inconsistency. Day 4-5: one developer rewrites their approach. The sprint-start rule review (15 minutes): 'Notifications use SSE for real-time updates, polling for batch.' Added to CLAUDE.md on Day 0. Developers A and B: both generate SSE-based notifications. Zero inconsistency. Zero rework. 15 minutes of alignment: saves 1-2 days of rework.
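Codified in the rules file, the Day-0 convention from this example might look like the following CLAUDE.md entry (wording illustrative):

```markdown
## Notifications
- Real-time updates: use Server-Sent Events (SSE), not WebSockets.
- Batch notifications: use polling.
```

Two lines in the rules file are what turn "zero inconsistency, zero rework" from luck into a guarantee.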
Weekly Routine Quick Reference
Quick reference for the AI coding weekly routine.
- Monday: rule review (15 min), sprint planning with AI estimates, team rule sync
- Tuesday: peak generation day - morning focused sessions, afternoon review, high output
- Wednesday: team sync (30 min) - share rule improvements, resolve conflicts, identify gaps
- Thursday: completion focus - close features, surgical AI edits, prepare for review
- Friday: metrics (15 min), rule updates (30 min), sprint retro (AI section)
- Sprint start: major rule review aligned with sprint goals, verify team synchronization
- Mid-sprint: rule adjustment checkpoint - catch issues at the halfway point
- Sprint end: comprehensive rule audit, document changes, prepare for next sprint