Tutorials

How to Create Rules from PR Feedback Patterns

Every recurring code review comment is a rule waiting to be written. This tutorial covers mining PR comments for rule opportunities, prioritizing by frequency, and converting feedback into encoded conventions.

5 min read·July 5, 2025

Every recurring review comment: a rule waiting to be written. 3rd occurrence = propose the rule. The comment never needs to be written again.

Comment mining, frequency prioritization, what-why-when conversion, testing, and the self-improving feedback loop

PR Comments: The Best Source of Rule Ideas

Every time a reviewer writes a convention-related comment ('Please use our Result pattern instead of try-catch'), that comment represents a convention the AI should have encoded. If the AI had followed the rule, the comment would never have been needed, and the reviewer would have focused on logic instead of conventions. Recurring comments are the highest-priority rules to write, because they address the conventions that fail most often. PR comments are the most accurate, real-world source of rule ideas: they come from actual code, not from theoretical best practices.

The math: a reviewer writes the same comment 3 times per week across different PRs. 3 comments × 2 minutes each × 52 weeks = 312 minutes per year (5.2 hours) spent on one recurring convention comment. One rule eliminates all 3 weekly comments. Time saved: 5.2 hours per year per reviewer. For a team of 5 reviewers who share the same convention: 26 hours per year saved by one rule. The ROI: 30 seconds to write the rule vs 26 hours of recurring comments.
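The arithmetic scales linearly, so it is easy to plug in your own team's numbers. A quick sketch in Python, using the figures from the example above:

```python
# ROI of encoding one recurring review comment as a rule
# (example numbers: 3 comments/week, 2 min each, 5 reviewers).
COMMENTS_PER_WEEK = 3
MINUTES_PER_COMMENT = 2
WEEKS_PER_YEAR = 52
REVIEWERS = 5

minutes_per_reviewer = COMMENTS_PER_WEEK * MINUTES_PER_COMMENT * WEEKS_PER_YEAR
team_minutes = minutes_per_reviewer * REVIEWERS

print(minutes_per_reviewer, minutes_per_reviewer / 60, team_minutes / 60)
# → 312 5.2 26.0
```

Swap in your own comment frequency and team size to estimate the payoff of each candidate rule before writing it.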

The process: mine recent PR comments for recurring patterns, prioritize by frequency, convert the top patterns into rules, and verify the AI follows the new rules. Time investment: 1 hour to mine and convert. Return: every future occurrence of each pattern is eliminated. AI rule: 'PR comment mining is the highest-ROI rule creation activity. Every recurring comment found and encoded eliminates that comment permanently.'

Step 1: Mine PR Comments for Patterns (30 Minutes)

Data source: the last 30 days of PR comments on your team's repositories. GitHub: search with 'is:pr is:merged commenter:@me' and scan your comments. GitLab: merge request comments filtered by date. For the entire team: each reviewer scans their own comments and lists the conventions they commented on most. Alternatively: use the GitHub API to extract all review comments and search for keywords.

Categorize each comment: convention (the reviewer is enforcing a pattern: 'use Result pattern,' 'use the structured logger,' 'add Zod validation'), logic (the reviewer is questioning the code's correctness: 'this filter should be <=, not <'), nit-pick (personal preference: 'I would name this differently'), and question (seeking understanding: 'why did you choose this approach?'). Only convention comments become rules. Logic, nit-pick, and question comments are not rule material.
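A rough first-pass triage can be automated with a keyword heuristic. The cue lists below are illustrative guesses, and the output still needs a human scan; its only job is to shrink the pile before manual categorization:

```python
# Rough triage heuristic; cue lists are illustrative and results need a human pass.
CONVENTION_CUES = ("use our", "result pattern", "zod", "structured logger", "named export")

def categorize(comment):
    c = comment.lower().strip()
    if c.endswith("?") or c.startswith("why"):
        return "question"       # seeking understanding
    if any(cue in c for cue in CONVENTION_CUES):
        return "convention"     # the only category that becomes rules
    if "i would" in c:
        return "nit-pick"       # personal preference
    return "logic"              # default: correctness discussion, not rule material
```

The question check runs first so that 'Why did you use a map instead of reduce here?' is not misfiled as a convention comment just because it mentions a keyword.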

Count frequency: for each convention comment, count how many times it appears in the 30-day window. 'Use Result pattern': 12 times. 'Add Zod validation': 8 times. 'Use structured logger': 6 times. 'Named exports, not default': 4 times. The frequency is the prioritization: higher frequency, higher-priority rule. The top 3-5 by frequency are the rules to write this quarter. AI rule: 'Focus on the top 3-5 by frequency. These have the highest ROI because they eliminate the most recurring comments.'
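Once comments are mapped to canonical labels, the count is a one-liner with `collections.Counter`. The label map below is illustrative; replace it with your team's actual convention vocabulary:

```python
from collections import Counter

# Map free-text phrasings to canonical convention labels (map is illustrative).
CANONICAL = {
    "result pattern": "Use Result pattern",
    "zod": "Add Zod validation",
    "structured logger": "Use structured logger",
    "default export": "Named exports, not default",
}

def count_conventions(comments):
    counts = Counter()
    for comment in comments:
        c = comment.lower()
        for needle, label in CANONICAL.items():
            if needle in c:
                counts[label] += 1
    return counts.most_common()  # highest frequency first = rule-writing order
```

The head of the returned list is your quarter's rule backlog, already sorted by ROI.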

💡 30 Seconds to Write a Rule vs 26 Hours of Recurring Comments

The math: a reviewer writes 'use Result pattern' 3 times per week. 3 × 2 minutes × 52 weeks = 312 minutes (5.2 hours) per year per reviewer. For 5 reviewers: 26 hours per year. One rule, 'Error handling: use Result pattern in service functions,' written in 30 seconds, eliminates all 156 annual comments (3/week × 52 weeks). ROI: 30 seconds invested → 26 hours saved. This is why PR comment mining is the highest-ROI rule creation activity.

Step 2: Convert Comments into Rules (20 Minutes)

For each top-frequency comment, write a rule using the what-why-when format. The reviewer's comment provides the what ('use Result pattern instead of try-catch'). The reviewer's reasoning provides the why (ask them, or infer from the context: 'composable error propagation across service boundaries'). The PR context provides the when ('in service and repository functions; Express middleware still uses try-catch'). The conversion takes 30 seconds per rule if the comment is specific, 2 minutes if the comment needs to be generalized.
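Assembled from those three sources, a converted rule might read like this (wording illustrative):

```text
Rule: Error handling in services
What: Use the Result pattern instead of try-catch.
Why:  Composable error propagation across service boundaries.
When: In service and repository functions. Express middleware still uses try-catch.
```

Each line maps directly to its source: the what is the reviewer's comment verbatim, the why is their stated reasoning, and the when comes from where in the codebase the comments appeared.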

Generalize from specific comments: the reviewer commented on a specific PR, 'This database query should use a transaction.' The rule is not 'this query should use a transaction' (too specific to one PR). Instead: 'Database operations that modify multiple tables: wrap in a transaction. If any operation fails, the transaction rolls back.' The generalization turns a one-time comment into a permanent rule that prevents the issue across all future code.

Test the converted rule: after writing it, run a prompt that would trigger the pattern. 'Create a function that transfers funds between two accounts.' Expected: the AI generates a database transaction wrapping both the debit and credit operations. If the AI does not generate a transaction, the rule is too vague. Refine until the AI follows it consistently. AI rule: 'The test is the final validation. A rule that reads well but that the AI does not follow needs refinement. Test before deploying.'
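The pass/fail check can be scripted so the same trigger prompt is re-run after every rule refinement. A minimal sketch, assuming you capture the AI's output as a string by whatever means your team runs prompts (the marker list is an assumption; pick markers that your convention's code must contain):

```python
def rule_followed(generated_code, required_markers=("transaction",)):
    """True if the AI's output contains every marker the rule requires (case-insensitive)."""
    text = generated_code.lower()
    return all(marker.lower() in text for marker in required_markers)
```

If `rule_followed` returns False for the transfer-funds prompt, tighten the rule's wording and re-run; a marker check is crude, but it makes 'the AI follows it consistently' a repeatable test instead of a gut call.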

⚠️ Only Convention Comments Become Rules

Not every PR comment is a rule candidate. Convention: 'Please use our structured logger instead of console.log.' → Rule material. Logic: 'This filter should use <= not < for the boundary case.' → Not a rule (logic errors are case-specific). Nit-pick: 'I would name this variable differently.' → Not a rule (personal preference, not team convention). Question: 'Why did you use a map instead of reduce here?' → Not a rule (seeking understanding). Filter: only convention comments become rules. Everything else stays as review conversation.

Step 3: Continuous Mining and the Feedback Loop

The one-time mining produces 3-5 rules from the initial review of 30 days of comments. But new recurring comments emerge every month as the codebase evolves. The continuous mining: each reviewer, during their normal review process, notes when they write the same convention comment for the third time. The third occurrence is the signal. The reviewer proposes a rule (via PR to the rules file or a message in #ai-standards). The rule is adopted, and the comment never needs to be written again.

The feedback loop: reviewer writes a convention comment → counts: is this the 3rd time? → if yes, propose a rule → rule is reviewed and adopted → the AI follows the rule → the reviewer never writes that comment again. This loop is self-improving. Over time, convention comments in PRs decrease because the most frequent ones are encoded as rules. The review process shifts toward logic and architecture (what humans do best) and away from convention enforcement (what rules do best).
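The 'is this the 3rd time?' step does not need to live in a reviewer's head; a tiny per-convention tally works too. A sketch:

```python
from collections import defaultdict

class CommentTally:
    """Per-convention tally; note() returns True exactly on the 3rd occurrence."""
    THRESHOLD = 3

    def __init__(self):
        self.counts = defaultdict(int)

    def note(self, convention):
        """Record one occurrence; True means: propose a rule now."""
        self.counts[convention] += 1
        return self.counts[convention] == self.THRESHOLD
```

Returning True only on the exact 3rd occurrence means each recurring comment triggers one rule proposal, not a proposal on every subsequent repeat.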

Tracking the impact: compare convention-related review comments before and after mining. Before mining: 47% of comments are about conventions. After 3 months of mining and rule creation: 15%. The delta: 32 percentage points of review time redirected from convention enforcement to logic review. AI rule: 'Track convention comment percentage over time. It should decrease as you mine and encode recurring patterns. If it stays flat, you are not mining effectively or the rules are not working.'
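Given a per-category tally (for example, from the categorization step earlier), the tracked percentage is one small function:

```python
def convention_share(tally):
    """Percentage of review comments that enforce conventions, from a per-category count."""
    total = sum(tally.values())
    return round(100 * tally.get("convention", 0) / total, 1) if total else 0.0
```

Run it monthly on the same 30-day window; a flat trend line is the signal that mining has stalled or the rules are not being followed.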

ℹ️ The 3rd Occurrence Is the Signal

The first time a reviewer writes a convention comment, it could be a one-off. The second time: possibly a pattern. The third time: definitely a recurring pattern. The 3rd occurrence is the threshold for proposing a rule. Why not the first? Too aggressive: not every first-time comment represents a recurring need. Why not the 5th? Too slow: 5 occurrences of the same comment means the reviewer has already wasted 10 minutes on a comment that should be a rule. The 3rd is the sweet spot between false positives and wasted effort.

PR Feedback to Rules Summary

Summary of creating AI rules from PR feedback patterns.

  • Source: recurring PR convention comments are the best rule ideas (real-world, pre-prioritized)
  • Mining: scan 30 days of PR comments. Categorize: convention, logic, nit-pick, question. Only conventions become rules
  • Frequency: count each convention comment. Top 3-5 by frequency: write these rules first
  • Conversion: what (from the comment) + why (from the reviewer) + when (from the PR context). 30 sec per rule
  • Generalize: turn specific PR comments into general rules that cover all future occurrences
  • Test: run a prompt that should trigger the new rule. Refine until the AI follows consistently
  • Continuous: 3rd occurrence of the same comment = propose a rule. Self-improving feedback loop
  • Impact: convention comment percentage decreases over time (47% → 15% after 3 months of mining)