Tutorials

How to Create a Rule Changelog

A rule changelog documents every change to your AI rules: what changed, when, why, and who approved it. It is the permanent record that answers 'why did the AI start generating different code?'

4 min read·July 5, 2025

v2.5.0: Added Result pattern for error handling. Why: try-catch swallowed errors at service boundaries. Impact: AI generates Result.ok()/Result.err().

Added/Updated/Removed format, semantic versioning, semi-automated generation, and the changelog as quarterly review agenda

The Changelog Answers 'Why Did the AI Change?'

A developer notices the AI generates a different error handling pattern than last week. Without a changelog, they investigate (read the rules diff, ask in Slack, check git blame on CLAUDE.md), which takes 15-30 minutes. With a changelog, they check the latest entry ('v2.5.0 — 2026-03-25: Updated error handling from try-catch to Result pattern for business logic functions. Reason: consistent error propagation across service boundaries.') and find the answer in 30 seconds. The changelog eliminates investigation time for every rule change.

The changelog serves: developers (understanding why AI behavior changed), the platform team (tracking the evolution of rules over time), auditors (demonstrating controlled change management), and new team members (understanding how the rules reached their current state). It is the single most valuable piece of rule documentation after the rules themselves.

Keep the changelog alongside the rules: in the same repository (CHANGELOG.md next to CLAUDE.md), in the RuleSync dashboard (version history with descriptions), or both. Link the changelog from every notification (Slack, email, PR); when a developer receives a notification about a rule change, the changelog link provides the details.

Step 1: Changelog Format

Each entry has a version number, a date, and a list of changes grouped under Added, Updated, and Removed headings, following the Keep a Changelog convention familiar to most developers. For example, v2.5.0 (2026-03-25) might list under Added: a Result<T, E> error handling pattern for business logic (replacing try-catch) and required Zod schemas on all route handlers; under Updated: the TypeScript version reference (5.3 → 5.4) and import ordering (a blank line between external and internal imports); under Removed: the deprecated lodash import rule (replaced by the native alternatives rule in v2.3).
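Written out as a CHANGELOG.md fragment in the Keep a Changelog layout, the same example entry looks like this:

```markdown
## v2.5.0 — 2026-03-25

### Added
- Error handling: Result<T, E> pattern for business logic (replaces try-catch)
- API validation: Zod schemas required on all route handlers

### Updated
- TypeScript version reference: 5.3 → 5.4
- Import ordering: added blank line between external and internal imports

### Removed
- Deprecated lodash import rule (replaced by native alternatives rule in v2.3)
```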

Each change includes what changed (the rule content), why (the motivation, as a sentence or a link to the discussion/PR), and the impact (which AI behavior changes as a result). Example: 'Added: Result pattern for error handling. Why: try-catch swallowed errors at service boundaries, causing silent failures. Impact: AI now generates Result.ok() and Result.err() instead of try-catch in src/services/.' The why and the impact are the most valuable parts: without them, the changelog is a list of facts; with them, it is a narrative of improvement.
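If you script your changelog tooling, the what/why/impact triple maps naturally onto a small record type. A minimal sketch (the field and class names here are illustrative, not part of any RuleSync API):

```python
# Sketch: one changelog change as a record that keeps the what, why,
# and impact together so none of the three can be silently dropped.
from dataclasses import dataclass


@dataclass
class Change:
    category: str  # "Added", "Updated", or "Removed"
    what: str      # the rule content that changed
    why: str       # the motivation behind the change
    impact: str    # which AI behavior changes as a result

    def render(self) -> str:
        """Render one changelog bullet in the article's format."""
        return f"- {self.what}. Why: {self.why}. Impact: {self.impact}"


c = Change(
    "Added",
    "Result pattern for error handling",
    "try-catch swallowed errors at service boundaries",
    "AI now generates Result.ok()/Result.err() in src/services/",
)
print(c.render())
```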

Semantic versioning for rules: a patch (v2.5.1) covers typo fixes and wording clarifications with no behavior change; a minor version (v2.5.0) adds new rules or updates existing ones, so AI behavior changes; a major version (v3.0.0) makes breaking changes — rules removed, fundamental patterns changed. The version signals the significance: developers scan for major and minor versions, while patches are safe to ignore. AI rule: 'Semantic versioning: patch for text fixes, minor for behavior changes, major for breaking changes. The version number communicates significance at a glance.'
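The bump rules above are mechanical enough to script. A minimal sketch, assuming versions follow the 'vMAJOR.MINOR.PATCH' form used in this article (the `bump` helper is hypothetical, not a RuleSync function):

```python
# Sketch: compute the next rules version from the kind of change,
# following the article's convention: patch = text fix, minor =
# behavior change, major = breaking change.

def bump(version: str, change: str) -> str:
    """version like 'v2.5.0'; change is 'patch', 'minor', or 'major'."""
    major, minor, patch = (int(p) for p in version.lstrip("v").split("."))
    if change == "major":    # breaking: rules removed, patterns changed
        return f"v{major + 1}.0.0"
    if change == "minor":    # behavior change: rules added or updated
        return f"v{major}.{minor + 1}.0"
    return f"v{major}.{minor}.{patch + 1}"  # text-only fix


print(bump("v2.5.0", "patch"))  # v2.5.1
print(bump("v2.5.0", "minor"))  # v2.6.0
print(bump("v2.5.0", "major"))  # v3.0.0
```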

💡 Why + Impact = The Valuable Part of Each Entry

'Updated error handling rule.' — The developer knows what changed but not why or how it affects them. 'Updated error handling from try-catch to Result pattern. Why: try-catch swallowed errors at service boundaries. Impact: AI now generates Result.ok()/Result.err() in src/services/.' — The developer knows what changed, why, and exactly how their AI will behave differently. The what can be automated from diffs; the why and the impact are the human-authored parts that make the changelog valuable.

Step 2: Automating Changelog Generation

Manual changelog: the rule author writes the changelog entry when making a rule change. Advantage: the human adds the why and impact context. Disadvantage: it is sometimes forgotten (the rule changes but the changelog is not updated). Semi-automated: a script generates the what (it diffs the old and new rules and lists additions, updates, and removals), and the human adds the why and impact. This combination ensures completeness (the script catches every change) with context (the human adds meaning).
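The automated half of that workflow can be sketched in a few lines. This assumes rules can be represented as a name-to-text mapping, which is an illustrative simplification; the `diff_rules` helper is hypothetical:

```python
# Sketch of semi-automated generation: diff two versions of the rules
# to produce the "what" skeleton; the author then fills in why/impact.

def diff_rules(old: dict[str, str], new: dict[str, str]) -> dict[str, list[str]]:
    """Map rule name -> rule text; return Added/Updated/Removed names."""
    added = [name for name in new if name not in old]
    removed = [name for name in old if name not in new]
    updated = [name for name in new if name in old and new[name] != old[name]]
    return {"Added": added, "Updated": updated, "Removed": removed}


old = {"error-handling": "use try-catch", "imports": "alphabetical order"}
new = {"error-handling": "use Result<T, E>", "imports": "alphabetical order",
       "api-validation": "Zod schemas on all route handlers"}

for category, names in diff_rules(old, new).items():
    for name in names:
        # The script emits the what; why/impact stay human-authored.
        print(f"{category}: {name}  (why: TODO, impact: TODO)")
```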

Git-based automation: if rules are in a git repository, the commit history IS a changelog. But commit messages are often terse ('update rules'). Better: enforce descriptive commit messages for rule changes ('feat(rules): add Result pattern for error handling — replaces try-catch at service boundaries'), so the commit log becomes a usable changelog. For formal changelogs, extract entries from git tags and commit messages using a tool like conventional-changelog.

The RuleSync dashboard automatically maintains version history with diffs. Each published version shows what changed from the previous one, and the platform team adds a description to each version in the dashboard; that description serves as the changelog entry. The version history can be exported as CHANGELOG.md for the repository. AI rule: 'The best changelog: automated what (from diffs) + human why and impact (added by the rule author). The combination is complete and meaningful.'

ℹ️ The Changelog IS the Quarterly Review Agenda

Quarterly review without a changelog: 'What changed this quarter? Does anyone remember?' 15 minutes of reconstruction. Quarterly review with a changelog: open CHANGELOG.md. v2.4.0: 3 rules added, 2 updated. v2.4.1: 1 rule patched. v2.5.0: breaking change — new error handling pattern. The agenda: review each change for effectiveness. The changelog provides the complete, accurate list. No memory required. No investigation needed.

Step 3: Changelog Maintenance and Usage

Link the changelog everywhere: in the rules file header ('Changelog: see CHANGELOG.md or [dashboard link]'), in every notification (the Slack message includes the changelog link), in every sync PR (the PR body links to the relevant changelog entry), and in the quarterly review (the changelog provides the agenda for what changed this quarter). The changelog is only valuable if people can find it; linking makes it discoverable.

Changelog as quarterly review input: at the quarterly review, the changelog for the past 3 months IS the agenda. What was added? Review whether the addition was effective. What was updated? Review whether the update improved quality. What was removed? Review whether the removal was appropriate. The changelog provides the structure for the review discussion; without it, the review starts with 'What changed this quarter?' and spends 15 minutes reconstructing the history.

Long-term value: after 12 months, the changelog tells the story of how the rules evolved: the initial 15 rules (v1.0), the security-focused expansion (v1.5), the framework-specific additions (v2.0), the error handling refactoring (v2.5), and the breaking change to a new testing convention (v3.0). A new team member reads the changelog and understands why the rules are the way they are — not just what they are. AI rule: 'The changelog: the narrative of how your team's conventions evolved. It turns rules from arbitrary decisions into documented, reasoned choices.'

⚠️ A Changelog Only Works If People Can Find It

The most complete changelog in the world is useless if it sits in a file nobody knows about. Link the changelog in the CLAUDE.md header ('Changelog: CHANGELOG.md'), in every Slack notification ('Details: [changelog link]'), in every sync PR ('Changes: [changelog link]'), and in the quarterly review invitation ('Review the changelog before the meeting: [link]'). Every touchpoint with rules includes a changelog link. Discovered once, bookmarked; never discovered, never used.

Changelog Summary

Summary of creating and maintaining a rule changelog.

  • Purpose: answers 'why did the AI change?' in 30 seconds instead of 30 minutes of investigation
  • Format: version, date, Added/Updated/Removed categories. Each change: what, why, and impact
  • Versioning: patch (text fix), minor (behavior change), major (breaking change). Semantic signals
  • Automation: script generates 'what' from diffs. Human adds 'why' and 'impact'. Semi-automated = best
  • Git-based: descriptive commit messages for rule changes. Conventional-changelog for extraction
  • Link everywhere: rules file, notifications, sync PRs, quarterly review. Discoverable = used
  • Quarterly input: the changelog IS the quarterly review agenda. What changed → was it effective?
  • Long-term: the story of how conventions evolved. New team members: understand the why, not just the what