Case Studies

Case Study: Open Source Project AI Rules

A popular open source project with 200+ contributors implements AI rules to maintain consistency across a global, volunteer contributor base. Results: PR acceptance rate improved 45% and maintainer review time dropped 50%.

6 min read · July 5, 2025

200+ contributors. PR acceptance rate improved 45%. Maintainer review time cut in half. AI rules are the best contribution guide.

Contributor-friendly rules, multi-tool distribution, first-PR success rate, and maintainer capacity recovery

The Project: FormKit (Popular Open Source Form Library)

FormKit (name changed) is a popular open source form library for Vue.js with 15K GitHub stars, 200+ contributors, and 4 core maintainers. The codebase: TypeScript, a monorepo with 12 packages, and a comprehensive test suite. The challenge: external contributors submitted PRs that varied wildly in coding style (some followed Vue conventions, others came from React), test quality (some PRs had no tests, others had excessive mocking), documentation (some included docs updates, most did not), and error handling (no consistent pattern).

The maintainer burden: core maintainers spent 60% of their review time on convention issues, requesting style changes, asking for tests, and pointing out that the project uses a specific error handling pattern. For every 10 PRs, 3 were accepted on first review, 4 required 2-3 rounds of revision (mostly convention-related), and 3 were abandoned after the contributor gave up on the revision process. The high abandonment rate meant valuable contributions were lost because the convention barrier was too high.

The solution: add AI rules (CLAUDE.md and .cursorrules) to the repository so that contributors' AI tools generate code following the project's conventions. The rules file becomes the machine-readable contribution guide: conventions that were previously described in CONTRIBUTING.md (and often not read) are now enforced by the contributor's AI tool at generation time.

Implementation: Contributor-Friendly AI Rules

Rule design principle: make the first PR succeed. The rules were designed to help a first-time contributor submit a PR that passes review on the first or second attempt. Rules covered: the project's TypeScript conventions (strict mode, explicit return types, no any), the testing pattern (Vitest, describe/it naming, snapshot tests for components, unit tests for logic), error handling (the project's custom Result type instead of try/catch), documentation requirements (JSDoc for public APIs, update docs/ for new features), and commit message format (conventional commits matching the project's changelog generator).
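The case study does not reproduce the actual rules file. A hypothetical excerpt of what such a CLAUDE.md might contain, based on the conventions listed above (all specific wording is illustrative):

```markdown
# FormKit AI Rules (hypothetical excerpt)

## TypeScript
- Strict mode is on. Never use `any`; prefer `unknown` plus narrowing.
- All exported functions declare explicit return types.

## Testing
- Use Vitest with `describe`/`it` naming, not `test()`.
- Components get snapshot tests; logic gets unit tests.

## Error handling
- Return the project's `Result` type from library code instead of using try/catch.

## Docs and commits
- Add JSDoc to every public API; update `docs/` for new features.
- Use Conventional Commits (e.g. `feat(core): add async validation`).
```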

Distribution: the rules files were added to the repository root. CLAUDE.md for Claude Code users, .cursorrules for Cursor users, and .github/copilot-instructions.md for Copilot users. All three contained identical content. The CONTRIBUTING.md was updated to say: 'Using an AI coding tool? The rules in CLAUDE.md / .cursorrules guide your AI to generate code following our conventions. This is the fastest way to get your PR accepted.'

No enforcement, only guidance: open source cannot enforce rules (contributors use their own tools on their own machines). The rules are guidance that makes it easier to contribute correctly. Contributors who do not use AI tools: still read CONTRIBUTING.md. Contributors who use AI tools: the AI follows the conventions automatically. Both paths lead to the same result โ€” but the AI path is faster and more consistent.

💡 Support All Major AI Tools with One Source File

CLAUDE.md for Claude Code. .cursorrules for Cursor. .github/copilot-instructions.md for Copilot. Three files with identical content. Maintaining three files: risks drift. Solution: one canonical file (RULES.md) and a simple build script that copies to all three locations. Contributors using any AI tool: get the same conventions. Maintainers update one file. The build script handles distribution.

Results After 6 Months

PR acceptance rate: first-review acceptance improved from 30% to 55% (+25 percentage points). The remaining 45% mostly needed logic changes, which rules cannot prevent because domain knowledge is required. Convention-related revision requests dropped from 60% of review comments to 15%. Maintainers reviewed for correctness and architecture, not style.

Maintainer review time: dropped 50%. The average PR review went from 45 minutes to 22 minutes, a reduction that came almost entirely from eliminated convention discussions. Maintainers used the saved time to review more PRs (throughput increased 30%), write more documentation, and work on the roadmap. The 4 maintainers effectively gained 1 FTE of capacity from review time savings alone.

Contributor experience: the contributor satisfaction survey showed a 40% improvement. Top feedback: 'The AI rules made my first contribution easy; I did not have to guess the coding style.' 'My PR was accepted on the first try because the AI followed the project conventions.' The PR abandonment rate dropped from 30% to 12%. Contributors who might have given up after 3 rounds of style fixes now submitted convention-compliant PRs from the start.

โ„น๏ธ Lower the First-PR Barrier, Increase Contributions

The open source contribution funnel: discover project → read contribution guide → set up environment → write code → submit PR → revise based on review → merged. Each step loses potential contributors. AI rules remove the biggest friction point: 'write code that matches the project's style.' Before rules: 30% of PRs needed 3+ revision rounds for style issues. After rules: the AI matches the style automatically. More contributors complete the funnel. The project grows faster.

Lessons Learned

Lesson 1: AI rules are the best contribution guide. CONTRIBUTING.md describes conventions in prose; contributors must read, understand, and remember them. AI rules encode conventions in a format that the contributor's AI tool applies automatically. The contributor does not need to read, understand, or remember; the AI handles it. AI rule: 'For open source projects: the AI rules file is the most effective contribution guide because it is enforced by the contributor's own tool, not by maintainer review.'

Lesson 2: reducing the first-PR barrier increases contributions. Many potential contributors are discouraged by the 2-3 revision cycles needed to match a project's conventions. AI rules reduce this barrier: the first PR is more likely to be accepted. A lower barrier means more contributors attempt contributions; a higher acceptance rate means more contributors complete them. The result: the contributor funnel converts at a higher rate at every stage.

Lesson 3: three rule files, one source of truth. Maintaining CLAUDE.md, .cursorrules, and copilot-instructions.md separately could lead to drift. Solution: one source file (RULES.md) and a build script that copies it to all three locations. Update one file, run the script, and all three stay in sync. AI rule: 'For projects that support multiple AI tools: maintain one canonical rule file and generate the tool-specific files from it.'

โš ๏ธ 60% of Review Comments Were About Conventions, Not Logic

Maintainers reviewing PRs: 'Please use our Result type instead of try/catch.' 'Our tests use describe/it, not test().' 'We use explicit return types on all exported functions.' These comments repeated on every PR from every new contributor, and the maintainers' time was wasted on conventions instead of evaluating the contribution's merit. AI rules automated these conventions. Maintainers now review for the questions that actually determine contribution quality: is the logic correct? Is the approach sound? Does this improve the library?
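The case study does not show the project's actual Result type. A minimal TypeScript sketch of the pattern the maintainers kept requesting, using a discriminated union (the helper names `ok`/`err` and the `parseAge` example are illustrative assumptions):

```typescript
// A minimal Result type: errors become values instead of thrown exceptions.
type Result<T, E = Error> =
  | { ok: true; value: T }
  | { ok: false; error: E };

function ok<T>(value: T): Result<T, never> {
  return { ok: true, value };
}

function err<E>(error: E): Result<never, E> {
  return { ok: false, error };
}

// Convention: library code returns a Result instead of throwing, so
// callers must handle the failure branch explicitly.
export function parseAge(input: string): Result<number, string> {
  const n = Number(input);
  if (!Number.isInteger(n) || n < 0) return err(`invalid age: ${input}`);
  return ok(n);
}
```

Because `ok` is a discriminant, TypeScript narrows the type on each branch: checking `result.ok` gives the caller access to `value` or `error` but never both.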

Case Study Summary

Key metrics from the FormKit open source AI rules implementation.

  • Project: 15K-star Vue.js library, 200+ contributors, 4 maintainers, TypeScript monorepo
  • Problem: 60% of review time on conventions. 30% PR abandonment. High first-time contributor friction
  • Rules: TypeScript conventions, test patterns, error handling, docs requirements, commit format
  • Distribution: CLAUDE.md + .cursorrules + copilot-instructions.md. Generated from one source file
  • PR acceptance: 30% → 55% first-review acceptance (+25 percentage points)
  • Review time: 45 min → 22 min per PR (-50%). Freed 1 FTE equivalent of maintainer capacity
  • Abandonment: 30% → 12%. Convention barrier lowered for first-time contributors
  • Key lesson: AI rules are the most effective contribution guide. The contributor's AI enforces conventions automatically