AI Coding for Open Source Maintainers

Open source maintainers review hundreds of PRs from contributors with different conventions. AI rules in the repo: the contribution guide that contributors' AI tools follow automatically.

5 min read·July 5, 2025

200 contributors. One CLAUDE.md. Convention-related review comments: dropped 80%. The maintainer's highest-leverage 30 minutes.

Self-sufficient rules, PR quality improvement, multi-tool support, CONTRIBUTING.md integration, and community rule contributions

The Maintainer's Review Burden

Open source maintainers: volunteer their time to review community contributions. The burden: each PR from a new contributor requires style correction comments ('We use named exports, not default'), pattern guidance ('Our error handling uses Result types, not try-catch'), and test standard enforcement ('Tests need edge cases, not just happy path'). These comments: repeated for every new contributor. The cost: 30-60 minutes per PR on convention issues alone. For a popular project with 50 PRs per month: 25-50 hours per month spent on convention reviews. Time that could be spent on feature development, architecture decisions, or community building.

CLAUDE.md as the solution: add a CLAUDE.md (and .cursorrules, copilot-instructions.md) to the repository. Contributors' AI tools: read the file automatically and generate code following the project's conventions. The convention-related review comments: reduced by 80%+ (see the open source case study — Article 403). The maintainer's review: focuses on logic and correctness, not on conventions. The maintainer's time: recovered for higher-value activities.

The multiplier effect: a CLAUDE.md benefits every contributor who uses AI tools (80%+ of developers in 2026). Each contributor: generates convention-compliant code without reading the full CONTRIBUTING.md. The maintainer: writes the CLAUDE.md once (30 minutes). Every future contributor: benefits automatically. The CLAUDE.md: the highest-leverage document a maintainer can create. 30 minutes of writing: saves hundreds of hours of review comments over the project's lifetime.

Designing Rules for Open Source Projects

Open source rules differ from private project rules: contributors are external (they do not know your unwritten conventions), contributor skill levels vary widely (from first-time contributors to expert developers), and the rules must be self-sufficient (contributors cannot ask the team for clarification — the rules must explain everything). Design principles: explicit (state every convention — do not assume the contributor knows anything about the project), include examples (show the correct pattern — contributors learn faster from examples than from descriptions), and include the why (the rationale helps contributors understand, not just follow, the conventions).

Essential OSS rule sections: project context (what the project does, the tech stack, how to set up the development environment), coding conventions (naming, error handling, import ordering — with examples), testing standards (the test framework, how to run tests, what to test, the naming convention), PR requirements (what a PR must include — tests, documentation updates, changelog entries), and architecture overview (how the project is organized — which directories contain what, how modules interact). These sections: a complete contributor onboarding document that the AI reads and follows.
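These sections can be sketched concretely. A hypothetical excerpt for an imaginary TypeScript project — every name, command, and convention below is illustrative, not a recommendation:

```markdown
# CLAUDE.md

## Project context
acme-cli: a command-line tool for syncing config files.
Stack: TypeScript, Node 20, Vitest. Setup: `npm install && npm test`.

## Coding conventions
- Named exports only: `export function parseConfig()`, never `export default`.
- Error handling: return `Result<T, E>` from library code; do not throw.
- Import order: node builtins, then external packages, then local modules.

## Testing standards
- Framework: Vitest, run with `npm test`.
- Naming: `describe('parseConfig', ...)` with `it('rejects empty input', ...)`.
- Cover the happy path plus the main error case.

## PR requirements
- Tests for new behavior, docs updated, changelog entry added.

## Architecture overview
- `src/commands/`: CLI entry points. `src/core/`: shared logic. `src/adapters/`: I/O.
```

The rationale lines ('do not throw' is explained by the Result convention, for example) are what let a first-time contributor's AI make the right call on cases the rules do not list explicitly.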

What NOT to include in OSS rules: internal team processes (release procedures, deployment steps — these are for maintainers, not contributors), opinions about other projects ('We do not use library X because...' — this is unnecessary for the AI), and overly strict rules that discourage contributions (a first-time contributor should be able to submit a PR that passes the rules without being an expert). AI rule: 'OSS rules: welcoming, not gatekeeping. The rules help contributors succeed, not filter them out. A good CLAUDE.md: lowers the barrier to contribution while maintaining quality.'

💡 OSS Rules Must Be Self-Sufficient — Contributors Cannot Ask

In a company: a new developer asks in Slack: 'What is our naming convention?' Gets an answer in 5 minutes. In open source: a contributor from another timezone submits a PR at 3am. There is no Slack to ask. The CLAUDE.md: must answer every convention question without human interaction. If the rules say 'follow our conventions': the contributor does not know what those are. If the rules say 'camelCase for functions, PascalCase for components, UPPER_SNAKE for constants': the AI generates it correctly. Self-sufficient rules: the standard for OSS.

Improving PR Quality Without Increasing Review Time

Before CLAUDE.md: a new contributor submits a PR. Review comments: 'Please use named exports' (3 files), 'Our tests use describe/it naming' (test file), 'We use Zod for validation, not manual checks' (route handler), and 'Please add edge case tests' (test file). The contributor: submits a revision. More comments. Another revision. After 3 rounds: the PR is mergeable. Total maintainer time: 45 minutes. Total contributor time: 2 hours. After CLAUDE.md: the same new contributor submits a PR. Their AI: followed the CLAUDE.md. Named exports, describe/it naming, Zod validation, and edge case tests: all correct from the first submission. Review comments: 2 (both about logic, not conventions). One revision. Merged. Total maintainer time: 15 minutes. Total contributor time: 45 minutes.
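The conventions behind those four review comments are exactly what a CLAUDE.md states up front. A hypothetical excerpt covering them — wording is illustrative:

```markdown
## Conventions
- Exports: named exports only (`export function createUser()`), never `export default`.
- Tests: `describe`/`it` naming. Every test file covers edge cases, not just the happy path.
- Validation: Zod schemas in route handlers, not manual field checks.
```

Three lines of rules, written once, replace the same three review comments written on every new contributor's first PR.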

PR acceptance rate: the open source case study (Article 403) reported: PR first-review acceptance improved from 30% to 55% after adding AI rules. Convention-related review comments dropped from 60% to 15% of all comments. PR abandonment rate dropped from 30% to 12%. The numbers: directionally consistent across projects that adopt AI rules. The CLAUDE.md: the single highest-impact quality improvement a maintainer can make for contributor PRs.

Scaling contributions: a project that grows from 10 to 200 contributors: the review burden grows linearly with contributors. Without AI rules: more contributors = more convention review time = maintainer burnout. With AI rules: more contributors = same convention review time (near zero) + more logic review time (proportional to contribution volume, but much less per PR). The AI rules: break the linear relationship between contributors and maintainer review burden. AI rule: 'AI rules scale contribution quality without scaling maintainer time. 10 contributors or 200: the convention compliance is the same because the AI handles it.'

ℹ️ CLAUDE.md: The Highest-Leverage Document a Maintainer Can Create

The maintainer writes: CONTRIBUTING.md (read by humans — maybe). CLAUDE.md (read by AI tools — automatically). The CONTRIBUTING.md: a document that contributors may or may not read before submitting. The CLAUDE.md: a document that the contributor's AI reads and follows for every line of generated code. The CLAUDE.md: enforced by the contributor's own tool, not by the maintainer's review. The enforcement: free (the AI does it) and complete (every generated line). For the 30 minutes of writing: the highest leverage a maintainer can achieve.

Supporting All AI Tools in Open Source

Contributors use different AI tools: Claude Code (reads CLAUDE.md), Cursor (reads .cursorrules), GitHub Copilot (reads .github/copilot-instructions.md), Windsurf (reads .windsurfrules), and Cline (reads .clinerules). For maximum coverage: include all rule files in the repository. The content: identical across all files. Maintenance: one source file (e.g., RULES.md) and a build script that copies to all tool-specific files. The build script: runs as a pre-commit hook or CI step.
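A minimal sketch of such a build script in Python, assuming the single source file is named RULES.md and the tool-specific files are the ones listed above (run from the repository root; wire `check_only=True` into CI to fail the build when copies drift):

```python
from pathlib import Path

# Tool-specific copies of the canonical rules file (names as used by each tool).
TOOL_FILES = [
    "CLAUDE.md",
    ".cursorrules",
    ".github/copilot-instructions.md",
    ".windsurfrules",
    ".clinerules",
]

def sync_rules(repo: Path, source: str = "RULES.md", check_only: bool = False) -> bool:
    """Copy repo/source to every tool file.

    Returns True if all copies were already up to date. With check_only=True,
    nothing is written, so a False return can fail a CI step or pre-commit hook.
    """
    src_text = (repo / source).read_text()
    current = True
    for name in TOOL_FILES:
        target = repo / name
        if target.exists() and target.read_text() == src_text:
            continue  # this copy already matches the source
        current = False
        if not check_only:
            target.parent.mkdir(parents=True, exist_ok=True)  # e.g. .github/
            target.write_text(src_text)
    return current
```

In a pre-commit hook, call `sync_rules(Path("."))` and stage the rewritten files; in CI, call `sync_rules(Path("."), check_only=True)` and exit non-zero on False with a message telling the contributor to run the sync script.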

The CONTRIBUTING.md integration: update the CONTRIBUTING.md to reference the AI rules. 'Using an AI coding tool? The rules in CLAUDE.md / .cursorrules / copilot-instructions.md guide your AI to generate code following our project conventions. This is the fastest way to get your PR accepted on the first review.' This note: directs contributors to the rules and sets the expectation that their AI should follow the project's conventions.
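One way that note might look inside CONTRIBUTING.md (placement and wording adapted from the suggestion above):

```markdown
## Using an AI coding tool?
The rules in [CLAUDE.md](CLAUDE.md) / [.cursorrules](.cursorrules) /
[copilot-instructions.md](.github/copilot-instructions.md) guide your AI to
generate code following our project conventions. Letting your tool follow them
is the fastest way to get your PR accepted on the first review.
```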

Community contributions to rules: contributors who discover missing rules (the AI generates a pattern that gets flagged in review): can propose rule additions. The contribution: helps the entire community. The rule PR: reviewed by the maintainer and merged like any other contribution. Over time: the rules improve from community experience, not just the maintainer's knowledge. The CLAUDE.md: a community-maintained document, not just a maintainer-authored one. AI rule: 'The best OSS rules: community-contributed. A contributor who encounters a missing rule and proposes the addition: has improved every future contribution. Encourage rule PRs alongside code PRs.'

⚠️ Rules Should Welcome Contributors, Not Filter Them

Overly strict rules: '100% test coverage required. Every function needs JSDoc with 5+ fields. No PR accepted without architecture diagram.' A first-time contributor: overwhelmed. They do not submit the PR. The contribution: lost. Welcoming rules: 'Tests: happy path + main error case. JSDoc: @param and @returns for exported functions. No architecture diagram needed.' A first-time contributor: can meet these standards with AI assistance. They submit. The maintainer: provides guidance for improvements in the review. The contribution: saved. Strict enough for quality. Welcoming enough for participation.

Open Source Maintainer Quick Reference

Quick reference for open source maintainers using AI rules.

  • The leverage: 30 min writing CLAUDE.md → saves hundreds of hours of convention review comments
  • PR quality: convention comments drop 80%+. First-review acceptance: up from 30% to 55%. Abandonment: down from 30% to 12%
  • Design: explicit, with examples, with rationale. Contributors cannot ask for clarification — rules must be self-sufficient
  • Sections: project context, coding conventions, testing, PR requirements, architecture overview
  • Welcoming: rules help contributors succeed, not filter them out. Lower the barrier, maintain quality
  • Multi-tool: CLAUDE.md + .cursorrules + copilot-instructions.md. One source, multiple copies
  • CONTRIBUTING.md: reference the AI rules. Set expectation that AI should follow conventions
  • Community rules: encourage contributors to propose rule additions. Rules improve from community experience