Tutorials

How to Write Your First CLAUDE.md in 10 Minutes

A step-by-step tutorial for writing your first CLAUDE.md file: from blank file to a working rule set that makes Claude Code generate code following your project's conventions.

10 min read·July 5, 2025

10 minutes. One file. Your AI generates code that follows your project's conventions from the first prompt.

Project context, coding conventions, testing standards, verification, and iteration — step by step

What Is CLAUDE.md and Why You Need One

CLAUDE.md is a markdown file in the root of your repository that tells Claude Code how to generate code for your project. Without it: Claude generates generic code using general best practices. With it: Claude generates code that follows your specific conventions — your naming patterns, your error handling approach, your testing framework, your project structure. The difference: generic code that needs manual adjustment vs project-specific code that is ready for review.

Where it goes: the root of your repository, next to package.json (or go.mod, requirements.txt, etc.). Claude Code reads it automatically — no configuration needed. Just create the file and start writing. The file uses standard Markdown: headings for categories, bullet points for rules, and code blocks for examples.

What to include: project context (what the project does, what tech stack it uses), coding conventions (naming, file structure, patterns), testing standards (framework, naming, coverage expectations), and project-specific rules (domain terminology, architectural decisions, integration patterns). You do not need to cover everything — start with the 10-15 conventions that matter most.
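Putting those four categories together, a minimal CLAUDE.md skeleton might look like this (the section names follow the steps below; the individual rules shown are illustrative, and the domain-terminology line is a hypothetical example):

```markdown
# Project Context

Next.js App Router application with TypeScript, Drizzle ORM, and PostgreSQL.

## Coding Conventions

- Use async/await for all asynchronous operations
- Components: functional only, use named exports

## Testing

- Framework: Vitest for unit tests, Playwright for E2E

## Project-Specific Rules

- "Workspace" refers to a billing account, not a folder
```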

Step 1: Project Context (2 Minutes)

Start with a brief description of the project. This gives Claude context for every decision it makes. Write 3-5 lines covering: what the project is, what tech stack it uses, and what architectural pattern it follows.

Example:

```markdown
# Project Context

This is a Next.js 16 App Router application with TypeScript, Drizzle ORM, PostgreSQL (Neon), and Tailwind CSS 4. The app uses Server Components by default and tRPC for type-safe API calls. Authentication is handled by NextAuth.js v5.

The project follows a feature-based folder structure: src/features/{feature-name}/ contains components, hooks, and server actions for each feature.
```

Why this matters: without project context, Claude might suggest Express.js patterns in a Next.js project, or recommend Prisma when you use Drizzle. The context section prevents these mismatches. Keep it factual and specific — Claude uses this to calibrate every subsequent generation.

💡 Start with What Causes the Most Review Comments

Not sure what conventions to write? Look at your last 10 code reviews. What did reviewers comment on most? Naming? Error handling? Import ordering? Test structure? Those are your first rules. The conventions that generate the most review comments are the ones where human memory fails most often. Encode them first — they have the highest immediate impact on review speed.

Step 2: Coding Conventions (4 Minutes)

List your top 10 coding conventions. Focus on: the patterns that cause the most code review comments, the decisions that differ from defaults, and the conventions that are unique to your project. Format: bullet points under a ## Coding Conventions heading.

Example conventions:

```markdown
## Coding Conventions

- Use async/await for all asynchronous operations (never .then() chains)
- Error handling: use Result pattern with { success: true, data } | { success: false, error } — never throw in business logic
- Naming: camelCase for variables and functions, PascalCase for components and types, UPPER_SNAKE for constants
- Imports: group by external → internal → relative, with blank line between groups
- Components: functional only, use named exports (not default exports)
- Database: use Drizzle query builder, never raw SQL. Transactions for multi-table mutations
- API routes: validate all inputs with Zod schemas. Return structured JSON responses
- No any type — use unknown for truly unknown types, then narrow with type guards
```
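To see why a rule like the Result pattern is worth writing down, here is a minimal TypeScript sketch of what that convention produces. The `findUser` function and the `User` shape are illustrative, not from any real project:

```typescript
// Hypothetical sketch of the Result pattern convention: business logic
// returns a tagged union instead of throwing.
type Result<T> =
  | { success: true; data: T }
  | { success: false; error: string };

interface User {
  id: string;
  name: string;
}

// Illustrative business-logic function following the convention
function findUser(id: string): Result<User> {
  if (id === "") {
    // Failure is a value, not an exception
    return { success: false, error: "id must not be empty" };
  }
  return { success: true, data: { id, name: "Ada" } };
}

const result = findUser("u1");
if (result.success) {
  // TypeScript narrows the union, so result.data is safely typed here
  console.log(result.data.name);
}
```

The payoff of encoding this rule: every caller is forced by the type system to handle the failure branch, which is exactly the kind of discipline that otherwise surfaces as repeated review comments.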

The test: imagine a new developer joining your team. What 10 things would you tell them in the first hour? Those are your first 10 rules. You can always add more later — start with what matters most.

⚠️ Do Not Try to Cover Everything on Day 1

The temptation: write 100 rules before starting. The result: 2 weeks of writing, rules that are untested, and many that turn out to be impractical. Better: write 10-15 rules today (10 minutes). Use them for a week. Add 5 more rules based on what Claude gets wrong. Repeat. After a month: 25-35 battle-tested rules. Each one earned its place by solving a real problem. 10 great rules beat 100 untested ones.

Step 3: Testing Standards (2 Minutes)

Define how tests should be written. Claude generates tests alongside code — these rules ensure the tests follow your patterns. Cover: the test framework, naming convention, what to test, and what not to test.

Example:

```markdown
## Testing

- Framework: Vitest for unit/integration, Playwright for E2E
- Test files: co-located with source (user-service.ts → user-service.test.ts)
- Naming: describe("FunctionName") → it("should return X when given Y")
- Unit tests: test business logic and utilities. Mock external dependencies
- Integration tests: test API routes with real database (use test database)
- E2E tests: only for critical user flows (login, checkout, onboarding)
- No snapshot tests for components — use assertion-based tests
```

Why testing rules matter: without them, Claude might generate Jest tests when you use Vitest, put test files in a separate __tests__ directory when you co-locate them, or generate snapshot tests when you prefer explicit assertions. Testing conventions are some of the most visible rules because every feature includes tests.
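To make the naming and assertion rules concrete, here is a hypothetical test written in the style those rules describe. The `slugify` utility is illustrative, and the `describe`/`it`/`expect` functions below are minimal local stand-ins so the sketch runs without Vitest installed — in a real project you would import them from "vitest":

```typescript
// Minimal stand-ins for Vitest's describe/it/expect (illustrative only)
const describe = (name: string, fn: () => void) => fn();
const it = (name: string, fn: () => void) => fn();
const expect = <T>(actual: T) => ({
  toEqual(expected: T) {
    if (JSON.stringify(actual) !== JSON.stringify(expected)) {
      throw new Error(`expected ${JSON.stringify(expected)}, got ${JSON.stringify(actual)}`);
    }
  },
});

// Illustrative utility under test; its test file would be co-located
// as slugify.test.ts per the convention above
function slugify(title: string): string {
  return title.toLowerCase().trim().replace(/\s+/g, "-");
}

// describe("FunctionName") → it("should return X when given Y"),
// assertion-based rather than snapshot-based
describe("slugify", () => {
  it("should return a hyphenated slug when given a spaced title", () => {
    expect(slugify("Hello World")).toEqual("hello-world");
  });
});
```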

ℹ️ The Verification Prompt Is Your Quality Check

After writing your CLAUDE.md: test it with a realistic prompt. Not 'write a hello world' — something real: 'Create an API endpoint that fetches a user by ID with error handling and a test.' Check the output: correct naming? Your error pattern? Your test framework? If the output matches your conventions: the rules work. If not: the rule was too vague. Make it more specific and test again. Two verification prompts: enough to validate the entire rule file.

Step 4: Verify and Iterate (2 Minutes)

Save the file as CLAUDE.md in your repo root. Open Claude Code and test with a realistic prompt: 'Create a new API endpoint that fetches a user by ID with error handling and a test.' Review the output: does it use your naming convention? Does it use your error handling pattern? Does it generate a test with your framework and naming? If yes: the rules are working. If no: the rule was too vague — make it more specific.
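As a reference point for that review, here is a hedged sketch of the kind of output the conventions in this tutorial would call for, reduced to a plain function so it is self-contained (no Next.js or Zod imports; `getUserById`, the `User` shape, and the in-memory `users` map are all illustrative):

```typescript
// What conforming output might look like: camelCase naming, input
// validation, and Result-style error handling with no thrown errors.
type Result<T> =
  | { success: true; data: T }
  | { success: false; error: string };

interface User {
  id: string;
  name: string;
}

// Stand-in for a database lookup (illustrative data)
const users: Record<string, User> = { u1: { id: "u1", name: "Ada" } };

function getUserById(id: string): Result<User> {
  // Validate the input before touching storage
  if (!/^u\d+$/.test(id)) {
    return { success: false, error: "invalid id format" };
  }
  const user = users[id];
  return user
    ? { success: true, data: user }
    : { success: false, error: "user not found" };
}
```

If Claude's actual output diverges from this shape — say, it throws instead of returning a failure value — that points you at the rule to tighten.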

Iterate: after using the rules for a few days, you will notice gaps — patterns Claude gets wrong because a rule is missing. Add rules as you discover gaps. The CLAUDE.md is a living document: a good rule file has 25-35 rules after 1 month and 30-50 after 3 months. It grows as you discover what Claude needs to know.

Share with your team: commit the CLAUDE.md to the repo. Every developer who uses Claude Code on this repo automatically gets the same rules. No configuration needed. The conventions are encoded once and applied everywhere. Your first CLAUDE.md is done — and it took less time than writing a Jira ticket.

  • Step 1: project context (tech stack, architecture, structure) — 2 minutes
  • Step 2: top 10 coding conventions (naming, patterns, error handling) — 4 minutes
  • Step 3: testing standards (framework, naming, what to test) — 2 minutes
  • Step 4: verify with a test prompt and iterate — 2 minutes
  • Total: 10 minutes from blank file to working CLAUDE.md
  • Living document: add rules as you discover gaps. 25-35 rules after 1 month
  • Share: commit to repo root. All team members get the same rules automatically