Guides

AI Coding for Technical Writers

Technical writers create documentation, code examples, and tutorials. AI rules ensure every code example in docs follows the project's actual conventions, not generic patterns from the AI's training data.

5 min read · July 5, 2025

Same CLAUDE.md guides the codebase AND the documentation examples. Convention alignment: guaranteed. Documentation readers: learn the real patterns.

Code example generation, documentation freshness, testable docs in CI, and the writer-developer collaboration workflow

Technical Writers Need Accurate Code Examples

Technical writers produce: API documentation (with request and response examples), getting-started guides (with setup code and configuration examples), tutorials (with step-by-step code that readers follow along), and code samples (standalone examples that demonstrate specific patterns). Every code example must be accurate (it runs), current (it uses the project's current conventions), and consistent (it matches how the actual codebase is written). Without AI rules: the writer generates code examples that may not match the real codebase's conventions. With AI rules: every code example follows the same conventions as the real code.

The documentation code problem: a developer writes the actual codebase code (using the Result pattern, named exports, Vitest). A technical writer writes the documentation code (using try-catch, default exports, Jest). Both: syntactically correct. But: the documentation shows a different style than the actual code. A reader who follows the documentation: writes code that does not match the project. The disconnect: confuses readers and erodes documentation trust. AI rules: ensure the documentation code and the codebase code follow the same conventions because the same rules guide both.

AI rules as the single source of truth: the developer's AI reads CLAUDE.md and generates codebase code. The technical writer's AI reads the same CLAUDE.md and generates documentation code. Both: follow the same rules. The result: documentation examples that are indistinguishable from actual codebase code. The reader: learns the real patterns. The documentation: trustworthy because it matches reality. AI rule: 'One CLAUDE.md guides both development and documentation. The code examples in docs match the real code because both are generated from the same rules.'
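As a concrete sketch, a shared rules file for a project like the one described here might contain entries such as the following. The specific conventions are the ones used as examples throughout this guide (Result pattern, named exports, Vitest, fetch, Zod); they are illustrative, not prescribed:

```markdown
# CLAUDE.md (excerpt)

## Conventions
- Error handling: return a `Result` object from service functions; do not throw.
- Modules: named exports only; no default exports.
- Testing: Vitest; colocate `*.test.ts` files next to the source they cover.
- HTTP: `fetch` with async/await; validate request bodies with Zod.
- Errors over the wire: the structured error response format, never raw strings.
```

Because both the developer's AI and the writer's AI read this same file, a change to any line above propagates to both the codebase and the documentation examples on the next generation.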

Generating Code Examples with AI Rules

API documentation examples: prompt: 'Generate a code example showing how to call the createUser API endpoint. Use our project's conventions (from CLAUDE.md): fetch with async/await, Zod for request validation, and our structured error response format.' The AI: generates an example that matches the actual API's conventions. The reader: copies the example and it works with the real API. Without rules: the AI might generate an example using axios instead of fetch, without validation, and with a different error format.
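A sketch of what such a generated doc example might look like under those rules. The endpoint path, field names, and error shape below are hypothetical, chosen only to illustrate the fetch-plus-structured-error convention:

```typescript
// Hypothetical doc example: calling POST /api/users using the project's
// conventions (fetch with async/await, a Result return type, and a
// structured error response). All names here are illustrative.
type ApiError = { code: string; message: string };
type Result<T> = { ok: true; value: T } | { ok: false; error: ApiError };

interface User {
  id: string;
  email: string;
}

async function createUser(email: string): Promise<Result<User>> {
  const res = await fetch("/api/users", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ email }),
  });
  if (!res.ok) {
    // Assumes the API returns its structured error format as JSON.
    const error = (await res.json()) as ApiError;
    return { ok: false, error };
  }
  return { ok: true, value: (await res.json()) as User };
}
```

A reader can copy this shape directly; only the endpoint and payload fields would differ in a real project.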

Tutorial step-by-step code: prompt: 'Generate step 3 of the Getting Started tutorial: creating the first API route. Use our Next.js App Router conventions.' The AI: generates a route.ts file with the correct structure (App Router pattern, not Pages Router), the correct error handling (Result pattern from the rules), and the correct testing approach (Vitest, not Jest). The tutorial code: follows the same patterns the reader will see in the real codebase when they explore it.
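For instance, the generated tutorial step might look like the sketch below. Exporting an HTTP-method handler from a `route.ts` file is the real App Router convention; the route path and response payload are hypothetical:

```typescript
// Sketch of a tutorial step: app/api/hello/route.ts
// App Router convention: export a handler named after the HTTP method.
// Handlers receive a web-standard Request and return a web-standard Response.
export async function GET(request: Request): Promise<Response> {
  const name = new URL(request.url).searchParams.get("name") ?? "world";
  return Response.json({ greeting: `Hello, ${name}` });
}
```

Because the handler uses web-standard `Request`/`Response` objects, the example can also be exercised directly in a Vitest test, which feeds the testable-docs workflow described later.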

Standalone code samples: prompt: 'Generate a complete example of user authentication using our project's patterns. Include: the NextAuth configuration, the session provider, the login page, and the middleware.' The AI: generates all 4 files following the rules (the same patterns used in the real codebase). The sample: a working, convention-compliant starting point that the reader can use directly. AI rule: 'Every documentation code example: generated with the same CLAUDE.md that guides the real codebase. The examples: accurate by construction, not by manual verification.'

💡 Generate Code Examples, Do Not Hand-Write Them

Hand-written code examples: risk convention mismatch (the writer uses try-catch when the project uses Result pattern), typos (a missing semicolon or wrong import path), and staleness (the example reflects last year's conventions). AI-generated code examples: follow the current CLAUDE.md (convention match guaranteed), are syntactically correct (the AI does not make typos), and can be regenerated when conventions change (freshness through regeneration). The writer: prompts and reviews. The AI: generates the accurate code. The combination: faster and more accurate than hand-writing.

Maintaining Documentation Code Freshness

The documentation freshness problem: the codebase is updated (new error handling pattern, renamed API endpoint, updated framework version). The documentation code examples: still show the old patterns. The documentation: becomes stale. Readers: follow outdated patterns. The fix: regenerate documentation code examples when the codebase changes. With AI rules: prompt 'Regenerate the createUser API example using our current conventions.' The AI: reads the current CLAUDE.md (which has been updated with the codebase) and generates an example using the current patterns. Freshness: maintained through regeneration.

Automated freshness checks: include documentation code examples in the test suite. Each example: extracted and run as a test. If the example fails (the API changed, the function was renamed, the pattern was updated): the test fails. The writer: regenerates the example using the current rules. The test: passes. The documentation: current. This approach: the same as testing code examples in the README (a well-known practice). AI rules: ensure the regenerated examples match current conventions. AI rule: 'Testable documentation code: extracted and run in CI. When the test fails: the example is stale. Regenerate with current rules. The CI: catches stale documentation automatically.'
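A minimal sketch of the extraction step, assuming the docs are markdown files with fenced `ts` code blocks (the fence convention and file layout are assumptions, not a specific tool):

```typescript
// Pull fenced TypeScript code blocks out of a markdown document so CI can
// write each one to a temp file and run it with the project's test runner.
function extractCodeBlocks(markdown: string, lang = "ts"): string[] {
  // Match ```ts ... ``` fences; the capture group is the code body.
  const fence = new RegExp("```" + lang + "\\n([\\s\\S]*?)```", "g");
  const blocks: string[] = [];
  let match: RegExpExecArray | null;
  while ((match = fence.exec(markdown)) !== null) {
    blocks.push(match[1]);
  }
  return blocks;
}
```

In CI, each extracted block would be compiled and executed (for example with Vitest); a failing block flags a stale example to regenerate with the current rules.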

Version-specific documentation: if the project supports multiple versions (v1 and v2 of an API): each version's documentation uses the rules that were active during that version. The current CLAUDE.md: for current version docs. Archived rule sets: for older version docs. The AI: generates examples for the correct version by using the version-appropriate rules. AI rule: 'Current docs: current CLAUDE.md. Old version docs: archived rules from that version. The AI generates version-appropriate examples because the rules are version-matched.'
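One possible on-disk layout for version-matched rules (the directory names are illustrative):

```text
repo/
  CLAUDE.md              current conventions (guides v2 docs and code)
  rules/
    v1/CLAUDE.md         frozen copy of the rules from the v1 release
  docs/
    v2/                  examples generated with CLAUDE.md
    v1/                  examples generated with rules/v1/CLAUDE.md
```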

ℹ️ Testable Documentation Code: Stale Examples Caught in CI

Extract every code example from the documentation. Run each as a test in CI. The login example: tests that the authentication flow works. The API example: tests that the endpoint returns the expected response. When the codebase changes (endpoint renamed, parameter added): the documentation test fails. The writer: regenerates the example with current rules. The test: passes. The documentation: current. Without testable docs: stale examples persist for months. With testable docs: stale examples caught within one CI cycle.

The Technical Writer's AI Workflow

Step 1 โ€” Set up the rules: ensure the CLAUDE.md (or equivalent) is accessible in the documentation project. If the docs are in the same repo as the code: the CLAUDE.md is already there. If the docs are in a separate repo: copy the CLAUDE.md from the code repo. The writer's AI: reads the same rules as the developer's AI. Convention alignment: guaranteed.

Step 2 โ€” Generate, do not hand-write, code examples: instead of typing code examples manually (error-prone, may not match conventions): prompt the AI to generate them. The AI: generates examples that follow the current rules. The writer: reviews for accuracy (does the example demonstrate the concept correctly?) and adds surrounding prose (the explanation, the context, the narrative). The code: from the AI. The words: from the writer. The combination: accurate examples with human-quality explanations.

Step 3 โ€” Review with a developer: the technical writer generates the code example. A developer reviews: does this example match our real codebase? Is the convention correct? Does this pattern work in our current version? The developer review: catches any AI errors (hallucinated APIs, incorrect patterns). The process: writer generates with AI + developer verifies accuracy. Together: documentation that is both well-written (the writer) and technically accurate (the developer). AI rule: 'The technical writer: generates code examples with AI rules. The developer: verifies accuracy. The writer handles the narrative. The developer handles the technical verification. Both roles: essential for quality documentation.'

⚠️ Documentation Code ≠ Codebase Code Without Shared Rules

Developer writes codebase code: Result pattern, named exports, Vitest, Drizzle ORM. Technical writer writes documentation code: try-catch, default exports, Jest, Prisma. Both: syntactically correct. But: the documentation shows a completely different style than the codebase. A reader who follows the documentation: produces code that does not match the project. The disconnect: confusing and trust-eroding. One CLAUDE.md for both: the documentation code matches the codebase code because both follow the same rules.

Technical Writer Quick Reference

Quick reference for technical writers using AI coding tools.

  • The problem: documentation code examples that do not match the real codebase's conventions
  • The solution: same CLAUDE.md guides both codebase code and documentation code. Convention alignment guaranteed
  • API docs: AI generates examples using project conventions. Readers copy examples that work with the real API
  • Tutorials: step-by-step code follows current framework patterns. Not outdated or generic patterns
  • Freshness: regenerate examples when the codebase changes. The AI uses updated rules. Examples stay current
  • Testing: extract documentation code examples into CI tests. Stale examples: caught automatically
  • Workflow: writer generates code with AI → developer reviews accuracy → writer adds narrative prose
  • Version docs: current CLAUDE.md for current version. Archived rules for older version documentation