The AI Pair Programming Model
Traditional pair programming: two developers, one keyboard. The driver writes code, the navigator reviews in real time. AI pair programming: one developer, one AI. But the roles are more fluid: sometimes the AI drives (generating code from a prompt), sometimes the human drives (writing code while the AI provides suggestions), and sometimes both contribute to the same function (the AI generates the structure, the human fills in the business logic).
The three modes of AI pair programming: AI-driven (the AI generates most of the code, the human reviews and guides), human-driven (the human writes code, the AI provides autocomplete and suggestions), and collaborative (the human describes the intent, the AI generates a draft, the human refines, the AI iterates). Each mode is appropriate for different tasks. The skill: knowing which mode to use when.
Rules enhance every mode: in AI-driven mode, rules ensure the AI generates convention-compliant code. In human-driven mode, rules guide the AI's suggestions toward project patterns. In collaborative mode, rules provide the shared context that keeps both human and AI aligned. Without rules: the AI's suggestions drift from project conventions over a long session. With rules: alignment is maintained automatically.
When to Let the AI Lead
AI-driven mode works best for: boilerplate generation (CRUD endpoints, data models, configuration files), pattern application (creating a new component that follows an established pattern), and repetitive tasks (generating similar functions for different entities, creating test suites for multiple endpoints). In these cases: the AI produces correct code faster than the human can type. The human's role: review the output, verify correctness, and make domain-specific adjustments.
Effective AI-driven prompts: be specific about the desired outcome, reference the pattern to follow (if the project has an existing example), and specify the scope ('Create the endpoint. Do not modify any existing files.'). Example: 'Create a new API endpoint GET /api/users/:id that: fetches a user by ID from the database using Drizzle, returns a structured response with the user data, returns 404 if the user is not found, and includes a Vitest test with happy path and not-found cases.' This prompt: gives the AI everything it needs to generate correct code in one pass.
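A hedged sketch of the code that prompt might produce. `findUserById` is a stand-in for the real Drizzle query, and the response envelope (`data` / `error` keys, numeric status) is an assumption, not a project convention:

```typescript
// Sketch of the endpoint from the example prompt. `findUserById` stands in
// for the real Drizzle query; the response shape is an assumption.
type User = { id: string; name: string };

// Stand-in for the database layer; a real version would query the users table.
async function findUserById(id: string): Promise<User | undefined> {
  const fixtures: User[] = [{ id: "u1", name: "Ada Lovelace" }];
  return fixtures.find((u) => u.id === id);
}

type ApiResponse =
  | { status: 200; body: { data: User } }
  | { status: 404; body: { error: string } };

async function getUser(id: string): Promise<ApiResponse> {
  const user = await findUserById(id);
  if (user === undefined) {
    // The prompt calls out this case explicitly, so the AI generates it.
    return { status: 404, body: { error: `user ${id} not found` } };
  }
  return { status: 200, body: { data: user } };
}
```

Note that the not-found branch exists only because the prompt named it; a Vitest test would assert both arms, exactly as the prompt specifies.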
Review AI-driven output carefully: the AI moves fast and the code looks polished. But it might use a library function that does not exist in your version, handle an error case incorrectly, or generate a test that passes but does not actually verify the behavior. AI rule: 'When the AI drives: your job shifts from writing to reviewing. Review with the same rigor you would apply to code from a colleague; the AI is your colleague, not your rubber stamp.'
Vague prompt: 'Create a user endpoint.' The AI generates: something that works but may not match your patterns. You spend 10 minutes adjusting. Specific prompt: 'Create GET /api/users/:id using Drizzle to fetch from the users table, returning a structured response with 404 handling, and a Vitest test with happy and not-found cases.' The AI generates: exactly what you need. You spend 1 minute reviewing. The time invested in writing a specific prompt: pays for itself immediately in reduced revision.
When to Take Over from the AI
Human-driven mode works best for: complex business logic (the domain knowledge is in your head, not in the rules), architectural decisions (choosing between approaches requires context the AI does not have), debugging (investigating unexpected behavior requires understanding the system holistically), and security-sensitive code (authentication flows, encryption, access control, where subtle mistakes have severe consequences).
When to switch from AI-driven to human-driven: the AI generates code that is structurally correct but logically wrong (the AI does not understand the business domain well enough), the AI is stuck in a loop (regenerating similar incorrect code despite different prompts), or the task requires understanding code across many files simultaneously (the AI's context is limited). AI rule: 'Recognize when the AI is not the right tool for the current task. Switching to human-driven mode is not a failure; it is efficient resource allocation.'
Using AI suggestions in human-driven mode: write the function signature and the first few lines. The AI provides autocomplete suggestions for the remainder. Accept suggestions that match your intent. Reject suggestions that diverge. This mode: leverages the AI for speed while keeping the human in control of the logic. AI rule: 'In human-driven mode: accept AI suggestions for syntax and boilerplate. Write business logic manually. The AI handles the mechanical parts while you handle the thinking parts.'
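The mechanical/thinking split looks something like this in practice. The function, its signature, and the discount rules below are all invented for illustration:

```typescript
// Illustrative human-driven function: the validation shape is the kind of
// boilerplate an AI autocompletes reliably; the pricing rule is the part
// you write by hand. The rules and thresholds here are hypothetical.
type Order = { subtotal: number; items: number; isFirstOrder: boolean };

function applyDiscount(order: Order): number {
  // Mechanical part: input validation. Accept the AI's suggestion here.
  if (order.subtotal < 0 || order.items < 0) {
    throw new Error("invalid order");
  }
  // Thinking part: the domain rule lives in your head, not in the rules
  // file, so type it yourself rather than prompt for it.
  let rate = 0;
  if (order.isFirstOrder) rate += 0.1;
  if (order.subtotal > 100) rate += 0.05;
  return order.subtotal * (1 - rate);
}
```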
The AI generates the function three times with the same logical error. You rephrase the prompt. Same error. You add more context. Same error. Continuing to prompt: wastes time. Switching to human-driven mode: you write the 15 lines of business logic in 3 minutes. The AI could not understand the domain nuance from a prompt, but you understand it from experience. Knowing when to stop prompting and start typing: is the key AI pair programming skill.
Session Flow and Context Management
Session structure: start by reading the rules (the AI loads them automatically, but reminding yourself what rules exist helps you prompt effectively). Plan the session: what will you build? Break it into 3-5 tasks. For each task: decide whether AI-driven or human-driven mode is more appropriate. Execute: alternate between modes as needed. End the session: commit the work, note any rules that need updating based on the session's experience.
Context management in long sessions: the AI's effective context degrades in very long sessions (the conversation grows and earlier context is compressed). Techniques: start new conversations for new tasks (do not reuse a conversation from the previous task), reference files explicitly ('Look at src/services/user-service.ts for the pattern'), and repeat key constraints when the AI seems to forget them ('Remember: we use the Result type, not throw'). AI rule: 'Long AI sessions: start a fresh conversation every 30-60 minutes or when switching to a new task. This resets the context and ensures the rules are fully loaded.'
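The 'Result type, not throw' constraint quoted above, sketched minimally. The type and the `parsePort` example are assumptions for illustration, not a specific library:

```typescript
// Minimal sketch of a Result type: errors are values in the return type,
// so callers must handle them instead of relying on thrown exceptions.
type Result<T, E = Error> =
  | { ok: true; value: T }
  | { ok: false; error: E };

// Hypothetical example: failure is expressed as a Result, never thrown.
function parsePort(raw: string): Result<number, string> {
  const n = Number(raw);
  if (!Number.isInteger(n) || n < 1 || n > 65535) {
    return { ok: false, error: `invalid port: ${raw}` };
  }
  return { ok: true, value: n };
}
```

Repeating a constraint like this in the prompt is cheap; a one-line reminder keeps the AI's suggestions on-pattern when conversation history starts to crowd out the rules.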
The end-of-session check: before committing, review all changes made during the session. AI-generated code accumulates quickly; a 2-hour session might produce 500+ lines of changes. Review: does all the code follow the rules? Are all tests passing? Are there any hallucinated imports or API calls? Is the code complete (no TODO placeholders the AI left behind)? AI rule: 'The end-of-session review is mandatory. AI pair programming produces code fast, fast enough to outpace your ability to verify in real time. The batch review at the end catches issues that slipped by during the session.'
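The placeholder part of that review can be automated. A minimal sketch as a shell function, assuming TODO/FIXME-style markers and a source directory passed as an argument; pair it with your type-checker (hallucinated imports usually fail to compile) and your full test suite:

```shell
# Minimal sketch of a placeholder sweep; the marker strings and directory
# layout are assumptions -- adjust to your project.
sweep_placeholders() {
  dir="$1"
  # grep exits 0 when it finds a match, 1 when the tree is clean
  if grep -rn -e 'TODO' -e 'FIXME' "$dir" 2>/dev/null; then
    echo "placeholders found: resolve before committing"
    return 1
  fi
  echo "no placeholders"
}
```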
A 3-hour AI session in one conversation: by hour 3, the AI's context is crowded with earlier code, earlier discussions, and earlier mistakes. The rules are still loaded but compete with 50,000 tokens of conversation history. Starting a fresh conversation: the AI loads the rules cleanly, has full context capacity for the current task, and does not carry forward misunderstandings from earlier in the session. Treat conversations like git branches: fresh for each task, not one infinite thread.
AI Pair Programming Summary
Summary of effective AI pair programming techniques.
- Three modes: AI-driven (boilerplate, patterns), human-driven (business logic, security), collaborative (intent → draft → refine)
- AI leads: for CRUD, pattern application, repetitive tasks. Human reviews every output
- Human leads: for complex logic, architecture, debugging, security. AI assists with suggestions
- Mode switching: recognize when AI is stuck or wrong. Switch to human-driven without guilt
- Prompts: be specific about outcome, reference existing patterns, specify scope
- Long sessions: fresh conversation every 30-60 min. Reference files explicitly. Repeat key constraints
- End-of-session: batch review all changes. Check for hallucinated APIs, incomplete code, failing tests
- Rules: enhance every mode. Maintain alignment between human intent and AI output throughout the session