What Is Agentic Coding?
Agentic coding is a development workflow where AI doesn't just suggest code — it plans, executes, and iterates autonomously across multi-step tasks. Instead of asking the AI to 'write a function that validates email addresses,' you say 'add user registration with email validation, password hashing, and session management' and the AI handles the entire implementation: creating files, installing packages, writing tests, and debugging errors.
Tools like Claude Code's agent mode, Cursor's composer, and emerging frameworks like the Claude Agent SDK enable this pattern. The AI operates as an autonomous agent with access to your file system, terminal, and development tools — not just a text completion engine sitting in your editor.
This shift from 'AI suggests, human implements' to 'AI implements, human reviews' fundamentally changes the role of coding rules. When the AI is writing 50 lines of code, vague rules are annoying but manageable. When it's writing 500 lines across 10 files in a single session, vague rules become dangerous.
What Is Vibe Coding?
Vibe coding is the cultural counterpart to agentic coding. Coined in early 2025, it describes a development style where the programmer acts more like a creative director than an engineer — describing what they want in natural language and letting the AI handle the implementation details.
The 'vibe' in vibe coding refers to the feeling-based feedback loop: you describe the vibe you want ('make this dashboard feel snappy and modern'), the AI generates code, you look at the result and say 'more like this, less like that,' and the AI iterates. It's programming by intention rather than instruction.
Vibe coding works surprisingly well for prototyping, UI development, and exploratory coding. It works poorly for production systems that require specific security patterns, performance characteristics, or architectural consistency. The gap between 'looks right' and 'is right' is the gap that AI coding rules close.
Why Agentic Workflows Need Stronger Rules
In traditional AI-assisted coding, the human reviews every suggestion before it's applied. There's a natural checkpoint — you see the generated code, evaluate it, and accept or reject it. In agentic workflows, the AI may create multiple files, install dependencies, and run commands before you review anything.
This increased autonomy means mistakes compound. Without rules, an agentic session might install an unmaintained npm package, create a database query with SQL injection, hardcode an API key for testing, and commit all of it — in under 60 seconds. Each mistake individually is fixable. Combined, they create a codebase that needs a security audit after every AI session.
Stronger rules don't slow agentic workflows down — they make them viable for production code. The rules act as guardrails that let the AI operate autonomously within safe boundaries. 'Never install packages with fewer than 1,000 weekly npm downloads' prevents one class of problems. 'Always use parameterized queries' prevents another. The AI follows these constraints while still handling the creative implementation work.
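The parameterized-query rule is concrete enough to check mechanically. A minimal sketch of what it rules in and out, using Python's built-in sqlite3 module (the table, column, and payload are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")

user_input = "alice@example.com' OR '1'='1"  # classic injection payload

# Violates the rule: user input interpolated directly into the SQL string.
# The payload escapes the string literal and the OR clause matches every row.
unsafe = conn.execute(
    f"SELECT id FROM users WHERE email = '{user_input}'"
).fetchall()

# Follows the rule: a placeholder keeps the payload as inert data,
# so no row matches the literal payload string.
safe = conn.execute(
    "SELECT id FROM users WHERE email = ?", (user_input,)
).fetchall()

print(len(unsafe), len(safe))  # the unsafe query matches a row it shouldn't
```

The rule works as a guardrail precisely because a reviewer (or a linter) can verify it line by line without understanding the feature being built.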
Designing Rules for Autonomous AI Agents
Rules for agentic workflows need to be different from rules for traditional AI assistance. They need to be more explicit about boundaries, more specific about prohibited actions, and more comprehensive about the decisions the AI will make without human review.
The key pattern is boundary rules — rules that define what the AI can and cannot do autonomously. 'Never delete files without asking first,' 'Never run database migrations in production,' 'Never push to main without creating a PR.' These aren't coding style preferences; they're operational safety limits.
Architectural rules become critical in agentic contexts. When the AI is creating new files and directories, it needs to know your project structure: 'New API routes go in src/app/api/. New components go in src/components/. New utility functions go in src/lib/. Never create new top-level directories.' Without these rules, the AI invents its own organizational scheme with every session.
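Architectural rules like these can also be enforced mechanically, for example in a pre-commit hook that rejects files created outside the sanctioned directories. A hypothetical sketch (the directory list mirrors the rule above; the function name is invented):

```python
from pathlib import PurePosixPath

# Directories where the rule above allows the AI to create new files.
ALLOWED_ROOTS = ("src/app/api", "src/components", "src/lib")

def path_allowed(path: str) -> bool:
    """Return True if a newly created file lands inside an allowed directory."""
    p = PurePosixPath(path)
    return any(p.is_relative_to(root) for root in ALLOWED_ROOTS)

print(path_allowed("src/components/Button.tsx"))  # True
print(path_allowed("helpers/date.ts"))            # False: new top-level directory
```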
Dependency rules prevent supply chain risks: 'Only install packages from the npm registry. Check that packages have more than 1,000 weekly downloads and have been updated in the last 12 months. Never install packages that require postinstall scripts from unknown publishers.'
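The same criteria can be expressed as a pure function and applied to metadata fetched from the npm registry. A sketch with thresholds taken from the rule above (the function name and its inputs are assumptions, not part of any tool):

```python
from datetime import datetime, timedelta, timezone

MIN_WEEKLY_DOWNLOADS = 1_000
MAX_AGE = timedelta(days=365)  # "updated in the last 12 months"

def meets_dependency_policy(weekly_downloads: int, last_publish: datetime) -> bool:
    """Apply the download-count and freshness criteria from the rule."""
    fresh = datetime.now(timezone.utc) - last_publish <= MAX_AGE
    return weekly_downloads >= MIN_WEEKLY_DOWNLOADS and fresh

# Example metadata a tool might pull from the npm registry's download stats.
now = datetime.now(timezone.utc)
print(meets_dependency_policy(250_000, now - timedelta(days=30)))  # True
print(meets_dependency_policy(40, now - timedelta(days=30)))       # False: too few downloads
print(meets_dependency_policy(5_000, now - timedelta(days=700)))   # False: stale package
```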
- Boundary rules: what the AI can/cannot do autonomously (delete, deploy, commit)
- Architectural rules: where new files go, how the project is organized
- Dependency rules: what packages can be installed, what criteria they must meet
- Testing rules: what must be tested before the AI considers a task complete
- Review rules: what changes require human approval before proceeding
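Pulled together, these five categories might look like the following excerpt from a project rules file (a CLAUDE.md or .cursorrules-style document; every specific value is illustrative):

```markdown
## Boundaries
- Never delete files, run production migrations, or push to main without asking.

## Architecture
- New API routes: src/app/api/. New components: src/components/. Utilities: src/lib/.
- Never create new top-level directories.

## Dependencies
- npm registry only; >1,000 weekly downloads; updated in the last 12 months.

## Testing
- A task is complete only when its tests pass locally.

## Review
- Schema changes, auth changes, and new dependencies require human approval.
```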
Write boundary rules first: 'Never delete files without asking,' 'Never push to main,' 'Never run migrations in production.' These aren't style preferences — they're operational safety limits for autonomous AI.
The Risk of Vibe Coding Without Standards
Vibe coding without rules produces impressive demos and fragile production code. The AI optimizes for the immediate request — 'make it work, make it look good' — without considering the long-term implications that an experienced engineer would catch: error handling, edge cases, security, performance under load, and maintainability.
The most common failure mode is 'demo-quality code in production.' The AI generates a feature that works perfectly in development with test data, but breaks in production because it doesn't handle network errors, doesn't validate input, has no rate limiting, and stores session data in memory. Every one of these gaps could be prevented by a rule.
This doesn't mean vibe coding is bad — it means it needs guardrails. Think of AI rules as the equivalent of building codes for construction. An architect can be as creative as they want with the design, but the building still needs to meet structural and safety standards. Vibe coding is the architecture. Rules are the building codes.
Balancing Freedom and Control
The goal isn't to constrain the AI so much that agentic workflows lose their speed advantage. It's to create a framework where the AI can operate freely within safe boundaries — fast and creative inside the lines, stopped hard at them.
In practice, this means three tiers of rules. Tier 1 (hard limits): rules that must never be violated regardless of context — security rules, data handling rules, deployment restrictions. These are non-negotiable. Tier 2 (strong preferences): rules that should be followed by default but can be overridden with explicit justification — architectural patterns, testing requirements, code style. Tier 3 (soft guidance): rules that improve quality but aren't critical — naming conventions, comment style, formatting preferences.
Most teams over-index on Tier 3 rules (formatting, naming) and under-index on Tier 1 rules (security, boundaries). For agentic workflows, flip the priority: write Tier 1 rules first, add Tier 2 rules as patterns emerge, and let linters handle Tier 3.
The sweet spot is roughly 30 rules total: 10 hard limits, 15 strong preferences, and 5 pieces of project context. This gives the AI enough freedom to be creative while preventing the mistakes that are expensive to fix after the fact.
- Tier 1 (Hard Limits): Security, data safety, deployment restrictions — never violated
- Tier 2 (Strong Preferences): Architecture, testing, patterns — followed by default
- Tier 3 (Soft Guidance): Naming, comments, formatting — nice to have, not critical
- Priority for agentic work: Tier 1 > Tier 2 >> Tier 3
- Target: ~30 rules total (10 hard + 15 strong + 5 context)