Strict vs Flexible AI Rules: Finding the Balance

Too strict and the AI cannot solve novel problems. Too flexible and the AI generates inconsistent code. A guide to calibrating rule strictness by category: mandatory for security, strict for conventions, flexible for implementation, and open for exploration.

7 min read·May 22, 2025

React Hook Form for a 2-field search box because the rule demanded it — too strict. No form convention at all — too flexible.

Mandatory, strict, flexible, open: four levels, calibrated by consequence, with progression as teams learn

The Strictness Spectrum

AI rules exist on a spectrum from absolute ("never do X under any circumstance") to suggestive ("prefer X when appropriate"). Too strict: the AI cannot solve novel problems because every approach is constrained by rigid rules. A rule saying "always use React Hook Form for forms" prevents the AI from using a simple useState form for a 2-field search box. The AI follows the rule but the output is over-engineered. Too flexible: the AI generates inconsistent code because it has no guiding conventions. Without a form library rule: one component uses React Hook Form, another uses Formik, a third uses raw useState. Each works, but the codebase is inconsistent.

The balance: different categories of rules need different strictness levels. Security rules: mandatory (never override — parameterized queries, input validation, no secrets in code). Convention rules: strict (rarely override — naming conventions, file structure, import ordering). Implementation rules: flexible (AI chooses the best approach for the specific task). Exploration rules: open (the AI experiments with approaches, the developer evaluates). The strictness level matches the consequence of violation: security = critical, convention = consistency, implementation = preference.

This article provides a framework for calibrating rule strictness, examples at each level, signals that your rules are too strict or too flexible, and practical guidance for finding the balance. The goal: rules that guide the AI toward consistent, high-quality code without preventing it from solving novel problems effectively.

Mandatory Rules: Never Override (Security and Correctness)

Mandatory rules are absolute constraints that the AI must follow in every situation, with no exceptions. Language: "Always", "Never", "Must", "Do not". Examples: "Never store passwords in plain text — always bcrypt with a cost factor of 12+." "Always use parameterized queries — never concatenate user input into SQL strings." "Never commit secrets to the repository — use environment variables or a vault." "Always validate user input at the API boundary with Zod schemas." "Never use the any type in TypeScript — use unknown and narrow with type guards."

Mandatory rules cover security (input validation, authentication, encryption), correctness (type safety, error handling, data integrity), and compliance (GDPR data handling, audit logging, accessibility). The consequence of violating a mandatory rule: security vulnerability, data loss, compliance violation, or production outage. The AI should never find a scenario where violating these rules is acceptable. If a task seems to require violating a mandatory rule, the AI should ask the developer rather than override the rule.
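The "unknown, not any" rule can be made concrete. A minimal sketch, assuming a hypothetical User shape and parseUser helper at an API boundary: the payload enters as unknown and is narrowed with a type guard, never cast through any.

```typescript
// Hypothetical User shape for illustration; in a real codebase a Zod
// schema would typically play the role of the hand-written guard.
interface User {
  id: number;
  email: string;
}

// Type guard: narrows unknown to User without casting through any.
function isUser(value: unknown): value is User {
  if (typeof value !== "object" || value === null) return false;
  const candidate = value as Record<string, unknown>;
  return (
    typeof candidate.id === "number" && typeof candidate.email === "string"
  );
}

// Validate at the boundary; reject anything that fails the guard.
function parseUser(json: string): User {
  const data: unknown = JSON.parse(json);
  if (!isUser(data)) {
    throw new Error("Invalid user payload");
  }
  return data; // narrowed to User by the guard above
}
```

The guard keeps the unsafe assertion contained in one audited function; everything downstream works with a proven User type.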

The risk of too many mandatory rules: if 80% of your rules are mandatory, the AI is over-constrained (cannot choose the best approach for novel problems), slow to generate (it checks every output against many hard constraints), and potentially unable to complete tasks (conflicting mandatory rules create deadlocks). Keep mandatory rules focused on security and correctness. 10-20% of total rules should be mandatory. The rest: strict, flexible, or open.

  • Language: 'Always', 'Never', 'Must', 'Do not' — absolute, no exceptions
  • Covers: security (injection, auth, secrets), correctness (types, errors), compliance (GDPR, a11y)
  • Consequence of violation: vulnerability, data loss, compliance failure, outage
  • 10-20% of rules should be mandatory — more = over-constrained AI
  • If AI seems to need to violate: it should ask the developer, not override
💡 10-20% Mandatory, No More

If 80% of your rules are mandatory ('Always', 'Never'), the AI is over-constrained: cannot choose the best approach for novel problems. Keep mandatory at 10-20% (security, correctness). The rest: strict (40-60%), flexible (20-30%), open (0-10%). The pyramid matches consequence severity.

Strict Rules: Rarely Override (Conventions and Patterns)

Strict rules are strong preferences that should be followed in almost all cases but can be overridden with good reason. Language: "Prefer", "Use X unless Y", "Default to", "Standard pattern is". Examples: "Prefer async/await over .then() chains — exception: when working with legacy callback-based APIs that require .then()." "Use Zustand for global state management — exception: if the feature requires Redux middleware (logging, persistence)." "Default to Server Components — add use client only when the component needs useState, useEffect, or browser APIs."

Strict rules cover coding conventions (naming, formatting, import ordering), pattern selection (which library for which job, which pattern for which problem), and architectural decisions (file structure, module boundaries, data flow direction). The consequence of violating a strict rule: inconsistency (one module uses a different pattern than the rest), but not a security issue or correctness bug. Strict rules are about team consistency and code review expectations. A developer violating a strict rule should have a documented reason in a code comment.

Strict rules enable the AI to generate consistent code across the project (every component follows the same patterns), with escape hatches for edge cases (the exception clause allows the AI to deviate when the standard pattern does not fit). The AI should follow the strict rule by default and deviate only when the specific task falls into the documented exception case. 40-60% of rules should be strict — they are the foundation of codebase consistency.
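The exception clause can stay contained even when it fires. A sketch, assuming a hypothetical legacy callback API (legacyReadConfig) that the team cannot change: the callback style is wrapped in a Promise once, so every other caller still follows the async/await convention.

```typescript
// Callback signature of the hypothetical legacy API.
type Callback = (err: Error | null, value?: string) => void;

// Stand-in for the legacy callback-based API the team cannot change.
function legacyReadConfig(key: string, cb: Callback): void {
  setTimeout(() => cb(null, `value-for-${key}`), 0);
}

// Promisified wrapper: the exception to the async/await rule is
// contained in this one function.
function readConfig(key: string): Promise<string> {
  return new Promise((resolve, reject) => {
    legacyReadConfig(key, (err, value) => {
      if (err || value === undefined) reject(err ?? new Error("empty value"));
      else resolve(value);
    });
  });
}

// Everywhere else, the default pattern applies: async/await, no .then() chains.
async function loadTheme(): Promise<string> {
  const theme = await readConfig("theme");
  return theme;
}
```

The strict rule's "unless Y" clause is satisfied at exactly one call site; the rest of the codebase never sees the deviation.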

  • Language: 'Prefer', 'Use X unless Y', 'Default to', 'Standard pattern is'
  • Covers: conventions (naming, formatting), patterns (libraries, architecture), file structure
  • Exception clause: 'unless Y' gives the AI permission to deviate in documented edge cases
  • 40-60% of rules should be strict — the foundation of codebase consistency
  • Violation consequence: inconsistency (not security or correctness) — requires documented reason

Flexible Rules: AI Chooses (Implementation Details)

Flexible rules are guidance that the AI can interpret based on the specific situation. Language: "Consider", "When possible", "For simple cases X, for complex cases Y". Examples: "Consider memoization with useMemo for expensive computations — only when profiling shows a performance problem." "For simple forms (1-3 fields): useState is fine. For complex forms (4+ fields with validation): use React Hook Form." "When possible, use built-in browser APIs instead of adding a library (Intl for dates, URL for parsing, fetch for HTTP)."

Flexible rules cover implementation decisions (which approach for this specific task), performance optimization (when to optimize, which technique to use), and library selection for non-standardized concerns (the project has no standard for a specific task type). The AI should read the flexible rule, evaluate the current task, and choose the best approach. A flexible rule is a decision framework, not a directive: the AI makes the decision within the framework.

Flexible rules prevent the two failure modes of overly strict rules: over-engineering (React Hook Form for a 2-field search box because the strict rule says "always use React Hook Form") and under-engineering (raw string manipulation for complex date formatting because the strict rule says "always use built-in APIs"). Flexible rules say: use your judgment, here is the framework for deciding. 20-30% of rules should be flexible — they cover the judgment calls that vary by task.
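The "built-in APIs when possible" guidance, sketched as code. Assuming a hypothetical formatDate helper: Intl.DateTimeFormat handles simple date display without a dependency, while a date library remains a reasonable choice for complex cases (parsing, time zones, relative times).

```typescript
// Flexible rule in practice: reach for the built-in Intl API before
// adding a date-formatting dependency.
function formatDate(date: Date, locale = "en-US"): string {
  return new Intl.DateTimeFormat(locale, {
    year: "numeric",
    month: "long",
    day: "numeric",
  }).format(date);
}
```

In an en-US locale this renders dates like "May 22, 2025". The flexible rule leaves the library decision open for when requirements outgrow Intl; the AI judges per task, which is exactly the point.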

  • Language: 'Consider', 'When possible', 'For simple X, for complex Y'
  • Covers: implementation decisions, performance optimization, library selection
  • Decision framework: the rule provides criteria, the AI makes the judgment call
  • Prevents: over-engineering (strict rule forces complex solution for simple task)
  • 20-30% of rules should be flexible — judgment calls that vary by specific task

Open: No Rule (Exploration and Novel Tasks)

Some areas should have no rules at all. The AI explores freely, the developer evaluates the output. No-rule areas: novel features with no established pattern in the codebase (the AI proposes an approach, the developer refines it), spike/prototype work (speed matters more than consistency — rules slow down exploration), and areas where the team has not yet decided on a convention (adding a rule prematurely locks in an uninformed decision). Not every part of the codebase needs rules. Rules should grow from experience, not from anticipation.

The danger of premature rules: a rule written before the team has experience with a pattern may be wrong. Example: a team adds the rule "always use tRPC for internal APIs" before building their first tRPC endpoint. After building 5 endpoints, they discover tRPC does not fit their multi-language backend. The premature rule wasted time enforcing a convention that did not fit. Better: no rule until the team has built 3-5 implementations, then codify the pattern that emerged. Rules from experience > rules from anticipation.

The progression: open (no rule, AI explores) → flexible (pattern emerging, provide a decision framework) → strict (pattern established, enforce as default with exceptions) → mandatory (security/correctness, no exceptions). Rules should migrate through this progression as the team gains experience. New feature area: start with no rules. After 3-5 implementations: add flexible guidance. After 10+ implementations: promote to strict convention. After security review: mandatory for security concerns. The rule lifecycle matches the team's learning curve.
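The lifecycle can be illustrated with a hypothetical rules-file entry for the tRPC example above; the wording is illustrative, not taken from a real project's rules:

```markdown
<!-- Stage 1 (open): no entry in the rules file at all. -->

<!-- Stage 2 (flexible), after 3-5 implementations: -->
- Consider tRPC for typed internal APIs; for simple internal
  fetches, a plain fetch wrapper is fine.

<!-- Stage 3 (strict), after 10+ implementations: -->
- Default to tRPC for internal APIs unless the consumer is a
  non-TypeScript service; document exceptions in a code comment.

<!-- Stage 4 (mandatory), after security review: -->
- Always validate tRPC procedure inputs with Zod schemas.
```

Note how the language hardens at each stage: "Consider" becomes "Default to ... unless", which becomes "Always".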

  • No rule: novel features, prototypes, areas without established patterns
  • Danger of premature rules: lock in uninformed decisions before team has experience
  • Better progression: no rule → flexible → strict → mandatory as experience grows
  • Rules from experience > rules from anticipation — build first, then codify
  • 0-10% of areas should be rule-free — space for exploration and novel approaches
⚠️ Rules from Experience, Not Anticipation

A team adds 'always use tRPC' before building their first tRPC endpoint. After 5 endpoints: tRPC does not fit their multi-language backend. The premature rule wasted effort. Better: no rule until 3-5 implementations, then codify the pattern that emerged. Build first, then rule.

Signals That Your Rules Need Recalibration

Too strict signals: the AI asks permission for every minor decision (cannot resolve simple choices independently), the AI generates over-engineered solutions (uses React Hook Form for a search input because the rule demands it), developers frequently override AI output (the rules produce correct-but-wrong code), or the AI fails to complete tasks (conflicting mandatory rules create deadlocks). Fix: demote some strict rules to flexible, remove rules for areas that do not need them.

Too flexible signals: every component looks different (no consistent patterns across the codebase), code reviews focus on style and convention disagreements (the AI did not enforce consistency), new developers cannot predict the AI output (the AI makes different choices on similar tasks), or the AI generates technically correct but stylistically inconsistent code (mixing camelCase and snake_case, mixing async/await and .then()). Fix: promote some flexible rules to strict, add conventions for patterns that appear in 5+ places.

The right balance signals: the AI generates consistent code that matches the codebase style (strict rules working), the AI makes good judgments on implementation details (flexible rules working), security and correctness rules are never violated (mandatory rules working), and the AI proposes novel approaches for new feature areas (open areas allowing exploration). The codebase looks like it was written by one developer (consistent) who makes smart choices (flexible) and never compromises on safety (mandatory).

ℹ️ The Right Balance Looks Like This

Consistent code matching codebase style (strict working). Smart judgments on implementation (flexible working). Security never compromised (mandatory working). Novel approaches for new areas (open working). The codebase looks like one developer who makes smart choices and never compromises on safety.

Strictness Level Summary

Summary of the four rule strictness levels.

  • Mandatory (10-20%): 'Never', 'Always' — security, correctness, compliance. No exceptions
  • Strict (40-60%): 'Prefer', 'Default to', 'Unless' — conventions, patterns. Rarely override with documented reason
  • Flexible (20-30%): 'Consider', 'For simple X, complex Y' — implementation, AI judges per task
  • Open (0-10%): no rule — exploration, prototypes, novel features. AI proposes, developer evaluates
  • Progression: open → flexible → strict → mandatory as team experience grows
  • Too strict: over-engineered output, AI asks permission for everything, frequent overrides
  • Too flexible: inconsistent code, style disagreements in review, unpredictable AI choices
  • Right balance: consistent style + smart judgments + safe defaults + room for exploration