The Security Blind Spot in AI-Generated Code
AI coding assistants are remarkably good at generating code that works. It compiles, passes tests, and handles the happy path correctly. But 'works correctly' and 'is secure' are two very different standards, and AI assistants frequently fail the second one.
The problem isn't that AI models don't know about security — they do. Ask Claude or Copilot about SQL injection and they'll give you a textbook explanation. The problem is that without explicit rules, AI assistants optimize for functionality and readability, not security. They'll generate string-concatenated SQL because it's shorter. They'll hardcode an API key because it makes the example work. They'll skip input validation because it clutters the code.
This is where AI coding rules have the highest ROI. A single security ruleset — 20-30 lines applied to every repo — prevents entire classes of vulnerabilities at generation time instead of catching them in code review after the fact.
Vulnerability 1: SQL Injection
SQL injection is the most common vulnerability AI assistants introduce, especially when generating quick database queries or prototyping endpoints. The AI knows about parameterized queries — but when you ask it to 'write a query that filters users by email,' it often reaches for the simpler string interpolation approach first.
The rule: 'Always use parameterized queries or your ORM's query builder for all database operations. Never construct SQL strings with template literals, string concatenation, or f-strings. If the ORM doesn't support the query, use raw parameterized queries with placeholder syntax ($1, ?, :param).'
This rule works because it's specific and absolute. There's no judgment call — if the AI generates a SQL string with a variable inserted directly, it's violating the rule. The AI can easily comply because every database library supports parameterized queries.
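To make the contrast concrete, here is a minimal Python sketch using the stdlib sqlite3 driver (its placeholder is `?`; the same discipline applies to `$1` in Postgres, `%s` in MySQL, or `:param` in named style). The table and data are illustrative.

```python
import sqlite3

# In-memory database with one users table for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")

def find_user_unsafe(email: str):
    # VIOLATES the rule: user input is interpolated directly into the SQL string.
    query = f"SELECT id, email FROM users WHERE email = '{email}'"
    return conn.execute(query).fetchall()

def find_user_safe(email: str):
    # Follows the rule: '?' placeholder; the driver handles quoting and escaping.
    return conn.execute(
        "SELECT id, email FROM users WHERE email = ?", (email,)
    ).fetchall()

# A classic injection payload matches every row through the unsafe query,
# but matches nothing when parameterized: it is treated as a literal string.
payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns all rows
print(find_user_safe(payload))    # returns no rows
```

The parameterized version is also what the rule's "placeholder syntax" clause refers to: the query text and the data travel separately, so user input can never change the query's structure.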
Vulnerability 2: Cross-Site Scripting (XSS)
XSS vulnerabilities appear when AI assistants generate frontend code that renders user input without sanitization. React's JSX is safe by default (it escapes string content), but the AI still introduces XSS through unsafe HTML rendering APIs, direct DOM manipulation, or rendering user content in non-JSX contexts like email templates.
The rule: 'Never use unsafe HTML rendering methods unless the content is from a trusted source and has been explicitly sanitized with DOMPurify or an equivalent library. Never construct HTML strings with user input for email templates, PDF generation, or server-side rendering. Use framework-provided escaping functions for all dynamic content.'
For non-React frameworks, add specific escaping requirements: 'In Express apps using EJS templates, render user input with <%= %> (escaped), never <%- %> (raw). In Django templates, never apply the |safe filter to user input. In Go, use html/template (not text/template) for any HTML output.'
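For server-rendered contexts with no framework escaper at hand (email templates, PDF generation), the stdlib shows the principle. A minimal Python sketch with a hypothetical comment-rendering helper; DOMPurify-style sanitization is only needed when you must allow some HTML through:

```python
from html import escape

def render_comment(author: str, body: str) -> str:
    # Escape every piece of user-supplied content before it touches HTML.
    # escape() converts <, >, &, and (by default) quote characters.
    return f"<article><h4>{escape(author)}</h4><p>{escape(body)}</p></article>"

malicious = '<script>alert("xss")</script>'
html = render_comment("mallory", malicious)
# The payload survives as inert text (&lt;script&gt;...), never as live markup.
print(html)
```

The rule's key point is the default: escaping is applied to all dynamic content, and raw output is the explicit, sanitized exception.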
Vulnerability 3: Hardcoded Secrets and API Keys
This is the vulnerability AI assistants introduce most shamelessly. When generating example code, integration setups, or configuration files, the AI will happily write API_KEY = 'sk-abc123...' because it makes the example work immediately. The developer commits it, pushes, and the secret is in git history forever.
The rule: 'Never hardcode API keys, passwords, tokens, connection strings, or any credential in source code. Always read secrets from environment variables using process.env (Node.js), os.environ (Python), or os.Getenv (Go). For local development, use .env files loaded by dotenv. Never commit .env files to git.'
Pair this with a .gitignore rule: 'Ensure .env, .env.local, and .env.*.local are in .gitignore. If a secret is accidentally committed, consider it compromised and rotate it immediately — git history retains deleted content.'
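A minimal Python sketch of the fail-fast pattern the rule implies. PAYMENTS_API_KEY is a placeholder name, and the demo sets it in-process where a real deployment would use the shell environment or a dotenv-loaded .env file:

```python
import os

def get_required_secret(name: str) -> str:
    # Read from the environment and fail fast with a clear error,
    # instead of falling back to a hardcoded default value.
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# In real use this is set by the shell, the platform, or a .env loader.
os.environ["PAYMENTS_API_KEY"] = "test-key-for-demo"
api_key = get_required_secret("PAYMENTS_API_KEY")
```

Failing fast at startup matters: a hardcoded fallback like `os.environ.get(name, "sk-dev-key")` quietly reintroduces the vulnerability the rule exists to prevent.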
Vulnerability 5: Insecure Input Handling
AI assistants frequently skip input validation entirely. When generating an API endpoint, the AI assumes the request body matches the expected shape. No type checking, no length limits, no format validation. This opens the door to injection attacks, denial of service from oversized payloads, and application crashes from malformed input.
The rule: 'Validate and sanitize all external input at the API boundary. Use Zod (TypeScript), Pydantic (Python), or a validation library for your language. Define explicit schemas for every request body, query parameter, and URL parameter. Reject requests that don't match the schema with a 400 response — never silently coerce or ignore extra fields.'
For TypeScript projects specifically: 'Never use `any` for request body types. Always parse request bodies through a Zod schema before processing. Treat req.body as unknown until validated.'
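Zod and Pydantic express this declaratively; here is a hand-rolled stdlib Python stand-in that shows the same boundary behavior: explicit types, length limits, and hard rejection of extra fields. The endpoint shape and field names are illustrative.

```python
def validate_create_user(body: object) -> dict:
    """Validate an untrusted request body; raise ValueError -> HTTP 400."""
    # Treat the body as unknown until every field has been checked.
    if not isinstance(body, dict):
        raise ValueError("body must be a JSON object")
    email = body.get("email")
    if not isinstance(email, str) or "@" not in email or len(email) > 254:
        raise ValueError("email must be a string containing '@' (max 254 chars)")
    age = body.get("age")
    if not isinstance(age, int) or isinstance(age, bool) or not 0 <= age <= 150:
        raise ValueError("age must be an integer between 0 and 150")
    # Reject unexpected fields outright; never silently ignore or coerce them.
    extra = set(body) - {"email", "age"}
    if extra:
        raise ValueError(f"unexpected fields: {sorted(extra)}")
    return {"email": email, "age": age}

print(validate_create_user({"email": "a@example.com", "age": 30}))
```

In a handler, the pattern is: parse JSON, run it through the validator, and map ValueError to a 400 response before any business logic sees the data.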
A Complete Security Ruleset Template
Here's a consolidated security ruleset you can add to any CLAUDE.md or .cursorrules file. It covers all five vulnerability patterns above plus additional hardening rules. Copy it into a dedicated '# Security' section of your rule file.
The template is eight rules, short enough to not overwhelm the AI's attention, specific enough to prevent the most common vulnerabilities. For compliance-regulated industries, extend it with your specific requirements (HIPAA data handling, PCI-DSS cardholder rules, SOC 2 logging requirements).
For organizations using RuleSync, create this as a standalone 'security' ruleset and apply it to every repo. When a new vulnerability pattern emerges, update the ruleset once and every repo gets the fix on the next sync.
- SQL: Always use parameterized queries — never string-interpolate user input into SQL
- XSS: Never render unsanitized user content as HTML — use DOMPurify for trusted HTML
- Secrets: Never hardcode credentials — always use environment variables via process.env / os.environ
- Auth: Every resource endpoint must verify the authenticated user has access — never query by ID alone
- Input: Validate all external input with Zod / Pydantic — reject malformed requests with 400
- Logging: Never log secrets, tokens, passwords, or full request bodies containing PII
- Dependencies: Never install packages without checking their npm/PyPI download counts and maintenance status
- HTTPS: Never construct HTTP URLs for API calls — always use HTTPS
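To make the Auth rule concrete, here is a minimal Python sketch of an ownership check on a resource lookup. The in-memory store, exception class, and names are hypothetical; the same shape applies to any ORM query.

```python
# Hypothetical data store; in practice this is a database table.
DOCUMENTS = {
    "doc-1": {"owner_id": "user-a", "body": "quarterly report"},
}

class Forbidden(Exception):
    """Map to a 403 or 404 response at the framework layer."""

def get_document(doc_id: str, authenticated_user_id: str) -> dict:
    # Looking up by ID alone (DOCUMENTS[doc_id]) would let any authenticated
    # user read any document just by guessing IDs (an IDOR vulnerability).
    # Always pair the resource ID with an ownership/access check.
    doc = DOCUMENTS.get(doc_id)
    if doc is None or doc["owner_id"] != authenticated_user_id:
        # Same error for "missing" and "not yours": don't leak which IDs exist.
        raise Forbidden(doc_id)
    return doc

print(get_document("doc-1", "user-a")["body"])
```

With an ORM, the equivalent is filtering on both columns in one query, e.g. `WHERE id = ? AND owner_id = ?`, rather than fetching by ID and checking later (or not at all).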
Enforcing Security Rules Beyond the Rule File
AI coding rules are your first line of defense — they prevent vulnerabilities at generation time. But defense in depth means adding further layers: static analysis, secret scanning, and dependency auditing in your CI pipeline.
Combine your AI rules with tools like Semgrep (static analysis for custom security patterns), GitGuardian or truffleHog (secret scanning in git history), and npm audit or pip-audit (dependency vulnerability checking). The AI rules prevent the most common issues; the CI tools catch what slips through.
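As a sketch, a CI job wiring those tools together might look like the following hypothetical GitHub Actions fragment. Flags and versions are approximate and worth checking against each tool's documentation; this is a starting point, not a definitive pipeline.

```yaml
# Hypothetical GitHub Actions job combining the three detective controls.
security-scan:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
      with:
        fetch-depth: 0          # full history so secret scanning sees old commits
    - name: Static analysis
      run: semgrep scan --config auto --error
    - name: Secret scanning
      run: trufflehog git file://. --fail
    - name: Dependency audit
      run: pip-audit            # or `npm audit --audit-level=high` for Node projects
```

Each step fails the build on findings, which is the point: the rule file prevents most issues at generation time, and this job blocks the merge for anything that slips through.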
For compliance-regulated teams, this combination creates an auditable security posture: you can demonstrate that AI-generated code is subject to both preventive controls (rule files) and detective controls (CI scanning). This is exactly the kind of layered approach that SOC 2 and similar frameworks look for.