Security AI Rules: Shifting Left
Traditional security: the security team reviews code after it is written, finds vulnerabilities, and sends them back for fixing. Shift-left security: encode security requirements into AI rules so the AI generates secure code from the start. The security team's AI rules prevent vulnerabilities rather than finding them after the fact. This is more efficient (fixing during development costs 10x less than fixing in production) and more scalable (the security team cannot review every line of code, but AI rules apply to every line generated).
The security team's role with AI rules: define the security standards (OWASP Top 10, organization-specific threat model), encode them as AI rules (input validation patterns, authentication requirements, encryption standards), distribute them to all repos (via the rules platform), and verify compliance (automated security scanning that validates AI rules are followed).
The security AI rules hierarchy: organization-wide security rules (apply to all repos), technology-specific security rules (SQL injection prevention for database projects, XSS prevention for frontend projects), and compliance-specific security rules (PCI for payment systems, HIPAA for healthcare). The security team maintains all three levels.
OWASP Top 10 as AI Rules
A01 Broken Access Control: AI rule: 'Every endpoint: authenticated and authorized. Default: deny access. Explicit authorization check before data access. No direct object references without ownership verification (user can only access their own data). CORS: restrictive whitelist, not wildcard.' A02 Cryptographic Failures: AI rule: 'Sensitive data: encrypted at rest (AES-256) and in transit (TLS 1.2+). Passwords: bcrypt or Argon2 hash, never plaintext or reversible encryption. Keys: in KMS, never in code.'
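The default-deny access control rule can be sketched as a small framework-agnostic check. The types and role names here are illustrative assumptions, not a specific framework's API:

```typescript
// Default-deny authorization sketch (hypothetical types, not a specific framework).
type User = { id: string; role: "user" | "admin" };
type Resource = { ownerId: string };

// Explicit allow rules; anything not matched falls through to deny.
function canAccess(user: User | null, resource: Resource): boolean {
  if (!user) return false;                // unauthenticated: deny by default
  if (user.role === "admin") return true; // explicit admin allow
  return user.id === resource.ownerId;    // ownership check: users access only their own data
}
```

The key design choice is that denial is the fall-through case: forgetting a rule fails closed, not open.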
A03 Injection: AI rule: 'Database queries: parameterized (prepared statements). Never concatenate user input into queries. ORM queries: use the ORM's built-in parameterization. Shell commands: avoid entirely; if necessary, use allowlist-based input validation and safe APIs.' A07 Identification and Authentication Failures: AI rule: 'Passwords: enforce minimum length (12 characters), check against breached password lists. Sessions: secure cookies (HttpOnly, Secure, SameSite=Strict), server-side session management, timeout after inactivity. MFA: available for all users, required for admin accounts.'
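The A07 password rule translates directly into a validation function. This is a minimal sketch: the breached list here is a toy in-memory set, where a real system would query a breach-check service such as the Have I Been Pwned k-anonymity API:

```typescript
// Password policy sketch: minimum length 12 plus a breached-password check.
// BREACHED is a toy stand-in for a real breached-password data source.
const BREACHED = new Set(["password12345", "correcthorsebatterystaple"]);

function isAcceptablePassword(pw: string): boolean {
  if (pw.length < 12) return false;                 // enforce minimum length
  if (BREACHED.has(pw.toLowerCase())) return false; // reject known-breached passwords
  return true;
}
```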
A09 Security Logging and Monitoring Failures: AI rule: 'Log all authentication events, access control decisions, input validation failures, and application errors. Logs: structured JSON, no sensitive data in log messages (no passwords, tokens, or PII). Forwarded to SIEM for analysis and alerting.' These OWASP rules should be in every organization's base security rule set, applying to all repos regardless of technology stack.
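The A09 rule (structured JSON, no sensitive data) can be enforced at the point where log events are built. A sketch, with an illustrative denylist of field names:

```typescript
// Structured security log event sketch: JSON shape with denylisted fields
// redacted before serialization. The key list is illustrative, not exhaustive.
const SENSITIVE_KEYS = new Set(["password", "token", "ssn", "authorization"]);

function securityEvent(event: string, fields: Record<string, unknown>): string {
  const safe: Record<string, unknown> = { event, ts: new Date().toISOString() };
  for (const [k, v] of Object.entries(fields)) {
    safe[k] = SENSITIVE_KEYS.has(k.toLowerCase()) ? "[REDACTED]" : v;
  }
  return JSON.stringify(safe); // one JSON object per line, ready for SIEM forwarding
}
```

Redacting at construction time means no call site can accidentally leak a password into the SIEM.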
Despite decades of awareness, SQL injection and similar injection flaws remain the most exploited web vulnerabilities. The AI rule is simple but critical: never concatenate user input into SQL queries, shell commands, or template strings. Always use parameterized queries (prepared statements). ORMs like Prisma and Drizzle handle this automatically, but raw query builders and string templates do not. The AI must use the ORM's parameterized query methods exclusively.
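Parameterization means the SQL text is static and user input travels separately as data. A sketch in the node-postgres style, where the `{ text, values }` query object mirrors pg's placeholder form (no database connection needed to see the shape):

```typescript
// Parameterized query sketch: placeholders ($1, $2, ...) keep user input out
// of the SQL text, so it can never be parsed as SQL. Table/column names are illustrative.
function findUserByEmail(email: string) {
  return {
    text: "SELECT id, email FROM users WHERE email = $1", // static SQL, no concatenation
    values: [email],                                      // user input travels as data
  };
}

// Anti-pattern for contrast -- never do this:
// const q = "SELECT * FROM users WHERE email = '" + email + "'";
```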
Vulnerability and Secrets Management
Dependency vulnerability management: the security team defines how dependency vulnerabilities are handled. AI rule: 'Dependency scanning: automated in CI/CD (Snyk, Dependabot, Trivy). Critical vulnerabilities: block the build. High vulnerabilities: warn and require acknowledgment. Medium/Low: track in the vulnerability backlog. The AI should pin dependencies to known-good versions and never introduce dependencies with known critical vulnerabilities.'
Secrets management: the most common security violation in AI-generated code is hardcoded secrets (API keys, database passwords, tokens). AI rule: 'No secrets in source code. No secrets in environment files committed to git. Secrets: stored in the secrets management system (Vault, AWS Secrets Manager, GCP Secret Manager). Referenced via environment variables or secret injection. The AI must generate secret references (process.env.DATABASE_URL), never secret values.'
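The secret-reference pattern can be made fail-fast: read configuration from the environment and refuse to start when a required value is missing. A minimal sketch, where the variable names are illustrative and the values are injected by the secrets manager at deploy time:

```typescript
// Secret reference sketch: the code holds only the variable NAME,
// never the secret VALUE.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required environment variable: ${name}`);
  return value; // never log or echo this value
}

// Usage: const databaseUrl = requireEnv("DATABASE_URL");
```

Failing fast at startup surfaces misconfiguration immediately instead of at the first database call.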
Security scanning integration: the security team operates scanning tools that should be integrated into the development workflow. SAST (Static Application Security Testing): runs on every PR, catches code-level vulnerabilities. DAST (Dynamic Application Security Testing): runs against staging environments, catches runtime vulnerabilities. SCA (Software Composition Analysis): scans dependencies for known vulnerabilities. AI rule: 'The CI pipeline includes: SAST scan, SCA scan, and secret detection scan. The AI generates code that passes all three scans.'
A developer accidentally commits an API key. Without secret detection: the key is in git history forever (even if deleted in a subsequent commit). With secret detection in CI (GitLeaks, TruffleHog, GitHub secret scanning): the PR is blocked before merge. The key never enters the main branch. The developer rotates the key locally and re-submits. Cost: 5 minutes. Without detection: the key is exposed, must be rotated, all services using it must be updated, and the incident must be documented.
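The blocking idea can be illustrated with a toy pattern check. Real scanners (GitLeaks, TruffleHog) combine hundreds of patterns with entropy analysis; this sketch uses a single regex for AWS-style access key IDs, purely to show the pre-merge gate:

```typescript
// Toy secret-detection sketch: one pattern, illustrative only.
// AWS access key IDs start with "AKIA" followed by 16 uppercase alphanumerics.
const AWS_ACCESS_KEY = /\bAKIA[0-9A-Z]{16}\b/;

function containsLikelySecret(diff: string): boolean {
  return AWS_ACCESS_KEY.test(diff); // true => block the PR before merge
}
```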
Threat Modeling and Incident Response
Threat modeling: for new features, the security team conducts threat analysis. STRIDE model: Spoofing (can an attacker impersonate a user?), Tampering (can data be modified in transit or at rest?), Repudiation (can an action be denied?), Information Disclosure (can data be exposed?), Denial of Service (can the service be overwhelmed?), Elevation of Privilege (can an attacker gain unauthorized access?). AI rule: 'When generating a new feature: consider STRIDE threats. Authentication prevents Spoofing. Integrity checks prevent Tampering. Audit logging prevents Repudiation. Encryption prevents Information Disclosure. Rate limiting prevents DoS. Authorization prevents Elevation of Privilege.'
Incident response integration: AI-generated code should include hooks for incident detection and response. AI rule: 'Generate alerting for security-relevant events: authentication failures exceeding threshold (potential brute force), unusual data access patterns (potential data exfiltration), configuration changes (potential unauthorized modification), and error rate spikes (potential attack or failure). Alerts route to the security team's incident response workflow.'
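The brute-force alert in the rule above amounts to a sliding-window counter per account. A sketch with illustrative threshold and window values; in production the `true` signal would route to the SIEM rather than being returned to the caller:

```typescript
// Brute-force alert sketch: count failed logins per account in a sliding
// window and signal when a threshold is crossed. Values are illustrative.
class FailedLoginMonitor {
  private failures = new Map<string, number[]>();
  constructor(private threshold = 5, private windowMs = 60_000) {}

  // Returns true when the account has crossed the alert threshold.
  recordFailure(account: string, now = Date.now()): boolean {
    const recent = (this.failures.get(account) ?? []).filter(
      (t) => now - t < this.windowMs // drop failures outside the window
    );
    recent.push(now);
    this.failures.set(account, recent);
    return recent.length >= this.threshold;
  }
}
```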
Security review gates: the security team defines which changes require security review. AI rule: 'Changes that require security review: authentication/authorization modifications, cryptography changes, new external integrations (data leaves the system boundary), new data collection (PII or sensitive data), and infrastructure security changes (firewall rules, network configuration). The AI should flag these changes for security team review in the PR description.'
STRIDE is powerful because each threat has a well-known mitigation. Spoofing → Authentication. Tampering → Integrity checks (checksums, signatures). Repudiation → Audit logging. Information Disclosure → Encryption + access control. Denial of Service → Rate limiting + auto-scaling. Elevation of Privilege → Authorization + least privilege. When the AI generates a new feature, applying STRIDE systematically ensures no threat category is overlooked.
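The threat-to-mitigation mapping above can be encoded as data, which lets a review checklist verify that every STRIDE category was considered for a new feature. A sketch (the helper function name is hypothetical):

```typescript
// STRIDE-to-mitigation lookup, mirroring the mapping in the text.
const STRIDE_MITIGATIONS: Record<string, string> = {
  "Spoofing": "Authentication",
  "Tampering": "Integrity checks (checksums, signatures)",
  "Repudiation": "Audit logging",
  "Information Disclosure": "Encryption + access control",
  "Denial of Service": "Rate limiting + auto-scaling",
  "Elevation of Privilege": "Authorization + least privilege",
};

// A feature review is complete only when no category remains unconsidered.
function uncoveredThreats(considered: Set<string>): string[] {
  return Object.keys(STRIDE_MITIGATIONS).filter((t) => !considered.has(t));
}
```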
Security Engineering AI Rules Summary
Summary of AI rules for security engineering teams encoding organizational security standards.
- Shift left: encode security in AI rules so secure code is generated from the start
- OWASP Top 10: parameterized queries, access control defaults, encryption standards, logging
- Dependencies: automated scanning in CI. Critical vulns block builds. Known-good versions
- Secrets: never in source code. Reference via env vars. Use organizational secrets manager
- Scanning: SAST on PRs, DAST on staging, SCA for dependencies, secret detection
- Threat modeling: STRIDE for new features. Each threat maps to a security control
- Incident detection: alerts for auth failures, unusual access, config changes, error spikes
- Security review: flag auth, crypto, data collection, and infrastructure changes for review