Enterprise

AI Rules for 100-Person Engineering Orgs

A 100-person engineering org has 8-12 teams, multiple tech stacks, and enough complexity to need standards but enough agility to move fast. This guide covers the right level of AI rules for this organizational size.

6 min read·July 5, 2025

100 engineers: enough complexity to need standards, enough agility to keep them lightweight. One page of rules beats 50 pages of process.

Minimal viable framework, lightweight governance, show-don't-mandate adoption, and scaling signals

100 Engineers: The Sweet Spot for AI Rules

At 100 engineers, you have enough teams (8-12) that inconsistency is visible, enough codebase diversity (frontend, backend, mobile, infrastructure) that one-size-fits-all rules do not work, and enough organizational overhead that manual knowledge transfer breaks down. But you are still small enough that a single architect knows the full system, teams can coordinate informally, and heavy governance processes add more cost than value.

The 100-person AI rules principle: lightweight governance, strong conventions, minimal tooling. You do not need: a rules committee, a self-service portal, compliance dashboards, or a dedicated platform team. You do need: a shared rule file that all teams follow, technology-specific additions for each stack, a simple distribution mechanism (copy the file or use a shared repo), and a culture of updating rules when conventions change.

The organizational structure at 100: typically 1 VP/Director of Engineering, 3-4 Engineering Managers, 8-12 teams of 8-12 developers, and 2-4 staff/principal engineers who set technical direction. AI rules ownership: staff engineers write the organization-level rules, tech leads customize per team, and EMs ensure adoption. No dedicated rules team needed at this size.

The Minimal Viable Rules Framework

Organization rules (1 page): the top 15-20 conventions that apply to all teams. Security basics (input validation, parameterized queries, no secrets in code), testing requirements (minimum coverage, test naming), error handling pattern (structured errors, proper logging), and code quality (linting, formatting, type safety). AI rule: 'The organization rules fit on one page. If they do not fit: they are too detailed for the organization level. Move specifics to technology-level rules.'
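As an illustrative sketch, a one-page base.md covering the four categories above might begin like this (the exact wording and thresholds are assumptions, not a recommended standard):

```markdown
# Organization AI Rules (base.md) — applies to all teams

## Security
- Validate all external input at the service boundary.
- Use parameterized queries; never build SQL by string concatenation.
- No secrets in code or committed config files.

## Testing
- New code ships with tests; keep coverage at or above the agreed minimum.
- Name tests after the behavior they verify, not the method they call.

## Error handling
- Return structured errors; log with context, never swallow exceptions.

## Code quality
- Code must pass the shared linter and formatter before review.
- Prefer explicit types at public boundaries.
```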

Technology rules (1 page per stack): TypeScript/React rules, Go rules, Python rules, infrastructure/Terraform rules. Each page covers: language idioms, framework patterns, dependency conventions, and testing patterns specific to that technology. AI rule: 'Technology rules are maintained by the staff engineer or senior developer most experienced in that technology. They update when the stack evolves (new framework version, new library adoption).'

Team rules (optional, short): project-specific additions that do not fit the organization or technology level. Domain terminology, project-specific patterns, integration-specific conventions. AI rule: 'Team rules are optional at 100 engineers. Most teams operate well with just organization + technology rules. Team rules are added when a project has genuinely unique requirements (domain-specific terminology, unusual architecture constraints).'

💡 One Page of Rules > Fifty Pages of Process

At 100 engineers, a 1-page rule file that every developer reads beats a 50-page standards document that no one reads. Focus on the 15-20 conventions that cause the most code review friction and defects; those rules apply to 80% of the code the AI generates. Everything else is handled through code review and team conventions. Expand the rules only once the 1-page version is fully adopted and effective.

Simple Distribution and Adoption

Distribution at 100 engineers: a shared repository with rule files, and a simple script or CI job that copies rule files to project repos. No need for a sophisticated sync platform. Pattern: github.com/org/ai-rules contains: base.md (organization rules), typescript.md, go.md, python.md (technology rules). Each project repo: CLAUDE.md that includes the relevant base + technology file. AI rule: 'Keep distribution simple. A shared repo + manual copy works at 100 engineers. Automate only when the manual process becomes a bottleneck (usually around 150-200 engineers).'
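The "simple script" can be a few dozen lines. A minimal sketch, assuming the repo layout described above (base.md plus one file per technology); the STACKS mapping and project names are illustrative, not a real org's configuration:

```python
"""Sketch of a rule-sync script: combine org rules with one
technology rule file and write the result into each project repo."""
from pathlib import Path

# Hypothetical mapping of project repo -> technology rule file.
STACKS = {
    "web-app": "typescript.md",
    "billing-service": "go.md",
    "data-pipeline": "python.md",
}

def build_claude_md(rules_dir: Path, tech_file: str) -> str:
    """Concatenate organization rules with one technology rule file."""
    base = (rules_dir / "base.md").read_text()
    tech = (rules_dir / tech_file).read_text()
    return f"{base}\n\n{tech}\n"

def sync(rules_dir: Path, projects_dir: Path) -> list[Path]:
    """Write a combined CLAUDE.md into each project repo; return files written."""
    written = []
    for repo, tech_file in STACKS.items():
        target = projects_dir / repo / "CLAUDE.md"
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(build_claude_md(rules_dir, tech_file))
        written.append(target)
    return written
```

Run it from a CI job on every merge to the rules repo, or by hand when a rule changes; either is fast enough at this size.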

Adoption approach: at 100 engineers, personal relationships drive adoption better than dashboards. The staff engineer demos AI rules at the engineering all-hands, and teams that see the benefit adopt voluntarily. For hesitant teams, run pair programming sessions where the staff engineer shows AI rules in action on the team's actual codebase. AI rule: 'At 100 engineers: show, do not mandate. Personal demos and pair programming are more effective than compliance dashboards. Reserve mandates for security-critical rules only.'

Measuring impact: simple metrics collected manually or from existing tools. Before/after: PR review time (from GitHub/GitLab analytics), defect rate (from issue tracker), and developer satisfaction (quarterly survey with 2-3 AI-specific questions). AI rule: 'At 100 engineers: do not build custom metrics infrastructure. Use existing tools (GitHub Insights, Jira reports) and add 3 questions to the existing developer survey. The investment in measurement should match the organizational size.'
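The before/after comparison needs nothing more than timestamps exported from GitHub/GitLab. A minimal sketch; the field names (`created_at`, `first_review_at`) are assumptions about the export shape, and real data would need mapping to them:

```python
"""Sketch: median hours from PR creation to first review,
computed from exported PR timestamps."""
from datetime import datetime
from statistics import median

def median_review_hours(prs: list[dict]) -> float:
    """Median hours between PR creation and first review."""
    deltas = [
        (datetime.fromisoformat(p["first_review_at"])
         - datetime.fromisoformat(p["created_at"])).total_seconds() / 3600
        for p in prs
    ]
    return median(deltas)
```

Run it once on PRs from the quarter before rules adoption and once on the quarter after; the two medians are the whole report.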

ℹ️ Show, Do Not Mandate at This Size

At 100 engineers, you can reach every team personally. A staff engineer spending 30 minutes with each team demonstrates AI rules on the team's actual code, shows the productivity benefit in their specific context, and answers questions about their unique concerns. Total investment: 6 hours for 12 teams. Result: genuine buy-in. The alternative, a mandate via email, gets 50% compliance with 0% enthusiasm. Personal demos are worth the time investment at this organizational size.

Preparing to Scale Beyond 100

Signs you are outgrowing the 100-person approach: rule updates require coordinating with more than 5 teams (too many for informal coordination), new developers take more than 1 week to find and configure rules (distribution friction), teams are creating conflicting rules (no governance to resolve conflicts), and compliance questions arise from enterprise customers or auditors (informal standards are insufficient). AI rule: 'When 3 or more of these signs appear: begin planning the transition to the 200-person framework (automated distribution, formal governance, adoption dashboard).'

What to formalize first: distribution (automate the copy from shared repo to project repos), governance (create a lightweight review process for rule changes that affect multiple teams), and compliance (generate a report showing which repos have current rules). AI rule: 'Formalize in the order that relieves the most pain. If distribution is the bottleneck: automate it first. If conflicts are the issue: add governance first. Do not formalize everything at once.'
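The compliance report from the paragraph above can start as a short script rather than a dashboard. A sketch, assuming project repos are checked out locally and each carries a CLAUDE.md as described earlier; the repo names in the usage are hypothetical:

```python
"""Sketch of a compliance report: which project repos contain
the current organization rules in their CLAUDE.md."""
from pathlib import Path

def compliance_report(base_rules: str, repos: dict[str, Path]) -> dict[str, bool]:
    """Map each repo name to whether its CLAUDE.md contains the current org rules."""
    report = {}
    for name, path in repos.items():
        claude = path / "CLAUDE.md"
        report[name] = claude.exists() and base_rules in claude.read_text()
    return report
```

Printing the `False` entries once a week is enough "compliance reporting" until an auditor asks for more.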

What to keep informal: team rule ownership (tech leads manage their own rules without approval workflows), technology rule updates (senior engineers update when the stack evolves, no committee needed), and exception handling (resolved through direct conversation, not formal request processes). AI rule: 'At 100-150 engineers: keep most processes informal. Formalize only what is actually causing problems. Over-formalizing at this size: adds bureaucracy without proportional benefit.'

⚠️ Do Not Over-Formalize at 100 Engineers

A 100-person org that implements: a rules committee, a self-service portal, compliance dashboards, automated enforcement, and a dedicated platform team — has built governance for a 500-person org. The overhead exceeds the benefit. At 100: informal coordination works. Direct conversations resolve conflicts. Manual distribution is fast enough. Save the heavy infrastructure for when you genuinely need it. The signal: when informal processes become bottlenecks, not when someone reads an article about enterprise governance.

100-Person Org AI Rules Summary

Summary of AI rules framework for 100-person engineering organizations.

  • Lightweight: 1-page org rules, 1-page per technology, optional team rules. No heavy governance
  • Ownership: staff engineers write org rules. Tech leads customize per team. EMs ensure adoption
  • Distribution: shared repo + manual copy. Automate only when it becomes a bottleneck
  • Adoption: show, do not mandate. Demos and pair programming over dashboards and mandates
  • Metrics: existing tools (GitHub Insights, Jira). 3 AI questions in quarterly developer survey
  • Scaling signals: 5+ teams to coordinate, 1-week setup time, conflicting rules, compliance needs
  • Formalize in order of pain: distribution first, then governance, then compliance reporting
  • Keep informal: team ownership, technology updates, exception handling. Do not over-formalize