Rolling Out AI Rules to 200+ Repos

At 200+ repos, AI rules management becomes a platform capability. This guide covers rule inheritance hierarchies, automated compliance checking, exception management, and the platform team structure needed at scale.

7 min read·July 5, 2025

200 Repos: Rules as a Platform

At 200+ repos, AI rules are no longer a set of files — they are a platform capability managed by a dedicated team. The challenges that emerge at this scale: rule files cannot be identical across all repos (a React frontend and a Go microservice need different rules), team customizations create drift that is hard to track, rule updates must roll out gradually (not all 200 repos at once), compliance verification must be automated (no one can manually check 200 repos), and exception management becomes critical (some repos legitimately need different rules).

The platform mindset: treat AI rules like any other internal developer platform. There is a platform team that maintains the rules infrastructure, a self-service interface for teams to adopt and customize, an API for automation and integration, SLAs for rule updates and support, and metrics for adoption, compliance, and effectiveness.

This guide builds on the 50-repo foundation (two-layer architecture, automated sync, adoption dashboard) and adds the capabilities needed for 200+ repo scale: rule inheritance hierarchies, automated compliance checking, exception workflows, and a dedicated platform team.

Rule Inheritance Hierarchies

At 200 repos, the two-layer model (base + team) is insufficient. Different repo types need different base rules. A three-layer hierarchy: Organization rules (apply to all 200 repos — security, testing, code quality), Technology rules (apply to repos using a specific technology — TypeScript rules, Python rules, Go rules, React rules), and Team rules (project-specific customizations). Each layer inherits from the one above and can override specific rules.

Example: a Next.js frontend repo inherits: Organization rules (security, testing) → TypeScript rules (strict mode, type patterns) → React rules (component patterns, hooks rules) → Next.js rules (App Router, Server Components) → Team rules (project-specific patterns). A Go microservice inherits: Organization rules → Go rules (error handling, naming) → Team rules. The AI reads all applicable layers in order, with later layers overriding earlier ones.
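
The override semantics can be sketched as an ordered merge, most general layer first. This is a minimal illustration — the function name, layer contents, and rule keys are invented for the example, not taken from any specific sync tool:

```python
# Resolve an effective rule set by merging inheritance layers in order.
# Later (more specific) layers override earlier ones per rule key.

def resolve_effective_rules(layers: list[dict]) -> dict:
    """layers are ordered from most general (organization) to most specific (team)."""
    effective: dict = {}
    for layer in layers:
        effective.update(layer)  # later layers win on key collisions
    return effective

# Illustrative layer stack for a Next.js frontend repo
org = {"testing": "unit tests required", "secrets": "never hardcode"}
typescript = {"strict_mode": "enabled"}
react = {"components": "function components with hooks"}
team = {"strict_mode": "enabled, no exceptions", "naming": "kebab-case files"}

rules = resolve_effective_rules([org, typescript, react, team])
```

The team layer's `strict_mode` entry overrides the TypeScript layer's, while every other rule passes through unchanged — which is exactly the "later layers override earlier ones" behavior the sync tool needs to guarantee.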

Implementation: rule files at each level are maintained by different teams. Organization rules: platform team. Technology rules: language/framework community leads. Team rules: individual teams. AI rule: 'Each inheritance level has a clear owner. Changes at higher levels require broader review (organization rules: architecture review board). Changes at lower levels: team autonomy. The sync tool resolves inheritance and produces a single effective rule set per repo.'

💡 Technology Rules Are the Key Layer

Organization rules are too generic to be useful alone ('write tests' does not help a Go developer write idiomatic tests). Team rules are too specific to share. Technology rules are the sweet spot: 'Go error handling: always check returned errors, use errors.Is/As for comparison, wrap with fmt.Errorf and %w verb.' These rules are specific enough to generate correct code and shared enough to benefit all Go repos in the organization. Invest most effort in this layer.

Automated Compliance Checking

At 200 repos, compliance cannot be verified manually. Automated compliance checking: a CI/CD job that runs on every repo and verifies that the effective rule set meets minimum requirements. Checks: rule file exists and is not empty, rule file version is within the acceptable range (not more than 2 versions behind), required sections are present (security rules, testing rules), no prohibited overrides (team rules cannot disable security requirements), and rule file has not been modified in ways that break sync.
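
The checks above can be sketched as a single CI step. The repo representation, section names, and version scheme here are assumptions for illustration — a real implementation would parse the actual rule file:

```python
# Compliance check: returns violations instead of a boolean so the CI job
# can report exactly what is wrong, not just that something is.

REQUIRED_SECTIONS = {"security", "testing"}
PROHIBITED_OVERRIDES = {"security"}  # teams may not disable security rules
MAX_VERSIONS_BEHIND = 2

def check_compliance(repo: dict, current_version: int) -> list[str]:
    """Return a list of violations; an empty list means the repo passes."""
    violations: list[str] = []
    content = repo.get("rule_file", "")
    if not content.strip():
        violations.append("rule file missing or empty")
        return violations  # nothing else to check without a rule file
    if current_version - repo.get("version", 0) > MAX_VERSIONS_BEHIND:
        violations.append("rule file more than 2 versions behind")
    missing = REQUIRED_SECTIONS - set(repo.get("sections", []))
    if missing:
        violations.append(f"missing required sections: {sorted(missing)}")
    disabled = PROHIBITED_OVERRIDES & set(repo.get("disabled_rules", []))
    if disabled:
        violations.append(f"prohibited overrides: {sorted(disabled)}")
    return violations
```

Returning the full violation list (rather than failing on the first check) matters at this scale: the compliance dashboard aggregates these lists across 200 repos.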

Compliance scoring: each repo gets a compliance score. Fully compliant: current rules, all required sections, no prohibited overrides. Partially compliant: rules exist but are outdated or missing required sections. Non-compliant: no rules, or rules that disable security requirements. AI rule: 'Generate compliance reports: per-repo scores, per-team aggregates, organization-wide trends. Flag non-compliant repos to the platform team for follow-up.'
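
The three-tier score and the per-team aggregation can be sketched as follows; the violation labels and severity split are illustrative assumptions, not a fixed schema:

```python
from collections import Counter

# Violations that make a repo non-compliant outright, per the scoring rules:
# no rules at all, or rules that disable security requirements.
SEVERE = {"no rules", "security rules disabled"}

def compliance_score(violations: set[str]) -> str:
    """Map a repo's violations to the three-tier compliance score."""
    if not violations:
        return "fully compliant"
    if violations & SEVERE:
        return "non-compliant"
    return "partially compliant"  # e.g. outdated version, missing sections

def team_report(repos: dict[str, set[str]]) -> Counter:
    """Aggregate per-repo scores into a per-team score distribution."""
    return Counter(compliance_score(v) for v in repos.values())
```

The same aggregation applied across all teams produces the organization-wide trend line the platform team reports on.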

Enforcement levels: advisory (report non-compliance but do not block), warning (CI produces a warning for non-compliant repos), and blocking (CI fails if the repo is non-compliant). Start with advisory, move to warning after 80% adoption, move to blocking after 95% adoption. AI rule: 'Gradual enforcement: do not block until the vast majority of teams are compliant. Blocking too early creates resistance. Advisory and warning levels build awareness and adoption before enforcement.'

⚠️ Do Not Block Before 95% Adoption

Blocking CI for non-compliant rule files before most teams have adopted: creates immediate friction, generates complaints to leadership, and makes the platform team look like enforcers instead of enablers. The adoption curve: start with advisory (teams see the report), move to warning at 80% (CI shows a warning), move to blocking at 95% (the remaining 5% are outliers who need individual attention). By the time you block: compliance is the norm, not the exception.
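
The advisory → warning → blocking escalation reduces to a small gate on organization-wide adoption, sketched here with the 80% and 95% thresholds from the text:

```python
def enforcement_level(adoption_rate: float) -> str:
    """Map org-wide adoption (0.0 to 1.0) to the CI enforcement mode."""
    if adoption_rate >= 0.95:
        return "blocking"   # CI fails for non-compliant repos
    if adoption_rate >= 0.80:
        return "warning"    # CI warns but does not fail
    return "advisory"       # non-compliance appears only in reports
```

Making the enforcement level a function of measured adoption (rather than a manually flipped flag) keeps the escalation honest: CI cannot start blocking until the dashboard actually shows 95%.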

Exception Management and Platform Team

Exception requests: some repos legitimately need to deviate from base rules. A legacy repo that cannot adopt TypeScript strict mode. A research project that needs experimental patterns. A vendor integration that requires a different coding style. AI rule: 'Generate an exception request workflow: team submits a request with justification, platform team reviews, approved exceptions are documented and tracked, exceptions have expiration dates (reviewed quarterly), and the compliance dashboard shows exceptions separately from non-compliance.'
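
A minimal data model for tracked exceptions might look like the sketch below — field names and the 90-day review window are illustrative, chosen to match the quarterly review cadence in the text:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RuleException:
    """An approved, documented deviation from a base rule."""
    repo: str
    rule: str
    justification: str
    approved_on: date
    review_after: timedelta = timedelta(days=90)  # quarterly review

    def due_for_review(self, today: date) -> bool:
        return today >= self.approved_on + self.review_after

exc = RuleException(
    repo="legacy-billing",
    rule="ts-strict-mode",
    justification="pre-TypeScript codebase, migration planned",
    approved_on=date(2025, 1, 1),
)
```

Because every exception carries its own expiration, the compliance dashboard can list "approved exception" as a distinct state rather than lumping these repos in with non-compliance.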

Platform team structure: at 200+ repos, a dedicated platform team maintains the rules infrastructure. Responsibilities: maintain the central rules repository, operate the sync and compliance tooling, review exception requests, produce adoption and compliance reports, support teams with rule customization, and coordinate with technology leads on technology-specific rules. AI rule: 'The platform team is the rules team, not the rules police. Their job is to make adoption easy, not to enforce compliance through punishment.'

Measuring effectiveness: beyond adoption metrics, measure whether AI rules actually improve code quality. Metrics: defect rates (do repos with rules have fewer bugs?), review velocity (do PRs in repos with rules get approved faster?), onboarding time (do new developers in repos with rules become productive faster?), and developer satisfaction (do developers find the rules helpful?). AI rule: 'Effectiveness metrics justify the investment in AI rules infrastructure. If rules do not improve outcomes: the rules need improvement, not more enforcement.'
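
One of these comparisons can be sketched as a simple split of repos by rule adoption. This shows correlation only, not causation, and the repo names and defect-rate figures are invented for the example:

```python
from statistics import mean

def effectiveness_delta(defect_rates: dict[str, float],
                        has_rules: set[str]) -> float:
    """Mean defect rate of repos without rules minus repos with rules.
    A positive value suggests rules correlate with fewer defects."""
    with_rules = [r for repo, r in defect_rates.items() if repo in has_rules]
    without = [r for repo, r in defect_rates.items() if repo not in has_rules]
    return mean(without) - mean(with_rules)
```

The same shape of comparison works for review velocity and onboarding time; developer satisfaction needs a survey rather than repo telemetry.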

ℹ️ Exceptions Are Feedback, Not Failures

When a team requests an exception to a base rule: that is signal. If many teams request the same exception: the base rule may be too restrictive. If a team's exception reveals a use case the rules did not consider: the rules need expansion. Track exception patterns. Quarterly review: which exceptions are most common? Should they become standard options in the technology rules? The exception workflow is how the rules improve over time.
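
The quarterly "which exceptions are most common?" question is a straightforward count over the exception log; the threshold and record shape below are assumptions for illustration:

```python
from collections import Counter

def common_exceptions(log: list[tuple[str, str]],
                      threshold: int = 3) -> list[str]:
    """Flag rules whose exception count meets the threshold: candidates
    for becoming standard options in the technology rules.
    log entries are (repo, rule) pairs."""
    counts = Counter(rule for _repo, rule in log)
    return [rule for rule, n in counts.items() if n >= threshold]

quarterly = common_exceptions([
    ("legacy-billing", "ts-strict-mode"),
    ("legacy-auth", "ts-strict-mode"),
    ("vendor-sync", "ts-strict-mode"),
    ("research-ml", "go-naming"),
])
```

Three teams requesting the same strict-mode exception is exactly the signal described above: the rule may need a standard "legacy migration" option rather than three ad-hoc exceptions.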

200-Repo Rollout Summary

Summary of enterprise AI rules management at 200+ repository scale.

  • Platform mindset: rules are a platform capability with a dedicated team, self-service, and SLAs
  • Three-layer hierarchy: organization → technology → team rules. Each level has a clear owner
  • Inheritance: repos inherit rules from all applicable layers. Later layers override earlier ones
  • Automated compliance: CI-based checking for rule presence, version, required sections, no prohibited overrides
  • Compliance scoring: fully compliant / partially / non-compliant. Dashboard with trends
  • Gradual enforcement: advisory → warning → blocking. Do not block before 95% adoption
  • Exceptions: formal request, documented, expiration dates, tracked separately on dashboard
  • Effectiveness: measure defect rates, review velocity, onboarding time, developer satisfaction