The Operations Trade-Off
Managed services: a provider handles the operational complexity. Vercel deploys and scales your Next.js app. Neon manages your Postgres database (backups, scaling, connection pooling). PlanetScale manages your MySQL (branching, migrations, replication). Auth0 manages authentication. The provider handles: provisioning, scaling, patching, monitoring, backups, and disaster recovery. You handle: configuration and code. The trade-off: less control, less operational work, higher per-unit cost.
Self-hosted: you handle everything. Docker containers on EC2 or a Kubernetes cluster. You manage: provisioning (Terraform), deployment (CI/CD pipelines), scaling (HPA, auto-scaling groups), monitoring (Prometheus + Grafana), backups (cron jobs, S3), security patching (OS updates, container image updates), and disaster recovery (multi-AZ, failover). The trade-off: full control, full responsibility, lower per-unit cost at scale, higher operational overhead.
Without infrastructure model rules: the AI generates Terraform provisioning for a Vercel-hosted project (Vercel handles provisioning, so no Terraform is needed), managed service APIs for a self-hosted setup (referencing Neon's connection pooler when you manage your own PgBouncer), or single-server deployment patterns for a Kubernetes project. The hosting model determines: how the AI configures infrastructure, manages secrets, and handles scaling.
Configuration: Dashboard Settings vs IaC Files
Managed configuration: settings in provider dashboards or config files. Vercel: vercel.json for redirects, environment variables in the dashboard, domains in project settings. Neon: connection string provided, branching via CLI or dashboard, scaling automatic. The configuration is: minimal (the provider has sensible defaults), declarative (vercel.json, not Terraform), and provider-specific (each managed service has its own config format). AI rule: 'Managed: configure via provider dashboard and config files (vercel.json, neon.toml). Environment vars: provider dashboard. Scaling: automatic. No Terraform for managed services.'
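As a sketch of how minimal managed configuration can be, a vercel.json might declare only a redirect and clean URLs, with everything else (env vars, domains, scaling) left to the dashboard and provider defaults. The paths are illustrative, not from a real project:

```json
{
  "cleanUrls": true,
  "redirects": [
    { "source": "/old-pricing", "destination": "/pricing", "permanent": true }
  ]
}
```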
Self-hosted configuration: Infrastructure as Code files. Terraform: defines servers, networks, databases, load balancers. Kubernetes manifests: define deployments, services, ingress, secrets. Ansible playbooks: configure servers, install software, manage users. The configuration is: comprehensive (you define everything), code-based (versioned in git, reviewed in PRs), and provider-agnostic (Terraform works with any cloud). AI rule: 'Self-hosted: Terraform for infrastructure, K8s manifests for workloads, Helm charts for applications. Everything in git. terraform plan before apply. kubectl apply -f manifests/.'
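By contrast, a self-hosted setup defines even a single server in code. A minimal Terraform sketch, with placeholder region, AMI, and tags (none of these values are real):

```hcl
# Hypothetical sketch: one web server, provisioned and versioned in git.
provider "aws" {
  region = "us-east-1" # placeholder region
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.small"

  tags = {
    Name      = "web-1"
    ManagedBy = "terraform"
  }
}
```

Reviewed in a PR, applied with `terraform plan` then `terraform apply`; the same workflow scales from one instance to the full stack.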
The configuration rule prevents: the AI generating Terraform resource blocks for a Vercel project (Vercel is configured via vercel.json and dashboard, not Terraform), referencing Kubernetes manifests for a managed platform (no K8s cluster to manage), or using dashboard-only configuration for a self-hosted setup (self-hosted needs: versioned, reproducible, automated configuration in code).
- Managed: vercel.json, dashboard settings, provider CLI. Minimal config, sensible defaults
- Self-hosted: Terraform + K8s manifests + Helm. Everything in git, everything reproducible
- Managed: no Terraform needed (provider handles provisioning). Self-hosted: Terraform essential
- Managed: env vars in dashboard. Self-hosted: env vars in K8s secrets or Vault
- AI error: Terraform for Vercel (unnecessary). Dashboard config for K8s (not versioned, not reproducible)
Managed: vercel.json + dashboard settings. Provider handles the rest. Self-hosted: Terraform + K8s manifests + Helm charts, all in git, all reviewed in PRs. The configuration model matches the hosting model. AI generating Terraform for Vercel: solving a problem that does not exist.
Scaling and Monitoring
Managed scaling: automatic. Vercel: scales serverless functions per request (no configuration). Neon: scales compute up and down based on load (auto-scaling enabled by default). PlanetScale: horizontal read scaling automatic. You configure: nothing for basic scaling, or set limits for cost control (Vercel: max serverless function duration). AI rule: 'Managed: scaling is automatic. Set cost limits to prevent surprise bills. Monitor usage in provider dashboard. No auto-scaling configuration needed.'
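Even the cost limits are just config. For example, a per-function duration cap in vercel.json might look like this (the glob pattern is illustrative):

```json
{
  "functions": {
    "api/**/*.ts": { "maxDuration": 10 }
  }
}
```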
Self-hosted scaling: you configure everything. Kubernetes HPA (Horizontal Pod Autoscaler): scales pods based on CPU or custom metrics. AWS Auto Scaling Groups: scale EC2 instances based on CloudWatch metrics. Database scaling: read replicas (manual provisioning), connection pooling (PgBouncer), and vertical scaling (resize the instance). AI rule: 'Self-hosted: K8s HPA for pod scaling (CPU > 70% = scale up). Database: read replicas for read-heavy, PgBouncer for connection pooling. Monitoring: Prometheus + Grafana dashboards.'
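The HPA rule above can be sketched as a manifest; the deployment name and replica bounds are illustrative:

```yaml
# Scale the (hypothetical) "api" deployment between 2 and 10 pods
# when average CPU utilization across pods exceeds 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```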
Monitoring follows the same pattern: managed services provide built-in dashboards (Vercel analytics, Neon metrics, PlanetScale insights). Self-hosted: you deploy and configure monitoring (Prometheus for metrics collection, Grafana for dashboards, alerting via PagerDuty or Opsgenie). The AI rule tells the AI: whether to suggest managed monitoring (check the Vercel dashboard) or self-hosted monitoring (deploy Prometheus, configure Grafana, set up alerts).
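On the self-hosted path, "set up alerts" means rule files you write and version. A Prometheus alerting-rule sketch, with an illustrative job label and threshold:

```yaml
# Hypothetical rule: page when the API's 5xx rate stays above 5% for 10 minutes.
groups:
  - name: availability
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{job="api", status=~"5.."}[5m]))
            / sum(rate(http_requests_total{job="api"}[5m])) > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "API 5xx rate above 5% for 10 minutes"
```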
- Managed scaling: automatic, set cost limits. Self-hosted: HPA, auto-scaling groups, manual replicas
- Managed monitoring: built-in dashboards (Vercel, Neon). Self-hosted: Prometheus + Grafana + alerting
- Managed: no scaling config needed. Self-hosted: HPA, metrics, thresholds all configured in YAML
- Cost control: managed = spending limits in dashboard. Self-hosted = instance count limits in config
- AI rule: 'Check Vercel dashboard' (managed) vs 'Deploy Prometheus, configure Grafana' (self-hosted)
Secrets and Disaster Recovery
Managed secrets: stored in the provider dashboard. Vercel: environment variables (encrypted at rest, injected at build/runtime). Neon: connection string generated and rotated by the platform. The secrets are: managed by the provider, not in your code or version control. AI rule: 'Managed: secrets in provider dashboard environment variables. Reference: process.env.DATABASE_URL (injected by Vercel). Never hardcode secrets. Rotation: provider handles or you rotate in dashboard.'
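The application side of the managed pattern is small: read the injected variable and fail fast if it is missing, rather than hardcoding a fallback. A hypothetical helper:

```typescript
// databaseUrl reads the connection string the platform injects at runtime.
// Failing fast means a misconfigured environment surfaces at startup,
// not as a confusing connection error later.
function databaseUrl(): string {
  const url = process.env.DATABASE_URL;
  if (!url) {
    throw new Error("DATABASE_URL is not set; configure it in the provider dashboard");
  }
  return url;
}
```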
Self-hosted secrets: you manage the entire lifecycle. Options: Kubernetes Secrets (base64-encoded, not encrypted by default, so enable encryption at rest), HashiCorp Vault (encrypted, audited, versioned, rotated), AWS Secrets Manager (managed secret store with rotation), or sealed-secrets (encrypted K8s secrets safe to commit to git). AI rule: 'Self-hosted: Vault or AWS Secrets Manager for production. K8s Secrets with encryption at rest for simpler setups. Never plain text in YAML. Rotate secrets quarterly.'
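For the simpler K8s Secrets option, the manifest itself is a sketch like this (names and values are placeholders); remember that base64 is encoding, not encryption:

```yaml
# Sketch only: keep real values out of git (sealed-secrets, external-secrets,
# or Vault) and enable encryption at rest on the cluster.
apiVersion: v1
kind: Secret
metadata:
  name: api-credentials
type: Opaque
stringData:
  DATABASE_URL: postgres://CHANGE_ME # placeholder, never commit real values
```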
Disaster recovery: managed services handle it (Neon: automatic backups, point-in-time recovery. Vercel: immutable deployments, instant rollback). Self-hosted: you build it (database backups: pg_dump cron to S3, test restores monthly. Application: multi-AZ deployment, health checks, automatic failover. Recovery plan: documented, tested, time-to-recovery measured). The AI rule determines: whether disaster recovery is the provider's responsibility (managed) or your responsibility (self-hosted).
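The self-hosted backup path above is, concretely, a cron entry and a short script. A sketch with placeholder bucket, path, and schedule:

```shell
# crontab entry: nightly logical backup at 02:00
#   0 2 * * * /usr/local/bin/backup-db.sh
#
# /usr/local/bin/backup-db.sh (hypothetical)
pg_dump "$DATABASE_URL" --format=custom \
  --file="/tmp/db-$(date +%F).dump"
aws s3 cp "/tmp/db-$(date +%F).dump" "s3://example-backups/postgres/"
# Test restores monthly: a backup that has never been restored is a guess.
```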
- Managed secrets: provider dashboard, encrypted at rest, injected as env vars. Provider handles rotation
- Self-hosted secrets: Vault, AWS Secrets Manager, or encrypted K8s Secrets. You manage the lifecycle
- Managed DR: automatic backups, point-in-time recovery, instant rollback (provider handles)
- Self-hosted DR: pg_dump cron, multi-AZ, failover, documented recovery plan (you handle)
- The hosting model determines: who is responsible for security and recovery. Managed: provider. Self-hosted: you
Managed: Neon handles backups, point-in-time recovery, and failover. You click 'restore.' Self-hosted: you configure pg_dump cron, test restores monthly, build multi-AZ failover, and document the recovery plan. The hosting model determines: whether DR is the provider's job or yours.
When to Choose Each Model
Choose managed when: your team is small (under 10 developers, no dedicated ops/SRE), you want to focus on product, not infrastructure (managed services handle ops so you ship features), your traffic is variable (managed auto-scaling handles spikes without pre-provisioning), or you are building a new product (ship fast, worry about infrastructure later; managed platforms get you to production in hours, not weeks). Managed is: the default for startups, small teams, and new products.
Choose self-hosted when: compliance requires it (data residency, air-gapped environments, specific certifications that managed providers cannot meet), cost optimization at scale matters (at high, steady traffic, self-hosted is 2-5x cheaper per compute unit), you need full control (custom runtimes, GPU access, specific network configurations, kernel tuning), or your organization has dedicated ops/SRE (the expertise to manage infrastructure exists). Self-hosted is: the choice for large-scale, compliance-driven, or highly customized deployments.
The hybrid approach: most organizations in 2026 use both. Application hosting: managed (Vercel for frontend, serverless for APIs). Database: managed (Neon, PlanetScale). Background jobs: self-hosted (Docker containers on ECS for long-running jobs that exceed serverless limits). The rule tells the AI: which components are managed and which are self-hosted. The answer is often: component-specific, not organization-wide.
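A per-component rule can be stated as plainly as this hypothetical project rules file (component names and providers are illustrative):

```
# infrastructure.rules (hypothetical)
frontend:        managed (Vercel). Config via vercel.json + dashboard. No Terraform.
database:        managed (Neon). Connection string from dashboard. Scaling automatic.
background-jobs: self-hosted (ECS). Terraform + task definitions in git. CloudWatch alarms.
```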
- Managed: small teams, variable traffic, ship fast, no ops expertise. Default for startups
- Self-hosted: compliance, cost at scale, full control, dedicated ops/SRE. Large enterprise
- Hybrid: managed app hosting + managed DB + self-hosted background jobs. Most common in 2026
- Cost: managed = higher per-unit, zero ops cost. Self-hosted = lower per-unit, ops team cost
- Decision per component: frontend (managed), DB (managed), long-running jobs (self-hosted)
Most 2026 organizations: managed frontend (Vercel) + managed database (Neon) + self-hosted background jobs (Docker on ECS). The hosting model is: per-component, not organization-wide. The AI rule specifies: which component is managed and which is self-hosted. Different infra code for each.
Infrastructure Model Summary
Summary of managed vs self-hosted AI rules.
- Config: managed = dashboard + vercel.json. Self-hosted = Terraform + K8s manifests in git
- Scaling: managed = automatic. Self-hosted = HPA, auto-scaling groups, manual replicas
- Monitoring: managed = built-in dashboards. Self-hosted = Prometheus + Grafana + alerting
- Secrets: managed = provider dashboard env vars. Self-hosted = Vault or encrypted K8s Secrets
- DR: managed = provider handles backups and recovery. Self-hosted = you build and test DR
- Default: managed for most components. Self-hosted for: compliance, cost at scale, full control
- Hybrid: per-component decision. App = managed. DB = managed. Background jobs = self-hosted
- AI rule: specify which components are managed vs self-hosted to generate correct infrastructure code