Same Services, Different Names for Everything
AWS, GCP, and Azure offer equivalent services under different names. Serverless compute: AWS Lambda vs Google Cloud Functions vs Azure Functions. Object storage: S3 vs Cloud Storage vs Azure Blob Storage. NoSQL database: DynamoDB vs Firestore vs Cosmos DB. Container orchestration: ECS/EKS vs GKE vs AKS. Message queue: SQS vs Pub/Sub vs Azure Service Bus. CDN: CloudFront vs Cloud CDN vs Azure CDN. Every service exists on every cloud, each with a different name, a different API, and a different SDK.
Without cloud provider rules: AI generates AWS SDK calls in a GCP project (import { S3Client } when the project uses Cloud Storage), uses AWS IAM patterns on Azure (resource-based policies vs Azure RBAC), writes Lambda handler signatures for Cloud Functions (different event/context parameters), or references DynamoDB patterns for Cosmos DB (different query APIs). The cloud provider determines every infrastructure interaction. One rule prevents every cross-cloud generation error.
This article provides: the key service name mappings, the AI rules for each cloud, and copy-paste CLAUDE.md templates. The rules tell the AI: this project uses AWS (use Lambda, S3, DynamoDB, AWS SDK v3) or GCP (use Cloud Functions, Cloud Storage, Firestore, Google Cloud client libraries), preventing the AI from importing the wrong provider's SDK.
Serverless Compute: Lambda vs Cloud Functions vs Azure Functions
AWS Lambda: handler exports a function with (event, context) parameters. export const handler = async (event: APIGatewayProxyEvent, context: Context) => { return { statusCode: 200, body: JSON.stringify(data) }; }. Trigger: API Gateway, SQS, S3 events, CloudWatch Events. Deployment: SAM, CDK, or Serverless Framework. Runtime: Node.js 20, Python 3.12, Java 21, Go. AI rule: 'AWS Lambda: handler(event, context) returning { statusCode, body }. Deploy: CDK or SAM. Trigger: API Gateway for HTTP, SQS for queue, S3 for storage events.'
Google Cloud Functions: handler exports a function with (req, res) for HTTP triggers or (event, context) for event triggers. export const handler = async (req: Request, res: Response) => { res.json(data); }. Trigger: HTTP, Pub/Sub, Cloud Storage, Firestore. Deployment: gcloud functions deploy. Runtime: Node.js 20, Python 3.12, Java 17, Go. AI rule: 'GCP Cloud Functions: handler(req, res) for HTTP. Deploy: gcloud functions deploy. Trigger: HTTP, Pub/Sub, Cloud Storage.'
Azure Functions: handler with a specific binding model. In the v3 programming model: export default async function (context: Context, req: HttpRequest): Promise<void> { context.res = { status: 200, body: JSON.stringify(data) }; }. Configuration: function.json defines bindings (triggers, inputs, outputs). Deployment: Azure CLI or VS Code extension. AI rule: 'Azure Functions: handler(context, req) with function.json bindings. Deploy: Azure CLI or VS Code. Trigger: HTTP, Service Bus, Blob Storage, Timer.'
- Lambda: (event, context) => { statusCode, body }. Cloud Functions: (req, res). Azure: (context, req) + function.json
- Lambda trigger: API Gateway. Cloud Functions: HTTP trigger. Azure: HTTP binding in function.json
- Deploy: SAM/CDK (AWS) vs gcloud CLI (GCP) vs Azure CLI (Azure)
- All support: Node.js 20, Python 3.12. Different additional runtimes per cloud
- AI error: Lambda handler signature in Cloud Functions = wrong parameters entirely
Lambda: handler(event, context) returning { statusCode, body }. Cloud Functions: handler(req, res) with res.json(). Azure Functions: handler(context, req) with function.json bindings. Same job (run code on HTTP request), three incompatible signatures. One rule picks the right one.
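The three signatures above can be put side by side in one sketch. The types here are minimal local stand-ins for the real ones (which come from @types/aws-lambda, Express for GCP, and @azure/functions), so the comparison stays self-contained; the real projects would import them instead.

```typescript
// Minimal stand-in types, simplified from the real SDK types for illustration.
type APIGatewayProxyEvent = { path: string };
type LambdaResult = { statusCode: number; body: string };
type GcfRequest = { path: string };                       // Express Request stand-in
type GcfResponse = { json: (data: unknown) => void };     // Express Response stand-in
type AzureContext = { res?: { status: number; body: string } };
type AzureHttpRequest = { url: string };

const data = { ok: true };

// AWS Lambda: return { statusCode, body } from the handler.
export const lambdaHandler = async (
  event: APIGatewayProxyEvent,
): Promise<LambdaResult> => {
  return { statusCode: 200, body: JSON.stringify(data) };
};

// GCP Cloud Functions (HTTP): write to the Express-style response object.
export const gcfHandler = async (
  req: GcfRequest,
  res: GcfResponse,
): Promise<void> => {
  res.json(data);
};

// Azure Functions (v3 model): assign to context.res; triggers live in function.json.
export async function azureHandler(
  ctx: AzureContext,
  req: AzureHttpRequest,
): Promise<void> {
  ctx.res = { status: 200, body: JSON.stringify(data) };
}
```

Same HTTP-triggered job in each case; a rule naming the cloud is what tells the AI which of the three shapes to emit.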
Object Storage: S3 vs Cloud Storage vs Blob
AWS S3: import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'. const client = new S3Client({ region: 'us-east-1' }). Upload: await client.send(new PutObjectCommand({ Bucket: 'my-bucket', Key: 'file.txt', Body: data })). S3 SDK v3 uses: command pattern (each operation is a Command class), modular imports (import only what you need), and the client.send(command) pattern. AI rule: 'AWS S3: @aws-sdk/client-s3 with command pattern. S3Client + PutObjectCommand/GetObjectCommand. Buckets, keys, regions.'
Google Cloud Storage: import { Storage } from '@google-cloud/storage'. const storage = new Storage(). Upload: await storage.bucket('my-bucket').file('file.txt').save(data). GCP client libraries use: a more object-oriented API (bucket.file.save), automatic auth from environment (GOOGLE_APPLICATION_CREDENTIALS), and method chaining. AI rule: 'GCP Storage: @google-cloud/storage. storage.bucket(name).file(key).save(data). Auth: automatic from environment or service account.'
Azure Blob Storage: import { BlobServiceClient } from '@azure/storage-blob'. const client = BlobServiceClient.fromConnectionString(connStr). Upload: await client.getContainerClient('container').getBlockBlobClient('file.txt').upload(data, data.length). Azure SDK uses: connection-string-based auth, container (equivalent to bucket) + blob (equivalent to key), and a client hierarchy (service > container > blob). AI rule: 'Azure Blob: @azure/storage-blob. BlobServiceClient with connection string. Container = bucket, Blob = key.'
- AWS S3: @aws-sdk/client-s3, command pattern, PutObjectCommand/GetObjectCommand
- GCP Storage: @google-cloud/storage, object-oriented, bucket.file.save()
- Azure Blob: @azure/storage-blob, connection string, container.blob.upload()
- Auth: AWS IAM roles/keys, GCP service accounts/env, Azure connection strings/managed identity
- AI error: S3Client import in GCP project = wrong package. bucket.file() in AWS = wrong API
import { S3Client } from '@aws-sdk/client-s3' in a GCP project: the package installs but talks to AWS, not GCP. Use @google-cloud/storage for Cloud Storage. Every SDK package name is cloud-specific. One wrong import sends data to the wrong cloud provider.
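Because every SDK package name is cloud-specific, the wrong-import failure mode is easy to catch mechanically. A hypothetical CI check (the function name and prefix table are assumptions for illustration) might flag any import whose package prefix belongs to a different cloud than the project targets:

```typescript
type Provider = "aws" | "gcp" | "azure";

// Each cloud's SDK packages share a distinctive npm scope prefix.
const sdkPrefixes: Record<Provider, string[]> = {
  aws: ["@aws-sdk/"],
  gcp: ["@google-cloud/"],
  azure: ["@azure/"],
};

// Return every source line that imports another cloud's SDK.
export function findCrossCloudImports(
  provider: Provider,
  sourceLines: string[],
): string[] {
  const foreign = Object.entries(sdkPrefixes)
    .filter(([p]) => p !== provider)
    .flatMap(([, prefixes]) => prefixes);
  return sourceLines.filter((line) =>
    foreign.some(
      (prefix) => line.includes(`'${prefix}`) || line.includes(`"${prefix}`),
    ),
  );
}
```

Run against a GCP project, this flags an `@aws-sdk/client-s3` import while letting `@google-cloud/storage` through; it is the same check the CLAUDE.md rule asks the AI to apply before generating code.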
IAM: Policies vs Roles vs RBAC
AWS IAM: identity-based policies (JSON documents attached to users/roles) and resource-based policies (JSON documents on resources like S3 buckets). Principle of least privilege: { Effect: 'Allow', Action: ['s3:GetObject'], Resource: 'arn:aws:s3:::bucket/*' }. Roles: assumed by services (Lambda execution role), federated users (SAML/OIDC), or cross-account access. AWS IAM is: granular (per-action permissions), policy-based, and ARN-referenced.
GCP IAM: role-based (predefined roles like roles/storage.objectViewer, custom roles with specific permissions). Roles are: bound to members (users, service accounts, groups) at a resource level (project, folder, organization). Policy binding: gcloud projects add-iam-policy-binding PROJECT --member=serviceAccount:SA --role=roles/storage.objectViewer. GCP IAM is: role-centric (bind roles, not individual permissions), hierarchy-based (project > folder > org), and simpler than AWS for common patterns.
Azure RBAC: role assignments at subscription, resource group, or resource level. Built-in roles: Reader, Contributor, Owner, plus service-specific roles. Assignment: az role assignment create --assignee USER --role 'Storage Blob Data Reader' --scope /subscriptions/SUB_ID. Azure RBAC is: scope-based (subscription > resource group > resource), role-centric (similar to GCP), and integrated with Azure AD (Entra ID) for identity. AI rule: 'Match IAM to cloud: AWS = JSON policies with ARNs. GCP = role bindings with IAM roles. Azure = RBAC role assignments with scopes.'
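The inline AWS policy fragment above, expanded into the full JSON document shape that IAM expects (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::bucket/*"
    }
  ]
}
```

GCP and Azure have no equivalent of this document: the same grant is expressed as a role binding (`gcloud projects add-iam-policy-binding`) or a role assignment (`az role assignment create`), as shown above.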
CLI and SDK Patterns
AWS CLI and SDK: CLI: aws s3 cp file.txt s3://bucket/. SDK: @aws-sdk/client-* (modular, one package per service). Auth: AWS_ACCESS_KEY_ID + AWS_SECRET_ACCESS_KEY environment variables, or IAM role (EC2/Lambda automatic). Region: AWS_REGION or explicit in client constructor. AI rule: 'AWS: aws CLI, @aws-sdk/client-{service} SDK (v3 modular). Auth: env vars or IAM role. Always specify region.'
GCP CLI and SDK: CLI: gcloud storage cp file.txt gs://bucket/. SDK: @google-cloud/{service} (one package per service). Auth: GOOGLE_APPLICATION_CREDENTIALS env var (path to service account key JSON), or automatic on GCP compute (metadata server). Project: GCLOUD_PROJECT or explicit. AI rule: 'GCP: gcloud CLI, @google-cloud/{service} SDK. Auth: service account JSON or automatic on GCP. Specify project ID.'
Azure CLI and SDK: CLI: az storage blob upload --account-name ACCT --container-name CONTAINER --file file.txt. SDK: @azure/{service} (one package per service). Auth: connection string, Azure AD token (DefaultAzureCredential), or managed identity. Subscription: az account set --subscription SUB_ID. AI rule: 'Azure: az CLI, @azure/{service} SDK. Auth: DefaultAzureCredential (tries multiple auth methods automatically). Specify subscription.'
- AWS: aws CLI, @aws-sdk/client-* SDK, env var auth or IAM role, region required
- GCP: gcloud CLI, @google-cloud/* SDK, service account JSON or auto, project required
- Azure: az CLI, @azure/* SDK, DefaultAzureCredential, subscription required
- SDK pattern: AWS = command classes. GCP = object-oriented methods. Azure = client hierarchy
- AI error: importing @aws-sdk in a GCP project = wrong SDK. gcloud commands on AWS = wrong CLI
aws uses env vars + IAM roles. gcloud uses service account JSON + auto-detect. az uses DefaultAzureCredential, which tries a chain of auth methods in order (environment, managed identity, Azure CLI, and others). The CLI and auth model are cloud-determined. AI generating aws s3 cp on GCP: command not found. gcloud storage cp is the equivalent.
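The "same operation, three CLIs" point can be made concrete with a small helper that renders the equivalent copy-to-storage command per provider. The function name is hypothetical, and the Azure account flag is a placeholder; the command shapes follow the CLI examples above:

```typescript
type Provider = "aws" | "gcp" | "azure";

// Render the "copy a local file to object storage" command
// for each cloud's CLI. ACCT is a placeholder account name.
export function storageCopyCommand(
  provider: Provider,
  file: string,
  bucket: string,
): string {
  switch (provider) {
    case "aws":
      return `aws s3 cp ${file} s3://${bucket}/`;
    case "gcp":
      return `gcloud storage cp ${file} gs://${bucket}/`;
    case "azure":
      return `az storage blob upload --account-name ACCT --container-name ${bucket} --file ${file}`;
  }
}
```

Note the divergence even in addressing: AWS and GCP use URL schemes (s3://, gs://) while Azure uses named flags. This is exactly the kind of surface difference a one-line cloud rule resolves.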
Ready-to-Use Rule Templates
AWS CLAUDE.md template: '# Cloud (AWS). CLI: aws. SDK: @aws-sdk/client-{service} (v3 modular). Compute: Lambda (handler(event, context), CDK/SAM deploy). Storage: S3 (PutObjectCommand, GetObjectCommand). Database: DynamoDB or RDS. Queue: SQS. Auth: IAM roles, env vars (AWS_ACCESS_KEY_ID, AWS_REGION). Infrastructure: CDK (TypeScript). Never: gcloud, @google-cloud, az, @azure imports.'
GCP CLAUDE.md template: '# Cloud (Google Cloud). CLI: gcloud. SDK: @google-cloud/{service}. Compute: Cloud Functions (handler(req, res), gcloud deploy). Storage: Cloud Storage (bucket.file.save/download). Database: Firestore or Cloud SQL. Queue: Pub/Sub. Auth: service account JSON, GOOGLE_APPLICATION_CREDENTIALS. Infrastructure: Terraform or Pulumi. Never: aws, @aws-sdk, az, @azure imports.'
Azure CLAUDE.md template: '# Cloud (Azure). CLI: az. SDK: @azure/{service}. Compute: Azure Functions (handler(context, req), function.json bindings). Storage: Blob Storage (BlobServiceClient, container.blob). Database: Cosmos DB or Azure SQL. Queue: Service Bus. Auth: DefaultAzureCredential, connection strings. Infrastructure: Bicep or Terraform. Never: aws, @aws-sdk, gcloud, @google-cloud imports.'
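The one-line templates above compress an actual file. The AWS one, expanded into the CLAUDE.md layout it implies (the other two expand the same way):

```markdown
# Cloud (AWS)

- CLI: `aws`. SDK: `@aws-sdk/client-{service}` (v3 modular)
- Compute: Lambda (handler(event, context)), deploy with CDK or SAM
- Storage: S3 (PutObjectCommand, GetObjectCommand)
- Database: DynamoDB or RDS
- Queue: SQS
- Auth: IAM roles; env vars AWS_ACCESS_KEY_ID, AWS_REGION
- Infrastructure: CDK (TypeScript)
- Never: gcloud, @google-cloud, az, @azure imports
```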
Comparison Summary
The AWS vs GCP vs Azure rules, condensed:
- Compute: Lambda (event, context) vs Cloud Functions (req, res) vs Azure Functions (context, req + bindings)
- Storage: S3 (@aws-sdk/client-s3) vs Cloud Storage (@google-cloud/storage) vs Blob (@azure/storage-blob)
- IAM: AWS JSON policies + ARNs vs GCP role bindings vs Azure RBAC scope assignments
- CLI: aws vs gcloud vs az, three different tools with different command structures
- SDK: @aws-sdk/client-* vs @google-cloud/* vs @azure/* â different packages, different APIs
- Auth: AWS env vars/IAM vs GCP service account/auto vs Azure DefaultAzureCredential
- Never cross-import: @aws-sdk in a GCP project or @google-cloud in an Azure project
- Templates: cloud provider + service names + SDK imports = the three critical rules