Best Practices

AI Rules for Protobuf and gRPC

AI generates proto files with breaking changes, wrong field types, and no backward compatibility. Rules for schema evolution, service design, and gRPC patterns.

7 min read · June 4, 2025

A reused Protobuf field number silently corrupts data across every service

Schema evolution, field numbering, gRPC error handling, and service design

Why Protobuf and gRPC Need AI Rules

Protobuf schemas are contracts between services. A breaking change in a proto file can cascade across every service that depends on it — and unlike REST APIs where you can inspect the JSON, binary-encoded Protobuf failures are cryptic and hard to debug. AI assistants don't understand schema evolution and generate proto files that break backward compatibility.

The most common AI failures: reusing or reassigning field numbers (silently corrupts data), changing field types (int32 to string breaks every consumer), removing fields instead of deprecating them, generating RPC methods without proper error handling, and ignoring proto3 conventions when the project uses proto3.

These rules apply to any project using Protobuf for serialization or gRPC for service communication — regardless of implementation language.

Rule 1: Field Numbering and Backward Compatibility

The rule: 'Never reuse a field number. Once a field number is assigned, it's permanent — even if the field is removed. When removing a field, mark it as reserved: reserved 3, 7; reserved "old_field_name";. Never change a field's type — add a new field with a new number instead. Field numbers 1-15 use 1 byte on the wire — reserve them for frequently accessed fields.'

For evolution: 'Adding new fields is always safe — consumers that don't know about the field ignore it. Removing fields is safe if you reserve the number. Renaming fields is wire-safe in proto3 (the wire format uses numbers, not names), though it still breaks generated code and proto3 JSON serialization. Changing field types is never safe — add a new field instead.'
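A minimal sketch of the evolution pattern above, using hypothetical field names. The `reserved` statements mark numbers and names that once belonged to removed fields, and the int32-to-string migration is done by adding a new field rather than editing the old one:

```protobuf
syntax = "proto3";

package example.v1;

message User {
  // Field numbers 3 and 7 once held fields that were removed.
  // Reserving them prevents accidental reuse and silent corruption.
  reserved 3, 7;
  reserved "legacy_status", "old_email";

  string user_id = 1;      // numbers 1-15 encode in one byte: hot fields first
  string display_name = 2;

  // A field's type must never change. To migrate int32 -> string,
  // add a new field with a new number instead:
  string external_ref = 8; // replaces the removed int32 field (number 7, reserved)
}
```

Note that field numbers and field names must be reserved in separate `reserved` statements; a single statement cannot mix both.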

This is the most critical Protobuf rule. A reused field number silently corrupts data — the decoder interprets bytes meant for one type as another. The bug is invisible until the data reaches business logic.

⚠️ Silent Data Corruption

Reusing a Protobuf field number silently corrupts data — the decoder interprets bytes meant for one type as another. Once assigned, a field number is permanent. Always reserve removed fields.

Rule 2: Message Design Patterns

The rule: 'Use wrapper messages for all RPC request and response types — never use primitive types directly. Request: message GetUserRequest { string user_id = 1; }. Response: message GetUserResponse { User user = 1; }. This allows adding fields to requests and responses without breaking the RPC signature.'

For naming: 'Messages use PascalCase (UserProfile). Fields use snake_case (user_id, created_at). Enums use PascalCase with SCREAMING_CASE values: enum Status { STATUS_UNSPECIFIED = 0; STATUS_ACTIVE = 1; STATUS_INACTIVE = 2; }. Always include an UNSPECIFIED = 0 value in enums — proto3 defaults to 0 for unset fields.'

For composition: 'Use nested messages for types only used within a parent. Use shared messages (in common.proto) for types used across services. Use oneof for mutually exclusive fields. Use maps (map<string, Value>) for dynamic key-value data.'

  • Wrapper messages for all RPC request/response — never raw primitives
  • PascalCase messages, snake_case fields, SCREAMING_CASE enum values
  • UNSPECIFIED = 0 in all enums — proto3 defaults unset to 0
  • oneof for mutually exclusive fields — maps for key-value data
  • Shared types in common.proto — nested types for parent-scoped data
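The message-design points above can be sketched in one file. The names (`GetUserRequest`, `Status`, the `contact` oneof) are illustrative, not prescribed:

```protobuf
syntax = "proto3";

package example.v1;

// Wrapper request/response: fields can be added later
// without changing the RPC signature.
message GetUserRequest {
  string user_id = 1;
}

message GetUserResponse {
  User user = 1;
}

enum Status {
  STATUS_UNSPECIFIED = 0;  // proto3 default for unset enum fields
  STATUS_ACTIVE = 1;
  STATUS_INACTIVE = 2;
}

message User {
  string user_id = 1;
  Status status = 2;
  map<string, string> labels = 3;  // dynamic key-value data

  // Mutually exclusive contact methods: at most one is set.
  oneof contact {
    string email = 4;
    string phone = 5;
  }
}
```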

Rule 3: gRPC Service Design

The rule: 'Design services around domain capabilities, not CRUD operations. A UserService has GetUser, SearchUsers, CreateUser — not Create, Read, Update, Delete. Use unary RPCs for simple request/response. Use server streaming for large result sets or real-time data. Use client streaming for uploads or batch inputs. Use bidirectional streaming only when both sides need to send data concurrently.'

For method design: 'Each RPC method does one thing. Methods that need multiple steps should be separate RPCs composed by the client, not a single God-method. Use metadata (headers) for cross-cutting concerns (auth tokens, request IDs) — not request fields.'

For versioning: 'Version services in the package name: package myapp.users.v1;. When making breaking changes, create a v2 package — keep v1 running until all consumers migrate. Never modify existing RPC signatures — add new methods instead.'

💡 Version in Package

Version services in the package name: package myapp.users.v1. When making breaking changes, create v2 — keep v1 running until all consumers migrate. Never modify existing RPC signatures.
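A sketch of a versioned service definition following the rules above. `ImportUsers` is a hypothetical method added to illustrate client streaming; the message bodies are stubs:

```protobuf
syntax = "proto3";

package myapp.users.v1;

// Domain-oriented methods, not generic CRUD.
service UserService {
  // Unary: simple request/response.
  rpc GetUser(GetUserRequest) returns (GetUserResponse);

  // Server streaming: large result sets delivered incrementally.
  rpc SearchUsers(SearchUsersRequest) returns (stream SearchUsersResponse);

  // Client streaming: batch input, single summary response.
  rpc ImportUsers(stream ImportUsersRequest) returns (ImportUsersResponse);
}

message GetUserRequest { string user_id = 1; }
message GetUserResponse { User user = 1; }
message SearchUsersRequest { string query = 1; }
message SearchUsersResponse { User user = 1; }
message ImportUsersRequest { User user = 1; }
message ImportUsersResponse { int32 imported_count = 1; }
message User { string user_id = 1; string display_name = 2; }
```

A breaking change would go into a new `myapp.users.v2` package alongside this one, never into edits of these signatures.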

Rule 4: gRPC Error Handling

The rule: 'Use standard gRPC status codes — never return OK with an error message in the response body. Use INVALID_ARGUMENT for validation errors, NOT_FOUND for missing resources, PERMISSION_DENIED for authorization failures, INTERNAL for unexpected server errors. Attach error details using the richer error model (google.rpc.Status with google.rpc.BadRequest, google.rpc.ErrorInfo).'

For error details: 'Include field-level validation errors in BadRequest.FieldViolation. Include error metadata in ErrorInfo (reason, domain, metadata map). The standard error model is supported across all gRPC languages — use it instead of inventing custom error types.'
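For reference, the two error-detail messages named above are defined in `google/rpc/error_details.proto`. This is a condensed sketch of those definitions with paraphrased comments, not the full file:

```protobuf
// Condensed from google/rpc/error_details.proto.

// Field-level validation failures, typically attached to
// INVALID_ARGUMENT responses.
message BadRequest {
  message FieldViolation {
    string field = 1;        // path to the offending request field
    string description = 2;  // why it failed validation
  }
  repeated FieldViolation field_violations = 1;
}

// Structured error metadata: a stable machine-readable reason,
// the domain (typically the service name), and free-form metadata.
message ErrorInfo {
  string reason = 1;
  string domain = 2;
  map<string, string> metadata = 3;
}
```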

AI assistants often return gRPC OK status with error information in the response message. This defeats the purpose of gRPC's error handling — clients check the status code first and may never read the response body on OK.

  • Standard status codes: INVALID_ARGUMENT, NOT_FOUND, PERMISSION_DENIED, INTERNAL
  • Never OK with error in body — use the correct error status code
  • google.rpc.BadRequest for field-level validation errors
  • google.rpc.ErrorInfo for error metadata (reason, domain)
  • DEADLINE_EXCEEDED for timeout — UNAVAILABLE for transient failures (retryable)

ℹ️ Never OK with Error

AI returns gRPC OK status with error info in the response body. Clients check status code first — they may never read the body on OK. Use the correct error status code every time.

Rule 5: Proto File Organization

The rule: 'Organize proto files by service domain: proto/users/v1/users.proto, proto/orders/v1/orders.proto. Keep common types in proto/common/v1/common.proto. One service definition per file. Import shared types — never duplicate message definitions across files. Use buf or protoc-gen-validate for schema linting and validation.'

For code generation: 'Generate code from proto files — never write gRPC stubs by hand. Use buf generate for multi-language code generation. Commit generated code or generate in CI — be consistent across the team. Pin protoc and plugin versions for reproducible builds.'

For documentation: 'Add comments to all messages, fields, and RPC methods. Proto comments become documentation in generated code. Describe the semantics, not the obvious: // The user's preferred display name (may differ from legal name) is better than // The name field.'
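Putting the organization and documentation rules together, a file at a hypothetical path like `proto/users/v1/users.proto` might look like this (service and type names are illustrative):

```protobuf
// proto/users/v1/users.proto (one service definition per file)
syntax = "proto3";

package myapp.users.v1;

import "common/v1/common.proto";  // shared types, never duplicated locally

// UserService manages user accounts and profile lookup.
service UserService {
  // Returns the user identified by user_id.
  // Fails with NOT_FOUND if no such user exists.
  rpc GetUser(GetUserRequest) returns (GetUserResponse);
}

message GetUserRequest {
  // The unique identifier of the user to fetch.
  string user_id = 1;
}

message GetUserResponse {
  // The resolved user record, defined in common/v1/common.proto.
  myapp.common.v1.User user = 1;
}
```

The comments describe semantics (what NOT_FOUND means, where the shared type lives) rather than restating field names, and they flow into the generated code's documentation.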

Complete Protobuf/gRPC Rules Template

Consolidated rules for Protobuf and gRPC projects.

  • Never reuse field numbers — reserve removed fields with reserved keyword
  • Never change field types — add a new field with a new number
  • Wrapper messages for all RPCs — never raw primitives in RPC signatures
  • UNSPECIFIED = 0 in all enums — PascalCase messages, snake_case fields
  • Standard gRPC status codes — never OK with error in response body
  • Rich error model: BadRequest for validation, ErrorInfo for metadata
  • Version in package name: v1, v2 — never modify existing RPC signatures
  • buf for linting + code generation — pin tool versions for reproducibility