Key Concepts
Schema
A definition of message structure. Supports Avro, JSON Schema, and Protobuf formats.
Subject
A named collection of schema versions. Typically named after the topic with a -key or -value suffix.

Compatibility
Rules that determine what schema changes are allowed. Prevents breaking changes.
Version
Each schema update creates a new version. All versions are retained for compatibility checking.
Required Permissions
| Action | Permission |
|---|---|
| View schemas | iam:project:infrastructure:kafka:read |
| Create/Update schemas | iam:project:infrastructure:kafka:write |
| Delete schemas | iam:project:infrastructure:kafka:delete |
Schema Types
| Type | Description | Use Case |
|---|---|---|
| Avro | Binary format with schema evolution support | High-performance, compact serialization |
| JSON | JSON Schema for validation | Human-readable, web-friendly |
| Protobuf | Google’s Protocol Buffers | Cross-language, efficient serialization |
Subject Naming Convention
Subjects follow a naming pattern based on the topic name:

| Pattern | Example | Use |
|---|---|---|
| {topic}-key | orders-key | Schema for message keys |
| {topic}-value | orders-value | Schema for message values |
Subject names must be 1-255 characters. Use descriptive names that match your topic naming convention.
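The convention above is easy to sketch in code. The helper below is illustrative only (the function name is not part of any API); it derives the key and value subjects for a topic and enforces the 1-255 character limit:

```python
def subject_name(topic: str, part: str) -> str:
    """Build the schema registry subject for a topic's key or value schema."""
    if part not in ("key", "value"):
        raise ValueError("part must be 'key' or 'value'")
    name = f"{topic}-{part}"
    if not 1 <= len(name) <= 255:
        raise ValueError("subject names must be 1-255 characters")
    return name

print(subject_name("orders", "key"))    # orders-key
print(subject_name("orders", "value"))  # orders-value
```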
How to Create a Schema
Select Schema Type
Choose the schema format:
- Avro: For compact binary serialization
- JSON: For JSON Schema validation
- Protobuf: For Protocol Buffers
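If you choose Avro, a minimal record schema looks like the following sketch (the record and field names are hypothetical):

```json
{
  "type": "record",
  "name": "Order",
  "namespace": "com.example.orders",
  "fields": [
    {"name": "order_id", "type": "string"},
    {"name": "amount", "type": "double"},
    {"name": "status", "type": "string", "default": "NEW"}
  ]
}
```

The default on status is what lets the field be added later without breaking backward compatibility.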
Example Schemas
Avro Schema

How to View Schema Details
How to Register a New Version
Compatibility Modes
Compatibility modes control what schema changes are allowed:

| Mode | Description |
|---|---|
| BACKWARD | New schema can read data written by the previous version |
| BACKWARD_TRANSITIVE | New schema can read data written by all previous versions |
| FORWARD | Previous schema can read data written by the new version |
| FORWARD_TRANSITIVE | All previous schemas can read data written by the new version |
| FULL | Both backward and forward compatible with the previous version |
| FULL_TRANSITIVE | Both backward and forward compatible with all previous versions |
| NONE | No compatibility checking (not recommended for production) |
Choosing a Compatibility Mode
| Scenario | Recommended Mode |
|---|---|
| Consumers updated before producers | BACKWARD |
| Producers updated before consumers | FORWARD |
| Unknown update order | FULL |
| Schema migrations with coordination | NONE (temporarily) |
BACKWARD is the default and most common mode. It ensures new consumers can read messages from older producers.
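A sketch of what BACKWARD compatibility means in practice (all field names and defaults here are hypothetical): a consumer on the new schema can read an old record by filling missing fields from the schema's defaults.

```python
# Record written before the 'status' field existed (v1 of a hypothetical schema).
old_record = {"order_id": "A-1", "amount": 9.99}

# v2 adds 'status' with a default, which keeps the change BACKWARD compatible.
v2_defaults = {"order_id": None, "amount": None, "status": "NEW"}

def read_with_v2(record: dict, defaults: dict) -> dict:
    """A v2 consumer resolves fields missing from old records via schema defaults."""
    return {field: record.get(field, default) for field, default in defaults.items()}

print(read_with_v2(old_record, v2_defaults))
# {'order_id': 'A-1', 'amount': 9.99, 'status': 'NEW'}
```

If the new field had no default, the v2 consumer would have no value to fill in, which is exactly why adding a required field breaks BACKWARD compatibility.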
Compatible Changes by Mode
BACKWARD Compatible Changes:
- Adding optional fields with defaults
- Removing fields

FORWARD Compatible Changes:
- Adding fields
- Removing optional fields with defaults

Incompatible Changes:
- Changing field types
- Renaming fields
- Removing required fields (breaks FORWARD compatibility)
- Adding required fields without defaults (breaks BACKWARD compatibility)
How to Change Compatibility
How to Delete a Schema
Troubleshooting
Schema registration failed - compatibility error
- Your schema change violates the compatibility mode
- Review the error message for specific field issues
- Consider adding default values to new fields
- Temporarily set compatibility to NONE if migration is coordinated
Schema validation error
- Check JSON syntax (missing commas, brackets)
- Verify Avro type declarations are valid
- Ensure Protobuf syntax matches the declared version
- Use external validators to test schema before registration
Cannot find schema
- Verify you’re connected to the correct Kafka cluster
- Check the subject name spelling
- Schema may have been deleted
- Refresh the page or clear filters
Producer/Consumer failing with schema error
- Verify the schema ID exists in the registry
- Check that the client is configured to use schema registry
- Ensure network connectivity to schema registry endpoint
- Verify authentication credentials if required
Cannot delete schema
- You need the iam:project:infrastructure:kafka:delete permission
- Check if the schema is referenced by other subjects
- Verify the schema registry is healthy
FAQ
What happens when I delete a schema?
Soft delete removes the schema from normal queries but retains data for potential recovery. Hard delete permanently removes all versions. Producers and consumers using the schema will fail after deletion.
Can I have multiple schemas for one topic?
Yes. Typically you have two subjects per topic: one for the message key (topic-key) and one for the value (topic-value). Each can have different schemas and evolve independently.
How do schema IDs work?
Each schema version gets a unique numeric ID. Producers embed this ID in messages. Consumers use the ID to fetch the correct schema for deserialization. IDs are global across the registry.
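The ID-in-message mechanism is typically implemented as a small frame in front of the serialized payload. The sketch below assumes the Confluent wire format (a magic byte of 0 followed by the schema ID as a 4-byte big-endian integer); verify against your registry's documentation:

```python
import struct

def frame(schema_id: int, payload: bytes) -> bytes:
    """Prefix a serialized payload with magic byte 0 and the 4-byte schema ID."""
    return struct.pack(">bI", 0, schema_id) + payload

def schema_id_of(message: bytes) -> int:
    """Recover the schema ID a producer embedded in a message."""
    magic, schema_id = struct.unpack_from(">bI", message)
    if magic != 0:
        raise ValueError("unexpected magic byte")
    return schema_id

msg = frame(42, b"<serialized bytes>")
print(schema_id_of(msg))  # 42
```

This is why a consumer fails when a schema is deleted: the ID it extracts from the frame no longer resolves in the registry.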
What's the difference between version and ID?
Version is sequential within a subject (1, 2, 3…). ID is globally unique across all subjects. The same schema content in different subjects gets different IDs.
Should I use Avro, JSON, or Protobuf?
- Avro: Best for Kafka-native workflows, compact format, excellent schema evolution.
- JSON Schema: Good for web APIs, human-readable, widely understood.
- Protobuf: Best for cross-language systems, very efficient, strong typing.
Can I change schema type after creation?
No. The schema type (Avro, JSON, Protobuf) cannot be changed for an existing subject. Create a new subject with the desired type and migrate producers/consumers.
How do I handle breaking changes?
For breaking changes: (1) Create a new subject with the new schema, (2) Update consumers to handle both formats, (3) Migrate producers to the new subject, (4) Deprecate the old subject when migration is complete.
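Step (2), a consumer that handles both formats, can be as simple as branching on a field that distinguishes the old and new shapes. Everything here (field names, record shapes) is hypothetical:

```python
def customer_id_of(record: dict) -> str:
    """Read the customer ID from either the old flat format or the new nested one."""
    if "customer" in record:           # new subject's format: nested customer object
        return record["customer"]["id"]
    return record["customer_id"]       # old subject's format: flat field

print(customer_id_of({"customer_id": "C-7"}))        # C-7
print(customer_id_of({"customer": {"id": "C-7"}}))   # C-7
```

Once all producers write only the new subject, the old branch (and eventually the old subject) can be retired.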
What's the recommended workflow for schema changes?
- Test the new schema locally
- Register in development environment
- Update consumers first (for BACKWARD compatibility)
- Update producers
- Promote to production following the same order
Best Practices
Schema Design
- Include optional fields with defaults for future extensibility
- Use meaningful field names that describe the data
- Add documentation/comments to complex schemas
- Version your schemas in source control
Compatibility Strategy
- Use BACKWARD compatibility for most use cases
- Use FULL_TRANSITIVE for critical data pipelines
- Avoid NONE in production except during coordinated migrations
- Test compatibility changes in non-production first
Naming Conventions
Follow consistent naming patterns across topics and subjects.

Evolution Guidelines
- Always add new fields as optional with defaults
- Never remove required fields
- Never change field types
- Use union types (Avro) or oneOf (JSON) for flexible fields
- Document breaking changes and migration plans
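Combining these guidelines, a field added in a later Avro schema version might be declared as a nullable union with a default (the field name is hypothetical):

```json
{
  "name": "discount",
  "type": ["null", "double"],
  "default": null
}
```

The union lets the field be absent or null in old data, and the default lets new readers fill it in, keeping the change fully compatible.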