Schema Registry provides centralized schema management for Kafka messages. It stores and validates schemas, ensuring producers and consumers agree on message formats and preventing incompatible changes.

Key Concepts

Schema

A definition of message structure. Supports Avro, JSON Schema, and Protobuf formats.

Subject

A named collection of schema versions. Typically named after the topic with -key or -value suffix.

Compatibility

Rules that determine what schema changes are allowed. Prevents breaking changes.

Version

Each schema update creates a new version. All versions are retained for compatibility checking.

Required Permissions

Action                   Permission
View schemas             iam:project:infrastructure:kafka:read
Create/Update schemas    iam:project:infrastructure:kafka:write
Delete schemas           iam:project:infrastructure:kafka:delete

Schema Types

Type       Description                                    Use Case
Avro       Binary format with schema evolution support    High-performance, compact serialization
JSON       JSON Schema for validation                     Human-readable, web-friendly
Protobuf   Google’s Protocol Buffers                      Cross-language, efficient serialization

Subject Naming Convention

Subjects follow a naming pattern based on the topic name:
Pattern         Example        Use
{topic}-key     orders-key     Schema for message keys
{topic}-value   orders-value   Schema for message values
Subject names must be 1-255 characters. Use descriptive names that match your topic naming convention.

How to Create a Schema

1. Select Kafka Cluster: Choose the target Kafka cluster from the dropdown.
2. Click the Create Schema button.
3. Enter Subject Name: Provide a subject name following the naming convention (e.g., orders-value).
4. Select Schema Type: Choose the schema format:
  • Avro: For compact binary serialization
  • JSON: For JSON Schema validation
  • Protobuf: For Protocol Buffers
5. Define Schema: Enter the schema definition in the code editor. Examples are provided for each type.
6. Set Compatibility (Optional): Choose a compatibility mode or leave the default (BACKWARD).
7. Click Create Schema to register the schema.
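
Creating a schema through the UI is equivalent to a single REST call. A minimal sketch in Python, assuming a Confluent-compatible Schema Registry REST API; the registry URL is a placeholder:

import json
import requests

# Placeholder endpoint; substitute your registry's URL and auth.
REGISTRY = "https://schema-registry.example.com"

avro_schema = {
    "type": "record",
    "name": "Order",
    "namespace": "com.example",
    "fields": [{"name": "id", "type": "string"}],
}

# Register the schema under the value subject for the "orders" topic.
resp = requests.post(
    f"{REGISTRY}/subjects/orders-value/versions",
    headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
    json={"schemaType": "AVRO", "schema": json.dumps(avro_schema)},
)
resp.raise_for_status()
print("Registered schema id:", resp.json()["id"])

The response carries the globally unique schema ID that producers embed in messages (see the FAQ below).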

Example Schemas

Avro Schema
{
  "type": "record",
  "name": "User",
  "namespace": "com.example",
  "fields": [
    {"name": "id", "type": "string"},
    {"name": "name", "type": "string"},
    {"name": "email", "type": ["null", "string"], "default": null}
  ]
}
JSON Schema
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "id": {"type": "string"},
    "name": {"type": "string"},
    "email": {"type": "string"}
  },
  "required": ["id", "name"]
}
Protobuf Schema
syntax = "proto3";
package com.example;

message User {
  string id = 1;
  string name = 2;
  optional string email = 3;
}
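
To produce messages with the Avro User schema above, one option is the confluent-kafka Python client, which registers the schema on first use and prefixes each message with its schema ID. A sketch, assuming a Confluent-compatible registry; the URLs are placeholders:

from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import MessageField, SerializationContext

schema_str = """
{
  "type": "record", "name": "User", "namespace": "com.example",
  "fields": [
    {"name": "id", "type": "string"},
    {"name": "name", "type": "string"},
    {"name": "email", "type": ["null", "string"], "default": null}
  ]
}
"""

registry = SchemaRegistryClient({"url": "https://schema-registry.example.com"})
serializer = AvroSerializer(registry, schema_str)

producer = Producer({"bootstrap.servers": "kafka.example.com:9092"})
user = {"id": "u-1", "name": "Ada", "email": None}

# Serialize against the users-value subject and send.
producer.produce(
    topic="users",
    value=serializer(user, SerializationContext("users", MessageField.VALUE)),
)
producer.flush()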

How to View Schema Details

1. Find the Schema: Locate the schema in the list using search or the type filter.
2. Click the schema row to open the detail page.
3. Explore Tabs: The detail page has three tabs:
  • Overview: Current schema definition and metadata
  • Versions: All registered versions of this schema
  • Compatibility: Current compatibility settings
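
The same details are available programmatically. A sketch listing versions over the Confluent-compatible REST API (the URL is a placeholder):

import requests

REGISTRY = "https://schema-registry.example.com"
subject = "orders-value"

# All registered versions of the subject, e.g. [1, 2, 3].
print(requests.get(f"{REGISTRY}/subjects/{subject}/versions").json())

# The latest version with its id, version number, and schema definition.
latest = requests.get(f"{REGISTRY}/subjects/{subject}/versions/latest").json()
print(latest["id"], latest["version"])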

How to Register a New Version

1. Open Schema Details: Navigate to the schema detail page.
2. Click the Register New Version button.
3. Update Schema: Modify the schema definition. Changes must comply with the compatibility mode.
4. Submit: Click Register to add the new version.
New versions must pass compatibility checks. If the schema violates compatibility rules, registration will fail.
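
You can also test a candidate schema before registering it, so a violation surfaces as a check result rather than a failed registration. A sketch against the Confluent-compatible compatibility endpoint (the URL is a placeholder):

import json
import requests

REGISTRY = "https://schema-registry.example.com"
subject = "orders-value"

candidate = {
    "type": "record",
    "name": "Order",
    "namespace": "com.example",
    "fields": [
        {"name": "id", "type": "string"},
        # New optional field with a default: allowed under BACKWARD.
        {"name": "note", "type": ["null", "string"], "default": None},
    ],
}

resp = requests.post(
    f"{REGISTRY}/compatibility/subjects/{subject}/versions/latest",
    headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
    json={"schema": json.dumps(candidate)},
)
print("compatible:", resp.json()["is_compatible"])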

Compatibility Modes

Compatibility modes control what schema changes are allowed:
Mode                   Description
BACKWARD               New schema can read data written by the previous version
BACKWARD_TRANSITIVE    New schema can read data written by all previous versions
FORWARD                Previous schema can read data written by the new version
FORWARD_TRANSITIVE     All previous schemas can read data written by the new version
FULL                   Both backward and forward compatible with the previous version
FULL_TRANSITIVE        Both backward and forward compatible with all previous versions
NONE                   No compatibility checking (not recommended for production)

Choosing a Compatibility Mode

Scenario                               Recommended Mode
Consumers updated before producers     BACKWARD
Producers updated before consumers     FORWARD
Unknown update order                   FULL
Schema migrations with coordination    NONE (temporarily)
BACKWARD is the default and most common mode. It ensures new consumers can read messages from older producers.

Compatible Changes by Mode

BACKWARD Compatible Changes:
  • Adding optional fields with defaults
  • Removing fields
FORWARD Compatible Changes:
  • Adding fields
  • Removing optional fields with defaults
Breaking Changes (Not Compatible):
  • Changing field types
  • Renaming fields
  • Removing required fields (BACKWARD)
  • Adding required fields (FORWARD)
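
For example, adding this field to the Avro User schema from earlier is BACKWARD compatible, because the new schema fills in the default when reading old records that lack the field:

{"name": "phone", "type": ["null", "string"], "default": null}

The same field without the default would be a breaking change under BACKWARD, since old records would give the new reader no value for it.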

How to Change Compatibility

1. Open Schema Details: Navigate to the schema detail page.
2. Select the Compatibility tab.
3. Select New Mode: Choose the desired compatibility mode from the dropdown.
4. Click Update Compatibility to apply the change.
Changing compatibility mode only affects future versions. Existing versions retain their original compatibility relationships.
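
Per-subject compatibility can also be set programmatically. A sketch using the config endpoint of the Confluent-compatible REST API (the URL is a placeholder):

import requests

REGISTRY = "https://schema-registry.example.com"
subject = "orders-value"

# Switch the subject to FULL compatibility.
resp = requests.put(
    f"{REGISTRY}/config/{subject}",
    headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
    json={"compatibility": "FULL"},
)
print(resp.json())  # {"compatibility": "FULL"}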

How to Delete a Schema

1. Find the Schema: Locate the schema in the list.
2. Click the delete button, or open the detail page and click Delete Schema.
3. Confirm the deletion. This performs a soft delete by default.
Deleting a schema removes all versions. Ensure no producers or consumers depend on this schema before deletion.
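
Deletion is likewise available over the REST API. A sketch showing both flavors; in the Confluent-compatible API a hard delete requires a prior soft delete (the URL is a placeholder):

import requests

REGISTRY = "https://schema-registry.example.com"
subject = "orders-value"

# Soft delete: hides the subject from normal queries but keeps it recoverable.
print(requests.delete(f"{REGISTRY}/subjects/{subject}").json())  # e.g. [1, 2]

# Hard delete: permanently removes all versions. Irreversible.
requests.delete(f"{REGISTRY}/subjects/{subject}", params={"permanent": "true"})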

Troubleshooting

Schema registration fails the compatibility check

  • Your schema change violates the compatibility mode
  • Review the error message for specific field issues
  • Consider adding default values to new fields
  • Temporarily set compatibility to NONE if the migration is coordinated

Schema definition is rejected as invalid

  • Check JSON syntax (missing commas, brackets)
  • Verify Avro type declarations are valid
  • Ensure Protobuf syntax matches the declared version
  • Use external validators to test the schema before registration

Schema not found

  • Verify you’re connected to the correct Kafka cluster
  • Check the subject name spelling
  • The schema may have been deleted
  • Refresh the page or clear filters

Consumers fail to deserialize messages

  • Verify the schema ID exists in the registry
  • Check that the client is configured to use the schema registry
  • Ensure network connectivity to the schema registry endpoint
  • Verify authentication credentials if required

Schema deletion fails

  • You need delete permission
  • Check if the schema is referenced by other subjects
  • Verify the schema registry is healthy (a quick check sketch follows this list)
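
A simple end-to-end check of network, auth, and registry health from the client side (a sketch; adjust the URL and credentials to your environment):

import requests

REGISTRY = "https://schema-registry.example.com"

try:
    # Listing subjects exercises connectivity, authentication, and the registry itself.
    resp = requests.get(f"{REGISTRY}/subjects", timeout=5)
    resp.raise_for_status()
    print("registry reachable, subjects:", resp.json())
except requests.RequestException as exc:
    print("registry unreachable or unhealthy:", exc)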

FAQ

What is the difference between soft delete and hard delete?

Soft delete removes the schema from normal queries but retains data for potential recovery. Hard delete permanently removes all versions. Producers and consumers using the schema will fail after deletion.

Can a topic have more than one schema?

Yes. Typically you have two subjects per topic: one for the message key (topic-key) and one for the value (topic-value). Each can have different schemas and evolve independently.

How do schema IDs work?

Each schema version gets a unique numeric ID. Producers embed this ID in messages. Consumers use the ID to fetch the correct schema for deserialization. IDs are global across the registry.
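
In the Confluent wire format, that embedding is a fixed five-byte prefix: a zero magic byte followed by the schema ID as a big-endian 32-bit integer. A sketch extracting it from a raw message value:

import struct

def schema_id_of(value: bytes) -> int:
    # Confluent framing: 1 magic byte (0) + 4-byte big-endian schema ID.
    magic, schema_id = struct.unpack(">bI", value[:5])
    if magic != 0:
        raise ValueError("not Confluent wire format")
    return schema_id

print(schema_id_of(b"\x00\x00\x00\x00\x2a" + b"payload"))  # 42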
What is the difference between a version and an ID?

Version is sequential within a subject (1, 2, 3…). ID is globally unique across all subjects. The same schema content in different subjects gets different IDs.

Which schema type should I choose?

  • Avro: Best for Kafka-native workflows, compact format, excellent schema evolution.
  • JSON Schema: Good for web APIs, human-readable, widely understood.
  • Protobuf: Best for cross-language systems, very efficient, strong typing.

Can I change the schema type of an existing subject?

No. The schema type (Avro, JSON, Protobuf) cannot be changed for an existing subject. Create a new subject with the desired type and migrate producers/consumers.

How do I handle breaking changes?

For breaking changes:
1. Create a new subject with the new schema.
2. Update consumers to handle both formats.
3. Migrate producers to the new subject.
4. Deprecate the old subject when migration is complete.

Best Practices

Schema Design

  • Include optional fields with defaults for future extensibility
  • Use meaningful field names that describe the data
  • Add documentation/comments to complex schemas
  • Version your schemas in source control

Compatibility Strategy

  • Use BACKWARD compatibility for most use cases
  • Use FULL_TRANSITIVE for critical data pipelines
  • Avoid NONE in production except during coordinated migrations
  • Test compatibility changes in non-production first

Naming Conventions

Follow consistent naming patterns:
{domain}.{entity}.{event-type}

Examples:
orders.payment.completed-value
users.profile.updated-value
inventory.stock.adjusted-value

Evolution Guidelines

  • Always add new fields as optional with defaults
  • Never remove required fields
  • Never change field types
  • Use union types (Avro) or oneOf (JSON) for flexible fields (see the sketch after this list)
  • Document breaking changes and migration plans
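
As a sketch of the union-type guidance above, a hypothetical Avro field that accepts more than one shape; null comes first so the default can be null:

{"name": "discount", "type": ["null", "double", "string"], "default": null}

JSON Schema achieves the same flexibility with oneOf over the alternative types.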