Confluent Schema Registry provides a serving layer for your metadata. It exposes a RESTful interface for storing and retrieving Apache Avro schemas, stores a versioned history of all schemas based on a specified subject name strategy, offers multiple compatibility settings, and allows schemas to evolve according to the configured compatibility setting.
This chart bootstraps a deployment of Confluent Schema Registry on a Kubernetes cluster using the Helm package manager.
Hypertrace uses Confluent Schema Registry as the serialization mechanism for the Avro messages published to Kafka. These schemas are defined in the code alongside their respective owner modules. All Avro message schemas are registered with the Schema Registry, and Kafka producers and consumers use it when serializing and deserializing Avro messages.
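To illustrate how producers are pointed at the registry, here is a minimal sketch of the serializer-related settings a Confluent Avro producer typically carries. The helper function, the service URL, and the bootstrap servers are placeholders for illustration, not values taken from this chart; only the configuration key and serializer class names follow the Confluent conventions.

```python
def producer_config(schema_registry_url, bootstrap_servers):
    """Sketch of the settings a Kafka Avro producer needs to use Schema Registry.

    The Avro serializers register schemas with (and fetch them from)
    the endpoint given in `schema.registry.url`.
    """
    return {
        "bootstrap.servers": bootstrap_servers,
        # Where the serializers look up / register Avro schemas:
        "schema.registry.url": schema_registry_url,
        "key.serializer": "io.confluent.kafka.serializers.KafkaAvroSerializer",
        "value.serializer": "io.confluent.kafka.serializers.KafkaAvroSerializer",
    }

# Placeholder in-cluster addresses; adjust to your deployment.
cfg = producer_config("http://schema-registry:8081", "kafka:9092")
```

Consumers are configured symmetrically, with the Avro deserializer classes pointed at the same registry URL.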
Hypertrace Ingestion Pipeline
- Kubernetes 1.10.0+
- Helm 3.0.0+
- A healthy and accessible Kafka Cluster
This chart will do the following:
- Create a schema registry cluster using a Deployment.
- Create a Service configured to connect to the available schema registry instance on the configured port.
- Optionally apply a Pod Anti-Affinity to spread the schema registry instances across nodes.
- Optionally add an Ingress resource.
- Optionally start a JMX Exporter container inside schema registry pods.
- Optionally create a Prometheus ServiceMonitor for each enabled jmx exporter container.
- Optionally add a CronJob to back up the schema registry topic and save it to Google Cloud Storage or AWS S3.
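The optional features above are toggled through chart values. The snippet below is illustrative only; the key names are hypothetical and must be checked against the chart's values.yaml before use.

```yaml
# Hypothetical values.yaml fragment -- verify key names against the chart.
ingress:
  enabled: true        # create an Ingress resource
jmx:
  enabled: true        # run a JMX Exporter sidecar in each pod
serviceMonitor:
  enabled: true        # create a Prometheus ServiceMonitor
backup:
  enabled: true        # CronJob backing up the schema topic
```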
```console
$ helm upgrade schema-registry ./helm --install --namespace hypertrace
```
You can specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
```console
$ helm upgrade my-release ./helm --install --namespace hypertrace -f values.yaml
```
- You can find all user-configurable settings and their defaults in `values.yaml`.