
[BUG] kubeblocks upgrade from 0.9.0-beta.29 to 0.9.0-beta.30 error: "kafka-cc" is invalid: spec.fileFormatConfig: Required value #7503

Closed
JashBook opened this issue Jun 6, 2024 · 4 comments · Fixed by #7638
Labels: kind/bug (Something isn't working), severity/major (Great chance user will encounter the same problem)

JashBook (Collaborator) commented Jun 6, 2024

Describe the bug

kbcli version
Kubernetes: v1.26.3
KubeBlocks: 0.9.0-beta.29
kbcli: 0.9.0-beta.24

To Reproduce
Steps to reproduce the behavior:

  1. Install KubeBlocks
kbcli kubeblocks install --version 0.9.0-beta.29                                     
KubeBlocks will be installed to namespace "kb-system"
Kubernetes version 1.26.3
kbcli version 0.9.0-beta.24
Collecting data from cluster                       OK
Kubernetes cluster preflight                       OK
  Warn
  - This application requires at least 3 nodes
  - The default storage class was not found. You can use option --set storageClass=<storageClassName> when creating cluster
Create CRDs                                        OK
Add and update repo kubeblocks                     OK
Install KubeBlocks 0.9.0-beta.29                   OK
Wait for addons to be enabled
  apecloud-mysql                                   OK
  kafka                                            OK
  mongodb                                          OK
  postgresql                                       OK
  pulsar                                           OK
  redis                                            OK
  snapshot-controller                              OK

KubeBlocks 0.9.0-beta.29 installed to namespace kb-system SUCCESSFULLY!

-> Basic commands for cluster:
    kbcli cluster create -h     # help information about creating a database cluster
    kbcli cluster list          # list all database clusters
    kbcli cluster describe <cluster name>  # get cluster information

-> Uninstall KubeBlocks:
    kbcli kubeblocks uninstall

  2. Upgrade KubeBlocks
kbcli kubeblocks upgrade --auto-approve  --set upgradeAddons=true --version 0.9.0-beta.30
Current KubeBlocks version 0.9.0-beta.29.
Kubernetes version 1.26.3
kbcli version 0.9.0-beta.24
Add and update repo kubeblocks                     OK
Keep addons                                        OK
Stop KubeBlocks 0.9.0-beta.29                      OK
Stop DataProtection                                OK
Conversion old version[0.9.0-beta.29] CRs to new version[0.9.0-beta.30] OK
Upgrade CRDs                                       OK
update new version CRs                             OK
Upgrading KubeBlocks to 0.9.0-beta.30              FAIL
error: pre-upgrade hooks failed: 1 error occurred:
	* timed out waiting for the condition      

  3. See the error
kubectl get pod -n kb-system
NAME                                            READY   STATUS             RESTARTS      AGE
kb-addon-snapshot-controller-66f5569c9d-xbds9   1/1     Running            0             4m5s
kubeblocks-upgrade-hook-job-8rwwz               0/1     CrashLoopBackOff   4 (79s ago)   3m4s
kubectl logs -n kb-system kubeblocks-upgrade-hook-job-8rwwz
addon[alertmanager-webhook-adaptor] is not installed and pass
addon[apecloud-otel-collector] is not installed and pass
addon[aws-load-balancer-controller] is not installed and pass
addon[csi-driver-nfs] is not installed and pass
addon[csi-hostpath-driver] is not installed and pass
addon[csi-s3] is not installed and pass
addon[external-dns] is not installed and pass
addon[fault-chaos-mesh] is not installed and pass
addon[grafana] is not installed and pass
addon[kubebench] is not installed and pass
addon[kubeblocks-csi-driver] is not installed and pass
addon[llm] is not installed and pass
addon[loki] is not installed and pass
addon[migration] is not installed and pass
addon[minio] is not installed and pass
addon[mysql] is not installed and pass
addon[nvidia-gpu-exporter] is not installed and pass
addon[nyancat] is not installed and pass
addon[prometheus] is not installed and pass
addon[pyroscope-server] is not installed and pass
addon[qdrant] is not installed and pass
addon[victoria-metrics-agent] is not installed and pass
reading CRDs from path: /kubeblocks/crd
read CRDs from file: apps.kubeblocks.io_backuppolicytemplates.yaml
read CRDs from file: apps.kubeblocks.io_clusterdefinitions.yaml
read CRDs from file: apps.kubeblocks.io_clusters.yaml
read CRDs from file: apps.kubeblocks.io_clusterversions.yaml
read CRDs from file: apps.kubeblocks.io_componentclassdefinitions.yaml
read CRDs from file: apps.kubeblocks.io_componentdefinitions.yaml
read CRDs from file: apps.kubeblocks.io_componentresourceconstraints.yaml
read CRDs from file: apps.kubeblocks.io_components.yaml
read CRDs from file: apps.kubeblocks.io_componentversions.yaml
read CRDs from file: apps.kubeblocks.io_configconstraints.yaml
read CRDs from file: apps.kubeblocks.io_configurations.yaml
read CRDs from file: apps.kubeblocks.io_opsdefinitions.yaml
read CRDs from file: apps.kubeblocks.io_opsrequests.yaml
read CRDs from file: apps.kubeblocks.io_servicedescriptors.yaml
read CRDs from file: dataprotection.kubeblocks.io_actionsets.yaml
read CRDs from file: dataprotection.kubeblocks.io_backuppolicies.yaml
read CRDs from file: dataprotection.kubeblocks.io_backuprepos.yaml
read CRDs from file: dataprotection.kubeblocks.io_backups.yaml
read CRDs from file: dataprotection.kubeblocks.io_backupschedules.yaml
read CRDs from file: dataprotection.kubeblocks.io_restores.yaml
read CRDs from file: dataprotection.kubeblocks.io_storageproviders.yaml
read CRDs from file: experimental.kubeblocks.io_nodecountscalers.yaml
read CRDs from file: extensions.kubeblocks.io_addons.yaml
read CRDs from file: storage.kubeblocks.io_storageproviders.yaml
read CRDs from file: workloads.kubeblocks.io_instancesets.yaml
create/update CRD: backuppolicytemplates.apps.kubeblocks.io
create/update CRD: clusterdefinitions.apps.kubeblocks.io
create/update CRD: clusters.apps.kubeblocks.io
create/update CRD: clusterversions.apps.kubeblocks.io
create/update CRD: componentclassdefinitions.apps.kubeblocks.io
create/update CRD: componentdefinitions.apps.kubeblocks.io
create/update CRD: componentresourceconstraints.apps.kubeblocks.io
create/update CRD: components.apps.kubeblocks.io
create/update CRD: componentversions.apps.kubeblocks.io
create/update CRD: configconstraints.apps.kubeblocks.io
create/update CRD: configurations.apps.kubeblocks.io
create/update CRD: opsdefinitions.apps.kubeblocks.io
create/update CRD: opsrequests.apps.kubeblocks.io
create/update CRD: servicedescriptors.apps.kubeblocks.io
create/update CRD: actionsets.dataprotection.kubeblocks.io
create/update CRD: backuppolicies.dataprotection.kubeblocks.io
create/update CRD: backuprepos.dataprotection.kubeblocks.io
create/update CRD: backups.dataprotection.kubeblocks.io
create/update CRD: backupschedules.dataprotection.kubeblocks.io
create/update CRD: restores.dataprotection.kubeblocks.io
create/update CRD: storageproviders.dataprotection.kubeblocks.io
create/update CRD: nodecountscalers.experimental.kubeblocks.io
create/update CRD: addons.extensions.kubeblocks.io
create/update CRD: storageproviders.storage.kubeblocks.io
create/update CRD: instancesets.workloads.kubeblocks.io
update GVR resource: apps.kubeblocks.io/v1beta1, Resource=configconstraints
update resource: kafka-cc
panic: ConfigConstraint.apps.kubeblocks.io "kafka-cc" is invalid: spec.fileFormatConfig: Required value

goroutine 1 [running]:
github.com/apecloud/kubeblocks/cmd/helmhook/hook.CheckErr(...)
	/src/cmd/helmhook/hook/utils.go:38
main.main()
	/src/cmd/helmhook/main.go:73 +0x5c0
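
For context, the hook's CheckErr panics on the validation error it gets back when re-applying stored CRs under the upgraded CRD. The standalone Go sketch below reproduces that update path with a plain dynamic client; the GVR and object name are taken from the hook log above, while the kubeconfig handling and the rest are assumptions for illustration, not the hook's actual code.

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a local kubeconfig; the real hook job runs in-cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GVR from the hook log: apps.kubeblocks.io/v1beta1, Resource=configconstraints.
	gvr := schema.GroupVersionResource{
		Group:    "apps.kubeblocks.io",
		Version:  "v1beta1",
		Resource: "configconstraints",
	}

	// ConfigConstraint is cluster-scoped. Read the object back and write it
	// unchanged: if the conversion to v1beta1 drops spec.fileFormatConfig, the
	// API server rejects the update with the same "Required value" error that
	// makes the upgrade hook job crash-loop.
	cc, err := client.Resource(gvr).Get(context.TODO(), "kafka-cc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if _, err := client.Resource(gvr).Update(context.TODO(), cc, metav1.UpdateOptions{}); apierrors.IsInvalid(err) {
		fmt.Println("update rejected:", err)
	}
}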

Get the kafka-cc ConfigConstraint YAML. Note that spec.fileFormatConfig is already set, yet the hook reports it as a missing required value:

k get cc kafka-cc -oyaml
apiVersion: apps.kubeblocks.io/v1beta1
kind: ConfigConstraint
metadata:
  annotations:
    meta.helm.sh/release-name: kb-addon-kafka
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2024-06-06T03:35:28Z"
  finalizers:
  - config.kubeblocks.io/finalizer
  generation: 2
  labels:
    app.kubernetes.io/instance: kb-addon-kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.3.2
    helm.sh/chart: kafka-0.9.0
  name: kafka-cc
  resourceVersion: "386000825"
  uid: 2cac5012-770f-4b0c-8e08-34af818972a1
spec:
  fileFormatConfig:
    format: properties
  parametersSchema:
    cue: "//Copyright (C) 2022-2023 ApeCloud Co., Ltd\n//\n//This file is part of
      KubeBlocks project\n//\n//This program is free software: you can redistribute
      it and/or modify\n//it under the terms of the GNU Affero General Public License
      as published by\n//the Free Software Foundation, either version 3 of the License,
      or\n//(at your option) any later version.\n//\n//This program is distributed
      in the hope that it will be useful\n//but WITHOUT ANY WARRANTY; without even
      the implied warranty of\n//MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
      \ See the\n//GNU Affero General Public License for more details.\n//\n//You
      should have received a copy of the GNU Affero General Public License\n//along
      with this program.  If not, see <http://www.gnu.org/licenses/>.\n\n// https://kafka.apache.org/documentation/#brokerconfigs\n#KafkaParameter:
      {\n\n\t\"allow.everyone.if.no.acl.found\"?: bool\n\n\t// The replication factor
      for the offsets topic (set higher to ensure availability). Internal topic creation
      will fail until the cluster size meets this replication factor requirement.\n\t\"offsets.topic.replication.factor\"?:
      int & >=1 & <=32767\n\n\t// The replication factor for the transaction topic
      (set higher to ensure availability). Internal topic creation will fail until
      the cluster size meets this replication factor requirement.\n\t\"transaction.state.log.replication.factor\"?:
      int & >=1 & <=32767\n\n\t// The maximum time in ms that a message in any topic
      is kept in memory before flushed to disk. If not set, the value in log.flush.scheduler.interval.ms
      is used\n\t\"log.flush.interval.ms\"?: int\n\n\t// The number of messages accumulated
      on a log partition before messages are flushed to disk\n\t\"log.flush.interval.messages\"?:
      int & >=1\n\n\t// Overridden min.insync.replicas config for the transaction
      topic.\n\t\"transaction.state.log.min.isr\"?: int & >=1\n\n\t// Enables delete
      topic. Delete topic through the admin tool will have no effect if this config
      is turned off\n\t\"delete.topic.enable\"?: bool\n\n\t// The largest record batch
      size allowed by Kafka\n\t\"message.max.bytes\"?: int & >=0\n\n\t// The number
      of threads that the server uses for receiving requests from the network and
      sending responses to the network\n\t\"num.network.threads\"?: int & >=1\n\n\t//
      The number of threads that the server uses for processing requests, which may
      include disk I/O\n\t\"num.io.threads\"?: int & >=1\n\n\t// The number of threads
      that can move replicas between log directories, which may include disk I/O\n\t\"num.replica.alter.log.dirs.threads\"?:
      int\n\n\t// The number of threads to use for various background processing tasks\n\t\"background.threads\"?:
      int & >=1\n\n\t// The number of queued requests allowed for data-plane, before
      blocking the network threads\n\t\"queued.max.requests\"?: int & >=1\n\n\t//
      The number of queued bytes allowed before no more requests are read\n\t\"queued.max.request.bytes\"?:
      int\n\n\t// The configuration controls the maximum amount of time the client
      will wait for the response of a request\n\t\"request.timeout.ms\"?: int & >=0\n\n\t//
      The amount of time the client will wait for the socket connection to be established.
      If the connection is not built before the timeout elapses, clients will close
      the socket channel.\n\t\"socket.connection.setup.timeout.ms\"?: int\n\n\t//
      The maximum amount of time the client will wait for the socket connection to
      be established.\n\t\"socket.connection.setup.timeout.max.ms\"?: int\n\n\t//
      This is the maximum number of bytes in the log between the latest snapshot and
      the high-watermark needed before generating a new snapshot.\n\t\"metadata.log.max.record.bytes.between.snapshots\"?:
      int & >=1\n\n\t// The length of time in milliseconds between broker heartbeats.
      Used when running in KRaft mode.\n\t\"broker.heartbeat.interval.ms\"?: int\n\n\t//
      The length of time in milliseconds that a broker lease lasts if no heartbeats
      are made. Used when running in KRaft mode.\n\t\"broker.session.timeout.ms\"?:
      int\n\n\t// SASL mechanism used for communication with controllers. Default
      is GSSAPI.\n\t\"sasl.mechanism.controller.protocol\"?: string\n\n\t// The maximum
      size of a single metadata log file.\n\t\"metadata.log.segment.bytes\"?: int
      & >=12\n\n\t// The maximum time before a new metadata log file is rolled out
      (in milliseconds).\n\t\"metadata.log.segment.ms\"?: int\n\n\t// The maximum
      combined size of the metadata log and snapshots before deleting old snapshots
      and log files.\n\t\"metadata.max.retention.bytes\"?: int\n\n\t// The number
      of milliseconds to keep a metadata log file or snapshot before deleting it.
      Since at least one snapshot must exist before any logs can be deleted, this
      is a soft limit.\n\t\"metadata.max.retention.ms\"?: int\n\n\t// This configuration
      controls how often the active controller should write no-op records to the metadata
      partition.\n\t// If the value is 0, no-op records are not appended to the metadata
      partition. The default value is 500\n\t\"metadata.max.idle.interval.ms\"?: int
      & >=0\n\n\t// The fully qualified name of a class that implements org.apache.kafka.server.authorizer.Authorizer
      interface, which is used by the broker for authorization.\n\t\"authorizer.class.name\"?:
      string\n\n\t// A comma-separated list of listener names which may be started
      before the authorizer has finished initialization.\n\t\"early.start.listeners\"?:
      string\n\n\t// Name of listener used for communication between controller and
      brokers.\n\t\"control.plane.listener.name\"?: string\n\n\t// The SO_SNDBUF buffer
      of the socket server sockets. If the value is -1, the OS default will be used.\n\t\"socket.send.buffer.bytes\"?:
      int\n\n\t// The SO_RCVBUF buffer of the socket server sockets. If the value
      is -1, the OS default will be used.\n\t\"socket.receive.buffer.bytes\"?: int\n\n\t//
      The maximum number of bytes in a socket request\n\t\"socket.request.max.bytes\"?:
      int & >=1\n\n\t// The maximum number of pending connections on the socket.\n\t//
      In Linux, you may also need to configure `somaxconn` and `tcp_max_syn_backlog`
      kernel parameters accordingly to make the configuration takes effect.\n\t\"socket.listen.backlog.size\"?:
      int & >=1\n\n\t// The maximum number of connections we allow from each ip address.\n\t\"max.connections.per.ip\"?:
      int & >=0\n\n\t// A comma-separated list of per-ip or hostname overrides to
      the default maximum number of connections. An example value is \"hostName:100,127.0.0.1:200\"\n\t\"max.connections.per.ip.overrides\"?:
      string\n\n\t// The maximum number of connections we allow in the broker at any
      time.\n\t\"max.connections\"?: int & >=0\n\n\t// The maximum connection creation
      rate we allow in the broker at any time.\n\t\"max.connection.creation.rate\"?:
      int & >=0\n\n\t// Close idle connections after the number of milliseconds specified
      by this config.\n\t\"connections.max.idle.ms\"?: int\n\n\t// Connection close
      delay on failed authentication: this is the time (in milliseconds) by which
      connection close will be delayed on authentication failure.\n\t// This must
      be configured to be less than connections.max.idle.ms to prevent connection
      timeout.\n\t\"connection.failed.authentication.delay.ms\"?: int & >=0\n\n\t//
      Rack of the broker. This will be used in rack aware replication assignment for
      fault tolerance.\n\t\"broker.rack\"?: string\n\n\t// The default number of log
      partitions per topic\n\t\"num.partitions\"?: int & >=1\n\n\t// The maximum size
      of a single log file\n\t\"log.segment.bytes\"?: int & >=14\n\n\t// The maximum
      time before a new log segment is rolled out (in milliseconds). If not set, the
      value in log.roll.hours is used\n\t\"log.roll.ms\"?: int\n\n\t// The maximum
      time before a new log segment is rolled out (in hours), secondary to log.roll.ms
      property\n\t\"log.roll.hours\"?: int & >=1\n\n\t// The maximum jitter to subtract
      from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours
      is used\n\t\"log.roll.jitter.ms\"?: int\n\n\t// The maximum jitter to subtract
      from logRollTimeMillis (in hours), secondary to log.roll.jitter.ms property\n\t\"log.roll.jitter.hours\"?:
      int & >=0\n\n\t// The number of milliseconds to keep a log file before deleting
      it (in milliseconds), If not set, the value in log.retention.minutes is used.
      If set to -1, no time limit is applied.\n\t\"log.retention.ms\"?: int\n\n\t//
      The number of minutes to keep a log file before deleting it (in minutes), secondary
      to log.retention.ms property. If not set, the value in log.retention.hours is
      used\n\t\"log.retention.minutes\"?: int\n\n\t// The number of hours to keep
      a log file before deleting it (in hours), tertiary to log.retention.ms property\n\t\"log.retention.hours\"?:
      int\n\n\t// The maximum size of the log before deleting it\n\t\"log.retention.bytes\"?:
      int\n\n\t// The frequency in milliseconds that the log cleaner checks whether
      any log is eligible for deletion\n\t\"log.retention.check.interval.ms\"?: int
      & >=1\n\n\t// The default cleanup policy for segments beyond the retention window.
      A comma separated list of valid policies.\n\t\"log.cleanup.policy\"?: string
      & \"compact\" | \"delete\"\n\n\t// The number of background threads to use for
      log cleaning\n\t\"log.cleaner.threads\"?: int & >=0\n\n\t// The log cleaner
      will be throttled so that the sum of its read and write i/o will be less than
      this value on average\n\t\"log.cleaner.io.max.bytes.per.second\"?: number\n\n\t//
      The total memory used for log deduplication across all cleaner threads\n\t\"log.cleaner.dedupe.buffer.size\"?:
      int\n\n\t// The total memory used for log cleaner I/O buffers across all cleaner
      threads\n\t\"log.cleaner.io.buffer.size\"?: int & >=0\n\n\t// Log cleaner dedupe
      buffer load factor. The percentage full the dedupe buffer can become. A higher
      value will allow more log to be cleaned at once but will lead to more hash collisions\n\t\"log.cleaner.io.buffer.load.factor\"?:
      number\n\n\t// The amount of time to sleep when there are no logs to clean\n\t\"log.cleaner.backoff.ms\"?:
      int & >=0\n\n\t// The minimum ratio of dirty log to total log for a log to eligible
      for cleaning.\n\t\"log.cleaner.min.cleanable.ratio\"?: number & >=0 & <=1\n\n\t//
      Enable the log cleaner process to run on the server.\n\t\"log.cleaner.enable\"?:
      bool\n\n\t// The amount of time to retain delete tombstone markers for log compacted
      topics.\n\t\"log.cleaner.delete.retention.ms\"?: int & >=0\n\n\t// The minimum
      time a message will remain uncompacted in the log. Only applicable for logs
      that are being compacted.\n\t\"log.cleaner.min.compaction.lag.ms\"?: int & >=0\n\n\t//
      The maximum time a message will remain ineligible for compaction in the log.
      Only applicable for logs that are being compacted.\n\t\"log.cleaner.max.compaction.lag.ms\"?:
      int & >=1\n\n\t// The maximum size in bytes of the offset index\n\t\"log.index.size.max.bytes\"?:
      int & >=4\n\n\t// The interval with which we add an entry to the offset index\n\t\"log.index.interval.bytes\"?:
      int & >=0\n\n\t// The amount of time to wait before deleting a file from the
      filesystem\n\t\"log.segment.delete.delay.ms\"?: int & >=0\n\n\t// The frequency
      in ms that the log flusher checks whether any log needs to be flushed to disk\n\t\"log.flush.scheduler.interval.ms\"?:
      int\n\n\t// The frequency with which we update the persistent record of the
      last flush which acts as the log recovery point\n\t\"log.flush.offset.checkpoint.interval.ms\"?:
      int & >=0\n\n\t// The frequency with which we update the persistent record of
      log start offset\n\t\"log.flush.start.offset.checkpoint.interval.ms\"?: int
      & >=0\n\n\t// Should pre allocate file when create new segment? If you are using
      Kafka on Windows, you probably need to set it to true.\n\t\"log.preallocate\"?:
      bool\n\n\t// The number of threads per data directory to be used for log recovery
      at startup and flushing at shutdown\n\t\"num.recovery.threads.per.data.dir\"?:
      int & >=1\n\n\t// Enable auto creation of topic on the server\n\t\"auto.create.topics.enable\"?:
      bool\n\n\t// When a producer sets acks to \"all\" (or \"-1\"), min.insync.replicas
      specifies the minimum number of replicas that must acknowledge a write for the
      write to be considered successful.\n\t// If this minimum cannot be met, then
      the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend).\n\t\"min.insync.replicas\"?:
      int & >=1\n\n\t// Specify the message format version the broker will use to
      append messages to the logs.\n\t\"log.message.format.version\"?: string & \"0.8.0\"
      | \"0.8.1\" | \"0.8.2\" | \"0.9.0\" | \"0.10.0-IV0\" | \"0.10.0-IV1\" | \"0.10.1-IV0\"
      | \"0.10.1-IV1\" | \"0.10.1-IV2\" | \"0.10.2-IV0\" | \"0.11.0-IV0\" | \"0.11.0-IV1\"
      | \"0.11.0-IV2\" | \"1.0-IV0\" | \"1.1-IV0\" | \"2.0-IV0\" | \"2.0-IV1\" | \"2.1-IV0\"
      | \"2.1-IV1\" | \"2.1-IV2\" | \"2.2-IV0\" | \"2.2-IV1\" | \"2.3-IV0\" | \"2.3-IV1\"
      | \"2.4-IV0\" | \"2.4-IV1\" | \"2.5-IV0\" | \"2.6-IV0\" | \"2.7-IV0\" | \"2.7-IV1\"
      | \"2.7-IV2\" | \"2.8-IV0\" | \"2.8-IV1\" | \"3.0-IV0\" | \"3.0-IV1\" | \"3.1-IV0\"
      | \"3.2-IV0\" | \"3.3-IV0\" | \"3.3-IV1\" | \"3.3-IV2\" | \"3.3-IV3\" | \"3.4-IV0\"\n\n\t//
      Define whether the timestamp in the message is message create time or log append
      time.\n\t\"log.message.timestamp.type\"?: string & \"CreateTime\" | \"LogAppendTime\"\n\n\t//
      The maximum difference allowed between the timestamp when a broker receives
      a message and the timestamp specified in the message.\n\t\"log.message.timestamp.difference.max.ms\"?:
      int & >=0\n\n\t// The create topic policy class that should be used for validation.\n\t\"create.topic.policy.class.name\"?:
      string\n\n\t// The alter configs policy class that should be used for validation.\n\t\"alter.config.policy.class.name\"?:
      string\n\n\t// This configuration controls whether down-conversion of message
      formats is enabled to satisfy consume requests.\n\t\"log.message.downconversion.enable\"?:
      bool\n\n\t// The socket timeout for controller-to-broker channels\n\t\"controller.socket.timeout.ms\"?:
      int\n\n\t// The default replication factors for automatically created topics\n\t\"default.replication.factor\"?:
      int\n\n\t// If a follower hasn't sent any fetch requests or hasn't consumed
      up to the leaders log end offset for at least this time, the leader will remove
      the follower from isr\n\t\"replica.lag.time.max.ms\"?: int\n\n\t// The socket
      timeout for network requests. Its value should be at least replica.fetch.wait.max.ms\n\t\"replica.socket.timeout.ms\"?:
      int\n\n\t// The socket receive buffer for network requests\n\t\"replica.socket.receive.buffer.bytes\"?:
      int\n\n\t// The number of bytes of messages to attempt to fetch for each partition.\n\t\"replica.fetch.max.bytes\"?:
      int & >=0\n\n\t// The maximum wait time for each fetcher request issued by follower
      replicas.\n\t\"replica.fetch.wait.max.ms\"?: int\n\n\t// The amount of time
      to sleep when fetch partition error occurs.\n\t\"replica.fetch.backoff.ms\"?:
      int & >=0\n\n\t// Minimum bytes expected for each fetch response. If not enough
      bytes, wait up to replica.fetch.wait.max.ms (broker config).\n\t\"replica.fetch.min.bytes\"?:
      int\n\n\t// Maximum bytes expected for the entire fetch response.\n\t\"replica.fetch.response.max.bytes\"?:
      int & >=0\n\n\t// Number of fetcher threads used to replicate records from each
      source broker.\n\t\"num.replica.fetchers\"?: int\n\n\t// The frequency with
      which the high watermark is saved out to disk\n\t\"replica.high.watermark.checkpoint.interval.ms\"?:
      int\n\n\t// The purge interval (in number of requests) of the fetch request
      purgatory\n\t\"fetch.purgatory.purge.interval.requests\"?: int\n\n\t// The purge
      interval (in number of requests) of the producer request purgatory\n\t\"producer.purgatory.purge.interval.requests\"?:
      int\n\n\t// The purge interval (in number of requests) of the delete records
      request purgatory\n\t\"delete.records.purgatory.purge.interval.requests\"?:
      int\n\n\t// Enables auto leader balancing.\n\t\"auto.leader.rebalance.enable\"?:
      bool\n\n\t// The ratio of leader imbalance allowed per broker.\n\t\"leader.imbalance.per.broker.percentage\"?:
      int\n\n\t// The frequency with which the partition rebalance check is triggered
      by the controller\n\t\"leader.imbalance.check.interval.seconds\"?: int & >=1\n\n\t//
      Indicates whether to enable replicas not in the ISR set to be elected as leader
      as a last resort, even though doing so may result in data loss\n\t\"unclean.leader.election.enable\"?:
      bool\n\n\t// Security protocol used to communicate between brokers.\n\t\"security.inter.broker.protocol\"?:
      string & \"PLAINTEXT\" | \"SSL\" | \"SASL_PLAINTEXT\" | \"SASL_SSL\"\n\n\t//
      The fully qualified class name that implements ReplicaSelector.\n\t\"replica.selector.class\"?:
      string\n\n\t// Controlled shutdown can fail for multiple reasons. This determines
      the number of retries when such failure happens\n\t\"controlled.shutdown.max.retries\"?:
      int\n\n\t// Before each retry, the system needs time to recover from the state
      that caused the previous failure (Controller fail over, replica lag etc)\n\t\"controlled.shutdown.retry.backoff.ms\"?:
      int\n\n\t// Enable controlled shutdown of the server\n\t\"controlled.shutdown.enable\"?:
      bool\n\n\t// The minimum allowed session timeout for registered consumers.\n\t\"group.min.session.timeout.ms\"?:
      int\n\n\t// The maximum allowed session timeout for registered consumers.\n\t\"group.max.session.timeout.ms\"?:
      int\n\n\t// The amount of time the group coordinator will wait for more consumers
      to join a new group before performing the first rebalance.\n\t\"group.initial.rebalance.delay.ms\"?:
      int\n\n\t// The maximum number of consumers that a single consumer group can
      accommodate.\n\t\"group.max.size\"?: int & >=1\n\n\t// The maximum size for
      a metadata entry associated with an offset commit\n\t\"offset.metadata.max.bytes\"?:
      int\n\n\t// Batch size for reading from the offsets segments when loading offsets
      into the cache (soft-limit, overridden if records are too large).\n\t\"offsets.load.buffer.size\"?:
      int & >=1\n\n\t// The number of partitions for the offset commit topic (should
      not change after deployment)\n\t\"offsets.topic.num.partitions\"?: int & >=1\n\n\t//
      The offsets topic segment bytes should be kept relatively small in order to
      facilitate faster log compaction and cache loads\n\t\"offsets.topic.segment.bytes\"?:
      int & >=1\n\n\t// Compression codec for the offsets topic - compression may
      be used to achieve \"atomic\" commits\n\t\"offsets.topic.compression.codec\"?:
      int\n\n\t// For subscribed consumers, committed offset of a specific partition
      will be expired and discarded when 1) this retention\n\t// period has elapsed
      after the consumer group loses all its consumers (i.e. becomes empty); 2) this
      retention period has elapsed\n\t// since the last time an offset is committed
      for the partition and the group is no longer subscribed to the corresponding
      topic.\n\t\"offsets.retention.minutes\"?: int & >=1\n\n\t// Frequency at which
      to check for stale offsets\n\t\"offsets.retention.check.interval.ms\"?: int
      & >=1\n\n\t// Offset commit will be delayed until all replicas for the offsets
      topic receive the commit or this timeout is reached. This is similar to the
      producer request timeout.\n\t\"offsets.commit.timeout.ms\"?: int & >=1\n\n\t//
      The required acks before the commit can be accepted. In general, the default
      (-1) should not be overridden\n\t\"offsets.commit.required.acks\"?: int\n\n\t//
      Specify the final compression type for a given topic.\n\t\"compression.type\"?:
      string & \"uncompressed\" | \"zstd\" | \"lz4\" | \"snappy\" | \"gzip\" | \"producer\"\n\n\t//
      The time in ms that the transaction coordinator will wait without receiving
      any transaction status updates for the current transaction before expiring its
      transactional id.\n\t\"transactional.id.expiration.ms\"?: int & >=1\n\n\t//
      The maximum allowed timeout for transactions.\n\t\"transaction.max.timeout.ms\"?:
      int & >=1\n\n\t// Batch size for reading from the transaction log segments when
      loading producer ids and transactions into the cache (soft-limit, overridden
      if records are too large).\n\t\"transaction.state.log.load.buffer.size\"?: int
      & >=1\n\n\t// The number of partitions for the transaction topic (should not
      change after deployment).\n\t\"transaction.state.log.num.partitions\"?: int
      & >=1\n\n\t// The transaction topic segment bytes should be kept relatively
      small in order to facilitate faster log compaction and cache loads\n\t\"transaction.state.log.segment.bytes\"?:
      int & >=1\n\n\t// The interval at which to rollback transactions that have timed
      out\n\t\"transaction.abort.timed.out.transaction.cleanup.interval.ms\"?: int
      & >=1\n\n\t// The interval at which to remove transactions that have expired
      due to transactional.id.expiration.ms passing\n\t\"transaction.remove.expired.transaction.cleanup.interval.ms\"?:
      int & >=1\n\n\t// The time in ms that a topic partition leader will wait before
      expiring producer IDs.\n\t// \"producer.id.expiration.ms\"?: int & >=1\n\n\t//
      The maximum number of incremental fetch sessions that we will maintain.\n\t\"max.incremental.fetch.session.cache.slots\"?:
      int & >=0\n\n\t// The maximum amount of data the server should return for a
      fetch request.\n\t\"fetch.max.bytes\"?: int & >=0\n\n\t// The number of samples
      maintained to compute metrics.\n\t\"metrics.num.samples\"?: int & >=1\n\n\t//
      The window of time a metrics sample is computed over.\n\t\"metrics.sample.window.ms\"?:
      int & >=0\n\n\t// The highest recording level for metrics.\n\t\"metrics.recording.level\"?:
      string & \"INFO\" | \"DEBUG\" | \"TRACE\"\n\n\t// A list of classes to use as
      metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter
      interface allows plugging in classes that will be notified of new metric creation.
      The JmxReporter is always included to register JMX statistics.\n\t\"metric.reporters\"?:
      string\n\n\t// A list of classes to use as Yammer metrics custom reporters.\n\t\"kafka.metrics.reporters\"?:
      string\n\n\t// The metrics polling interval (in seconds) which can be used in
      kafka.metrics.reporters implementations.\n\t\"kafka.metrics.polling.interval.secs\"?:
      int & >=1\n\n\t// The number of samples to retain in memory for client quotas\n\t\"quota.window.num\"?:
      int & >=1\n\n\t// The number of samples to retain in memory for replication
      quotas\n\t\"replication.quota.window.num\"?: int & >=1\n\n\t// The number of
      samples to retain in memory for alter log dirs replication quotas\n\t\"alter.log.dirs.replication.quota.window.num\"?:
      int & >=1\n\n\t// The number of samples to retain in memory for controller mutation
      quotas\n\t\"controller.quota.window.num\"?: int & >=1\n\n\t// The time span
      of each sample for client quotas\n\t\"quota.window.size.seconds\"?: int & >=1\n\n\t//
      The time span of each sample for replication quotas\n\t\"replication.quota.window.size.seconds\"?:
      int & >=1\n\n\t// The time span of each sample for alter log dirs replication
      quotas\n\t\"alter.log.dirs.replication.quota.window.size.seconds\"?: int & >=1\n\n\t//
      The time span of each sample for controller mutations quotas\n\t\"controller.quota.window.size.seconds\"?:
      int & >=1\n\n\t// The fully qualified name of a class that implements the ClientQuotaCallback
      interface, which is used to determine quota limits applied to client requests.\n\t\"client.quota.callback.class\"?:
      string\n\n\t// When explicitly set to a positive number (the default is 0, not
      a positive number), a session lifetime that will not exceed the configured value
      will be communicated to v2.2.0 or later clients when they authenticate.\n\t\"connections.max.reauth.ms\"?:
      int\n\n\t// The maximum receive size allowed before and during initial SASL
      authentication.\n\t\"sasl.server.max.receive.size\"?: int\n\n\t// A list of
      configurable creator classes each returning a provider implementing security
      algorithms.\n\t\"security.providers\"?: string\n\n\t// The SSL protocol used
      to generate the SSLContext.\n\t\"ssl.protocol\"?: string & \"TLSv1.2\" | \"TLSv1.3\"
      | \"TLS\" | \"TLSv1.1\" | \"SSL\" | \"SSLv2\" | \"SSLv3\"\n\n\t// The name of
      the security provider used for SSL connections. Default value is the default
      security provider of the JVM.\n\t\"ssl.provider\"?: string\n\n\t// The list
      of protocols enabled for SSL connections.\n\t\"ssl.enabled.protocols\"?: string\n\n\t//
      The file format of the key store file. This is optional for client. The values
      currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12,
      PEM].\n\t\"ssl.keystore.type\"?: string\n\n\t// The location of the key store
      file. This is optional for client and can be used for two-way authentication
      for client.\n\t\"ssl.keystore.location\"?: string\n\n\t// The store password
      for the key store file. This is optional for client and only needed if 'ssl.keystore.location'
      is configured. Key store password is not supported for PEM format.\n\t\"ssl.keystore.password\"?:
      string\n\n\t// The password of the private key in the key store file or the
      PEM key specified in 'ssl.keystore.key'.\n\t\"ssl.key.password\"?: string\n\n\t//
      Private key in the format specified by 'ssl.keystore.type'.\n\t\"ssl.keystore.key\"?:
      string\n\n\t// Certificate chain in the format specified by 'ssl.keystore.type'.
      Default SSL engine factory supports only PEM format with a list of X.509 certificates\n\t\"ssl.keystore.certificate.chain\"?:
      string\n\n\t// The file format of the trust store file. The values currently
      supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].\n\t\"ssl.truststore.type\"?:
      string\n\n\t// The location of the trust store file.\n\t\"ssl.truststore.location\"?:
      string\n\n\t// The password for the trust store file. If a password is not set,
      trust store file configured will still be used, but integrity checking is disabled.
      Trust store password is not supported for PEM format.\n\t\"ssl.truststore.password\"?:
      string\n\n\t// Trusted certificates in the format specified by 'ssl.truststore.type'.
      Default SSL engine factory supports only PEM format with X.509 certificates.\n\t\"ssl.truststore.certificates\"?:
      string\n\n\t// The algorithm used by key manager factory for SSL connections.
      Default value is the key manager factory algorithm configured for the Java Virtual
      Machine.\n\t\"ssl.keymanager.algorithm\"?: string\n\n\t// The algorithm used
      by trust manager factory for SSL connections. Default value is the trust manager
      factory algorithm configured for the Java Virtual Machine.\n\t\"ssl.trustmanager.algorithm\"?:
      string\n\n\t// The endpoint identification algorithm to validate server hostname
      using server certificate.\n\t\"ssl.endpoint.identification.algorithm\"?: string\n\n\t//
      The SecureRandom PRNG implementation to use for SSL cryptography operations.\n\t\"ssl.secure.random.implementation\"?:
      string\n\n\t// Configures kafka broker to request client authentication.\n\t\"ssl.client.auth\"?:
      string & \"required\" | \"requested\" | \"none\"\n\n\t// A list of cipher suites.
      This is a named combination of authentication, encryption,\n\t\"ssl.cipher.suites\"?:
      string\n\n\t// A list of rules for mapping from distinguished name from the
      client certificate to short name.\n\t\"ssl.principal.mapping.rules\"?: string\n\n\t//
      The class of type org.apache.kafka.common.security.auth.SslEngineFactory to
      provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory\n\t\"ssl.engine.factory.class\"?:
      string\n\n\t// SASL mechanism used for inter-broker communication. Default is
      GSSAPI.\n\t\"sasl.mechanism.inter.broker.protocol\"?: string\n\n\t// JAAS login
      context parameters for SASL connections in the format used by JAAS configuration
      files.\n\t\"sasl.jaas.config\"?: string\n\n\t// The list of SASL mechanisms
      enabled in the Kafka server. The list may contain any mechanism for which a
      security provider is available. Only GSSAPI is enabled by default.\n\t\"sasl.enabled.mechanisms\"?:
      string\n\n\t// The fully qualified name of a SASL server callback handler class
      that implements the AuthenticateCallbackHandler interface.\n\t\"sasl.server.callback.handler.class\"?:
      string\n\n\t// The fully qualified name of a SASL client callback handler class
      that implements the AuthenticateCallbackHandler interface.\n\t\"sasl.client.callback.handler.class\"?:
      string\n\n\t// The fully qualified name of a class that implements the Login
      interface. For brokers, login config must be prefixed with listener prefix and
      SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin\n\t\"sasl.login.class\"?:
      string\n\n\t// The fully qualified name of a SASL login callback handler class
      that implements the AuthenticateCallbackHandler interface.\n\t\"sasl.login.callback.handler.class\"?:
      string\n\n\t// The Kerberos principal name that Kafka runs as. This can be defined
      either in Kafka's JAAS config or in Kafka's config.\n\t\"sasl.kerberos.service.name\"?:
      string\n\n\t// Kerberos kinit command path.\n\t\"sasl.kerberos.kinit.cmd\"?:
      string\n\n\t// Login thread will sleep until the specified window factor of
      time from last refresh to ticket's expiry has been reached, at which time it
      will try to renew the ticket.\n\t\"sasl.kerberos.ticket.renew.window.factor\"?:
      number\n\n\t// Percentage of random jitter added to the renewal time.\n\t\"sasl.kerberos.ticket.renew.jitter\"?:
      number\n\n\t// Login thread sleep time between refresh attempts.\n\t\"sasl.kerberos.min.time.before.relogin\"?:
      int\n\n\t// A list of rules for mapping from principal names to short names
      (typically operating system usernames).\n\t\"sasl.kerberos.principal.to.local.rules\"?:
      string\n\n\t// Login refresh thread will sleep until the specified window factor
      relative to the credential's lifetime has been reached, at which time it will
      try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%)
      inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently
      applies only to OAUTHBEARER.\n\t\"sasl.login.refresh.window.factor\"?: number
      & >=0.5 & <=1.0\n\n\t// The maximum amount of random jitter relative to the
      credential's lifetime that is added to the login refresh thread's sleep time.\n\t\"sasl.login.refresh.window.jitter\"?:
      number & >=0.0 & <=0.25\n\n\t// The desired minimum time for the login refresh
      thread to wait before refreshing a credential, in seconds.\n\t\"sasl.login.refresh.min.period.seconds\"?:
      int & >=0 & <=900\n\n\t// The amount of buffer time before credential expiration
      to maintain when refreshing a credential, in seconds.\n\t\"sasl.login.refresh.buffer.seconds\"?:
      int & >=0 & <=3600\n\n\t// The (optional) value in milliseconds for the external
      authentication provider connection timeout. Currently applies only to OAUTHBEARER.\n\t\"sasl.login.connect.timeout.ms\"?:
      int\n\n\t// The (optional) value in milliseconds for the external authentication
      provider read timeout. Currently applies only to OAUTHBEARER.\n\t\"sasl.login.read.timeout.ms\"?:
      int\n\n\t// The (optional) value in milliseconds for the maximum wait between
      login attempts to the external authentication provider.\n\t\"sasl.login.retry.backoff.max.ms\"?:
      int\n\n\t// The (optional) value in milliseconds for the initial wait between
      login attempts to the external authentication provider.\n\t\"sasl.login.retry.backoff.ms\"?:
      int\n\n\t// The OAuth claim for the scope is often named \"scope\", but this
      (optional) setting can provide a different name to use for the scope included
      in the JWT payload's claims if the OAuth/OIDC provider uses a different name
      for that claim.\n\t\"sasl.oauthbearer.scope.claim.name\"?: string\n\n\t// The
      OAuth claim for the subject is often named \"sub\", but this (optional) setting
      can provide a different name to use for the subject included in the JWT payload's
      claims if the OAuth/OIDC provider uses a different name for that claim.\n\t\"sasl.oauthbearer.sub.claim.name\"?:
      string\n\n\t// The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based,
      it is the issuer's token endpoint URL to which requests will be made to login
      based on the configuration in sasl.jaas.config.\n\t\"sasl.oauthbearer.token.endpoint.url\"?:
      string\n\n\t// The OAuth/OIDC provider URL from which the provider's JWKS (JSON
      Web Key Set) can be retrieved.\n\t\"sasl.oauthbearer.jwks.endpoint.url\"?: string\n\n\t//
      The (optional) value in milliseconds for the broker to wait between refreshing
      its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature
      of the JWT.\n\t\"sasl.oauthbearer.jwks.endpoint.refresh.ms\"?: int\n\n\t// The
      (optional) value in milliseconds for the initial wait between JWKS (JSON Web
      Key Set) retrieval attempts from the external authentication provider.\n\t\"sasl.oauthbearer.jwks.endpoint.retry.backoff.ms\"?:
      int\n\n\t// The (optional) value in milliseconds for the maximum wait between
      attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication
      provider.\n\t\"sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms\"?: int\n\n\t//
      The (optional) value in seconds to allow for differences between the time of
      the OAuth/OIDC identity provider and the broker.\n\t\"sasl.oauthbearer.clock.skew.seconds\"?:
      int\n\n\t// The (optional) comma-delimited setting for the broker to use to
      verify that the JWT was issued for one of the expected audiences.\n\t\"sasl.oauthbearer.expected.audience\"?:
      string\n\n\t// The (optional) setting for the broker to use to verify that the
      JWT was created by the expected issuer.\n\t\"sasl.oauthbearer.expected.issuer\"?:
      string\n\n\t// Secret key to generate and verify delegation tokens. The same
      key must be configured across all the brokers. If the key is not set or set
      to empty string, brokers will disable the delegation token support.\n\t\"delegation.token.secret.key\"?:
      string\n\n\t// The token has a maximum lifetime beyond which it cannot be renewed
      anymore. Default value 7 days.\n\t\"delegation.token.max.lifetime.ms\"?: int
      & >=1\n\n\t// The token validity time in miliseconds before the token needs
      to be renewed. Default value 1 day.\n\t\"delegation.token.expiry.time.ms\"?:
      int & >=1\n\n\t// Scan interval to remove expired delegation tokens.\n\t\"delegation.token.expiry.check.interval.ms\"?:
      int & >=1\n\n\t// The secret used for encoding dynamically configured passwords
      for this broker.\n\t\"password.encoder.secret\"?: string\n\n\t// The old secret
      that was used for encoding dynamically configured passwords.\n\t\"password.encoder.old.secret\"?:
      string\n\n\t// The SecretKeyFactory algorithm used for encoding dynamically
      configured passwords.\n\t\"password.encoder.keyfactory.algorithm\"?: string\n\n\t//
      The Cipher algorithm used for encoding dynamically configured passwords.\n\t\"password.encoder.cipher.algorithm\"?:
      string\n\n\t// The key length used for encoding dynamically configured passwords.\n\t\"password.encoder.key.length\"?:
      int & >=8\n\n\t// The iteration count used for encoding dynamically configured
      passwords.\n\t\"password.encoder.iterations\"?: int & >=1024\n\n\t// Maximum
      time in milliseconds to wait without being able to fetch from the leader before
      triggering a new election\n\t\"controller.quorum.election.timeout.ms\"?: int\n\n\t//
      Maximum time without a successful fetch from the current leader before becoming
      a candidate and triggering an election for voters; Maximum time without receiving
      fetch from a majority of the quorum before asking around to see if there's a
      new epoch for leader\n\t\"controller.quorum.fetch.timeout.ms\"?: int\n\n\t//
      Maximum time in milliseconds before starting new elections. This is used in
      the binary exponential backoff mechanism that helps prevent gridlocked elections\n\t\"controller.quorum.election.backoff.max.ms\"?:
      int\n\n\t// The duration in milliseconds that the leader will wait for writes
      to accumulate before flushing them to disk.\n\t\"controller.quorum.append.linger.ms\"?:
      int\n\n\t// The configuration controls the maximum amount of time the client
      will wait for the response of a request.\n\t\"controller.quorum.request.timeout.ms\"?:
      int\n\n\t// The amount of time to wait before attempting to retry a failed request
      to a given topic partition.\n\t\"controller.quorum.retry.backoff.ms\"?: int\n\n\t//
      other parameters\n\t...\n}\n\nconfiguration: #KafkaParameter & {\n}"
    schemaInJSON:
      properties:
        spec:
          description: https://kafka.apache.org/documentation/#brokerconfigs
          properties:
            allow.everyone.if.no.acl.found:
              type: boolean
            alter.config.policy.class.name:
              description: The alter configs policy class that should be used for
                validation.
              type: string
            alter.log.dirs.replication.quota.window.num:
              description: The number of samples to retain in memory for alter log
                dirs replication quotas
              minimum: 1
              type: integer
            alter.log.dirs.replication.quota.window.size.seconds:
              description: The time span of each sample for alter log dirs replication
                quotas
              minimum: 1
              type: integer
            authorizer.class.name:
              description: The fully qualified name of a class that implements org.apache.kafka.server.authorizer.Authorizer
                interface, which is used by the broker for authorization.
              type: string
            auto.create.topics.enable:
              description: Enable auto creation of topic on the server
              type: boolean
            auto.leader.rebalance.enable:
              description: Enables auto leader balancing.
              type: boolean
            background.threads:
              description: The number of threads to use for various background processing
                tasks
              minimum: 1
              type: integer
            broker.heartbeat.interval.ms:
              description: The length of time in milliseconds between broker heartbeats.
                Used when running in KRaft mode.
              type: integer
            broker.rack:
              description: Rack of the broker. This will be used in rack aware replication
                assignment for fault tolerance.
              type: string
            broker.session.timeout.ms:
              description: The length of time in milliseconds that a broker lease
                lasts if no heartbeats are made. Used when running in KRaft mode.
              type: integer
            client.quota.callback.class:
              description: The fully qualified name of a class that implements the
                ClientQuotaCallback interface, which is used to determine quota limits
                applied to client requests.
              type: string
            compression.type:
              description: Specify the final compression type for a given topic.
              enum:
              - uncompressed
              - zstd
              - lz4
              - snappy
              - gzip
              - producer
              type: string
            connection.failed.authentication.delay.ms:
              description: |-
                Connection close delay on failed authentication: this is the time (in milliseconds) by which connection close will be delayed on authentication failure.
                This must be configured to be less than connections.max.idle.ms to prevent connection timeout.
              minimum: 0
              type: integer
            connections.max.idle.ms:
              description: Close idle connections after the number of milliseconds
                specified by this config.
              type: integer
            connections.max.reauth.ms:
              description: When explicitly set to a positive number (the default is
                0, not a positive number), a session lifetime that will not exceed
                the configured value will be communicated to v2.2.0 or later clients
                when they authenticate.
              type: integer
            control.plane.listener.name:
              description: Name of listener used for communication between controller
                and brokers.
              type: string
            controlled.shutdown.enable:
              description: Enable controlled shutdown of the server
              type: boolean
            controlled.shutdown.max.retries:
              description: Controlled shutdown can fail for multiple reasons. This
                determines the number of retries when such failure happens
              type: integer
            controlled.shutdown.retry.backoff.ms:
              description: Before each retry, the system needs time to recover from
                the state that caused the previous failure (Controller fail over,
                replica lag etc)
              type: integer
            controller.quorum.append.linger.ms:
              description: The duration in milliseconds that the leader will wait
                for writes to accumulate before flushing them to disk.
              type: integer
            controller.quorum.election.backoff.max.ms:
              description: Maximum time in milliseconds before starting new elections.
                This is used in the binary exponential backoff mechanism that helps
                prevent gridlocked elections
              type: integer
            controller.quorum.election.timeout.ms:
              description: Maximum time in milliseconds to wait without being able
                to fetch from the leader before triggering a new election
              type: integer
            controller.quorum.fetch.timeout.ms:
              description: Maximum time without a successful fetch from the current
                leader before becoming a candidate and triggering an election for
                voters; Maximum time without receiving fetch from a majority of the
                quorum before asking around to see if there's a new epoch for leader
              type: integer
            controller.quorum.request.timeout.ms:
              description: The configuration controls the maximum amount of time the
                client will wait for the response of a request.
              type: integer
            controller.quorum.retry.backoff.ms:
              description: The amount of time to wait before attempting to retry a
                failed request to a given topic partition.
              type: integer
            controller.quota.window.num:
              description: The number of samples to retain in memory for controller
                mutation quotas
              minimum: 1
              type: integer
            controller.quota.window.size.seconds:
              description: The time span of each sample for controller mutations quotas
              minimum: 1
              type: integer
            controller.socket.timeout.ms:
              description: The socket timeout for controller-to-broker channels
              type: integer
            create.topic.policy.class.name:
              description: The create topic policy class that should be used for validation.
              type: string
            default.replication.factor:
              description: The default replication factors for automatically created
                topics
              type: integer
            delegation.token.expiry.check.interval.ms:
              description: Scan interval to remove expired delegation tokens.
              minimum: 1
              type: integer
            delegation.token.expiry.time.ms:
              description: The token validity time in miliseconds before the token
                needs to be renewed. Default value 1 day.
              minimum: 1
              type: integer
            delegation.token.max.lifetime.ms:
              description: The token has a maximum lifetime beyond which it cannot
                be renewed anymore. Default value 7 days.
              minimum: 1
              type: integer
            delegation.token.secret.key:
              description: Secret key to generate and verify delegation tokens. The
                same key must be configured across all the brokers. If the key is
                not set or set to empty string, brokers will disable the delegation
                token support.
              type: string
            delete.records.purgatory.purge.interval.requests:
              description: The purge interval (in number of requests) of the delete
                records request purgatory
              type: integer
            delete.topic.enable:
              description: Enables delete topic. Delete topic through the admin tool
                will have no effect if this config is turned off
              type: boolean
            early.start.listeners:
              description: A comma-separated list of listener names which may be started
                before the authorizer has finished initialization.
              type: string
            fetch.max.bytes:
              description: The maximum amount of data the server should return for
                a fetch request.
              minimum: 0
              type: integer
            fetch.purgatory.purge.interval.requests:
              description: The purge interval (in number of requests) of the fetch
                request purgatory
              type: integer
            group.initial.rebalance.delay.ms:
              description: The amount of time the group coordinator will wait for
                more consumers to join a new group before performing the first rebalance.
              type: integer
            group.max.session.timeout.ms:
              description: The maximum allowed session timeout for registered consumers.
              type: integer
            group.max.size:
              description: The maximum number of consumers that a single consumer
                group can accommodate.
              minimum: 1
              type: integer
            group.min.session.timeout.ms:
              description: The minimum allowed session timeout for registered consumers.
              type: integer
            kafka.metrics.polling.interval.secs:
              description: The metrics polling interval (in seconds) which can be
                used in kafka.metrics.reporters implementations.
              minimum: 1
              type: integer
            kafka.metrics.reporters:
              description: A list of classes to use as Yammer metrics custom reporters.
              type: string
            leader.imbalance.check.interval.seconds:
              description: The frequency with which the partition rebalance check
                is triggered by the controller
              minimum: 1
              type: integer
            leader.imbalance.per.broker.percentage:
              description: The ratio of leader imbalance allowed per broker.
              type: integer
            log.cleaner.backoff.ms:
              description: The amount of time to sleep when there are no logs to clean
              minimum: 0
              type: integer
            log.cleaner.dedupe.buffer.size:
              description: The total memory used for log deduplication across all
                cleaner threads
              type: integer
            log.cleaner.delete.retention.ms:
              description: The amount of time to retain delete tombstone markers for
                log compacted topics.
              minimum: 0
              type: integer
            log.cleaner.enable:
              description: Enable the log cleaner process to run on the server.
              type: boolean
            log.cleaner.io.buffer.load.factor:
              description: Log cleaner dedupe buffer load factor. The percentage full
                the dedupe buffer can become. A higher value will allow more log to
                be cleaned at once but will lead to more hash collisions
              type: number
            log.cleaner.io.buffer.size:
              description: The total memory used for log cleaner I/O buffers across
                all cleaner threads
              minimum: 0
              type: integer
            log.cleaner.io.max.bytes.per.second:
              description: The log cleaner will be throttled so that the sum of its
                read and write i/o will be less than this value on average
              type: number
            log.cleaner.max.compaction.lag.ms:
              description: The maximum time a message will remain ineligible for compaction
                in the log. Only applicable for logs that are being compacted.
              minimum: 1
              type: integer
            log.cleaner.min.cleanable.ratio:
              description: The minimum ratio of dirty log to total log for a log
                to be eligible for cleaning.
              maximum: 1
              minimum: 0
              type: number
            log.cleaner.min.compaction.lag.ms:
              description: The minimum time a message will remain uncompacted in the
                log. Only applicable for logs that are being compacted.
              minimum: 0
              type: integer
            log.cleaner.threads:
              description: The number of background threads to use for log cleaning
              minimum: 0
              type: integer
            log.cleanup.policy:
              description: The default cleanup policy for segments beyond the retention
                window. A comma separated list of valid policies.
              enum:
              - compact
              - delete
              type: string
            log.flush.interval.messages:
              description: The number of messages accumulated on a log partition before
                messages are flushed to disk
              minimum: 1
              type: integer
            log.flush.interval.ms:
              description: The maximum time in ms that a message in any topic is
                kept in memory before being flushed to disk. If not set, the value
                in log.flush.scheduler.interval.ms is used
              type: integer
            log.flush.offset.checkpoint.interval.ms:
              description: The frequency with which we update the persistent record
                of the last flush which acts as the log recovery point
              minimum: 0
              type: integer
            log.flush.scheduler.interval.ms:
              description: The frequency in ms that the log flusher checks whether
                any log needs to be flushed to disk
              type: integer
            log.flush.start.offset.checkpoint.interval.ms:
              description: The frequency with which we update the persistent record
                of log start offset
              minimum: 0
              type: integer
            log.index.interval.bytes:
              description: The interval with which we add an entry to the offset index
              minimum: 0
              type: integer
            log.index.size.max.bytes:
              description: The maximum size in bytes of the offset index
              minimum: 4
              type: integer
            log.message.downconversion.enable:
              description: This configuration controls whether down-conversion of
                message formats is enabled to satisfy consume requests.
              type: boolean
            log.message.format.version:
              description: Specify the message format version the broker will use
                to append messages to the logs.
              enum:
              - 0.8.0
              - 0.8.1
              - 0.8.2
              - 0.9.0
              - 0.10.0-IV0
              - 0.10.0-IV1
              - 0.10.1-IV0
              - 0.10.1-IV1
              - 0.10.1-IV2
              - 0.10.2-IV0
              - 0.11.0-IV0
              - 0.11.0-IV1
              - 0.11.0-IV2
              - 1.0-IV0
              - 1.1-IV0
              - 2.0-IV0
              - 2.0-IV1
              - 2.1-IV0
              - 2.1-IV1
              - 2.1-IV2
              - 2.2-IV0
              - 2.2-IV1
              - 2.3-IV0
              - 2.3-IV1
              - 2.4-IV0
              - 2.4-IV1
              - 2.5-IV0
              - 2.6-IV0
              - 2.7-IV0
              - 2.7-IV1
              - 2.7-IV2
              - 2.8-IV0
              - 2.8-IV1
              - 3.0-IV0
              - 3.0-IV1
              - 3.1-IV0
              - 3.2-IV0
              - 3.3-IV0
              - 3.3-IV1
              - 3.3-IV2
              - 3.3-IV3
              - 3.4-IV0
              type: string
            log.message.timestamp.difference.max.ms:
              description: The maximum difference allowed between the timestamp when
                a broker receives a message and the timestamp specified in the message.
              minimum: 0
              type: integer
            log.message.timestamp.type:
              description: Define whether the timestamp in the message is message
                create time or log append time.
              enum:
              - CreateTime
              - LogAppendTime
              type: string
            log.preallocate:
              description: Should pre-allocate the file when creating a new segment?
                If you are using Kafka on Windows, you probably need to set it to true.
              type: boolean
            log.retention.bytes:
              description: The maximum size of the log before deleting it
              type: integer
            log.retention.check.interval.ms:
              description: The frequency in milliseconds that the log cleaner checks
                whether any log is eligible for deletion
              minimum: 1
              type: integer
            log.retention.hours:
              description: The number of hours to keep a log file before deleting
                it (in hours), tertiary to log.retention.ms property
              type: integer
            log.retention.minutes:
              description: The number of minutes to keep a log file before deleting
                it (in minutes), secondary to log.retention.ms property. If not set,
                the value in log.retention.hours is used
              type: integer
            log.retention.ms:
              description: The number of milliseconds to keep a log file before deleting
                it. If not set, the value in log.retention.minutes is used. If set
                to -1, no time limit is applied.
              type: integer
            log.roll.hours:
              description: The maximum time before a new log segment is rolled out
                (in hours), secondary to log.roll.ms property
              minimum: 1
              type: integer
            log.roll.jitter.hours:
              description: The maximum jitter to subtract from logRollTimeMillis (in
                hours), secondary to log.roll.jitter.ms property
              minimum: 0
              type: integer
            log.roll.jitter.ms:
              description: The maximum jitter to subtract from logRollTimeMillis (in
                milliseconds). If not set, the value in log.roll.jitter.hours is used
              type: integer
            log.roll.ms:
              description: The maximum time before a new log segment is rolled out
                (in milliseconds). If not set, the value in log.roll.hours is used
              type: integer
            log.segment.bytes:
              description: The maximum size of a single log file
              minimum: 14
              type: integer
            log.segment.delete.delay.ms:
              description: The amount of time to wait before deleting a file from
                the filesystem
              minimum: 0
              type: integer
            max.connection.creation.rate:
              description: The maximum connection creation rate we allow in the broker
                at any time.
              minimum: 0
              type: integer
            max.connections:
              description: The maximum number of connections we allow in the broker
                at any time.
              minimum: 0
              type: integer
            max.connections.per.ip:
              description: The maximum number of connections we allow from each ip
                address.
              minimum: 0
              type: integer
            max.connections.per.ip.overrides:
              description: A comma-separated list of per-ip or hostname overrides
                to the default maximum number of connections. An example value is
                "hostName:100,127.0.0.1:200"
              type: string
            max.incremental.fetch.session.cache.slots:
              description: The maximum number of incremental fetch sessions that we
                will maintain.
              minimum: 0
              type: integer
            message.max.bytes:
              description: The largest record batch size allowed by Kafka
              minimum: 0
              type: integer
            metadata.log.max.record.bytes.between.snapshots:
              description: This is the maximum number of bytes in the log between
                the latest snapshot and the high-watermark needed before generating
                a new snapshot.
              minimum: 1
              type: integer
            metadata.log.segment.bytes:
              description: The maximum size of a single metadata log file.
              minimum: 12
              type: integer
            metadata.log.segment.ms:
              description: The maximum time before a new metadata log file is rolled
                out (in milliseconds).
              type: integer
            metadata.max.idle.interval.ms:
              description: |-
                This configuration controls how often the active controller should write no-op records to the metadata partition.
                If the value is 0, no-op records are not appended to the metadata partition. The default value is 500
              minimum: 0
              type: integer
            metadata.max.retention.bytes:
              description: The maximum combined size of the metadata log and snapshots
                before deleting old snapshots and log files.
              type: integer
            metadata.max.retention.ms:
              description: The number of milliseconds to keep a metadata log file
                or snapshot before deleting it. Since at least one snapshot must exist
                before any logs can be deleted, this is a soft limit.
              type: integer
            metric.reporters:
              description: A list of classes to use as metrics reporters. Implementing
                the org.apache.kafka.common.metrics.MetricsReporter interface allows
                plugging in classes that will be notified of new metric creation.
                The JmxReporter is always included to register JMX statistics.
              type: string
            metrics.num.samples:
              description: The number of samples maintained to compute metrics.
              minimum: 1
              type: integer
            metrics.recording.level:
              description: The highest recording level for metrics.
              enum:
              - INFO
              - DEBUG
              - TRACE
              type: string
            metrics.sample.window.ms:
              description: The window of time a metrics sample is computed over.
              minimum: 0
              type: integer
            min.insync.replicas:
              description: |-
                When a producer sets acks to "all" (or "-1"), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful.
                If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend).
              minimum: 1
              type: integer
            num.io.threads:
              description: The number of threads that the server uses for processing
                requests, which may include disk I/O
              minimum: 1
              type: integer
            num.network.threads:
              description: The number of threads that the server uses for receiving
                requests from the network and sending responses to the network
              minimum: 1
              type: integer
            num.partitions:
              description: The default number of log partitions per topic
              minimum: 1
              type: integer
            num.recovery.threads.per.data.dir:
              description: The number of threads per data directory to be used for
                log recovery at startup and flushing at shutdown
              minimum: 1
              type: integer
            num.replica.alter.log.dirs.threads:
              description: The number of threads that can move replicas between log
                directories, which may include disk I/O
              type: integer
            num.replica.fetchers:
              description: Number of fetcher threads used to replicate records from
                each source broker.
              type: integer
            offset.metadata.max.bytes:
              description: The maximum size for a metadata entry associated with an
                offset commit
              type: integer
            offsets.commit.required.acks:
              description: The required acks before the commit can be accepted. In
                general, the default (-1) should not be overridden
              type: integer
            offsets.commit.timeout.ms:
              description: Offset commit will be delayed until all replicas for the
                offsets topic receive the commit or this timeout is reached. This
                is similar to the producer request timeout.
              minimum: 1
              type: integer
            offsets.load.buffer.size:
              description: Batch size for reading from the offsets segments when loading
                offsets into the cache (soft-limit, overridden if records are too
                large).
              minimum: 1
              type: integer
            offsets.retention.check.interval.ms:
              description: Frequency at which to check for stale offsets
              minimum: 1
              type: integer
            offsets.retention.minutes:
              description: |-
                For subscribed consumers, committed offset of a specific partition will be expired and discarded when 1) this retention
                period has elapsed after the consumer group loses all its consumers (i.e. becomes empty); 2) this retention period has elapsed
                since the last time an offset is committed for the partition and the group is no longer subscribed to the corresponding topic.
              minimum: 1
              type: integer
            offsets.topic.compression.codec:
              description: Compression codec for the offsets topic - compression may
                be used to achieve "atomic" commits
              type: integer
            offsets.topic.num.partitions:
              description: The number of partitions for the offset commit topic (should
                not change after deployment)
              minimum: 1
              type: integer
            offsets.topic.replication.factor:
              description: The replication factor for the offsets topic (set higher
                to ensure availability). Internal topic creation will fail until the
                cluster size meets this replication factor requirement.
              maximum: 32767
              minimum: 1
              type: integer
            offsets.topic.segment.bytes:
              description: The offsets topic segment bytes should be kept relatively
                small in order to facilitate faster log compaction and cache loads
              minimum: 1
              type: integer
            password.encoder.cipher.algorithm:
              description: The Cipher algorithm used for encoding dynamically configured
                passwords.
              type: string
            password.encoder.iterations:
              description: The iteration count used for encoding dynamically configured
                passwords.
              minimum: 1024
              type: integer
            password.encoder.key.length:
              description: The key length used for encoding dynamically configured
                passwords.
              minimum: 8
              type: integer
            password.encoder.keyfactory.algorithm:
              description: The SecretKeyFactory algorithm used for encoding dynamically
                configured passwords.
              type: string
            password.encoder.old.secret:
              description: The old secret that was used for encoding dynamically configured
                passwords.
              type: string
            password.encoder.secret:
              description: The secret used for encoding dynamically configured passwords
                for this broker.
              type: string
            producer.purgatory.purge.interval.requests:
              description: The purge interval (in number of requests) of the producer
                request purgatory
              type: integer
            queued.max.request.bytes:
              description: The number of queued bytes allowed before no more requests
                are read
              type: integer
            queued.max.requests:
              description: The number of queued requests allowed for data-plane, before
                blocking the network threads
              minimum: 1
              type: integer
            quota.window.num:
              description: The number of samples to retain in memory for client quotas
              minimum: 1
              type: integer
            quota.window.size.seconds:
              description: The time span of each sample for client quotas
              minimum: 1
              type: integer
            replica.fetch.backoff.ms:
              description: The amount of time to sleep when fetch partition error
                occurs.
              minimum: 0
              type: integer
            replica.fetch.max.bytes:
              description: The number of bytes of messages to attempt to fetch for
                each partition.
              minimum: 0
              type: integer
            replica.fetch.min.bytes:
              description: Minimum bytes expected for each fetch response. If not
                enough bytes, wait up to replica.fetch.wait.max.ms (broker config).
              type: integer
            replica.fetch.response.max.bytes:
              description: Maximum bytes expected for the entire fetch response.
              minimum: 0
              type: integer
            replica.fetch.wait.max.ms:
              description: The maximum wait time for each fetcher request issued by
                follower replicas.
              type: integer
            replica.high.watermark.checkpoint.interval.ms:
              description: The frequency with which the high watermark is saved out
                to disk
              type: integer
            replica.lag.time.max.ms:
              description: If a follower hasn't sent any fetch requests or hasn't
                consumed up to the leader's log end offset for at least this time,
                the leader will remove the follower from the ISR
              type: integer
            replica.selector.class:
              description: The fully qualified class name that implements ReplicaSelector.
              type: string
            replica.socket.receive.buffer.bytes:
              description: The socket receive buffer for network requests
              type: integer
            replica.socket.timeout.ms:
              description: The socket timeout for network requests. Its value should
                be at least replica.fetch.wait.max.ms
              type: integer
            replication.quota.window.num:
              description: The number of samples to retain in memory for replication
                quotas
              minimum: 1
              type: integer
            replication.quota.window.size.seconds:
              description: The time span of each sample for replication quotas
              minimum: 1
              type: integer
            request.timeout.ms:
              description: The configuration controls the maximum amount of time the
                client will wait for the response of a request
              minimum: 0
              type: integer
            sasl.client.callback.handler.class:
              description: The fully qualified name of a SASL client callback handler
                class that implements the AuthenticateCallbackHandler interface.
              type: string
            sasl.enabled.mechanisms:
              description: The list of SASL mechanisms enabled in the Kafka server.
                The list may contain any mechanism for which a security provider is
                available. Only GSSAPI is enabled by default.
              type: string
            sasl.jaas.config:
              description: JAAS login context parameters for SASL connections in the
                format used by JAAS configuration files.
              type: string
            sasl.kerberos.kinit.cmd:
              description: Kerberos kinit command path.
              type: string
            sasl.kerberos.min.time.before.relogin:
              description: Login thread sleep time between refresh attempts.
              type: integer
            sasl.kerberos.principal.to.local.rules:
              description: A list of rules for mapping from principal names to short
                names (typically operating system usernames).
              type: string
            sasl.kerberos.service.name:
              description: The Kerberos principal name that Kafka runs as. This can
                be defined either in Kafka's JAAS config or in Kafka's config.
              type: string
            sasl.kerberos.ticket.renew.jitter:
              description: Percentage of random jitter added to the renewal time.
              type: number
            sasl.kerberos.ticket.renew.window.factor:
              description: Login thread will sleep until the specified window factor
                of time from last refresh to ticket's expiry has been reached, at
                which time it will try to renew the ticket.
              type: number
            sasl.login.callback.handler.class:
              description: The fully qualified name of a SASL login callback handler
                class that implements the AuthenticateCallbackHandler interface.
              type: string
            sasl.login.class:
              description: The fully qualified name of a class that implements the
                Login interface. For brokers, login config must be prefixed with listener
                prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin
              type: string
            sasl.login.connect.timeout.ms:
              description: The (optional) value in milliseconds for the external authentication
                provider connection timeout. Currently applies only to OAUTHBEARER.
              type: integer
            sasl.login.read.timeout.ms:
              description: The (optional) value in milliseconds for the external authentication
                provider read timeout. Currently applies only to OAUTHBEARER.
              type: integer
            sasl.login.refresh.buffer.seconds:
              description: The amount of buffer time before credential expiration
                to maintain when refreshing a credential, in seconds.
              maximum: 3600
              minimum: 0
              type: integer
            sasl.login.refresh.min.period.seconds:
              description: The desired minimum time for the login refresh thread to
                wait before refreshing a credential, in seconds.
              maximum: 900
              minimum: 0
              type: integer
            sasl.login.refresh.window.factor:
              description: Login refresh thread will sleep until the specified window
                factor relative to the credential's lifetime has been reached, at
                which time it will try to refresh the credential. Legal values are
                between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8
                (80%) is used if no value is specified. Currently applies only to
                OAUTHBEARER.
              maximum: 1
              minimum: 0.5
              type: number
            sasl.login.refresh.window.jitter:
              description: The maximum amount of random jitter relative to the credential's
                lifetime that is added to the login refresh thread's sleep time.
              maximum: 0.25
              minimum: 0
              type: number
            sasl.login.retry.backoff.max.ms:
              description: The (optional) value in milliseconds for the maximum wait
                between login attempts to the external authentication provider.
              type: integer
            sasl.login.retry.backoff.ms:
              description: The (optional) value in milliseconds for the initial wait
                between login attempts to the external authentication provider.
              type: integer
            sasl.mechanism.controller.protocol:
              description: SASL mechanism used for communication with controllers.
                Default is GSSAPI.
              type: string
            sasl.mechanism.inter.broker.protocol:
              description: SASL mechanism used for inter-broker communication. Default
                is GSSAPI.
              type: string
            sasl.oauthbearer.clock.skew.seconds:
              description: The (optional) value in seconds to allow for differences
                between the time of the OAuth/OIDC identity provider and the broker.
              type: integer
            sasl.oauthbearer.expected.audience:
              description: The (optional) comma-delimited setting for the broker to
                use to verify that the JWT was issued for one of the expected audiences.
              type: string
            sasl.oauthbearer.expected.issuer:
              description: The (optional) setting for the broker to use to verify
                that the JWT was created by the expected issuer.
              type: string
            sasl.oauthbearer.jwks.endpoint.refresh.ms:
              description: The (optional) value in milliseconds for the broker to
                wait between refreshing its JWKS (JSON Web Key Set) cache that contains
                the keys to verify the signature of the JWT.
              type: integer
            sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms:
              description: The (optional) value in milliseconds for the maximum wait
                between attempts to retrieve the JWKS (JSON Web Key Set) from the
                external authentication provider.
              type: integer
            sasl.oauthbearer.jwks.endpoint.retry.backoff.ms:
              description: The (optional) value in milliseconds for the initial wait
                between JWKS (JSON Web Key Set) retrieval attempts from the external
                authentication provider.
              type: integer
            sasl.oauthbearer.jwks.endpoint.url:
              description: The OAuth/OIDC provider URL from which the provider's JWKS
                (JSON Web Key Set) can be retrieved.
              type: string
            sasl.oauthbearer.scope.claim.name:
              description: The OAuth claim for the scope is often named "scope", but
                this (optional) setting can provide a different name to use for the
                scope included in the JWT payload's claims if the OAuth/OIDC provider
                uses a different name for that claim.
              type: string
            sasl.oauthbearer.sub.claim.name:
              description: The OAuth claim for the subject is often named "sub", but
                this (optional) setting can provide a different name to use for the
                subject included in the JWT payload's claims if the OAuth/OIDC provider
                uses a different name for that claim.
              type: string
            sasl.oauthbearer.token.endpoint.url:
              description: The URL for the OAuth/OIDC identity provider. If the URL
                is HTTP(S)-based, it is the issuer's token endpoint URL to which requests
                will be made to login based on the configuration in sasl.jaas.config.
              type: string
            sasl.server.callback.handler.class:
              description: The fully qualified name of a SASL server callback handler
                class that implements the AuthenticateCallbackHandler interface.
              type: string
            sasl.server.max.receive.size:
              description: The maximum receive size allowed before and during initial
                SASL authentication.
              type: integer
            security.inter.broker.protocol:
              description: Security protocol used to communicate between brokers.
              enum:
              - PLAINTEXT
              - SSL
              - SASL_PLAINTEXT
              - SASL_SSL
              type: string
            security.providers:
              description: A list of configurable creator classes each returning a
                provider implementing security algorithms.
              type: string
            socket.connection.setup.timeout.max.ms:
              description: The maximum amount of time the client will wait for the
                socket connection to be established.
              type: integer
            socket.connection.setup.timeout.ms:
              description: The amount of time the client will wait for the socket
                connection to be established. If the connection is not built before
                the timeout elapses, clients will close the socket channel.
              type: integer
            socket.listen.backlog.size:
              description: |-
                The maximum number of pending connections on the socket.
                In Linux, you may also need to configure `somaxconn` and `tcp_max_syn_backlog` kernel parameters accordingly to make the configuration take effect.
              minimum: 1
              type: integer
            socket.receive.buffer.bytes:
              description: The SO_RCVBUF buffer of the socket server sockets. If the
                value is -1, the OS default will be used.
              type: integer
            socket.request.max.bytes:
              description: The maximum number of bytes in a socket request
              minimum: 1
              type: integer
            socket.send.buffer.bytes:
              description: The SO_SNDBUF buffer of the socket server sockets. If the
                value is -1, the OS default will be used.
              type: integer
            ssl.cipher.suites:
              description: A list of cipher suites. This is a named combination of
                authentication, encryption, MAC and key exchange algorithm used to
                negotiate the security settings for a network connection.
            ssl.client.auth:
              description: Configures the Kafka broker to request client authentication.
              enum:
              - required
              - requested
              - none
              type: string
            ssl.enabled.protocols:
              description: The list of protocols enabled for SSL connections.
              type: string
            ssl.endpoint.identification.algorithm:
              description: The endpoint identification algorithm to validate server
                hostname using server certificate.
              type: string
            ssl.engine.factory.class:
              description: The class of type org.apache.kafka.common.security.auth.SslEngineFactory
                to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory
              type: string
            ssl.key.password:
              description: The password of the private key in the key store file or
                the PEM key specified in 'ssl.keystore.key'.
              type: string
            ssl.keymanager.algorithm:
              description: The algorithm used by key manager factory for SSL connections.
                Default value is the key manager factory algorithm configured for
                the Java Virtual Machine.
              type: string
            ssl.keystore.certificate.chain:
              description: Certificate chain in the format specified by 'ssl.keystore.type'.
                Default SSL engine factory supports only PEM format with a list of
                X.509 certificates
              type: string
            ssl.keystore.key:
              description: Private key in the format specified by 'ssl.keystore.type'.
              type: string
            ssl.keystore.location:
              description: The location of the key store file. This is optional for
                client and can be used for two-way authentication for client.
              type: string
            ssl.keystore.password:
              description: The store password for the key store file. This is optional
                for client and only needed if 'ssl.keystore.location' is configured.
                Key store password is not supported for PEM format.
              type: string
            ssl.keystore.type:
              description: The file format of the key store file. This is optional
                for client. The values currently supported by the default `ssl.engine.factory.class`
                are [JKS, PKCS12, PEM].
              type: string
            ssl.principal.mapping.rules:
              description: A list of rules for mapping from distinguished name from
                the client certificate to short name.
              type: string
            ssl.protocol:
              description: The SSL protocol used to generate the SSLContext.
              enum:
              - TLSv1.2
              - TLSv1.3
              - TLS
              - TLSv1.1
              - SSL
              - SSLv2
              - SSLv3
              type: string
            ssl.provider:
              description: The name of the security provider used for SSL connections.
                Default value is the default security provider of the JVM.
              type: string
            ssl.secure.random.implementation:
              description: The SecureRandom PRNG implementation to use for SSL cryptography
                operations.
              type: string
            ssl.trustmanager.algorithm:
              description: The algorithm used by trust manager factory for SSL connections.
                Default value is the trust manager factory algorithm configured for
                the Java Virtual Machine.
              type: string
            ssl.truststore.certificates:
              description: Trusted certificates in the format specified by 'ssl.truststore.type'.
                Default SSL engine factory supports only PEM format with X.509 certificates.
              type: string
            ssl.truststore.location:
              description: The location of the trust store file.
              type: string
            ssl.truststore.password:
              description: The password for the trust store file. If a password is
                not set, trust store file configured will still be used, but integrity
                checking is disabled. Trust store password is not supported for PEM
                format.
              type: string
            ssl.truststore.type:
              description: The file format of the trust store file. The values currently
                supported by the default `ssl.engine.factory.class` are [JKS, PKCS12,
                PEM].
              type: string
            transaction.abort.timed.out.transaction.cleanup.interval.ms:
              description: The interval at which to rollback transactions that have
                timed out
              minimum: 1
              type: integer
            transaction.max.timeout.ms:
              description: The maximum allowed timeout for transactions.
              minimum: 1
              type: integer
            transaction.remove.expired.transaction.cleanup.interval.ms:
              description: The interval at which to remove transactions that have
                expired due to transactional.id.expiration.ms passing
              minimum: 1
              type: integer
            transaction.state.log.load.buffer.size:
              description: Batch size for reading from the transaction log segments
                when loading producer ids and transactions into the cache (soft-limit,
                overridden if records are too large).
              minimum: 1
              type: integer
            transaction.state.log.min.isr:
              description: Overridden min.insync.replicas config for the transaction
                topic.
              minimum: 1
              type: integer
            transaction.state.log.num.partitions:
              description: The number of partitions for the transaction topic (should
                not change after deployment).
              minimum: 1
              type: integer
            transaction.state.log.replication.factor:
              description: The replication factor for the transaction topic (set higher
                to ensure availability). Internal topic creation will fail until the
                cluster size meets this replication factor requirement.
              maximum: 32767
              minimum: 1
              type: integer
            transaction.state.log.segment.bytes:
              description: The transaction topic segment bytes should be kept relatively
                small in order to facilitate faster log compaction and cache loads
              minimum: 1
              type: integer
            transactional.id.expiration.ms:
              description: The time in ms that the transaction coordinator will wait
                without receiving any transaction status updates for the current transaction
                before expiring its transactional id.
              minimum: 1
              type: integer
            unclean.leader.election.enable:
              description: Indicates whether to enable replicas not in the ISR set
                to be elected as leader as a last resort, even though doing so may
                result in data loss
              type: boolean
          type: object
      type: object
    topLevelKey: KafkaParameter
status:
  observedGeneration: 2
  phase: Available
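
For context, the upgrade hook rejects CRs like this one because spec.fileFormatConfig is missing (see the panic in the logs below). A minimal sketch of what the new schema appears to expect — an assumption: that the v1beta1 fileFormatConfig field mirrors the old v1alpha1 formatterConfig; the authoritative shape is the upgraded CRD itself:

apiVersion: apps.kubeblocks.io/v1beta1
kind: ConfigConstraint
metadata:
  name: kafka-cc
spec:
  # fileFormatConfig is the field the validation asks for;
  # server.properties is a Java properties file, hence format: properties
  fileFormatConfig:
    format: properties
  # (remaining spec fields as dumped above)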

Expected behavior
The upgrade completes successfully and the existing kafka-cc ConfigConstraint passes validation under the new CRD schema.

@JashBook JashBook added the kind/bug Something isn't working label Jun 6, 2024
@JashBook JashBook added this to the Release 0.9.0 milestone Jun 6, 2024
@JashBook JashBook added the severity/major Great chance user will encounter the same problem label Jun 6, 2024
@shanshanying
Copy link
Contributor

There was some API renaming across the KubeBlocks 0.9 beta versions.
Please test the upgrade from the previous minor version (0.8) to 0.9.0-beta.30 or later instead.
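
A sketch of that verification path, assuming a cluster currently on the 0.8 line (same flags as used elsewhere in this issue):

kbcli kubeblocks upgrade --auto-approve --set upgradeAddons=true --version 0.9.0-beta.30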

@shanshanying shanshanying removed the severity/major Great chance user will encounter the same problem label Jun 10, 2024
@JashBook JashBook assigned sophon-zt and unassigned ldming Jun 11, 2024
@sophon-zt
Copy link
Contributor

There was some API renaming across the KubeBlocks 0.9 beta versions. Please test the upgrade from the previous minor version (0.8) to 0.9.0-beta.30 or later instead.

Yes, upgrades between beta versions are not supported.

@ahjing99
Copy link
Collaborator

The upgrade from 0.9.0-beta.37 to 0.9.0-beta.38 failed for the same reason:

➜  ~ kbcli kubeblocks upgrade --version 0.9.0-beta.38
Current KubeBlocks version 0.9.0-beta.37.
Kubernetes version 1.29.4
Kubernetes provider GKE
kbcli version 0.9.0-beta.27
Upgrade KubeBlocks from 0.9.0-beta.37 to 0.9.0-beta.38
Please type 'Yes/yes' to confirm your operation: yes
Add and update repo kubeblocks                     OK
Keep addons                                        OK
Stop KubeBlocks 0.9.0-beta.37                      OK
Stop DataProtection                                OK
Conversion old version[0.9.0-beta.37] CRs to new version[0.9.0-beta.38] OK
Upgrade CRDs                                       OK
update new version CRs                             OK
Upgrading KubeBlocks to 0.9.0-beta.38              FAIL
error: pre-upgrade hooks failed: 1 error occurred:
	* timed out waiting for the condition

➜  ~ k get pod -n kb-system
NAME                                            READY   STATUS             RESTARTS      AGE
kb-addon-minio-866cfbc8b5-jzswc                 1/1     Running            0             14h
kb-addon-snapshot-controller-66b659ccf4-fwp2z   1/1     Running            0             14h
kubeblocks-upgrade-hook-job-7jjf4               0/1     CrashLoopBackOff   5 (87s ago)   5m15s
➜  ~ k logs kubeblocks-upgrade-hook-job-7jjf4  -n kb-system
addon[alertmanager-webhook-adaptor] is not installed and pass
addon[apecloud-otel-collector] is not installed and pass
addon[aws-load-balancer-controller] is not installed and pass
addon[csi-driver-nfs] is not installed and pass
addon[csi-hostpath-driver] is not installed and pass
addon[csi-s3] is not installed and pass
addon[external-dns] is not installed and pass
addon[fault-chaos-mesh] is not installed and pass
addon[grafana] is not installed and pass
addon[kubebench] is not installed and pass
addon[kubeblocks-csi-driver] is not installed and pass
addon[llm] is not installed and pass
addon[loki] is not installed and pass
addon[migration] is not installed and pass
addon[nvidia-gpu-exporter] is not installed and pass
addon[nyancat] is not installed and pass
addon[prometheus] is not installed and pass
addon[pyroscope-server] is not installed and pass
addon[qdrant] is not installed and pass
addon[victoria-metrics-agent] is not installed and pass
reading CRDs from path: /kubeblocks/crd
read CRDs from file: apps.kubeblocks.io_backuppolicytemplates.yaml
read CRDs from file: apps.kubeblocks.io_clusterdefinitions.yaml
read CRDs from file: apps.kubeblocks.io_clusters.yaml
read CRDs from file: apps.kubeblocks.io_clusterversions.yaml
read CRDs from file: apps.kubeblocks.io_componentclassdefinitions.yaml
read CRDs from file: apps.kubeblocks.io_componentdefinitions.yaml
read CRDs from file: apps.kubeblocks.io_componentresourceconstraints.yaml
read CRDs from file: apps.kubeblocks.io_components.yaml
read CRDs from file: apps.kubeblocks.io_componentversions.yaml
read CRDs from file: apps.kubeblocks.io_configconstraints.yaml
read CRDs from file: apps.kubeblocks.io_configurations.yaml
read CRDs from file: apps.kubeblocks.io_opsdefinitions.yaml
read CRDs from file: apps.kubeblocks.io_opsrequests.yaml
read CRDs from file: apps.kubeblocks.io_servicedescriptors.yaml
read CRDs from file: dataprotection.kubeblocks.io_actionsets.yaml
read CRDs from file: dataprotection.kubeblocks.io_backuppolicies.yaml
read CRDs from file: dataprotection.kubeblocks.io_backuprepos.yaml
read CRDs from file: dataprotection.kubeblocks.io_backups.yaml
read CRDs from file: dataprotection.kubeblocks.io_backupschedules.yaml
read CRDs from file: dataprotection.kubeblocks.io_restores.yaml
read CRDs from file: dataprotection.kubeblocks.io_storageproviders.yaml
read CRDs from file: experimental.kubeblocks.io_nodecountscalers.yaml
read CRDs from file: extensions.kubeblocks.io_addons.yaml
read CRDs from file: storage.kubeblocks.io_storageproviders.yaml
read CRDs from file: workloads.kubeblocks.io_instancesets.yaml
create/update CRD: backuppolicytemplates.apps.kubeblocks.io
create/update CRD: clusterdefinitions.apps.kubeblocks.io
create/update CRD: clusters.apps.kubeblocks.io
create/update CRD: clusterversions.apps.kubeblocks.io
create/update CRD: componentclassdefinitions.apps.kubeblocks.io
create/update CRD: componentdefinitions.apps.kubeblocks.io
create/update CRD: componentresourceconstraints.apps.kubeblocks.io
create/update CRD: components.apps.kubeblocks.io
create/update CRD: componentversions.apps.kubeblocks.io
create/update CRD: configconstraints.apps.kubeblocks.io
create/update CRD: configurations.apps.kubeblocks.io
create/update CRD: opsdefinitions.apps.kubeblocks.io
create/update CRD: opsrequests.apps.kubeblocks.io
create/update CRD: servicedescriptors.apps.kubeblocks.io
create/update CRD: actionsets.dataprotection.kubeblocks.io
create/update CRD: backuppolicies.dataprotection.kubeblocks.io
create/update CRD: backuprepos.dataprotection.kubeblocks.io
create/update CRD: backups.dataprotection.kubeblocks.io
create/update CRD: backupschedules.dataprotection.kubeblocks.io
create/update CRD: restores.dataprotection.kubeblocks.io
create/update CRD: storageproviders.dataprotection.kubeblocks.io
create/update CRD: nodecountscalers.experimental.kubeblocks.io
create/update CRD: addons.extensions.kubeblocks.io
create/update CRD: storageproviders.storage.kubeblocks.io
create/update CRD: instancesets.workloads.kubeblocks.io
update GVR resource: apps.kubeblocks.io/v1beta1, Resource=configconstraints
update resource: elasticsearch-config-constraint
panic: ConfigConstraint.apps.kubeblocks.io "elasticsearch-config-constraint" is invalid: spec.fileFormatConfig: Required value

goroutine 1 [running]:
github.com/apecloud/kubeblocks/cmd/helmhook/hook.CheckErr(...)
	/src/cmd/helmhook/hook/utils.go:38
main.main()
	/src/cmd/helmhook/main.go:73 +0x66b
➜  ~
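
One way to enumerate the ConfigConstraints that would trip this validation before retrying the upgrade (a sketch; assumes jq is installed and that the API server already serves the v1beta1 schema):

kubectl get configconstraints.apps.kubeblocks.io -o json \
  | jq -r '.items[] | select(.spec.fileFormatConfig == null) | .metadata.name'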

@ahjing99 ahjing99 reopened this Jun 26, 2024
@ahjing99
Copy link
Collaborator

Reopening the issue: 0.9.0-beta.37 was a fresh install and Elasticsearch ran successfully on beta.37, so please check whether there is a way to fix this.
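
One possible stopgap to unblock the hook, sketched only — the format value is an assumption based on elasticsearch.yml being YAML, and the proper fix is the linked pull request:

kubectl patch configconstraints.apps.kubeblocks.io elasticsearch-config-constraint \
  --type merge -p '{"spec":{"fileFormatConfig":{"format":"yaml"}}}'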

@ahjing99 ahjing99 added the severity/major Great chance user will encounter the same problem label Jun 26, 2024
@sophon-zt sophon-zt linked a pull request Jun 26, 2024 that will close this issue