Ns4Kafka introduces namespace functionality to Apache Kafka, as well as a new deployment model for Kafka resources using Kafkactl, which follows best practices from Kubernetes.
Ns4Kafka is an API that provides controllers for listing, creating, and deleting various Kafka resources, including topics, connectors, schemas, and Kafka Connect clusters. The solution is built on several principles.
Ns4Kafka implements the concept of namespaces, which enable encapsulation of Kafka resources within specific namespaces. Each namespace can only view and manage the resources that belong to it, with other namespaces being isolated from each other. This isolation is achieved by assigning ownership of names and prefixes to specific namespaces.
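This ownership model can be illustrated with an access control entry descriptor along the following lines (the namespace and prefix names are placeholders):

```yaml
# Grants the namespace "myNamespace" ownership of every topic
# whose name starts with "myPrefix."
apiVersion: v1
kind: AccessControlEntry
metadata:
  name: myNamespace-acl-topic
  namespace: myNamespace
spec:
  resourceType: TOPIC
  resource: "myPrefix."
  resourcePatternType: PREFIXED
  permission: OWNER
  grantedTo: myNamespace
```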
Whenever you deploy a Kafka resource using Ns4Kafka, the solution saves it to a dedicated topic and synchronizes the Kafka cluster to ensure that the resource's desired state is achieved.
Ns4Kafka allows you to apply customizable validation rules to ensure that your resources are configured with the appropriate values.
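As a sketch, such rules can constrain topic settings per namespace; the constraint names and bounds below are illustrative:

```yaml
# Excerpt of a namespace descriptor: every topic deployed in the
# namespace must satisfy these constraints.
spec:
  topicValidator:
    validationConstraints:
      partitions:
        validation-type: Range
        min: 1
        max: 6
      replication.factor:
        validation-type: Range
        min: 3
        max: 3
```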
Ns4Kafka includes Kafkactl, a command-line interface (CLI) that enables you to deploy your Kafka resources 'as code' within your namespace using YAML descriptors. This tool can also be used in continuous integration/continuous delivery (CI/CD) pipelines.
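For instance, a topic deployed "as code" through Kafkactl might be described as follows (the topic name and settings are illustrative):

```yaml
apiVersion: v1
kind: Topic
metadata:
  name: myPrefix.myTopic
spec:
  replicationFactor: 3
  partitions: 3
  configs:
    cleanup.policy: "delete"
    retention.ms: "60000"
```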
You can download Ns4Kafka as a fat jar from the project's releases page on GitHub at https://github.com/michelin/ns4kafka/releases.
Additionally, a Docker image of the solution is available at https://hub.docker.com/repository/docker/michelin/ns4kafka.
To operate, Ns4Kafka requires a Kafka broker for data storage and GitLab for user authentication.
The solution is built on the Micronaut framework and can be configured with any Micronaut property source loader.
To override the default properties from the `application.yml` file, you can set the `micronaut.config.file` system property when running the fat jar, like so:

```bash
java -Dmicronaut.config.file=application.yml -jar ns4kafka.jar
```
Alternatively, you can set the `MICRONAUT_CONFIG_FILE` environment variable and run the jar without additional parameters, as shown below:

```bash
MICRONAUT_CONFIG_FILE=application.yml java -jar ns4kafka.jar
```
To run and try out the application, you can use the provided `docker-compose` file located in the `.docker` directory:

```bash
docker-compose up -d
```
This command will start multiple containers, including:
- 1 Zookeeper
- 1 Kafka broker
- 1 Schema registry
- 1 Kafka Connect
- 1 Control Center
- Ns4Kafka, with customizable `config.yml` and `logback.xml` files
- Kafkactl, with multiple deployable resources in `/resources`
Please note that SASL/SCRAM authentication and authorization using ACLs are enabled on the broker.
To get started, you'll need to perform the following steps:
- Define a GitLab admin group for Ns4Kafka in the `application.yml` file. You can find an example here. It is recommended to choose a GitLab group you belong to in order to have admin rights.
- Define a GitLab token for Kafkactl in the `config.yml` file. You can refer to the installation instructions here.
- Define a GitLab group you belong to in the role bindings of the `resources/admin/namespace.yml` file. This is demonstrated in the example here, and sketched below.
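For illustration, a role binding granting a GitLab group access to a namespace's resources might look like the following sketch (the names are placeholders):

```yaml
apiVersion: v1
kind: RoleBinding
metadata:
  name: myNamespace-role-binding
  namespace: myNamespace
spec:
  role:
    # Resource types this role gives access to
    resourceTypes:
      - topics
      - acls
    # HTTP verbs allowed on those resources
    verbs:
      - GET
      - POST
      - DELETE
  subject:
    # Members of this GitLab group are bound to the role
    subjectType: GROUP
    subjectName: my-gitlab-group
```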
To set up authentication with GitLab, you can use the following configuration:
```yaml
micronaut:
  security:
    enabled: true
    gitlab:
      enabled: true
      url: https://gitlab.com
    token:
      jwt:
        signatures:
          secret:
            generator:
              secret: "changeit"
```
To configure the admin user, you can use the following:
```yaml
ns4kafka:
  security:
    admin-group: "MY_ADMIN_GROUP"
```
With the admin group set to "MY_ADMIN_GROUP", any user belonging to that GitLab group is granted admin privileges.
You can configure authentication to the Kafka brokers using the following:
```yaml
kafka:
  bootstrap.servers: "localhost:9092"
  # The mechanism must match the login module in sasl.jaas.config
  sasl.mechanism: "SCRAM-SHA-512"
  security.protocol: "SASL_PLAINTEXT"
  sasl.jaas.config: "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"admin\" password=\"admin\";"
```
The configuration will depend on the authentication method selected for your broker.
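For instance, against a SASL_SSL listener using the PLAIN mechanism, as is common with Confluent Cloud, the same block might look like this (the endpoint and credentials are placeholders):

```yaml
kafka:
  bootstrap.servers: "my-cluster.cloud-provider.example:9092"
  sasl.mechanism: "PLAIN"
  security.protocol: "SASL_SSL"
  sasl.jaas.config: "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"apiKey\" password=\"apiSecret\";"
```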
Managed clusters are the clusters where Ns4Kafka namespaces are deployed, and Kafka resources are managed.
You can configure your managed clusters with the following properties:
```yaml
ns4kafka:
  managed-clusters:
    clusterNameOne:
      manage-users: true
      manage-acls: true
      manage-topics: true
      manage-connectors: true
      drop-unsync-acls: true
      provider: "SELF_MANAGED"
      config:
        bootstrap.servers: "localhost:9092"
        sasl.mechanism: "SCRAM-SHA-512"
        security.protocol: "SASL_PLAINTEXT"
        sasl.jaas.config: "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"admin\" password=\"admin\";"
      schema-registry:
        url: "http://localhost:8081"
        basicAuthUsername: "user"
        basicAuthPassword: "password"
      connects:
        connectOne:
          url: "http://localhost:8083"
          basicAuthUsername: "user"
          basicAuthPassword: "password"
```
The name of each managed cluster must be unique. This is the name you have to set in the `metadata.cluster` field of your namespace descriptors.
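For instance, a namespace deployed to the `clusterNameOne` cluster defined above would reference it like this:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: myNamespace
  # Must match a key under ns4kafka.managed-clusters
  cluster: clusterNameOne
```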
| Property | Type | Description |
|---|---|---|
| `manage-users` | boolean | Whether the cluster manages users |
| `manage-acls` | boolean | Whether the cluster manages access control entries |
| `manage-topics` | boolean | Whether the cluster manages topics |
| `manage-connectors` | boolean | Whether the cluster manages connectors |
| `drop-unsync-acls` | boolean | Whether Ns4Kafka should drop unsynchronized ACLs |
| `provider` | string | The kind of cluster: either `SELF_MANAGED` or `CONFLUENT_CLOUD` |
| `config.bootstrap.servers` | string | The location of the cluster's servers |
| `schema-registry.url` | string | The location of the Schema Registry |
| `schema-registry.basicAuthUsername` | string | Basic authentication username for the Schema Registry |
| `schema-registry.basicAuthPassword` | string | Basic authentication password for the Schema Registry |
| `connects.connect-name.url` | string | The location of the Kafka Connect cluster |
| `connects.connect-name.basicAuthUsername` | string | Basic authentication username for Kafka Connect |
| `connects.connect-name.basicAuthPassword` | string | Basic authentication password for Kafka Connect |
The configuration will depend on the authentication method selected for your broker, schema registry and Kafka Connect.
AKHQ can be integrated with Ns4Kafka to provide access to resources within your namespace during the authentication process.
To enable this integration, follow these steps:
- Configure LDAP authentication in AKHQ.
- Add the Ns4Kafka claim endpoint to AKHQ's configuration:
```yaml
akhq:
  security:
    rest:
      enabled: true
      url: https://ns4kafka/akhq-claim/v2
```
For AKHQ versions prior to v0.20, use the `/akhq-claim/v1` endpoint.
- In your Ns4Kafka configuration, specify the following settings for AKHQ:
```yaml
ns4kafka:
  akhq:
    admin-group: LDAP-ADMIN-GROUP
    admin-roles:
      - topic/read
      - topic/data/read
      - group/read
      - registry/read
      - connect/read
      - connect/state/update
      - users/reset-password
    group-label: support-group
    roles:
      - topic/read
      - topic/data/read
      - group/read
      - registry/read
      - connect/read
      - connect/state/update
```
If the admin group is set to "LDAP-ADMIN-GROUP", users belonging to this LDAP group will be granted admin privileges.
- In your namespace configuration, define an LDAP group:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: myNamespace
  cluster: local
  labels:
    contacts: namespace.owner@example.com
    support-group: NAMESPACE-LDAP-GROUP
```
Once the configuration is in place, users belonging to the `NAMESPACE-LDAP-GROUP` will, after successful authentication in AKHQ, be able to access the resources within the `myNamespace` namespace.
The setup of namespaces, owner ACLs, role bindings, and quotas is the responsibility of Ns4Kafka administrators, as these resources define the context in which project teams will work. To create your first namespace, please refer to the Kafkactl documentation.
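As an illustration of one of these admin-managed resources, a quota restricting a namespace might be sketched as follows (assuming Ns4Kafka's `ResourceQuota` kind; the limits are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: myNamespace-quota
  namespace: myNamespace
spec:
  # Maximum number of topics and partitions the namespace may own
  count/topics: 10
  count/partitions: 60
  # Maximum number of deployed connectors
  count/connectors: 5
```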
We welcome contributions from the community! Before you get started, please take a look at our contribution guide to learn about our guidelines and best practices. We appreciate your help in making Ns4Kafka a better tool for everyone.