format project add precommit config (inspired by onuralpszr's PR #111)
mabulgu committed Nov 2, 2023
1 parent e30214d commit d02efa5
Showing 61 changed files with 4,536 additions and 1,638 deletions.
5 changes: 0 additions & 5 deletions .flake8

This file was deleted.

2 changes: 1 addition & 1 deletion .github/workflows/build.yml
Original file line number Diff line number Diff line change
Expand Up @@ -43,4 +43,4 @@ jobs:
kfk
make test
- name: Build
run: make build
2 changes: 0 additions & 2 deletions .gitignore
Original file line number Diff line number Diff line change
Expand Up @@ -136,5 +136,3 @@ dmypy.json

#macOs
.DS_Store


54 changes: 54 additions & 0 deletions .pre-commit-config.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,54 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: end-of-file-fixer
- id: trailing-whitespace
- id: check-yaml
- id: check-docstring-first
- id: check-executables-have-shebangs
- id: check-toml
- id: check-case-conflict
- id: check-added-large-files
args: [ '--maxkb=2048' ]
exclude: ^logo/
- id: detect-private-key
- id: forbid-new-submodules
- id: pretty-format-json
args: [ '--autofix', '--no-sort-keys', '--indent=4' ]
- id: end-of-file-fixer
- id: mixed-line-ending
- id: debug-statements


- repo: https://github.com/PyCQA/isort
rev: 5.12.0
hooks:
- id: isort

- repo: https://github.com/PyCQA/docformatter
rev: v1.7.5
hooks:
- id: docformatter

- repo: https://github.com/PyCQA/flake8
rev: 6.1.0
hooks:
- id: flake8
entry: flake8
additional_dependencies: [ Flake8-pyproject ]


- repo: https://github.com/PyCQA/bandit
rev: '1.7.5'
hooks:
- id: bandit
args: [ "-c", "pyproject.toml" ]
additional_dependencies: [ "bandit[toml]" ]


- repo: https://github.com/psf/black
rev: 23.9.1
hooks:
- id: black
language_version: python3
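Once a config like the one above is in place, contributors would typically enable and exercise it with the standard pre-commit workflow (a sketch; assumes `pre-commit` is installed via pip or your package manager):

```shell
# Install the tool itself (one of several options)
pip install pre-commit

# Register the hooks into .git/hooks so they run on every commit
pre-commit install

# Run all configured hooks against the whole repository once
pre-commit run --all-files
```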
1 change: 0 additions & 1 deletion CODE_OF_CONDUCT.md
Original file line number Diff line number Diff line change
Expand Up @@ -131,4 +131,3 @@ For answers to common questions about this code of conduct, see the FAQ at
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations

2 changes: 1 addition & 1 deletion examples/.gitignore
Original file line number Diff line number Diff line change
@@ -1,3 +1,3 @@
*.jks
*.p12
*.bck
2 changes: 1 addition & 1 deletion examples/2_tls_authentication/client.properties
Original file line number Diff line number Diff line change
Expand Up @@ -2,4 +2,4 @@ security.protocol=SSL
ssl.truststore.location=./truststore.jks
ssl.truststore.password=123456
ssl.keystore.location=./user.p12
ssl.keystore.password=123456
2 changes: 1 addition & 1 deletion examples/2_tls_authentication/get_keys.sh
Original file line number Diff line number Diff line change
Expand Up @@ -9,4 +9,4 @@ oc extract secret/my-cluster-cluster-ca-cert -n kafka --keys=ca.crt --to=- > ca.
echo "yes" | keytool -import -trustcacerts -file ca.crt -keystore truststore.jks -storepass 123456
RANDFILE=/tmp/.rnd openssl pkcs12 -export -in user.crt -inkey user.key -name my-user -password pass:123456 -out user.p12

rm user.crt user.key ca.crt
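The packaging step above can be rehearsed standalone with a throwaway self-signed key pair in place of the Strimzi-issued `user.crt`/`user.key` (a sketch; the filenames and the `123456` password mirror the example, everything else is illustrative):

```shell
# Generate a throwaway self-signed certificate + key (stand-in for user.crt/user.key)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=my-user" -keyout user.key -out user.crt

# Same packaging step as get_keys.sh: bundle key and cert into a PKCS12 keystore
openssl pkcs12 -export -in user.crt -inkey user.key \
  -name my-user -password pass:123456 -out user.p12

# Sanity check: the bundle opens with the same password
openssl pkcs12 -in user.p12 -password pass:123456 -noout
```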
8 changes: 4 additions & 4 deletions examples/2_tls_authentication/readme.md
Original file line number Diff line number Diff line change
Expand Up @@ -8,7 +8,7 @@ First let's list the clusters and see our cluster list.
```shell
kfk clusters --list
```

---
**IMPORTANT**

Expand Down Expand Up @@ -210,7 +210,7 @@ user.password: 12 bytes
In order to create the truststore and keystore files, just run the get_keys.sh file in the [example directory](https://github.com/systemcraftsman/strimzi-kafka-cli/blob/master/examples/2_tls_authentication/get_keys.sh):
```shell
chmod a+x ./get_keys.sh;./get_keys.sh
```

This will generate two files:
Expand All @@ -220,7 +220,7 @@ This will generate two files:

TLS authentication is performed with a bidirectional TLS handshake. For this, apart from a truststore that has the public key imported, a keystore file that holds both the public and private keys has to be created and defined in the client configuration file.

So let's create our client configuration file.

Our client configuration should have a few definitions like:
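Based on the `client.properties` file shown earlier in this commit, the configuration ends up looking like:

```properties
security.protocol=SSL
ssl.truststore.location=./truststore.jks
ssl.truststore.password=123456
ssl.keystore.location=./user.p12
ssl.keystore.password=123456
```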

Expand Down Expand Up @@ -254,7 +254,7 @@ Be careful to run producer and consumer commands from example's directory. Other
```shell
kfk console-producer --topic my-topic -n kafka -c my-cluster --producer.config client.properties
```
The console producer seems to be working just fine since we can produce messages.

```
>message1
Expand Down
18 changes: 8 additions & 10 deletions examples/3_simple_acl_authorization/readme.md
Original file line number Diff line number Diff line change
Expand Up @@ -3,7 +3,7 @@
In the previous example we implemented TLS authentication on Strimzi Kafka cluster with Strimzi Kafka CLI. In this example, we will be continuing with enabling the ACL authorization, so that we will be able to restrict access to our topics and only allow the users or groups we want to.


Let's first see our cluster list.

```shell
kfk clusters --list
Expand All @@ -16,7 +16,7 @@ kafka my-cluster 3 3
---
**IMPORTANT**

You should have a cluster called `my-cluster` in the `kafka` namespace, created earlier. If you don't have the cluster or haven't yet completed the authentication part, please go back to the previous example and do it first, since authorization requires authentication to be set up beforehand.

Also please copy the `truststore.jks` and `user.p12` files, or recreate them as explained in the previous example, and put them in the example folder, which is ignored in git.

Expand Down Expand Up @@ -47,7 +47,7 @@ my-user tls

As you can see we have the `my-user` user that we created and authenticated in the previous example.

Now let's configure our cluster to enable ACL authorization. We have to alter our cluster for this:

```shell
kfk clusters --alter --cluster my-cluster -n kafka
Expand Down Expand Up @@ -89,7 +89,7 @@ Processed a total of 0 messages

As you might also observe, both the producer and consumer returned `TopicAuthorizationException` by saying `Not authorized to access topics: [my-topic]`. So let's define authorization access to this topic for the user `my-user`.

In order to enable the user's authorization, we have to both define the user's authorization type as `simple` - so that it uses Apache Kafka's `SimpleAclAuthorizer` - and add the ACL definitions for the relevant topic, in this case `my-topic`. To do this, we need to alter the user with the following command options:

```shell
kfk users --alter --user my-user --authorization-type simple --add-acl --resource-type topic --resource-name my-topic -n kafka -c my-cluster
Expand Down Expand Up @@ -122,7 +122,7 @@ So in this case we used the defaults of `type:allow`, `host:*` and `operation:Al
kfk users --alter --user my-user --authorization-type simple --add-acl --resource-type topic --resource-name my-topic --type allow --host * --operation All -n kafka -c my-cluster
```
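Under the hood this kind of alter would roughly correspond to a `KafkaUser` custom resource like the following (a sketch based on the Strimzi `KafkaUser` CRD; the exact `apiVersion` depends on the Strimzi version in use):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      # Allow all operations on my-topic from any host
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        type: allow
        host: "*"
        operation: All
```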

In order to see the ACL that is defined for allowing all operations on `my-topic` for the user `my-user`, let's describe it, in this case in YAML format:

```shell
kfk users --describe --user my-user -n kafka -c my-cluster -o yaml
Expand Down Expand Up @@ -177,7 +177,7 @@ org.apache.kafka.common.errors.GroupAuthorizationException: Not authorized to ac
Processed a total of 0 messages
```

Whoops! It did not work like the producer did. But why? Because the consumer group that was randomly generated for us (because we did not define it anywhere) doesn't have at least `read` permission on the `my-topic` topic.

---
**IMPORTANT**
Expand All @@ -188,7 +188,7 @@ In Apache Kafka, if you want to consume messages you have to do it via a consume

Ok then. Now let's add an ACL for a group in order to give `read` permission on the `my-topic` topic. Let's call this group `my-group`, which we will also use as the group id in our consumer client configuration. This time let's use the `kfk acls` command, which works like the `kfk users --alter --add-acl` command. In order to give the most familiar experience to Strimzi CLI users, the `kfk acls` command works mostly the same as the traditional `bin/kafka-acls.sh` command.

With the following command, we give the `my-group` group the `read` right for consuming the messages.

```shell
kfk acls --add --allow-principal User:my-user --group my-group --operation Read -n kafka -c my-cluster
Expand Down Expand Up @@ -260,7 +260,7 @@ ssl.keystore.password=123456
group.id=my-group
```

Running the consumer again with the updated client configuration -this time consuming from the beginning- let's see the previously produced logs:

```shell
kfk console-consumer --topic my-topic -n kafka -c my-cluster --consumer.config client.properties --from-beginning
Expand All @@ -275,5 +275,3 @@ message3
Voilà!

We are able to configure the Strimzi cluster for ACL authorization, define ACLs easily with different methods and use the client configurations successfully with Strimzi Kafka CLI.


52 changes: 26 additions & 26 deletions examples/4_configuration/readme.md
Original file line number Diff line number Diff line change
Expand Up @@ -5,12 +5,12 @@ Strimzi Kafka CLI enables users to describe, create, delete configurations of to
While `kfk configs` command can be used to change the configuration of these three entities, one can change relevant entities' configuration by using the following as well:

* `kfk topics --config/--delete-config` for adding and deleting configurations to topics.

* `kfk users --quota/--delete-quota` for managing quotas as part of the user configuration.

* `kfk clusters --config/--delete-config` for adding and deleting configurations to all brokers.

In this example we will show you how to do the configuration by using `kfk configs` only, but will also mention the options above.
So let's start with `topic` configuration.

## Topic Configuration
Expand Down Expand Up @@ -55,7 +55,7 @@ Dynamic configs for topic my-topic are:
---
**INFO**

Additionally you can describe all of the topic configurations natively on the current cluster.
To do this, just remove the `entity-name` option:

```shell
Expand All @@ -78,7 +78,7 @@ Spec:
...
```

Now let's add a configuration like `min.insync.replicas`, which sets the minimum number of replicas that must be in sync between the leader and followers.
In order to add a configuration you must use `--alter` and, for each config to be added, `--add-config`, following the `kfk configs` command:


Expand All @@ -98,7 +98,7 @@ Alternatively you can set the topic configuration by using `kfk topics` with `--
kfk topics --alter --topic my-topic --config min.insync.replicas=3 -c my-cluster -n kafka
```
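In Strimzi terms, that alter would roughly land in the `KafkaTopic` custom resource like this (a sketch; partition/replica counts are illustrative and the exact `apiVersion` depends on the Strimzi version):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 12   # illustrative
  replicas: 3      # illustrative
  config:
    min.insync.replicas: 3
```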

In order to add two configs - let's say we wanted to add `cleanup.policy=compact` along with `min.insync.replicas` - run a command like the following:

```shell
kfk configs --alter --add-config 'min.insync.replicas=3,cleanup.policy=compact' --entity-type topics --entity-name my-topic -c my-cluster -n kafka
Expand Down Expand Up @@ -142,7 +142,7 @@ kfk configs --describe --entity-type topics --entity-name my-topic -c my-cluster
...
```

Like adding a configuration, deleting a configuration is very easy. You can remove all the configurations
that you've just set with a single command:

```shell
Expand All @@ -167,7 +167,7 @@ Dynamic configs for topic my-topic are:
retention.ms=7200000 sensitive=false synonyms={DYNAMIC_TOPIC_CONFIG:retention.ms=7200000}
```

As you can see we could easily manipulate the topic configurations almost like the native shell
executables of Apache Kafka. Now let's see how it is done for user configuration.

## User Configuration
Expand Down Expand Up @@ -196,21 +196,21 @@ kfk users --alter --user my-user --quota request_percentage=55 --quota consumer_
In the traditional `kafka-configs.sh` command there are actually 5 configurations, 3 of which are quota-related:

```
consumer_byte_rate
producer_byte_rate
request_percentage
```

and the other 2 are for the authentication type:

```
SCRAM-SHA-256
SCRAM-SHA-512
```

While these two configurations are also handled by `kafka-configs.sh` in traditional Kafka usage,
in Strimzi CLI they are configured by altering the cluster with the `kfk clusters --alter`
command and altering the user with the `kfk users --alter` command to add the relevant authentication type.
So `kfk configs` command will not be used for these two configurations since it's not supported.
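For reference, the authentication type that `kfk users --alter` manages corresponds to the `KafkaUser` resource roughly like this (a sketch; the exact `apiVersion` depends on the Strimzi version):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  # Switching this between tls / scram-sha-512 is the CLI's job,
  # not something kfk configs touches
  authentication:
    type: scram-sha-512
```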

---
Expand All @@ -228,7 +228,7 @@ Configs for user-principal 'CN=my-user' are consumer_byte_rate=2097152.0, reques
---
**INFO**

Additionally you can describe all of the user configurations natively on the current cluster.
To do this, just remove the `entity-name` option:

```shell
Expand Down Expand Up @@ -270,13 +270,13 @@ You can see that empty response returning since there is no configuration anymor
kfk configs --describe --entity-type users --entity-name my-user -c my-cluster -n kafka --native
```

So we could easily update/create/delete the user configurations for Strimzi, almost like the native shell
executables of Apache Kafka. Now let's take our final step to see how it is done for broker configuration.

## Broker Configuration

Adding configurations, whether dynamic or static, is as easy as it is for topics and users.
For both configuration types, Strimzi takes care of it itself, rolling-updating the brokers for static
configurations and applying dynamic configurations directly.

Here is a way to add a static configuration that will be reflected after the rolling update of the brokers:
Expand All @@ -294,7 +294,7 @@ kfk clusters --alter --cluster my-cluster --config log.retention.hours=168 -n ka
---
**IMPORTANT**

Unlike the native `kafka-configs.sh` command, for the `entity-name` the Kafka cluster name should be set rather than the
broker ids.

---
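In the `Kafka` custom resource, such a static broker config ends up under `spec.kafka.config` and applies cluster-wide (a sketch; surrounding fields omitted and the exact `apiVersion` depends on the Strimzi version):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # Static broker configuration; Strimzi rolls the brokers to apply it
    config:
      log.retention.hours: 168
```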
Expand Down Expand Up @@ -344,8 +344,8 @@ the first broker's configuration which will be totally the same with the cluster

---

Now let's add a dynamic configuration in order to see it while describing with the `native` flag.
We will change the `log.cleaner.threads` configuration, which controls the background threads
that do log compaction and is 1 by default.

```shell
Expand Down Expand Up @@ -432,7 +432,7 @@ transaction.state.log.min.isr=2
transaction.state.log.replication.factor=3
```

So that's all!

We are able to create, update, delete the configurations of topics, users and the Kafka cluster itself, and describe the changed
configurations both Kubernetes-natively and Kafka-natively using Strimzi Kafka CLI.
8 changes: 4 additions & 4 deletions examples/5_connect/connect.properties
Original file line number Diff line number Diff line change
Expand Up @@ -33,15 +33,15 @@ offset.storage.replication.factor=1
status.storage.replication.factor=1

# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins
# (connectors, converters, transformations). The list should consist of top level directories that include
# any combination of:
# a) directories immediately containing jars with plugins and their dependencies
# b) uber-jars with plugins and their dependencies
# c) directories immediately containing the package directory structure of classes of plugins and their dependencies
# Note: symlinks will be followed to discover dependencies or plugins.
# Examples:
# plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors,
#plugin.path=connectors

image=quay.io/systemcraftsman/demo-connect-cluster:latest
plugin.url=https://github.com/jcustenborder/kafka-connect-twitter/releases/download/0.2.26/kafka-connect-twitter-0.2.26.tar.gz