This module is the source code accompanying the blog post Getting Started with Scala and Apache Kafka. It shows how to use the basic Kafka clients in a Scala application. Originally inspired by the first Scala example, it goes further by demonstrating multiple ways to produce, consume, and configure the clients.
git clone https://github.com/DivLoic/kafka-application4s.git
cd kafka-application4s
sbt compile
You first need to run Kafka and the Schema Registry. Any recent installation of Apache Kafka or the Confluent Platform will work. Various installation methods are listed on the CP Download Page.
e.g. with the Confluent CLI on macOS:
curl -sL https://cnfl.io/cli | sh -s -- latest -b /usr/local/bin
export CONFLUENT_HOME=...
export PATH=$PATH:$CONFLUENT_HOME/bin
confluent local services schema-registry start
The module also works with a cluster hosted on Confluent Cloud. The consumer.conf and producer.conf files contain commented-out settings related to the cloud. You will then need either to edit these files or to define the following environment variables:
export BOOTSTRAP_SERVERS="...:9092"
export CLUSTER_API_KEY="..."
export CLUSTER_API_SECRET="..."
export SCHEMA_REGISTRY_URL="https://..."
export SR_API_KEY="..."
export SR_API_SECRET="..."
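The commented cloud section of the config files follows the standard Confluent Cloud client settings. As a rough sketch (the exact key names and layout in consumer.conf and producer.conf may differ; check the files themselves), the environment variables above map onto properties like these:

```
# Sketch of the Confluent Cloud settings (illustrative; see the
# commented-out entries in consumer.conf / producer.conf for the
# actual keys used by the project).
bootstrap.servers = ${?BOOTSTRAP_SERVERS}
security.protocol = SASL_SSL
sasl.mechanism = PLAIN
# CLUSTER_API_KEY / CLUSTER_API_SECRET are used as the SASL credentials.
sasl.jaas.config = "org.apache.kafka.common.security.plain.PlainLoginModule required username='...' password='...';"

schema.registry.url = ${?SCHEMA_REGISTRY_URL}
basic.auth.credentials.source = USER_INFO
# SR_API_KEY / SR_API_SECRET form the "key:secret" user info pair.
basic.auth.user.info = "...:..."
```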
For more on logging in to Confluent Cloud, see the documentation.
Run the producer:
sbt produce "-Djline.terminal=none" --error
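To give an idea of what producing looks like, here is a minimal, standalone sketch using the plain Kafka client API. It is not the project's actual producer (which is described in the blog post and uses its own topics and serialization); the topic name `demo-topic` and the String serializers are assumptions for illustration, and the `kafka-clients` dependency is required.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

object SimpleProducer {

  // Build the client configuration for a given bootstrap address.
  def producerProps(bootstrap: String): Properties = {
    val props = new Properties()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap)
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
    props
  }

  def main(args: Array[String]): Unit = {
    // Falls back to a local broker when BOOTSTRAP_SERVERS is not set.
    val bootstrap = sys.env.getOrElse("BOOTSTRAP_SERVERS", "localhost:9092")
    val producer  = new KafkaProducer[String, String](producerProps(bootstrap))

    // "demo-topic" is a placeholder; the project defines its own topics.
    producer.send(new ProducerRecord("demo-topic", "key-1", "hello"))
    producer.flush()
    producer.close()
  }
}
```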
Run the consumer:
sbt consume "-Djline.terminal=none" --error
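Likewise, a minimal consumer sketch with the plain client API looks like the following. Again, this is not the project's consumer: the topic, group id, and String deserializers are placeholders for illustration, and the `kafka-clients` dependency (with Scala 2.13's `scala.jdk.CollectionConverters`) is assumed.

```scala
import java.time.Duration
import java.util.Properties
import scala.jdk.CollectionConverters._
import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
import org.apache.kafka.common.serialization.StringDeserializer

object SimpleConsumer {

  // Build the client configuration for a given bootstrap address and group.
  def consumerProps(bootstrap: String, groupId: String): Properties = {
    val props = new Properties()
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap)
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId)
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
    props
  }

  def main(args: Array[String]): Unit = {
    val bootstrap = sys.env.getOrElse("BOOTSTRAP_SERVERS", "localhost:9092")
    val consumer  = new KafkaConsumer[String, String](consumerProps(bootstrap, "demo-group"))

    consumer.subscribe(List("demo-topic").asJava) // placeholder topic name

    // Poll forever, printing each record's key and value.
    while (true) {
      val records = consumer.poll(Duration.ofMillis(500))
      records.asScala.foreach(r => println(s"${r.key} -> ${r.value}"))
    }
  }
}
```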
- The code is detailed in the blog post
- For a step-by-step approach, including tests, check out this Kafka Tutorial