ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. All of these kinds of services are used in some form or other by distributed applications.
$ docker run --name zookeeper bitnami/zookeeper:latest
version: '2'
services:
  zookeeper:
    image: 'bitnami/zookeeper:latest'
    ports:
      - '2181:2181'
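Then, from the directory containing that docker-compose.yml, launch it with:
$ docker-compose up -d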
- Bitnami closely tracks upstream source changes and promptly publishes new versions of this image using our automated systems.
- With Bitnami images the latest bug fixes and features are available as soon as possible.
- Bitnami containers, virtual machines and cloud images use the same components and configuration approach - making it easy to switch between formats based on your project needs.
- All our images are based on minideb, a minimalist Debian-based container image that gives you a small base container image and the familiarity of a leading Linux distribution.
- All Bitnami images available in Docker Hub are signed with Docker Content Trust (DCT). You can use DOCKER_CONTENT_TRUST=1 to verify the integrity of the images.
- Bitnami container images are released daily with the latest distribution packages available.
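For example, to pull this image with content trust verification enabled:
$ DOCKER_CONTENT_TRUST=1 docker pull bitnami/zookeeper:latest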
This CVE scan report contains a security report with all open CVEs. To get the list of actionable security issues, find the "latest" tag, click the vulnerability report link under the corresponding "Security scan" field and then select the "Only show fixable" filter on the next page.
Deploying Bitnami applications as Helm Charts is the easiest way to get started with our applications on Kubernetes. Read more about the installation in the Bitnami ZooKeeper Chart GitHub repository.
Bitnami containers can be used with Kubeapps for deployment and management of Helm Charts in clusters.
Non-root container images add an extra layer of security and are generally recommended for production environments. However, because they run as a non-root user, privileged tasks are typically off-limits. Learn more about non-root containers in our docs.
Learn more about the Bitnami tagging policy and the difference between rolling tags and immutable tags in our documentation page.
Subscribe to project updates by watching the bitnami/zookeeper GitHub repo.
The recommended way to get the Bitnami ZooKeeper Docker Image is to pull the prebuilt image from the Docker Hub Registry.
$ docker pull bitnami/zookeeper:latest
To use a specific version, you can pull a versioned tag. You can view the list of available versions in the Docker Hub Registry.
$ docker pull bitnami/zookeeper:[TAG]
If you wish, you can also build the image yourself.
$ docker build -t bitnami/zookeeper:latest 'https://github.com/bitnami/bitnami-docker-zookeeper.git#master:3/debian-10'
If you remove the container all your data and configurations will be lost, and the next time you run the image the database will be reinitialized. To avoid this loss of data, you should mount a volume that will persist even after the container is removed.
Note! If you have already started using ZooKeeper, follow the steps on backing up and restoring to pull the data from your running container down to your host.
The image exposes a volume at /bitnami/zookeeper for the ZooKeeper data. For persistence you can mount a directory at this location from your host. If the mounted directory is empty, it will be initialized on the first run.
$ docker run -v /path/to/zookeeper-persistence:/bitnami/zookeeper bitnami/zookeeper:latest
or by modifying the docker-compose.yml file present in this repository:
services:
  zookeeper:
    ...
    volumes:
      - /path/to/zookeeper-persistence:/bitnami/zookeeper
    ...
NOTE: As this is a non-root container, the mounted files and directories must have the proper permissions for the UID 1001.
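For example, assuming the host directory from the snippet above, you can grant ownership to that UID before starting the container:
$ sudo chown -R 1001:1001 /path/to/zookeeper-persistence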
Using Docker container networking, a ZooKeeper server running inside a container can easily be accessed by your application containers.
Containers attached to the same network can communicate with each other using the container name as the hostname.
In this example, we will create a ZooKeeper client instance that will connect to the server instance that is running on the same docker network as the client.
$ docker network create app-tier --driver bridge
Use the --network app-tier argument to the docker run command to attach the ZooKeeper container to the app-tier network.
$ docker run -d --name zookeeper-server \
--network app-tier \
bitnami/zookeeper:latest
Finally we create a new container instance to launch the ZooKeeper client and connect to the server created in the previous step:
$ docker run -it --rm \
--network app-tier \
bitnami/zookeeper:latest zkCli.sh -server zookeeper-server:2181 get /
When not specified, Docker Compose automatically sets up a new network and attaches all deployed services to that network. However, we will explicitly define a new bridge network named app-tier. In this example we assume that you want to connect to the ZooKeeper server from your own custom application image, identified in the following snippet by the service name myapp.
version: '2'
networks:
  app-tier:
    driver: bridge
services:
  zookeeper:
    image: 'bitnami/zookeeper:latest'
    networks:
      - app-tier
  myapp:
    image: 'YOUR_APPLICATION_IMAGE'
    networks:
      - app-tier
IMPORTANT:
- Please update the YOUR_APPLICATION_IMAGE placeholder in the above snippet with your application image.
- In your application container, use the hostname zookeeper to connect to the ZooKeeper server.
Launch the containers using:
$ docker-compose up -d
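As a quick sanity check (not part of the compose file above), you can exercise the zookeeper hostname from inside the network using the CLI bundled in the server container:
$ docker-compose exec zookeeper zkCli.sh -server zookeeper:2181 ls /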
The configuration can easily be set up with the Bitnami ZooKeeper Docker image using the following environment variables:
- ZOO_PORT_NUMBER: ZooKeeper client port. Default: 2181
- ZOO_SERVER_ID: ID of the server in the ensemble. Default: 1
- ZOO_TICK_TIME: Basic time unit in milliseconds used by ZooKeeper for heartbeats. Default: 2000
- ZOO_INIT_LIMIT: Limits the length of time the ZooKeeper servers in quorum have to connect to a leader. Default: 10
- ZOO_SYNC_LIMIT: How far out of date a server can be from a leader. Default: 5
- ZOO_MAX_CNXNS: Limits the total number of concurrent connections that can be made to a ZooKeeper server. Setting it to 0 entirely removes the limit. Default: 0
- ZOO_MAX_CLIENT_CNXNS: Limits the number of concurrent connections that a single client may make to a single member of the ZooKeeper ensemble. Default: 60
- ZOO_4LW_COMMANDS_WHITELIST: List of whitelisted 4LW commands. Default: srvr, mntr
- ZOO_SERVERS: Comma, space or semicolon separated list of servers. Example: zoo1:2888:3888,zoo2:2888:3888 or, if specifying server IDs, zoo1:2888:3888::1,zoo2:2888:3888::2. No defaults.
- ZOO_CLIENT_USER: User that ZooKeeper clients will use to authenticate. No defaults.
- ZOO_CLIENT_PASSWORD: Password that ZooKeeper clients will use to authenticate. No defaults.
- ZOO_CLIENT_PASSWORD_FILE: Absolute path to a file that contains the password that will be used by ZooKeeper clients to perform authentication. No defaults.
- ZOO_SERVER_USERS: Comma, semicolon or whitespace separated list of users to be created. Example: user1,user2,admin. No defaults.
- ZOO_SERVER_PASSWORDS: Comma, semicolon or whitespace separated list of passwords to assign to users when created. Example: pass4user1,pass4user2,pass4admin. No defaults.
- ZOO_SERVER_PASSWORDS_FILE: Absolute path to a file that contains a comma, semicolon or whitespace separated list of passwords to assign to users when created. Example: pass4user1,pass4user2,pass4admin. No defaults.
- ZOO_ENABLE_AUTH: Enable ZooKeeper auth. It uses SASL/Digest-MD5. Default: no
- ZOO_RECONFIG_ENABLED: Enable ZooKeeper Dynamic Reconfiguration. Default: no
- ZOO_LISTEN_ALLIPS_ENABLED: Listen for connections from its peers on all available IP addresses. Default: no
- ZOO_AUTOPURGE_INTERVAL: The time interval in hours for which the autopurge task is triggered. Set to a positive integer (1 and above) to enable auto purging of old snapshots and log files. Default: 0
- ZOO_MAX_SESSION_TIMEOUT: Maximum session timeout in milliseconds that the server will allow the client to negotiate. Default: 40000
- ZOO_AUTOPURGE_RETAIN_COUNT: When auto purging is enabled, ZooKeeper retains this number of the most recent snapshots and the corresponding transaction logs in the dataDir and dataLogDir respectively, and deletes the rest. Minimum value is 3. Default: 3
- ZOO_HEAP_SIZE: Size in MB for the Java heap options (Xmx and Xms). This env var is ignored if Xmx and Xms are configured via JVMFLAGS. Default: 1024
- ZOO_ENABLE_PROMETHEUS_METRICS: Expose Prometheus metrics. Default: no
- ZOO_PROMETHEUS_METRICS_PORT_NUMBER: Port where a Jetty server will expose Prometheus metrics. Default: 7000
- ALLOW_ANONYMOUS_LOGIN: If set to true, allows connections from unauthenticated users. Default: no
- ZOO_LOG_LEVEL: ZooKeeper log level. Available levels are: ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE. Default: INFO
- JVMFLAGS: Default JVMFLAGS for the ZooKeeper process. No defaults.
- ZOO_TLS_CLIENT_ENABLE: Enable TLS for client communication. Default: false
- ZOO_TLS_PORT_NUMBER: ZooKeeper TLS port. Default: 3181
- ZOO_TLS_CLIENT_KEYSTORE_FILE: KeyStore file. No defaults.
- ZOO_TLS_CLIENT_KEYSTORE_PASSWORD: KeyStore file password. This can be an environment variable; it will be evaluated by bash. No defaults.
- ZOO_TLS_CLIENT_TRUSTSTORE_FILE: TrustStore file. No defaults.
- ZOO_TLS_CLIENT_TRUSTSTORE_PASSWORD: TrustStore file password. This can be an environment variable; it will be evaluated by bash. No defaults.
- ZOO_TLS_QUORUM_ENABLE: Enable TLS for quorum communication. Default: false
- ZOO_TLS_QUORUM_KEYSTORE_FILE: KeyStore file. No defaults.
- ZOO_TLS_QUORUM_KEYSTORE_PASSWORD: KeyStore file password. This can be an environment variable; it will be evaluated by bash. No defaults.
- ZOO_TLS_QUORUM_TRUSTSTORE_FILE: TrustStore file. No defaults.
- ZOO_TLS_QUORUM_TRUSTSTORE_PASSWORD: TrustStore file password. This can be an environment variable; it will be evaluated by bash. No defaults.
$ docker run --name zookeeper -e ZOO_SERVER_ID=1 bitnami/zookeeper:latest
or modify the docker-compose.yml file present in this repository:
services:
  zookeeper:
    ...
    environment:
      - ZOO_SERVER_ID=1
    ...
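As a further illustrative sketch (this exact combination is not from the repository), several of these variables can be combined, for example to enable hourly autopurge and expose Prometheus metrics on the default port:
$ docker run --name zookeeper \
-e ZOO_AUTOPURGE_INTERVAL=1 \
-e ZOO_ENABLE_PROMETHEUS_METRICS=yes \
-p 7000:7000 \
bitnami/zookeeper:latest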
The image looks for configuration in the conf/ directory of /opt/bitnami/zookeeper. You can overwrite the default zoo.cfg with your own custom configuration as follows.
Run the ZooKeeper image, mounting a directory from your host:
$ docker run --name zookeeper -v /path/to/zoo.cfg:/opt/bitnami/zookeeper/conf/zoo.cfg bitnami/zookeeper:latest
or using Docker Compose:
version: '2'
services:
  zookeeper:
    image: 'bitnami/zookeeper:latest'
    ports:
      - '2181:2181'
    volumes:
      - /path/to/zoo.cfg:/opt/bitnami/zookeeper/conf/zoo.cfg
Edit the configuration on your host using your favorite editor.
$ vi /path/to/zoo.cfg
After changing the configuration, restart your ZooKeeper container for changes to take effect.
$ docker restart zookeeper
or using Docker Compose:
$ docker-compose restart zookeeper
Authentication based on SASL/Digest-MD5 can be easily enabled by passing the ZOO_ENABLE_AUTH env var. When enabling ZooKeeper authentication, it is also required to pass the list of users and passwords that will be allowed to log in.
Note: Authentication is enabled using the CLI tool zkCli.sh. Therefore, it's necessary to set the ZOO_CLIENT_USER and ZOO_CLIENT_PASSWORD environment variables too.
$ docker run -it -e ZOO_ENABLE_AUTH=yes \
-e ZOO_SERVER_USERS=user1,user2 \
-e ZOO_SERVER_PASSWORDS=pass4user1,pass4user2 \
-e ZOO_CLIENT_USER=user1 \
-e ZOO_CLIENT_PASSWORD=pass4user1 \
bitnami/zookeeper
or modify the docker-compose.yml file present in this repository:
services:
  zookeeper:
    ...
    environment:
      - ZOO_ENABLE_AUTH=yes
      - ZOO_SERVER_USERS=user1,user2
      - ZOO_SERVER_PASSWORDS=pass4user1,pass4user2
      - ZOO_CLIENT_USER=user1
      - ZOO_CLIENT_PASSWORD=pass4user1
    ...
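Once the container is running, a quick way to see authentication in action is to create a znode whose ACL is restricted to the authenticated user from the bundled CLI (a sketch; the znode name and data are arbitrary):
$ docker exec -it zookeeper zkCli.sh
[zk: localhost:2181(CONNECTED) 0] create /secured somedata sasl:user1:cdrwa
Reads and writes on /secured will then require authenticating as user1.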
A ZooKeeper (https://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html) cluster can easily be set up with the Bitnami ZooKeeper Docker image using the following environment variables:
- ZOO_SERVERS: Comma, space or semicolon separated list of servers. This can be done with or without specifying the ID of the server in the ensemble. No defaults. Examples:
  - without server ID: zoo1:2888:3888,zoo2:2888:3888
  - with server ID: zoo1:2888:3888::1,zoo2:2888:3888::2
For reliable ZooKeeper service, you should deploy ZooKeeper in a cluster known as an ensemble. As long as a majority of the ensemble are up, the service will be available. Because ZooKeeper requires a majority, it is best to use an odd number of machines. For example, with four machines ZooKeeper can only handle the failure of a single machine; if two machines fail, the remaining two machines do not constitute a majority. However, with five machines ZooKeeper can handle the failure of two machines.
You have to use 0.0.0.0 as the host for the server's own entry in the list. More concretely, if the ID of the zookeeper1 container being started is 1, then the ZOO_SERVERS environment variable has to be 0.0.0.0:2888:3888,zookeeper2:2888:3888,zookeeper3:2888:3888; if the server IDs are non-sequential, they need to be specified explicitly, e.g. 0.0.0.0:2888:3888::2,zookeeper2:2888:3888::4,zookeeper3:2888:3888::6
See below:
Create a Docker network so that the containers can reach each other by container name:
$ docker network create app-tier --driver bridge
The first step is to create one ZooKeeper instance.
$ docker run --name zookeeper1 \
--network app-tier \
-e ZOO_SERVER_ID=1 \
-e ZOO_SERVERS=0.0.0.0:2888:3888,zookeeper2:2888:3888,zookeeper3:2888:3888 \
-p 2181:2181 \
-p 2888:2888 \
-p 3888:3888 \
bitnami/zookeeper:latest
Next we start a second ZooKeeper container. Since all three nodes run on the same host in this example, we publish each node's ports on distinct host ports to avoid conflicts.
$ docker run --name zookeeper2 \
--network app-tier \
-e ZOO_SERVER_ID=2 \
-e ZOO_SERVERS=zookeeper1:2888:3888,0.0.0.0:2888:3888,zookeeper3:2888:3888 \
-p 2182:2181 \
-p 2889:2888 \
-p 3889:3888 \
bitnami/zookeeper:latest
Finally we start the third ZooKeeper container, again on its own host ports.
$ docker run --name zookeeper3 \
--network app-tier \
-e ZOO_SERVER_ID=3 \
-e ZOO_SERVERS=zookeeper1:2888:3888,zookeeper2:2888:3888,0.0.0.0:2888:3888 \
-p 2183:2181 \
-p 2890:2888 \
-p 3890:3888 \
bitnami/zookeeper:latest
You now have a three-node ZooKeeper ensemble up and running. You can scale the cluster by adding/removing nodes without incurring any downtime.
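To check which node was elected leader, query the srvr four-letter-word command (whitelisted by default) on each published client port; this assumes nc is available on your host and the host-port mappings used above:
$ for port in 2181 2182 2183; do echo srvr | nc localhost $port | grep Mode; done
One node should report Mode: leader and the remaining nodes Mode: follower.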
With Docker Compose the ensemble can be set up using:
version: '2'
services:
  zookeeper1:
    image: 'bitnami/zookeeper:latest'
    ports:
      - '2181'
      - '2888'
      - '3888'
    volumes:
      - /path/to/zookeeper-persistence-1:/bitnami/zookeeper
    environment:
      - ZOO_SERVER_ID=1
      - ZOO_SERVERS=0.0.0.0:2888:3888,zookeeper2:2888:3888,zookeeper3:2888:3888
  zookeeper2:
    image: 'bitnami/zookeeper:latest'
    ports:
      - '2181'
      - '2888'
      - '3888'
    volumes:
      - /path/to/zookeeper-persistence-2:/bitnami/zookeeper
    environment:
      - ZOO_SERVER_ID=2
      - ZOO_SERVERS=zookeeper1:2888:3888,0.0.0.0:2888:3888,zookeeper3:2888:3888
  zookeeper3:
    image: 'bitnami/zookeeper:latest'
    ports:
      - '2181'
      - '2888'
      - '3888'
    volumes:
      - /path/to/zookeeper-persistence-3:/bitnami/zookeeper
    environment:
      - ZOO_SERVER_ID=3
      - ZOO_SERVERS=zookeeper1:2888:3888,zookeeper2:2888:3888,0.0.0.0:2888:3888
To enable TLS for client connections, mount your key and certificate into the container and set the corresponding environment variables:
$ docker run --name zookeeper \
-v /path/to/domain.key:/bitnami/zookeeper/certs/domain.key:ro \
-v /path/to/domain.crs:/bitnami/zookeeper/certs/domain.crs:ro \
-e ALLOW_EMPTY_PASSWORD=yes \
-e ZOO_TLS_CLIENT_ENABLE=yes \
-e ZOO_TLS_CLIENT_KEYSTORE_FILE=/bitnami/zookeeper/certs/domain.key \
-e ZOO_TLS_CLIENT_TRUSTSTORE_FILE=/bitnami/zookeeper/certs/domain.crs \
bitnami/zookeeper:latest
The Bitnami ZooKeeper Docker image sends the container logs to stdout. To view the logs:
$ docker logs zookeeper
or using Docker Compose:
$ docker-compose logs zookeeper
You can configure the container's logging driver using the --log-driver option if you wish to consume the container logs differently. In the default configuration Docker uses the json-file driver.
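For example, a minimal sketch that forwards the logs to a syslog endpoint (the address below is a placeholder):
$ docker run --name zookeeper --log-driver=syslog \
--log-opt syslog-address=udp://syslog.example.com:514 \
bitnami/zookeeper:latest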
To back up your data, follow these simple steps:
$ docker stop zookeeper
or using Docker Compose:
$ docker-compose stop zookeeper
We need to mount two volumes in a container we will use to create the backup: a directory on your host to store the backup in, and the volumes from the container we just stopped so we can access the data.
$ docker run --rm -v /path/to/zookeeper-backups:/backups --volumes-from zookeeper busybox \
cp -a /bitnami/zookeeper /backups/latest
or using Docker Compose:
$ docker run --rm -v /path/to/zookeeper-backups:/backups --volumes-from `docker-compose ps -q zookeeper` busybox \
cp -a /bitnami/zookeeper /backups/latest
Restoring a backup is as simple as mounting the backup as volumes in the container.
$ docker run -v /path/to/zookeeper-backups/latest:/bitnami/zookeeper bitnami/zookeeper:latest
or using Docker Compose:
version: '2'
services:
  zookeeper:
    image: 'bitnami/zookeeper:latest'
    ports:
      - '2181:2181'
    volumes:
      - /path/to/zookeeper-backups/latest:/bitnami/zookeeper
Bitnami provides up-to-date versions of ZooKeeper, including security patches, soon after they are made available upstream. We recommend that you follow these steps to upgrade your container.
$ docker pull bitnami/zookeeper:latest
or, if you're using Docker Compose, update the value of the image property to bitnami/zookeeper:latest.
Before continuing, you should back up your container's data, configuration and logs.
Follow the steps on creating a backup.
$ docker rm -v zookeeper
or using Docker Compose:
$ docker-compose rm -v zookeeper
Re-create your container from the new image, restoring your backup if necessary.
$ docker run --name zookeeper bitnami/zookeeper:latest
or using Docker Compose:
$ docker-compose up zookeeper
- ZooKeeper configuration moved to bash scripts in the rootfs/ folder.
- Configuration is not persisted; it is regenerated each time the container is created or a volume is used.
- The zookeeper container has been migrated to a non-root container approach. Previously the container ran as the root user and the zookeeper daemon was started as the zookeeper user. From now on, both the container and the zookeeper daemon run as user 1001. As a consequence, the configuration files are writable by the user running the zookeeper process.
We'd love for you to contribute to this container. You can request new features by creating an issue, or submit a pull request with your contribution.
If you encountered a problem running this container, you can file an issue. For us to provide better support, be sure to include the following information in your issue:
- Host OS and version
- Docker version (docker version)
- Output of docker info
- Version of this container (echo $BITNAMI_IMAGE_VERSION inside the container)
- The command you used to run the container, and any relevant output you saw (masking any sensitive information)
Copyright (c) 2015-2021 Bitnami
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.