🎯 Goal: configure and run the Ansible playbook that automates the creation and deployment of the ZDM Proxy on the target machine(s).
First start a bash shell on the `zdm-ansible-container`: this will be needed a few times in the rest of this lab (and will happen in the "zdm-ansible-console" terminal). The next command will change the prompt to something like `ubuntu@4fb20a9b:~$`: this terminal will stay in the container until the end.
### {"terminalId": "container", "backgroundColor": "#C5DDD2"}
docker exec -it zdm-ansible-container bash
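As an optional sanity check (not part of the lab steps), you can confirm you are inside the Ansible container before proceeding:

```bash
# Optional check, run inside the container: the user should be "ubuntu"
# and the hostname should match the short id shown in the prompt.
whoami
hostname
```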
It is time to configure the settings for the proxy that is about to be created. To do so, you are going to edit the file `zdm_proxy_cluster_config.yml` on the container, adding connection parameters for both Origin and Target.
First check the IP address of the Cassandra node, with:
### host
. /workspace/zdm-scenario-katapod/scenario_scripts/find_addresses.sh
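Since the script is sourced (note the leading dot), the addresses it prints should also be available as environment variables in your shell; assuming it exports the names used later in this lab, a quick optional check is:

```bash
# Optional check (assumes find_addresses.sh exports these variables,
# as suggested by how they are used elsewhere in this lab):
echo "Cassandra seed node IP: ${CASSANDRA_SEED_IP}"
echo "ZDM host IP:            ${ZDM_HOST_IP}"
```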
Moreover, you'll need the Target database ID: if you went through the Astra CLI path, your database ID is given by the following command (check this link if you used the Astra UI instead):
### host
grep ASTRA_DB_ID /workspace/zdm-scenario-katapod/.env
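If you want just the ID by itself (handy for copy-pasting into the config file), a hypothetical variant of the same command, assuming the `.env` entry is a plain unquoted `ASTRA_DB_ID=<uuid>` line, is:

```bash
# Illustrative only: keep just the database ID (a UUID) by dropping the variable name.
grep ASTRA_DB_ID /workspace/zdm-scenario-katapod/.env | cut -d '=' -f 2
```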
In file `zdm_proxy_cluster_config.yml`, you'll have to uncomment and edit the entries in the following table. (Note that, within the container, all the file editing will have to be done in the console. To save and quit `nano` when you are done, hit Ctrl-X, then Y, then Enter.)
| Variable | Value |
|---|---|
| `origin_username` | `cassandra` |
| `origin_password` | `cassandra` |
| `origin_contact_points` | The IP of the Cassandra seed node (Note: this is the value of `CASSANDRA_SEED_IP` as printed by `find_addresses.sh`, and not the ZDM host address) |
| `origin_port` | `9042` |
| `target_username` | The Client ID found in your Astra DB Token |
| `target_password` | The Client Secret found in your Astra DB Token |
| `target_astra_db_id` | Your Database ID from the Astra DB dashboard |
| `target_astra_token` | The "token" string in your Astra DB Token (the one starting with `AstraCS:...`) |
### {"terminalId": "container", "backgroundColor": "#C5DDD2"}
cd /home/ubuntu/zdm-proxy-automation/
nano ansible/vars/zdm_proxy_cluster_config.yml
Note: `nano` might occasionally fail to start. In that case, hit Ctrl-C in the console and re-launch the command.
Once the changes are saved, you can run the Ansible playbook that will provision and start the proxy containers on the proxy host: still in the Ansible container, launch the command:
### {"terminalId": "container", "backgroundColor": "#C5DDD2"}
cd /home/ubuntu/zdm-proxy-automation/ansible
ansible-playbook deploy_zdm_proxy.yml -i zdm_ansible_inventory
This will provision, configure and start the ZDM Proxy, one container per instance (in this exercise there'll be a single instance, `zdm-proxy-container`).
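A successful run ends with Ansible's standard play recap reporting no failed tasks; the host alias and task counts below are placeholders, shown only to illustrate what to look for:

```
PLAY RECAP *********************************************************************
<proxy-host>  : ok=40  changed=20  unreachable=0  failed=0  skipped=4  rescued=0  ignored=0
```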
Once this is done, you can check that the new container is listed in the output of:
### {"terminalId": "host", "backgroundColor": "#C5DDD2"}
docker ps
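If many containers are running, you can optionally narrow the listing to the proxy container by filtering on its name:

```bash
# Optional: list only the ZDM Proxy container.
docker ps --filter "name=zdm-proxy-container"
```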
By inspecting the logs of the containerized proxy instance, you can verify that it has indeed succeeded in connecting to the clusters:
### {"terminalId": "host", "backgroundColor": "#C5DDD2"}
docker logs zdm-proxy-container 2>&1 | grep "Proxy connected"
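If you prefer to watch the full log stream instead of grepping for a single line, a simple alternative is:

```bash
# Optional: stream the proxy logs continuously; stop with Ctrl-C.
docker logs -f zdm-proxy-container
```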
Alternatively, the ZDM Proxy exposes a health-status HTTP endpoint: you can query it with
### {"terminalId": "host", "backgroundColor": "#C5DDD2"}
. /workspace/zdm-scenario-katapod/scenario_scripts/find_addresses.sh
curl http://${ZDM_HOST_IP}:14001/health/readiness | jq
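If you'd rather wait for readiness programmatically, a small polling sketch such as the following can be used (it relies only on the HTTP status code, assuming the endpoint returns 200 once the proxy is ready and that `find_addresses.sh` has already been sourced so `ZDM_HOST_IP` is set):

```bash
# Illustrative sketch: poll the readiness endpoint until it returns HTTP 200.
until [ "$(curl -s -o /dev/null -w '%{http_code}' "http://${ZDM_HOST_IP}:14001/health/readiness")" = "200" ]; do
  echo "ZDM Proxy not ready yet, retrying in 5 seconds..."
  sleep 5
done
echo "ZDM Proxy is ready."
```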