✅ Open /workspace/ds201-lab10/node1/conf/cassandra.yaml in nano or the text editor of your choice and find the endpoint_snitch setting:
nano /workspace/ds201-lab10/node1/conf/cassandra.yaml
The default snitch, SimpleSnitch, is only appropriate for single-datacenter deployments.
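In a stock cassandra.yaml the line you are looking for reads:
endpoint_snitch: SimpleSnitch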
✅ Change the endpoint_snitch to GossipingPropertyFileSnitch, then save and close the file.
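After the edit, the setting should read:
endpoint_snitch: GossipingPropertyFileSnitch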
Note: GossipingPropertyFileSnitch should be your go-to snitch for production use. The rack and datacenter for the local node are defined in cassandra-rackdc.properties and propagated to other nodes via gossip.
✅ Make the same change to node2:
nano /workspace/ds201-lab10/node2/conf/cassandra.yaml
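As a quick sanity check (assuming grep is available in the lab shell), you can verify both files picked up the change without reopening them:
grep endpoint_snitch /workspace/ds201-lab10/node1/conf/cassandra.yaml /workspace/ds201-lab10/node2/conf/cassandra.yaml
Both lines of output should show GossipingPropertyFileSnitch.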
✅ Open /workspace/ds201-lab10/node1/conf/cassandra-rackdc.properties in nano or the text editor of your choice and find the dc and rack settings:
nano /workspace/ds201-lab10/node1/conf/cassandra-rackdc.properties
✅ Set the following values, then save and close the file:
dc=dc-east
rack=rack-red
This is the file that the GossipingPropertyFileSnitch reads to determine the rack and datacenter this particular node belongs to.
Racks and datacenters are purely logical assignments to Cassandra. You will want to ensure that your logical racks and datacenters align with your physical failure zones.
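For example (hypothetical names), a node in cloud availability zone us-east-1a might use:
dc=dc-east
rack=rack-az-1a
while a node in us-east-1b in the same datacenter would use rack=rack-az-1b, so that losing an entire zone takes out only one logical rack.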
✅ Open /workspace/ds201-lab10/node2/conf/cassandra-rackdc.properties:
nano /workspace/ds201-lab10/node2/conf/cassandra-rackdc.properties
✅ Set the following values, then save and close the file:
dc=dc-west
rack=rack-red
Although the rack names for the two nodes are the same, each rack lives independently in a different datacenter (dc-east vs. dc-west).
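A quick way to compare the two files side by side (grep prefixes each match with its filename; this also filters out any comment lines):
grep -E '^(dc|rack)=' /workspace/ds201-lab10/node1/conf/cassandra-rackdc.properties /workspace/ds201-lab10/node2/conf/cassandra-rackdc.properties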
✅ Start node1:
/workspace/ds201-lab10/node1/bin/cassandra
Wait for node1 to start.
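One way to know when it is ready is to follow the node's log and watch for the "Starting listening for CQL clients" message (assuming the lab keeps each node's logs under its own logs directory; press Ctrl+C to stop following):
tail -f /workspace/ds201-lab10/node1/logs/system.log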
✅ Start node2:
/workspace/ds201-lab10/node2/bin/cassandra
✅ Check on the cluster status:
/workspace/ds201-lab10/node2/bin/nodetool status
You should now see that the nodes are in different datacenters.
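The output should look roughly like the following; the addresses, load, token counts, and host IDs are illustrative and will vary in your environment. The parts to verify are the two Datacenter headers and rack-red under each:

Datacenter: dc-east
===================
--  Address    Load  Tokens  Owns  Host ID  Rack
UN  127.0.0.1  ...   ...     ...   ...      rack-red

Datacenter: dc-west
===================
--  Address    Load  Tokens  Owns  Host ID  Rack
UN  127.0.0.2  ...   ...     ...   ...      rack-red

UN in the first column means the node is Up and in the Normal state.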