An attempt to build a reliable, distributed file system inspired by the Hadoop Distributed File System (HDFS).
- A CLI tool to start and configure Data Node and Name Node servers.
Starts a Name Node server on a host.
Usage:
bin/rdfs.sh namenode
Flags:
--name-node-port, Default: 3620
--name-node-heartbeat-port, Default: 3630
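For example, a Name Node can be started with its ports set explicitly (a minimal sketch, assuming flags are passed as `--flag value`):

```sh
# Start a Name Node serving client requests on 3620 and
# Data Node heartbeats on 3630 (the defaults, shown explicitly).
bin/rdfs.sh namenode --name-node-port 3620 --name-node-heartbeat-port 3630
```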
Starts a Data Node server on a host and joins it to the RDFS cluster.
Usage:
bin/rdfs.sh datanode
Flags:
--name-node-address, Default: 0.0.0.0
--name-node-heartbeat-port, Default: 3630
--data-node-port, Default: 3530
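For example, joining a Data Node to a Name Node running on another host (a sketch; `10.0.0.5` is a placeholder address and the `--flag value` syntax is assumed):

```sh
# Point the Data Node at the Name Node's heartbeat port and
# serve block traffic on the default data port.
bin/rdfs.sh datanode --name-node-address 10.0.0.5 \
  --name-node-heartbeat-port 3630 --data-node-port 3530
```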
- A CLI client tool to interact with RDFS.
Writes the contents of a local file to RDFS.
Usage:
bin/rdfs-client.sh write <local-filepath> <rdfs-file-name>
Flags:
--name-node-address, Default: 0.0.0.0
--name-node-port, Default: 3620
--block-size, Default: 128 x 10^6 bytes (128 MB)
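For example, uploading a local file with a non-default block size (a sketch; the file names and address are placeholders):

```sh
# Write /tmp/report.csv to RDFS as report.csv, splitting it into
# 64 MB blocks instead of the 128 MB default.
bin/rdfs-client.sh write /tmp/report.csv report.csv \
  --name-node-address 10.0.0.5 --block-size 64000000
```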
Reads the contents of a file on RDFS and writes them to a local file.
Usage:
bin/rdfs-client.sh read <new-local-filename> <rdfs-file-name>
Flags:
--name-node-address, Default: 0.0.0.0
--name-node-port, Default: 3620
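For example (placeholder names again):

```sh
# Fetch the RDFS file report.csv into a new local file copy.csv.
bin/rdfs-client.sh read copy.csv report.csv --name-node-address 10.0.0.5
```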
Deletes a file from RDFS.
Usage:
bin/rdfs-client.sh delete <rdfs-file-name>
Flags:
--name-node-address, Default: 0.0.0.0
--name-node-port, Default: 3620
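For example:

```sh
# Remove report.csv from the RDFS namespace.
bin/rdfs-client.sh delete report.csv --name-node-address 10.0.0.5
```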
- Dockerfiles to build the RDFS server and client images.
docker build -t rdfs -f docker/Dockerfile.rdfs .
docker build -t rdfs-client -f docker/Dockerfile.client .
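Once built, the images can be run as containers. The sketch below assumes the images' entrypoints accept the same subcommands and flags as the shell scripts above and that the default ports are used; `<namenode-host>` is a placeholder:

```sh
# Hypothetical invocations; the entrypoint behavior is an assumption.
docker run -d --name namenode -p 3620:3620 -p 3630:3630 rdfs namenode
docker run -d --name datanode -p 3530:3530 rdfs datanode \
  --name-node-address <namenode-host>
# Mount the working directory so the client can read the local file
# (the /data mount point is a guess).
docker run --rm -v "$PWD:/data" rdfs-client write /data/local.txt remote.txt \
  --name-node-address <namenode-host>
```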