Designed and implemented a distributed system that is Byzantine fault tolerant. Verified the system by injecting various Byzantine failures.

PLATFORM:
    Testing: test data is provided in a JSON file and loaded with Python's json module (see the sketch after this list).
    DistAlgo version: 1.0.9, compiled from source and installed.
    Python version: 3.5.0, compiled from source and installed.
    Operating system: CentOS release 6.3 (Final)
    Host: VM running in Oracle VM VirtualBox on a laptop
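
    A minimal sketch of how the test data could be loaded with the json module; the file name testing.json comes from this README, while the key names shown are assumptions:

        import json

        # Load the test data from testing.json (top-level keys are illustrative only).
        with open('testing.json') as f:
            testdata = json.load(f)

        # Iterate over test cases (hypothetical structure).
        for case in testdata.get('test_cases', []):
            print(case)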

INSTRUCTIONS:
    The main process is the entry point for the whole system; it automatically reads config.txt for the configuration and testing.json
    for the input test data. It can be run as:
    python3 -m da -f -F output --logfilename ../logs/logfile.log --message-buffer-size 65536 -m main_process

    Start the olympus, replica, and client nodes in separate terminals:
    python3 -m da -f -F output --logfilename /home/rasika/Documents/Asynchronous\ Systems/Phase\ 2/asyncbcr/logs/logfile.log --message-buffer-size 65536 --cookie SECRET -D -n OlympusNode -m olympus
    python3 -m da -f -F output --logfilename /home/rasika/Documents/Asynchronous\ Systems/Phase\ 2/asyncbcr/logs/logfile.log --message-buffer-size 65536 --cookie SECRET -D -n ReplicasNode -m replica
    python3 -m da -f -F output --logfilename /home/rasika/Documents/Asynchronous\ Systems/Phase\ 2/asyncbcr/logs/logfile.log --message-buffer-size 65536 --cookie SECRET -D -n ClientsNode -m client

WORKLOAD GENERATION:
    For workload generation, the process reads the workload given in either testing.json or config.txt.
    It is stored as a list in the client properties map; the index into the list gives the workload for the corresponding client.
    For pseudorandom workload generation, there is a dictionary of default operations, and the specified random count selects operations from that dictionary (see the sketch below).
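
    A minimal sketch of the workload selection described above; the default operation strings, the seeding scheme, and the workload_for_client helper are assumptions, not the project's actual code:

        import random

        # Hypothetical dictionary of default operations used for pseudorandom workloads.
        DEFAULT_OPS = {
            0: "put('movie','star wars')",
            1: "get('movie')",
            2: "append('movie',' sequel')",
            3: "slice('movie','0:2')",
        }

        def workload_for_client(client_index, workloads):
            # workloads is the list stored in the client properties map;
            # the client's index selects its own workload entry.
            spec = workloads[client_index]
            if isinstance(spec, int):              # pseudorandom: spec = number of operations
                rng = random.Random(client_index)  # assumed seeding scheme
                return [DEFAULT_OPS[rng.randrange(len(DEFAULT_OPS))] for _ in range(spec)]
            return spec                            # explicit list of operations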

BUGS AND LIMITATIONS:
    None.

CONTRIBUTIONS:
    rrane:  Implemented public and private key generation using the pynacl library.
            Signing and verification of messages implemented using the pynacl library (see the sketch below).
            Added handling for failure injection.
            Added verification for result statements.
            Testing framework for reading input test data implemented using Python's json module.
            Added workflow for wedge requests, catch-up, and caught-up messages.
            Added fault errors for phase 3.
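
    A minimal sketch of key generation, signing, and verification with pynacl, as mentioned above; the message content is illustrative:

        from nacl.signing import SigningKey
        from nacl.exceptions import BadSignatureError

        # Generate a private signing key and the corresponding public verify key.
        signing_key = SigningKey.generate()
        verify_key = signing_key.verify_key

        # Sign an (illustrative) order statement and verify it on the receiving side.
        signed = signing_key.sign(b'order: slot=1 op=put')
        try:
            verify_key.verify(signed)   # returns the original message bytes on success
        except BadSignatureError:
            print('signature verification failed')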
    
    rpohankar: Implemented workload generation and generation of the request sequence from the config file
               Implemented the dictionary object: supports put, get, slice, append
               Implemented replica functionality (see the sketch after this list):
                   Head: handle new request: assign slot, sign order stmt & result stmt, send shuttle
                   Head: handle retransmitted request as described in the paper
                   Handle shuttle: check validity of order proof (incl. signatures), add signed order statement and signed result statement, send updated shuttle
                   Tail: send result to client; send result shuttle to predecessor
                   Handle result shuttle: validate, save, and forward it
                   Non-head: handle request: send cached result, send error, or forward request
                   Creation of logs using a command-line option
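
    A minimal sketch of the head replica's handling of a new request, following the flow listed above; the message layout and the apply, sign, and send_to_successor helpers are assumptions rather than the project's actual code:

        def handle_new_request(self, request):
            # The head assigns the next slot number to the client request.
            slot = self.next_slot
            self.next_slot += 1

            # Execute the operation and sign the order and result statements
            # (sign() stands in for the pynacl-based signing used in this project).
            result = self.apply(request.operation)
            order_stmt = self.sign((slot, request))
            result_stmt = self.sign((slot, hash(result)))

            # Start the shuttle down the chain with the order and result proofs.
            shuttle = {'slot': slot,
                       'request': request,
                       'order_proof': [order_stmt],
                       'result_proof': [result_stmt]}
            self.send_to_successor(shuttle)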

MAIN FILES:
    main process: main_process.da
    client: client.da
    olympus: olympus.da
    replica: replica.da

CODE SIZE:
    LOC: algorithm - approximately 1400 
         other - approximately 500
         total - approximately 1900

LANGUAGE FEATURE USAGE:
    Used the quantifier 'some' in 12 await statements for handling messages (see the sketch below).
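
    A minimal sketch of one such await statement in DistAlgo; the message tag and fields are illustrative, not the project's actual messages:

        # Inside a DistAlgo process: block until some result message
        # of the expected form has been received.
        await(some(received(('result', cmd_id, result))))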
