TODO.md

new TODOs while work is in progress

  • update documentation:
    • explain new scripts like ./run.sh and ./run-all_....sh
    • radically shorten main README.md = put everything else into docs/chainhammer.md
    • deploy.py notest --> deploy.py; get-set-get test is now run with deploy.py andtests
    • include methodology chapter as manual - perhaps wait until it is ready?
    • run link-checker.sh again once upstreamed to github
    • reproduce_outdated.md = perhaps sort the pieces into the per-client info files?
  • timestamp transformation = different units depending on client, see tps.timestampToSeconds():
    • next time when trying 'raft' consensus - test whether timestamp transformation is working correctly
    • testrpc-py blocktime is badly estimated - check back with pipermerriam/eth-testrpc#117 if problem is solved now
  • parity:
    • parity instantseal produces 1 block per 1 transaction, but with an integer block timestamp - totally nonsensical. Needs finer time resolution!
    • parity v2.x.y breaks down when shot at with multi-threaded sending, so for now chainhammer is testing it only single-threaded. See issue PE#9582
    • parity: why the empty blocks in parity aura runs?
    • parity: accelerate = what is the best combination of CLI parameters when starting parity? IMHO that should be done by the parity team, because they know their code best; I can just provide the benchmarking platform so that they notice what helps and what does not. See PE#9393
  • quorum:
    • even with gasLimit=0x1312D00 (20,000,000), quorum blocks initially max out
    • what causes the higher initial blocktime? Perhaps modify is_up.py to wait for a moving chain?
    • run with newer than Geth/v1.7.2-stable-3f1817ea/linux-amd64/go1.10.7, waiting for issue BC#57
    • try also raft consensus, waiting for issue BC#51
  • base tech:
    • also try to connect via IPC (currently RPC) - faster?
    • current mempool size, per each node?
  • results:
    • is it good or bad to store the results (reader/img/ diagrams, results/run/___.html pages) directly in the same repo? Where else?
    • make a new docs/quorum.md, docs/geth.md, etc. per client - and move the issues there!
    • run everything again, then replace the images on the main README.md
  • display:
    • multi-terminal tool (e.g. terminator), to show all logs/___.log files at once
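The per-client timestamp units mentioned above (see tps.timestampToSeconds()) can be sketched as a simple lookup. This is a hypothetical illustration, not chainhammer's actual signature; the unit table is an assumption (geth/parity report whole seconds, quorum's raft consensus reportedly uses nanosecond timestamps - exactly the thing to re-verify next time raft is tried):

```python
def timestamp_to_seconds(timestamp, client="geth"):
    """Normalize a block timestamp to float seconds.

    NOTE: function name, 'client' parameter and the unit table are
    assumptions for illustration, not chainhammer's real API.
    """
    units_per_second = {
        "geth": 1,                      # integer seconds
        "parity": 1,                    # integer seconds (too coarse for instantseal!)
        "quorum-raft": 1_000_000_000,   # assumed: nanoseconds
        "testrpc-py": 1,                # seconds, but blocktime estimate is unreliable
    }
    return timestamp / units_per_second[client]


# example: a raft nanosecond timestamp and a geth second timestamp
# normalize to the same value
t = 1_500_000_000
print(timestamp_to_seconds(t) == timestamp_to_seconds(t * 10**9, "quorum-raft"))
```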
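The "mempool size per node" and "IPC instead of RPC" items could share one mechanism: geth exposes txpool_status over both transports. A minimal stdlib-only sketch, where the IPC socket path and the single 4096-byte read are simplifying assumptions:

```python
import json
import os
import socket


def txpool_status_request(request_id=1):
    """Build the raw JSON-RPC payload for geth's txpool_status call."""
    return json.dumps({"jsonrpc": "2.0", "method": "txpool_status",
                       "params": [], "id": request_id})


def parse_txpool_status(raw):
    """Turn the hex counts in the JSON-RPC answer into ints."""
    result = json.loads(raw)["result"]
    return {key: int(value, 16) for key, value in result.items()}


def query_over_ipc(ipc_path="~/.ethereum/geth.ipc"):
    """Untested sketch: send the request over geth's unix-domain IPC socket.

    The default path is an assumption; a single recv() is enough only
    for small answers like txpool_status.
    """
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(os.path.expanduser(ipc_path))
        sock.sendall(txpool_status_request().encode())
        return parse_txpool_status(sock.recv(4096).decode())
```

Running the same request against each node's IPC socket and against its HTTP-RPC port would answer both questions at once: the per-node pending/queued counts, and whether IPC is noticeably faster.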

beware: some of this older collection is outdated:

TODO general

interesting next questions:

What else? Please make your own suggestions.

N.B.: No guarantees that I will find time to continue with this at all - so please feel invited to fork this repo and keep working on the benchmarking. I'll happily merge your pull requests. Thanks.

other places:

  • quorum.md - quickstart how to use this chainhammer tool
    • log.md - sequence of everything that I've already optimized, to get this faster
    • non-vagrant/README.md - attempt to run it on the host machine instead of inside the Vagrant VirtualBox VM; currently broken, issue unanswered.
  • tobalaba.md also benchmarked the parity fork of the EnergyWebFoundation: --chain Tobalaba
  • quorum-IBFT.md
  • parity.md
  • reader/ chainreader: traverses the whole chain and displays it as 4 diagrams: TPS, size, gas, blocktime
  • main README.md - entry point for this repo, now with quickstart