ERROR: Service 'node1' failed to build: The command '/bin/sh -c chmod +x start.sh ...' returned a non-zero code: 2 #39
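The failure in the title can be seen outside Docker. This is a hypothetical sketch (the temporary directory stands in for the image layer; it is not from the repo) showing that `chmod +x start.sh` exits non-zero when the file is absent, e.g. because an earlier `COPY` step failed or targeted a different path:

```shell
# Hypothetical sketch: chmod exits non-zero when its target is absent,
# which is one way a RUN step like '/bin/sh -c chmod +x start.sh ...'
# can fail -- e.g. start.sh was never COPYed into the image layer.
workdir=$(mktemp -d)            # empty directory standing in for the layer
cd "$workdir"
/bin/sh -c 'chmod +x start.sh'  # start.sh does not exist here
echo "exit code: $?"            # non-zero, so the docker build aborts
```

A non-zero exit from any command in a `RUN` step aborts the build with the "returned a non-zero code" message shown above.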
I just followed the exact same steps and it works completely fine for me, and I am on a Mac.

AWS: use this AMI (ami-07ae98bd023ea9955) to reproduce the exact same system. It is Debian-stretch based.

I have now done the same again, starting all over in a new instance of the same AWS image ... and I could reproduce the problem. Are you seeing it too, when you run it in that ami-07ae98bd023ea9955?
I have the same problem running locally on macOS. As said here: gliderlabs/docker-alpine#207 (comment), there is a problem doing ... I changed these lines: https://github.com/blk-io/crux/blob/master/docker/crux/Dockerfile#L3-L5 in the Dockerfile to: ...

It solves the problem. @drandreaskrueger I will test in AWS.
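The committed change itself is elided above. For reference, a common workaround for the `apk`/DNS failure described in gliderlabs/docker-alpine#207 looks like the following sketch; the base-image tag and package list are illustrative, not the actual crux Dockerfile:

```dockerfile
# Hypothetical workaround sketch, not the actual committed change:
# pin a DNS resolver for the duration of the apk commands, in the
# same RUN layer, so the override is in effect while packages install.
FROM alpine:3.8

RUN echo "nameserver 8.8.8.8" > /etc/resolv.conf \
    && apk update \
    && apk add --no-cache bash curl
```

Keeping the resolver override and the `apk` calls in one `RUN` matters: Docker manages `/etc/resolv.conf` at container start, so a change written in one layer does not carry over to later `RUN` steps.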
Thanks a lot. Good news. Please commit. Thanks. |
Also, please fix this too: https://github.com/blk-io/crux/blob/master/docker/quorum-crux/Dockerfile because that is what is failing. I start with ...
I have tried replacing in https://github.com/blk-io/crux/blob/master/docker/quorum-crux/Dockerfile with ... and ... it works. Hooray. Please commit to master. Thanks.
I submitted a PR #40 waiting to be merged. @Puneetha17

Here: https://github.com/carlosfaria94/crux I made a fork with the Dockerfile fix and with your performance tweaks. I also tested it in AWS and it works well (Ubuntu 16 LTS).

@drandreaskrueger I have been following your work; I think it is really important to start stress-testing all these implementations. Thank you for your contributions! (And I'm sorry to see the Parity team not interested in searching for a solution; it's sad.)

P.S.: Be careful with AuRa: the algorithm relies on the UNIX timestamps being in sync between nodes. Bad things can happen if the timestamps are not in sync (lack of consistency).
AuRa favors availability instead of consistency. I prefer a PBFT approach (e.g. IBFT), which favors consistency instead of availability. But it depends a lot on your use case and your project's risk tolerance.

One more question: I'm trying to benchmark quorum-crux IBFT in AWS, following your amazing work: https://gitlab.com/electronDLT/chainhammer/blob/master/reproduce_TODO-crux.md When you do ...
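The timestamp caveat above can be sketched numerically: in AuRa, each node derives the current sealing step from its own clock, so skewed clocks put validators on different steps. The 5-second step duration and the two clock readings below are illustrative values, not taken from any real chain:

```shell
# Minimal sketch of why AuRa cares about clock sync: each node computes
# the current sealing "step" from its own UNIX timestamp.
step_duration=5                 # illustrative step length in seconds
aura_step() { echo $(( $1 / step_duration )); }

a=$(aura_step 1700000003)       # node A's local clock
b=$(aura_step 1700000007)       # node B's clock, 4 seconds ahead
echo "node A step: $a, node B step: $b"
# prints: node A step: 340000000, node B step: 340000001
# The two nodes disagree on whose turn it is to seal a block.
```

When the skew straddles a step boundary like this, the nodes disagree on the current leader, which is the "bad things" failure mode the warning refers to.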
Oh wow, thanks for all that interesting info! Hopefully I'll get the time to look deeper next week. And most of all, THANKS for the encouragement. Much needed right now. But of course, we are all busy, so I also understand their reluctance to run someone else's code. Anyway ... I digress ... back to this:
Yes, I have just committed an update to exactly that, a few minutes ago. Still work in progress; I want to wait for @Puneetha17 and then simplify that overly long text reproduce_TODO-crux.md before I include it into the main reproduce.md. Then I will also create yet another newer version of the AMI. But already: I could get it working!! (Preliminary!) results:
And: ...

I need that ...

But that ...

Yes, from yet another ...
Fantastic, that'll help. I won't continue before next week. But feel free to play with the parameters; perhaps you can accelerate it even more? Also try raft consensus? Good luck. Thanks A LOT for the good vibes! ;-) Muito obrigado!
I have only researched the AuRa consensus algorithm; I've personally never worked with Parity, only Geth. I will certainly perform more tests before the end of the week. I only started looking at your work this week; I'm digesting it, and then I will perform some benchmarking, and I look forward to contributing something :) Raft is not Byzantine fault-tolerant, and our internal project requires a BFT consensus algorithm. But I will certainly look at Raft to see the performance compared with IBFT. Thank you for all of your work :) Obrigado :D
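The Raft-vs-IBFT point above comes down to the standard quorum bounds: a crash-fault protocol like Raft needs at least 2f + 1 nodes to tolerate f failed nodes, while a byzantine protocol like IBFT/PBFT needs at least 3f + 1. A quick check of both bounds:

```shell
# Standard fault-tolerance bounds (general results, not project-specific):
#   Raft (crash faults only):     n >= 2f + 1
#   IBFT/PBFT (byzantine nodes):  n >= 3f + 1
f=1                                # faulty nodes the cluster must survive
echo "raft needs $((2*f + 1)) nodes"   # prints: raft needs 3 nodes
echo "ibft needs $((3*f + 1)) nodes"   # prints: ibft needs 4 nodes
```

So for the same fault budget, a BFT deployment needs one more node than a Raft one at f = 1, and the gap grows with f.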