Stage 1 block verification failed for 5c00…6234: Block(DifficultyOutOfBounds(OutOfBounds { min: None, max: Some(340282366920938463463374607431768211455), found: 340282366920938463463374607431768211456 })) #5689
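For reference, the two bounds in the error decode to the 128-bit integer ceiling: the allowed maximum is 2^128 − 1 (`U128::MAX`) and the rejected difficulty is exactly 2^128, an off-by-one overflow past that ceiling. A quick sanity check with plain Python arithmetic (nothing Parity-specific):

```python
# The "max" bound from the error is U128::MAX (2**128 - 1);
# the "found" value is exactly one past it (2**128).
U128_MAX = 2**128 - 1

assert U128_MAX == 340282366920938463463374607431768211455      # max: Some(...)
assert U128_MAX + 1 == 340282366920938463463374607431768211456  # found: ...
print("found exceeds max by", (U128_MAX + 1) - U128_MAX)  # prints: found exceeds max by 1
```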
What are the Parity versions on each of the nodes? |
1.6.6 Beta on one sealing and one non-sealing node. The rest are running 1.7.0 Unstable builds from the 12th or 13th of April. |
Unstable has been changing the difficulty calculation. I would not recommend running it on the sealing nodes (especially without adjusting the chain spec). You can keep your workaround and make sure to upgrade all your nodes to latest beta before the transition block occurs. |
Yes, 1.7 is not a release yet and receives updates daily. If you plan to stick with nightly builds, make sure you recompile daily, or switch to beta. |
I get the same error after some transactions between nodes.
The failing node can't accept the block and the network goes down. I'm stress-testing my PoA network under high load with Tsung. |
Are you using the latest 1.7 release? |
I'm using |
Yes, the same bug was reproduced twice with 1.7.0-beta. |
As far as I know, @MasX has also had this issue. |
Is the bug resolved with --force-sealing? |
I don't use the "force-sealing" option either. I will try 1.6.6; let's see if the problem exists with that version. |
Before the 1.7.0 official release I was using 1.6.9 and it worked well with and without 'force sealing', but I'm really interested in the new version. |
The issue was reproduced with 1.6.7. Version 1.6.6 works well. However, it is still unclear how to reliably reproduce this bug. It looks like it happens after some network error and a subsequent disconnect between the PoA nodes. I will keep investigating. |
Alright, I reproduced this issue with 1.7.2. It happened after several disconnects between 2 nodes, and a snapshot was created. |
@5chdn what info can I collect to help with resolving this bug? I'm able to reproduce it each time. |
@Ochir how can you reproduce the bug? Just with several network interruptions? |
No, after load testing. It usually happens once around 5000 blocks have been imported.
|
As I already wrote, version 1.6.6 works well even after heavy load. |
This happens if two validator nodes are using the same key to author blocks. Make sure to use different keys per validator node, as described in the wiki. |
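A minimal sketch of what one key per validator looks like on the command line, assuming Parity's --engine-signer and --password flags; the addresses and password files are hypothetical placeholders, not taken from this thread:

```shell
# Each validator node authors blocks with its own distinct account.
# 0xaaaa... / 0xbbbb... and the password files are placeholders.
parity --chain spec.json --engine-signer 0xaaaa... --password node1.pwd   # validator 1
parity --chain spec.json --engine-signer 0xbbbb... --password node2.pwd   # validator 2
```

If both validators sign with the same account, both seal on the same step and produce conflicting blocks.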
Hi, I have a private net with two sealing and four non-sealing nodes (six nodes total). The non-sealing nodes errored with the following:
and lost connection two minutes later. After around 15 minutes they reconnected but didn't import any blocks. I deleted the chain folder on one of the nodes and tried to resync, but the same error occurred.
here is my spec:
I added
"validateScoreTransition": 1000000
to the spec as a quick workaround.
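For context, a sketch of where that key sits in an Aura chain spec; the surrounding values (step duration, validator list) are illustrative placeholders, not taken from this thread:

```json
{
  "engine": {
    "authorityRound": {
      "params": {
        "stepDuration": "5",
        "validators": { "list": ["0x..."] },
        "validateScoreTransition": 1000000
      }
    }
  }
}
```

Setting the transition block far in the future postpones the stricter difficulty/score validation until all nodes have been upgraded.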