
PoA network, all the sealers are waiting for each other after 2 months running, possible deadlock? #18402

Closed
marcosmartinez7 opened this issue Jan 7, 2019 · 80 comments

Comments

@marcosmartinez7

marcosmartinez7 commented Jan 7, 2019

System information

My current version is:

Geth
Version: 1.8.17-stable
Git Commit: 8bbe72075e4e16442c4e28d999edee12e294329e
Architecture: amd64
Protocol Versions: [63 62]
Network Id: 1
Go Version: go1.10.1
Operating System: linux
GOPATH=
GOROOT=/usr/lib/go-1.10

Expected behaviour

Keep sealing blocks normally.

Actual behaviour

I was running a go-ethereum private network with 6 sealers.

Each sealer is run by:

directory=/home/poa
command=/bin/bash -c 'geth --datadir sealer4/  --syncmode 'full' --port 30393 --rpc --rpcaddr 'localhost' --rpcport 8600 --rpcapi='net,web3,eth' --networkid 30 --gasprice '1' -unlock 'someaddress' --password sealer4/password.txt --mine '

The blockchain was running fine for about 1-2 months.

Today I found that all the nodes were having issues. Each node was emitting the message "Signed recently, must wait for others".

I checked the logs and found this message every hour, with no more information; the nodes were not mining:

Regenerated local transaction journal transactions=0 accounts=0
Regenerated local transaction journal transactions=0 accounts=0
Regenerated local transaction journal transactions=0 accounts=0
Regenerated local transaction journal transactions=0 accounts=0

Since all 6 sealers were experiencing the same issue, I restarted each node, but now I am stuck at:

INFO [01-07|18:17:30.645] Etherbase automatically configured address=0x5Bc69DC4dba04b6955aC94BbdF129C3ce2d20D34
INFO [01-07|18:17:30.645] Commit new mining work number=488677 sealhash=a506ec…8cb403 uncles=0 txs=0 gas=0 fees=0 elapsed=133.76µs
INFO [01-07|18:17:30.645] Signed recently, must wait for others

The first weird thing is that some nodes are stuck on 488677 and others on 488676; this behaviour was reported in issue #16406, the same as for the user @lyhbarry.

Example:
Signer 1

[screenshot]

Signer 2

[screenshot]

Note that there are no pending votes.

So, right now, I shut down and restarted each node, and I have found that:

  • Each node is peered with the others
  • Each node is part of clique.getSigners()
  • Each node is waiting for another to sign...
INFO [01-07|18:41:56.134] Signed recently, must wait for others 
INFO [01-07|19:41:42.125] Regenerated local transaction journal    transactions=0 accounts=0
INFO [01-07|18:41:56.134] Signed recently, must wait for others 

So synchronisation failed, but I also can't start signing again because each node is stuck waiting for the others. Does that mean the network is useless?

The comment by @tudyzhb on that issue mentions:

Regarding the clique sealing in v1.8.11, I think there is no effective mechanism to retry sealing when an in-turn/out-of-turn seal failure occurs, so our dev network easily becomes useless.

After this problem, I took a look at the logs; each signer has these error messages:

Synchronisation failed, dropping peer peer=7875a002affc775b err="retrieved hash chain is invalid"

INFO [01-02|16:42:10.902] Signed recently, must wait for others 
WARN [01-02|16:42:11.960] Synchronisation failed, dropping peer    peer=7875a002affc775b err="retrieved hash chain is invalid"
INFO [01-02|16:42:12.128] Imported new chain segment               blocks=1  txs=0 mgas=0.000 elapsed=540.282µs mgasps=0.000  number=488116 hash=269920…afd3c7 cache=5.99kB
INFO [01-02|16:42:12.129] Commit new mining work                   number=488117 sealhash=f7b00c…787d5c uncles=2 txs=0 gas=0     fees=0          elapsed=307.314µs
INFO [01-02|16:42:20.929] Successfully sealed new block            number=488117 sealhash=f7b00c…787d5c hash=f17438…93ffe3 elapsed=8.800s
INFO [01-02|16:42:20.929] 🔨 mined potential block                  number=488117 hash=f17438…93ffe3
INFO [01-02|16:42:20.930] Commit new mining work                   number=488118 sealhash=b09b33…1526ba uncles=2 txs=0 gas=0     fees=0          elapsed=520.754µs
INFO [01-02|16:42:20.930] Signed recently, must wait for others 
INFO [01-02|16:42:31.679] Imported new chain segment               blocks=1  txs=0 mgas=0.000 elapsed=2.253ms   mgasps=0.000  number=488118 hash=763a32…a579f5 cache=5.99kB
INFO [01-02|16:42:31.680] 🔗 block reached canonical chain          number=488111 hash=3d44dc…df0be5
INFO [01-02|16:42:31.680] Commit new mining work                   number=488119 sealhash=c8a5e7…db78a1 uncles=2 txs=0 gas=0     fees=0          elapsed=214.155µs
INFO [01-02|16:42:31.680] Signed recently, must wait for others 
INFO [01-02|16:42:40.901] Imported new chain segment               blocks=1  txs=0 mgas=0.000 elapsed=808.903µs mgasps=0.000  number=488119 hash=accc3f…44bc4c cache=5.99kB
INFO [01-02|16:42:40.901] Commit new mining work                   number=488120 sealhash=f73978…c03fa7 uncles=2 txs=0 gas=0     fees=0          elapsed=275.72µs
INFO [01-02|16:42:40.901] Signed recently, must wait for others 
WARN [01-02|16:42:41.961] Synchronisation failed, dropping peer    peer=7875a002affc775b err="retrieved hash chain is invalid"

I also see some:

INFO [01-02|16:58:10.902] 😱 block lost number=488205 hash=1fb1c5…a41a42
This hash chain error was just a warning, so the nodes kept mining until the 2nd of January; then I saw this on each of the 6 nodes:

[screenshot]

I see that there are a lot of issues about this error; the most similar is the one I posted here, but it is unresolved.

In most of those issues the workaround seems to be a restart, but in this case the chain seems to be in an inconsistent state and the nodes are always waiting for each other.

So,

  1. Any ideas? Peers are connected, accounts are unlocked; it just entered a deadlock situation after 450k blocks.
  2. Any logs that I can provide? I only see the warnings for the error described and the lost blocks, but nothing from when the nodes stopped mining.
  3. Is this PR related? les: fix fetcher syncing logic #18072
  4. Maybe it is related to the comment by @karalabe on the issue "Geth signing stops after a period of time" #16406?
  5. Will upgrading from 1.8.17 to 1.8.20 solve this?
  6. In my opinion, it seems like a race condition or something similar, since I have 2 chains, one running for 2 months and the other for three months, and this is the first time this error has happened.

These are other related issues:

#16444 (Same issue, but I don't have pending votes in my snapshot)

#14381

#16825

#16406

@marcosmartinez7
Author

Based on this image

[screenshot]

That is the situation on all the sealers: they just stop sealing, waiting for each other. It seems like a deadlock situation.

Which files can I check for errors, since the JS console isn't throwing anything?

@marcosmartinez7
Author

marcosmartinez7 commented Jan 7, 2019

This is the debug.stacks() output. I don't know whether it is important here, but this is what was executing while the sealers were stuck:

[screenshots of the debug.stacks() output]

@marcosmartinez7 marcosmartinez7 changed the title PoA network, all the sealers are waiting for each other PoA network, all the sealers are waiting for each other after 2 months running, possible deadlock? Jan 8, 2019
@marcosmartinez7
Author

marcosmartinez7 commented Jan 8, 2019

I found that I have a lot of lost blocks on each node:

[screenshot]

Could this be the problem? The chain was running with those warnings without any issues anyway.

Btw, could it be caused by a bad connection between the nodes? I'm using Digital Ocean droplets.

NOTE: if I check eth.blockNumber I get 488676 or 488675, depending on the sealer.

@jmank88
Contributor

jmank88 commented Jan 8, 2019

We experienced a similar deadlock on our fork, and the cause was that out-of-turn blocks all have difficulty 1, mixed with a bit of bad luck. When the difficulties are the same, a random block is chosen as canonical, which can produce split decisions. You can compare hashes and recent signers on each node to confirm if your network is deadlocked in the same way. We had to both modify the protocol to produce distinct difficulties for each signer, and modify the same-difficulty tie-breaker logic to make more deterministic choices.

@marcosmartinez7
Author

Thanks for the response. Can you give me an idea of how to "compare hashes and recent signers on each node to confirm if your network is deadlocked in the same way"?

Thanks

@jmank88
Contributor

jmank88 commented Jan 8, 2019

By getting the last 2 blocks from each node, you should be able to see exactly why they are stuck based on their view of the world. They all think that they have signed too recently, so they must disagree on what the last few blocks are supposed to be, so you'll see different hashes and signers for blocks with the same number (and difficulty!).
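For reference, a minimal way to collect that view from each sealer's console (a sketch using the standard geth JavaScript console; it assumes the clique API is exposed on the endpoint you attach to, and the exact output shape can vary by geth version):

// run in each sealer's geth console (e.g. via geth attach)
var head = eth.getBlock("latest");
var prev = eth.getBlock(head.number - 1);
[head, prev].forEach(function (b) {
  console.log(b.number, b.hash, "parent:", b.parentHash, "difficulty:", b.difficulty.toString());
});
// the clique snapshot lists the current signers and which of them signed the most recent blocks
clique.getSnapshot();

If two sealers report the same block number with the same difficulty but different hashes, that is the kind of ambiguous split described above.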

@marcosmartinez7
Author

marcosmartinez7 commented Jan 8, 2019

Good idea!!

Sealer 1

Last block 488676

[screenshot]

Last -1 = 488675

[screenshot]

Sealer 2

Last is 488675

[screenshot]

The second node didn't reach block 488676.

The hashes of block 488675 are different, but the difficulties are different too (1 and 2).

For other blocks, like block 8, the hashes are equal and the difficulty is 2 on both.

It seems like all the blocks have difficulty 2 except that conflicting one. Did you find any logical explanation for that?

Btw, I don't know why difficulty = 2, since the genesis file uses 0x1.

Thoughts?

@jmank88
Contributor

jmank88 commented Jan 8, 2019

The in-turn signer always signs with difficulty 2. Out-of-turn signers sign with difficulty 1. This is built into the clique protocol, and it is the primary cause of this problem in the first place. It looks like you have 6 signers. You will have to check them all to make sense of this.
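As a side note, per EIP-225 the in-turn signer for block N is the signer at index N % SIGNER_COUNT in the sorted signer list, so you can work out who was expected to seal a given height; a small console sketch (again assuming the clique API is available on that endpoint):

// expected in-turn signer for the next block (clique keeps the signer list sorted ascending)
var signers = clique.getSigners();
var next = eth.blockNumber + 1;
console.log("in-turn signer for block", next, "is", signers[next % signers.length]);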

@marcosmartinez7
Author

marcosmartinez7 commented Jan 8, 2019

So, if I found two signers (among my 6) with the same difficulty and a different hash, the deadlock would make sense, right?

Same block, different difficulty and different hash doesn't prove anything?

I have deleted the chain data of the other node with the same last block, 488675.

#fail

@jmank88
Contributor

jmank88 commented Jan 8, 2019

Not necessarily. Those kinds of ambiguous splits happen very frequently with clique and would normally sort themselves out.

Are you still trying to recover this chain?

@marcosmartinez7
Author

marcosmartinez7 commented Jan 8, 2019

If it isn't necessarily so and such splits normally sort themselves out, then maybe the deadlock theory isn't valid.

What did you mean by "It looks like you have 6 signers. You will have to check them all to make sense of this."?

About the chain: basically, I wanted to know what happened. I don't know if I can provide any kind of logs or anything else, since the sealers just stopped to wait for each other and I don't have any more information.

Also, hitting this scenario in a production environment sucks, since I can't continue mining, and there is nothing in go-ethereum that guarantees this will not happen again.

So, just to make things clearer: if block 488675 has a different difficulty and a different hash, doesn't that prove there was an issue? Is it normal to have different hashes when comparing in-turn with out-of-turn, then?

@jmank88
Contributor

jmank88 commented Jan 8, 2019

Resyncing the signers that you deleted may produce a different distributed state which doesn't deadlock. Or it could deadlock again right away (or at any point in the future). Making fundamental protocol changes to clique like we did for GoChain is necessary to avoid the possibility completely, but can't be applied to an existing chain (without coding in a custom hard fork). You could start a new chain with GoChain instead.

@jmank88
Contributor

jmank88 commented Jan 8, 2019

What did you mean by "It looks like you have 6 signers. You will have to check them all to make sense of this."?

They all have different views of the chain. You can't be sure why each one was stuck without looking at them all individually.

@marcosmartinez7
Author

Ok, but what am I looking for?

Right now I'm deleting the chain data on all the nodes except 1 and resyncing the rest of them (5 signers) from that node.

About this comment:

"By getting the last 2 blocks from each node, you should be able to see exactly why they are stuck based on their view of the world. They all think that they have signed too recently, so they must disagree on what the last few blocks are supposed to be, so you'll see different hashes and signers for blocks with the same number (and difficulty!)."

If I see two in-turn or two out-of-turn blocks with the same difficulty and different hashes, will that confirm that they think they have signed recently?

@jmank88
Contributor

jmank88 commented Jan 8, 2019

If I see two in-turn or two out-of-turn blocks with the same difficulty and different hashes, will that confirm that they think they have signed recently?

If they logged that they signed too recently then you can trust that they did. Inspecting the recent blocks would just give you a more complete picture of what exactly happened.

@marcosmartinez7
Author

marcosmartinez7 commented Jan 8, 2019

Well, I deleted all the chain data on the 5 sealers and synced from 1.

It started to work again, but there is a sealer that seems to have connectivity issues or something.

The sealer starts with 6 peers, then goes to 4, 3, 2, then back to 4, 6, etc.

[screenshot]

And that's why I suppose the blocks are being lost, and probably why the synchronisation failure warning is thrown, since it is always the same node.

Any ideas why this is happening?

Connectivity issues since they are separate droplets?

Any way to troubleshoot this?

Thanks

@jmank88
Contributor

jmank88 commented Jan 8, 2019

I don't think the peer count is related to lost blocks, and neither peers nor lost blocks are related to the logical deadlock caused by the same-difficulty ambiguity.

Regardless, you can use static/trusted enodes to add the peers automatically.
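If it helps, peers can be pinned either from the console or via a static-nodes.json file in the data directory (a sketch; the enode URL below is a placeholder, not a real node, and admin.addTrustedPeer may not be present on older releases):

// from the geth console; substitute the real enode URL of the flaky sealer
admin.addPeer("enode://<pubkey>@<host>:<port>");
admin.addTrustedPeer("enode://<pubkey>@<host>:<port>");
// alternatively, list the same enode URLs in <datadir>/geth/static-nodes.json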

@marcosmartinez7
Author

marcosmartinez7 commented Jan 8, 2019

I add the nodes manually, but it is weird that one sealer is always having connectivity issues with the rest of the peers.

I will try the static/trusted nodes.

I will put the lost blocks in a separate issue, but I would like a response from the geth team about the initial problem, because it seems like I could run into another deadlock again.

Thanks @jmank88

PS: Do you think the block sealing time could be an issue here? I'm using 10 seconds.

@jmank88
Contributor

jmank88 commented Jan 8, 2019

'Lost blocks' are just blocks that were signed but didn't make the canonical chain. These happen constantly in clique, because most (~1/2) of the signers are eligible to sign at any given time, but only one block is chosen (usually the in-turn signer, with difficulty 2) - all of the other out-of-turn candidates become 'lost blocks'.

@jmank88
Contributor

jmank88 commented Jan 8, 2019

PS: Do you think the block sealing time could be an issue here? I'm using 10 seconds.

Faster times might increase the chances of bad luck or produce more opportunities for it to go wrong, but the fundamental problem always exists.

@marcosmartinez7
Author

Right, I understand. So nothing to worry about in a PoA network then?

About the timing, yeah, I completely agree.

Thanks a lot!

@jmank88
Contributor

jmank88 commented Jan 8, 2019

Right, I understand. So nothing to worry about in a PoA network then?

I'm not sure what you mean. IMHO the ambiguous difficulty issues are absolutely fatal flaws - the one affecting the protocol itself is much more severe, but the client changes I linked addressed deadlocks as well.

@jmank88
Contributor

jmank88 commented Jan 8, 2019

It's also worth noting that increasing the number of signers may reduce the chance of deadlock, as may having an odd number rather than an even one.
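For context, clique only allows each signer to seal one of any floor(N/2)+1 consecutive blocks, so roughly half of the signers are ineligible at any given height; a quick console sketch of these numbers, based on the EIP-225 limit:

// EIP-225: each signer may seal only one of any floor(N/2)+1 consecutive blocks
var n = clique.getSigners().length;
var signerLimit = Math.floor(n / 2) + 1;
console.log("signers:", n, "window:", signerLimit, "eligible at any height:", n - (signerLimit - 1));

With 6 signers that gives a window of 4 and only 3 eligible sealers per height, which is why ties between out-of-turn blocks are so common.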

@marcosmartinez7
Author

marcosmartinez7 commented Jan 8, 2019

Yes, sure. I mean, I didn't know about that, but it is really good information and I really appreciate it. I was talking about the lost block warning; your explanation makes sense for PoA.

About the number of signers, yes, I have read about that and it makes sense. I have also implemented a PoC with just 2 sealers and, maybe I'm lucky, but in 700k blocks I have not experienced this issue.

Right now I'm using an odd number.

@jmank88
Contributor

jmank88 commented Jan 8, 2019

Limiting to just 2 signers is a special case with no ambiguous same-difficulty blocks.

@marcosmartinez7
Author

marcosmartinez7 commented Jan 9, 2019

After removing 1 node and resyncing from the data of one of the nodes, I was running the network with 5 sealers without issues.

Summary:

  • Last block not sealed
  • Last block-1 sealed by 2 nodes

After 1 day it got stuck again, but now in an even weirder situation:

  • The 5 nodes are stuck at the same block
  • If I query the block on each node, I get 3 different hashes for the same block
  • The last block is always sealed with difficulty = 1, so there is no sealing with difficulty = 2; no in-turn sealing at all?
  • The last block - 1 was sealed twice: two nodes sealed it with difficulty 2

Last block 503076

Sealer 1

[screenshot]

Sealer 2

[screenshot]

Sealer 4 (out of turn, with a different hash and parent hash)

[screenshot]

Sealer 5 (out of turn, a third side chain)

[screenshot]

Sealer 6 (same hash as the side chain but a different parent)

[screenshot]

The number of signers is 5

[screenshot]

Each node is peered with 4 signers and 1 standard node

[screenshot]

Last block -1: 503075

Sealer 1 (out of turn)

[screenshot]

Sealer 2 (out of turn, same hash)

[screenshot]

Sealer 4 (out of turn, different hash, same parent...)

[screenshot]

Sealer 5 (in turn)

[screenshot]

Sealer 6 (in turn too)

[screenshot]

@jmank88
Contributor

jmank88 commented Jan 9, 2019

You can remove the stack traces; they are not necessary. This looks like a logical deadlock again. Can you double-check your screenshots?

@am0xf

am0xf commented Mar 9, 2020

Has this problem been solved? I have the exact same issue with 2 nodes, both sealers (a limited-resource setup). Even for contract deployment I get a hash and a contract address, but the contract is not present in the chain. A restart of the chain fixes it, though. After that, all transactions time out. Both nodes are connected (admin.peers shows 1 peer).

@marcosmartinez7
Author

Has this problem been solved? I have the exact same issue with 2 nodes, both sealers (a limited-resource setup). Even for contract deployment I get a hash and a contract address, but the contract is not present in the chain. A restart of the chain fixes it, though. After that, all transactions time out. Both nodes are connected (admin.peers shows 1 peer).

Are you sure it is the same problem? A restart should not fix it; the problem behind this issue creates a fork of the chain.

@am0xf

am0xf commented Mar 9, 2020

A restart only makes the initially deployed contract available. Without a restart I get the error "...is chain synced / contract deployed correctly?" I haven't found a fix for further transactions timing out.

@am0xf

am0xf commented Mar 13, 2020

Can we not run a PoA network with 1 sealer? Logs show "Signed recently. Must wait for others"

@cyrilnavessamuel

Hello,

In my private Clique network with 4 Nodes (A,B,C,D) I noticed a fork in the chain for block period 1.

I noticed that it happens sometimes with block periods 1 & 2.

I noticed that the fork happened at block height 1500, for example. Nodes A & D have the same chain data, while Nodes B & C have the same data (fork occurrence).

At block 1500, I noticed these differences between the 2 chains: 1) the block hashes are different; 2) the block on one fork is an uncle block, while the block on the other fork has 5000 txs included; 3) both blocks have the same difficulty, 2, which means both were mined in turn (a complication); 4) another complication is that I noticed it was the same sealer who sealed both blocks.

This results in a fork of the network and, in the end, stalling, with no reorg possible in this deadlock situation.

In previous comments I noticed that there were at least different difficulties and different sealers at the same block height between the forks.

Please can someone let me know if you have faced this issue, or give a logical explanation for it.

@lmvirufan

Do we have a solution for this issue?

We have also encountered the same issue on our network, which had 5 signers and worked fine for almost 2 months.
The block generation time was 1 sec.

It suddenly started to show the message:
"INFO [09-05|08:50:16.267] Looking for peers peercount=4 tried=4 static=0".

We tried to start mining by using the miner.start() function on all the miners/signers, but the network did not start mining, and 3 of the nodes showed a response something like:

INFO [09-05|08:53:23.471] Commit new mining work number=7961999 sealhash="d93ccf…cdb147" uncles=0 txs=0 gas=0 fees=0 elapsed="94.336µs"
INFO [09-05|08:53:23.471] Signed recently, must wait for others
INFO [09-05|08:57:23.483] Commit new mining work number=7961999 sealhash="c3f025…388121" uncles=0 txs=1 gas=21000 fees=0.0050589 elapsed="562.983µs"
INFO [09-05|08:57:23.484] Signed recently, must wait for others

and the other 2 showed the same response with number = 7961998.

The strange thing was that the transactions showing in the txpool were different:
2 nodes were showing 3 transactions in pending status.
2 nodes were showing 1 transaction in pending status.
1 node was showing 0 transactions in pending status.

Can anyone suggest what I should do so that all nodes start mining again? I've tried a few steps and solutions but they did not help.
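For anyone comparing pool contents across nodes, the console's txpool module shows each node's local view; a minimal sketch, assuming the txpool API is exposed on the endpoint you attach to:

// run on each node's console and compare the output
txpool.status            // counts of pending and queued transactions
txpool.inspect.pending   // per-account textual summary of pending transactions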

@q9f
Contributor

q9f commented Oct 5, 2020

Reviewed in team call: 5chdn's suggestions are good. We could solve this by making out-of-turn difficulty more complicated. There should be some deterministic order to out-of-turn blocks. karalabe fears that this will introduce too much protocol complexity or large reorgs.

Just came across this again. Here's Peter's comment on that matter: ethereum/EIPs#2181

We drafted EIPs 218{1,2,3} after EthCapeTown for consideration.

@abramsymons

It seems the Rinkeby network stopped working for more than an hour 3 times in only one month:

time                      block     stop time (minutes)
12/28/2020, 09:28:25 AM   7797794   771
12/02/2020, 07:04:35 AM   7648430   101
11/30/2020, 02:47:51 PM   7639067   64

Are the Rinkeby downtimes related to this issue?

@abramsymons

To reproduce deadlock:

  • Run a network with 5 sealers and stop one of them to have 4 sealers active
  • Use 1 second as block time
  • Rebuild geth with wiggleTime changed from 500 milliseconds to 1 millisecond to increase race conditions

With such a configuration, you should have 2-3 deadlocks each hour.

@abramsymons

abramsymons commented Jan 3, 2021

We experienced such deadlocks on IDChain and solved the issue by running a deadlock resolver script on all sealer nodes. It monitors the node and, if the chain has stopped, uses debug.setHead to return the node state to n/2+1 blocks ago, where n is the number of sealers. The only disadvantage of this approach to resolving the deadlock is that it increases the number of blocks required to wait for finality from n/2+1 to n/2+2.
The script uses the eth RPC API to get the last blocks, clique to get the number of signers to calculate n/2+1, debug to rewind the node state using debug.setHead, and miner to restart the miner after rewinding.
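A rough sketch of what that kind of watchdog logic could look like from the geth console (illustrative only, not the actual IDChain script; debug.setHead rewinds the chain, so use it with care):

// illustrative watchdog logic, not the actual IDChain resolver script
var before = eth.blockNumber;
// ... wait a few block periods here (e.g. from an external loop or cron), then re-check ...
var after = eth.blockNumber;
if (after === before) {
  var n = clique.getSigners().length;
  var target = after - (Math.floor(n / 2) + 1);  // rewind past the recent-signer window
  debug.setHead(web3.toHex(target));             // drop the conflicting head blocks
  miner.start();                                 // resume sealing from the rewound head
}

As noted above, the trade-off is that finality has to be counted from n/2+2 blocks back instead of n/2+1.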

@uxname

uxname commented Mar 1, 2022

We experienced such deadlocks on IDChain and solved the issue by running a deadlock resolver script on all sealer nodes. It monitors the node and, if the chain has stopped, uses debug.setHead to return the node state to n/2+1 blocks ago, where n is the number of sealers. The only disadvantage of this approach to resolving the deadlock is that it increases the number of blocks required to wait for finality from n/2+1 to n/2+2. The script uses the eth RPC API to get the last blocks, clique to get the number of signers to calculate n/2+1, debug to rewind the node state using debug.setHead, and miner to restart the miner after rewinding.

But in this case the last blocks disappear, right? For example, some transactions could be sent through Metamask, Metamask would show that the transaction has been completed, and then the block with this transaction could disappear?
And the second question: how many nodes do you have in your project? Is it possible that the more nodes there are, the lower the chance of a deadlock?

@pangzheng

We experienced such deadlocks on IDChain and solved the issue by running a deadlock resolver script on all sealer nodes. It monitors the node and, if the chain has stopped, uses debug.setHead to return the node state to n/2+1 blocks ago, where n is the number of sealers. The only disadvantage of this approach to resolving the deadlock is that it increases the number of blocks required to wait for finality from n/2+1 to n/2+2. The script uses the eth RPC API to get the last blocks, clique to get the number of signers to calculate n/2+1, debug to rewind the node state using debug.setHead, and miner to restart the miner after rewinding.

But in this case the last blocks disappear, right? For example, some transactions could be sent through Metamask, Metamask would show that the transaction has been completed, and then the block with this transaction could disappear? And the second question: how many nodes do you have in your project? Is it possible that the more nodes there are, the lower the chance of a deadlock?

Transactions should not go away

@Gelassen

Gelassen commented Apr 20, 2023

I have the same (or a similar?) issue after migrating from ethash to clique.

My scenario is 3 nodes with 7 seconds to mine a new block: I get err="signed recently, must wait for others" on the first blocks.

eth.getBlock("latest") shows that the 1st node mined the first block and the 2nd one on top of it. The last two nodes mined (or do they just share a copy of the ledger?) only the 1st block. The check was done using hashes. The 1st node's last record has difficulty 2, while the last two nodes' last record has difficulty 1.

However, I still don't get why a deadlock could happen in this case. A block has been mined and shared with the rest of the network. All the other nodes accept it and start computing the next block on top of it. In the case of race conditions, the algorithm chooses the block by the longest chain length, time of mining, order in the queue, etc. Where is the reason for a deadlock?

@jmank88, it seems you have the most experience with this issue and the workarounds for it. May I ask you to share more information about this, please?

@jmank88
Contributor

jmank88 commented Apr 20, 2023

In the case of race conditions, the algorithm chooses the block by the longest chain length, time of mining, order in the queue, etc. Where is the reason for a deadlock?

@jmank88, it seems you have the most experience with this issue and the workarounds for it. May I ask you to share more information about this, please?

I haven't dealt with this in a few years, but the general problem was that there is no fully deterministic choice in the case of same length - the tie-breaker was whichever block arrived first, which is subjective due to network latency/partitions/etc. and not representative of when blocks were created. The tie-breaker logic can be patched in the implementation as a workaround to avoid upgrading the difficulty protocol itself.

@Gelassen

@jmank88, thank you for the additional details regarding this issue!

@karalabe
Member

karalabe commented Jun 3, 2024

This was a long standing issue, but with the drop of Clique, we can elegantly close it without addressing it.

@karalabe karalabe closed this as completed Jun 3, 2024
@marcosmartinez7
Author

This was a long standing issue, but with the drop of Clique, we can elegantly close it without addressing it.

This was undoubtedly the longest-standing issue of my career.
It was enjoyable to see all the updates year after year.
