feat: prune BlsToExecutionChange opPool with head state #6252
Conversation
Codecov Report
Additional details and impacted files

```
@@            Coverage Diff            @@
##           unstable    #6252   +/-   ##
=========================================
  Coverage     80.31%    80.31%
=========================================
  Files           202       202
  Lines         19543     19543
  Branches       1169      1169
=========================================
  Hits          15695     15695
  Misses         3820      3820
  Partials         28        28
```
Performance Report
✔️ no performance regression detected
Full benchmark results
```ts
const recentBlsToExecutionChangeIndexes = new Set(
  recentBlsToExecutionChanges.map((blsToExecutionChange) => blsToExecutionChange.message.validatorIndex)
);
```
This check looks insufficient. The prune handler is only called once per finalization event ~1 time / epoch. Then only the execution changes included in that block will be pruned, with all remaining changes staying there.
A more complete solution, using our state caches:
Attach new caches to the state:
- appliedBlsToExecutionChangesCurrentEpoch
- appliedBlsToExecutionChangesPreviousEpoch
During block processing you register the validator index of each applied execution change. During epoch transition you rotate the caches. Then you prune using the appliedBlsToExecutionChangesPreviousEpoch list. This gives you the complete list of items to prune as long as there are no re-orgs deeper than 1 epoch, which is an okay compromise.
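A minimal sketch of what such a rotating cache could look like, using plain Sets for clarity rather than the ImmutableJS structures discussed below; the class and method names here are hypothetical:

```ts
// Illustrative sketch only: plain Sets instead of ImmutableJS, hypothetical names.
type ValidatorIndex = number;

class AppliedBlsToExecutionChangeCache {
  private appliedBlsToExecutionChangesCurrentEpoch = new Set<ValidatorIndex>();
  private appliedBlsToExecutionChangesPreviousEpoch = new Set<ValidatorIndex>();

  /** During block processing: register the validator index of each applied change. */
  onAppliedBlsToExecutionChange(validatorIndex: ValidatorIndex): void {
    this.appliedBlsToExecutionChangesCurrentEpoch.add(validatorIndex);
  }

  /** During epoch transition: rotate the caches. */
  onEpochTransition(): void {
    this.appliedBlsToExecutionChangesPreviousEpoch = this.appliedBlsToExecutionChangesCurrentEpoch;
    this.appliedBlsToExecutionChangesCurrentEpoch = new Set<ValidatorIndex>();
  }

  /** Complete set of indexes to prune from the opPool, assuming no re-org deeper than 1 epoch. */
  getIndexesToPrune(): Set<ValidatorIndex> {
    return this.appliedBlsToExecutionChangesPreviousEpoch;
  }
}
```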
> Then only the execution changes included in that block will be pruned, with all remaining changes staying there.

Just to clarify, execution changes in the head block are not pruned; the others are. So we store BLSToExecutionChange messages for at most 33 slots.
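For illustration, a rough sketch of this pruning behaviour under simplified assumptions; the pool shape, the function signature, and the hasEth1WithdrawalCredential check against the head state are placeholders, not the actual Lodestar opPool API:

```ts
// Rough sketch, not the actual opPool implementation.
type ValidatorIndex = number;

interface SignedBLSToExecutionChange {
  message: {validatorIndex: ValidatorIndex};
  signature: Uint8Array;
}

function pruneBlsToExecutionChanges(
  // opPool entries keyed by validator index (placeholder shape)
  pool: Map<ValidatorIndex, SignedBLSToExecutionChange>,
  // changes included in the head block
  recentBlsToExecutionChanges: SignedBLSToExecutionChange[],
  // true if the validator's withdrawal credential in the head state is already an eth1 credential
  hasEth1WithdrawalCredential: (index: ValidatorIndex) => boolean
): void {
  // Keep changes from the head block so they survive a 1-slot re-org
  const recentBlsToExecutionChangeIndexes = new Set(
    recentBlsToExecutionChanges.map((blsToExecutionChange) => blsToExecutionChange.message.validatorIndex)
  );

  for (const validatorIndex of [...pool.keys()]) {
    if (!recentBlsToExecutionChangeIndexes.has(validatorIndex) && hasEth1WithdrawalCredential(validatorIndex)) {
      pool.delete(validatorIndex);
    }
  }
}
```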
> A more complete solution, using our state caches

The solution works, however I think it's too much for this feature since we'd have to use ImmutableJS to maintain these 2 arrays, like in the EIP-6110 implementation #6042. It's also tricky in the edge case (see the table below).
I think there are pros and cons for these 2 solutions:
| # | how long BLSToExecutionChange messages stay in the opPool | pros | cons |
|---|---|---|---|
| current | 3 epochs | straightforward | have to query the finalized state |
| no state cache (this PR) | 1 slot to 33 slots | no state cache is needed, lighthouse has run it since capella | covers the 1-slot reorg scenario only |
| state cache (Lion's suggestion) | 32 slots to 64 slots | covers up to a 1-epoch reorg scenario | need to maintain 2 ImmutableJS arrays, and need to think about the edge case where the finalized checkpoint happens >4s after the clock epoch |
I prefer the no state cache approach as it's worked for lighthouse since capella, and in the worst case, if a reorg happens, it won't affect block reward (if it's the bn's turn to propose a block) and the messages will be added back once the gossip seenTTL passes anyway. I'd like to know others' opinions, cc @wemeetagain @g11tech
I think what @tuyennhv proposed here is sufficient and his reasoning is sound.
Sounds good, thanks for the table! Please monitor metrics after deploying to check that the pool is not growing.
The heuristic is fine I guess, since no one is producing these messages anyway and deep reorgs are rare on mainnet.
🎉 This PR is included in v1.14.0 🎉
* feat: prune BlsToExecutionChange opPool with head state
* fix: address PR comment
Motivation
TODO list

Description
This is a prerequisite for #6250