Optimise historic sequential state lookups #3873
Comments
@AgeManning I'd like to research this, if @michaelsproul has not started :) |
I haven't started, that would be great! Have a look in the beacon_node/store crate. I think that would be the most natural place to add the cache, and it should be used in get_cold_state, either as the result or as the starting point for the result. |
The most naive solution to this issue might be: add an LRU cache in … @michaelsproul WDYT? |
ping @michaelsproul :) |
I don't think a simple LRU cache on its own will be very useful, because it will only accelerate lookups for the exact same state. The way state lookups work is by retrieving a base state (a "restore point") and then replaying blocks on top of it. I'll write up how a single state lookup works so you can see what I mean.

Imagine the user wants to load the state at slot 8128. The …

Now imagine that the user wants to look up the state one epoch after the previous one, the state at slot 8160. Currently Lighthouse will load the state at slot 0 again and replay all the blocks from 1-8160. This is a huge waste, because if we had saved the state from slot 8128 we could just replay the last epoch of blocks, which is going to be much faster. To be able to take advantage of that cached state we need two things:
The algorithm for using the cache could plug in here to choose a different … To find a suitable state in the cache, we could use one of two obvious algorithms:
I probably prefer approach (1) for now, especially as it's probably only feasible to keep <32 states in the cache. |
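As a rough illustration of how a small cache of reconstructed cold states could shorten replay distances, here is a hedged Rust sketch. The types and names are hypothetical stand-ins, not Lighthouse's actual store API: a real cache would hold full `BeaconState` values in the `beacon_node/store` crate, whereas here a "state" is represented only by its slot.

```rust
use std::collections::BTreeMap;

/// Hypothetical sketch of a small state cache for the cold DB.
struct StateCache {
    max_len: usize,
    /// slot -> state (stand-in), ordered so we can find the closest
    /// cached state at or below a target slot.
    states: BTreeMap<u64, u64>,
}

impl StateCache {
    fn new(max_len: usize) -> Self {
        Self { max_len, states: BTreeMap::new() }
    }

    /// Cache a reconstructed state. Evicting the lowest slot is a stand-in
    /// for a real eviction policy (e.g. LRU).
    fn put(&mut self, slot: u64) {
        self.states.insert(slot, slot);
        if self.states.len() > self.max_len {
            let oldest = *self.states.keys().next().unwrap();
            self.states.remove(&oldest);
        }
    }

    /// Choose a replay base for `slot`: the closest cached state at or
    /// before it, falling back to the preceding restore point.
    fn replay_base(&self, slot: u64, slots_per_restore_point: u64) -> u64 {
        let restore_point = slot - (slot % slots_per_restore_point);
        match self.states.range(..=slot).next_back() {
            Some((&cached, _)) if cached >= restore_point => cached,
            _ => restore_point,
        }
    }
}

fn main() {
    let mut cache = StateCache::new(4);
    // The first lookup reconstructs slot 8128 from the restore point
    // and caches the result.
    cache.put(8128);
    // A follow-up lookup at slot 8160 can now replay 32 blocks instead of
    // 8160 (assuming 8192 slots per restore point).
    assert_eq!(cache.replay_base(8160, 8192), 8128);
    // With nothing cached at or below the target, fall back to the
    // restore point.
    assert_eq!(cache.replay_base(100, 8192), 0);
}
```

Keeping the cache keyed and ordered by slot (rather than a plain LRU map) is what lets a *different* but nearby lookup benefit, which is the crux of the comment above.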
@michaelsproul appreciate your detailed explanation :) So the cache would only work for states in the cold DB? In the hot DB we save a full state on each epoch boundary, so it wouldn't be too wasteful to replay there. WDYT? |
Yeah, let's just target the cold DB. You're right that replaying in the hot DB isn't so bad, and we have another long-running project to make the caching of hot states more efficient (#3206). |
## Issue Addressed

#3873

## Proposed Changes

Add a cache to optimise historical state lookup.

## Additional Info

N/A

Co-authored-by: Michael Sproul <micsproul@gmail.com>
Implemented in #4228 |
Description
When searching for sequential historical states, Lighthouse repeatedly reconstructs the state from the last checkpoint.
We could optimize requests of this kind by temporarily caching the reconstructed state for use by subsequent calls, avoiding a significant amount of repeated state reconstruction.
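To make the potential saving concrete, here is a hedged back-of-the-envelope sketch in Rust using the slot numbers discussed in this thread. The cost model and helper name are hypothetical: it simply counts how many blocks each lookup must replay on top of its base state.

```rust
/// Hypothetical cost model: looking up a state replays (target - base)
/// blocks on top of a base state.
fn blocks_replayed(target_slot: u64, base_slot: u64) -> u64 {
    target_slot.saturating_sub(base_slot)
}

fn main() {
    // Without caching: two sequential lookups (slots 8128 and 8160) both
    // replay from the restore point at slot 0.
    let uncached = blocks_replayed(8128, 0) + blocks_replayed(8160, 0);
    // With caching: the second lookup starts from the cached slot-8128
    // state and replays only one epoch (32 blocks).
    let cached = blocks_replayed(8128, 0) + blocks_replayed(8160, 8128);
    assert_eq!(uncached, 16288);
    assert_eq!(cached, 8160);
}
```

Under these assumptions the second lookup drops from 8160 replayed blocks to 32, roughly halving the total work for just two sequential queries; longer sequential scans benefit proportionally more.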