add byte limit implementation for cache #40
base: dev/iavl_data_locality
Conversation
@@ -53,6 +54,12 @@ func (fn *FastNode) GetKey() []byte {
	return fn.key
}

func (fn *FastNode) GetFullSize() int {
Can we add a godoc describing the value returned here? It's not super clear to me.
addressed
Co-authored-by: Aleksandr Bezobchuk <alexanderbez@users.noreply.github.com>
type lruCacheWithBytesLimit struct {
	lruCache
	bytesLimit       int
	curBytesEstimate int
why is it called estimate? Should we add a comment that it may potentially slightly undercount, but we deem it fine?
It is called an estimate because I did not find a straightforward way to measure the actual allocations.
I tried grabbing memory stats from the runtime to test, but the numbers were slightly off. I think that's because they don't account for the memory needed for the "slice metadata": the length, capacity, and pointer to the data.
This estimate is based on how slices and strings are represented in memory in Go. That's why I named it an estimate.
I'll add a comment about this.
Oh gotcha. Yeah, being off by a small constant due to Go memory layout details (which aren't guaranteed to be preserved across versions) is totally fine!
The comment has been added.
const (
	LRU             Type = 0
	LRU_node_limit  Type = 1
	LRU_bytes_limit Type = 2
)
thoughts on switching this to iota syntax? https://yourbasic.org/golang/iota/
Also are we using all three types?
We are using the last 2 in iavl; the first one is the abstract implementation that the last 2 build on.
I made it so that it can't be initialized outside of the cache package. However, it still needs to have its own GetType() method.
Do we need the first one to be implemented? Perhaps we make a follow-up issue to delete it?
Ohh, the point is it's needed, since we're doing things decorator style.
I guess I don't understand the type enum + decorator syntax combination. I don't think it needs to block the PR, but maybe we should make a follow-up issue about it.
The decorator pattern implies that LRU cache (the underlying implementation) can still be used on its own. We just don't use it in IAVL. Decorators are wrappers around the main abstraction to provide an additional layer of functionality. In our case, this functionality is limiting the cache.
We have three types implemented:
type Type int
const (
LRU Type = 0
LRU_node_limit Type = 1
LRU_bytes_limit Type = 2
)
Only the last 2 are used in IAVL directly. However, this is composable by design: if we want the limit to be removed in the future, we just swap the cache for the regular LRU type.
Let me know if this makes sense.
nodedb.go (outdated)
@@ -29,7 +29,7 @@ const (
	// Using semantic versioning: https://semver.org/
	defaultStorageVersionValue = "1.0.0"
	fastStorageVersionValue    = "1.1.0"
-	fastNodeCacheLimit = 100000
+	fastNodeCacheLimit = 100 * 1024 * 1024
we should rename this to fastNodeCacheBytesLimit right?
I am working on speedruns for db comparisons, so I am going to be a teensy bit bold and backport this now. I will report back on whether it successfully resolves the issue I am hitting with rocksdb (basically, v6 becomes extremely RAM hungry and I've not been able to do anything to change that).
Previous state: nodes would reach 128GB and be killed by the OOM reaper.
New state: unknown.
Speaking unscientifically, the update seems much faster.
Background
This PR introduces a byte limit implementation for the cache and switches the fast cache to this implementation. It introduces a small decrease in performance in exchange for a modular and reusable design, as well as the ability to keep track of the number of bytes used.
I deployed nodes with 50MB, 75MB, 100MB, and 150MB fast caches. The nodes are stable after running for a day, and I'm waiting until I have more time to better understand the RAM usage with this change.
Implementation Details
Refactored the cache module to have a general LRU cache implementation. Introduced node and byte limit decorators that wrap around the LRU cache and provide a composable design.

osmosis-labs/osmosis#1187