common/lru: add generic LRU implementation #26162
Conversation
It seems that there is no fully typed library implementation of an LRU cache. So I wrote one. The method names are the same as github.com/hashicorp/golang-lru, so the new type can be used as a drop-in replacement.
Two optimizations here:
- Code to add/remove elements is simplified because the list is now 'anchored' in a root element.
- In case of eviction, the list element used by the evicted item is re-used, and no allocations are performed by Add.
This can save some space when type V is smaller than a machine word.
```diff
 // Add adds a value to the cache. Returns true if an eviction occurred.
 // OBS: This cache assumes that items are content-addressed: keys are unique per content.
 // In other words: two Add(..) with the same key K, will always have the same value V.
 // OBS: The value is _not_ copied on Add, so the caller must not modify it afterwards.
-func (c *SizeConstrainedLRU) Add(key common.Hash, value []byte) (evicted bool) {
+func (c *SizeConstrainedCache[K, V]) Add(key K, value V) (evicted bool) {
```
One thing that I feel should be checked is the behaviour with `nil` values. Right now, we never store `nil` nor empty code in the codecache, but that might change. So basically: how does the `SizeConstrainedCache` behave with regards to `nil` vs 0-length byte arrays? (And how do we want it to behave?) I'm just writing this down here so it isn't forgotten.
```go
// TestBlobEmpties tests that empty/nil values are handled correctly.
func TestBlobEmpties(t *testing.T) {
	lru := NewBlobLRU[common.Hash, []byte](100)
	// This test abuses the lru a bit, using different keys for identical value(s).
	for i := 0; i < 10; i++ {
		lru.Add(common.Hash{byte(i)}, []byte{})
		lru.Add(common.Hash{byte(255 - i)}, nil)
	}
	// The size should not count, only the values count. So this could be a DoS
	// since it basically has no cap, and it is intentionally overloaded with
	// different-keyed 0-length values.
	if have, want := lru.size, uint64(0); have != want {
		t.Fatalf("size wrong, have %d want %d", have, want)
	}
	for i := 0; i < 10; i++ {
		if v, ok := lru.Get(common.Hash{byte(i)}); !ok {
			t.Fatalf("test %d: expected presence", i)
		} else if v == nil {
			t.Fatalf("test %d, v is nil", i)
		}
		if v, ok := lru.Get(common.Hash{byte(255 - i)}); !ok {
			t.Fatalf("test %d: expected presence", i)
		} else if v != nil {
			t.Fatalf("test %d, v is not nil", i)
		}
	}
}
```
We could use this test. It even passes :)
Added the test.
LGTM
This upstream PR removed use of fastcache in favor of a size-bounded LRU cache ethereum/go-ethereum#26162
FWIW looks like
To be fair, I did not check for open PRs on golang-lru at the time. I was pretty happy to find an opportunity for writing some plain-simple generics code.
As for switching back, it's unlikely for several reasons:
- I found that embedding the linked-list implementation enables additional optimizations in BasicLRU that weren't possible before. I don't know if these are possible and/or wanted upstream. We could try, but... honestly it feels easier to just maintain our own version here. It's not a massive amount of code anyway. It's also very unlikely to have bugs.
- We also need a package to house our own cache-related code (only the size-limited LRU), and it's nice being able to have everything together.
- The golang-lru API has some quirks that make it annoying to use:
It seems there is no fully typed library implementation of an LRU cache. So I wrote one.
Method names are the same as github.com/hashicorp/golang-lru and the new type can be used as a drop-in replacement (mostly).
Two reasons to do this:
- It's much easier to understand what a cache is for when the types are right there.
- Performance: the new implementation is slightly faster and performs zero memory allocations in `Add` when the cache is at capacity. Overall, memory usage of the cache is much reduced because keys and values are no longer wrapped in interfaces. Results for `BasicLRU[int, int]` below.