
common/lru: add generic LRU implementation #26162

Merged: 23 commits into ethereum:master on Nov 14, 2022

Conversation

fjl (Contributor) commented on Nov 11, 2022

It seems there is no fully typed library implementation of an LRU cache. So I wrote one.
Method names are the same as github.com/hashicorp/golang-lru, and the new type can be used as a drop-in replacement (mostly).

Two reasons to do this:

  • It's much easier to understand what a cache is for when the types are right there.
  • Performance: the new implementation performs zero memory allocations in Add
    when the cache is at capacity. Results for BasicLRU[int, int] below.
goos: darwin
goarch: arm64
pkg: github.com/ethereum/go-ethereum/common/lru
BenchmarkLRU/Add/BasicLRU-8         	20671774	        58.07 ns/op	       0 B/op	       0 allocs/op
BenchmarkLRU/Add/simplelru.LRU-8    	 8095117	       146.8 ns/op	      96 B/op	       3 allocs/op
BenchmarkLRU/Get/BasicLRU-8         	56364932	        21.17 ns/op	       0 B/op	       0 allocs/op
BenchmarkLRU/Get/simplelru.LRU-8    	43399506	        27.83 ns/op	       0 B/op	       0 allocs/op
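As a usage illustration of the typed API, here is a minimal sketch. The NewBasicLRU constructor name and exact signatures below are assumed from the PR's naming convention, not quoted from the package docs:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common/lru"
)

func main() {
	// Both type parameters are explicit, so Get returns a string
	// directly instead of an interface{} that must be type-asserted.
	cache := lru.NewBasicLRU[uint64, string](3)

	cache.Add(1, "0xaa")
	cache.Add(2, "0xbb")
	cache.Add(3, "0xcc")

	// The cache is now at capacity: this Add evicts key 1 (least
	// recently used) and, per the benchmark numbers above, reuses its
	// list element instead of allocating a new one.
	evicted := cache.Add(4, "0xdd")
	fmt.Println(evicted) // true

	_, ok := cache.Get(1)
	fmt.Println(ok) // false: key 1 was evicted
}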

fjl requested a review from zsfelfoldi as a code owner on November 11, 2022.

fjl added 10 commits on November 11, 2022. From the commit messages:
Two optimizations here:

- Code to add/remove elements is simplified because the list is
  now 'anchored' in a root element.

- In case of eviction, the list element used by the evicted item
  is re-used, and no allocations are performed by Add.
This can save some space when type V is smaller than a machine word.
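A minimal sketch of the pattern those commit messages describe: a list anchored in a sentinel root, element reuse on eviction, and the value stored inline in the list element (which is presumably where the space saving for small V comes from). Type and field names here are illustrative, not the PR's actual code:

package lrusketch

// elem is a node of a doubly-linked list that is "anchored" in a sentinel
// root: in an empty list, root.next and root.prev point at root itself,
// so insertion and removal never need nil checks or special cases.
type elem[K comparable, V any] struct {
	next, prev *elem[K, V]
	key        K
	value      V
}

// cache is a bare-bones LRU built on the anchored list. Assumes capacity > 0.
type cache[K comparable, V any] struct {
	root  elem[K, V] // sentinel: root.next is the front, root.prev the back
	items map[K]*elem[K, V]
	cap   int
}

func newCache[K comparable, V any](capacity int) *cache[K, V] {
	c := &cache[K, V]{items: make(map[K]*elem[K, V], capacity), cap: capacity}
	c.root.next = &c.root
	c.root.prev = &c.root
	return c
}

// unlink removes e from the list; no nil checks needed thanks to the root.
func (c *cache[K, V]) unlink(e *elem[K, V]) {
	e.prev.next = e.next
	e.next.prev = e.prev
}

// pushFront links e in right after the root.
func (c *cache[K, V]) pushFront(e *elem[K, V]) {
	e.prev = &c.root
	e.next = c.root.next
	c.root.next.prev = e
	c.root.next = e
}

// Add inserts or refreshes an entry. When the cache is full, the
// least-recently-used element is unlinked and its struct is reused for
// the new entry, so the hot path performs zero allocations.
func (c *cache[K, V]) Add(k K, v V) (evicted bool) {
	if e, ok := c.items[k]; ok {
		e.value = v
		c.unlink(e)
		c.pushFront(e)
		return false
	}
	var e *elem[K, V]
	if len(c.items) >= c.cap {
		e = c.root.prev // the back of the list is the LRU entry
		c.unlink(e)
		delete(c.items, e.key)
		evicted = true
	} else {
		e = new(elem[K, V])
	}
	e.key, e.value = k, v
	c.pushFront(e)
	c.items[k] = e
	return evicted
}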

Review discussion on the Add method of the size-constrained cache, which this PR generalizes from SizeConstrainedLRU to SizeConstrainedCache[K, V]:

// Add adds a value to the cache. Returns true if an eviction occurred.
// OBS: This cache assumes that items are content-addressed: keys are unique per content.
// In other words: two Add(..) calls with the same key K will always have the same value V.
// OBS: The value is _not_ copied on Add, so the caller must not modify it afterwards.
-func (c *SizeConstrainedLRU) Add(key common.Hash, value []byte) (evicted bool) {
+func (c *SizeConstrainedCache[K, V]) Add(key K, value V) (evicted bool) {
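For context on that contract, a sketch of how such a size-constrained Add can be implemented. The innerLRU interface, field names, and RemoveOldest helper are illustrative assumptions, not the PR's actual code:

package lrusketch

// innerLRU is the small slice of LRU behaviour this sketch needs; in the
// real package this role is played by the generic LRU type.
type innerLRU[K comparable, V any] interface {
	Contains(key K) bool
	Add(key K, value V) bool
	RemoveOldest() (key K, value V, ok bool)
}

// SizeConstrainedCache bounds the cache by the total byte size of the
// stored values rather than by entry count. V is constrained to
// byte-slice types.
type SizeConstrainedCache[K comparable, V ~[]byte] struct {
	size    uint64         // running total size of stored values
	maxSize uint64         // the size limit, in bytes
	lru     innerLRU[K, V] // underlying LRU holding the entries
}

// Add stores the value, then evicts least-recently-used entries until the
// total size is back under the limit. Because entries are assumed to be
// content-addressed, re-adding an existing key is a no-op for the size.
func (c *SizeConstrainedCache[K, V]) Add(key K, value V) (evicted bool) {
	if c.lru.Contains(key) {
		return false // same key implies same content: nothing to update
	}
	c.size += uint64(len(value))
	c.lru.Add(key, value)
	for c.size > c.maxSize {
		_, v, ok := c.lru.RemoveOldest()
		if !ok {
			break
		}
		c.size -= uint64(len(v))
		evicted = true
	}
	return evicted
}

Under this accounting, a nil or zero-length value adds nothing to size, which is exactly the edge case the review comments below dig into.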
Contributor

One thing that I feel should be checked is the behaviour with nil values. Right now, we never store nil nor empty code in the codecache, but that might change.

So basically: how does the SizeConstrainedCache behave with regard to nil vs 0-length byte slices? (And how do we want it to behave?)

I'm just writing this down here so it isn't forgotten.

Contributor

// TestBlobEmpties tests that empty/nil values are handled correctly.
func TestBlobEmpties(t *testing.T) {
	lru := NewBlobLRU[common.Hash, []byte](100)
	// This test abuses the lru a bit, using different keys for identical value(s).
	for i := 0; i < 10; i++ {
		lru.Add(common.Hash{byte(i)}, []byte{})
		lru.Add(common.Hash{byte(255 - i)}, nil)
	}
	// The size should not count, only the values count. So this could be a DoS
	// since it basically has no cap, and it is intentionally overloaded with
	// different-keyed 0-length values.
	if have, want := lru.size, uint64(0); have != want {
		t.Fatalf("size wrong, have %d want %d", have, want)
	}
	for i := 0; i < 10; i++ {
		if v, ok := lru.Get(common.Hash{byte(i)}); !ok {
			t.Fatalf("test %d: expected presence", i)
		} else if v == nil {
			t.Fatalf("test %d, v is nil", i)
		}
		if v, ok := lru.Get(common.Hash{byte(255 - i)}); !ok {
			t.Fatalf("test %d: expected presence", i)
		} else if v != nil {
			t.Fatalf("test %d, v is not nil", i)
		}
	}
}

We could use this test. It even passes :)

Contributor Author

Added the test.

@holiman (Contributor) left a comment

LGTM

fjl merged commit 9afc681 into ethereum:master on Nov 14, 2022 and added this to the 1.11.0 milestone.
Tristan-Wilson added a commit to OffchainLabs/go-ethereum that referenced this pull request on Feb 28, 2023:

This upstream PR removed use of fastcache in favor of a size-bounded LRU cache: ethereum/go-ethereum#26162
@ryanschneider (Contributor)

FWIW it looks like golang-lru/v2 with generics support landed the same day this PR was merged (weird coincidence?). Just mentioning it in case you want to switch to it and slightly lower the amount of code you maintain.

@fjl (Contributor Author) commented on Mar 21, 2023

To be fair, I did not check for open PRs on golang-lru at the time. I was pretty happy to find an opportunity to write some plain, simple generics code.

As for switching back: it's unlikely, for several reasons. I found that embedding the linked-list implementation enables additional optimizations in BasicLRU that weren't possible before. I don't know if these are possible and/or wanted upstream. We could try, but... honestly it feels easier to just maintain our own version here. It's not a massive amount of code anyway, and it's very unlikely to have bugs.

We also need a package to house our own cache-related code (currently just the size-limited LRU), and it's nice to have everything together.

The golang-lru API has some quirks that make it annoying to use: lru.New returns an error, which is stupid, and the 'unlocked' LRU lives in a separate package.
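To illustrate the quirk: the golang-lru/v2 calls below follow its published constructor signature, while the common/lru constructor name is assumed from this PR's style rather than quoted from its docs:

package main

import (
	hlru "github.com/hashicorp/golang-lru/v2"

	"github.com/ethereum/go-ethereum/common/lru"
)

func main() {
	// golang-lru/v2: New returns an error, which can only fire for a
	// non-positive size, so every caller carries a dead error path.
	hc, err := hlru.New[string, int](128)
	if err != nil {
		panic(err)
	}
	hc.Add("a", 1)

	// common/lru: the constructor just returns the cache.
	c := lru.NewBasicLRU[string, int](128)
	c.Add("a", 1)
}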

shekhirin pushed a commit to shekhirin/go-ethereum that referenced this pull request on Jun 6, 2023:

It seems there is no fully typed library implementation of an LRU cache.
So I wrote one. Method names are the same as github.com/hashicorp/golang-lru,
and the new type can be used as a drop-in replacement.

Two reasons to do this:

- It's much easier to understand what a cache is for when the types are right there.
- Performance: the new implementation is slightly faster and performs zero memory
  allocations in Add when the cache is at capacity. Overall, memory usage of the
  cache is much reduced because keys and values are no longer wrapped in interfaces.
Between Mar 2024 and Nov 2024, further commits in gzliudan/XDPoSChain, JukLee0ira/XDPoSChain, and XinFinOrg/XDPoSChain also referenced this pull request.