Make Cache More Memory Efficient #77546

Merged

Conversation

original-brownbear (Member) commented Sep 10, 2021

Found the memory savings while looking at some heap dumps and added the indirection savings on top:

No need to keep counters per segment when we only use them in aggregate.
Also, removed a few spots of needless indirection by making segments
reference the cache itself directly, and saved a bunch of empty maps
for idle caches on e.g. rarely used indices.

This halves the size of an empty cache (I think it was ~110k to ~60k in the empty case), which isn't a massive saving but also not entirely irrelevant when thinking about large index counts per node (plus it speeds up tests a little by saving 10M+ of heap during most internal cluster tests, from all kinds of empty caches :)).
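As a rough illustration of the two ideas above, here is a minimal sketch. The class and field names (SketchCache, Segment) and the use of LongAdder are assumptions for illustration only, not the actual Cache internals from this PR: counters live once on the cache instead of once per segment, segments reach the cache through the implicit outer-class reference rather than a stored pointer, and each segment's map is created lazily so idle caches carry no empty maps.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.LongAdder;

// Illustrative sketch only; names are hypothetical, not the real Cache code.
class SketchCache<K, V> {

    // Aggregate counters held once on the cache. Keeping a set of counters
    // per segment, only to sum them on read, wastes memory per segment.
    private final LongAdder hits = new LongAdder();
    private final LongAdder misses = new LongAdder();

    // Non-static inner class: each segment references the enclosing cache
    // via the implicit outer pointer, removing a level of indirection.
    private final class Segment {
        // Lazily created: an idle cache (e.g. for a rarely used index)
        // never allocates the map at all.
        private Map<K, V> map;

        V get(K key) {
            V value = map == null ? null : map.get(key);
            if (value == null) {
                misses.increment();
            } else {
                hits.increment();
            }
            return value;
        }

        void put(K key, V value) {
            if (map == null) {
                map = new HashMap<>();
            }
            map.put(key, value);
        }
    }
}
```

Locking is omitted from the sketch to keep the memory ideas visible; the real segments are lock-guarded, as the review discussion below notes.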

@elasticmachine added the Team:Core/Infra label Sep 10, 2021
@elasticmachine (Collaborator) commented:
Pinging @elastic/es-core-infra (Team:Core/Infra)

@@ -742,25 +733,29 @@ public long getEvictions() {
         }
     }
 
-    private boolean promote(Entry<K, V> entry, long now) {
-        boolean promoted = true;
+    private void promote(Entry<K, V> entry, long now) {
A contributor reviewer commented on this hunk:
Just a minor thought here: splitting this method to avoid double locking may not be worth the complexity. ReentrantLock will check whether the lock is already owned by the same thread and avoid the compare-and-set operation. There's still going to be a volatile int field increment, but it should be relatively inexpensive.
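For context, a minimal demonstration of the reentrancy the comment refers to (a generic example, not code from this PR): re-acquiring a ReentrantLock the current thread already holds does not block, it just bumps the lock's hold count, so a nested acquire from an already-locked section is cheap.

```java
import java.util.concurrent.locks.ReentrantLock;

// Generic demonstration of ReentrantLock reentrancy; not code from this PR.
public class ReentrancyDemo {
    private final ReentrantLock lock = new ReentrantLock();

    void outer() {
        lock.lock();
        try {
            inner(); // re-acquires the lock we already hold
        } finally {
            lock.unlock();
        }
    }

    void inner() {
        // Already owned by this thread: the acquire increments the hold
        // count instead of contending, so the nested lock() is inexpensive.
        lock.lock();
        try {
            System.out.println("hold count: " + lock.getHoldCount()); // 2 when called via outer()
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        new ReentrancyDemo().outer();
    }
}
```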

original-brownbear (Member, Author) replied:

Fair point :) Reverted that part of the change.

@grcevski (Contributor) left a review:
Looks good to me. LGTM!

original-brownbear (Member, Author):
Thanks Nikola!

@original-brownbear original-brownbear merged commit 6ed3eaa into elastic:master Sep 13, 2021
@original-brownbear original-brownbear deleted the more-efficient-cache branch September 13, 2021 16:50
original-brownbear added a commit to original-brownbear/elasticsearch that referenced this pull request Sep 13, 2021
original-brownbear added a commit that referenced this pull request Sep 13, 2021
@original-brownbear original-brownbear restored the more-efficient-cache branch April 18, 2023 20:41
Labels: :Core/Infra/Core, >non-issue, Team:Core/Infra, v7.16.0, v8.0.0-beta1