fix: increase trie cache size #5706

Merged
merged 6 commits into from
Dec 8, 2021
6 changes: 5 additions & 1 deletion core/store/src/trie/trie_storage.rs
@@ -118,8 +118,12 @@ impl TrieStorage for TrieMemoryPartialStorage {
}

/// Maximum number of cache entries.
/// The value was chosen to fit comfortably into RAM. RAM spent on the trie cache should not
/// exceed 50_000 * 4 (number of shards) * TRIE_LIMIT_CACHED_VALUE_SIZE = 800 MB.
/// In our tests on a single shard, the cache barely occupied 40 MB; memory usage was dominated by
/// the state cache, which has a 512 MB limit. Total RAM usage for a single shard was 1 GB.
#[cfg(not(feature = "no_cache"))]
const TRIE_MAX_CACHE_SIZE: usize = 10000;
@pmnoxx (Contributor) commented on Dec 8, 2021
Yes, I think `Self(Arc::new(Mutex::new(SizedCache::with_size(TRIE_MAX_CACHE_SIZE))))` could be the problem.

Replacing it with SyncLruCache could help a little bit: #5632.
Once that gets approved, I'll work on an even better version, as there are a few other issues with it that could cause big problems.

It is theoretically possible that the current implementation of TrieCache causes the issues.
I have a plan to fix it, but I need to wait for #5632 to be approved first.

To improve:

  • Use SyncLruCache, which uses a better-implemented LruCache, instead of the `cached` crate.
  • Instead of a single Mutex, use sharded locking: with, say, 16 smaller caches, the chance of hitting a busy mutex is much lower. Latency spikes can happen if a thread is swapped out while holding a mutex, etc.
  • There are a few other issues with the cache, but those need to wait for review first.
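The sharded-mutex idea above can be sketched in a few lines. This is not the SyncLruCache from #5632 — `ShardedCache`, its shard count, and its methods are hypothetical names for illustration, and eviction is omitted entirely; the point is only that keys hash to one of several independently locked maps, so two threads touching different shards never contend on the same mutex.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};
use std::sync::Mutex;

/// Hypothetical shard count; the comment above suggests something like 16.
const NUM_SHARDS: usize = 16;

/// Minimal sketch of a sharded cache: one Mutex per shard instead of one
/// Mutex around the whole cache. (No LRU eviction here, unlike the real cache.)
struct ShardedCache {
    shards: Vec<Mutex<HashMap<u64, Vec<u8>>>>,
}

impl ShardedCache {
    fn new() -> Self {
        Self {
            shards: (0..NUM_SHARDS).map(|_| Mutex::new(HashMap::new())).collect(),
        }
    }

    /// Pick the shard for a key by hashing it; only that shard's mutex is taken.
    fn shard_for(&self, key: u64) -> &Mutex<HashMap<u64, Vec<u8>>> {
        let mut hasher = DefaultHasher::new();
        key.hash(&mut hasher);
        &self.shards[(hasher.finish() as usize) % NUM_SHARDS]
    }

    fn put(&self, key: u64, value: Vec<u8>) {
        self.shard_for(key).lock().unwrap().insert(key, value);
    }

    fn get(&self, key: u64) -> Option<Vec<u8>> {
        self.shard_for(key).lock().unwrap().get(&key).cloned()
    }
}

fn main() {
    let cache = ShardedCache::new();
    cache.put(1, b"trie node".to_vec());
    assert_eq!(cache.get(1), Some(b"trie node".to_vec()));
    assert_eq!(cache.get(2), None);
}
```

Since each lock guards only its own shard, a thread swapped out while holding one mutex blocks at most 1/NUM_SHARDS of the key space rather than the whole cache.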

Contributor

Anyway, that's a bit off topic and not blocking this PR. I think increasing the cache size is an excellent idea!

const TRIE_MAX_CACHE_SIZE: usize = 50000;
Longarithm marked this conversation as resolved.

#[cfg(feature = "no_cache")]
const TRIE_MAX_CACHE_SIZE: usize = 1;
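The 800 MB bound in the new doc comment can be sanity-checked with quick arithmetic. The per-entry value size is not shown in this diff; the 4_000-byte figure below is an assumption, back-derived from the stated numbers (800 MB / (50_000 entries × 4 shards) = 4_000 bytes, taking 1 MB = 10^6 bytes):

```rust
fn main() {
    // Values from the doc comment in this diff.
    const TRIE_MAX_CACHE_SIZE: usize = 50_000;
    const NUM_SHARDS: usize = 4;
    // Assumed, not shown in the diff: implied by the 800 MB figure.
    const TRIE_LIMIT_CACHED_VALUE_SIZE: usize = 4_000;

    let bound_bytes = TRIE_MAX_CACHE_SIZE * NUM_SHARDS * TRIE_LIMIT_CACHED_VALUE_SIZE;
    assert_eq!(bound_bytes, 800_000_000); // 800 MB, as the comment states
}
```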