Trigger run_pending_tasks() in eviction listener [moka 0.12] #344
-
Is there a method to run the pending task of interest from inside the eviction listener function itself? (It looks like a chicken-and-egg problem: the cache has not been created yet, but I already want to capture a reference to it in the listener.) Or is this a bug, and should the pending task have been run already? (I can see that the next time the Cache is called it drops the Arc pointer, but by then the eviction listener has already been processed, so I am looking at a count that is too high.)
Replies: 1 comment
-
Bug #345 will be one of the reasons for it. But even after fixing it, you will still see a too-high count (see the explanation after the code). One way to work around this is to store the key-value pairs in a separate DashMap and use a moka Cache only as a TTL tracker:
// Cargo.toml
//
// [dependencies]
// dashmap = "5.5.1"
// moka = "0.11.3"
use std::{
borrow::Borrow, collections::hash_map::RandomState, hash::Hash, sync::Arc, time::Duration,
};
use dashmap::mapref::{entry::Entry, one::Ref};
pub struct MyCache<K, V, S> {
map: Arc<dashmap::DashMap<K, Vec<V>, S>>,
ttl_tracker: moka::sync::Cache<K, ()>,
}
impl<K, V> MyCache<K, V, RandomState>
where
K: Hash + Eq + Send + Sync + 'static,
V: Default + Send + Sync + 'static,
{
pub fn new(max_capacity: u64, ttl: Duration) -> Self {
use moka::notification::RemovalCause;
let map = Arc::new(dashmap::DashMap::new());
let map1 = Arc::clone(&map);
let eviction_listener = move |key: Arc<K>, _v, cause| {
match cause {
RemovalCause::Expired | RemovalCause::Size => {
// Remove the key from DashMap.
map1.remove(&*key);
}
RemovalCause::Explicit | RemovalCause::Replaced => (), // ignore
}
};
let ttl_tracker = moka::sync::Cache::builder()
.max_capacity(max_capacity)
.time_to_live(ttl)
.eviction_listener(eviction_listener)
.build();
Self { map, ttl_tracker }
}
pub fn get<Q>(&self, key: &Q) -> Option<Ref<'_, K, Vec<V>, RandomState>>
where
K: Borrow<Q>,
Q: Hash + Eq + ?Sized,
{
// Touch the TTL tracker first. If the entry has expired, moka will
// notice it here and eventually remove the key from the map via the
// eviction listener.
self.ttl_tracker.get(key);
self.map.get(key)
}
pub fn append_to(&self, key: K, value: V)
where
K: Clone,
{
// Append the value to the vector in the map. `entry` acquires a write lock
// for the key, so it's safe to mutate the vector.
let mut v = self.map.entry(key.clone()).or_default();
v.push(value);
// Reset the TTL timer for the key. If the key exists in the TTL tracker,
// the eviction listener will be called with `RemovalCause::Replaced`,
// which will be ignored by the listener.
self.ttl_tracker.insert(key, ());
// Keep the write lock held until the TTL timer has been reset, then
// release it explicitly.
drop(v);
}
pub fn remove(&self, key: K) -> Option<(K, Vec<V>)>
where
K: Clone,
{
// Acquire a write lock for the key.
let entry = self.map.entry(key.clone());
// Remove the key from the TTL tracker. If the key exists in the TTL tracker,
// the eviction listener will be called with `RemovalCause::Explicit`, which
// will be ignored by the listener.
self.ttl_tracker.invalidate(&key);
// Remove the key from the map.
if let Entry::Occupied(oe) = entry {
Some(oe.remove_entry())
} else {
None
}
}
}
fn main() {
let cache = MyCache::new(100, Duration::from_secs(3));
cache.append_to("key1".to_string(), "Hello, ".to_string());
cache.append_to("key1".to_string(), "World!".to_string());
assert_eq!(
dbg!(&*cache.get("key1").unwrap()),
&vec!["Hello, ".to_string(), "World!".to_string()],
);
println!("Sleeping for 4 seconds...");
std::thread::sleep(Duration::from_secs(4));
// This key should have been expired.
assert!(dbg!(cache.get("key1").is_none()));
println!("Done");
}
Bug #345 will be one of the reasons for it. But even after fixing it, you will still see a too-high `string_count()`. It is expected because:
- the removed entries can still be referenced elsewhere (e.g. keys are shared via `Arc`), and
- `Cache`'s internal concurrent hash table (cht) uses `crossbeam-epoch`, which is an epoch-based garbage collector, so removed entries are reclaimed lazily rather than immediately.
From the README of crossbeam-epoch: