v0.12.0 has major breaking changes to the API and internal behavior. This section describes the code changes required to migrate to v0.12.0.
- The thread pool was removed from `future::Cache`. It no longer spawns background threads.
- The `notification::DeliveryMode` for the eviction listener was changed from `Queued` to `Immediate`.
To support these changes, the following API changes were made:
- The `future::Cache::get` method is now an `async fn`, so you must `await` the result (see the sketch after this list).
- The `future::Cache::blocking` method was removed.
    - Please use the async runtime's blocking API instead.
    - See Replacing the blocking API for more details.
- The `or_insert_with_if` method of the entry API now requires a `Send` bound for the `replace_if` closure.
- The `eviction_listener_with_queued_delivery_mode` method of `future::CacheBuilder` was removed.
    - Please use one of the new methods `eviction_listener` or `async_eviction_listener` instead.
    - See Updating the eviction listener for more details.
- The `future::ConcurrentCacheExt::sync` method is renamed to `future::Cache::run_pending_tasks`. It is also changed to an `async fn`.
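For example, a call site that previously used the synchronous return value of `get` now needs `.await`. Below is a minimal sketch of the new call sites, assuming a Tokio runtime and arbitrary `u32`/`String` key and value types:

```rust
use moka::future::Cache;

#[tokio::main]
async fn main() {
    let cache: Cache<u32, String> = Cache::new(100);
    cache.insert(1, String::from("one")).await;

    // `get` is now an `async fn`, so the result must be awaited.
    let value = cache.get(&1).await;
    assert_eq!(value, Some(String::from("one")));

    // The former `ConcurrentCacheExt::sync` is now `run_pending_tasks`,
    // which is also an `async fn`.
    cache.run_pending_tasks().await;
}
```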
The following internal behavior changes were made:
- Maintenance tasks, such as removing expired entries, are no longer executed periodically.
    - See Maintenance tasks for more details.
- Now `future::Cache` only supports the `Immediate` delivery mode for the eviction listener.
    - In older versions, only the `Queued` delivery mode was supported.
    - If you need the `Queued` delivery mode back, please file an issue.
The `future::Cache::blocking` method was removed. Please use the async runtime's blocking API instead.
Tokio
- Call `tokio::runtime::Handle::current()` in an async context to obtain a handle to the current Tokio runtime.
- From outside the async context, call the cache's async function using the `block_on` method of the runtime.
```rust
use std::sync::Arc;

#[tokio::main]
async fn main() {
    // Create a future cache.
    let cache = Arc::new(moka::future::Cache::new(100));

    // In async context, you can obtain a handle to the current Tokio runtime.
    let rt = tokio::runtime::Handle::current();

    // Spawn an OS thread. Pass the handle and cache.
    let thread = {
        let cache = Arc::clone(&cache);
        std::thread::spawn(move || {
            // Call async function using block_on method of Tokio runtime.
            rt.block_on(cache.insert(0, 'a'));
        })
    };

    // Wait for the thread to complete.
    thread.join().unwrap();

    // Check the result.
    assert_eq!(cache.get(&0).await, Some('a'));
}
```
async-std
- From outside the async context, call the cache's async function using `async_std::task::block_on`.
```rust
use std::sync::Arc;

#[async_std::main]
async fn main() {
    // Create a future cache.
    let cache = Arc::new(moka::future::Cache::new(100));

    // Spawn an OS thread. Pass the cache.
    let thread = {
        let cache = Arc::clone(&cache);
        std::thread::spawn(move || {
            use async_std::task::block_on;

            // Call async function using block_on of async_std.
            block_on(cache.insert(0, 'a'));
        })
    };

    // Wait for the thread to complete.
    thread.join().unwrap();

    // Check the result.
    assert_eq!(cache.get(&0).await, Some('a'));
}
```
The `eviction_listener_with_queued_delivery_mode` method of `future::CacheBuilder` was removed. Please use one of the new methods `eviction_listener` or `async_eviction_listener` instead.
`eviction_listener` takes the same closure as the old method. If you do not need to `.await` anything in the eviction listener, use this method.

This code snippet is borrowed from an example in the documentation of `future::Cache`:
```rust
let eviction_listener = |key, _value, cause| {
    println!("Evicted key {key}. Cause: {cause:?}");
};

let cache = Cache::builder()
    .max_capacity(100)
    .expire_after(expiry)
    .eviction_listener(eviction_listener)
    .build();
```
`async_eviction_listener` takes a closure that returns a `Future`. If you need to `.await` something in the eviction listener, use this method. The actual return type of the closure is `notification::ListenerFuture`, which is a type alias of `Pin<Box<dyn Future<Output = ()> + Send>>`. You can use the `boxed` method of the `future::FutureExt` trait to convert a regular `Future` into this type.

This code snippet is borrowed from an example in the documentation of `future::Cache`:
```rust
use moka::notification::ListenerFuture;
// FutureExt trait provides the boxed method.
use moka::future::FutureExt;

let listener = move |k, v: PathBuf, cause| -> ListenerFuture {
    println!(
        "\n== An entry has been evicted. k: {:?}, v: {:?}, cause: {:?}",
        k, v, cause
    );
    let file_mgr2 = Arc::clone(&file_mgr1);

    // Create a Future that removes the data file at the path `v`.
    async move {
        // Acquire the write lock of the DataFileManager.
        let mut mgr = file_mgr2.write().await;
        // Remove the data file. We must handle error cases here to
        // prevent the listener from panicking.
        if let Err(_e) = mgr.remove_data_file(v.as_path()).await {
            eprintln!("Failed to remove a data file at {:?}", v);
        }
    }
    // Convert the regular Future into ListenerFuture. This method is
    // provided by moka::future::FutureExt trait.
    .boxed()
};

// Create the cache. Set time to live for two seconds and set the
// eviction listener.
let cache = Cache::builder()
    .max_capacity(100)
    .time_to_live(Duration::from_secs(2))
    .async_eviction_listener(listener)
    .build();
```
In older versions, the maintenance tasks needed by the cache were periodically executed in the background by a global thread pool managed by `moka`. Now `future::Cache` no longer uses the thread pool, so those maintenance tasks are sometimes executed in the foreground when certain cache methods (`get`, `get_with`, `insert`, etc.) are called by user code.
Figure 1. The lifecycle of cached entries
These maintenance tasks include:

- Determine whether to admit a "temporarily admitted" entry or not.
- Apply the recorded cache reads and writes to the internal data structures, such as the LFU filter, LRU queues, and timer wheels.
- When the cache's max capacity is exceeded, select existing entries to evict and remove them from the cache.
- Remove expired entries.
- Remove entries that have been invalidated by the `invalidate_all` or `invalidate_entries_if` methods.
- Deliver removal notifications to the eviction listener (call the eviction listener closure with the information about the evicted entry).
They will be executed in the following cache methods when necessary:

- All cache write methods: `insert`, `get_with`, `invalidate`, etc.
- Some of the cache read methods: `get`
- The `run_pending_tasks` method, which executes the pending maintenance tasks explicitly.
Although expired entries will not be removed until the pending maintenance tasks are executed, they will not be returned by cache read methods such as `get`, `get_with`, and `contains_key`. So unless you need to remove expired entries immediately (e.g. to free some memory), you do not need to call the `run_pending_tasks` method.
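For instance, an expired entry is no longer returned by `get`, but it still occupies the internal data structures (and may still be counted by `entry_count`) until the pending tasks run. A minimal sketch, assuming a Tokio runtime and an illustrative one-second time to live:

```rust
use std::time::Duration;
use moka::future::Cache;

#[tokio::main]
async fn main() {
    let cache: Cache<u32, char> = Cache::builder()
        .time_to_live(Duration::from_secs(1))
        .build();

    cache.insert(0, 'a').await;
    tokio::time::sleep(Duration::from_secs(2)).await;

    // The entry has expired, so read methods no longer return it...
    assert_eq!(cache.get(&0).await, None);

    // ...but it has not been removed yet. Run the pending maintenance
    // tasks to remove it right away (e.g. to free some memory).
    cache.run_pending_tasks().await;
    assert_eq!(cache.entry_count(), 0);
}
```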
- (Not in v0.12.0-beta.1) `sync` caches will no longer be enabled by default. Use the crate feature `sync` to enable them.
- (Not in v0.12.0-beta.1) The thread pool will be disabled by default.
    - In older versions, the thread pool was used to execute maintenance tasks in the background.
    - When disabled, those maintenance tasks are sometimes executed in the foreground when certain cache methods (`get`, `get_with`, `insert`, etc.) are called by user code.
        - See Maintenance tasks for more details.
    - To enable it, see Enabling the thread pool for more details.
To enable the thread pool, do the following:

- Specify the crate feature `thread-pool`.
- At cache creation time, call the `thread_pool_enabled` method of `CacheBuilder`.
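Since the thread pool is described above as not shipping in v0.12.0-beta.1, the following is only a hypothetical sketch of how those two steps would fit together, assuming the `thread-pool` crate feature and the `thread_pool_enabled` builder method land as planned. The exact `Cargo.toml` feature list and the use of `moka::sync::Cache` here are assumptions:

```rust
// Cargo.toml (assumed): moka = { version = "0.12", features = ["sync", "thread-pool"] }
use moka::sync::Cache;

fn main() {
    // Hypothetical: opt back in to the background thread pool at cache
    // creation time via the planned `thread_pool_enabled` builder method.
    let cache: Cache<u32, char> = Cache::builder()
        .max_capacity(100)
        .thread_pool_enabled(true)
        .build();

    cache.insert(0, 'a');
    assert_eq!(cache.get(&0), Some('a'));
}
```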