Entry count never reaches zero even when calling run_pending_tasks() in a loop [moka future 0.12] #345
I use the same version for …

(I've also located the file at the examples directory of moka, so I actually run …)
Thank you for experimenting further.
That was it. Now I can reproduce the issue by running it on:

```console
$ rustc -Vv
rustc 1.73.0 (cc66ad468 2023-10-03)
binary: rustc
commit-hash: cc66ad468955717ab92600c770da8c1601a4ff33
commit-date: 2023-10-03
host: x86_64-unknown-linux-gnu
release: 1.73.0
LLVM version: 17.0.2

$ rg 'Core.*i7' /proc/cpuinfo | head -1
model name : 12th Gen Intel(R) Core(TM) i7-12700F
```

```console
$ cargo r --release
...
Evicted (4C7CAF90,"Bob") because Replaced
Wake up!
Evicted (55679ED1,"Charlie") because Expired
Evicted (55679ED1,"Charlie") because Size
Evicted (1A260816,"Dave") because Size
Evicted (033D3957,"Eve") because Size
Evicted (28106A94,"Faythe") because Size
Evicted (6751FC53,"Alice") because Expired
Evicted (4C7CAF90,"Bob") because Expired
[src/main.rs:68] cache.entry_count() = 1
[src/main.rs:68] cache.entry_count() = 1
[src/main.rs:68] cache.entry_count() = 1
[src/main.rs:68] cache.entry_count() = 1
[src/main.rs:68] cache.entry_count() = 1
[src/main.rs:68] cache.entry_count() = 1
...
```

So, on Linux x86_64, I do not see Charlie being evicted after Faythe.

On macOS arm64:

```console
...
Evicted (28106A94,"Faythe") because Size
Evicted (55679ED1,"Charlie") because Size // <= This one is missing on Linux x86_64.
Evicted (6751FC53,"Alice") because Explicit
...
```

I will look into this hopefully later today. (It is 8:50 AM in my timezone, UTC+0800, now.)
I can reproduce it on …
A quick update:
```diff
  let ttl = 1;
  // With 1-second ttl, keys 0 and 1 will be evicted if we wait long enough.
- sleep(Duration::from_secs(ttl));
+ sleep(Duration::from_millis(ttl * 1000 + 100));
```
No, it is not. It is a bug in moka.

I have a plan to remove the 500-item limit (lines 1456 to 1459 at commit ce70442).
Just some worries and suggestions from my side: I have no clear picture of how the eviction algorithm is implemented, but what I remember from (ri)stretto is that it was possible to invalidate a huge number of entries at once due to lifetime constraints. Can this happen in moka too? And does the eviction handler then block data or threads for so long that the end-user observes a very long response time? If that is the case, would it not be handier to give the user some control over this behavior? Either beforehand, by giving the …
Thank you for sharing your thoughts. I am not familiar with the issues that (ri)stretto has with its eviction algorithms. I read this article a couple of years ago, but I have not updated my knowledge since then. Here are the cases in moka:

If the provided eviction listener closure is really slow and the cache is receiving a high volume of writes, the write operation log buffer can get full. Then the end-user will observe a very long response time. Before moka v0.12, it had an optional notification delivery mode called …

So, without the queued delivery mode, cache writes could be blocked in the above case. As you can see in the migration guide, …
Hi @peter-scholtens, I am working on the fix for the original issue here (#348). Some unimportant tests are failing for some 32-bit targets now, and I will fix them later. But I believe #348 is already good enough for you to try. You will no longer see the weird behavior in your example program. Use the following dependency to try it out:

```toml
[dependencies]
moka = { git = "https://github.com/moka-rs/moka.git", branch = "fix-race-in-handle-upseart", features = ["future"] }
```
It seems now both … and also this … work correctly. (I tried to reproduce the error, and failed.)

Could it be solved by an update of other dependencies? (I assume I did a cargo update also; I'm not sure.)
Thank you for trying!
No. It was clearly Moka's bug. In your program, try to sleep a bit longer than the TTL. It may help to reproduce the issue:

```diff
  let ttl = 1;
  // With 1-second ttl, keys 0 and 1 will be evicted if we wait long enough.
- sleep(Duration::from_secs(ttl));
+ sleep(Duration::from_millis(ttl * 1000 + 100));
```

Also, I put a minimized version of your program as a test: I confirmed it fails on v0.12.1 and passes with #348.
Confirmed! (I forgot to uncomment my own code.)

FYI, I published moka v0.12.2 with the fix to crates.io.

FYI, I reported the above issue to the quanta project: metrics-rs/quanta#96
Trying to convert the eviction listener example from a `sync` version to a `future` version, I see some weird behavior. Depending on the presence of a second insertion of the same key, the `entry_count()` will never reach zero, even after calling `run_pending_tasks()` in a loop. See the example below. Uncommenting line number 48 (the line after the line with the triple question mark) will hold the `entry_count()` above zero forever, causing the loop to continue forever.

Some remarks:

- A suggestion: let `run_pending_tasks()` return a `true` value if it is finished, or return `false` if more than 500 items were processed and it is still not finished. (Finished from the perspective of e.g. the timestamp of calling the function; later insertions can be ignored for this judgement.)
- Regarding `Arc::strong_count()` in the eviction listener: see discussion 344, where I want to avoid orphaned entries in the `BTree` with shared `Arc`'s.