Random replacement cache #131
base: master
Conversation
Force-pushed 5fb1368 to 7ea7f60
Hi @jchook,
Thanks for your detailed PR. Reading through it was a real pleasure.
I left a few comments and some ideas for changes. Feel free to reply and discuss them.
I've never thought of implementing this kind of cache. What do you use it for?
src/cache/RrMapCache.js (outdated)
```js
const index = this._cacheKeys.indexOf(key); // O(n) scan
if (index > -1) {
  this._cache.delete(key);
  this._cacheLength--;
  // Swap-remove: move the last key into the vacated slot
  this._cacheKeys[index] = this._cacheKeys[this._cacheLength];
}
```
If `_cacheLength` and `_cacheKeys` got removed, this could become a simple `this._cache.delete(key);` ..right? :)
To follow up here: yes, it would be much simpler.
This is a CPU performance optimization at the cost of storing the keys redundantly in memory.
Still thinking of a better way to acquire a random key from a Map...
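As a hedged sketch of the trade-off being discussed (hypothetical helper, not part of the PR): without a redundant key array, picking a random key from a `Map` means walking its iterator, which makes each eviction O(n) in the number of entries.

```javascript
// Hypothetical: pick a random key from a Map with no auxiliary key array.
// Map has no random access, so we walk the keys iterator up to a random
// index -- O(n) per call, versus O(1) with a redundant key array.
function randomMapKey(map) {
  if (map.size === 0) return undefined; // nothing to pick from
  const target = Math.floor(Math.random() * map.size);
  let i = 0;
  for (const key of map.keys()) {
    if (i === target) return key;
    i++;
  }
}
```

This is exactly the memory-for-CPU trade: the `_cacheKeys` array makes the random pick O(1) at the cost of storing every key twice.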
src/cache/RrMapCache.js (outdated)
```js
if (this._cacheLength >= this._cacheSize) {
  this._randomReplace(key, selectorFn);
} else {
  this._cache.set(key, selectorFn);
  this._cacheKeys[this._cacheLength] = key;
  this._cacheLength++;
}
```
If I get it right, here we have two different setting paths:
- push the new entry normally
- replace an existing entry when the cache size limit is hit

Would it be possible to get the same behaviour with a single setting path? Something like:
1. delete an existing random entry if the cache limit is hit (`_randomReplace` might change its scope)
2. add the new entry

I understand this would no longer be a strict implementation of an RR cache, but users would get the same result. If I haven't missed anything.
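A minimal sketch of that single-path suggestion, assuming a plain Map-backed cache (class and method names are hypothetical, not the PR's actual API):

```javascript
// Hypothetical single setting path: always evict-then-insert.
// When the cache is full, one random entry is deleted first, so
// set() never needs a separate "replace" branch.
class RrMapCacheSketch {
  constructor(cacheSize) {
    this._cacheSize = cacheSize;
    this._cache = new Map();
  }

  set(key, value) {
    if (this._cache.size >= this._cacheSize) {
      // Eviction only -- the counterpart of a narrower _randomReplace.
      // Note: walking the keys makes this O(n); overwriting an existing
      // key while full also evicts, a simplification of this sketch.
      const keys = [...this._cache.keys()];
      this._cache.delete(keys[Math.floor(Math.random() * keys.length)]);
    }
    this._cache.set(key, value);
  }

  get(key) {
    return this._cache.get(key);
  }
}
```

The cache never exceeds its limit, and the set logic collapses to "maybe evict, then insert", at the cost of the O(n) random pick discussed above.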
Thanks for taking the time to review!
Basically, I have run into "worst case" scenarios where the amount of data exceeds the cache size and a tight loop sequentially accesses cacheSize + n items. In that scenario, both LRU and FIFO suffer many guaranteed, systematic misses. The RR method alleviates this issue somewhat, while still providing a reasonably competitive hit/miss ratio under other access patterns. I'm not an expert on the topic, but I can cite at least one study from 2004:
Most of the academic studies surrounding the issue focus on hardware caches and don't apply directly to JavaScript land, but I think the intuition is there for why an RR cache might complement FIFO and LRU.
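That worst case is easy to reproduce. The following is an illustrative simulation (not the library's API): a tight loop over cacheSize + 1 keys gives a FIFO cache a 0% hit rate after warm-up, because each key is evicted exactly one step before it is needed again, while random replacement retains some entries between passes.

```javascript
// Simulate a cache of `cacheSize` entries scanning `keyCount` keys
// sequentially for `rounds` passes, with a pluggable eviction policy.
function simulate(evict, cacheSize, keyCount, rounds) {
  const cache = new Map();
  let hits = 0;
  let total = 0;
  for (let r = 0; r < rounds; r++) {
    for (let k = 0; k < keyCount; k++) {
      total++;
      if (cache.has(k)) {
        hits++;
        continue;
      }
      if (cache.size >= cacheSize) evict(cache);
      cache.set(k, true);
    }
  }
  return hits / total;
}

// FIFO: evict the oldest insertion (Map preserves insertion order).
const fifo = (cache) => cache.delete(cache.keys().next().value);

// RR: evict a uniformly random entry.
const rr = (cache) => {
  const keys = [...cache.keys()];
  cache.delete(keys[Math.floor(Math.random() * keys.length)]);
};
```

With `cacheSize = 10` and `keyCount = 11`, FIFO's hit rate is exactly zero, whereas RR settles at some nonzero rate; the exact figure varies run to run, which is the point.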
I agree an RR cache could be an interesting tool to provide. If the cache folder grew further, I'd consider splitting the extra implementations into a separate package, but I'll put this pain off. I was curious to see how this library is used in applications never taken into account when it was originally written. To make a long story short, what if we just:
Thanks again!
How are you doing @jchook? All the best 🖖 :)
Hey @toomuchdesign, thanks for your cordial patience on this. When deploying these changes in the wild, I discovered a performance issue with the [...]. Currently I'm helping my mom build an addition on her home (fun! but also time-consuming). When that finishes, I will have plenty of time. If you're willing to let this bake in the sun for a while, I will eventually get to it. Otherwise we can close, as I am happily using it in production now and just wanted to share.
Hi @jchook, great to hear from you! Feel free to complete the job when you have time, and ping me when you're done. Cheers!
Hey, finally revisiting this PR. After reviewing it, I think we should drop the [...].
Also, it was a joy to re-read your feedback, and I made adjustments.
Hey @jchook, yep, I agree about removing the plain-object caches. We might publish your PR as a minor release and then release a major with the cleanup. I've enabled tests; there seems to be something red :)
What kind of change does this PR introduce? (Bug fix, feature, docs update, ...)
Feature
What is the current behaviour? (You can also link to an open issue here)
Very good.
What is the new behaviour?
Adds random replacement (RR) to the out-of-the-box cache strategies.
Does this PR introduce a breaking change? (What changes might users need to make in their application due to this PR?)
No.
Other information:
LRU and FIFO strategies can both lead to many guaranteed cache misses under "worst-case" access patterns. Random replacement alleviates this problem, enabling a predictable trade-off between cache size and computation even under those access patterns.
Please check if the PR fulfills these requirements: