Switch cache to custom without write-lock for reads. #5576
/// Importantly, checking the cache does not require a write-lock
/// (unlike the [`Cached` trait's `cache_get`](https://github.com/jaemk/cached/blob/f5911dc3fbc03e1db9f87192eb854fac2ee6ac98/src/lib.rs#L203))
#[derive(Default)]
struct LockableCache<K, V>(RwLock<HashMap<K, V>>);
This struct is a NewType pattern, a zero-cost abstraction (i.e. no penalty at run-time) that defines a new type as a thin wrapper - a 1-tuple of another type - so that we can add our caching functions (`get`, `insert`) to this type without needing an extra struct. So in this case, a `LockableCache` is really just a read-write lock wrapping a `HashMap`, but one which behaves like a cache.
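To illustrate the NewType pattern described above, here is a minimal, self-contained sketch of how `get` and `insert` can be added to the 1-tuple wrapper. The method bodies are assumptions for illustration; the actual implementation in the PR's `cache` module may differ:

```rust
use std::collections::HashMap;
use std::hash::Hash;
use std::sync::RwLock;

// NewType wrapper: a 1-tuple struct around RwLock<HashMap<K, V>>,
// so caching methods can be defined without an extra struct.
#[derive(Default)]
struct LockableCache<K, V>(RwLock<HashMap<K, V>>);

impl<K: Eq + Hash, V: Clone> LockableCache<K, V> {
    /// Reads a cached value under a shared read lock only
    /// (multiple readers can proceed concurrently).
    fn get(&self, key: &K) -> Option<V> {
        self.0.read().unwrap().get(key).cloned()
    }

    /// Inserts a value, which does require the exclusive write lock.
    fn insert(&self, key: K, value: V) {
        self.0.write().unwrap().insert(key, value);
    }
}

fn main() {
    let cache: LockableCache<String, u32> = LockableCache::default();
    cache.insert("a".to_string(), 1);
    assert_eq!(cache.get(&"a".to_string()), Some(1));
    assert_eq!(cache.get(&"b".to_string()), None);
}
```

Note that `get` only takes the read lock, which is the whole point of the custom type compared with the `cached` crate's trait.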
        record.args()
    )
})
.init();
Whoops - I'd left the `.init()` off when I re-enabled logging in my last PR, but didn't check it (so nothing would be logged without this change).
(force-pushed from c2149e7 to 733673b)
Thanks for digging into the root cause; it's a pity we had to drop the `cached` crate and implement it on our own. I didn't expect a read operation to be blocking :S

Yeah - neither did I, though I did afterwards remember that the …
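As a rough model of why the read was blocking: the `cached` crate's `Cached::cache_get` takes `&mut self` (it updates hit/miss statistics on every lookup), so a cache shared behind an `RwLock` needs the exclusive write lock even just to read. The trait and struct below are simplified stand-ins, not the crate's actual definitions:

```rust
use std::sync::RwLock;

// Simplified stand-in for the Cached trait: the lookup takes &mut self
// because it records statistics, just as Cached::cache_get does.
trait StatCache<K, V> {
    fn cache_get(&mut self, key: &K) -> Option<&V>;
}

struct Counted {
    value: Option<u32>,
    hits: u64,
}

impl StatCache<(), u32> for Counted {
    fn cache_get(&mut self, _key: &()) -> Option<&u32> {
        self.hits += 1; // the mutation that forces &mut self
        self.value.as_ref()
    }
}

fn main() {
    let shared = RwLock::new(Counted { value: Some(42), hits: 0 });
    // A read() guard gives only &Counted, so cache_get cannot be called
    // on it; every lookup must take the write lock, serializing readers.
    let mut guard = shared.write().unwrap();
    assert_eq!(guard.cache_get(&()), Some(&42));
    assert_eq!(guard.hits, 1);
}
```

With 50 concurrent requests all forced through that single write lock, lookups serialize, which matches the slow handling observed in this PR.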
(force-pushed from 733673b to beec648)
Signed-off-by: Michael Nelson <minelson@vmware.com>
(force-pushed from beec648 to c848f59)
Follows on from #5518, this time replacing the `cached` package with a custom credential cache.

Description of the change
After further digging, I found that one cause of the slow handling of 50 concurrent requests going through the pinniped-proxy was that the `Cached` trait specifies that even a `cache_get` operation mutates the cache (in our case, just for statistics of hits/misses), which, as a result, requires acquiring a write lock on the cache to read a cached value. For more details, please see the discussion with the `Cached` author.

To avoid both of those issues, this PR:
- Adds a `cache` module that provides a generic read/write `LockableCache` (for multiple readers, single writer) and builds on that with a `PruningCache` that will prune entries (given a test function) when they should no longer be cached,
- Creates a `CredentialCache` on startup (in `main.rs`) specifically for caching `TokenCredentialRequest` objects and pruning expired entries, and then passes this through for use in different threads concurrently.

Benefits
Fetching from the cache is now non-blocking (generally, except when an entry is being added) and so leads to less task switching, improving the total query time by ~2s (down to 3-4s).
There is still something else using significant CPU when creating the client itself (cert-related), which I'm investigating now in a separate PR.
Possible drawbacks
Applicable issues
Additional information
Example log when using `RUST_LOG=info,pinniped_proxy::pinniped=debug`, which shows the cache being used after the first request. I've not included it in the output generally, but the cache get is now always under a millisecond. As above, the significant delays (some calls to `prepare_and_call_pinniped_exchange` taking only 4ms, others 98ms) are what I'll look at next.