
feat(facade): Limit request cache using runtime flag #793

Merged
romange merged 1 commit into main from LimitRequestCache on Feb 14, 2023

Conversation

romange (Collaborator) commented Feb 13, 2023

No description provided.

@romange requested a review from dranikpg on February 13, 2023 16:58
Comment on lines +40 to +42
ABSL_FLAG(std::uint64_t, request_cache_limit, 1ULL << 26, // 64MB
"Amount of memory to use for request cache in bytes - per IO thread.");

dranikpg (Contributor)

I thought we had those postfixes for memory sizes 😄

romange (Collaborator, Author)

what do you mean @dranikpg?
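
By "postfixes" dranikpg presumably means flag types that accept human-readable size suffixes (e.g. 64MB, 1GB) rather than a raw byte count. Below is a minimal sketch of how such a type could be hooked into Abseil flags via the AbslParseFlag/AbslUnparseFlag extension points; the MemoryBytes wrapper and the accepted suffix set are illustrative assumptions, not Dragonfly's actual implementation.

#include <cstdint>
#include <string>

#include "absl/flags/flag.h"
#include "absl/strings/match.h"
#include "absl/strings/numbers.h"
#include "absl/strings/string_view.h"

// Illustrative wrapper; Dragonfly's real size-flag type (if any) may differ.
struct MemoryBytes {
  uint64_t value = 0;
};

// Abseil calls this hook when parsing e.g. --request_cache_limit=64MB.
bool AbslParseFlag(absl::string_view text, MemoryBytes* out, std::string* error) {
  uint64_t multiplier = 1;
  if (absl::EndsWithIgnoreCase(text, "KB")) {
    multiplier = 1ULL << 10;
    text.remove_suffix(2);
  } else if (absl::EndsWithIgnoreCase(text, "MB")) {
    multiplier = 1ULL << 20;
    text.remove_suffix(2);
  } else if (absl::EndsWithIgnoreCase(text, "GB")) {
    multiplier = 1ULL << 30;
    text.remove_suffix(2);
  }
  uint64_t number = 0;
  if (!absl::SimpleAtoi(text, &number)) {
    *error = "expected a number with an optional KB/MB/GB suffix";
    return false;
  }
  out->value = number * multiplier;
  return true;
}

// Abseil calls this hook when it needs to print the flag's value.
std::string AbslUnparseFlag(const MemoryBytes& flag) {
  return std::to_string(flag.value);
}

// Same 64MB default as above, but the flag would now also accept "64MB" directly.
ABSL_FLAG(MemoryBytes, request_cache_limit, MemoryBytes{1ULL << 26},
          "Amount of memory to use for request cache in bytes - per IO thread.");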

Comment on lines 620 to +631
// Gradually release the request pool.
// The request pool is shared by all the connections in the thread so we do not want
// to release it aggressively just because some connection is running in
// non-pipelined mode. So we wait at least N times,
// where N is the number of connections in the thread.
if (!free_req_pool_.empty()) {
  free_req_pool_.pop_back();
  ++free_req_release_weight;
  if (free_req_release_weight > stats_->num_conns) {
    free_req_release_weight = 0;
    stats_->pipeline_cache_capacity -= free_req_pool_.back()->StorageCapacity();
    free_req_pool_.pop_back();
  }
dranikpg (Contributor)

I forgot, do you release them to reduce memory usage after peaks? Or for some other reason?
Your current mechanism doesn't look very reliable for this case 🤔

romange (Collaborator, Author) commented Feb 14, 2023

The reason is pragmatic. I want to keep the cache if it's needed and free it if it's not. This heuristic is simple, indeed, but it will gradually release the cache if it's not being used, and since we share it among all connections, Dragonfly won't overallocate it.
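
As a reading aid, here is a rough sketch of how the two halves of the mechanism appear to fit together, based only on the snippets quoted in this thread: returning a finished request to the shared per-thread pool is capped by request_cache_limit, while the weight counter gradually drains the pool on the non-pipelined path. The OnRequestDone and ShrinkPoolIfIdle method names and the RequestPtr alias are illustrative, not Dragonfly's actual API.

// Sketch only: simplified from the PR snippets, not the actual Dragonfly code.

// Called when a pipelined request has been handled and its buffer could be reused.
void Connection::OnRequestDone(RequestPtr req) {
  // Keep the buffer only while the shared per-thread cache stays under the
  // --request_cache_limit budget; otherwise let it be destroyed.
  if (stats_->pipeline_cache_capacity < absl::GetFlag(FLAGS_request_cache_limit)) {
    stats_->pipeline_cache_capacity += req->StorageCapacity();
    free_req_pool_.push_back(std::move(req));
  }
}

// Called on the non-pipelined path. The pool is shared by every connection in
// the thread, so each call only bumps a weight counter; one pooled request is
// actually released roughly once per num_conns calls, so an unused cache
// drains gradually instead of being freed all at once.
void Connection::ShrinkPoolIfIdle() {
  if (!free_req_pool_.empty()) {
    ++free_req_release_weight;
    if (free_req_release_weight > stats_->num_conns) {
      free_req_release_weight = 0;
      stats_->pipeline_cache_capacity -= free_req_pool_.back()->StorageCapacity();
      free_req_pool_.pop_back();
    }
  }
}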

romange merged commit 81f0b50 into main on Feb 14, 2023
romange deleted the LimitRequestCache branch on February 14, 2023 15:35