Add metrics for cache entry memory size #5770

Merged 23 commits into dev on Aug 7, 2024

Conversation

@BrynCooke (Contributor) commented Aug 5, 2024

Query planner cache entries may use a significant amount of memory in the Router.
To help users understand and monitor this, the Router now exposes a new metric, apollo.router.cache.storage.estimated_size.

This metric gives the estimated size in bytes of a cache entry and has the following attributes:

  • kind: query planner.
  • storage: memory.

As the size is only an estimate, users should check for correlation with pod memory usage to determine whether the cache configuration needs to be updated.

Usage scenario:

  1. Your pods are being terminated due to memory pressure.
  2. Add the following metrics to your monitoring system to track:
     • apollo.router.cache.storage.estimated_size
     • apollo_router_cache_size
     • the ratio of apollo_router_cache_hits to apollo_router_cache_misses
  3. Observe apollo.router.cache.storage.estimated_size to see if it grows over time and correlates with pod memory usage.
  4. Observe the ratio of cache hits to misses to determine whether the cache is effective (see the example after this list).
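
As a concrete reading of step 4, the hit rate is just hits divided by total lookups; a value near 1.0 means almost every query plan is served from cache. A minimal sketch, with a hypothetical hit_rate helper (the counter names come from the list above):

```rust
/// Hypothetical helper: compute the cache hit rate from the two counters
/// (apollo_router_cache_hits and apollo_router_cache_misses).
fn hit_rate(hits: u64, misses: u64) -> f64 {
    if hits + misses == 0 {
        return 0.0;
    }
    hits as f64 / (hits + misses) as f64
}

fn main() {
    // 9_500 hits against 500 misses: a 95% hit rate, i.e. the cache is
    // serving almost all query plans and is likely sized effectively.
    println!("hit rate: {:.2}", hit_rate(9_500, 500));
}
```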

Remediation:

  • Lower the cache size if the cache reaches a near-100% hit rate but its memory footprint is still growing.
  • Increase pod memory if the cache hit rate is low and the cache size is still growing.
  • Lower the cache size if the latency of query-planning cache misses is acceptable and memory availability is limited.

Technical info

The estimate is only implemented for query plans and uses serde to compute the size without actually writing to a string (see the sketch below). Accurate memory usage is very difficult to measure, so the metric is mainly useful for observing trends.

Providing an estimated size for a cache entry is optional and is currently only implemented for the query planner.
Where no estimated size is set, the gauge is not emitted (e.g., for APQ or entity caching).
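
As an illustration of the no-allocation estimate: the PR implements a dedicated serde Serializer, but the same idea can be approximated with a counting io::Write sink fed to serde_json::to_writer. This is a minimal sketch with a made-up CountingWriter type, not the router's actual code:

```rust
use serde::Serialize;
use std::io;

/// An io::Write sink that discards bytes but counts how many were written.
struct CountingWriter {
    bytes: usize,
}

impl io::Write for CountingWriter {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        self.bytes += buf.len();
        Ok(buf.len())
    }

    fn flush(&mut self) -> io::Result<()> {
        Ok(())
    }
}

/// Estimate the serialized size of a value, in bytes, without building a string.
fn estimated_size<T: Serialize>(value: &T) -> usize {
    let mut counter = CountingWriter { bytes: 0 };
    // Errors are ignored in this sketch; a real implementation would handle them.
    let _ = serde_json::to_writer(&mut counter, value);
    counter.bytes
}

fn main() {
    let plan = vec!["field_a", "field_b", "field_c"];
    println!("estimated size: {} bytes", estimated_size(&plan));
}
```

Counting bytes this way avoids the allocation of serializing to a string, which is why the result tracks trends well but does not reflect true in-memory size.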

Testing

For manual testing, we will deploy internally and observe that the metric tops out when the query cache is full.


ROUTER-669
Checklist

Complete the checklist (and note appropriate exceptions) before the PR is marked ready-for-review.

  • Changes are compatible [1]
  • Documentation [2] completed
  • Performance impact assessed and acceptable
  • Tests added and passing [3]
    • Unit Tests
    • Integration Tests
    • Manual Tests

Exceptions

Note any exceptions here

Notes

Footnotes

  1. It may be appropriate to bring upcoming changes to the attention of other (impacted) groups. Please endeavour to do this before seeking PR approval. The mechanism for doing this will vary considerably, so use your judgement as to how and when to do this.

  2. Configuration is an important part of many changes. Where applicable please try to document configuration examples.

  3. Tick whichever testing boxes are applicable. If you are adding Manual Tests, please document the manual testing (extensively) in the Exceptions.

bryn added 5 commits August 2, 2024 12:54
These are:
 * currently unpopulated.
 * in need of unit tests.
 * in need of being propagated when a new in-memory cache is spawned, so that they remain correct.


@router-perf (bot) commented Aug 5, 2024

CI performance tests

  • const - Basic stress test that runs with a constant number of users
  • demand-control-instrumented - A copy of the step test, but with demand control monitoring and metrics enabled
  • demand-control-uninstrumented - A copy of the step test, but with demand control monitoring enabled
  • enhanced-signature - Enhanced signature enabled
  • events - Stress test for events with a lot of users and deduplication ENABLED
  • events_big_cap_high_rate - Stress test for events with a lot of users, deduplication enabled and high rate event with a big queue capacity
  • events_big_cap_high_rate_callback - Stress test for events with a lot of users, deduplication enabled and high rate event with a big queue capacity using callback mode
  • events_callback - Stress test for events with a lot of users and deduplication ENABLED in callback mode
  • events_without_dedup - Stress test for events with a lot of users and deduplication DISABLED
  • events_without_dedup_callback - Stress test for events with a lot of users and deduplication DISABLED using callback mode
  • extended-reference-mode - Extended reference mode enabled
  • large-request - Stress test with a 1 MB request payload
  • no-tracing - Basic stress test, no tracing
  • reload - Reload test over a long period of time at a constant rate of users
  • step-jemalloc-tuning - Clone of the basic stress test for jemalloc tuning
  • step-local-metrics - Field stats that are generated from the router rather than FTV1
  • step-with-prometheus - A copy of the step test with the Prometheus metrics exporter enabled
  • step - Basic stress test that steps up the number of users over time
  • xlarge-request - Stress test with 10 MB request payload
  • xxlarge-request - Stress test with 100 MB request payload

@BrynCooke BrynCooke requested review from Geal and IvanGoncharov August 5, 2024 10:06
@BrynCooke BrynCooke marked this pull request as ready for review August 5, 2024 11:19
@BrynCooke BrynCooke requested review from a team as code owners August 5, 2024 11:19
```rust
fn serialize_none(self) -> Result<Self::Ok, Self::Error> {
    Ok(self)
}
```
A reviewer (Member) commented:
Just to double check: None would still use the size of Some in memory, right?
We estimate it as 0 because of serde limitations?

@BrynCooke (Contributor, Author) replied:

Yeah, in theory we could do something better. But None values won't have strings etc. in them, so I don't think we should worry about this; maybe we can follow up later to improve things.
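
To illustrate the reviewer's point, a quick check with std::mem::size_of shows that a None occupies exactly the same footprint as a Some of the same Option type (the figures below assume a typical 64-bit target):

```rust
use std::mem::size_of;

fn main() {
    // Niche optimization: Option<Box<T>> reuses the null pointer as the
    // None case, so it is exactly the size of Box<T> (8 bytes on 64-bit).
    assert_eq!(size_of::<Option<Box<u64>>>(), size_of::<Box<u64>>());

    // Without a niche, the discriminant plus padding doubles the size:
    // Option<u64> is 16 bytes even when it holds None.
    assert_eq!(size_of::<Option<u64>>(), 16);

    println!("a None still occupies the full footprint of its Option type");
}
```

So estimating None as 0 does undercount resident memory, but since the missing bytes carry no strings or nested data, the trend the metric is meant to show is unaffected.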

```rust
    kind = %self.caller,
    storage = &tracing::field::display(CacheStorageName::Memory),
);
if let Some((_, v)) = in_memory.push(key.clone(), value.clone()) {
```
A reviewer (Contributor) commented:

The in-memory cache lock is held until the end of the function; it should only be held for the push itself.
Maybe fixed with something like:

```rust
let old_entry = {
    self.inner.lock().await.push(key.clone(), value.clone())
};
if let Some((_, v)) = old_entry {
```

@BrynCooke (Contributor, Author) replied:

Fixed in: e37c87e
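
In isolation, the guard-scoping pattern the suggestion describes looks like the sketch below. It uses the lru and tokio crates with illustrative names (Cache, insert); it is not the router's actual code:

```rust
use lru::LruCache;
use std::num::NonZeroUsize;
use std::sync::Arc;
use tokio::sync::Mutex;

type Cache = Arc<Mutex<LruCache<String, String>>>;

async fn insert(cache: Cache, key: String, value: String) {
    // Scope the guard to the push alone: the block expression drops the
    // lock before the evicted entry is processed below.
    let old_entry = {
        let mut in_memory = cache.lock().await;
        in_memory.push(key, value)
    };

    if let Some((_, evicted)) = old_entry {
        // Gauge updates and size accounting run without holding the lock.
        println!("evicted an entry of {} bytes", evicted.len());
    }
}

#[tokio::main]
async fn main() {
    let cache = Arc::new(Mutex::new(LruCache::new(NonZeroUsize::new(2).unwrap())));
    insert(cache.clone(), "a".into(), "alpha".into()).await;
    insert(cache, "b".into(), "beta".into()).await;
}
```

Ending the block expression drops the MutexGuard immediately, so follow-up work on the evicted value no longer blocks other cache users.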

@BrynCooke BrynCooke requested a review from Geal August 5, 2024 13:13

As the size is only an estimation, users should check for correlation with pod memory usage to determine if cache needs to be updated.

Usage scenario:
@BrynCooke (Contributor, Author) commented on the changeset:

@shorgi Maybe this info should also be in the docs under cache troubleshooting?

@shorgi (Contributor) replied:
This is great content for docs. Since it's related to pod memory pressure, moving it to the k8s page.

@BrynCooke BrynCooke requested a review from shorgi August 6, 2024 10:41
@BrynCooke (Contributor, Author) commented:
Going to wait for a docs review before merging.

@shorgi (Contributor) left a review:

Moved the usage scenario from the changeset into docs

BrynCooke and others added 5 commits August 7, 2024 14:06
Co-authored-by: Edward Huang <edward.huang@apollographql.com> (on each of the five commits)
@BrynCooke BrynCooke enabled auto-merge (squash) August 7, 2024 13:07
@BrynCooke BrynCooke merged commit cac2750 into dev Aug 7, 2024
13 of 14 checks passed
@BrynCooke BrynCooke deleted the enhanced-observability branch August 7, 2024 13:23
@abernix abernix mentioned this pull request Aug 28, 2024