Introspection: Change the caching behavior #1516
o0Ignition0o changed the title from "Introspection: Remove the cache" to "Introspection: Change the caching behavior" on Aug 16, 2022
Sounds good. Per our conversation, I suggest a low-capacity LRU with no TTL, since introspection is tied to the lifetime of the schema.
o0Ignition0o added a commit that referenced this issue on Aug 16, 2022:
fixes: #1516, #1466. Since @bnjjj's refactoring of introspection, we no longer need to cache every well-known introspection query. Pros: caching those queries and responses can lead to a high memory footprint, especially on very large schemas. Cons: the first introspection query takes a bit longer (~130ms on my machine) than a cached one (~2ms on my machine). The default introspection cache size is now 5 and cannot be configured. 5 was chosen because we do not expect local developers to run tools that need more than 5 different introspection queries with < 3ms latency requirements.
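For illustration only, here is a minimal hand-rolled sketch of the kind of small, fixed-capacity LRU cache described above (no TTL, a handful of entries keyed by the introspection query). The type and field names are made up for this example and are not the router's actual implementation:

```rust
use std::collections::{HashMap, VecDeque};

/// Illustrative sketch of a tiny, fixed-capacity LRU cache for
/// introspection responses. No TTL: the cache lives as long as the schema.
struct IntrospectionCache {
    capacity: usize,
    // Maps the introspection query string to its serialized response.
    entries: HashMap<String, String>,
    // Tracks recency: front = least recently used, back = most recently used.
    order: VecDeque<String>,
}

impl IntrospectionCache {
    fn new(capacity: usize) -> Self {
        Self { capacity, entries: HashMap::new(), order: VecDeque::new() }
    }

    fn get(&mut self, query: &str) -> Option<&String> {
        if self.entries.contains_key(query) {
            // Mark the entry as most recently used.
            self.order.retain(|k| k != query);
            self.order.push_back(query.to_string());
            self.entries.get(query)
        } else {
            None
        }
    }

    fn insert(&mut self, query: String, response: String) {
        if !self.entries.contains_key(&query) && self.entries.len() >= self.capacity {
            // Evict the least recently used entry to stay within capacity.
            if let Some(evicted) = self.order.pop_front() {
                self.entries.remove(&evicted);
            }
        }
        self.order.retain(|k| k != &query);
        self.order.push_back(query.clone());
        self.entries.insert(query, response);
    }
}
```

With a capacity of 5, the handful of queries that introspection tooling typically repeats stay hot, while memory use stays bounded regardless of schema size.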
o0Ignition0o added a commit that referenced this issue on Aug 17, 2022:
(#1517) fixes: #1516, #1466
Since @bnjjj's refactoring of introspection, we no longer need to cache every introspection query we know of.
Moreover, the current cache causes the router to use a fair bit of memory when introspection is enabled.
Now would be a great time to remove it from the codebase and add a simple LRU introspection cache with TTL.
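As a rough sketch of the lookup path, assuming the `lru` crate (0.8+) and a hypothetical `execute_introspection` helper standing in for the router's real introspection execution:

```rust
use std::num::NonZeroUsize;
use lru::LruCache;

// Stand-in for the router's real introspection execution.
fn execute_introspection(_query: &str) -> String {
    "{\"data\":{\"__schema\":{\"types\":[]}}}".to_string()
}

fn main() {
    // Small, fixed-capacity cache: 5 entries, no TTL; it would be dropped and
    // rebuilt whenever the schema changes.
    let mut cache: LruCache<String, String> =
        LruCache::new(NonZeroUsize::new(5).unwrap());

    let query = "{ __schema { types { name } } }".to_string();

    // First lookup misses, so the response is computed and stored (the slow path).
    let response = match cache.get(&query) {
        Some(cached) => cached.clone(),
        None => {
            let computed = execute_introspection(&query);
            cache.put(query.clone(), computed.clone());
            computed
        }
    };
    println!("{response}");

    // A second lookup for the same query now hits the cache (the fast path).
    assert!(cache.get(&query).is_some());
}
```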