Before, plugins used `savedObjects#find`, which creates a single request to Elasticsearch. If an Elasticsearch response exceeds 536MB, Kibana refuses to deserialise it. So even with a really large page size, the saved objects client would not consume more heap than what's necessary to parse a 536MB response (still quite a lot).
Proposed circuit breaker
In almost all cases, loading, decompressing, and deserialising more than 500MB of response data is undesirable, even if it happens across several pages. Instead, we should run aggregations in Elasticsearch, or load a small set of fields to reduce the response payload size.
We should add a circuit breaker to `createPointInTimeFinder` to stop paging after it has collected 536MB (or 256MB?) of response pages. Plugins that need to load more data, and know that their code is memory safe, can opt to increase the circuit breaker.
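A minimal sketch of what such a circuit breaker could look like: a wrapper around any async iterable of response pages that tracks the accumulated (serialised) byte size and aborts once it crosses a budget. None of these names exist in Kibana today; `withByteBudget`, `maxResponseBytes`, and `CircuitBreakerError` are assumptions for illustration only.

```typescript
// Hypothetical error type thrown when the byte budget is exceeded.
class CircuitBreakerError extends Error {
  constructor(message: string) {
    super(message);
    this.name = 'CircuitBreakerError';
  }
}

interface Page<T> {
  savedObjects: T[];
}

// Wraps an async iterable of response pages and stops iteration once the
// accumulated serialised page size crosses `maxResponseBytes`.
async function* withByteBudget<T>(
  pages: AsyncIterable<Page<T>>,
  maxResponseBytes: number,
): AsyncGenerator<Page<T>> {
  let consumed = 0;
  for await (const page of pages) {
    consumed += Buffer.byteLength(JSON.stringify(page), 'utf8');
    if (consumed > maxResponseBytes) {
      throw new CircuitBreakerError(
        `Aggregated response size exceeded ${maxResponseBytes} bytes`,
      );
    }
    yield page;
  }
}
```

A plugin that knows its code is memory safe would then pass a larger budget (or whatever escape hatch we settle on) instead of the default.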
> We should add a circuit breaker to `createPointInTimeFinder` to stop paging after it has collected 536MB (or 256MB?) of response pages
Using a config setting for this default would provide an easy escape hatch for customers if needed.
We could then set it to 512MB for traditional and 256MB for serverless, maybe?
> Plugins that need to load more data and know that their code is memory safe, can decide to increase the circuit breaker
I wonder: do we want to allow them to increase the memory limit, or simply to disable the circuit breaker?
If we follow the logic that they know their code is "memory safe", then I would assume the best approach, in terms of DX, is being able to disable the circuit breaker logic via an option, instead of having to pass an arbitrary memory limit. WDYT?
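The two option shapes being discussed could look roughly like this. Neither interface exists in Kibana; the names (`maxResponseSizeBytes`, `disableCircuitBreaker`) are hypothetical, purely to make the DX trade-off concrete.

```typescript
// Design A: callers raise the budget to an arbitrary number of bytes.
interface FinderOptionsWithLimit {
  maxResponseSizeBytes?: number;
}

// Design B: callers simply opt out of the circuit breaker entirely.
interface FinderOptionsWithToggle {
  disableCircuitBreaker?: boolean;
}

// Assumed default budget (512MB), mirroring the values discussed above.
const DEFAULT_BUDGET_BYTES = 512 * 1024 * 1024;

// With the toggle design, resolving the effective budget is trivial and the
// caller never has to invent a "big enough" number.
function effectiveBudget(opts: FinderOptionsWithToggle): number {
  return opts.disableCircuitBreaker
    ? Number.POSITIVE_INFINITY
    : DEFAULT_BUDGET_BYTES;
}
```

The toggle keeps the API honest: either the default protects you, or you explicitly declare your code memory safe, with no magic numbers in between.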
With the introduction of the point-in-time finder after #93770 and #92981, the following problematic pattern has become fairly common:
Whenever we load all results into memory, this adds a lot of memory pressure and, in the worst case, can lead to OOMs.
Our developer docs specifically call out that this should not be used in a route handler unless the results are streamed (https://github.com/elastic/kibana/blob/main/packages/core/saved-objects/core-saved-objects-api-server/src/saved_objects_client.ts#L340-L346), but I think we should have been much clearer about why we have this warning and provided a proper example.
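To make the warning concrete, here is the problematic pattern next to a per-page alternative, against an in-memory stand-in for the finder. The real `createPointInTimeFinder` lives in Kibana's saved objects client; `createMockFinder` below is a mock that only mimics its paged async iteration.

```typescript
interface SavedObject {
  id: string;
}

// Stand-in for createPointInTimeFinder: yields one page of results at a time.
function createMockFinder(pages: SavedObject[][]) {
  return {
    async *find() {
      for (const page of pages) yield { saved_objects: page };
    },
  };
}

// Anti-pattern: accumulate every page into one array before responding.
async function collectAll(finder: ReturnType<typeof createMockFinder>) {
  const all: SavedObject[] = [];
  for await (const page of finder.find()) {
    all.push(...page.saved_objects); // grows unboundedly with the result set
  }
  return all;
}

// Better: process (or stream) each page as it arrives, so only one page is
// held in memory at a time and earlier pages can be garbage collected.
async function countPerPage(finder: ReturnType<typeof createMockFinder>) {
  let count = 0;
  for await (const page of finder.find()) {
    count += page.saved_objects.length;
  }
  return count;
}
```

In a route handler, the per-page loop would write each page to the response stream instead of counting, which keeps heap usage flat regardless of the total result size.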