
[Status] /api/stats?extended=true throttling/caching #75886

Closed
afharo opened this issue Aug 25, 2020 · 3 comments
Labels
Team:Core Core services & architecture: plugins, logging, config, saved objects, http, ES client, i18n, etc

Comments

@afharo
Member

afharo commented Aug 25, 2020

Describe the feature:

It would be beneficial to apply some sort of throttling or caching to requests to the API /api/stats?extended=true.

Describe a specific use case for the feature:

We've noticed that some clusters suffer from timeouts and general Kibana slowness when requesting /api/stats?extended=true[&exclude_usage=false].

Looking at Metricbeat, it enforces a 10s timeout on all HTTP requests and polls every 10s. It requests the usage (exclude_usage=false) only once every 24h but, if that request fails, it retries on the next poll.

This leads to a situation where Kibana takes a bit longer to reply, Metricbeat times out and issues a new request 10s later, doubling the load of retrieving the usage and potentially making Kibana respond even more slowly. So the cycle repeats endlessly.

On the Metricbeat side, we are studying a backoff mechanism so that, if the usage request fails, it is not requested again for a while.
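A minimal sketch of what such a backoff schedule could look like (hypothetical, not Metricbeat's actual implementation; the function name and defaults are assumptions for illustration):

```typescript
// Hypothetical sketch: exponential backoff for the failed usage request,
// capped so a retry still always happens within the 24h usage cadence.
function backoffDelayMs(
  failures: number,
  baseMs = 10_000, // the 10s poll interval described above
  maxMs = 24 * 60 * 60 * 1_000 // the 24h usage cadence
): number {
  // Double the wait on each consecutive failure, up to the cap.
  return Math.min(baseMs * 2 ** failures, maxMs);
}
```

With this shape, the first failure waits one poll interval, subsequent failures wait progressively longer, and a persistently failing request can never be deferred past the normal 24h cycle.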

But I think we should also apply some throttling/caching/memoization on the Kibana side, to avoid overloading it when a consumer repeats the request too eagerly.
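One way such memoization could look (a hypothetical sketch, not Kibana's implementation; `memoizeWithTtl` and its parameters are made up for illustration): cache the expensive result for a short TTL and coalesce concurrent callers onto one in-flight request, so a retry storm like the one described above hits the collectors only once.

```typescript
// Hypothetical sketch: wrap an expensive async fetch (e.g. gathering usage
// stats) with a TTL cache plus in-flight coalescing.
function memoizeWithTtl<T>(fn: () => Promise<T>, ttlMs: number): () => Promise<T> {
  let cached: { value: T; expiresAt: number } | undefined;
  let inFlight: Promise<T> | undefined;

  return async () => {
    if (cached && Date.now() < cached.expiresAt) {
      return cached.value; // fresh cache hit: no new work
    }
    if (inFlight) {
      return inFlight; // a fetch is already running: share its result
    }
    inFlight = fn()
      .then((value) => {
        cached = { value, expiresAt: Date.now() + ttlMs };
        return value;
      })
      .finally(() => {
        inFlight = undefined; // allow a fresh fetch once this one settles
      });
    return inFlight;
  };
}
```

Even a very short TTL would break the feedback loop: two Metricbeat requests 10s apart would resolve from one underlying collection instead of two.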

@afharo afharo added the Team:Core Core services & architecture: plugins, logging, config, saved objects, http, ES client, i18n, etc label Aug 25, 2020
@elasticmachine
Contributor

Pinging @elastic/kibana-platform (Team:Platform)

@afharo
Member Author

afharo commented Aug 25, 2020

In this comment, there are some suggestions about returning 429 Too Many Requests responses, or even using the Retry-After header when the service is overloaded, to gently ask the consumer to back off the frequency of its requests.
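A minimal sketch of that suggestion (hypothetical, not code from Kibana; `createThrottle` and its limits are assumptions): a fixed-window counter that tells the HTTP layer to answer 429 and how many seconds the Retry-After header should advertise.

```typescript
// Hypothetical sketch: fixed-window throttle. When a caller exceeds the
// limit, return 429 plus the seconds left in the window for Retry-After.
interface ThrottleResult {
  status: 200 | 429;
  retryAfterSeconds?: number;
}

function createThrottle(limit: number, windowMs: number) {
  let windowStart: number | undefined;
  let count = 0;

  return (now = Date.now()): ThrottleResult => {
    if (windowStart === undefined || now - windowStart >= windowMs) {
      windowStart = now; // start a fresh window
      count = 0;
    }
    count += 1;
    if (count > limit) {
      // Advertise how long the consumer should wait before retrying.
      const retryAfterSeconds = Math.ceil((windowStart + windowMs - now) / 1000);
      return { status: 429, retryAfterSeconds };
    }
    return { status: 200 };
  };
}
```

The route handler would then set `Retry-After: <retryAfterSeconds>` on the 429 response, and a well-behaved consumer such as Metricbeat could use it to delay its next poll.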

@afharo
Member Author

afharo commented Feb 17, 2022

Closing this issue because we implemented a caching mechanism for the piece of logic that gathers data from the collectors.

@afharo afharo closed this as completed Feb 17, 2022
3 participants