Add batched exports to the Prometheus Remote Write Exporter #2249
Conversation
Codecov Report

@@            Coverage Diff             @@
##           master    #2249      +/-   ##
==========================================
+ Coverage   91.98%   92.01%    +0.03%
==========================================
  Files         271      271
  Lines       15664    15681       +17
==========================================
+ Hits        14408    14429       +21
+ Misses        854      851        -3
+ Partials      402      401        -1

Continue to review full report at Codecov.
@huyan0, will you be able to review this as the original author of the exporter?
@amanbrar1999 is actually the current maintainer from AWS for the Prometheus Remote Write Exporter. It would also be great if we could get a review from @jmacd. Thank you! cc - @alolita
LGTM
LGTM
Description:
Currently, there is no limit on the size of a remote_write export request sent to a Cortex instance. This can cause problems if a single Prometheus scrape produces an extremely large number of time series.
This PR splits each export into a batch of concurrent requests, capping each request at 3 MB and 40,000 time series.
A future PR will make the maximum size of each request configurable.
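For illustration, here is a minimal Go sketch of the batching approach described above. The `TimeSeries` type, `sizeOf` helper, and `send` callback are hypothetical placeholders rather than the exporter's actual API; only the 3 MB and 40,000-series caps come from this PR.

```go
package batcher

import "sync"

const (
	maxBatchBytes  = 3 * 1024 * 1024 // cap on marshaled request size (3 MB, per this PR)
	maxBatchSeries = 40000           // cap on time series per request (per this PR)
)

// TimeSeries is a hypothetical stand-in for the exporter's series type.
type TimeSeries struct{ /* labels, samples, ... */ }

// sizeOf is a hypothetical helper returning the marshaled size of one series;
// in practice this would come from the protobuf Size() of the series.
func sizeOf(ts TimeSeries) int { return 0 }

// batch splits the series into groups that respect both caps. A new batch is
// started once adding the next series would exceed either limit.
func batch(series []TimeSeries) [][]TimeSeries {
	var batches [][]TimeSeries
	var cur []TimeSeries
	curBytes := 0
	for _, ts := range series {
		sz := sizeOf(ts)
		if len(cur) > 0 && (curBytes+sz > maxBatchBytes || len(cur) >= maxBatchSeries) {
			batches = append(batches, cur)
			cur, curBytes = nil, 0
		}
		cur = append(cur, ts)
		curBytes += sz
	}
	if len(cur) > 0 {
		batches = append(batches, cur)
	}
	return batches
}

// export sends all batches concurrently and waits for them to finish.
func export(series []TimeSeries, send func([]TimeSeries) error) {
	var wg sync.WaitGroup
	for _, b := range batch(series) {
		wg.Add(1)
		go func(b []TimeSeries) {
			defer wg.Done()
			_ = send(b) // error handling elided in this sketch
		}(b)
	}
	wg.Wait()
}
```

Note that both limits are enforced together: a batch closes when either the byte budget or the series count would be exceeded, so a scrape with many small series is capped by count while a scrape with a few very large series is capped by size.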
Testing: