output/cloudv2: Unlimited size for body payload #3120
Conversation
Codecov Report

@@           Coverage Diff            @@
##          master    #3120     +/-  ##
========================================
- Coverage   73.86%   73.84%   -0.03%
========================================
  Files         243      243
  Lines       18492    18502      +10
========================================
+ Hits        13659    13662       +3
- Misses       3964     3969       +5
- Partials      869      871       +2
========================================

Flags with carried forward coverage won't be shown.
Force-pushed from 79be0d6 to e1b7419
Force-pushed from 1a8650f to bcd5532
We discussed it internally and decided to drop the check on the client side.
LGTM, but out of curiosity, why drop the check altogether instead of setting a reasonable limit?
There must be some limit in the backend for payload size. Couldn't this be exceeded in some cases? I suppose the backend would return HTTP 413, and the request would fail anyway. 🤔
@imiric It could, but it would be something deterministic that we could fix. For example, if we have a lot of aggregates per time series, the value set on the backend might need to be raised to support it. But that means that once we detect it and adjust the config (the limit and the chunk limit), it should not happen again, because we are aggregating over a finite variable (time: 3s for the aggregation period and 8s for the waiting period).
Yeah, that is the main reason for dropping it: we are delegating control to the backend so we can tweak the limit without a direct dependency here that would increase complexity. I expect that we and the backend team will monitor our systems and adjust things if the metrics don't look correct.
Hey @codebien. Are there any blockers to merging this, or when will we know whether we're going to merge it?
Hey @olegbespalov, the release was the blocker; it's now ready to be merged.
We have decided to drop the check on the body payload's size. The remote service will take care of splitting the chunks further if they turn out to be too big to handle.
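For illustration only, here is a minimal Go sketch of what a flush loop without a client-side body-size check could look like. It is not the actual k6 cloudv2 output code: the `timeSeriesBatch` type, the `flush` function, the content type, and the URL are all assumptions made for the example.

```go
// Package cloudv2sketch: a hypothetical illustration, not the real k6 output.
package cloudv2sketch

import (
	"bytes"
	"fmt"
	"net/http"
)

// timeSeriesBatch stands in for the aggregated samples collected during the
// finite aggregation window (e.g. 3s aggregation + 8s waiting period).
type timeSeriesBatch struct {
	encoded []byte // already-serialized payload for one chunk
}

// flush posts each chunk as-is; it no longer measures the encoded body and
// rejects it locally. If a chunk were too big, the remote service would be
// the one to respond (e.g. with HTTP 413) and adjust its limits server-side.
func flush(client *http.Client, url string, chunks []timeSeriesBatch) error {
	for _, c := range chunks {
		resp, err := client.Post(url, "application/octet-stream", bytes.NewReader(c.encoded))
		if err != nil {
			return err
		}
		resp.Body.Close()
		if resp.StatusCode >= 400 {
			return fmt.Errorf("flush failed: %s", resp.Status)
		}
	}
	return nil
}
```

The design choice discussed above is reflected in the sketch by what is absent: there is no size guard before the POST, because the payload size is already bounded by the finite aggregation window and any remaining limit is enforced by the backend.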