Add Retry Behaviour to AWS Kinesis Data Firehose Sink #12835
Comments
Hi @mattaltberg! What version of Vector were you using when you observed this behavior?
Hey @jszwedko, I'm using 0.16.1. I'll ping you again after I try using the latest version.
Thanks @mattaltberg. I am curious about the newest version (0.21.2), since 0.21.0 included changes to switch to the new AWS SDK and to handle throttling responses consistently.
@jszwedko I tested with 0.21.2 and still got the same throttling issue with my Firehose stream. Is it possible that handling isn't used for the Firehose sink?
Interesting. Are you seeing throttle messages in Vector's logs? Can you share the log output? The retry logic is defined in vector/src/sinks/aws_kinesis_firehose/config.rs (lines 258 to 270 at 498a4c8), which in turn calls code at lines 26 to 63 of the same commit (498a4c8).
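For context, here is a minimal, standalone sketch of the kind of request-level retry predicate that code path implements: transient errors such as throttling are retried, everything else is not. The `FirehoseError` enum and `is_retriable_error` function are hypothetical stand-ins for illustration, not Vector's actual types.

```rust
// A minimal, hypothetical sketch of a request-level retry predicate: treat
// throttling and service-unavailable errors as transient and retriable.
// These types are illustrative stand-ins, not Vector's actual implementation.

#[derive(Debug)]
enum FirehoseError {
    /// The whole request was rejected due to throttling.
    Throttling,
    /// The service was temporarily unavailable.
    ServiceUnavailable,
    /// Any other error (validation, auth, ...): not worth retrying.
    Other(String),
}

/// Returns true if the failed request should be retried.
fn is_retriable_error(error: &FirehoseError) -> bool {
    matches!(
        error,
        FirehoseError::Throttling | FirehoseError::ServiceUnavailable
    )
}

fn main() {
    assert!(is_retriable_error(&FirehoseError::Throttling));
    assert!(is_retriable_error(&FirehoseError::ServiceUnavailable));
    assert!(!is_retriable_error(&FirehoseError::Other("validation".into())));
    println!("retry predicate sketch OK");
}
```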
If you are able to re-run with the environment variable
@jszwedko The oddness continues. I'm not seeing any throttling exceptions, but I am seeing plenty of
The thing is, when I check the AWS Console, I can see a bunch of records in the Throttled Records chart, which surprisingly do not show up as SDK errors.
Interesting, thanks for sharing the response @mattaltberg! It seems like maybe we should be looking for
Also, it seems to return one response per record; in your log, some of the records went in. It looks like we'll need to handle retrying only a subset of the records (similar to Elasticsearch). At the least, for an initial implementation, we could retry the full request.
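As a rough illustration of that partial-retry idea, here is a sketch. The `ResponseEntry` struct and `records_to_retry` function are hypothetical stand-ins that only mirror the shape of Firehose's `PutRecordBatch` response (one entry per record, with an error code when that record was rejected), not the actual AWS SDK types.

```rust
// Hypothetical stand-in mirroring the shape of a PutRecordBatch response:
// one entry per record, carrying an error code (for example a throttling
// error such as "ServiceUnavailableException") when the record was rejected.
struct ResponseEntry {
    error_code: Option<String>,
}

/// Return only the records whose per-record entry reports an error, so that
/// just this subset can be resubmitted instead of the whole batch.
fn records_to_retry<T: Clone>(records: &[T], responses: &[ResponseEntry]) -> Vec<T> {
    records
        .iter()
        .zip(responses)
        .filter(|(_, entry)| entry.error_code.is_some())
        .map(|(record, _)| record.clone())
        .collect()
}

fn main() {
    let records = vec!["a", "b", "c"];
    let responses = vec![
        ResponseEntry { error_code: None },
        ResponseEntry { error_code: Some("ServiceUnavailableException".into()) },
        ResponseEntry { error_code: None },
    ];
    // Only "b" failed, so only "b" would be resubmitted.
    assert_eq!(records_to_retry(&records, &responses), vec!["b"]);
}
```

For a first pass, the simpler fallback mentioned above (retrying the full request when any record fails) avoids this bookkeeping, at the cost of re-sending records that already succeeded.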
Yeah, if you want more information, my setup is using a
Hey @jszwedko, any updates? I'm curious if I'll need to keep upping my Firehose quota.
Hi @mattaltberg! Unfortunately nothing yet, but this is in the backlog.
Is this issue already fixed? Or is there any plan to fix it?
It's not fixed yet, but a contributor has a PR open for it: #16771
Use Cases
Currently, if any records are throttled by AWS, Vector does not retry sending them to Firehose; the data is simply dropped. We should add retry behaviour so that throttled records are eventually delivered.
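To make the requested behaviour concrete, here is a minimal, hypothetical sketch of resubmitting throttled records with exponential backoff. `send_batch` is a placeholder (a real sink would call the Firehose API and inspect the per-record error codes in the response), and none of the names below come from Vector's codebase.

```rust
use std::thread::sleep;
use std::time::Duration;

/// Illustrative stand-in for one batch submission: returns the indices of
/// records that were throttled (an empty Vec means the whole batch was
/// accepted). A real implementation would call the Firehose API and inspect
/// the per-record error codes in the response.
fn send_batch(records: &[String]) -> Vec<usize> {
    let _ = records;
    Vec::new()
}

/// Resubmit throttled records with exponential backoff, up to `max_attempts`.
/// On failure, returns the records that were still not accepted.
fn send_with_retries(mut pending: Vec<String>, max_attempts: u32) -> Result<(), Vec<String>> {
    let mut delay = Duration::from_millis(100);
    for _ in 0..max_attempts {
        let failed = send_batch(&pending);
        if failed.is_empty() {
            return Ok(()); // every record was accepted
        }
        // Keep only the throttled records for the next attempt.
        let next: Vec<String> = failed.iter().map(|&i| pending[i].clone()).collect();
        pending = next;
        sleep(delay); // a real async sink would use a non-blocking timer
        delay = delay * 2; // exponential backoff
    }
    Err(pending) // retry budget exhausted; today these records are dropped
}

fn main() {
    let records = vec!["event-1".to_string(), "event-2".to_string()];
    match send_with_retries(records, 5) {
        Ok(()) => println!("all records delivered"),
        Err(dropped) => eprintln!("gave up on {} records", dropped.len()),
    }
}
```

An actual implementation in the sink would presumably plug into Vector's existing retry and backoff machinery rather than sleeping inline, but the flow is the same: inspect the response, keep the throttled subset, and resubmit until it is empty or the retry budget is exhausted.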
Attempted Solutions
No response
Proposal
No response
References
No response
Version
No response