Cherry-pick #15590 to 7.x: [Filebeat] Add timeout to GetObjectRequest for s3 input #15901
Cherry-pick of PR #15590 to 7.x branch. Original message:
Problem we see when using the s3 input:

When using the s3 input to read logs from an S3 bucket, after a while with a high volume of logs a `read: connection reset by peer` error showed up. This error is triggered by the `reader.ReadString` function; `processorKeepAlive` then found that `processMessage` was taking too long to run, longer than half of the configured visibility timeout, so the `changeVisibilityTimeout` function kept getting called repeatedly.

This PR adds a timeout to the `GetObjectRequest` API call, using the context pattern to cancel the request if it takes too long. This way, once the default timeout of 2 minutes is hit, the specific S3 object is skipped and the SQS message returns to the queue, so Filebeat can try to read it again later.
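For reference, the context pattern looks roughly like the sketch below. This is a minimal illustration, not the exact code from this PR: it assumes the request/`Send(ctx)` style of aws-sdk-go-v2 that the s3 input used at the time, and the bucket, key, and `contextTimeout` values are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"time"

	awssdk "github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// contextTimeout mirrors the 2 minute default mentioned above.
const contextTimeout = 2 * time.Minute

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		panic(err)
	}
	svc := s3.New(cfg)

	// Cancel the GetObject call if it runs longer than contextTimeout.
	ctx, cancel := context.WithTimeout(context.Background(), contextTimeout)
	defer cancel()

	req := svc.GetObjectRequest(&s3.GetObjectInput{
		Bucket: awssdk.String("my-bucket"), // placeholder values
		Key:    awssdk.String("my-object-key"),
	})
	resp, err := req.Send(ctx)
	if err != nil {
		// On timeout the request is canceled; the object is skipped and the
		// SQS message becomes visible again so it can be retried later.
		fmt.Println("GetObject failed:", err)
		return
	}
	defer resp.Body.Close()
	// ... read resp.Body as usual ...
}
```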
I decided to add a config option called `context_timeout` for the s3 input because, depending on your `visibility_timeout` value, `context_timeout` can be as large as half of `visibility_timeout`. This allows users to tune both timeout values when using the s3 input or the Filebeat aws module with larger S3 objects or smaller network bandwidth; see the config sketch below.
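Hypothetically, tuning the two values together could look like this (the queue URL and durations are illustrative; `queue_url` and `visibility_timeout` are existing s3 input options):

```yaml
filebeat.inputs:
- type: s3
  queue_url: https://sqs.us-east-1.amazonaws.com/123456789012/my-queue
  # Messages stay invisible to other consumers for 10 minutes while processed.
  visibility_timeout: 600s
  # Cancel a GetObject call after 5 minutes (at most half of visibility_timeout).
  context_timeout: 300s
```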
Closes #15502