[Segment Replication] Fix timeout issue by calculating time needed to process getSegmentFiles. #4426
These can be reduced with the version below.
Why is this change needed?
While testing benchmarks, there are scenarios where the size of the files to fetch comes out negative, something like "-1239252992" bytes. If the size of the files to fetch is negative, the time value will also be negative, which doesn't make sense and throws an error. I tried to use Math.abs(), but that is forbidden, so I have no other option. Please suggest if there are any ways to overcome this.
This is interesting. Where do you see a negative file length value, @Rishikesh1159? @mch2: do you know when we would have a negative file length value?
While running benchmarks I was logging the size of the files to fetch, and I observed this with a couple of replication events. I also observed that the time taken to process "-1239252992" bytes and "1239252992" bytes is the same, so I was looking for a way to convert it to a positive value.
Sorry for the confusion. The size of the files to fetch will never be negative. When I was testing locally I was storing the size in an int, so for larger values the int overflowed and I saw a negative size in the logs. I have since switched to a long, and this is no longer an issue.
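A minimal sketch of the overflow just described: summing large file lengths into an int wraps past Integer.MAX_VALUE and turns negative. The two file lengths below are made up, chosen only so the wrapped total matches the logged value:

```java
// Demonstrates why an int cannot hold multi-GB file size totals.
public class SizeOverflowDemo {
    public static void main(String[] args) {
        // Hypothetical file lengths; each fits in an int, the sum does not.
        long[] fileLengths = { 1_500_000_000L, 1_555_714_304L };

        int intTotal = 0;    // buggy: wraps above ~2.1 GB
        long longTotal = 0L; // correct: long easily holds multi-GB totals
        for (long length : fileLengths) {
            intTotal += (int) length;
            longTotal += length;
        }
        System.out.println(intTotal);  // prints -1239252992
        System.out.println(longTotal); // prints 3055714304
    }
}
```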
👍
This baseline appears to be derived from the m5.xlarge instance type, which means all instance types with lower throughput may fail persistently? Can we relax (reduce) this baseline to a smaller value to accommodate lower-grade instance types? It doesn't hurt the estimated time on high-grade instance types, since we estimate a higher number of minutes but the actual transfer completes faster; on the other hand, it prevents timeout issues on lower-grade machines.
+1. This feels a bit high; maybe bump by 1 minute for every 100 MB? This value should be toggleable with a setting; we can shoot for that in the next release.
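A hypothetical sketch of what such a toggleable setting could look like with OpenSearch's Setting API (assuming the 2.x package layout); the setting key and default here are assumptions, not anything the project has shipped:

```java
import org.opensearch.common.settings.Setting;
import org.opensearch.common.unit.TimeValue;

public final class SegmentReplicationSettings {

    // Hypothetical dynamic setting: timeout budget granted per 100 MB to fetch.
    public static final Setting<TimeValue> SEGREP_TIMEOUT_PER_100MB = Setting.timeSetting(
        "segrep.timeout_per_100mb",      // assumed key, for illustration only
        TimeValue.timeValueMinutes(1),   // assumed default: 1 minute per 100 MB
        Setting.Property.Dynamic,        // can be updated at runtime
        Setting.Property.NodeScope
    );
}
```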
Steps to identify the baseline:
-> I started a 3-node cluster and ran opensearch-benchmark with the SO dataset, logging the files-to-fetch size for every replication event and checking the logs afterwards. Across all replication event logs, the maximum files-to-fetch size that finished within the 1-minute timeout was 330 MB, so I took 330 MB as the baseline. After this change, no timeouts happened.
-> As mentioned above, I set the baseline to 330 MB based on those logs, but as both @dreamer-89 and @mch2 pointed out, it makes sense to reduce the baseline so that lower-throughput machines don't hit a timeout. I will bump by 1 minute for every 100 MB as @mch2 suggested; a sketch of that calculation follows.
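A minimal sketch of the "1 minute per 100 MB" estimate, assuming a small helper; the names ReplicationTimeouts, BYTES_PER_MINUTE, and computeTimeoutMinutes are illustrative, not the PR's actual code:

```java
public final class ReplicationTimeouts {

    // Assumed baseline: one minute of timeout budget per 100 MB to fetch.
    private static final long BYTES_PER_MINUTE = 100_000_000L;

    static long computeTimeoutMinutes(long sizeOfSegmentFiles) {
        // Round up so a partial 100 MB chunk still gets a full minute,
        // and never drop below a 1-minute floor for tiny transfers.
        long minutes = (sizeOfSegmentFiles + BYTES_PER_MINUTE - 1) / BYTES_PER_MINUTE;
        return Math.max(1L, minutes);
    }
}
```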
Should this be initialized with 0?
I think this initialization can be reduced as mentioned in the comment below.
Can this be reduced with the version below?
Let's take an example where the baseline is 100000000 bytes (100 MB) and the size of the segment files is 250000000 bytes (250 MB). When we do sizeOfSegmentFiles / baseSegmentFilesSize, the integer division truncates and the result is 2 min, but we need 3 min to process segments of this size, so we would hit a timeout. To avoid this we need to add a 1 before and after sizeOfSegmentFiles / baseSegmentFilesSize.
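A worked illustration of that truncation; rounding up with the classic ceiling-division idiom is one way to get the needed padding (variable names here are illustrative):

```java
long baseSegmentFilesSize = 100_000_000L; // 100 MB baseline
long sizeOfSegmentFiles = 250_000_000L;   // 250 MB of segment files

// Integer division truncates: 250 / 100 -> 2 minutes, which times out.
long truncated = sizeOfSegmentFiles / baseSegmentFilesSize;

// Ceiling division rounds up: (250 + 99.999...) / 100 -> 3 minutes.
long roundedUp = (sizeOfSegmentFiles + baseSegmentFilesSize - 1) / baseSegmentFilesSize;
```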
nit: We can add a unit test to verify the timeouts here. I am fine with taking this up in a separate PR.
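A hypothetical sketch of such a test, exercising a helper like the computeTimeoutMinutes() sketch earlier in this thread; the class and method names are illustrative:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class ReplicationTimeoutTests {

    @Test
    public void testTimeoutScalesWithSegmentFileSize() {
        // Tiny transfers still get the 1-minute floor.
        assertEquals(1L, ReplicationTimeouts.computeTimeoutMinutes(1_000_000L));
        // A partial 100 MB chunk rounds up to a full extra minute.
        assertEquals(3L, ReplicationTimeouts.computeTimeoutMinutes(250_000_000L));
        // Exact multiples of the baseline need no extra padding.
        assertEquals(2L, ReplicationTimeouts.computeTimeoutMinutes(200_000_000L));
    }
}
```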