Fix possible OOME in ChannelInputStream #430
base: master
Conversation
Codecov Report
@@             Coverage Diff             @@
##             master     #430    +/-   ##
===========================================
+ Coverage     55.37%   55.40%   +0.03%
- Complexity     1180     1181       +1
===========================================
  Files           191      191
  Lines          7755     7761       +6
  Branches        702      703       +1
===========================================
+ Hits           4294     4300       +6
- Misses         3110     3111       +1
+ Partials        351      350       -1
Continue to review full report at Codecov.
If the buffer is read more slowly than incoming data arrives, the buffer might continue growing endlessly, finally resulting in an OOME.
synchronized (buf) {
    if (buf.wpos() >= chan.getLocalMaxPacketSize() && buf.available() > 0) {
        buf.notifyAll();
        Thread.yield();
Why would the `yield` be appropriate here? As the documentation specifies, this is not something you typically call. Furthermore, this is a busy-wait construct, which is not pretty... Can't we solve this with a lock or a latch?
Without a load of tests I'll also not gladly accept this PR...
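The lock-based alternative the review asks for could look roughly like the following sketch. This is not SSHJ's actual `ChannelInputStream`; `BoundedWindowBuffer`, `maxChunks`, `put`, and `take` are hypothetical names used to illustrate a writer that blocks on the buffer's monitor (instead of spin-yielding) until the reader has drained below a configurable limit.

```java
import java.util.ArrayDeque;

// Hypothetical sketch, not SSHJ code: a bounded buffer where the writer
// parks on the monitor instead of busy-waiting with Thread.yield().
final class BoundedWindowBuffer {
    private final ArrayDeque<byte[]> chunks = new ArrayDeque<>();
    private final int maxChunks; // assumed configurable limit
    private int size;

    BoundedWindowBuffer(int maxChunks) { this.maxChunks = maxChunks; }

    // Writer side: wait() releases the monitor while parked, so the
    // reader can enter and drain; no busy wait.
    synchronized void put(byte[] chunk) throws InterruptedException {
        while (chunks.size() >= maxChunks) {
            wait();
        }
        chunks.add(chunk);
        size += chunk.length;
        notifyAll(); // wake a blocked reader
    }

    // Reader side: block until data is available, then drain one chunk.
    synchronized byte[] take() throws InterruptedException {
        while (chunks.isEmpty()) {
            wait();
        }
        byte[] chunk = chunks.remove();
        size -= chunk.length;
        notifyAll(); // wake a blocked writer
        return chunk;
    }

    synchronized int available() { return size; }
}

public class Demo {
    public static void main(String[] args) throws Exception {
        BoundedWindowBuffer buf = new BoundedWindowBuffer(4);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) buf.put(new byte[32]);
            } catch (InterruptedException ignored) { }
        });
        producer.start();
        int total = 0;
        for (int i = 0; i < 100; i++) total += buf.take().length;
        producer.join();
        System.out.println(total);           // 100 chunks of 32 bytes
        System.out.println(buf.available()); // buffer fully drained
    }
}
```

The same shape could be expressed with `ReentrantLock`/`Condition`; the point is that the producer blocks and is signalled, rather than yielding in a loop.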
We could replace the yield with a very short sleep, as a 32K buffer fills quickly at high bandwidth -- maybe 1 ms? To me, this seemed the simplest solution, but I'm quite new to SSHJ.
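One caveat with the 1 ms sleep idea: `Thread.sleep` inside a `synchronized (buf)` block keeps holding the monitor, so the reader could not enter to drain the buffer during the sleep. `Object.wait(timeout)` releases the monitor while parked. A minimal standalone demo of the difference (all names hypothetical, unrelated to SSHJ):

```java
public class WaitVsSleep {
    public static void main(String[] args) throws Exception {
        final Object lock = new Object();
        final boolean[] readerRan = {false};

        // The "reader" can only enter this block if the writer
        // releases the monitor while waiting.
        Thread reader = new Thread(() -> {
            synchronized (lock) {
                readerRan[0] = true;
                lock.notifyAll();
            }
        });

        synchronized (lock) {
            reader.start();
            // wait(timeout) releases the monitor, letting the reader run.
            // A Thread.sleep(1) here would keep holding it instead.
            long deadline = System.currentTimeMillis() + 1000;
            while (!readerRan[0] && System.currentTimeMillis() < deadline) {
                lock.wait(10);
            }
        }
        reader.join();
        System.out.println(readerRan[0]);
    }
}
```

So if a timed pause were used, `buf.wait(1)` would be the safer form of the suggestion, since it lets the consumer make progress during the pause.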
That is exactly the reason why I think that limiting it to the current starting buffer size is a bad idea. We should limit it to some configurable amount, otherwise you get a very staggered pattern...
Did you experience something in production, or is this something hypothetical...?
> That is exactly the reason why I think that limiting it to the current starting buffer size is a bad idea. We should limit it to some configurable amount, otherwise you get a very staggered pattern...

That's true; I'm actually seeing this kind of pattern.

> Did you experience something in production, or is this something hypothetical...?

We are getting such OOMEs reported quite frequently from our users (a Git client), and I can reliably reproduce this when cloning a Git repository from my local VM.
@mstrap Can you work on the comments?
@hierynomus, so your suggestions are to:
? I'm happy to follow up with a patch, though without any additional tests.