fix incorrect comparison in BufferedOutputStream #1010
Merged
This bug was causing some unintuitive behavior, such as a maximum file size of 2 GiB on POSIX systems, which I mentioned in #1008 (reply in thread).
Intuitively, imagine `len = 5676725273` and `bufSize = 131072`. Previously, the loop would iterate once: `n = MIN(bufSize - pos, len)` evaluates to `bufSize`. After copying 131072 bytes to the buffer, it goes on to the next iteration with `pos == bufSize`. It writes out the 131072 bytes and resets `pos` to zero. Now `len` (which is `5676725273 - 131072 = 5676594201`) is much bigger than `bufSize`, so it calls `this->s->write(b, len)`. This is an invalid operation, since `len` is much bigger than the buffer. In fact, the Linux `write` function can only write 2147479552 bytes in a single call, so in `File.h` the check `if (x < (ssize_t)len)` fails with `STREAM_ERROR`.

The loop quits with `STREAM_ERROR`, but we never check the return status of `ostream->write` here in PLY.cpp, so we end up with a mysterious 2 GB file.

This PR fixes it to the intended behavior: each full chunk is written at the buffer size, and this branch only writes whatever straggler is left behind on the last iteration. I've verified that it works by printing out debug statements and running `DensifyPointCloud --estimate-scale 1` on a large point cloud.

With this fix, densified point clouds with scale can now go over 2^31 bytes. For example:
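The intended loop behavior described above can be sketched as follows. This is a minimal illustration, not the actual OpenMVS `BufferedOutputStream` code; the `Sink` and `BufferedWriter` names are hypothetical, and the per-call cap stands in for the Linux `write` limit:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical sink that, like Linux write(2), refuses requests larger
// than some per-call cap; this mirrors the STREAM_ERROR path above.
struct Sink {
    std::vector<char> out;
    size_t cap;
    bool write(const char* b, size_t len) {
        if (len > cap) return false;   // oversized request fails
        out.insert(out.end(), b, b + len);
        return true;
    }
};

// Minimal sketch of the intended buffering loop: data is staged in a
// fixed-size buffer and flushed exactly one bufSize chunk at a time;
// only the final straggler smaller than bufSize stays in the buffer.
struct BufferedWriter {
    Sink* s;
    std::vector<char> buf;
    size_t pos = 0;
    BufferedWriter(Sink* sink, size_t bufSize) : s(sink), buf(bufSize) {}

    bool write(const char* b, size_t len) {
        const size_t bufSize = buf.size();
        while (len > 0) {
            // Copy at most what fits in the buffer this iteration.
            const size_t n = std::min(bufSize - pos, len);
            std::memcpy(buf.data() + pos, b, n);
            pos += n; b += n; len -= n;
            if (pos == bufSize) {      // buffer full: flush exactly bufSize bytes
                if (!s->write(buf.data(), bufSize)) return false;
                pos = 0;
            }
        }
        return true;                    // straggler (< bufSize) remains buffered
    }

    bool flush() {
        if (pos > 0 && !s->write(buf.data(), pos)) return false;
        pos = 0;
        return true;
    }
};
```

Because every call into the sink is at most `bufSize` bytes, a payload far larger than the sink's per-call cap still goes through intact, which is exactly what the buggy `this->s->write(b, len)` branch violated.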