
Chunker: Always seek on the uncompressed stream. #15669

Closed
wants to merge 1 commit into from

Conversation

benjaminp
Collaborator

The `WriteRequest.write_offset` field has bizarre semantics during compressed uploads as documented in the remote API protos: https://github.com/bazelbuild/remote-apis/blob/3b4b6402103539d66fcdd1a4d945f660742665ca/build/bazel/remote/execution/v2/remote_execution.proto#L241-L248 In particular, the write offset of the first `WriteRequest` refers to the offset in the uncompressed source.

This change ensures we always seek the uncompressed stream to the correct offset when starting an upload call. The old code could fail to resume compressed uploads under some conditions. The `progressiveCompressedUploadShouldWork` test purported to exercise this situation. The test, however, contained the same logic error as the code under test.
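The resume semantics described above can be sketched as follows. This is a minimal, hypothetical illustration (the class and field names are not Bazel's actual `Chunker` internals), assuming the server-reported committed size refers to the uncompressed source:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Minimal, hypothetical sketch (names are not Bazel's) of resuming an
// upload by seeking the *uncompressed* source, which is what the remote
// API's write_offset semantics require for compressed blobs.
class UncompressedSeekSketch {
  final InputStream data;  // uncompressed source stream
  long offset;             // current position in the uncompressed source

  UncompressedSeekSketch(byte[] blob) {
    this.data = new ByteArrayInputStream(blob);
    this.offset = 0;
  }

  // Skip forward on the uncompressed stream so that the next chunk (and
  // any compression applied to it) starts at toOffset.
  void seek(long toOffset) throws IOException {
    long toSkip = toOffset - offset;
    while (toSkip > 0) {
      long skipped = data.skip(toSkip);
      if (skipped <= 0) {
        throw new IOException("cannot seek to offset " + toOffset);
      }
      toSkip -= skipped;
      offset += skipped;  // keep the bookkeeping in sync with the stream
    }
  }

  public static void main(String[] args) throws IOException {
    byte[] blob = "hello, uncompressed world".getBytes();
    UncompressedSeekSketch chunker = new UncompressedSeekSketch(blob);
    chunker.seek(7);  // pretend the server committed 7 uncompressed bytes
    System.out.println((char) chunker.data.read());  // prints: u
  }
}
```

The key point is that the skip happens before any compression, so a resumed compressed upload restarts the compressor from the correct uncompressed position.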
@benjaminp benjaminp requested a review from a team as a code owner June 13, 2022 21:11
@brentleyjones
Contributor

Would this fully fix #14654, then? Might be worth including in a 5.3 release if one gets made.

@sgowroji sgowroji added team-Remote-Exec Issues and PRs for the Execution (Remote) team awaiting-review PR is awaiting review from an assigned reviewer labels Jun 14, 2022
@benjaminp benjaminp deleted the chunker-seeking branch June 15, 2022 14:32
@brentleyjones
Contributor

@bazel-io flag

@bazel-io bazel-io added the potential release blocker Flagged by community members using "@bazel-io flag". Should be added to a release blocker milestone label Jun 15, 2022
@ckolli5

ckolli5 commented Jun 17, 2022

@bazel-io fork 5.3.0

@bazel-io bazel-io removed the potential release blocker Flagged by community members using "@bazel-io flag". Should be added to a release blocker milestone label Jun 17, 2022
@ckolli5

ckolli5 commented Jun 30, 2022

Hello @benjaminp, I am trying to cherry-pick these changes to release-5.3.0, but the presubmit checks are failing. Could you please help me cherry-pick these changes with the appropriate commits? Thanks!

ckolli5 added a commit that referenced this pull request Jul 7, 2022
* Chunker: Always seek on the uncompressed stream.

The `WriteRequest.write_offset` field has bizarre semantics during compressed uploads as documented in the remote API protos: https://github.com/bazelbuild/remote-apis/blob/3b4b6402103539d66fcdd1a4d945f660742665ca/build/bazel/remote/execution/v2/remote_execution.proto#L241-L248 In particular, the write offset of the first `WriteRequest` refers to the offset in the uncompressed source.

This change ensures we always seek the uncompressed stream to the correct offset when starting an upload call. The old code could fail to resume compressed uploads under some conditions. The `progressiveCompressedUploadShouldWork` test purported to exercise this situation. The test, however, contained the same logic error as the code under test.

Closes #15669.

PiperOrigin-RevId: 455083727
Change-Id: Ie22dacf31f15644c7a83f49776e7a633d8bb4bca

* Updated chunker.java file.

* Update src/test/java/com/google/devtools/build/lib/remote/ByteStreamUploaderTest.java

Co-authored-by: Benjamin Peterson <benjamin@locrian.net>

* Update src/test/java/com/google/devtools/build/lib/remote/ByteStreamUploaderTest.java

Co-authored-by: Benjamin Peterson <benjamin@locrian.net>

* Update src/test/java/com/google/devtools/build/lib/remote/ByteStreamUploaderTest.java

Co-authored-by: Benjamin Peterson <benjamin@locrian.net>

Co-authored-by: Benjamin Peterson <benjamin@engflow.com>
Co-authored-by: Benjamin Peterson <benjamin@locrian.net>
   */
  public void seek(long toOffset) throws IOException {
-   if (toOffset < offset) {
+   if (initialized && toOffset >= offset && !compressed) {
      ByteStreams.skipFully(data, toOffset - offset);

Was just reviewing the release notes today for Bazel 5.3.0 and came across this.

It looks like with this change, offset is no longer updated here when skipFully is called. Just want to sanity-check that this is the intended behavior? (I am not super familiar with Bazel internals and how seek is called, but I am worried this would result in extra bytes being discarded if offset is not updated.)
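A hypothetical demo of the concern above (not Bazel's actual code): if the bookkeeping offset is not advanced after the skip, a second forward seek re-skips bytes that were already consumed.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical demo (not Bazel's actual code) of a seek that skips bytes
// on the stream but forgets to advance its own offset bookkeeping.
class StaleOffsetDemo {
  static final InputStream data = new ByteArrayInputStream(new byte[100]);
  static long offset = 0;

  static void buggySeek(long toOffset) throws IOException {
    data.skip(toOffset - offset);
    // Bug: missing `offset = toOffset;`
  }

  public static void main(String[] args) throws IOException {
    buggySeek(10);  // consumes 10 bytes; offset is still 0
    buggySeek(10);  // consumes 10 MORE bytes instead of none
    System.out.println(data.available());  // prints 80, not the expected 90
  }
}
```

With the bookkeeping update in place, the second `buggySeek(10)` would compute a skip of zero bytes and leave the stream untouched.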

Collaborator Author


Thanks; this is a bug. I think it's unlikely to be triggered in practice, since seeking an initialized chunker forward is rare.



got it, thanks for clarifying!

@ShreeM01 ShreeM01 removed the awaiting-review PR is awaiting review from an assigned reviewer label Sep 15, 2022