fix(lib-storage): call AbortMultipartUpload when failing to CompleteMultipartUpload #6112

Merged 3 commits on May 21, 2024
lib/lib-storage/src/Upload.ts (15 changes: 9 additions & 6 deletions)
@@ -37,9 +37,9 @@ const MIN_PART_SIZE = 1024 * 1024 * 5;

 export class Upload extends EventEmitter {
   /**
-   * S3 multipart upload does not allow more than 10000 parts.
+   * S3 multipart upload does not allow more than 10,000 parts.
    */
-  private MAX_PARTS = 10000;
+  private MAX_PARTS = 10_000;

   // Defaults.
   private readonly queueSize: number = 4;
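As context for the two constants in this hunk, their interaction can be sketched as follows. This is illustrative arithmetic only, not part of the PR; `minPartSizeFor` is a hypothetical helper name.

```typescript
// Illustrative sketch: with S3's 10,000-part cap, the part size bounds the
// largest object a multipart upload can hold, so large objects need parts
// bigger than the 5 MiB floor. Constants mirror Upload.ts.
const MIN_PART_SIZE = 1024 * 1024 * 5; // 5 MiB
const MAX_PARTS = 10_000;

// Smallest part size (bytes) that fits totalSize into at most MAX_PARTS parts.
function minPartSizeFor(totalSize: number): number {
  return Math.max(MIN_PART_SIZE, Math.ceil(totalSize / MAX_PARTS));
}

// For small objects the 5 MiB floor applies; past 10,000 x 5 MiB total,
// the per-part size has to grow.
console.log(minPartSizeFor(1024 * 1024)); // the floor applies here
```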
@@ -61,6 +61,7 @@ export class Upload extends EventEmitter {
   private abortMultipartUploadCommand: AbortMultipartUploadCommand | null = null;

   private uploadedParts: CompletedPart[] = [];
+  private uploadEnqueuedPartsCount = 0;
   /**
    * Last UploadId if the upload was done with MultipartUpload and not PutObject.
    */
@@ -207,17 +208,17 @@

   private async __doConcurrentUpload(dataFeeder: AsyncGenerator<RawDataPart, void, undefined>): Promise<void> {
     for await (const dataPart of dataFeeder) {
-      if (this.uploadedParts.length > this.MAX_PARTS) {
+      if (this.uploadEnqueuedPartsCount > this.MAX_PARTS) {
Review comment (Contributor, Author):

Reason for this change: this.uploadedParts.length is updated only after an async operation (UploadPart) completes, which introduces a delay.

Due to request concurrency, this.uploadedParts.length can be far behind the actual number of parts that have been iterated and submitted.
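The lag the author describes can be sketched as follows. Everything here is illustrative (hypothetical names, a fake delayed "upload"), not lib-storage code: a counter bumped only after an awaited operation resolves trails the number of parts already submitted, while a counter bumped synchronously before the send stays accurate.

```typescript
// Sketch: a count updated *after* each async "UploadPart" resolves (like
// this.uploadedParts.length) trails the number of parts submitted; a count
// bumped synchronously on enqueue (like uploadEnqueuedPartsCount) does not.
async function simulate(totalParts: number): Promise<{ enqueued: number; completedAtCheck: number }> {
  let enqueued = 0;
  let completed = 0;
  const inFlight: Promise<void>[] = [];
  for (let part = 1; part <= totalParts; part++) {
    enqueued += 1; // synchronous: accurate immediately
    inFlight.push(
      new Promise<void>((resolve) =>
        setTimeout(() => {
          completed += 1; // only bumped after the fake "UploadPart" finishes
          resolve();
        }, 1)
      )
    );
  }
  // Snapshot taken right after submitting everything, before awaiting:
  // the completion-based count has not caught up yet.
  const completedAtCheck = completed;
  await Promise.all(inFlight);
  return { enqueued, completedAtCheck };
}
```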

         throw new Error(
-          `Exceeded ${this.MAX_PARTS} as part of the upload to ${this.params.Key} and ${this.params.Bucket}.`
+          `Exceeded ${this.MAX_PARTS} parts in multipart upload to Bucket: ${this.params.Bucket} Key: ${this.params.Key}.`
         );
       }

       if (this.abortController.signal.aborted) {
         return;
       }

-      // Use put instead of multi-part for one chunk uploads.
+      // Use put instead of multipart for one chunk uploads.
       if (dataPart.partNumber === 1 && dataPart.lastPart) {
         return await this.__uploadUsingPut(dataPart);
       }
@@ -263,6 +264,8 @@
       eventEmitter.on("xhr.upload.progress", uploadEventListener);
     }

+    this.uploadEnqueuedPartsCount += 1;
+
     const partResult = await this.client.send(
       new UploadPartCommand({
         ...this.params,
@@ -378,7 +381,7 @@
   }

   /**
-   * Abort the last multi-part upload in progress
+   * Abort the last multipart upload in progress
    * if we know the upload id, the user did not specify to leave the parts, and
    * we have a prepared AbortMultipartUpload command.
    */
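The overall shape of the fix named in the PR title (abort the multipart upload when completing it fails) can be sketched with a stand-in client interface. This is not the lib-storage implementation: `MultipartClient` and `finishUpload` are hypothetical names, whereas the real code sends `CompleteMultipartUploadCommand` and the prepared `AbortMultipartUploadCommand` through the S3 client.

```typescript
// Sketch of the abort-on-failure pattern with a stand-in client: if
// completing the multipart upload throws, abort it so the already-uploaded
// parts are cleaned up server-side, then rethrow the original error.
interface MultipartClient {
  complete(uploadId: string): Promise<void>;
  abort(uploadId: string): Promise<void>;
}

async function finishUpload(client: MultipartClient, uploadId: string): Promise<void> {
  try {
    await client.complete(uploadId);
  } catch (err) {
    await client.abort(uploadId); // discard orphaned parts on the server
    throw err; // surface the original completion failure to the caller
  }
}
```

A caller still sees the completion error; the abort only prevents the orphaned parts from lingering (and being billed) in the bucket.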