[S3 upload] API to upload outputstream #1268
Comments
Can you describe your use-case in a little more depth? You'd like us to return something like this? (Ignore the syntax for now, just trying to understand.)

SdkOutputStream<PutObjectResponse> contentStream = s3.initiatePutObject(request);
contentStream.write("My Data".getBytes());
PutObjectResponse response = contentStream.complete();
See also: #1139 (comment)
Sorry for the late reply... Actually, I will be wrapping the contentStream into my zip output stream object.
Would a PipedInputStream and PipedOutputStream work for you? You'd still need to know the content length in this case and set it on the PutObjectRequest, otherwise the SDK will buffer the content into memory. If you don't know the content length, then I think your best option is to do a multipart upload and buffer each part into memory. The minimum part size is 5 MB, so memory consumption is manageable. I don't believe S3 supports chunked encoding, which would allow for dynamic content; if that's desired, you'll have to make a feature request to the S3 service team.
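The piped-stream workaround suggested above can be sketched with the JDK alone. A minimal sketch follows: a producer thread writes into a `PipedOutputStream`, and the consumer side drains the matching `PipedInputStream` the way the SDK would. The actual `s3.putObject` call is omitted; with the real SDK you would hand `in` to the request together with metadata whose content length matches the bytes written, per the comment above.

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.nio.charset.StandardCharsets;

public class PipedUploadSketch {
    public static void main(String[] args) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        // 64 KiB pipe buffer between the producer and consumer threads.
        PipedInputStream in = new PipedInputStream(out, 64 * 1024);

        byte[] payload = "generated content".getBytes(StandardCharsets.UTF_8);

        // Producer thread: stands in for the code generating the content.
        Thread writer = new Thread(() -> {
            try {
                out.write(payload);
                out.close(); // signals end-of-stream to the reader
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        });
        writer.start();

        // Consumer side: stands in for the SDK reading the InputStream.
        long total = 0;
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            total += n;
        }
        writer.join();
        System.out.println("bytes read: " + total);
    }
}
```

As the thread notes, this requires a second thread and knowing the total length up front, which is exactly the friction the feature request is about.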
@shorea I have tried the piped input/output streams. It involves making a reader thread and a writer thread, synchronizing them, and messaging between them; that wasn't very effective and had a few performance hits as well, and there is a lot of boilerplate code. I am already considering this as my backup option. I was hoping for an option from the SDK itself, but since you have mentioned that the support is not there in S3 itself, I think I will have to rely on workarounds.
Yeah, if we don't know the content length we have to buffer the contents into memory, which is obviously not ideal. Doing a multipart upload makes the buffering less of a problem but adds complexity to the upload.
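The multipart approach described above amounts to slicing a stream of unknown length into fixed-size buffers. Here is a sketch of just that chunking logic, assuming the 5 MiB minimum part size mentioned earlier; the per-part upload call (an `UploadPartRequest` in the real SDK) is elided and replaced by a counter.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class MultipartChunkSketch {
    // S3's minimum part size (the last part may be smaller).
    static final int PART_SIZE = 5 * 1024 * 1024;

    public static void main(String[] args) throws IOException {
        // Stand-in for dynamically generated content of unknown length: 12 MiB of zeros.
        InputStream source = new ByteArrayInputStream(new byte[12 * 1024 * 1024]);

        byte[] part = new byte[PART_SIZE];
        int filled = 0;
        int parts = 0;
        long totalBytes = 0;
        int n;
        while ((n = source.read(part, filled, PART_SIZE - filled)) != -1) {
            filled += n;
            if (filled == PART_SIZE) {
                // In a real upload: send this full buffer as one part, then reuse it.
                parts++;
                totalBytes += filled;
                filled = 0;
            }
        }
        if (filled > 0) {
            // Final, possibly short, part.
            parts++;
            totalBytes += filled;
        }
        System.out.println("parts: " + parts + ", bytes: " + totalBytes);
    }
}
```

Only one part-sized buffer is ever held in memory at a time, which is the "manageable memory consumption" trade-off the comment describes.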
Going to close this and open a feature request in our V2 repo. I think there are a couple of things we can do to make streaming easier via TransferManager.
The current API in the SDK accepts either a file or an InputStream object to upload content, which is more of a pull mechanism, where S3 reads the content.
Can we have an API that is more of a push mechanism, i.e. an API which accepts an OutputStream object so we can write any content?
Actually, we are using S3 to upload large files (in GBs) to be distributed to our users, which we generate based on their requests. Currently we save the file locally and then upload it; ideally we would like to generate/write the zip file directly to S3.
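The zip use case described above can be approximated today with the piped-stream workaround from the discussion: wrap a `ZipOutputStream` around a `PipedOutputStream` and let a second thread drain the pipe. A pure-JDK sketch follows; the consumer side here just unzips the bytes again to show they arrive intact, standing in for the upload the SDK would perform.

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class StreamZipSketch {
    public static void main(String[] args) throws Exception {
        PipedOutputStream pipeOut = new PipedOutputStream();
        PipedInputStream pipeIn = new PipedInputStream(pipeOut, 64 * 1024);

        // Producer thread: writes the zip straight into the pipe
        // instead of a local file. Entry name is illustrative.
        Thread producer = new Thread(() -> {
            try (ZipOutputStream zip = new ZipOutputStream(pipeOut)) {
                zip.putNextEntry(new ZipEntry("report.txt"));
                zip.write("hello s3".getBytes(StandardCharsets.UTF_8));
                zip.closeEntry();
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        });
        producer.start();

        // Consumer side: stands in for the uploader reading pipeIn.
        // We unzip to verify the streamed bytes round-trip correctly.
        try (ZipInputStream unzip = new ZipInputStream(pipeIn)) {
            ZipEntry entry = unzip.getNextEntry();
            byte[] body = unzip.readAllBytes();
            System.out.println(entry.getName() + ": "
                    + new String(body, StandardCharsets.UTF_8));
        }
        producer.join();
    }
}
```

Note the same caveat from the thread applies: without the final content length, the SDK side would either buffer the whole stream or need a multipart upload.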