avoid chunked transfers in putBlob when possible #26
This is to avoid Downloads.jl initiating a chunked transfer when a file IOStream is supplied to putBlob.

In 0.4.0, we switched to using Downloads.jl as the HTTP client instead of HTTP.jl. This issue happens because Downloads.jl cannot determine the content length when an IO instance is provided to upload data from, so it switches to chunked transfer encoding. The putBlob Azure API endpoint does not support chunked transfers.

The solution is to avoid chunked transfers for putBlob. This PR to Downloads.jl (JuliaLang/Downloads.jl#167) would let it also look at the Content-Length header we supply to determine the data size to expect in the IO instance we pass. But until that Downloads.jl PR is merged (and backported), we need this change in Azure.jl to detect IO handles that point to locally mappable files, supply the correct content length, and pass an IOBuffer of the memory mapped file contents as the data to upload instead.

fixes: #25
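The workaround described above can be sketched roughly as follows. This is a minimal illustration, not the actual Azure.jl code; the helper name `upload_body` is hypothetical, and it assumes the caller sets the Content-Length header from the returned length:

```julia
using Mmap

# Hypothetical helper: if `io` is a file-backed IOStream, memory-map its
# contents into an IOBuffer so the upload has a known, exact length and
# Downloads.jl never needs to fall back to chunked transfer encoding.
# For any other IO, return it unchanged with an unknown length.
function upload_body(io::IO)
    if io isa IOStream
        len = filesize(io)
        # Map the whole file from offset 0 (assumes a non-empty file);
        # wrapping the mapped bytes in an IOBuffer gives an in-memory
        # readable view without copying the file into heap memory.
        data = IOBuffer(Mmap.mmap(io, Vector{UInt8}, len, 0))
        return data, len
    end
    return io, nothing  # length unknown: chunked transfer would apply here
end
```

With the mapped IOBuffer and its length in hand, the caller can pass an explicit `Content-Length` header to the request, which the putBlob endpoint accepts.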