
SpooledTemporaryFile.write() is meant to be called with small chunks, not once with a large payload #400

Closed
wants to merge 1 commit

Conversation

jeffcjohnson

This fixes #383.

The backend was sending the entire contents of the S3 file to SpooledTemporaryFile.write() in a single call, which defeats the purpose of using SpooledTemporaryFile: it only checks whether it needs to switch to disk after each write.
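
For context, here is a minimal sketch of the chunked-write pattern this change is after (illustrative only, not the actual patch; CHUNK_SIZE, DEFAULT_MAX_SIZE, and the spool_from_stream helper are names and values made up for the example):

```python
import tempfile

CHUNK_SIZE = 64 * 1024                 # illustrative chunk size, not the project's setting
DEFAULT_MAX_SIZE = 10 * 1024 * 1024    # illustrative spool threshold


def spool_from_stream(source, max_size=DEFAULT_MAX_SIZE):
    """Copy a file-like object into a SpooledTemporaryFile in small chunks.

    SpooledTemporaryFile only checks whether it should roll over to disk
    after each write(), so a single write() of the whole payload keeps the
    entire file in memory no matter what max_size is set to.
    """
    spooled = tempfile.SpooledTemporaryFile(max_size=max_size)

    # Anti-pattern being removed: one huge write, so the rollover check
    # only runs after everything is already buffered in memory.
    #     spooled.write(source.read())

    # Chunked copy: rollover can happen as soon as max_size is exceeded.
    while True:
        chunk = source.read(CHUNK_SIZE)
        if not chunk:
            break
        spooled.write(chunk)

    spooled.seek(0)
    return spooled
```

boto3's StreamingBody.read() accepts a byte count, so the same loop applies when the source is an S3 response body; for ordinary file objects, shutil.copyfileobj(source, spooled, CHUNK_SIZE) performs the equivalent chunked copy.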

codecov-io commented Oct 4, 2017

Codecov Report

Merging #400 into master will decrease coverage by 0.05%.
The diff coverage is 0%.


@@            Coverage Diff             @@
##           master     #400      +/-   ##
==========================================
- Coverage    76.1%   76.05%   -0.06%     
==========================================
  Files          11       11              
  Lines        1578     1566      -12     
==========================================
- Hits         1201     1191      -10     
+ Misses        377      375       -2
Impacted Files                Coverage Δ
storages/backends/s3boto3.py  86.54% <0%> (-0.05%) ⬇️
storages/backends/s3boto.py   87.41% <0%> (-0.05%) ⬇️
storages/backends/gcloud.py   95.7%  <0%> (+0.9%)  ⬆️


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update ea5649f...4188866.

@jschneier
Owner

This fix was merged in ea0986d.

jschneier closed this Aug 31, 2018

Successfully merging this pull request may close these issues.

Amazon S3: can't get file chunk without loading whole file into memory
4 participants