
Throttling of S3 transfers #343

Closed

lukemundy opened this issue Nov 6, 2015 · 5 comments
Labels
duplicate This issue is a duplicate. response-requested Waiting on additional information or feedback.

Comments

@lukemundy

According to the comments in boto3/s3/transfer.py, it is possible for S3 transfers to be throttled based on a user-configurable maximum-bandwidth setting:

https://github.com/boto/boto3/blob/develop/boto3/s3/transfer.py#L21

I've been browsing this repo as well as the botocore repo for the last 20 minutes or so, but I can't find any further reference to throttling behaviour or a configuration option for max bandwidth.

Does this functionality exist, or was it a planned feature that hasn't been implemented yet? If it's the latter, when is this functionality due to be completed? I'm currently doing long-running uploads to S3 and am looking for something to limit the bandwidth used in order to prevent network saturation.

@kyleknap
Contributor

kyleknap commented Nov 6, 2015

There is no functionality for max bandwidth. In terms of automatic throttling, it means the transfer manager will do exponential-backoff retry logic if requests start failing due to throttling-related issues (e.g. read timeouts).

The best option you have as of now is setting max_concurrency on the TransferConfig object: https://boto3.readthedocs.org/en/latest/reference/customizations/s3.html#boto3.s3.transfer.TransferConfig. This ensures that only a set number of requests are in flight at a time. Let me know if this helps. Otherwise, I can mark it as a feature request to be able to specify a max bandwidth rate. We have the same request in the AWS CLI.

@kyleknap kyleknap added the response-requested Waiting on additional information or feedback. label Nov 6, 2015
@jamesls
Member

jamesls commented Nov 6, 2015

@kyleknap We should probably remove that snippet from the docstrings though. The intent was to implement it in the initial version, but given that it didn't make the first cut of this feature, we should probably remove it until we support this so other users don't get confused.

@lukemundy
Author

Thanks for the response.

Unfortunately, I don't think passing values for max_concurrency will help, at least not for my use case. Essentially, I want to be able to upload large backups to S3 over the course of eight or so hours without saturating the network link and causing issues for other services. I suspect that reducing max_concurrency will have very little, if any, effect on the total amount of bandwidth used. That said, I haven't had an opportunity to test it yet.

It would be great if this could be considered as a feature addition to either the AWS CLI or just this library. So far I haven't been able to find a practical solution that lets me control bandwidth usage, at least not on Windows platforms.

@JordonPhillips JordonPhillips added the duplicate This issue is a duplicate. label Jul 13, 2017
@JordonPhillips
Contributor

This is the same as aws/aws-cli#1090

@ctappy

ctappy commented Aug 22, 2017

Sorry to necropost, but I feel that someone else with the same issue, after seeing this post, might not realize that trickle alone won't solve it. With large S3 files, trickle will crash when boto3 runs 10 concurrent uploads (the default); reducing the number of concurrent uploads resolves the crash.
