S3 compatible storage #52

Closed

snerdish opened this issue Feb 15, 2016 · 13 comments

Comments

@snerdish

It would be useful to be able to specify an endpoint for S3-compatible storage, such as radosgw for Ceph.

@erincerys

Agreed. host_base and host_bucket do not seem to be supported.

@triwats

triwats commented Feb 17, 2016

+1

This would make the tool much more useful. Allowing for a profile design like the one found in s3cmd could potentially be a good way to do it.

@skydiver

These two options need to be read from .s3cmd to connect to a different service (in this case, Dreamhost):

host_base = objects.dreamhost.com
host_bucket = %(bucket)s.objects.dreamhost.com

@chouhanyang

I currently don't have access to those services. Maybe someone can help test those settings?

BTW, the new s4cmd switched to the boto3 library. I assume those services are API-compatible with S3, so there should be no problem.
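
For illustration only (this is not s4cmd's actual code): a minimal boto3 sketch of how the two s3cmd settings above roughly map onto the library, with host_base becoming endpoint_url and the %(bucket)s-style host_bucket corresponding to virtual-hosted addressing. The region, credentials, and bucket name are placeholders.

import boto3
from botocore.config import Config

# host_base roughly maps to endpoint_url; a "%(bucket)s.<host_base>" pattern
# corresponds to virtual-hosted addressing ("path" would keep the bucket in
# the URL path instead). All concrete values below are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.dreamhost.com",
    region_name="us-east-1",  # boto3 still needs a region for request signing
    aws_access_key_id="<ACCESS_KEY>",
    aws_secret_access_key="<SECRET_KEY>",
    config=Config(s3={"addressing_style": "virtual"}),
)

# List a bucket to confirm the service answers the standard S3 API.
for obj in s3.list_objects_v2(Bucket="<MYBUCKET>").get("Contents", []):
    print(obj["Key"])

If a plain boto3 call like this works against a given service, a boto3-based s4cmd should in principle be able to support it as well.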

@ursenj

ursenj commented Sep 26, 2016

host_base and host_bucket do not work or are being ignored. In my 30-second search of the Python code I could not find any reference to them.

Also, if you set the options in your configuration file and start tcpdump, you will still see it trying to head out to s3-1.amazonaws.com.

@erincerys

@chouhanyang I have access to a Cloudian installation that I can use to help test someone's implementation of these changes.

@ursenj

ursenj commented Sep 26, 2016

I have access to a Cloudian and a Riak S2 cluster as well that I can test against.

@arnolde

arnolde commented Oct 18, 2016

Too bad. I was really excited to find this tool after waiting for hours for s3cmd to upload 10,000 files, but unfortunately the fact that it seems to work ONLY with AWS makes it useless to me, since we have our own S3-compatible storage. Please implement support for the host_base and host_bucket parameters!

@triwats

triwats commented Oct 18, 2016

@arnolde until this is fixed and added as a feature, consider the duck.sh tool from the authors of Cyberduck. I have found it to be much faster than s3cmd for many/large files. Let us hope this feature is added soon; it doesn't seem far away.

https://duck.sh/

@navinpai
Contributor

A change to support --endpoint-url was recently added to s4cmd in #82 (still pending release), and I was wondering if that allows access to radosgw or Cloudian as mentioned above. Since I also don't have access to either of those, it is difficult for me to test any changes for this.

@ajostergaard

ajostergaard commented Oct 14, 2018

@navinpai this works for me with a third-party S3-compatible API as long as I add AWS_DEFAULT_REGION to the environment, for example:

# AWS_DEFAULT_REGION=<MYREGION> s4cmd --endpoint-url=https://<MYEP> ls s3://<MYBUCKET>/

s4cmd v2.1.0
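
Presumably the region is needed because boto3 itself will not build a client without one; a hedged sketch of that behaviour, keeping <MYEP> as a placeholder:

import boto3
from botocore.exceptions import NoRegionError

# With no region configured anywhere (environment, ~/.aws/config, or an
# explicit region_name), client creation fails, which would explain why
# AWS_DEFAULT_REGION has to be set for the s4cmd invocation above.
try:
    boto3.client("s3", endpoint_url="https://<MYEP>")
except NoRegionError as exc:
    print("No region configured:", exc)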

@navinpai
Contributor

Awesome! Sounds great! I'll go ahead and close this issue then. Thanks for the confirmation, @ajostergaard.

@ajostergaard

ajostergaard commented Oct 16, 2018

:) Worth adding a note to the docs? If a PR would be welcome, I'm happy to give it a go.
