S3 compatible storage #52
Agreed. host_base and host_bucket do not seem to be supported.
+1 This would make the tool much more useful. Allowing a profile design like that found in s3cmd could potentially be a good way to do it.
These two options need to be read from .s3cmd to connect to a different service (in this case, DreamHost):
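For reference, the settings in question look something like the excerpt below. This is a sketch only: the DreamHost-style hostnames shown are illustrative placeholders, and your provider's documentation is the authority for the actual endpoint names.

```ini
# Excerpt from an s3cmd configuration file; hostnames are examples only.
# host_base:   the service's base endpoint instead of s3.amazonaws.com
# host_bucket: the per-bucket hostname template
host_base = objects.example-provider.com
host_bucket = %(bucket)s.objects.example-provider.com
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
```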
I currently don't have access to those services. Maybe someone can help test those settings? BTW, the new s4cmd switched to the boto3 library. I assume those services are API-compatible with S3, so there should be no problem.
host_base and host_bucket do not work or are being ignored. In my 30-second search of the Python code I could not find any reference to them. Also, if you set the options in your configuration file and start tcpdump, you will still see it trying to head out to s3-1.amazonaws.com.
@chouhanyang I have access to a Cloudian installation that I can help test someone's implementation of these changes against.
I have access to a Cloudian and riak-s2 cluster as well that I can test against.
Too bad. I was really excited to find this tool after waiting for hours for s3cmd to upload 10,000 files, but unfortunately the fact that it seems to work ONLY with AWS makes it useless to me, since we have our own S3-compatible storage. Please implement support for the host_base and host_bucket parameters!
@arnolde until this is fixed and added as a feature, consider the duck.sh tool from the authors of Cyberduck. I have found it much faster than s3cmd for many/large files. Let us hope this feature is added soon; it doesn't seem far away.
A change to support --endpoint-url was added to s4cmd recently in #82 (still pending release), and I was wondering whether that allows access to radosgw or Cloudian as mentioned above. Since I also don't have access to either of those, it is difficult for me to test out any changes for this.
@navinpai this works for me with a 3rd party s3 compatible API as long as I add
s4cmd v2.1.0
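For anyone landing on this thread later, the --endpoint-url usage from #82 can be sketched roughly as follows. The endpoint URL and bucket name below are placeholders, not real services, and credentials are supplied the usual way (environment variables or command-line options):

```shell
# Sketch: point s4cmd (v2.1.0+) at a third-party S3-compatible service
# instead of AWS. The endpoint and bucket shown are placeholders.
s4cmd --endpoint-url https://objects.example.com ls s3://my-bucket/
```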
Awesome! Sounds great! I'll go ahead and close this issue then. Thanks for the confirmation @ajostergaard
:) Worth adding a note to the docs? If a PR would be welcome, happy to give it a go.
Would be useful to be able to specify an endpoint for S3 compatible storage, such as radosgw for ceph.