s3cmd info does not work with ceph nautilus radosgw #1090
Comments
Hi, this is curious.
Hello, here is the result of the command on the two clusters: Ceph Nautilus: image-net_KO.log
It was a misconfiguration on the Ceph side: the "rgw dns name" entry was not set, so the rgw did not handle vhost-style requests correctly. I am closing this issue.

Thanks for the update, I was about to reply to you. To better understand, thanks to your debug logs, we can see that […]

So, what I'm curious to understand is: […]

Thanks
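For reference, the fix described here is an rgw configuration change, not an s3cmd one. A minimal sketch of the relevant `ceph.conf` entry (the section name and domain are placeholders, not taken from this issue):

```ini
[client.rgw.gateway-1]
; Placeholder section name for the rgw daemon.
; "rgw dns name" tells radosgw which domain it serves, so that a
; vhost-style request with Host: bucket.s3.example.com is recognized
; as bucket "bucket" on endpoint s3.example.com.
rgw dns name = s3.example.com
```

Without this setting, radosgw cannot tell that the leading label in a vhost-style Host header is a bucket name, which would match the failure appearing only on the new cluster.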
Ok, I see, it is just a side effect of something unrelated.

Despite this not being a client issue, I just pushed a fix ("…a in ?lifecycle response") to be safer and more resilient if it ever happens again.
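The thread does not show the actual patch, but a hypothetical sketch of what "more resilient" handling of a `GET ?lifecycle` response can look like (the function name and fields are illustrative, not s3cmd's real code): tolerate an unparsable body and missing XML elements instead of raising.

```python
# Hypothetical sketch, not s3cmd's actual code: defensively parse the
# XML body of a GET ?lifecycle response so that a malformed or
# unexpected server reply does not crash the client.
import xml.etree.ElementTree as ET

def parse_lifecycle(xml_bytes):
    """Return a list of lifecycle rule dicts; tolerate absent elements."""
    rules = []
    try:
        root = ET.fromstring(xml_bytes)
    except ET.ParseError:
        # Unparsable body: treat as "no lifecycle configuration"
        # instead of propagating an exception to the caller.
        return rules
    for rule in root.findall("Rule"):
        # findtext() returns None when a child element is missing,
        # rather than raising, so a partial <Rule> is still accepted.
        rules.append({
            "id": rule.findtext("ID"),
            "status": rule.findtext("Status"),
            "prefix": rule.findtext("Prefix"),
        })
    return rules
```

The design point is simply that every access into the response is allowed to fail softly, so an oddly shaped reply from one rgw version degrades to an empty or partial result rather than a traceback.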
Hello,
I have a fresh new Ceph cluster running Nautilus, and I cannot get `s3cmd info` on buckets. It works on my older cluster running Luminous. I only changed the URL endpoint, access_key, and secret_key in the .s3cfg file.
Is this issue on the s3cmd side or the Ceph side?
Thanks for your help.
Best regards,
Yoann
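The three settings mentioned above live in `~/.s3cfg`; a minimal sketch with placeholder values (the real endpoint and keys are cluster-specific):

```ini
# Endpoint of the radosgw, and the vhost-style bucket template
host_base = s3.example.com
host_bucket = %(bucket)s.s3.example.com
# Credentials for the user on the new cluster
access_key = PLACEHOLDERACCESSKEY
secret_key = placeholdersecretkey
```

Note that `host_bucket` is what makes s3cmd send vhost-style (bucket-in-hostname) requests, which is why a server-side vhost misconfiguration can break `s3cmd info` even though the client config is unchanged apart from these entries.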