Prod cluster noobaa-default-backing-store out of default 50Gi storage #222
@Milstein mentioned for his project: "It was working well until it didn't. We have ~240GB of data, and I was able to move 95.833 GiB into OpenShift. Then things stopped working. At some point, I started getting 500 errors saying InternalError: We encountered an internal error. Please try again. For more debugging info, I tried using the aws s3 CLI instead of rclone and got the same result."
@computate mentioned: "I'm also having problems writing to object storage:
Does anybody know if we should:
I was trying to find the most relevant documentation; it might be this document on tweaking object storage, or it might be this one for deploying OpenShift Container Storage on Google Cloud.
Will
The default noobaa-default-backing-store was created too small (one 50Gi volume, scaling to at most 20 volumes of 50Gi each, i.e. roughly 1Ti total). We will create a larger object-backing-store with one 10Ti volume, scaling up to 20 volumes. Closes nerc-project/operations#222
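For context, the capacity limits above follow from a little arithmetic (assuming binary units, i.e. 1Ti = 1024Gi):

```python
# Capacity math for the two backing-store configurations (binary units).

# Old default backing store: 50Gi volumes, at most 20 of them.
old_total_gi = 50 * 20            # 1000 GiB, i.e. just under 1 TiB

# Proposed object-backing-store: 10Ti volumes, up to 20 of them.
new_volume_gi = 10 * 1024         # one 10 TiB volume expressed in GiB
new_total_gi = new_volume_gi * 20 # 204800 GiB

print(old_total_gi)               # 1000
print(new_total_gi / 1024)        # 200.0 (TiB total)
```

So the new configuration raises the ceiling from about 1 TiB to 200 TiB.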
Now NERC OpenShift users can create Object Bucket Claims like this in their projects, pointed at the new storage class:

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: my-bucket
  namespace: my-project
spec:
  generateBucketName: my-bucket
  storageClassName: object-bucket-storage
```
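Once a claim like this is bound, the OBC provisioner creates a ConfigMap and a Secret with the same name as the claim: the ConfigMap carries `BUCKET_NAME`, `BUCKET_HOST`, and `BUCKET_PORT`, and the Secret carries `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`. A minimal sketch of assembling those values into an S3 endpoint URL for a client (the helper name and sample host are illustrative, not from this issue):

```python
def s3_endpoint(bucket_host: str, bucket_port: int, secure: bool = True) -> str:
    """Build an S3 endpoint URL from the OBC-provided host and port."""
    scheme = "https" if secure else "http"
    return f"{scheme}://{bucket_host}:{bucket_port}"

# Illustrative values of the kind the OBC ConfigMap exposes in-cluster:
endpoint = s3_endpoint("s3.openshift-storage.svc", 443)
print(endpoint)  # https://s3.openshift-storage.svc:443
```

An application pod would typically mount the ConfigMap and Secret as environment variables and pass the resulting endpoint to its S3 client (boto3, rclone, the aws CLI's `--endpoint-url`, etc.).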
It looks like the NooBaa default backing store is in a Rejected state, probably because NooBaa in prod has a tiny 50Gi disk that is 99% full. See these links: