Hello k8ssandra community!

What did you do?
We are performing a Medusa backup of a Cassandra datacenter with more than 200 GB of data per node, using AWS EBS volumes, and then trying to restore it either onto a different cluster or onto the same one. The restore does not succeed because the medusa-restore init container downloads the backup data into the /tmp folder, which is backed by an emptyDir; this causes disk pressure on the node (which only has 30 GB of local storage) and pod eviction (see the sketch below this report).

Did you expect to see something different?
I would expect Medusa to download the data onto the paths mounted on the EBS volume, so that it uses storage that was sized according to the data.

Environment
AWS EKS 1.21
AWS EBS GP3 via EBS CSI Driver

K8ssandra Operator version:
v1.1.1

Kubernetes version information:

insert K8ssandra Operator logs relevant to the issue here

Anything else we need to know?:
┆Issue is synchronized with this Jira Task by Unito
┆friendlyId: K8SSAND-1596
┆priority: Medium
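For context, the situation roughly corresponds to the hypothetical, simplified pod layout sketched below. This is not the actual operator-generated manifest: the container name medusa-restore comes from the report above, while the image tags, volume names, and claim name are illustrative assumptions. The point is that /tmp is backed by an emptyDir on the node's local disk, while the EBS-backed PVC sized for the data is only mounted by the Cassandra container.

```yaml
# Hypothetical, simplified sketch of the restored pod (not the real operator-generated manifest).
apiVersion: v1
kind: Pod
metadata:
  name: cassandra-dc1-default-sts-0            # illustrative name
spec:
  initContainers:
    - name: medusa-restore
      image: k8ssandra/medusa:example           # illustrative tag
      volumeMounts:
        - name: tmp
          mountPath: /tmp                       # backup files are downloaded here today
  containers:
    - name: cassandra
      image: k8ssandra/cass-management-api:example   # illustrative tag
      volumeMounts:
        - name: server-data
          mountPath: /var/lib/cassandra         # EBS-backed PVC, sized for the data
  volumes:
    - name: tmp
      emptyDir: {}                              # node-local disk (~30 GB here) -> disk pressure, eviction
    - name: server-data
      persistentVolumeClaim:
        claimName: server-data-cassandra-dc1-default-sts-0   # illustrative claim name
```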
sync-by-unitobot changed the title from "Medusa Restore is downloading data to /tmp folder that is ephemeral." to "K8SSAND-1596 ⁃ Medusa Restore is downloading data to /tmp folder that is ephemeral." on Jun 23, 2022
You're totally right: we didn't change the default, which performs the restore in /tmp, nor did we give users the ability to change the location where files are downloaded.
We should download the files straight onto the volume that holds the Cassandra data, which is already mounted anyway.
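A minimal sketch of that direction, reusing the illustrative names from the sketch above: mount the existing Cassandra data PVC into the medusa-restore init container as well, and point the restore's staging directory at a path on that volume instead of /tmp. The MEDUSA_TMP_DIR variable below is a hypothetical knob, not a documented Medusa or K8ssandra Operator setting.

```yaml
# Hypothetical sketch of the proposed change (not the operator's actual manifest).
spec:
  initContainers:
    - name: medusa-restore
      image: k8ssandra/medusa:example           # illustrative tag
      env:
        - name: MEDUSA_TMP_DIR                  # hypothetical name; the real setting may differ
          value: /var/lib/cassandra/medusa-restore-staging
      volumeMounts:
        - name: server-data
          mountPath: /var/lib/cassandra         # EBS-backed data volume, already sized for the data
  volumes:
    - name: server-data
      persistentVolumeClaim:
        claimName: server-data-cassandra-dc1-default-sts-0   # illustrative claim name
```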