Context
There is currently a PR adding Rook Ceph to Nebari for the conda-store and JupyterHub PVCs. It will be an alpha feature initially, tested for a period of time, and will likely become the default configuration option later, pending the results of testing in normal usage. This issue is a reminder to migrate all the other PVCs to Rook Ceph as well.
Here's a list of all the PVCs running on a GCP deployment of Nebari that aren't currently using Rook:

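For reference, this inventory can be regenerated on any deployment with a standard kubectl query; the storage-class column shows which volumes are still backed by the cloud provider's provisioner rather than Rook Ceph:

```shell
# List every PVC in the cluster with its storage class and requested size.
# PVCs whose STORAGECLASS is the cloud default (rather than a Rook Ceph
# class) are the ones this issue proposes to migrate.
kubectl get pvc --all-namespaces \
  -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,STORAGECLASS:.spec.storageClassName,SIZE:.spec.resources.requests.storage'
```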
The PR mentioned earlier will handle conda-store-dev-share, nebari-conda-store-storage, and nfs-server-nfs-storage. The remaining PVCs are associated with Keycloak, Loki, conda-store's database storage, JupyterHub, conda-store's MinIO storage, conda-store's Redis storage, and Traefik's ingress certs. In the first Rook Ceph PR, Rook Ceph is deployed in stage 07, but it will need to be moved to an earlier stage so that these PVCs can be created on Rook Ceph.
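To make the migration concrete, here is a minimal sketch of what one of these PVCs would look like once it targets a Rook-provisioned storage class instead of the cloud default. The storage class name rook-cephfs, the PVC name, namespace, and size are all illustrative assumptions, not what the PR actually uses:

```shell
# Minimal sketch (assumed names): a PVC bound to a Rook Ceph storage class.
# Rook must already be deployed (hence moving it to an earlier stage) so
# that the StorageClass exists before this PVC is created.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: keycloak-storage        # hypothetical name for illustration
  namespace: dev                # assumed Nebari namespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rook-cephfs # assumed Rook-provisioned StorageClass name
  resources:
    requests:
      storage: 10Gi             # illustrative size
EOF
```

This is also why the stage ordering matters: the Rook StorageClass has to exist before any stage that creates these PVCs runs.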
Value and/or benefit
Using the same storage type for all PVCs reduces complexity. There is also potential for increased performance, since Ceph can scale. Ceph will likely help solve the availability-zone and subnet issues we see on AWS, such as #1683.
Anything else?
No response