cephfs: Add volumesnapshotclass for external-storage (backport #4541) #4543

Merged 2 commits on Apr 4, 2024
6 changes: 5 additions & 1 deletion scripts/k8s-storage/README.md
@@ -5,9 +5,13 @@ This job runs the [Kubernetes end-to-end external storage tests][1] with
different driver configurations/manifests (in the `driver-*.yaml` files). Each
driver configuration refers to a StorageClass that is used while testing.

-The StorageClasses are created with the `create-storageclass.sh` script and the
+The StorageClasses are created with the `create-storageclasses.sh` script and the
`sc-*.yaml.in` templates.

The VolumeSnapshotClasses are created with the
`create-volumesnapshotclasses.sh` script and the
`volumesnapshotclass-*.yaml.in` templates.

The Ceph-CSI Configuration from the `ceph-csi-config` ConfigMap is created with
`create-configmap.sh` after the deployment is finished. The ConfigMap is
referenced in the StorageClasses and contains the connection details for the
27 changes: 27 additions & 0 deletions scripts/k8s-storage/create-volumesnapshotclasses.sh
@@ -0,0 +1,27 @@
#!/bin/sh
#
# Create VolumeSnapshotClasses from a template (volumesnapshotclass-*.yaml.in) and replace keywords
# like @@CLUSTER_ID@@.
#
# These VolumeSnapshotClasses can then be used by driver-*.yaml manifests in the
# k8s-e2e-external-storage CI job.
#
# Requirements:
# - kubectl in the path
# - working KUBECONFIG either in environment, or default config files
# - deployment done with Rook
#

# exit on error
set -e

WORKDIR=$(dirname "${0}")

TOOLBOX_POD=$(kubectl -n rook-ceph get pods --no-headers -l app=rook-ceph-tools -o=jsonpath='{.items[0].metadata.name}')
FS_ID=$(kubectl -n rook-ceph exec "${TOOLBOX_POD}" -- ceph fsid)

for sc in "${WORKDIR}"/volumesnapshotclass-*.yaml.in
do
sed "s/@@CLUSTER_ID@@/${FS_ID}/" "${sc}" |
kubectl create -f -
done
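The substitution the script performs can be sketched in isolation. In this hedged example the one-line template and the fsid value are stand-ins invented for illustration; the real script reads the fsid from the Rook toolbox pod and renders the actual `volumesnapshotclass-*.yaml.in` files.

```shell
#!/bin/sh
# Stand-alone sketch of the @@CLUSTER_ID@@ keyword substitution done by
# create-volumesnapshotclasses.sh.
# FS_ID is a made-up placeholder; the real script obtains it by running
# `ceph fsid` inside the rook-ceph-tools pod.
FS_ID="8f3c1a2b-5e4d-4c3b-9a21-000000000000"

# One-line stand-in for a volumesnapshotclass-*.yaml.in template.
TEMPLATE='clusterID: @@CLUSTER_ID@@'

# sed replaces the keyword with the fsid; the real script pipes the
# rendered manifest straight into `kubectl create -f -`.
printf '%s\n' "${TEMPLATE}" | sed "s/@@CLUSTER_ID@@/${FS_ID}/"
```

Because the templates are rendered on stdout and piped into `kubectl`, no temporary files are left behind in the workdir.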
2 changes: 1 addition & 1 deletion scripts/k8s-storage/sc-rbd.yaml.in
@@ -10,7 +10,7 @@ parameters:
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
-  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
+  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
11 changes: 11 additions & 0 deletions scripts/k8s-storage/volumesnapshotclass-cephfs.yaml.in
@@ -0,0 +1,11 @@
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: k8s-storage-e2e-cephfs
driver: cephfs.csi.ceph.com
parameters:
  clusterID: @@CLUSTER_ID@@
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph
deletionPolicy: Delete
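A class like this is consumed by a VolumeSnapshot object that names it in `volumeSnapshotClassName`. The following is a minimal sketch only; the snapshot name and the PVC name `cephfs-pvc` are hypothetical and do not come from this PR:

---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: cephfs-snapshot-example   # hypothetical name
spec:
  volumeSnapshotClassName: k8s-storage-e2e-cephfs
  source:
    persistentVolumeClaimName: cephfs-pvc   # hypothetical source PVC

With `deletionPolicy: Delete`, removing such a VolumeSnapshot also deletes the backing VolumeSnapshotContent and the snapshot on the Ceph cluster.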