Releases: ceph/ceph-csi
Ceph-CSI v3.7.2 Release
Changelog or Highlights:
Bug Fixes:
CephFS
- Delete subvolume if SetAllMetadata fails #3435
- Allow subvolume creation if ceph cluster doesn't support metadata API #3423
RBD
- Fix volume leak if metadata operation fails #3436
Vendor Update
- Rebase: golang.org/x/text/language to v0.3.8 to fix a vulnerability #3439
CI improvements
- Create kubernetes cluster with podman driver #3420
Breaking Changes
None.
Ceph-CSI v3.7.1 Release
Bug Fixes:
- rbd: fix bug in kmip kms Decrypt function & improve error msg #3341
- rbd: modify stripSecret mechanism in logGRPC() #3350
- cephfs: return success if metadata operation not supported #3352
- rbd: change default FsGroupPolicy to "File" for RBD CSI driver #3364
- rbd: map only primary image #3373
- ci: use resync to sync helm charts #3374
- cephfs: Fix subvolumegroup creation #3376
- rbd: create token and use it for vault SA every time possible #3378
- rbd: use blocklist range cmd, fallback if it fails #3386
NOTE
Helm upgrade may fail with message:
UPGRADE FAILED: cannot patch "rbd.csi.ceph.com" with kind CSIDriver: CSIDriver.storage.k8s.io "rbd.csi.ceph.com" is invalid: spec.fsGroupPolicy: Invalid value: "File": field is immutable
FAILED! => {"changed": false, "command": "/usr/sbin/helm --version=v3.7.1 upgrade -i --reset-values --create-namespace -f=/tmp/tmp2sr2me9a.yml ceph-csi ceph-csi/ceph-csi-rbd", "msg": "Failure when executing Helm command. Exited 1.\nstdout: \nstderr: Error: UPGRADE FAILED: cannot patch \"rbd.csi.ceph.com\" with kind CSIDriver: CSIDriver.storage.k8s.io \"rbd.csi.ceph.com\" is invalid: spec.fsGroupPolicy: Invalid value: \"File\": field is immutable\n", "stderr": "Error: UPGRADE FAILED: cannot patch \"rbd.csi.ceph.com\" with kind CSIDriver: CSIDriver.storage.k8s.io \"rbd.csi.ceph.com\" is invalid: spec.fsGroupPolicy: Invalid value: \"File\": field is immutable\n", "stderr_lines": ["Error: UPGRADE FAILED: cannot patch \"rbd.csi.ceph.com\" with kind CSIDriver: CSIDriver.storage.k8s.io \"rbd.csi.ceph.com\" is invalid: spec.fsGroupPolicy: Invalid value: \"File\": field is immutable"], "stdout": "", "stdout_lines": []}
If so, delete the csidriver object
kubectl delete csidriver rbd.csi.ceph.com
Then do helm upgrade
Ceph-CSI v3.7.0 Release
We are excited to announce another feature-packed release of Ceph CSI, v3.7.0. This is another great step towards making it possible to use enhanced features of the Container Storage Interface (CSI) with a Ceph cluster as the backend. With this release, we are introducing many brand new features and enhancements to the Ceph CSI driver, and this release also enables smoother integration with various projects. Here are the changelog / release highlights.
Changelog and Highlights:
Features
- KMIP integration for RBD PVC encryption
  - The Key Management Interoperability Protocol (KMIP) is an extensible communication protocol that defines message formats for the manipulation of cryptographic keys on a key management server. Ceph-CSI can now be configured to connect to various KMS using KMIP for encrypting RBD volumes.
- NFS
- Added support for volume expansion, snapshot, restore, and clone.
- Added an NFS nodeserver within CephCSI, with support for pod networking via nsenter.
- Support enabling PV and snapshot metadata on RBD images and CephFS subvolumes
- For persistent volumes, clones, and volume restores, the PVName/PVCName/PVCNamespace and ClusterName details can be added
- For snapshot volumes, the snapshot-name/snapshot-namespace/snapshotcontent-name and ClusterName details can be added
- Shallow Read Only support for the Ceph CSI driver:
  - cephfs-csi can expose CephFS snapshots as shallow, read-only volumes, without needing to clone the underlying snapshot data (https://github.com/ceph/ceph-csi/blob/devel/docs/design/proposals/cephfs-snapshot-shallow-ro-vol.md). This enables users to restore snapshots selectively: users may want to traverse snapshots and restore data to a writable volume more selectively, instead of restoring the whole snapshot, and it also helps perform more efficient volume backups. A minimal usage sketch follows this list.
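The following is a minimal, illustrative sketch of how a shallow read-only restore might look from the consumer side, assuming the behavior described in the design document above (a PVC restored from a VolumeSnapshot with a ReadOnlyMany access mode can be served as a shallow volume). All object names and the StorageClass name are hypothetical.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: restored-ro-pvc            # hypothetical name
    spec:
      storageClassName: csi-cephfs-sc  # hypothetical CephFS StorageClass
      accessModes:
        - ReadOnlyMany                 # read-only restore, eligible for a shallow volume
      resources:
        requests:
          storage: 1Gi
      dataSource:
        name: cephfs-pvc-snapshot      # hypothetical VolumeSnapshot of a CephFS PVC
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io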
Enhancements
- All Kubernetes sidecars (external provisioner, snapshotter, resizer, etc.) have been rebased to the latest available versions. Along with other dependency module updates, this release consumes go-ceph v0.17.0 and Kubernetes 1.24.4.
- Snapshot API support has been promoted to GA in this release.
- From this release onwards, the CSI driver uses the File fsGroupPolicy for its fsGroup-based operations (see the sketch after this list).
- New feature gates (HonorPVReclaimPolicy, etc.) are enabled in the sidecar deployments.
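For reference, the fsGroupPolicy change is visible on the CSIDriver object registered by the driver. The snippet below is only an illustrative sketch of that object, not the full manifest shipped by Ceph-CSI; fields other than fsGroupPolicy are abbreviated.

    apiVersion: storage.k8s.io/v1
    kind: CSIDriver
    metadata:
      name: rbd.csi.ceph.com
    spec:
      fsGroupPolicy: File   # new value in v3.7; the previous value differed, which is why the
                            # Helm upgrade NOTE below reports an immutable-field error
      attachRequired: true  # illustrative; see the shipped manifests/Helm chart for actual values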
Bug Fixes
- While mounting a volume, the CSI drivers no longer set world-wide open permissions on the mount path (See ).
- Support Linux kernels <= 4.11.0: /sys/bus/rbd/supported_features is only available from Linux kernel v4.11.0 onwards, so for older kernels the supported feature attributes are prepared and used when the supported_features file is missing (See #2678).
- Fix the volume healer StagingTargetPath issue for Kubernetes 1.24 (See #3176).
- RBAC permissions are restricted to a great extent in this release compared to previous ones. The CSI driver now operates with the least required RBAC in a cluster.
E2E
- Many tests were added to ensure backward compatibility with the existing features of v3.6.
- New tests were added for the features introduced in this release.
- Lots of cleanup and deprecated API removals were done in the test framework.
- Dropped support for Kubernetes <= 1.22 tests in the framework.
Deprecation
- The VolumeReplication service running on the controller server is deprecated and replaced by CSI-Addons; see #3314 for more details.
- The CephFS provisioner no longer makes use of the attacher sidecar from this release onwards. See #3149 for more details.
Breaking Changes
- The NFS daemonset is renamed from csi-nfs-node to csi-nfsplugin; refer to the upgrade steps for more details.
NOTE
Helm upgrade may fail with message:
UPGRADE FAILED: cannot patch "rbd.csi.ceph.com" with kind CSIDriver: CSIDriver.storage.k8s.io "rbd.csi.ceph.com" is invalid: spec.fsGroupPolicy: Invalid value: "File": field is immutable
FAILED! => {"changed": false, "command": "/usr/sbin/helm --version=v3.7.0 upgrade -i --reset-values --create-namespace -f=/tmp/tmp2sr2me9a.yml ceph-csi ceph-csi/ceph-csi-rbd", "msg": "Failure when executing Helm command. Exited 1.\nstdout: \nstderr: Error: UPGRADE FAILED: cannot patch \"rbd.csi.ceph.com\" with kind CSIDriver: CSIDriver.storage.k8s.io \"rbd.csi.ceph.com\" is invalid: spec.fsGroupPolicy: Invalid value: \"File\": field is immutable\n", "stderr": "Error: UPGRADE FAILED: cannot patch \"rbd.csi.ceph.com\" with kind CSIDriver: CSIDriver.storage.k8s.io \"rbd.csi.ceph.com\" is invalid: spec.fsGroupPolicy: Invalid value: \"File\": field is immutable\n", "stderr_lines": ["Error: UPGRADE FAILED: cannot patch \"rbd.csi.ceph.com\" with kind CSIDriver: CSIDriver.storage.k8s.io \"rbd.csi.ceph.com\" is invalid: spec.fsGroupPolicy: Invalid value: \"File\": field is immutable"], "stdout": "", "stdout_lines": []}
If so, delete the csidriver object
kubectl delete csidriver rbd.csi.ceph.com
Then do helm upgrade
Release Image: docker pull quay.io/cephcsi/cephcsi:v3.7.0
New Contributors (Thanks!!)
- @losil made their first contribution in #2993
- @Cytrian made their first contribution in #3091
- @naveensrinivasan made their first contribution in #3127
- @irq0 made their first contribution in #2912
- @iceman91176 made their first contribution in #3177
- @BenoitKnecht made their first contribution in #3232
- @takmatsu made their first contribution in #3233
- @anthonyeleven made their first contribution in #3274
- @palvarez89 made their first contribution in #3273
Full Changelog: v3.6.2...v3.7.0
Thanks to the awesome Ceph CSI community for this great release!
Ceph-CSI v3.6.2 Release
Changelog or Highlights:
Bug Fixes:
- Add allowPrivilegeEscalation: true to the containerSecurityContext of the nodeplugin daemonset
NFS
- Delete the CephFS volume when the export is already removed
RBD
- Use vaultAuthPath variable name in error msg
- Support pvc-pvc clone with different sc & encryption
- Consider rbd as default mounter if not set
- Fix bug with missing supported_features
CephFS
- Skip NetNamespaceFilePath if the volume is pre-provisioned
CI improvements
- Improve logging for kubectl_retry helper
- Fix commitlint problem
- Prevent ERR trap inheritance for kubectl_retry
Breaking Changes
None.
Ceph-CSI v3.6.1 Release
Changelog or Highlights:
Feature:
- Add network namespace to support pod networking for CephFS and RBD plugins.
Bug Fixes/Enhancements:
NFS
- Add NFS provisioner & plugin sa to scc.yaml
- Use go-ceph API for creating/deleting exports
- Return gRPC status from CephFS CreateVolume failure
RBD
- Fix logging in ExecuteCommandWithNSEnter
- Check nbd tool features only for RBD driver
- Use leases for leader election in RBD omap controller
- Consider remote image health state for PromoteVolume
Breaking Changes
None.
Ceph-CSI v3.6.0 Release
We are excited to announce another feature-packed release of Ceph CSI, v3.6.0. This is another great step towards making it possible to use enhanced features of the Container Storage Interface (CSI) with a Ceph cluster as the backend. With this release, we are introducing many brand new features and enhancements to the Ceph CSI driver, and this release also enables smoother integration with various projects. Here are the changelog / release highlights.
Changelog and Highlights:
New Features
NFS based dynamic provisioner:
Ceph-CSI already creates CephFS volumes that can be mounted over the native CephFS protocol. A new provisioner in Ceph-CSI can create CephFS volumes and include the required NFS CSI parameters so that the NFS CSI driver can mount the CephFS volume over NFS. The CephFS volumes are internally managed by the NFS provisioner and are only exposed as NFS CSI volumes towards the consumers.
Fuse Mount recovery
Mounts managed by ceph-fuse may get corrupted if, for example, the ceph-fuse process exits abruptly or its parent container is terminated, taking down its child processes with it. This was an issue for FUSE-based CephFS mounts performed by the Ceph CSI driver; from this release onwards, the CSI driver is capable of detecting corrupted ceph-fuse mounts and will try to remount them automatically.
AWS KMS Encryption
Ceph-CSI can be configured to use Amazon STS to fetch credentials for accessing Amazon KMS when the Kubernetes cluster is configured with an OIDC identity provider. The credentials are fetched using the OIDC token (ServiceAccount token).
Quincy Support
The Ceph CSI driver is now built on top of the Quincy release of Ceph.
Enhancements
- Improved RBD image flattening support: from this release onwards, only temporary intermediate clones and snapshots will be flattened. See #2190 for more details.
- Topology-aware provisioning has been revisited with this release and enhancements have been made to make it more production ready.
- Image features are now an optional parameter in the StorageClass: the rbd image features entry in the StorageClass parameter list is optional, so that the default image features of librbd can be used.
- Added support for the deep-flatten image feature: deep-flatten has long been supported in Ceph and is enabled by default in librbd, and this enhancement provides an option to enable it in cephcsi for the RBD images we create (see the sketch after this list).
- Added a selinuxMount flag to enable/disable the /etc/selinux host mount: the selinuxMount flag enables or disables the /etc/selinux host mount inside pods to support SELinux-enabled filesystems.
- A new reference tracker has been introduced with this release, which is a key-based implementation of a reference counter. This allows accounting in situations where idempotency must be preserved.
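A minimal, illustrative StorageClass fragment for the two image-feature items above is shown below. The imageFeatures parameter name comes from the Ceph-CSI RBD StorageClass examples; the clusterID and pool values are placeholders, and required secrets and other parameters are omitted for brevity.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-rbd-sc                 # hypothetical name
    provisioner: rbd.csi.ceph.com
    parameters:
      clusterID: <cluster-id>          # placeholder
      pool: <rbd-pool>                 # placeholder
      # imageFeatures is optional from this release onwards; when set, deep-flatten can be enabled:
      imageFeatures: layering,deep-flatten
    reclaimPolicy: Delete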
Bug Fixes:
- The BlockMode ReclaimSpace request has been adjusted to avoid data loss during the reclaim space operation.
- The RBD and CephFS drivers fixed an issue in the node mount operation to take care of the explicit permission set done by the CSI driver prior to this release, which was causing unwanted pod delays.
- The RBD force-promote timeout has been increased to 2 minutes to give enough time for rollback to complete.
- StorageClass map options handling has been corrected to ensure it works with various combinations of input settings from the StorageClass, and has been made flexible enough to work with different mounters like kernel (krbd), nbd, etc.
- Previously, restoring a snapshot to a new PVC resulted in a wrong dataPoolName when the initial volume was linked to a StorageClass with topology constraints and erasure coding. This has been fixed in this release.
- omap deletion in the DeleteSnapshot operation has been fixed with this release, which helps clean up the omap properly once the subvolume snapshot is deleted.
Rebase
The dependencies of the Ceph CSI driver have been updated to their latest versions to consume various fixes and enhancements.
E2E
Documentation
Breaking Changes
- RBD thick provisioning support is removed; see #2795 for more details.
Release Image: docker pull quay.io/cephcsi/cephcsi:v3.6.0
Thanks to the awesome Ceph CSI community for this great release!
Ceph-CSI v3.5.1 Release
Changelog or Highlights:
Bug Fix:
- Log cephfs clone failure message in CreateVolumeRequest
- Use ceph 16.2.7 as the base image
- Fix RBD parallel PVC creation hang issue
Breaking Changes
None.
Ceph-CSI v3.5.0 Release
We are excited to announce another feature-packed release of Ceph CSI, v3.5.0. This is another great step towards making it possible to use enhanced features of the Container Storage Interface (CSI) with a Ceph cluster as the backend. With this release, we are introducing many brand new features and enhancements to the Ceph CSI driver, and this release also enables smoother integration with various projects. Here are the changelog / release highlights.
Ceph CSI 3.5.0 Release Changelog/Highlights
New features
IBM HPCS/Key Protect KMS Support
Ceph CSI added support for the IBM HPCS/Key Protect KMS service. This allows admins to enable PV encryption using IBM Key Protect services in a Kubernetes or OpenShift cluster. (#2723)
Network Fencing
Ceph CSI now supports network fencing, which allows admins to blocklist malicious clients. (#2738)
Kubernetes in-tree RBD volume migration
Ceph CSI supports in-tree Kubernetes volume migration to the CSI driver (kubernetes.io/rbd to rbd.csi.ceph.com), which is available with the Kubernetes 1.23 release. All requests to the Kubernetes in-tree provisioner will be redirected to the Ceph CSI RBD driver for its operations. Refer here for more details.
Support for Reclaimspace operation
The Ceph CSI driver has added support for the CSI-Addons nodeReclaimSpace and controllerReclaimSpace operations, which the CSI-Addons sidecar requests from the CSI driver. (#2724)
Ephemeral Volume
Ephemeral volume support has been validated with this release. With ephemeral volume support, a user can specify ephemeral volumes in the pod spec and tie the lifecycle of the PVC to the pod (see the sketch below).
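A minimal sketch of a generic ephemeral volume consuming an RBD StorageClass is shown below; the StorageClass name csi-rbd-sc and all object names are hypothetical.

    apiVersion: v1
    kind: Pod
    metadata:
      name: ephemeral-demo              # hypothetical name
    spec:
      containers:
        - name: app
          image: nginx                  # illustrative image
          volumeMounts:
            - name: scratch
              mountPath: /data
      volumes:
        - name: scratch
          ephemeral:
            volumeClaimTemplate:
              spec:
                accessModes: ["ReadWriteOnce"]
                storageClassName: csi-rbd-sc   # hypothetical RBD StorageClass
                resources:
                  requests:
                    storage: 1Gi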
RWOP PVC access mode
By advertising the proper capabilities introduced in the latest CSI spec (1.5), the Ceph CSI driver has been validated against the RWOP PVC access mode, which was introduced recently in Kubernetes (see the sketch below).
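A minimal sketch of a claim using the ReadWriteOncePod access mode; the StorageClass name is hypothetical, and the cluster must have the ReadWriteOncePod feature enabled.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: rwop-pvc                   # hypothetical name
    spec:
      accessModes:
        - ReadWriteOncePod             # only one pod may use the volume at a time
      storageClassName: csi-rbd-sc     # hypothetical RBD StorageClass
      resources:
        requests:
          storage: 1Gi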
Enhancements
Go-Ceph
Ceph CSI now uses the go-ceph API, instead of the command line, for adding a task to flatten an image and for removing an image from the trash. This is expected to improve performance.
RBD krbd mounter
This release adds RBD feature support for object-map, fast-diff, etc. with the krbd mounter.
RBD nbd mounter
rbd-nbd now supports expansion of volumes, encrypted volumes, and journal-based mirroring. rbd-nbd log strategies can be tuned to preserve, compress, or remove the logs on detach; read more about it here. The nbd mounter utilizes rbd-nbd cookie support in ceph-csi to avoid misconfiguration issues on nodeplugin restart, which makes the volume healer more reliable.
StorageClass Enhancements
A fixed security context can be enabled for PVs via mount options in the StorageClass. This makes it possible to specify SELinux-related mount options like context. Ceph CSI now also provides a way to supply map options for multiple mounters from the StorageClass, like mapOption: "krbd:v1,v2,v3;nbd:v1,v2,v3" (see the sketch below).
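An illustrative StorageClass fragment combining the two enhancements above. The mapOptions parameter name follows the Ceph-CSI RBD StorageClass examples (the release note writes it in the singular); the SELinux context value, clusterID, and pool are placeholders, and other required parameters and secrets are not shown.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-rbd-sc                 # hypothetical name
    provisioner: rbd.csi.ceph.com
    parameters:
      clusterID: <cluster-id>          # placeholder
      pool: <rbd-pool>                 # placeholder
      # per-mounter map options, as introduced in this release:
      mapOptions: "krbd:v1,v2,v3;nbd:v1,v2,v3"
    mountOptions:
      # fixed security context via SELinux mount option (value is a placeholder):
      - context="system_u:object_r:container_file_t:s0"
    reclaimPolicy: Delete
    allowVolumeExpansion: true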
Expansion of Volumes
Users can create a bigger PVC from an existing PVC, and can restore a snapshot to a bigger-sized PVC (see the sketch below).
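A minimal sketch of cloning an existing PVC into a larger one; all names are hypothetical, and the same pattern applies to restoring a VolumeSnapshot with a larger requested size.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: rbd-pvc-clone-bigger       # hypothetical name
    spec:
      storageClassName: csi-rbd-sc     # hypothetical RBD StorageClass
      accessModes:
        - ReadWriteOnce
      dataSource:
        name: rbd-pvc                  # hypothetical existing, smaller PVC
        kind: PersistentVolumeClaim
      resources:
        requests:
          storage: 2Gi                 # larger than the source PVC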
Rebase
Along with many other dependency updates of the Go packages that Ceph CSI uses, Ceph CSI has been rebased to make use of the latest code release of Kubernetes (v1.23) and the latest available sidecars.
e2e
- rwop validation for cephfs and rbd volumes
- added tests for bigger size rbd and cephfs Volumes
- ephemeral validation have been enabled for rbd and cephfs in the e2e
- test is added to validate encrypted image mount inside the nodeplugin
- validation added for thick encrypted PVC restore
- added tests to validate PVC restore from vaultKMS to vaulttenantSAKMS
- intree migration tests are part of the e2e
- ceph.conf deployment model has been accommodated in the tests
- test cases added for pvc-pvcclone chain with depth 2
- added tests for volume expansion, encrypted volumes with rbd-nbd mounter
- covered tests for different accessModes and volumeModes with rbd-nbd mounter
- added cases for snapshot restore chain with depth 2
...etc.
Documentation
- design doc added for, CephFS snapshots as shallow RO volumes, in-tree migration, hpcs/key protect integration, clusterid poolid mapping,..etc
- updated support matrix for deprecated ceph csi releases
- updated development guide for new rules
- updated rbd-nbd documentation with volume expansion, encryption volume support, various rbd-nbd log strategies..etc
- support matrix update to readme
....etc
Breaking Changes
None
Release Image: docker pull quay.io/cephcsi/cephcsi:v3.5.0
Thanks to the awesome Ceph CSI community for this great release!
Ceph-CSI v3.4.0 Release
We are excited to announce another feature-packed release of Ceph CSI, v3.4.0. This is another great step towards making it possible to use enhanced features of the Container Storage Interface (CSI) with a Ceph cluster as the backend. With this release, we have lifted many highly usable production features (Snapshot, Clone, Metrics, etc.) to a higher level of support. Enhancements have also been made to features like Encryption, Disaster Recovery, the NBD mounter, and Thick Provisioning. Code improvements that increase the performance of various CSI operations are also part of this release. With this release, Ceph CSI makes use of the latest versions of Kubernetes, sidecar containers, and the go-ceph library, which include many bug fixes and enhancements of their own.
Changelog or Highlights:
Features:
Beta:
The below features have been promoted from Alpha support to Beta:
- Snapshot creation and deletion
- Volume restore from snapshot
- Volume clone support
- Volume/PV Metrics of File Mode Volume
- Volume/PV Metrics of Block Mode Volume
Alpha:
- rbd-nbd volume mounter
Enhancement:
- Restore RBD snapshot to a different Pool
- Snapshot schedule support for RBD mirrored PVC
- Mirroring support for thick PVC
- Multi-Tenant support for vault encryption
- AmazonMetadata KMS provider support
- rbd-nbd volume healer support
- Locking enhancement for improving POD deletion performance
- Improvements in lock handling for snap and clone operations
- Better thick provisioning support
- Create CephFS subvolume with VolumeNamePrefix
- CephFS Subvolume path addition in PV object
- Consumption of go-ceph APIs for various CephFS controller and node operations.
- Resize of the RBD encrypted volume
- Better error handling for GRPC
- Golang profiling support for debugging
- Updated Kubernetes sidecar versions to the latest release
- Kubernetes dependency update to v1.21.2
- Create storageclass and secrets using helm charts
CI/E2E
- Expansion of RBD encrypted volumes
- Update and addition of new static golang tools
- Kubernetes v1.21 support
- Unit tests for SecretsKMS
- Test for Vault with ServiceAccount per Tenant
- E2E for user secret based metadata encryption
- Update rook.sh and Ceph cluster version in E2E
- Added RBD test for testing sc, secret via helm
- Update feature gates setting from minikube.sh
- Add CephFS test for sc, secret via helm
- Add e2e for static PVC without imageFeature parameter
- Make use of snapshot v1 API and client sets in e2e tests
- Validate thick-provisioned PVC-PVC cloning
- Adding retry support for various e2e failure scenarios
- Refactor KMS configuration and usage
Documentation
- Hashicorp Vault with a ServiceAccount per Tenant
- Added documentation for Disaster Recovery
- rbd-nbd mounter
- Updated helm chart doc
- Contribution guide update
Breaking Changes
None
Thanks to the awesome Ceph CSI community for this great release!
Ceph-CSI v3.2.2 Release
Changelog or Highlights:
Bug Fixes
Build
- Update ceph to 15.2.11 to fix CVE-2021-20288
Breaking Changes
None.