From db34fb26ec80c4f6188f1668809ccb804143ad3d Mon Sep 17 00:00:00 2001 From: abhi16394 <32352976+abhi16394@users.noreply.github.com> Date: Thu, 1 Sep 2022 19:27:36 -0400 Subject: [PATCH 01/15] Update _index.md --- content/docs/snapshots/volume-group-snapshots/_index.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/content/docs/snapshots/volume-group-snapshots/_index.md b/content/docs/snapshots/volume-group-snapshots/_index.md index c266498bef..94dfdfe670 100644 --- a/content/docs/snapshots/volume-group-snapshots/_index.md +++ b/content/docs/snapshots/volume-group-snapshots/_index.md @@ -6,6 +6,8 @@ Description: > Volume Group Snapshot module of Dell CSI drivers --- ## Volume Group Snapshot Feature +The Dell CSM Volume Group Snapshotter is an operator which extends Kubernetes API to support crash-consistent snapshots of groups of volumes. +Volume Group Snapshot supports Powerflex and Powerstore driver. In order to use Volume Group Snapshots, ensure the volume snapshot module is enabled. - Kubernetes Volume Snapshot CRDs @@ -28,6 +30,7 @@ spec: # "Delete" - delete VolumeSnapshot instances memberReclaimPolicy: "Retain" volumesnapshotclass: "" + timeout: 90sec pvcLabel: "vgs-snap-label" # pvcList: # - "pvcName1" From c8acbed18a846343a0ff856da54c34f99b55282c Mon Sep 17 00:00:00 2001 From: Randeep Sharma Date: Fri, 2 Sep 2022 10:32:28 +0530 Subject: [PATCH 02/15] update-powerscale-quota-features-static-provisioning --- content/docs/csidriver/features/powerscale.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/docs/csidriver/features/powerscale.md b/content/docs/csidriver/features/powerscale.md index 824ba6b3e5..5c9e0e13bb 100644 --- a/content/docs/csidriver/features/powerscale.md +++ b/content/docs/csidriver/features/powerscale.md @@ -22,7 +22,7 @@ You can use existent volumes from the PowerScale array as Persistent Volumes in 1. Open your volume in One FS, and take a note of volume-id. 2. Create PersistentVolume and use this volume-id as a volumeHandle in the manifest. Modify other parameters according to your needs. 3. In the following example, the PowerScale cluster accessZone is assumed as 'System', storage class as 'isilon', cluster name as 'pscale-cluster' and volume's internal name as 'isilonvol'. The volume-handle should be in the format of =_=_==_=_==_=_= -4. If Quotas are enabled in the driver, it is recommended to add the Quota ID to the description of the NFS export in the following format: +4. If Quotas are enabled in the driver, it is required to add the Quota ID to the description of the NFS export in the following format: `CSI_QUOTA_ID:sC-kAAEAAAAAAAAAAAAAQEpVAAAAAAAA` 5. Quota ID can be identified by querying the PowerScale system. From af4f6cb2c1c320909ceb1c034dc736ad0f67c219 Mon Sep 17 00:00:00 2001 From: panigs7 Date: Fri, 2 Sep 2022 06:30:16 -0400 Subject: [PATCH 03/15] CSI Unity XT-links added in release notes --- content/docs/csidriver/release/unity.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/docs/csidriver/release/unity.md b/content/docs/csidriver/release/unity.md index feb5ced18b..226a0d9831 100644 --- a/content/docs/csidriver/release/unity.md +++ b/content/docs/csidriver/release/unity.md @@ -7,7 +7,7 @@ description: Release notes for Unity XT CSI driver ### New Features/Changes -- Added support to configure fsGroupPolicy. 
+- [Added support to configure fsGroupPolicy](https://github.com/dell/csm/issues/361) ### Fixed Issues CSM Resiliency: Occasional failure unmounting Unity volume for raw block devices via iSCSI. From 0615420eea9ab7eae4e9e0a9dff76e3662c7903f Mon Sep 17 00:00:00 2001 From: Utkarsh Dubey Date: Fri, 2 Sep 2022 18:44:25 +0530 Subject: [PATCH 04/15] Updates replication for auto SRDFG --- .../docs/replication/deployment/powermax.md | 21 ++++++++++++++----- content/docs/replication/high-availability.md | 4 ++-- 2 files changed, 18 insertions(+), 7 deletions(-) diff --git a/content/docs/replication/deployment/powermax.md b/content/docs/replication/deployment/powermax.md index 2d9fca7e0a..14d3c845e7 100644 --- a/content/docs/replication/deployment/powermax.md +++ b/content/docs/replication/deployment/powermax.md @@ -22,10 +22,21 @@ While using any SRDF groups, ensure that they are for exclusive use by the CSI P * If an SRDF group is already in use by a CSI driver, don't use it for provisioning replicated volumes outside CSI provisioning workflows. There are some important limitations that apply to how CSI PowerMax driver uses SRDF groups - -* One replicated storage group __always__ contains volumes provisioned from a single namespace -* While using SRDF mode Async/Metro, a single SRDF group can be used to provision volumes within a single namespace. You can still create multiple storage classes using the same SRDF group for different Service Levels. +* One replicated storage group using Async/Sync __always__ contains volumes provisioned from a single namespace. +* While using SRDF mode Async, a single SRDF group can be used to provision volumes within a single namespace. You can still create multiple storage classes using the same SRDF group for different Service Levels. But all these storage classes will be restricted to provisioning volumes within a single namespace. -* When using SRDF mode Sync, a single SRDF group can be used to provision volumes from multiple namespaces. +* When using SRDF mode Sync/Metro, a single SRDF group can be used to provision volumes from multiple namespaces. + +#### Automatic creation of SRDF Groups +CSI Driver for Powermax supports automatic creation of SRDF Groups starting **v2.4.0** with help of **10.0** REST endpoints. +To use this feature: +* Remove _replication.storage.dell.com/RemoteRDFGroup_ and replication.storage.dell.com/RDFGroup params from the storage classes before creating first replicated volume. +* Driver will check next available RDF pair and use them to create volumes. +* This enables customers to use same storage class across namespace to create volume. + +Limitation of Auto SRDFG: +* For Async mode, this feature is supported for namespaces with at most 7 characters. +* RDF label used to map namespace with the RDF group has limit of 10 char. 3 char is used for cluster prefix to make RDFG unique across clusters. #### In Kubernetes Ensure you installed CRDs and replication controller in your clusters. @@ -123,8 +134,8 @@ Let's go through each parameter and what it means: METRO, driver does not need `RemoteStorageClassName` and `RemoteClusterID` as it supports METRO with single cluster configuration. * `replication.storage.dell.com/Bias` when the RdfMode is set to METRO, this parameter is required to indicate driver to use Bias or Witness. If set to true, the driver will configure METRO with Bias, if set to false, the driver will configure METRO with Witness. -* `replication.storage.dell.com/RdfGroup` is the local SRDF group number, as configured. 
-* `replication.storage.dell.com/RemoteRDFGroup` is the remote SRDF group number, as configured. +* `replication.storage.dell.com/RdfGroup` is the local SRDF group number, as configured. It is optional for using Auto SRDF group by driver. +* `replication.storage.dell.com/RemoteRDFGroup` is the remote SRDF group number, as configured. It is optional for using Auto SRDF group by driver. Let's follow up that with an example, let's assume we have two Kubernetes clusters and two PowerMax storage arrays: diff --git a/content/docs/replication/high-availability.md b/content/docs/replication/high-availability.md index 1f2d9b7fe2..3f4aacf5d6 100644 --- a/content/docs/replication/high-availability.md +++ b/content/docs/replication/high-availability.md @@ -37,9 +37,9 @@ parameters: SYMID: '000000000001' ServiceLevel: 'Bronze' replication.storage.dell.com/IsReplicationEnabled: 'true' - replication.storage.dell.com/RdfGroup: '7' + replication.storage.dell.com/RdfGroup: '7' # Optional for Auto SRDF group replication.storage.dell.com/RdfMode: 'METRO' - replication.storage.dell.com/RemoteRDFGroup: '7' + replication.storage.dell.com/RemoteRDFGroup: '7' # Optional for Auto SRDF group replication.storage.dell.com/RemoteSYMID: '000000000002' replication.storage.dell.com/RemoteServiceLevel: 'Bronze' reclaimPolicy: Delete From 34e705e06f60dd7a9405a82cc606710f2ef896c6 Mon Sep 17 00:00:00 2001 From: abhi16394 <32352976+abhi16394@users.noreply.github.com> Date: Fri, 2 Sep 2022 17:06:33 -0400 Subject: [PATCH 05/15] Update powerflex.md --- content/docs/csidriver/release/powerflex.md | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/content/docs/csidriver/release/powerflex.md b/content/docs/csidriver/release/powerflex.md index b77837c82e..9a3b0cd0fa 100644 --- a/content/docs/csidriver/release/powerflex.md +++ b/content/docs/csidriver/release/powerflex.md @@ -6,13 +6,12 @@ description: Release notes for PowerFlex CSI driver ## Release Notes - CSI PowerFlex v2.4.0 ### New Features/Changes -- Added InstallationID annotation for volume attributes. -- Added optional parameter protectionDomain to storageclass. +- [Added optional parameter protectionDomain to storageclass](https://github.com/dell/csm/issues/415) +- [Added InstallationID annotation for volume attributes.](https://github.com/dell/csm/issues/434) - RHEL 8.6 support added -### Fixed Issues - -- Enhancements to volume group snapshotter. +### Fixed Issues +- [Enhancements and fixes to volume group snapshotter](https://github.com/dell/csm/issues/371) ### Known Issues From d30d4c45330c09f905cd9ee34d010c4ffb8d430f Mon Sep 17 00:00:00 2001 From: Randeep Sharma Date: Mon, 5 Sep 2022 12:30:52 +0530 Subject: [PATCH 06/15] add csm issue --- content/docs/csidriver/release/powerscale.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/docs/csidriver/release/powerscale.md b/content/docs/csidriver/release/powerscale.md index f9359b74c1..cfa57705a0 100644 --- a/content/docs/csidriver/release/powerscale.md +++ b/content/docs/csidriver/release/powerscale.md @@ -7,7 +7,7 @@ description: Release notes for PowerScale CSI driver ### New Features/Changes -- Added support to add client only to root clients when RO volume is created from snapshot and RootClientEnabled is set to true. +- [Added support to add client only to root clients when RO volume is created from snapshot and RootClientEnabled is set to true.](https://github.com/dell/csm/issues/362) ### Fixed Issues @@ -19,8 +19,8 @@ There are no fixed issues in this release. 
| If the length of the nodeID exceeds 128 characters, the driver fails to update the CSINode object and installation fails. This is due to a limitation set by CSI spec which doesn't allow nodeID to be greater than 128 characters. | The CSI PowerScale driver uses the hostname for building the nodeID which is set in the CSINode resource object, hence we recommend not having very long hostnames in order to avoid this issue. This current limitation of 128 characters is likely to be relaxed in future Kubernetes versions as per this issue in the community: https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/issues/581 **Note:** In kubernetes 1.22 this limit has been relaxed to 192 characters. |
| If some older NFS exports /terminated worker nodes still in NFS export client list, CSI driver tries to add a new worker node it fails (For RWX volume). | User need to manually clean the export client list from old entries to make successful addition of new worker nodes. |
| Delete namespace that has PVCs and pods created with the driver. The External health monitor sidecar crashes as a result of this operation. | Deleting the namespace deletes the PVCs first and then removes the pods in the namespace. This brings a condition where pods exist without their PVCs and causes the external-health-monitor sidecar to crash. This is a known issue and has been reported at https://github.com/kubernetes-csi/external-health-monitor/issues/100 |
| fsGroupPolicy may not work as expected without root privileges for NFS only https://github.com/kubernetes/examples/issues/260 | To get the desired behavior set "RootClientEnabled" = "true" in the storage class parameter |
| Driver controller logs shows "VendorVersion=2.3.0+dirty" | Retagging of the 2.3.0 driver image to fix dirty tag will cause issue with the certified operator functionality. Update driver to 2.4.0 version |
| fsGroupPolicy may not work as expected without root privileges for NFS only
https://github.com/kubernetes/examples/issues/260 | To get the desired behavior set "RootClientEnabled" = "true" in the storage class parameter | +| Driver controller logs shows "VendorVersion=2.3.0+dirty" | Update the driver to csi-powerscale 2.4.0 version | ### Note: From be3d6f79ecc3187061b194fede455199531426be Mon Sep 17 00:00:00 2001 From: Randeep Sharma Date: Mon, 5 Sep 2022 17:15:43 +0530 Subject: [PATCH 07/15] update-offline-upgrade-unpack-link --- content/docs/csidriver/release/powerscale.md | 2 +- content/docs/csidriver/upgradation/drivers/offline.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/content/docs/csidriver/release/powerscale.md b/content/docs/csidriver/release/powerscale.md index cfa57705a0..01909ced74 100644 --- a/content/docs/csidriver/release/powerscale.md +++ b/content/docs/csidriver/release/powerscale.md @@ -20,7 +20,7 @@ There are no fixed issues in this release. | If some older NFS exports /terminated worker nodes still in NFS export client list, CSI driver tries to add a new worker node it fails (For RWX volume). | User need to manually clean the export client list from old entries to make successful addition of new worker nodes. | | Delete namespace that has PVCs and pods created with the driver. The External health monitor sidecar crashes as a result of this operation. | Deleting the namespace deletes the PVCs first and then removes the pods in the namespace. This brings a condition where pods exist without their PVCs and causes the external-health-monitor sidecar to crash. This is a known issue and has been reported at https://github.com/kubernetes-csi/external-health-monitor/issues/100 | | fsGroupPolicy may not work as expected without root privileges for NFS only
https://github.com/kubernetes/examples/issues/260 | To get the desired behavior set "RootClientEnabled" = "true" in the storage class parameter | -| Driver controller logs shows "VendorVersion=2.3.0+dirty" | Update the driver to csi-powerscale 2.4.0 version | +| Driver logs shows "VendorVersion=2.3.0+dirty" | Update the driver to csi-powerscale 2.4.0 | ### Note: diff --git a/content/docs/csidriver/upgradation/drivers/offline.md b/content/docs/csidriver/upgradation/drivers/offline.md index 1a7b1392fe..752de08e0f 100644 --- a/content/docs/csidriver/upgradation/drivers/offline.md +++ b/content/docs/csidriver/upgradation/drivers/offline.md @@ -5,5 +5,5 @@ description: Offline Upgrade of Dell CSI Storage Providers --- 1. To perform offline upgrade of the driver, please create an offline bundle as mentioned [here](./../../../installation/offline#building-an-offline-bundle). -2. Once the bundle is created, please unpack the bundle by following the steps mentioned [here](./../../../installation/offline##unpacking-the-offline-bundle-and-preparing-for-installation). +2. Once the bundle is created, please unpack the bundle by following the steps mentioned [here](./../../../installation/offline#unpacking-the-offline-bundle-and-preparing-for-installation). 3. Please use the driver specific upgrade steps to upgrade. \ No newline at end of file From 77385ee2a792df67cbe5dfd93e80ba36ad2ba5a0 Mon Sep 17 00:00:00 2001 From: Utkarsh Dubey Date: Mon, 5 Sep 2022 17:32:53 +0530 Subject: [PATCH 08/15] Update upgrade section for v2.4.0 --- .../csidriver/installation/helm/powermax.md | 15 +++---- .../csidriver/installation/offline/_index.md | 6 +-- .../installation/operator/powermax.md | 39 ++++++++++++++++--- .../csidriver/upgradation/drivers/powermax.md | 10 ++++- 4 files changed, 53 insertions(+), 17 deletions(-) diff --git a/content/docs/csidriver/installation/helm/powermax.md b/content/docs/csidriver/installation/helm/powermax.md index a7aeac6568..54750ae294 100644 --- a/content/docs/csidriver/installation/helm/powermax.md +++ b/content/docs/csidriver/installation/helm/powermax.md @@ -112,7 +112,7 @@ CSI Driver for Dell PowerMax supports PowerPath for Linux. Configure Linux Power Set up the PowerPath for Linux as follows: - All the nodes must have the PowerPath package installed . Download the PowerPath archive for the environment from [Dell EMC Online Support](https://www.dell.com/support/home/en-in/product-support/product/powerpath-for-linux/drivers). -- Untar the PowerPath archive, Copy the RPM package into a temporary folder and Install PowerPath using `rpm -ivh DellEMCPower.LINUX--..x86_64.rpm` +- `Untar` the PowerPath archive, Copy the RPM package into a temporary folder and Install PowerPath using `rpm -ivh DellEMCPower.LINUX--..x86_64.rpm` - Start the PowerPath service using `systemctl start PowerPath` ### (Optional) Volume Snapshot Requirements @@ -125,7 +125,7 @@ snapshot: ``` #### Volume Snapshot CRD's -The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. For installation, use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd) +The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. For installation, use [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd) #### Volume Snapshot Controller The CSI external-snapshotter sidecar is split into two controllers to support Volume snapshots. 
@@ -152,7 +152,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl ``` *NOTE:* -- It is recommended to use 5.0.x version of snapshotter/snapshot-controller. +- It is recommended to use 6.0.x version of snapshotter/snapshot-controller. - The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration. ### (Optional) Replication feature Requirements @@ -173,7 +173,7 @@ CRDs should be configured during replication prepare stage with repctl as descri **Steps** -1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powermax.git` to clone the git repository. This will include the Helm charts and dell-csi-helm-installer scripts. +1. Run `git clone -b v2.4.0 https://github.com/dell/csi-powermax.git` to clone the git repository. This will include the Helm charts and dell-csi-helm-installer scripts. 2. Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace powermax` to create a new one 3. Edit the `samples/secret/secret.yaml file, point to the correct namespace, and replace the values for the username and password parameters. These values can be obtained using base64 encoding as described in the following example: @@ -185,7 +185,8 @@ CRDs should be configured during replication prepare stage with repctl as descri 4. Create the secret by running `kubectl create -f samples/secret/secret.yaml`. 5. If you are going to install the new CSI PowerMax ReverseProxy service, create a TLS secret with the name - _csireverseproxy-tls-secret_ which holds an SSL certificate and the corresponding private key in the namespace where you are installing the driver. 6. Copy the default values.yaml file `cd helm && cp csi-powermax/values.yaml my-powermax-settings.yaml` -7. Edit the newly created file and provide values for the following parameters `vi my-powermax-settings.yaml` +7. Ensure the unisphere have 10.0 REST endpoint support. +8. 
Edit the newly created file and provide values for the following parameters `vi my-powermax-settings.yaml` | Parameter | Description | Required | Default | |-----------|--------------|------------|----------| @@ -343,7 +344,7 @@ global: csireverseproxy: # Set enabled to true if you want to use proxy enabled: true - image: dellemc/csipowermax-reverseproxy:v1.4.0 + image: dellemc/csipowermax-reverseproxy:v2.3.0 tlsSecret: csirevproxy-tls-secret deployAsSidecar: true port: 2222 @@ -391,7 +392,7 @@ global: csireverseproxy: # Set enabled to true if you want to use proxy enabled: true - image: dellemc/csipowermax-reverseproxy:v1.4.0 + image: dellemc/csipowermax-reverseproxy:v2.3.0 tlsSecret: csirevproxy-tls-secret deployAsSidecar: true port: 2222 diff --git a/content/docs/csidriver/installation/offline/_index.md b/content/docs/csidriver/installation/offline/_index.md index 63c4b96a29..2d10b99362 100644 --- a/content/docs/csidriver/installation/offline/_index.md +++ b/content/docs/csidriver/installation/offline/_index.md @@ -78,9 +78,9 @@ cd dell-csi-operator/scripts dellemc/csi-isilon:v2.0.0 dellemc/csi-isilon:v2.1.0 - dellemc/csipowermax-reverseproxy:v1.4.0 - dellemc/csi-powermax:v2.0.0 - dellemc/csi-powermax:v2.1.0 + dellemc/csipowermax-reverseproxy:v2.3.0 + dellemc/csi-powermax:v2.3.0 + dellemc/csi-powermax:v2.4.0 dellemc/csi-powerstore:v2.0.0 dellemc/csi-powerstore:v2.1.0 dellemc/csi-unity:v2.0.0 diff --git a/content/docs/csidriver/installation/operator/powermax.md b/content/docs/csidriver/installation/operator/powermax.md index 7c1e13c246..a13a8cc4c2 100644 --- a/content/docs/csidriver/installation/operator/powermax.md +++ b/content/docs/csidriver/installation/operator/powermax.md @@ -36,6 +36,35 @@ Set up the iSCSI initiators as follows: For more information about configuring iSCSI, see [Dell Host Connectivity guide](https://www.delltechnologies.com/asset/zh-tw/products/storage/technical-support/docu5128.pdf). +#### Linux multipathing requirements + +CSI Driver for Dell PowerMax supports Linux multipathing. Configure Linux multipathing before installing the CSI Driver. + +Set up Linux multipathing as follows: + +- All the nodes must have the _Device Mapper Multipathing_ package installed. + *NOTE:* When this package is installed it creates a multipath configuration file which is located at `/etc/multipath.conf`. Please ensure that this file always exists. +- Enable multipathing using `mpathconf --enable --with_multipathd y` +- Enable `user_friendly_names` and `find_multipaths` in the `multipath.conf` file. + +As a best practice, use the following options to help the operating system and the mulitpathing software detect path changes efficiently: +```text +path_grouping_policy multibus +path_checker tur +features "1 queue_if_no_path" +path_selector "round-robin 0" +no_path_retry 10 +``` + +#### PowerPath for Linux requirements + +CSI Driver for Dell PowerMax supports PowerPath for Linux. Configure Linux PowerPath before installing the CSI Driver. + +Set up the PowerPath for Linux as follows: + +- All the nodes must have the PowerPath package installed . Download the PowerPath archive for the environment from [Dell EMC Online Support](https://www.dell.com/support/home/en-in/product-support/product/powerpath-for-linux/drivers). 
+- `Untar` the PowerPath archive, Copy the RPM package into a temporary folder and Install PowerPath using `rpm -ivh DellEMCPower.LINUX--..x86_64.rpm` +- Start the PowerPath service using `systemctl start PowerPath` #### Create secret for client-side TLS verification (Optional) Create a secret named powermax-certs in the namespace where the CSI PowerMax driver will be installed. This is an optional step and is only required if you are setting the env variable X_CSI_POWERMAX_SKIP_CERTIFICATE_VALIDATION to false. See the detailed documentation on how to create this secret [here](../../helm/powermax#certificate-validation-for-unisphere-rest-api-calls). @@ -179,7 +208,7 @@ metadata: namespace: test-powermax # <- Set the namespace to where you will install the CSI PowerMax driver spec: # Image for CSI PowerMax ReverseProxy - image: dellemc/csipowermax-reverseproxy:v2.1.0 # <- CSI PowerMax Reverse Proxy image + image: dellemc/csipowermax-reverseproxy:v2.3.0 # <- CSI PowerMax Reverse Proxy image imagePullPolicy: Always # TLS secret which contains SSL certificate and private key for the Reverse Proxy server tlsSecret: csirevproxy-tls-secret @@ -265,8 +294,8 @@ metadata: namespace: test-powermax spec: driver: - # Config version for CSI PowerMax v2.3.0 driver - configVersion: v2.3.0 + # Config version for CSI PowerMax v2.4.0 driver + configVersion: v2.4.0 # replica: Define the number of PowerMax controller nodes # to deploy to the Kubernetes release # Allowed values: n, where n > 0 @@ -275,8 +304,8 @@ spec: dnsPolicy: ClusterFirstWithHostNet forceUpdate: false common: - # Image for CSI PowerMax driver v2.3.0 - image: dellemc/csi-powermax:v2.3.0 + # Image for CSI PowerMax driver v2.4.0 + image: dellemc/csi-powermax:v2.4.0 # imagePullPolicy: Policy to determine if the image should be pulled prior to starting the container. # Allowed values: # Always: Always pull the image. diff --git a/content/docs/csidriver/upgradation/drivers/powermax.md b/content/docs/csidriver/upgradation/drivers/powermax.md index 6f551a181c..ffa4a0262b 100644 --- a/content/docs/csidriver/upgradation/drivers/powermax.md +++ b/content/docs/csidriver/upgradation/drivers/powermax.md @@ -10,10 +10,16 @@ Description: Upgrade PowerMax CSI driver You can upgrade CSI Driver for Dell PowerMax using Helm or Dell CSI Operator. -## Update Driver from v2.2 to v2.3 using Helm +**Note:** CSI Driver for Powermax v2.4.0 requires 10.0 REST endpoint support of Unisphere. +### Updating the CSI Driver to use 10.0 Unisphere + +1. Upgrade the Unisphere to have 10.0 endpoint support. +2. Update the `my-powermax-settings.yaml` to have endpoint with 10.0 support. + +## Update Driver from v2.3 to v2.4 using Helm **Steps** -1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powermax.git` to clone the git repository and get the v2.3 driver. +1. Run `git clone -b v2.4.0 https://github.com/dell/csi-powermax.git` to clone the git repository and get the v2.4 driver. 2. Update the values file as needed. 2. Run the `csi-install` script with the option _\-\-upgrade_ by running: `cd ../dell-csi-helm-installer && ./csi-install.sh --namespace powermax --values ./my-powermax-settings.yaml --upgrade`. 
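Before running the upgrade, it is worth confirming that the Unisphere instance named in `my-powermax-settings.yaml` really serves the 10.0 REST API, since the v2.4.0 driver only talks to 10.0 endpoints. A minimal check is sketched below; the hostname and credentials are placeholders, not values from this repository:

```bash
# Query the Unisphere REST API version (hostname and credentials are
# placeholders for your environment). A 10.0 Unisphere reports 10.0 here.
curl -k -u 'user:password' https://unisphere.example.com:8443/univmax/restapi/version
```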
From d59af9aaacfcd985935a88e2899d535290bdc38b Mon Sep 17 00:00:00 2001 From: panigs7 Date: Tue, 6 Sep 2022 03:23:48 -0400 Subject: [PATCH 09/15] Updated sidecar versions and release note Unity XT --- content/docs/csidriver/_index.md | 2 +- content/docs/csidriver/installation/helm/unity.md | 2 +- .../docs/csidriver/installation/operator/_index.md | 12 ++++++------ .../docs/csidriver/installation/operator/unity.md | 1 - content/docs/csidriver/installation/test/unity.md | 4 ++-- content/docs/csidriver/release/unity.md | 2 +- content/docs/csidriver/troubleshooting/unity.md | 2 +- .../docs/csidriver/upgradation/drivers/operator.md | 2 +- content/docs/deployment/csmoperator/_index.md | 2 +- 9 files changed, 14 insertions(+), 15 deletions(-) diff --git a/content/docs/csidriver/_index.md b/content/docs/csidriver/_index.md index 8b5f026bd6..649a965fbf 100644 --- a/content/docs/csidriver/_index.md +++ b/content/docs/csidriver/_index.md @@ -53,7 +53,7 @@ The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes- {{}} | | PowerMax | PowerFlex | Unity XT | PowerScale | PowerStore | |---------------|:-------------------------------------------------------:|:----------------:|:--------------------------:|:----------------------------------:|:----------------:| -| Storage Array |5978.479.479, 5978.711.711, 6079.xxx.xxx
Unisphere 10.0 | 3.5.x, 3.6.x | 5.0.7, 5.1.0, 5.1.2 | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3, 9.4 | 1.0.x, 2.0.x, 2.1.x, 3.0 | +| Storage Array |5978.479.479, 5978.711.711, 6079.xxx.xxx
Unisphere 10.0 | 3.5.x, 3.6.x | 5.0.7, 5.1.0, 5.1.2, 5.2.x | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3, 9.4 | 1.0.x, 2.0.x, 2.1.x, 3.0 | {{
}} ### Backend Storage Details {{}} diff --git a/content/docs/csidriver/installation/helm/unity.md b/content/docs/csidriver/installation/helm/unity.md index c8e18fecee..48599c6711 100644 --- a/content/docs/csidriver/installation/helm/unity.md +++ b/content/docs/csidriver/installation/helm/unity.md @@ -88,7 +88,7 @@ Install CSI Driver for Unity XT using this procedure. *Before you begin* - * You must have the downloaded files, including the Helm chart from the source [git repository](https://github.com/dell/csi-unity) with the command ```git clone -b v2.3.0 https://github.com/dell/csi-unity.git```, as a pre-requisite for running this procedure. + * You must have the downloaded files, including the Helm chart from the source [git repository](https://github.com/dell/csi-unity) with the command ```git clone -b v2.4.0 https://github.com/dell/csi-unity.git```, as a pre-requisite for running this procedure. * In the top-level dell-csi-helm-installer directory, there should be two scripts, `csi-install.sh` and `csi-uninstall.sh`. * Ensure _unity_ namespace exists in Kubernetes cluster. Use the `kubectl create namespace unity` command to create the namespace if the namespace is not present. diff --git a/content/docs/csidriver/installation/operator/_index.md b/content/docs/csidriver/installation/operator/_index.md index ff3f767b75..88acc0fd52 100644 --- a/content/docs/csidriver/installation/operator/_index.md +++ b/content/docs/csidriver/installation/operator/_index.md @@ -76,7 +76,7 @@ The installation process involves the creation of a `Subscription` object either * _Automatic_ - If you want the Operator to be automatically installed or upgraded (once an upgrade becomes available) * _Manual_ - If you want a Cluster Administrator to manually review and approve the `InstallPlan` for installation/upgrades -**NOTE**: The recommended version of OLM for upstream Kubernetes is **`v0.18.2`**. +**NOTE**: The recommended version of OLM for upstream Kubernetes is **`v0.18.3`**. #### Pre-Requisite for installation with OLM Please run the following commands for creating the required `ConfigMap` before installing the `dell-csi-operator` using OLM. @@ -298,26 +298,26 @@ The below notes explain some of the general items to take care of. 
- args: - --volume-name-prefix=csiunity - --default-fstype=ext4 - image: k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0 + image: k8s.gcr.io/sig-storage/csi-provisioner:v3.2.0 imagePullPolicy: IfNotPresent name: provisioner - args: - --snapshot-name-prefix=csiunitysnap - image: k8s.gcr.io/sig-storage/csi-snapshotter:v5.0.1 + image: k8s.gcr.io/sig-storage/csi-snapshotter:v6.0.1 imagePullPolicy: IfNotPresent name: snapshotter - args: - --monitor-interval=60s - image: gcr.io/k8s-staging-sig-storage/csi-external-health-monitor-controller:v0.5.0 + image: gcr.io/k8s-staging-sig-storage/csi-external-health-monitor-controller:v0.6.0 imagePullPolicy: IfNotPresent name: external-health-monitor - - image: k8s.gcr.io/sig-storage/csi-attacher:v3.4.0 + - image: k8s.gcr.io/sig-storage/csi-attacher:v3.5.0 imagePullPolicy: IfNotPresent name: attacher - image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.1 imagePullPolicy: IfNotPresent name: registrar - - image: k8s.gcr.io/sig-storage/csi-resizer:v1.4.0 + - image: k8s.gcr.io/sig-storage/csi-resizer:v1.5.0 imagePullPolicy: IfNotPresent name: resizer ``` diff --git a/content/docs/csidriver/installation/operator/unity.md b/content/docs/csidriver/installation/operator/unity.md index 637f571ad2..d728919dde 100644 --- a/content/docs/csidriver/installation/operator/unity.md +++ b/content/docs/csidriver/installation/operator/unity.md @@ -210,7 +210,6 @@ kubectl edit configmap -n unity unity-config-params 3. Also, snapshotter and resizer sidecars are not optional to choose, it comes default with Driver installation. ## Volume Health Monitoring -This feature is introduced in CSI Driver for Unity XT version v2.1.0. ### Operator based installation diff --git a/content/docs/csidriver/installation/test/unity.md b/content/docs/csidriver/installation/test/unity.md index db32d53c98..94e0b71d40 100644 --- a/content/docs/csidriver/installation/test/unity.md +++ b/content/docs/csidriver/installation/test/unity.md @@ -28,9 +28,9 @@ You can find all the created resources in `test-unity` namespace. kubectl delete -f ./test/sample.yaml ``` -## Support for SLES 15 SP2 +## Support for SLES 15 -The CSI Driver for Dell Unity XT requires the following set of packages installed on all worker nodes that run on SLES 15 SP2. +The CSI Driver for Dell Unity XT requires the following set of packages installed on all worker nodes that run on SLES 15. - open-iscsi **open-iscsi is required in order to make use of iSCSI protocol for provisioning** - nfs-utils **nfs-utils is required in order to make use of NFS protocol for provisioning** diff --git a/content/docs/csidriver/release/unity.md b/content/docs/csidriver/release/unity.md index 226a0d9831..a11b94ad9b 100644 --- a/content/docs/csidriver/release/unity.md +++ b/content/docs/csidriver/release/unity.md @@ -10,7 +10,7 @@ description: Release notes for Unity XT CSI driver - [Added support to configure fsGroupPolicy](https://github.com/dell/csm/issues/361) ### Fixed Issues -CSM Resiliency: Occasional failure unmounting Unity volume for raw block devices via iSCSI. +`fsGroup` specified in pod spec is not reflected in files or directories at mounted path of volume. 
### Known Issues diff --git a/content/docs/csidriver/troubleshooting/unity.md b/content/docs/csidriver/troubleshooting/unity.md index 9905215390..84ba491046 100644 --- a/content/docs/csidriver/troubleshooting/unity.md +++ b/content/docs/csidriver/troubleshooting/unity.md @@ -8,9 +8,9 @@ description: Troubleshooting Unity XT Driver | --- | --- | | When you run the command `kubectl describe pods unity-controller- –n unity`, the system indicates that the driver image could not be loaded. | You may need to put an insecure-registries entry in `/etc/docker/daemon.json` or login to the docker registry | | The `kubectl logs -n unity unity-node-` driver logs show that the driver can't connect to Unity XT - Authentication failure. | Check if you have created a secret with correct credentials | -| `fsGroup` specified in pod spec is not reflected in files or directories at mounted path of volume. | fsType of PVC must be set for fsGroup to work. fsType can be specified while creating a storage class. For NFS protocol, fsType can be specified as `nfs`. fsGroup doesn't work for ephemeral inline volumes. | | Dynamic array detection will not work in Topology based environment | Whenever a new array is added or removed, then the driver controller and node pod should be restarted with command **kubectl get pods -n unity --no-headers=true \| awk '/unity-/{print $1}'\| xargs kubectl delete -n unity pod** when **topology-based storage classes are used**. For dynamic array addition without topology, the driver will detect the newly added or removed arrays automatically| | If source PVC is deleted when cloned PVC exists, then source PVC will be deleted in the cluster but on array, it will still be present and marked for deletion. | All the cloned PVC should be deleted in order to delete the source PVC from the array. | | PVC creation fails on a fresh cluster with **iSCSI** and **NFS** protocols alone enabled with error **failed to provision volume with StorageClass "unity-iscsi": error generating accessibility requirements: no available topology found**. | This is because iSCSI initiator login takes longer than the node pod startup time. This can be overcome by bouncing the node pods in the cluster using the below command the driver pods with **kubectl get pods -n unity --no-headers=true \| awk '/unity-/{print $1}'\| xargs kubectl delete -n unity pod** | | Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.21.0 < 1.25.0 which is incompatible with Kubernetes V1.21.11-mirantis-1` | If you are using an extended Kubernetes version, please see the helm Chart at `helm/csi-unity/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. *Please note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. | | When a node goes down, the block volumes attached to the node cannot be attached to another node | 1. Force delete the pod running on the node that went down
2. Delete the VolumeAttachment to the node that went down.
Now the volume can be attached to the new node. |
+| Volume attachments are not removed after deleting the pods | If you are using a Kubernetes version < 1.24, assign a volume name prefix such that the total length of the volume name created in the array is more than 68 bytes. From Kubernetes version >= 1.24 onwards, this issue is taken care of.
Please refer the kubernetes issue https://github.com/kubernetes/kubernetes/issues/97230 which has detailed explanation. | diff --git a/content/docs/csidriver/upgradation/drivers/operator.md b/content/docs/csidriver/upgradation/drivers/operator.md index 782d6ef1e5..51298cee83 100644 --- a/content/docs/csidriver/upgradation/drivers/operator.md +++ b/content/docs/csidriver/upgradation/drivers/operator.md @@ -25,5 +25,5 @@ The `Update approval` (**`InstallPlan`** in OLM terms) strategy plays a role whi - If the **`Update approval`** is set to `Automatic`, OpenShift automatically detects whenever the latest version of dell-csi-operator is available in the **`Operator hub`**, and upgrades it to the latest available version. - If the upgrade policy is set to `Manual`, OpenShift notifies of an available upgrade. This notification can be viewed by the user in the **`Installed Operators`** section of the OpenShift console. Clicking on the hyperlink to `Approve` the installation would trigger the dell-csi-operator upgrade process. -**NOTE**: The recommended version of OLM for Upstream Kubernetes is **`v0.18.3`** when upgrading operator to `v1.5.0`. +**NOTE**: The recommended version of OLM for Upstream Kubernetes is **`v0.18.3`** when upgrading operator to `v1.9.0`. diff --git a/content/docs/deployment/csmoperator/_index.md b/content/docs/deployment/csmoperator/_index.md index afb8519b81..38360d862e 100644 --- a/content/docs/deployment/csmoperator/_index.md +++ b/content/docs/deployment/csmoperator/_index.md @@ -63,7 +63,7 @@ Dell CSM Operator can be installed manually or via Operator Hub. {{< imgproc install_olm_pods.jpg Resize "2500x" >}}{{< /imgproc >}} ->**NOTE**: The recommended version of OLM for upstream Kubernetes is **`v0.18.2`**. +>**NOTE**: The recommended version of OLM for upstream Kubernetes is **`v0.18.3`**. ### Installation via Operator Hub `dell-csm-operator` can be installed via Operator Hub on upstream Kubernetes clusters & Red Hat OpenShift Clusters. From 24711823679fd52a822dc561b29707011fddbae6 Mon Sep 17 00:00:00 2001 From: panigs7 Date: Tue, 6 Sep 2022 07:52:15 -0400 Subject: [PATCH 10/15] storage array support for Unity XT --- content/docs/csidriver/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/docs/csidriver/_index.md b/content/docs/csidriver/_index.md index 649a965fbf..8ceb9743ff 100644 --- a/content/docs/csidriver/_index.md +++ b/content/docs/csidriver/_index.md @@ -53,7 +53,7 @@ The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes- {{
}} | | PowerMax | PowerFlex | Unity XT | PowerScale | PowerStore | |---------------|:-------------------------------------------------------:|:----------------:|:--------------------------:|:----------------------------------:|:----------------:| -| Storage Array |5978.479.479, 5978.711.711, 6079.xxx.xxx
Unisphere 10.0 | 3.5.x, 3.6.x | 5.0.7, 5.1.0, 5.1.2, 5.2.x | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3, 9.4 | 1.0.x, 2.0.x, 2.1.x, 3.0 | +| Storage Array |5978.479.479, 5978.711.711, 6079.xxx.xxx
Unisphere 10.0 | 3.5.x, 3.6.x | 5.0.7, 5.1.0, 5.1.2, 5.2.0 | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3, 9.4 | 1.0.x, 2.0.x, 2.1.x, 3.0 | {{
}} ### Backend Storage Details {{}} From 5906543cfb65c80c9bef0d543b7b1a99b818fae9 Mon Sep 17 00:00:00 2001 From: Utkarsh Dubey Date: Tue, 6 Sep 2022 19:38:46 +0530 Subject: [PATCH 11/15] Adds optional tag in sc --- content/docs/replication/deployment/powermax.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/content/docs/replication/deployment/powermax.md b/content/docs/replication/deployment/powermax.md index 14d3c845e7..06dc2ec149 100644 --- a/content/docs/replication/deployment/powermax.md +++ b/content/docs/replication/deployment/powermax.md @@ -30,14 +30,14 @@ There are some important limitations that apply to how CSI PowerMax driver uses #### Automatic creation of SRDF Groups CSI Driver for Powermax supports automatic creation of SRDF Groups starting **v2.4.0** with help of **10.0** REST endpoints. To use this feature: -* Remove _replication.storage.dell.com/RemoteRDFGroup_ and replication.storage.dell.com/RDFGroup params from the storage classes before creating first replicated volume. +* Remove _replication.storage.dell.com/RemoteRDFGroup_ and _replication.storage.dell.com/RDFGroup_ params from the storage classes before creating first replicated volume. * Driver will check next available RDF pair and use them to create volumes. * This enables customers to use same storage class across namespace to create volume. Limitation of Auto SRDFG: * For Async mode, this feature is supported for namespaces with at most 7 characters. * RDF label used to map namespace with the RDF group has limit of 10 char. 3 char is used for cluster prefix to make RDFG unique across clusters. - +* For namespace with more than 7 char, use manual entry of RDF groups in storage class. #### In Kubernetes Ensure you installed CRDs and replication controller in your clusters. @@ -116,8 +116,8 @@ parameters: replication.storage.dell.com/RemoteServiceLevel: replication.storage.dell.com/RdfMode: replication.storage.dell.com/Bias: "false" - replication.storage.dell.com/RdfGroup: - replication.storage.dell.com/RemoteRDFGroup: + replication.storage.dell.com/RdfGroup: # optional + replication.storage.dell.com/RemoteRDFGroup: # optional replication.storage.dell.com/remoteStorageClassName: replication.storage.dell.com/remoteClusterID: ``` From 764129795373ec7073b3dcb4d3eaa33ff2826d83 Mon Sep 17 00:00:00 2001 From: panigs7 Date: Tue, 6 Sep 2022 11:16:43 -0400 Subject: [PATCH 12/15] Update release docs Unity XT --- content/docs/csidriver/release/unity.md | 3 --- content/docs/csidriver/troubleshooting/unity.md | 1 + 2 files changed, 1 insertion(+), 3 deletions(-) diff --git a/content/docs/csidriver/release/unity.md b/content/docs/csidriver/release/unity.md index a11b94ad9b..9a0668e3c3 100644 --- a/content/docs/csidriver/release/unity.md +++ b/content/docs/csidriver/release/unity.md @@ -9,9 +9,6 @@ description: Release notes for Unity XT CSI driver - [Added support to configure fsGroupPolicy](https://github.com/dell/csm/issues/361) -### Fixed Issues -`fsGroup` specified in pod spec is not reflected in files or directories at mounted path of volume. 
- ### Known Issues | Issue | Workaround | diff --git a/content/docs/csidriver/troubleshooting/unity.md b/content/docs/csidriver/troubleshooting/unity.md index 84ba491046..cd398664b5 100644 --- a/content/docs/csidriver/troubleshooting/unity.md +++ b/content/docs/csidriver/troubleshooting/unity.md @@ -8,6 +8,7 @@ description: Troubleshooting Unity XT Driver | --- | --- | | When you run the command `kubectl describe pods unity-controller- –n unity`, the system indicates that the driver image could not be loaded. | You may need to put an insecure-registries entry in `/etc/docker/daemon.json` or login to the docker registry | | The `kubectl logs -n unity unity-node-` driver logs show that the driver can't connect to Unity XT - Authentication failure. | Check if you have created a secret with correct credentials | +| `fsGroup` specified in pod spec is not reflected in files or directories at mounted path of volume. | fsType of PVC must be set for fsGroup to work. fsType can be specified while creating a storage class. For NFS protocol, fsType can be specified as `nfs`. fsGroup doesn't work for ephemeral inline volumes. | | Dynamic array detection will not work in Topology based environment | Whenever a new array is added or removed, then the driver controller and node pod should be restarted with command **kubectl get pods -n unity --no-headers=true \| awk '/unity-/{print $1}'\| xargs kubectl delete -n unity pod** when **topology-based storage classes are used**. For dynamic array addition without topology, the driver will detect the newly added or removed arrays automatically| | If source PVC is deleted when cloned PVC exists, then source PVC will be deleted in the cluster but on array, it will still be present and marked for deletion. | All the cloned PVC should be deleted in order to delete the source PVC from the array. | | PVC creation fails on a fresh cluster with **iSCSI** and **NFS** protocols alone enabled with error **failed to provision volume with StorageClass "unity-iscsi": error generating accessibility requirements: no available topology found**. | This is because iSCSI initiator login takes longer than the node pod startup time. This can be overcome by bouncing the node pods in the cluster using the below command the driver pods with **kubectl get pods -n unity --no-headers=true \| awk '/unity-/{print $1}'\| xargs kubectl delete -n unity pod** | From f80414b32333413f5892d80474bb7f6933bf5c98 Mon Sep 17 00:00:00 2001 From: Yamunadevi N Shanmugam Date: Tue, 6 Sep 2022 20:47:36 +0530 Subject: [PATCH 13/15] updated PMAX docs --- content/docs/csidriver/release/powermax.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/content/docs/csidriver/release/powermax.md b/content/docs/csidriver/release/powermax.md index 8dcdc0ae88..15af4dcc66 100644 --- a/content/docs/csidriver/release/powermax.md +++ b/content/docs/csidriver/release/powermax.md @@ -8,12 +8,12 @@ description: Release notes for PowerMax CSI driver > Note: Starting from CSI v2.4.0, Only Unisphere 10.0 REST endpoints are supported. It is mandatory that Unisphere should be updated to 10.0. ### New Features/Changes -- Online volume expansion for replicated volumes. -- Added support for PowerMaxOS 10. -- Removed 9.x Unisphere REST endpoints support. -- Added 10.0 Unisphere REST endpoints support. -- Automatic SRDF group creation for PowerMax arrays (PowerMaxOS 10 and above). -- Added PowerPath support. 
+- [Online volume expansion for replicated volumes.](https://github.com/dell/csm/issues/336) +- [Added support for PowerMaxOS 10.](https://github.com/dell/csm/issues/389) +- [Removed 9.x Unisphere REST endpoints support.](https://github.com/dell/csm/issues/389) +- [Added 10.0 Unisphere REST endpoints support.](https://github.com/dell/csm/issues/389) +- [Automatic SRDF group creation for PowerMax arrays (PowerMaxOS 10 and above).](https://github.com/dell/csm/issues/411) +- [Added PowerPath support.](https://github.com/dell/csm/issues/436) ### Fixed Issues There are no fixed issues in this release. From 550508b2a70944db8172672d32d26bd2ec16f85e Mon Sep 17 00:00:00 2001 From: Yamunadevi N Shanmugam Date: Tue, 6 Sep 2022 21:13:54 +0530 Subject: [PATCH 14/15] updated PMAX docs --- content/docs/csidriver/installation/helm/isilon.md | 6 +++--- content/docs/csidriver/installation/helm/powerflex.md | 6 +++--- content/docs/csidriver/installation/helm/powermax.md | 2 +- content/docs/csidriver/installation/helm/powerstore.md | 6 +++--- content/docs/csidriver/installation/helm/unity.md | 6 +++--- content/docs/csidriver/installation/operator/_index.md | 6 +++--- content/docs/deployment/csmoperator/drivers/_index.md | 2 +- 7 files changed, 17 insertions(+), 17 deletions(-) diff --git a/content/docs/csidriver/installation/helm/isilon.md b/content/docs/csidriver/installation/helm/isilon.md index 991d309b2c..d7665b429e 100644 --- a/content/docs/csidriver/installation/helm/isilon.md +++ b/content/docs/csidriver/installation/helm/isilon.md @@ -47,14 +47,14 @@ controller: ``` #### Volume Snapshot CRD's -The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd) +The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd) #### Volume Snapshot Controller The CSI external-snapshotter sidecar is split into two controllers: - A common snapshot controller - A CSI external-snapshotter sidecar -The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller) +The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller) *NOTE:* - The manifests available on GitHub install the snapshotter image: @@ -103,7 +103,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl ``` *NOTE:* -- It is recommended to use 5.0.x version of snapshotter/snapshot-controller. +- It is recommended to use 6.0.x version of snapshotter/snapshot-controller. 
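After installing the CRDs and the snapshot-controller, both can be verified with `kubectl`. A hedged sketch; the `app=snapshot-controller` label assumes the upstream deployment manifests' default labels:

```bash
# Confirm the v1 snapshot CRDs are registered with the API server
kubectl get crd volumesnapshots.snapshot.storage.k8s.io \
  volumesnapshotclasses.snapshot.storage.k8s.io \
  volumesnapshotcontents.snapshot.storage.k8s.io

# Confirm the common snapshot-controller pods are running
kubectl get pods -A -l app=snapshot-controller
```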
### (Optional) Replication feature Requirements diff --git a/content/docs/csidriver/installation/helm/powerflex.md b/content/docs/csidriver/installation/helm/powerflex.md index ff4af93181..ac3904b14e 100644 --- a/content/docs/csidriver/installation/helm/powerflex.md +++ b/content/docs/csidriver/installation/helm/powerflex.md @@ -78,14 +78,14 @@ controller: ``` #### Volume Snapshot CRD's -The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd) +The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here: [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd) #### Volume Snapshot Controller The CSI external-snapshotter sidecar is split into two controllers: - A common snapshot controller - A CSI external-snapshotter sidecar -The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller) +The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller) *NOTE:* - The manifests available on GitHub install the snapshotter image: @@ -104,7 +104,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl ``` *NOTE:* -- When using Kubernetes 1.21/1.22/1.23 it is recommended to use 5.0.x version of snapshotter/snapshot-controller. +- When using Kubernetes 1.21/1.22/1.23 it is recommended to use 6.0.x version of snapshotter/snapshot-controller. - The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration. ## Install the Driver diff --git a/content/docs/csidriver/installation/helm/powermax.md b/content/docs/csidriver/installation/helm/powermax.md index 54750ae294..ce0d87fbfc 100644 --- a/content/docs/csidriver/installation/helm/powermax.md +++ b/content/docs/csidriver/installation/helm/powermax.md @@ -133,7 +133,7 @@ The CSI external-snapshotter sidecar is split into two controllers to support Vo - A common snapshot controller - A CSI external-snapshotter sidecar -The common snapshot controller must be installed only once in the cluster, irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. 
In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller) +The common snapshot controller must be installed only once in the cluster, irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller) *NOTE:* - The manifests available on GitHub install the snapshotter image: diff --git a/content/docs/csidriver/installation/helm/powerstore.md b/content/docs/csidriver/installation/helm/powerstore.md index 5a9d7ec59d..36de891cfa 100644 --- a/content/docs/csidriver/installation/helm/powerstore.md +++ b/content/docs/csidriver/installation/helm/powerstore.md @@ -102,7 +102,7 @@ snapshot: ``` #### Volume Snapshot CRD's -The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd) for the installation. +The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd) for the installation. #### Volume Snapshot Controller The CSI external-snapshotter sidecar is split into two controllers: @@ -110,7 +110,7 @@ The CSI external-snapshotter sidecar is split into two controllers: - A CSI external-snapshotter sidecar The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available: -Use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller) for the installation. +Use [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller) for the installation. *NOTE:* - The manifests available on GitHub install the snapshotter image: @@ -129,7 +129,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl ``` *NOTE:* -- It is recommended to use 5.0.x version of snapshotter/snapshot-controller. +- It is recommended to use 6.0.x version of snapshotter/snapshot-controller. ### Volume Health Monitoring diff --git a/content/docs/csidriver/installation/helm/unity.md b/content/docs/csidriver/installation/helm/unity.md index 48599c6711..45078cbe25 100644 --- a/content/docs/csidriver/installation/helm/unity.md +++ b/content/docs/csidriver/installation/helm/unity.md @@ -252,14 +252,14 @@ Procedure In order to use the Kubernetes Volume Snapshot feature, you must ensure the following components have been deployed on your Kubernetes cluster #### Volume Snapshot CRD's - The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd) for the installation. 
+ The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd) for the installation. #### Volume Snapshot Controller The CSI external-snapshotter sidecar is split into two controllers: - A common snapshot controller - A CSI external-snapshotter sidecar - Use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller) for the installation. + Use [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller) for the installation. #### Installation example @@ -273,7 +273,7 @@ Procedure ``` **Note**: - - It is recommended to use 5.0.x version of snapshotter/snapshot-controller. + - It is recommended to use 6.0.x version of snapshotter/snapshot-controller. - The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration. diff --git a/content/docs/csidriver/installation/operator/_index.md b/content/docs/csidriver/installation/operator/_index.md index 88acc0fd52..65bd661ba1 100644 --- a/content/docs/csidriver/installation/operator/_index.md +++ b/content/docs/csidriver/installation/operator/_index.md @@ -11,14 +11,14 @@ The Dell CSI Operator is a Kubernetes Operator, which can be used to install and ## Prerequisites #### Volume Snapshot CRD's -The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd) +The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd) #### Volume Snapshot Controller The CSI external-snapshotter sidecar is split into two controllers: - A common snapshot controller - A CSI external-snapshotter sidecar -The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller) +The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller) *NOTE:* - The manifests available on GitHub install the snapshotter image: @@ -37,7 +37,7 @@ kubectl create -f deploy/kubernetes/snapshot-controller ``` *NOTE:* -- It is recommended to use 5.0.x version of snapshotter/snapshot-controller. +- It is recommended to use 6.0.x version of snapshotter/snapshot-controller. 
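With the CRDs and snapshot-controller in place, snapshot support can be exercised end to end through a `VolumeSnapshotClass` and a `VolumeSnapshot`. The sketch below is illustrative only; the class name, driver name, and PVC name are placeholders to adapt to whichever Dell CSI driver is installed:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: example-snapclass             # placeholder name
driver: csi-isilon.dellemc.com        # placeholder: your driver's provisioner name
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: example-snapshot
  namespace: default
spec:
  volumeSnapshotClassName: example-snapclass
  source:
    persistentVolumeClaimName: example-pvc   # placeholder: an existing PVC
```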
+- It is recommended to use 6.0.x version of snapshotter/snapshot-controller.

## Installation

diff --git a/content/docs/deployment/csmoperator/drivers/_index.md b/content/docs/deployment/csmoperator/drivers/_index.md
index 18129d5071..91c428b596 100644
--- a/content/docs/deployment/csmoperator/drivers/_index.md
+++ b/content/docs/deployment/csmoperator/drivers/_index.md
@@ -37,7 +37,7 @@ kubectl create -f client/config/crd
kubectl create -f deploy/kubernetes/snapshot-controller
```
*NOTE:*
-- It is recommended to use 5.0.x version of snapshotter/snapshot-controller.
+- It is recommended to use 6.0.x version of snapshotter/snapshot-controller.

## Installing CSI Driver via Operator

From dd634c015e3c11494bacc613bb4e336270c6d690 Mon Sep 17 00:00:00 2001
From: Yamunadevi N Shanmugam
Date: Tue, 6 Sep 2022 22:53:51 +0530
Subject: [PATCH 15/15] updated PMAX docs

---
 content/docs/csidriver/features/powerscale.md            | 2 +-
 content/docs/csidriver/installation/helm/powerflex.md    | 4 ++--
 content/docs/csidriver/installation/helm/powermax.md     | 4 ++--
 content/docs/csidriver/installation/operator/powermax.md | 6 +++---
 content/docs/csidriver/installation/test/unity.md        | 2 +-
 content/docs/snapshots/volume-group-snapshots/_index.md  | 2 +-
 6 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/content/docs/csidriver/features/powerscale.md b/content/docs/csidriver/features/powerscale.md
index 5c9e0e13bb..085ee57ffd 100644
--- a/content/docs/csidriver/features/powerscale.md
+++ b/content/docs/csidriver/features/powerscale.md
@@ -22,7 +22,7 @@ You can use existent volumes from the PowerScale array as Persistent Volumes in
 1. Open your volume in One FS, and take a note of volume-id.
 2. Create PersistentVolume and use this volume-id as a volumeHandle in the manifest. Modify other parameters according to your needs.
 3. In the following example, the PowerScale cluster accessZone is assumed as 'System', storage class as 'isilon', cluster name as 'pscale-cluster' and volume's internal name as 'isilonvol'. The volume-handle should be in the format of <volume name>=_=_=<export ID>=_=_=<access zone>=_=_=<cluster name>
-4. If Quotas are enabled in the driver, it is required to add the Quota ID to the description of the NFS export in the following format:
+4. If Quotas are enabled in the driver, it is required to add the Quota ID to the description of the NFS export in this format:
 `CSI_QUOTA_ID:sC-kAAEAAAAAAAAAAAAAQEpVAAAAAAAA`
 5. Quota ID can be identified by querying the PowerScale system.
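For illustration, a minimal statically provisioned PersistentVolume using this handle format could look like the sketch below. The PV name, capacity, and export ID `557` are assumptions for the example, and the `driver` value should match the name under which your CSI PowerScale driver is registered:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: isilon-static-pv          # hypothetical name
spec:
  capacity:
    storage: 8Gi                  # assumed size
  accessModes:
    - ReadWriteMany
  storageClassName: isilon
  csi:
    driver: csi-isilon.dellemc.com   # assumed registered driver name
    # <volume name>=_=_=<export ID>=_=_=<access zone>=_=_=<cluster name>
    volumeHandle: isilonvol=_=_=557=_=_=System=_=_=pscale-cluster
```

The four `=_=_=`-separated fields in `volumeHandle` simply mirror the values assumed in step 3 above.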
diff --git a/content/docs/csidriver/installation/helm/powerflex.md b/content/docs/csidriver/installation/helm/powerflex.md
index ac3904b14e..00d3ce3110 100644
--- a/content/docs/csidriver/installation/helm/powerflex.md
+++ b/content/docs/csidriver/installation/helm/powerflex.md
@@ -104,7 +104,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
```

*NOTE:*
-- When using Kubernetes 1.21/1.22/1.23 it is recommended to use 6.0.x version of snapshotter/snapshot-controller.
+- When using Kubernetes, it is recommended to use 6.0.x version of snapshotter/snapshot-controller.
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.

## Install the Driver

@@ -158,7 +158,7 @@ Use the below command to replace or update the secret:

- "insecure" parameter has been changed to "skipCertificateValidation" as insecure is deprecated and will be removed from use in config.yaml or secret.yaml in a future release. Users can continue to use any one of "insecure" or "skipCertificateValidation" for now. The driver would return an error if both parameters are used.
- Please note that log configuration parameters from v1.5 will no longer work in v2.0 and higher. Please refer to the [Dynamic Logging Configuration](../../../features/powerflex#dynamic-logging-configuration) section in Features for more information.
- If the user is using complex K8s version like "v1.21.3-mirantis-1", use this kubeVersion check in helm/csi-unity/Chart.yaml file.
-   kubeVersion: ">= 1.21.0-0 < 1.24.0-0"
+   kubeVersion: ">= 1.21.0-0 < 1.25.0-0"

5. Default logging options are set during Helm install. To see possible configuration options, see the [Dynamic Logging Configuration](../../../features/powerflex#dynamic-logging-configuration) section in Features.

diff --git a/content/docs/csidriver/installation/helm/powermax.md b/content/docs/csidriver/installation/helm/powermax.md
index ce0d87fbfc..2d25ad5042 100644
--- a/content/docs/csidriver/installation/helm/powermax.md
+++ b/content/docs/csidriver/installation/helm/powermax.md
@@ -111,7 +111,7 @@ CSI Driver for Dell PowerMax supports PowerPath for Linux. Configure Linux Power

Set up the PowerPath for Linux as follows:

-- All the nodes must have the PowerPath package installed . Download the PowerPath archive for the environment from [Dell EMC Online Support](https://www.dell.com/support/home/en-in/product-support/product/powerpath-for-linux/drivers).
+- All the nodes must have the PowerPath package installed. Download the PowerPath archive for the environment from [Dell Online Support](https://www.dell.com/support/home/en-in/product-support/product/powerpath-for-linux/drivers).
- `Untar` the PowerPath archive, copy the RPM package into a temporary folder, and install PowerPath using `rpm -ivh DellEMCPower.LINUX-<version>-<build>.x86_64.rpm`
- Start the PowerPath service using `systemctl start PowerPath`

@@ -185,7 +185,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
4. Create the secret by running `kubectl create -f samples/secret/secret.yaml`.
5. If you are going to install the new CSI PowerMax ReverseProxy service, create a TLS secret with the name - _csireverseproxy-tls-secret_ which holds an SSL certificate and the corresponding private key in the namespace where you are installing the driver.
6. Copy the default values.yaml file `cd helm && cp csi-powermax/values.yaml my-powermax-settings.yaml`
-7. Ensure the unisphere have 10.0 REST endpoint support.
+7. Ensure Unisphere has 10.0 REST endpoint support by clicking Unisphere -> Help (?) -> About in the Unisphere for PowerMax GUI.
8. Edit the newly created file and provide values for the following parameters `vi my-powermax-settings.yaml`

| Parameter | Description | Required | Default |
diff --git a/content/docs/csidriver/installation/operator/powermax.md b/content/docs/csidriver/installation/operator/powermax.md
index a13a8cc4c2..1290b00418 100644
--- a/content/docs/csidriver/installation/operator/powermax.md
+++ b/content/docs/csidriver/installation/operator/powermax.md
@@ -47,7 +47,7 @@ Set up Linux multipathing as follows:
- Enable multipathing using `mpathconf --enable --with_multipathd y`
- Enable `user_friendly_names` and `find_multipaths` in the `multipath.conf` file.
-As a best practice, use the following options to help the operating system and the mulitpathing software detect path changes efficiently:
+As a best practice, use these options to help the operating system and the multipathing software detect path changes efficiently:
```text
path_grouping_policy multibus
path_checker tur
@@ -60,9 +60,9 @@ no_path_retry 10
```

CSI Driver for Dell PowerMax supports PowerPath for Linux. Configure Linux PowerPath before installing the CSI Driver.

-Set up the PowerPath for Linux as follows:
+Follow this procedure to set up PowerPath for Linux:

-- All the nodes must have the PowerPath package installed . Download the PowerPath archive for the environment from [Dell EMC Online Support](https://www.dell.com/support/home/en-in/product-support/product/powerpath-for-linux/drivers).
+- All the nodes must have the PowerPath package installed. Download the PowerPath archive for the environment from [Dell Online Support](https://www.dell.com/support/home/en-in/product-support/product/powerpath-for-linux/drivers).
- `Untar` the PowerPath archive, copy the RPM package into a temporary folder, and install PowerPath using `rpm -ivh DellEMCPower.LINUX-<version>-<build>.x86_64.rpm`
- Start the PowerPath service using `systemctl start PowerPath`

diff --git a/content/docs/csidriver/installation/test/unity.md b/content/docs/csidriver/installation/test/unity.md
index 94e0b71d40..d969ead6aa 100644
--- a/content/docs/csidriver/installation/test/unity.md
+++ b/content/docs/csidriver/installation/test/unity.md
@@ -30,7 +30,7 @@ You can find all the created resources in `test-unity` namespace.

## Support for SLES 15

-The CSI Driver for Dell Unity XT requires the following set of packages installed on all worker nodes that run on SLES 15.
+The CSI Driver for Dell Unity XT requires these packages installed on all worker nodes that run on SLES 15.

- open-iscsi **open-iscsi is required in order to make use of iSCSI protocol for provisioning**
- nfs-utils **nfs-utils is required in order to make use of NFS protocol for provisioning**

diff --git a/content/docs/snapshots/volume-group-snapshots/_index.md b/content/docs/snapshots/volume-group-snapshots/_index.md
index 94dfdfe670..3fcf1f5426 100644
--- a/content/docs/snapshots/volume-group-snapshots/_index.md
+++ b/content/docs/snapshots/volume-group-snapshots/_index.md
@@ -7,7 +7,7 @@ Description: >
 ---
 ## Volume Group Snapshot Feature
 The Dell CSM Volume Group Snapshotter is an operator which extends Kubernetes API to support crash-consistent snapshots of groups of volumes.
-Volume Group Snapshot supports Powerflex and Powerstore driver.
+Volume Group Snapshot supports the PowerFlex and PowerStore drivers.
 In order to use Volume Group Snapshots, ensure the volume snapshot module is enabled.

 - Kubernetes Volume Snapshot CRDs
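To round out the snapshot prerequisite mentioned above, a minimal VolumeSnapshotClass of the kind these drivers consume might look like this sketch; the class name is hypothetical, and the `driver` value is an assumption that must match your installed driver's registered name:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: vgs-snapclass                 # hypothetical name
driver: csi-powerstore.dellemc.com    # assumption: PowerStore; substitute your driver's name
deletionPolicy: Delete                # array snapshots are removed with their VolumeSnapshot objects
```

With `deletionPolicy: Delete`, the underlying array snapshots are cleaned up when the VolumeSnapshot objects are deleted; use `Retain` if they should outlive the Kubernetes objects.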