diff --git a/content/v1/authorization/_index.md b/content/v1/authorization/_index.md
index f11031b38e..4f9c019f3b 100644
--- a/content/v1/authorization/_index.md
+++ b/content/v1/authorization/_index.md
@@ -33,9 +33,7 @@ The following diagram shows a high-level overview of CSM for Authorization with
{{
}}
## Supported CSI Drivers
@@ -61,21 +59,8 @@ CSM for Authorization supports the following CSI drivers and versions.
To resolve this, please refer to our [troubleshooting guide](./troubleshooting) on the topic.
## Authorization Components Support Matrix
-CSM for Authorization consists of 2 components - the Authorization sidecar and the Authorization proxy server. It is important that the version of the Authorization sidecar image maps to a supported version of the Authorization proxy server.
+CSM for Authorization consists of 2 components: the Authorization sidecar and the Authorization proxy server. The Authorization sidecar, bundled with the driver, communicates with the Authorization proxy server to validate access to storage platforms. The Authorization sidecar is backward compatible with older Authorization proxy server versions; however, it is highly recommended to install the Authorization proxy server and sidecar from the same release of CSM.
-{{
}}
## Roles and Responsibilities
The CSM for Authorization CLI can be executed in the context of the following roles:
diff --git a/content/v1/authorization/cli.md b/content/v1/authorization/cli.md
index e395ee58ca..8f13774355 100644
--- a/content/v1/authorization/cli.md
+++ b/content/v1/authorization/cli.md
@@ -247,6 +247,11 @@ Usually, you will want to pipe the output to kubectl to apply the secret
```bash
karavictl generate token --tenant Alice --admin-token admintoken.yaml --addr csm-authorization.host.com | kubectl apply -f -
```
+The token is read once when the driver pods are started and is not dynamically updated. If you are applying a new token in an existing driver installation, restart the driver pods for the new token to take effect.
+```bash
+kubectl -n <driver-namespace> rollout restart deploy/<driver-prefix>-controller
+kubectl -n <driver-namespace> rollout restart ds/<driver-prefix>-node
+```
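+
+For example, with the CSI PowerFlex driver installed in the `vxflexos` namespace (your namespace and object names may differ):
+```bash
+kubectl -n vxflexos rollout restart deploy/vxflexos-controller
+kubectl -n vxflexos rollout restart ds/vxflexos-node
+```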
### karavictl role
diff --git a/content/v1/authorization/configuration/powerflex/_index.md b/content/v1/authorization/configuration/powerflex/_index.md
index 406013bd61..903245f0b0 100644
--- a/content/v1/authorization/configuration/powerflex/_index.md
+++ b/content/v1/authorization/configuration/powerflex/_index.md
@@ -106,7 +106,7 @@ Given a setup where Kubernetes, a storage system, and the CSM for Authorization
- Update `authorization.enabled` to `true`.
- - Update `authorization.sidecarProxyImage` to the image of the CSM Authorization sidecar. In most cases, you can leave the default value.
+ - Update `images.authorization` to the image of the CSM Authorization sidecar. In most cases, you can leave the default value.
- Update `authorization.proxyHost` to the hostname of the CSM Authorization Proxy Server.
@@ -119,8 +119,8 @@ Given a setup where Kubernetes, a storage system, and the CSM for Authorization
enabled: true
# sidecarProxyImage: the container image used for the csm-authorization-sidecar.
- # Default value: dellemc/csm-authorization-sidecar:v1.8.0
- sidecarProxyImage: dellemc/csm-authorization-sidecar:v1.8.0
+ # Default value: dellemc/csm-authorization-sidecar:v1.9.0
+ sidecarProxyImage: dellemc/csm-authorization-sidecar:v1.9.0
# proxyHost: hostname of the csm-authorization server
# Default value: None
@@ -156,10 +156,10 @@ Given a setup where Kubernetes, a storage system, and the CSM for Authorization
- name: authorization
# enable: Enable/Disable csm-authorization
enabled: true
- configVersion: v1.8.0
+ configVersion: v1.9.0
components:
- name: karavi-authorization-proxy
- image: dellemc/csm-authorization-sidecar:v1.8.0
+ image: dellemc/csm-authorization-sidecar:v1.9.0
envs:
# proxyHost: hostname of the csm-authorization server
- name: "PROXY_HOST"
diff --git a/content/v1/authorization/configuration/powermax/_index.md b/content/v1/authorization/configuration/powermax/_index.md
index 22aadfadbf..ca4f350226 100644
--- a/content/v1/authorization/configuration/powermax/_index.md
+++ b/content/v1/authorization/configuration/powermax/_index.md
@@ -65,7 +65,7 @@ Create the karavi-authorization-config secret using this command:
- Update `authorization.enabled` to `true`.
- - Update `authorization.sidecarProxyImage` to the image of the CSM Authorization sidecar. In most cases, you can leave the default value.
+ - Update `images.authorization` to the image of the CSM Authorization sidecar. In most cases, you can leave the default value.
- Update `authorization.proxyHost` to the hostname of the CSM Authorization Proxy Server.
@@ -85,8 +85,8 @@ Create the karavi-authorization-config secret using this command:
enabled: true
# sidecarProxyImage: the container image used for the csm-authorization-sidecar.
- # Default value: dellemc/csm-authorization-sidecar:v1.8.0
- sidecarProxyImage: dellemc/csm-authorization-sidecar:v1.8.0
+ # Default value: dellemc/csm-authorization-sidecar:v1.9.0
+ sidecarProxyImage: dellemc/csm-authorization-sidecar:v1.9.0
# proxyHost: hostname of the csm-authorization server
# Default value: None
@@ -100,6 +100,42 @@ Create the karavi-authorization-config secret using this command:
skipCertificateValidation: true
```
+ **Operator**
+
+ Refer to the [Install Driver](../../../deployment/csmoperator/drivers/powermax/#install-driver) section to edit the parameters in the Custom Resource to enable CSM Authorization.
+
+ Under `modules`, enable the module named `authorization`:
+
+ - Update the `enabled` field to `true`.
+
+ - Update the `image` to the image of the CSM Authorization sidecar. In most cases, you can leave the default value.
+
+ - Update the `PROXY_HOST` environment value to the hostname of the CSM Authorization Proxy Server.
+
+ - Update the `SKIP_CERTIFICATE_VALIDATION` environment value to `true` or `false`, depending on whether you want to disable or enable certificate validation of the CSM Authorization Proxy Server.
+
+ Example:
+
+ ```yaml
+ modules:
+ # Authorization: enable csm-authorization for RBAC
+ - name: authorization
+ # enable: Enable/Disable csm-authorization
+ enabled: true
+ configVersion: v1.9.0
+ components:
+ - name: karavi-authorization-proxy
+ image: dellemc/csm-authorization-sidecar:v1.9.0
+ envs:
+ # proxyHost: hostname of the csm-authorization server
+ - name: "PROXY_HOST"
+ value: "csm-authorization.com"
+
+ # skipCertificateValidation: Enable/Disable certificate validation of the csm-authorization server
+ - name: "SKIP_CERTIFICATE_VALIDATION"
+ value: "true"
+ ```
+
5. Install the Dell CSI PowerMax driver following the appropriate documentation for your installation method.
6. (Optional) Install [dellctl](../../../references/cli) to perform Kubernetes administrator commands for additional capabilities (e.g., list volumes). Please refer to the [dellctl documentation page](../../../references/cli) for the installation steps and command list.
\ No newline at end of file
diff --git a/content/v1/authorization/configuration/powerscale/_index.md b/content/v1/authorization/configuration/powerscale/_index.md
index 5e0ca63c16..62964bdd54 100644
--- a/content/v1/authorization/configuration/powerscale/_index.md
+++ b/content/v1/authorization/configuration/powerscale/_index.md
@@ -114,7 +114,7 @@ kubectl -n isilon create secret generic karavi-authorization-config --from-file=
- Update `authorization.enabled` to `true`.
- - Update `authorization.sidecarProxyImage` to the image of the CSM Authorization sidecar. In most cases, you can leave the default value.
+ - Update `images.authorization` to the image of the CSM Authorization sidecar. In most cases, you can leave the default value.
- Update `authorization.proxyHost` to the hostname of the CSM Authorization Proxy Server.
@@ -127,8 +127,8 @@ kubectl -n isilon create secret generic karavi-authorization-config --from-file=
enabled: true
# sidecarProxyImage: the container image used for the csm-authorization-sidecar.
- # Default value: dellemc/csm-authorization-sidecar:v1.8.0
- sidecarProxyImage: dellemc/csm-authorization-sidecar:v1.8.0
+ # Default value: dellemc/csm-authorization-sidecar:v1.9.0
+ sidecarProxyImage: dellemc/csm-authorization-sidecar:v1.9.0
# proxyHost: hostname of the csm-authorization server
# Default value: None
@@ -162,10 +162,10 @@ kubectl -n isilon create secret generic karavi-authorization-config --from-file=
- name: authorization
# enable: Enable/Disable csm-authorization
enabled: true
- configVersion: v1.8.0
+ configVersion: v1.9.0
components:
- name: karavi-authorization-proxy
- image: dellemc/csm-authorization-sidecar:v1.8.0
+ image: dellemc/csm-authorization-sidecar:v1.9.0
envs:
# proxyHost: hostname of the csm-authorization server
- name: "PROXY_HOST"
diff --git a/content/v1/authorization/release/_index.md b/content/v1/authorization/release/_index.md
index a64bec93ca..3bcadd9408 100644
--- a/content/v1/authorization/release/_index.md
+++ b/content/v1/authorization/release/_index.md
@@ -6,18 +6,25 @@ Description: >
Dell Container Storage Modules (CSM) release notes for authorization
---
-## Release Notes - CSM Authorization 1.8.0
+## Release Notes - CSM Authorization 1.9.1
+
### New Features/Changes
-- [#922 - [FEATURE]: Use ubi9 micro as base image](https://github.com/dell/csm/issues/922)
+- [#947 - [FEATURE]: Support for Kubernetes 1.28](https://github.com/dell/csm/issues/947)
+- [#1066 - [FEATURE]: Support for Openshift 4.14](https://github.com/dell/csm/issues/1066)
+- [#996 - [FEATURE]: Dell CSI to Dell CSM Operator Migration Process](https://github.com/dell/csm/issues/996)
+- [#1031 - [FEATURE]: Update to the latest UBI Micro image for CSM](https://github.com/dell/csm/issues/1031)
+- [#1062 - [FEATURE]: CSM PowerMax: Support PowerMax v10.1](https://github.com/dell/csm/issues/1062)
### Fixed Issues
-- [#895 - [BUG]: Update CSM Authorization karavictl CLI flag descriptions](https://github.com/dell/csm/issues/895)
-- [#916 - [BUG]: Remove references to deprecated io/ioutil package](https://github.com/dell/csm/issues/916)
### Known Issues
diff --git a/content/v1/authorization/troubleshooting.md b/content/v1/authorization/troubleshooting.md
index 664e73b98e..08a6c6aa3d 100644
--- a/content/v1/authorization/troubleshooting.md
+++ b/content/v1/authorization/troubleshooting.md
@@ -15,10 +15,7 @@ The CSM Authorization RPM will be deprecated in a future release. It is highly r
- [Running `karavictl tenant` commands result in an HTTP 504 error](#running-karavictl-tenant-commands-result-in-an-http-504-error)
- [Installation fails to install policies](#installation-fails-to-install-policies)
- [After installation, the create-pvc Pod is in an Error state](#after-installation-the-create-pvc-pod-is-in-an-error-state)
-
-## Helm Deployment
-- [The CSI Driver for Dell PowerFlex v2.3.0 is in an Error or CrashLoopBackoff state due to "request denied for path" errors](#the-csi-driver-for-dell-powerflex-v230-is-in-an-error-or-crashloopbackoff-state-due-to-request-denied-for-path-errors)
-
+- [Intermittent 401 issues with generated token](#intermittent-401-issues-with-generated-token)
---
### The Failure of Building an Authorization RPM
@@ -97,6 +94,23 @@ Run the following commands to allow the PVC to be created:
semanage fcontext -a -t container_file_t "/var/lib/rancher/k3s/storage(/.*)?"
restorecon -R /var/lib/rancher/k3s/storage/
```
+### Intermittent 401 issues with generated token
+This issue occurs when a new access token is generated in an existing driver installation.
+
+__Resolution__
+
+If you are applying a new token in an existing driver installation, restart the driver pods for the new token to take effect. The token is read once when the driver pods are started and is not dynamically updated.
+```bash
+kubectl -n <driver-namespace> rollout restart deploy/<driver-prefix>-controller
+kubectl -n <driver-namespace> rollout restart ds/<driver-prefix>-node
+```
+
+## Helm Deployment
+- [The CSI Driver for Dell PowerFlex v2.3.0 is in an Error or CrashLoopBackoff state due to "request denied for path" errors](#the-csi-driver-for-dell-powerflex-v230-is-in-an-error-or-crashloopbackoff-state-due-to-request-denied-for-path-errors)
+- [Intermittent 401 issues with generated token](#intermittent-401-issues-with-generated-token)
+
+---
+
### The CSI Driver for Dell PowerFlex v2.3.0 is in an Error or CrashLoopBackoff state due to "request denied for path" errors
The vxflexos-controller pods will have logs similar to:
kubectl -n <authorization-namespace> rollout restart deploy/proxy-server
kubectl -n <driver-namespace> rollout restart deploy/vxflexos-controller
kubectl -n <driver-namespace> rollout restart daemonSet/vxflexos-node
```
+
+### Intermittent 401 issues with generated token
+This issue occurs when a new access token is generated in an existing driver installation.
+
+__Resolution__
+
+If you are applying a new token in an existing driver installation, restart the driver pods for the new token to take effect. The token is read once when the driver pods are started and is not dynamically updated.
+```bash
+kubectl -n <driver-namespace> rollout restart deploy/<driver-prefix>-controller
+kubectl -n <driver-namespace> rollout restart ds/<driver-prefix>-node
+```
\ No newline at end of file
diff --git a/content/v1/cosidriver/features/objectscale.md b/content/v1/cosidriver/features/objectscale.md
index 8fbaf5fd33..3c5b985677 100644
--- a/content/v1/cosidriver/features/objectscale.md
+++ b/content/v1/cosidriver/features/objectscale.md
@@ -4,11 +4,6 @@ linktitle: ObjectScale
weight: 1
Description: Code features for ObjectScale COSI Driver
---
-
-> **Notational Conventions**
->
-> The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" are to be interpreted as described in [RFC 2119](http://tools.ietf.org/html/rfc2119) (Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997).
-
Fields are specified by their path. Consider the following examples:
1. Field specified by the following path `spec.authenticationType=IAM` is reflected in their resources YAML as the following:
diff --git a/content/v1/cosidriver/installation/configuration_file.md b/content/v1/cosidriver/installation/configuration_file.md
index 8864eba93e..290c72dce6 100644
--- a/content/v1/cosidriver/installation/configuration_file.md
+++ b/content/v1/cosidriver/installation/configuration_file.md
@@ -4,11 +4,6 @@ linktitle: Configuration File
weight: 1
Description: Description of configuration file for ObjectScale
---
-
-> **Notational Conventions**
->
-> The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" are to be interpreted as described in [RFC 2119](http://tools.ietf.org/html/rfc2119) (Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997).
-
## Dell COSI Driver Configuration Schema
This configuration file is used to specify the settings for the Dell COSI Driver, which is responsible for managing connections to the Dell ObjectScale platform. The configuration file is written in YAML format, is based on a JSON schema, and adheres to that schema's specification.
diff --git a/content/v1/cosidriver/installation/helm.md b/content/v1/cosidriver/installation/helm.md
index 27ff87a921..a53c3cee0c 100644
--- a/content/v1/cosidriver/installation/helm.md
+++ b/content/v1/cosidriver/installation/helm.md
@@ -10,10 +10,6 @@ The COSI Driver for Dell ObjectScale can be deployed by using the provided Helm
The Helm chart installs the following components in a _Deployment_ in the specified namespace:
- COSI Driver for ObjectScale
-> **Notational Conventions**
->
-> The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" are to be interpreted as described in [RFC 2119](http://tools.ietf.org/html/rfc2119) (Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997).
-
## Dependencies
Installing any of the CSI Driver components using Helm requires a few utilities to be installed on the system running the installation.
@@ -38,7 +34,7 @@ Installing any of the CSI Driver components using Helm requires a few utilities
1. Run `git clone -b main https://github.com/dell/helm-charts.git` to clone the git repository.
2. Ensure that you have created the namespace where you want to install the driver. You can run `kubectl create namespace dell-cosi` to create a new one. The use of _dell-cosi_ as the namespace is just an example. You can choose any name for the namespace.
3. Copy the _charts/cosi/values.yaml_ into a new location with name _my-cosi-values.yaml_, to customize settings for installation.
-4. Create new file called _my-cosi-configuration.yaml_, and copy the settings available in the [Configuration File](./configuration_file.md) page.
+4. Create a new file called _my-cosi-configuration.yaml_ and copy the settings available in the [Configuration File](../configuration_file/) page.
5. Edit *my-cosi-values.yaml* to set the following parameters for your installation:
The following table lists the primary configurable parameters of the COSI driver Helm chart and their default values. More detailed information can be found in the [`values.yaml`](https://github.com/dell/helm-charts/blob/master/charts/cosi/values.yaml) file in this repository.
diff --git a/content/v1/cosidriver/release/_index.md b/content/v1/cosidriver/release/_index.md
new file mode 100644
index 0000000000..dc55e593a0
--- /dev/null
+++ b/content/v1/cosidriver/release/_index.md
@@ -0,0 +1,14 @@
+---
+title: "Release Notes"
+linkTitle: "Release Notes"
+weight: 6
+description: Release Notes for COSI Driver
+---
+
+## Release Notes - COSI Driver v0.1.1
+
+### New Features/Changes
+
+- [#1031 - [FEATURE]: Update to the latest UBI Micro image for CSM](https://github.com/dell/csm/issues/1031)
\ No newline at end of file
diff --git a/content/v1/cosidriver/troubleshooting/_index.md b/content/v1/cosidriver/troubleshooting/_index.md
new file mode 100644
index 0000000000..4815d5c6e4
--- /dev/null
+++ b/content/v1/cosidriver/troubleshooting/_index.md
@@ -0,0 +1,32 @@
+---
+title: Troubleshooting
+linktitle: Troubleshooting
+description: Troubleshooting COSI Driver
+weight: 5
+---
+
+## Troubleshooting COSI Driver with logs
+
+To view the driver logs, use:
+
+```bash
+kubectl logs <driver-pod-name> -n dell-cosi
+```
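+
+If you do not know the pod name, list the pods in the namespace first:
+
+```bash
+kubectl get pods -n dell-cosi
+```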
+
+Additionally, check the related Kubernetes resources:
+
+```bash
+kubectl get bucketclaim -n dell-cosi
+```
+```bash
+kubectl get buckets
+```
+```bash
+kubectl get bucketaccessclass
+```
+```bash
+kubectl get bucketclass
+```
+```bash
+kubectl get bucketaccess
+```
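+
+To dig into a specific object, `kubectl describe` shows its status and recent events (the claim name below is an example):
+
+```bash
+kubectl describe bucketclaim my-bucket-claim -n dell-cosi
+```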
diff --git a/content/v1/cosidriver/uninstallation/_index.md b/content/v1/cosidriver/uninstallation/_index.md
new file mode 100644
index 0000000000..39d6f41151
--- /dev/null
+++ b/content/v1/cosidriver/uninstallation/_index.md
@@ -0,0 +1,14 @@
+---
+title: "Uninstallation"
+linkTitle: "Uninstallation"
+weight: 3
+description: Methods to uninstall Dell COSI Driver
+---
+
+## Uninstall COSI driver installed via Helm
+
+To uninstall the driver, use the `helm uninstall` command:
+
+```bash
+helm uninstall dell-cosi --namespace dell-cosi
+```
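+
+To verify that the release was removed, list the remaining releases in the namespace:
+
+```bash
+helm list --namespace dell-cosi
+```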
\ No newline at end of file
diff --git a/content/v1/cosidriver/upgrade/_index.md b/content/v1/cosidriver/upgrade/_index.md
new file mode 100644
index 0000000000..cc02b5d538
--- /dev/null
+++ b/content/v1/cosidriver/upgrade/_index.md
@@ -0,0 +1,14 @@
+---
+title: Upgrade
+linktitle: Upgrade
+description: Upgrading COSI Driver
+weight: 5
+---
+
+## Update Driver from v0.1.0 to v0.1.1 using Helm
+**Steps**
+1. Run `git clone https://github.com/dell/helm-charts.git` to clone the git repository and get the newest helm chart.
+2. Run the `helm upgrade` command:
+   ```bash
+   helm upgrade <release-name> ./helm-charts/charts/cosi/ -n <namespace>
+   ```
diff --git a/content/v1/csidriver/_index.md b/content/v1/csidriver/_index.md
index fd1fc283d7..bb015a61c5 100644
--- a/content/v1/csidriver/_index.md
+++ b/content/v1/csidriver/_index.md
@@ -6,45 +6,45 @@ description: About Dell Technologies (Dell) CSI Drivers
weight: 3
---
-The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes-csi.github.io/docs/) (CSI spec v1.5) enabled Container Orchestrator (CO) and Dell Storage Arrays. It is a plug-in that is installed into Kubernetes to provide persistent storage using the Dell storage system.
+The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes-csi.github.io/docs/) (CSI spec v1.6) enabled Container Orchestrator (CO) and Dell Storage Arrays. It is a plug-in that is installed into Kubernetes to provide persistent storage using the Dell storage system.
![CSI Architecture](Architecture_Diagram.png)
## Features and capabilities
-### Supported Operating Systems/Container Orchestrator Platforms
+### Supported Container Orchestrator Platforms
{{
}}
+> Notes:
+> * The required OS dependencies are only for the protocol needed (e.g. if NVMe isn't the storage access protocol then nvme-cli is not required).
+> * The host operating system/version being used must align with what each Dell Storage platform supports. Please visit [E-Lab Navigator](https://elabnavigator.dell.com/eln/modernHomeSSM) for specific Dell Storage platform host operating system level support matrices.
+
### CSI Driver Capabilities
{{
}}
->Note: To connect to a PowerFlex 4.5 array, the SDC image will need to be changed to dellemc/sdc:4.5.
->- If using helm to install, you will need to make this change in your values.yaml file. See [helm install documentation](https://dell.github.io/csm-docs/docs/csidriver/installation/helm/powerflex/) for details.
->- If using CSM-Operator to install, you will need to make this change in your samples file. See [operator install documentation](https://dell.github.io/csm-docs/docs/deployment/csmoperator/drivers/powerflex/) for details.
-
### Backend Storage Details
{{
}}
| Features | PowerMax | PowerFlex | Unity XT | PowerScale | PowerStore |
@@ -79,3 +75,11 @@ The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes-
| Platform-specific configurable settings | Service Level selection, iSCSI CHAP | - | Host IO Limit, Tiering Policy, NFS Host IO size, Snapshot Retention duration | Access Zone, NFS version (3 or 4), Configurable Export IPs | iSCSI CHAP |
| Auto RDM(vSphere) | Yes(over FC) | N/A | N/A | N/A | N/A |
{{
}}
diff --git a/content/v1/csidriver/features/powerflex.md b/content/v1/csidriver/features/powerflex.md
index d64e298051..908d51d1a9 100644
--- a/content/v1/csidriver/features/powerflex.md
+++ b/content/v1/csidriver/features/powerflex.md
@@ -283,7 +283,7 @@ allowedTopologies:
```
For additional information, see the [Kubernetes Topology documentation](https://kubernetes-csi.github.io/docs/topology.html).
-> *NOTE*: In the manifest file of the Dell CSI operator, topology can be enabled by specifying the system name or _systemid_ in the allowed topologies field. _Volumebindingmode_ is also set to _WaitForFirstConsumer_ by default.
+> *NOTE*: In the manifest file of the Dell CSM operator, topology can be enabled by specifying the system name or _systemid_ in the allowed topologies field. _Volumebindingmode_ is also set to _WaitForFirstConsumer_ by default.
## Controller HA
@@ -295,7 +295,7 @@ in your values file to the desired number of controller pods. By default, the dr
> *NOTE:* If the controller count is greater than the number of available nodes, excess controller pods will be stuck in a pending state.
-If you are using the Dell CSI Operator, the value to adjust is:
+If you are using the Dell CSM Operator, the value to adjust is:
```yaml
replicas: 1
```
@@ -373,7 +373,7 @@ controller:
```
> *NOTE:* Tolerations/selectors work the same way for node pods.
-For configuring Controller HA on the Dell CSI Operator, please refer to the [Dell CSI Operator documentation](../../installation/operator/#custom-resource-specification).
+For configuring Controller HA on the Dell CSM Operator, please refer to the [Dell CSM Operator documentation](../../../deployment/csmoperator/#custom-resource-specification).
## SDC Deployment
@@ -824,7 +824,7 @@ allowedTopologies:
- "true"
```
-[`helm/csi-vxflexos/values.yaml`](https://github.com/dell/csi-powerflex/blob/main/helm/csi-vxflexos/values.yaml)
+[`helm/csi-vxflexos/values.yaml`](https://github.com/dell/helm-charts/blob/main/charts/csi-vxflexos/values.yaml)
```yaml
...
enableQuota: false
@@ -834,7 +834,7 @@ enableQuota: false
## Usage of Quotas to Limit Storage Consumption for NFS volumes
Starting with version 2.8, the CSI driver for PowerFlex will support enabling tree quotas for limiting capacity for NFS volumes. To use the quota feature user can specify the boolean value `enableQuota` in values.yaml.
-To enable quota for NFS volumes, make the following edits to [values.yaml](https://github.com/dell/csi-powerflex/blob/main/helm/csi-vxflexos/values.yaml) file:
+To enable quota for NFS volumes, make the following edits to the [values.yaml](https://github.com/dell/helm-charts/blob/main/charts/csi-vxflexos/values.yaml) file:
```yaml
...
...
@@ -907,6 +907,20 @@ allowedTopologies:
values:
- "true"
```
+## Configuring custom access to NFS exports
+
+CSI PowerFlex driver version 2.9.0 and later supports configuring NFS access for nodes that use dedicated storage networks.
+
+To enable this feature, specify the `externalAccess` parameter in your Helm `values.yaml` file, or the `X_CSI_POWERFLEX_EXTERNAL_ACCESS` variable when creating the Custom Resource using an operator.
+
+The value of that parameter is added as an additional entry to NFS Export host access.
+
+For example, the following notation:
+```yaml
+externalAccess: "10.0.0.0/24"
+```
+
+This allows the NFS Export created by the driver to be consumed by clients in the address range `10.0.0.0-10.0.0.255`.
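+
+When installing via the operator, a minimal sketch of the equivalent setting (assuming `X_CSI_POWERFLEX_EXTERNAL_ACCESS` is supplied as a driver environment variable in the Custom Resource) is:
+
+```yaml
+envs:
+  # externalAccess equivalent: additional entry added to NFS Export host access (assumed placement)
+  - name: X_CSI_POWERFLEX_EXTERNAL_ACCESS
+    value: "10.0.0.0/24"
+```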
## Storage Capacity Tracking
CSI-PowerFlex driver version 2.8.0 and above supports Storage Capacity Tracking.
diff --git a/content/v1/csidriver/features/powermax.md b/content/v1/csidriver/features/powermax.md
index bf3f94fe07..a9e1a0b65b 100644
--- a/content/v1/csidriver/features/powermax.md
+++ b/content/v1/csidriver/features/powermax.md
@@ -5,8 +5,6 @@ weight: 1
Description: Code features for PowerMax Driver
---
-{{% pageinfo color="primary" %}} Linked Proxy mode for CSI reverse proxy is no longer actively maintained or supported. It will be deprecated in CSM 1.9 (Driver Version 2.9.0). It is highly recommended that you use stand alone mode going forward. {{% /pageinfo %}}
-
## Multi Unisphere Support
Starting with v1.7, the CSI PowerMax driver can communicate with multiple Unisphere for PowerMax servers to manage multiple PowerMax arrays.
@@ -32,7 +30,7 @@ snapshot:
>Note: From v1.7, the CSI PowerMax driver installation process will no longer create VolumeSnapshotClass.
> If you want to create VolumeSnapshots, then create a VolumeSnapshotClass using the sample provided in the _csi-powermax/samples/volumesnapshotclass_ folder
->Note: Snapshot for FIle in PowerMax is currently not supported.
+>Note: Snapshots for File in PowerMax are currently not supported.
### Creating Volume Snapshots
The following is a sample manifest for creating a Volume Snapshot using the **v1** snapshot APIs:
@@ -297,11 +295,9 @@ In the `my-powermax-settings.yaml` file, the csireverseproxy section can be used
The new Helm chart is configured as a sub chart for the CSI PowerMax helm chart. The install script automatically installs the CSI PowerMax Reverse Proxy and configures the CSI PowerMax driver to use this service.
-### Using Dell CSI Operator
-
-Starting with the v1.1.0 release of the Dell CSI Operator, a new Custom Resource Definition can be used to install CSI PowerMax Reverse Proxy.
+### Using Dell CSM Operator
-This Custom Resource has to be created in the same namespace as the CSI PowerMax driver and it has to be created before the driver Custom Resource. To use the service, the driver Custom Resource manifest must be configured with the service name "powermax-reverseproxy". For complete installation instructions for the CSI PowerMax driver and the CSI PowerMax Reverse Proxy, see the [Dell CSI Operator documentation](../../installation/operator/powermax) for PowerMax.
+For complete installation instructions for the CSI PowerMax driver and the CSI PowerMax Reverse Proxy, see the [Dell CSM Operator documentation](../../../deployment/csmoperator/drivers/powermax/) for PowerMax.
## User-friendly hostnames
@@ -315,7 +311,8 @@ For example, if `nodeNameTemplate` is _abc-%foo%-hostname_ and nodename is _work
## Controller HA
-Starting with version 1.5, the CSI PowerMax driver supports running multiple replicas of the controller Pod. At any time, only one controller Pod is active(leader), and the rest are on standby. In case of a failure, one of the standby Pods becomes active and takes the position of leader. This is achieved by using native leader election mechanisms utilizing `kubernetes leases`. Additionally by leveraging `pod anti-affinity`, no two-controller Pods are ever scheduled on the same node.
+Starting with version 1.5, the CSI PowerMax driver supports running multiple replicas of the controller Pod.
+Leader election applies only to the sidecar containers; the driver container runs in all controller Pods. In case of a failure, one of the standby Pods becomes active and takes the position of leader. This is achieved by using native leader election mechanisms utilizing `kubernetes leases`. Additionally, by leveraging `pod anti-affinity`, no two controller Pods are ever scheduled on the same node.
To increase or decrease the number of controller Pods, edit the following value in `values.yaml` file:
```yaml
@@ -324,12 +321,12 @@ controllerCount: 2
> *NOTE:* The default value for controllerCount is 2. We recommend not changing this unless it is really necessary.
> Also, if the controller count is greater than the number of available nodes (where the Pods can be scheduled), some controller Pods will remain in the Pending state
-If you are using `dell-csi-operator`, adjust the following value in your Custom Resource manifest
+If you are using the Dell CSM Operator, the value to adjust is:
```yaml
replicas: 2
```
-For more details about configuring Controller HA using the Dell CSI Operator, see the [Dell CSI Operator documentation](../../installation/operator/#custom-resource-specification).
+For more details about configuring Controller HA using the Dell CSM Operator, see the [Dell CSM Operator documentation](../../../deployment/csmoperator/#custom-resource-specification).
## NodeSelectors and Tolerations
@@ -625,7 +622,6 @@ Without storage capacity tracking, pods get scheduled on a node satisfying the t
Storage capacity can be tracked by setting the attribute `storageCapacity.enabled` to true in values.yaml (set to true by default) during driver installation. To configure how often the driver checks for changed capacity, set the `storageCapacity.pollInterval` attribute (set to 5m by default). In case of driver installed via operator, this interval can be configured in the sample file provided [here](https://github.com/dell/csm-operator/blob/main/samples) by editing the `--capacity-poll-interval` argument present in the provisioner sidecar.
->Note: This feature requires kubernetes v1.24 and above and will be automatically disabled in lower version of kubernetes.
## Volume Limits
diff --git a/content/v1/csidriver/features/powerscale.md b/content/v1/csidriver/features/powerscale.md
index 31b6198e81..0024fc9b52 100644
--- a/content/v1/csidriver/features/powerscale.md
+++ b/content/v1/csidriver/features/powerscale.md
@@ -289,8 +289,7 @@ spec:
## Controller HA
-CSI PowerScale driver version 1.4.0 and later supports running multiple replicas of the controller pod. At any time, only one controller pod is active(leader), and the rest are on standby.
-In case of a failure, one of the standby pods becomes active and takes the position of leader. This is achieved by using native leader election mechanisms utilizing `kubernetes leases`.
+CSI PowerScale driver version 1.4.0 and later supports running multiple replicas of the controller pod. Leader election applies only to the sidecar containers; the driver container runs in all controller pods. In case of a failure, one of the standby pods becomes active and takes the position of leader. This is achieved by using native leader election mechanisms utilizing `kubernetes leases`.
Additionally by leveraging `pod anti-affinity`, no two-controller pods are ever scheduled on the same node.
@@ -302,13 +301,13 @@ controllerCount: 2
>**NOTE:** The default value for controllerCount is 2. It is recommended to not change this unless really required. Also, if the controller count is greater than the number of available nodes (where the pods can be scheduled), some controller pods will remain in a Pending state.
-If you are using the `dell-csi-operator`, adjust the following value in your Custom Resource manifest
+If you are using the Dell CSM Operator, the value to adjust is:
```yaml
replicas: 2
```
-For more details about configuring Controller HA using the Dell CSI Operator, refer to the [Dell CSI Operator documentation](../../installation/operator/#custom-resource-specification).
+For more details about configuring Controller HA using the Dell CSM Operator, see the [Dell CSM Operator documentation](../../../deployment/csmoperator/#custom-resource-specification).
## CSI Ephemeral Inline Volume
@@ -447,6 +446,17 @@ The user can also set the volume limit for all the nodes in the cluster by speci
>**NOTE:** The default value of `maxIsilonVolumesPerNode` is 0. If `maxIsilonVolumesPerNode` is set to zero, then CO shall decide how many volumes of this type can be published by the controller to the node.
The volume limit specified to `maxIsilonVolumesPerNode` attribute is applicable to all the nodes in the cluster for which node label `max-isilon-volumes-per-node` is not set.
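+
+For example, to cap a single node at 100 volumes using the node label named above (node name and limit are illustrative):
+
+```bash
+kubectl label node worker-1 max-isilon-volumes-per-node=100
+```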
+## Storage Capacity Tracking
+
+CSI for PowerScale driver version 2.8.0 and above supports Storage Capacity Tracking.
+
+This feature helps the scheduler to make more informed choices about where to schedule pods that depend on unbound volumes with late binding (aka "wait for first consumer"). Pods will be scheduled on a node (satisfying the topology constraints) only if the requested capacity is available on the storage array.
+If such a node is not available, the pods stay in Pending state. This means pods are not scheduled.
+
+Without storage capacity tracking, pods get scheduled on a node satisfying the topology constraints. If the required capacity is not available, volume attachment to the pods fails, and pods remain in ContainerCreating state. Storage capacity tracking eliminates unnecessary scheduling of pods when there is insufficient capacity.
+
+The attribute `storageCapacity.enabled` in `values.yaml` can be used to enable/disable the feature during driver installation using helm. This is by default set to true. To configure how often the driver checks for changed capacity, set the `storageCapacity.pollInterval` attribute. In case of driver installed via operator, this interval can be configured in the sample file provided [here](https://github.com/dell/csm-operator/blob/main/samples/storage_csm_powerscale_v280.yaml) by editing the `--capacity-poll-interval` argument present in the provisioner sidecar.
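+
+A minimal sketch of the corresponding `values.yaml` settings (attribute names as described above; the poll interval is an example value):
+
+```yaml
+storageCapacity:
+  enabled: true
+  # how often the driver checks for changed capacity (example interval)
+  pollInterval: 5m
+```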
+
## Node selector in helm template
Now the user can define on which worker nodes the CSI node pod daemonset can run (just like any other pod in the Kubernetes world). For more information, refer to
diff --git a/content/v1/csidriver/features/powerstore.md b/content/v1/csidriver/features/powerstore.md
index f0ec008a05..a5d2460a12 100644
--- a/content/v1/csidriver/features/powerstore.md
+++ b/content/v1/csidriver/features/powerstore.md
@@ -347,7 +347,7 @@ To create `NFS` volume you need to provide `nasName:` parameters that point to t
The CSI PowerStore driver version 1.2 and later introduces the controller HA feature. Instead of StatefulSet, controller pods are deployed as a Deployment.
-By default number of replicas is set to 2, you can set `controller.replicas` parameter to 1 in `my-powerstore-settings.yaml` if you want to disable controller HA for your installation. When installing via Operator you can change `replicas` parameter in `spec.driver` section in your PowerStore Custom Resource.
+By default, the number of replicas is set to 2. You can set the `controller.replicas` parameter to 1 in `my-powerstore-settings.yaml` if you want to disable controller HA for your installation. When installing via the Operator, you can change the `replicas` parameter in the `spec.driver.csiDriverSpec` section of your PowerStore Custom Resource.
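+
+For example, a sketch of the Helm setting in `my-powerstore-settings.yaml` to disable controller HA, using the parameter named above:
+
+```yaml
+controller:
+  # run a single controller pod (disables controller HA)
+  replicas: 1
+```
+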
When multiple replicas of controller pods are in the cluster, each sidecar (attacher, provisioner, resizer, snapshotter) tries to get a lease so only one instance of each sidecar would be active in the cluster at a time.
@@ -760,7 +760,5 @@ If such a node is not available, the pods stay in Pending state. This means they
Without storage capacity tracking, pods get scheduled on a node satisfying the topology constraints. If the required capacity is not available, volume attachment to the pods fails, and pods remain in ContainerCreating state. Storage capacity tracking eliminates unnecessary scheduling of pods when there is insufficient capacity.
The attribute `storageCapacity.enabled` in `my-powerstore-settings.yaml` can be used to enable/disable the feature during driver installation.
-To configure how often driver checks for changed capacity set `storageCapacity.pollInterval` attribute. In case of driver installed via operator, this interval can be configured in the sample files provided [here](https://github.com/dell/dell-csi-operator/tree/master/samples) by editing the `capacity-poll-interval` argument present in the `provisioner` sidecar.
+To configure how often the driver checks for changed capacity, set the `storageCapacity.pollInterval` attribute. In case of driver installed via operator, this interval can be configured in the sample files provided [here](https://github.com/dell/csm-operator/tree/main/samples) by editing the `capacity-poll-interval` argument present in the `provisioner` sidecar.
-**Note:**
->This feature requires kubernetes v1.24 and above and will be automatically disabled in lower version of kubernetes.
diff --git a/content/v1/csidriver/features/unity.md b/content/v1/csidriver/features/unity.md
index bf06822b52..188f5a3232 100644
--- a/content/v1/csidriver/features/unity.md
+++ b/content/v1/csidriver/features/unity.md
@@ -498,7 +498,7 @@ CSI for Unity XT driver version 2.8.0 and above supports Storage Capacity Tracki
This feature helps the scheduler to make more informed choices about where to schedule pods that depend on unbound volumes with late binding (aka "wait for first consumer"). Pods will be scheduled on a node (satisfying the topology constraints) only if the requested capacity is available on the storage array.
If such a node is not available, the pods stay in Pending state. This means pods are not scheduled.
-Without storage capacity tracking, pods get scheduled on a node satisfying the topology constraints. If the required capacity is not available, volume attachment to the pods fails, and pods remain in ContainerCreating state. Storage capacity tracking eliminates unnecessary scheduling of pods when there is insufficient capacity.
+Without storage capacity tracking, pods get scheduled on a node satisfying the topology constraints. If the required capacity is not available, volume attachment to the pods fails, and pods remain in ContainerCreating state. Storage capacity tracking eliminates unnecessary scheduling of pods when there is insufficient capacity. Moreover, storage capacity tracking returns the `MaximumVolumeSize` parameter, which may be used as an input to volume creation.
The attribute `storageCapacity.enabled` in `values.yaml` can be used to enable/disable the feature during driver installation using helm. This is by default set to true. To configure how often the driver checks for changed capacity, set the `storageCapacity.pollInterval` attribute. In case of driver installed via operator, this interval can be configured in the sample file provided [here](https://github.com/dell/csm-operator/blob/main/samples/storage_csm_unity_v280.yaml) by editing the `--capacity-poll-interval` argument present in the provisioner sidecar.
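+
+To inspect the capacity information the driver publishes, you can list the standard Kubernetes `CSIStorageCapacity` objects across all namespaces:
+
+```bash
+kubectl get csistoragecapacities -A
+```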
diff --git a/content/v1/csidriver/installation/helm/isilon.md b/content/v1/csidriver/installation/helm/isilon.md
index db951501f2..31f3a4c38b 100644
--- a/content/v1/csidriver/installation/helm/isilon.md
+++ b/content/v1/csidriver/installation/helm/isilon.md
@@ -5,19 +5,6 @@ description: >
---
The CSI Driver for Dell PowerScale can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-powerscale/tree/master/dell-csi-helm-installer).
-The controller section of the Helm chart installs the following components in a _Deployment_ in the specified namespace:
-
-- CSI Driver for PowerScale
-- Kubernetes External Provisioner, which provisions the volumes
-- Kubernetes External Attacher, which attaches the volumes to the containers
-- Kubernetes External Snapshotter, which provides snapshot support
-- Kubernetes External Resizer, which resizes the volume
-
-The node section of the Helm chart installs the following component in a _DaemonSet_ in the specified namespace:
-
-- CSI Driver for PowerScale
-- Kubernetes Node Registrar, which handles the driver registration
-
## Prerequisites
The following are requirements to be met before installing the CSI Driver for Dell PowerScale:
@@ -107,17 +94,17 @@ CRDs should be configured during replication prepare stage with repctl as descri
**Steps**
-1. Run `git clone -b v2.8.0 https://github.com/dell/csi-powerscale.git` to clone the git repository.
+1. Run `git clone -b v2.9.1 https://github.com/dell/csi-powerscale.git` to clone the git repository.
2. Ensure that you have created the namespace where you want to install the driver. You can run `kubectl create namespace isilon` to create a new one. The use of "isilon" as the namespace is just an example. You can choose any name for the namespace.
3. Collect information from the PowerScale Systems like IP address, IsiPath, username, and password. Make a note of the value for these parameters as they must be entered in the *secret.yaml*.
-4. Download `wget -O my-isilon-settings.yaml https://raw.githubusercontent.com/dell/helm-charts/csi-isilon-2.8.0/charts/csi-isilon/values.yaml` into `cd ../dell-csi-helm-installer` to customize settings for installation.
+4. Run `wget -O my-isilon-settings.yaml https://raw.githubusercontent.com/dell/helm-charts/csi-isilon-2.9.1/charts/csi-isilon/values.yaml` from within the `dell-csi-helm-installer` directory to download a copy of the values file and customize settings for installation.
5. Edit *my-isilon-settings.yaml* to set the following parameters for your installation:
The following table lists the primary configurable parameters of the PowerScale driver Helm chart and their default values. More detailed information can be
- found in the [`values.yaml`](https://github.com/dell/helm-charts/blob/csi-isilon-2.8.0/charts/csi-isilon/values.yaml) file in this repository.
+ found in the [`values.yaml`](https://github.com/dell/helm-charts/blob/csi-isilon-2.9.1/charts/csi-isilon/values.yaml) file in this repository.
| Parameter | Description | Required | Default |
| --------- | ----------- | -------- |-------- |
- | driverRepository | Set to give the repository containing the driver image (used as part of the image name). | Yes | dellemc |
+ | images | List all the images used by the CSI driver and CSM. If you use a private repository, change the registries accordingly. | Yes | "" |
| logLevel | CSI driver log level | No | "debug" |
| certSecretCount | Defines the number of certificate secrets, which the user is going to create for SSL authentication. (isilon-cert-0..isilon-cert-(n-1)); Minimum value should be 1.| Yes | 1 |
| [allowedNetworks](../../../features/powerscale/#support-custom-networks-for-nfs-io-traffic) | Defines the list of networks that can be used for NFS I/O traffic, CIDR format must be used. | No | [ ] |
@@ -152,7 +139,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
| ***PLATFORM ATTRIBUTES*** | | | |
| endpointPort | Define the HTTPs port number of the PowerScale OneFS API server. If authorization is enabled, endpointPort should be the HTTPS localhost port that the authorization sidecar will listen on. This value acts as a default value for endpointPort, if not specified for a cluster config in secret. | No | 8080 |
| skipCertificateValidation | Specify whether the PowerScale OneFS API server's certificate chain and hostname must be verified. This value acts as a default value for skipCertificateValidation, if not specified for a cluster config in secret. | No | true |
- | isiAuthType | Indicates the authentication method to be used. If set to 1 then it follows as session-based authentication else basic authentication | No | 0 |
+ | isiAuthType | Indicates the authentication method to be used. If set to 1, session-based authentication is used; otherwise, basic authentication is used. If authorization.enabled=true, this value must be set to 1. | No | 0 |
| isiAccessZone | Define the name of the access zone a volume can be created in. If storageclass is missing with AccessZone parameter, then value of isiAccessZone is used for the same. | No | System |
| enableQuota | Indicates whether the provisioner should attempt to set (later unset) quota on a newly provisioned volume. This requires SmartQuotas to be enabled.| No | true |
| isiPath | Define the base path for the volumes to be created on PowerScale cluster. This value acts as a default value for isiPath, if not specified for a cluster config in secret| No | /ifs/data/csi |
@@ -160,25 +147,21 @@ CRDs should be configured during replication prepare stage with repctl as descri
| noProbeOnStart | Define whether the controller/node plugin should probe all the PowerScale clusters during driver initialization | No | false |
| autoProbe | Specify if automatically probe the PowerScale cluster if not done already during CSI calls | No | true |
| **authorization** | [Authorization](../../../../authorization/deployment) is an optional feature to apply credential shielding of the backend PowerScale. | - | - |
- | enabled | A boolean that enables/disables authorization feature. | No | false |
- | sidecarProxyImage | Image for csm-authorization-sidecar. | No | " " |
+ | enabled | A boolean that enables/disables the authorization feature. If enabled, isiAuthType must be set to 1. | No | false |
| proxyHost | Hostname of the csm-authorization server. | No | Empty |
| skipCertificateValidation | A boolean that enables/disables certificate validation of the csm-authorization proxy server. | No | true |
| **podmon** | [Podmon](../../../../resiliency/deployment) is an optional feature to enable application pods to be resilient to node failure. | - | - |
| enabled | A boolean that enables/disables podmon feature. | No | false |
- | image | image for podmon. | No | " " |
| **encryption** | [Encryption](../../../../secure/encryption/deployment) is an optional feature to apply encryption to CSI volumes. | - | - |
| enabled | A boolean that enables/disables Encryption feature. | No | false |
- | image | Encryption driver image name. | No | "dellemc/csm-encryption:v0.3.0" |
*NOTE:*
- ControllerCount parameter value must not exceed the number of nodes in the Kubernetes cluster. Otherwise, some of the controller pods remain in a "Pending" state till new nodes are available for scheduling. The installer exits with a WARNING on the same.
- Whenever the *certSecretCount* parameter changes in *my-isilon-setting.yaml* user needs to reinstall the driver.
- In order to enable authorization, there should be an authorization proxy server already installed.
- - If you are using a custom image, check the *version* and *driverRepository* fields in *my-isilon-setting.yaml* to make sure that they are pointing to the correct image repository and driver version. These two fields are spliced together to form the image name, as shown here: /csi-isilon:
-
-6. Edit following parameters in samples/secret/secret.yaml file and update/add connection/authentication information for one or more PowerScale clusters.
+ - If you are using custom images, update each attribute under the *images* field in *my-isilon-settings.yaml* to make sure that they point to the correct image repository and version.
+6. Edit the following parameters in the samples/secret/secret.yaml file and update/add connection/authentication information for one or more PowerScale clusters. If the replication feature is enabled, ensure the secret includes all the PowerScale clusters involved in replication.
| Parameter | Description | Required | Default |
| --------- | ----------- | -------- |-------- |
@@ -208,6 +191,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
| ISI_PRIV_NS_IFS_ACCESS | Read Only |
| ISI_PRIV_IFS_BACKUP | Read Only |
| ISI_PRIV_SYNCIQ | Read Write |
+ | ISI_PRIV_STATISTICS | Read Only |
Create isilon-creds secret using the following command:
`kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml`
@@ -226,7 +210,7 @@ Create isilon-creds secret using the following command:
8. Install the driver using `csi-install.sh` bash script and default yaml by running
```bash
- cd dell-csi-helm-installer && wget -O my-isilon-settings.yaml https://raw.githubusercontent.com/dell/helm-charts/csi-isilon-2.8.0/charts/csi-isilon/values.yaml &&
+ cd dell-csi-helm-installer && wget -O my-isilon-settings.yaml https://raw.githubusercontent.com/dell/helm-charts/csi-isilon-2.9.1/charts/csi-isilon/values.yaml &&
./csi-install.sh --namespace isilon --values my-isilon-settings.yaml
```
diff --git a/content/v1/csidriver/installation/helm/powerflex.md b/content/v1/csidriver/installation/helm/powerflex.md
index cd57eb886a..8a87470549 100644
--- a/content/v1/csidriver/installation/helm/powerflex.md
+++ b/content/v1/csidriver/installation/helm/powerflex.md
@@ -7,22 +7,11 @@ description: >
The CSI Driver for Dell PowerFlex can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-powerflex/tree/master/dell-csi-helm-installer).
-The controller section of the Helm chart installs the following components in a _Deployment_ in the specified namespace:
-- CSI Driver for Dell PowerFlex
-- Kubernetes External Provisioner, which provisions the volumes
-- Kubernetes External Attacher, which attaches the volumes to the containers
-- Kubernetes External Snapshotter, which provides snapshot support
-- Kubernetes External Resizer, which resizes the volume
-
-The node section of the Helm chart installs the following component in a _DaemonSet_ in the specified namespace:
-- CSI Driver for Dell PowerFlex
-- Kubernetes Node Registrar, which handles the driver registration
-
## Prerequisites
The following are requirements that must be met before installing the CSI Driver for Dell PowerFlex:
- Install Kubernetes or OpenShift (see [supported versions](../../../../csidriver/#features-and-capabilities))
-- Install Helm 3
+- Install Helm 3.x
- Enable Zero Padding on PowerFlex
- Mount propagation is enabled on container runtime that is being used
- Install PowerFlex Storage Data Client
@@ -32,13 +21,13 @@ The following are requirements that must be met before installing the CSI Driver
- If multipath is configured, ensure CSI-PowerFlex volumes are blacklisted by multipathd. See [troubleshooting section](../../../troubleshooting/powerflex) for details
-### Install Helm 3.0
+### Install Helm 3.x
-Install Helm 3.0 on the master node before you install the CSI Driver for Dell PowerFlex.
+Install Helm 3.x on the master node before you install the CSI Driver for Dell PowerFlex.
**Steps**
- Run the command to install Helm 3.0.
+ Run the command to install Helm 3.x.
```bash
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
```
@@ -83,7 +72,7 @@ When the driver is installed using values generated by installation wizard, then
## Install the Driver
**Steps**
-1. Run `git clone -b v2.8.0 https://github.com/dell/csi-powerflex.git` to clone the git repository.
+1. Run `git clone -b v2.9.2 https://github.com/dell/csi-powerflex.git` to clone the git repository.
2. A namespace for the driver is expected prior to running the command below. If one is not created already, you can run `kubectl create namespace vxflexos` to create a new one.
Note that the namespace can be any user-defined name that follows the conventions for namespaces outlined by Kubernetes. In this example we assume that the namespace is 'vxflexos'
@@ -128,6 +117,8 @@ Example: `samples/secret.yaml` for PowerFlex storage system v4.0.x
```
*NOTE: To use multiple arrays, copy and paste section above for each array. Make sure isDefault is set to true for only one array.*
+If the replication feature is enabled, ensure the secret includes all the PowerFlex arrays involved in replication.
+
After editing the file, run the below command to create a secret called `vxflexos-config`. This assumes `vxflexos` is release name, but it can be modified during [install](../#install-the-driver):
```bash
@@ -160,20 +151,18 @@ Use the below command to replace or update the secret:
7. Download the default values.yaml file
```bash
- cd dell-csi-helm-installer && wget -O myvalues.yaml https://github.com/dell/helm-charts/raw/csi-vxflexos-2.8.0/charts/csi-vxflexos/values.yaml
+ cd dell-csi-helm-installer && wget -O myvalues.yaml https://github.com/dell/helm-charts/raw/csi-vxflexos-2.9.2/charts/csi-vxflexos/values.yaml
```
- >Note: To connect to a PowerFlex 4.5 array, edit the powerflexSdc parameter in your values.yaml file to use dellemc/sdc:4.5:
- >`powerflexSdc: dellemc/sdc:4.5`
-8. If you are using a custom image, check the `version` and `driverRepository` fields in `my-vxflexos-settings.yaml` to make sure that they are pointing to the correct image repository and driver version. These two fields are spliced together to form the image name, as shown here: `/csi-vxflexos:v`
+8. If you are using custom images, check the fields under `images` in `my-vxflexos-settings.yaml` to make sure that they are pointing to the correct image repository.
9. Look over all the other fields `myvalues.yaml` and fill in/adjust any as needed. All the fields are described here:
| Parameter | Description | Required | Default |
| ------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ------- |
-| version | Set to verify the values file version matches driver version and used to pull the image as part of the image name. | Yes | 2.8.0 |
-| driverRepository | Set to give the repository containing the driver image (used as part of the image name). | Yes | dellemc |
-| powerflexSdc | Set to give the location of the SDC image used if automatic SDC deployment is being utilized. | Yes | dellemc/sdc:3.6.1 |
+| version | Set to verify the values file version matches driver version and used to pull the image as part of the image name. | Yes | 2.9.2 |
+| images | List all the images used by the CSI driver and CSM. If you use a private repository, change the registries accordingly. | Yes | "" |
+| images.powerflexSdc | Set to give the location of the SDC image used if automatic SDC deployment is being utilized. | Yes | dellemc/sdc:4.5 |
| certSecretCount | Represents the number of certificate secrets, which the user is going to create for SSL authentication. | No | 0 |
| logLevel | CSI driver log level. Allowed values: "error", "warn"/"warning", "info", "debug". | Yes | "debug" |
| logFormat | CSI driver log format. Allowed values: "TEXT" or "JSON". | Yes | "TEXT" |
@@ -185,6 +174,7 @@ Use the below command to replace or update the secret:
| enablelistvolumesnapshot | A boolean that, when enabled, will allow list volume operation to include snapshots (since creating a volume from a snap actually results in a new snap). It is recommended this be false unless instructed otherwise. | Yes | false |
| allowRWOMultiPodAccess | Setting allowRWOMultiPodAccess to "true" will allow multiple pods on the same node to access the same RWO volume. This behavior conflicts with the CSI specification version 1.3. NodePublishVolume description that requires an error to be returned in this case. However, some other CSI drivers support this behavior and some customers desire this behavior. Customers use this option at their own risk. | Yes | false |
| enableQuota | A boolean that, when enabled, will set quota limit for a newly provisioned NFS volume. | No | false |
+| externalAccess | Defines additional entries for hostAccess of NFS volumes; a single IP address and a subnet are valid entries (see the example after this table) | No | " " |
| **controller** | This section allows the configuration of controller-specific parameters. To maximize the number of available nodes for controller pods, see this section. For more details on the new controller pod configurations, see the [Features section](../../../features/powerflex#controller-ha) for Powerflex specifics. | - | - |
| volumeNamePrefix | Set so that volumes created by the driver have a default prefix. If one PowerFlex/VxFlex OS system is servicing several different Kubernetes installations or users, these prefixes help you distinguish them. | Yes | "k8s" |
| controllerCount | Set to deploy multiple controller instances. If the controller count is greater than the number of available nodes, excess pods remain in a pending state. It should be greater than 0. You can increase the number of available nodes by configuring the "controller" section in your values.yaml. For more details on the new controller pod configurations, see the [Features section](../../../features/powerflex#controller-ha) for Powerflex specifics. | Yes | 2 |
@@ -215,10 +205,8 @@ Use the below command to replace or update the secret:
| image | Image for vg snapshotter. | No | " " |
| **podmon** | [Podmon](../../../../resiliency/deployment) is an optional feature to enable application pods to be resilient to node failure. | - | - |
| enabled | A boolean that enables/disables podmon feature. | No | false |
-| image | image for podmon. | No | " " |
| **authorization** | [Authorization](../../../../authorization/deployment) is an optional feature to apply credential shielding of the backend PowerFlex. | - | - |
| enabled | A boolean that enables/disables authorization feature. | No | false |
-| sidecarProxyImage | Image for csm-authorization-sidecar. | No | " " |
| proxyHost | Hostname of the csm-authorization server. | No | Empty |
| skipCertificateValidation | A boolean that enables/disables certificate validation of the csm-authorization proxy server. | No | true |
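For example, `externalAccess` accepts either form (a sketch; the value shown is illustrative):
```yaml
# Grant an additional host, or an entire subnet, access to NFS volumes
externalAccess: "10.0.0.0/24"   # a single IP such as "10.0.0.42" is also valid
```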
diff --git a/content/v1/csidriver/installation/helm/powermax.md b/content/v1/csidriver/installation/helm/powermax.md
index 26970fadee..3a20bdf840 100644
--- a/content/v1/csidriver/installation/helm/powermax.md
+++ b/content/v1/csidriver/installation/helm/powermax.md
@@ -4,26 +4,9 @@ linktitle: PowerMax
description: >
Installing CSI Driver for PowerMax via Helm
---
-{{% pageinfo color="primary" %}} Linked Proxy mode for CSI reverse proxy is no longer actively maintained or supported. It will be deprecated in CSM 1.9 (Driver Version 2.9.0). It is highly recommended that you use stand alone mode going forward. {{% /pageinfo %}}
CSI Driver for Dell PowerMax can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, see the script [documentation](https://github.com/dell/csi-powermax/tree/master/dell-csi-helm-installer).
-The controller section of the Helm chart installs the following components in a _Deployment_ in the specified namespace:
-- CSI Driver for Dell PowerMax
-- Kubernetes External Provisioner, which provisions the volumes
-- Kubernetes External Attacher, which attaches the volumes to the containers
-- Kubernetes External Snapshotter, which provides snapshot support-
-- CSI PowerMax ReverseProxy, which maximizes CSI driver and Unisphere performance
-- Kubernetes External Resizer, which resizes the volume
-- (optional) Kubernetes External health monitor, which provides volume health status
-- (optional) Dell CSI Replicator, which provides Replication capability.
-- (optional) Dell CSI Migrator, which provides migrating capability within and across arrays
-- (optional) Node rescanner, which rescans the node for new data paths after migration
-
-The node section of the Helm chart installs the following component in a _DaemonSet_ in the specified namespace:
-- CSI Driver for Dell PowerMax
-- Kubernetes Node Registrar, which handles the driver registration
-
## Prerequisites
The following requirements must be met before installing CSI Driver for Dell PowerMax:
@@ -169,6 +152,53 @@ features "1 queue_if_no_path"
path_selector "round-robin 0"
no_path_retry 10
```
+#### multipathd `MachineConfig`
+
+If you are installing a CSI Driver which requires the Linux native multipath software, _multipathd_, follow the instructions below.
+
+To enable multipathd on Red Hat CoreOS nodes, you need to prepare a working configuration encoded in base64.
+
+```bash
+echo 'defaults {
+user_friendly_names yes
+find_multipaths yes
+}
+blacklist {
+}' | base64 -w0
+```
+
+Use the base64-encoded string output in the `source` section of the following `MachineConfig` YAML file:
+```yaml
+apiVersion: machineconfiguration.openshift.io/v1
+kind: MachineConfig
+metadata:
+ name: workers-multipath-conf-default
+ labels:
+ machineconfiguration.openshift.io/role: worker
+spec:
+ config:
+ ignition:
+ version: 3.2.0
+ storage:
+ files:
+ - contents:
+ source: data:text/plain;charset=utf-8;base64,ZGVmYXVsdHMgewp1c2VyX2ZyaWVuZGx5X25hbWVzIHllcwpmaW5kX211bHRpcGF0aHMgeWVzCn0KCmJsYWNrbGlzdCB7Cn0K
+ verification: {}
+ filesystem: root
+ mode: 400
+ path: /etc/multipath.conf
+```
+After deploying this `MachineConfig` object, CoreOS starts the multipath service automatically.
+You can check the status of the multipath service by entering the following command on each worker node:
+`sudo multipath -ll`
+
+If the above command is not successful, ensure that the `/etc/multipath.conf` file is present and configured properly. Once the file has been configured correctly, enable the multipath service by running the following command:
+`sudo /sbin/mpathconf --enable --with_multipathd y`
+
+Finally, restart the service with:
+`sudo systemctl restart multipathd`
+
+For additional information, refer to the official documentation on multipath configuration.
+
### PowerPath for Linux requirements
@@ -208,7 +238,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
**Steps**
-1. Run `git clone -b v2.8.0 https://github.com/dell/csi-powermax.git` to clone the git repository. This will include the Helm charts and dell-csi-helm-installer scripts.
+1. Run `git clone -b v2.9.1 https://github.com/dell/csi-powermax.git` to clone the git repository. This will include the Helm charts and dell-csi-helm-installer scripts.
2. Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace powermax` to create a new one.
3. Edit the `samples/secret/secret.yaml` file to point to the correct namespace, and replace the values for the username and password parameters.
These values can be obtained using base64 encoding as described in the following example:
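A minimal illustration of that encoding step (the credentials are placeholders):
```bash
# Produce base64-encoded values for the username and password fields
echo -n 'myusername' | base64
echo -n 'mypassword' | base64
```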
@@ -223,7 +253,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
```
5. Download the default values.yaml file
```bash
- cd dell-csi-helm-installer && wget -O my-powermax-settings.yaml https://github.com/dell/helm-charts/raw/csi-powermax-2.8.0/charts/csi-powermax/values.yaml
+ cd dell-csi-helm-installer && wget -O my-powermax-settings.yaml https://github.com/dell/helm-charts/raw/csi-powermax-2.9.1/charts/csi-powermax/values.yaml
```
6. Ensure the Unisphere server has 10.0 REST endpoint support by clicking Unisphere -> Help (?) -> About in the Unisphere for PowerMax GUI.
7. Edit the newly created file and provide values for the following parameters
@@ -234,7 +264,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
| Parameter | Description | Required | Default |
|-----------|--------------|------------|----------|
| **global**| This section refers to configuration options for both CSI PowerMax Driver and Reverse Proxy | - | - |
-|defaultCredentialsSecret| This secret name refers to: 1. The Unisphere credentials if the driver is installed without proxy or with proxy in Linked mode. 2. The proxy credentials if the driver is installed with proxy in StandAlone mode. 3. The default Unisphere credentials if credentialsSecret is not specified for a management server.| Yes | powermax-creds |
+|defaultCredentialsSecret| This secret name refers to: 1. The proxy credentials if the driver is installed with proxy in StandAlone mode. 2. The default Unisphere credentials if credentialsSecret is not specified for a management server.| Yes | powermax-creds |
| storageArrays| This section refers to the list of arrays managed by the driver and Reverse Proxy in StandAlone mode.| - | - |
| storageArrayId | This refers to PowerMax Symmetrix ID.| Yes | 000000000001|
| endpoint | This refers to the URL of the Unisphere server managing _storageArrayId_. If authorization is enabled, endpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on| Yes if Reverse Proxy mode is _StandAlone_ | https://primary-1.unisphe.re:8443 |
@@ -264,9 +294,8 @@ CRDs should be configured during replication prepare stage with repctl as descri
| powerMaxDebug | Enables low level and http traffic logging between the CSI driver and Unisphere. Don't enable this unless asked to do so by the support team. | No | false |
| enableCHAP | Determine if the driver is going to configure SCSI node databases on the nodes with the CHAP credentials. If enabled, the CHAP secret must be provided in the credentials secret and set to the key "chapsecret" | No | false |
| fsGroupPolicy | Defines which FS Group policy mode to be used, Supported modes `None, File and ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
-| version | Current version of the driver. Don't modify this value as this value will be used by the install script. | Yes | v2.3.0 |
-| images | Defines the container images used by the driver. | - | - |
-| driverRepository | Defines the registry of the container image used for the driver. | Yes | dellemc |
+| version | Current version of the driver. Don't modify this value as this value will be used by the install script. | Yes | v2.9.1 |
+| images | List all the images used by the CSI driver and CSM. If you use a private repository, change the registries accordingly. | Yes | "" |
+| driverRepository | Defines the registry of the container image used for the driver. | Yes | dellemc |
| maxPowerMaxVolumesPerNode | Specifies the maximum number of volumes that can be created on a node. | Yes | 0 |
| **controller** | Allows configuration of the controller-specific parameters.| - | - |
| controllerCount | Defines the number of csi-powermax controller pods to deploy to the Kubernetes release| Yes | 2 |
@@ -284,18 +313,15 @@ CRDs should be configured during replication prepare stage with repctl as descri
| healthMonitor.enabled | Allows to enable/disable volume health monitor | No | false |
| topologyControl.enabled | Allows to enable/disable topology control to filter topology keys | No | false |
| **csireverseproxy**| This section refers to the configuration options for CSI PowerMax Reverse Proxy | - | - |
-| image | This refers to the image of the CSI PowerMax Reverse Proxy container. | Yes | dellemc/csipowermax-reverseproxy:v2.4.0 |
| tlsSecret | This refers to the TLS secret of the Reverse Proxy Server.| Yes | csirevproxy-tls-secret |
| deployAsSidecar | If set to _true_, the Reverse Proxy is installed as a sidecar to the driver's controller pod otherwise it is installed as a separate deployment.| Yes | "True" |
| port | Specify the port number that is used by the NodePort service created by the CSI PowerMax Reverse Proxy installation| Yes | 2222 |
-| mode | This refers to the installation mode of Reverse Proxy. It can be set to: 1. _Linked_: In this mode, the Reverse Proxy communicates with a primary or a backup Unisphere managing the same set of arrays. 2. _StandAlone_: In this mode, the Reverse Proxy communicates with multiple arrays managed by different Unispheres.| Yes | "StandAlone" |
| **certManager** | Auto-create TLS certificate for csi-reverseproxy | - | - |
| selfSignedCert | Set selfSignedCert to use a self-signed certificate | No | true |
| certificateFile | certificateFile has tls.key content in encoded format | No | tls.crt.encoded64 |
| privateKeyFile | privateKeyFile has tls.key content in encoded format | No | tls.key.encoded64 |
| **authorization** | [Authorization](../../../../authorization/deployment) is an optional feature to apply credential shielding of the backend PowerMax. | - | - |
| enabled | A boolean that enables/disables authorization feature. | No | false |
-| sidecarProxyImage | Image for csm-authorization-sidecar. | No | " " |
| proxyHost | Hostname of the csm-authorization server. | No | Empty |
| skipCertificateValidation | A boolean that enables/disables certificate validation of the csm-authorization proxy server. | No | true |
| **migration** | [Migration](../../../../replication/migration/migrating-volumes-same-array) is an optional feature to enable migration between storage classes | - | - |
@@ -305,7 +331,6 @@ CRDs should be configured during replication prepare stage with repctl as descri
| migrationPrefix | enables migration sidecar to read required information from the storage class fields | No | migration.storage.dell.com |
| **replication** | [Replication](../../../../replication/deployment) is an optional feature to enable replication & disaster recovery capabilities of PowerMax to Kubernetes clusters.| - | - |
| enabled | A boolean that enables/disables replication feature. | No | false |
-| image | Image for dell-csi-replicator sidecar. | No | " " |
| replicationContextPrefix | enables side cars to read required information from the volume context | No | powermax |
| replicationPrefix | Determine if replication is enabled | No | replication.storage.dell.com |
| **storageCapacity** | It is an optional feature that enable storagecapacity & helps the scheduler to check whether the requested capacity is available on the PowerMax array and allocate it to the nodes.| - | - |
@@ -334,7 +359,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
- This script also runs the verify.sh script in the same directory. You will be prompted to enter the credentials for each of the Kubernetes nodes. The `verify.sh` script needs the credentials to check if the iSCSI initiators have been configured on all nodes. You can also skip the verification step by specifying the `--skip-verify-node` option.
- In order to enable authorization, there should be an authorization proxy server already installed.
- PowerMax Array username must have role as `StorageAdmin` to be able to perform CRUD operations.
-- If the user is using complex K8s version like “v1.23.3-mirantis-1”, use this kubeVersion check in [helm Chart](https://github.com/dell/helm-charts/blob/main/charts/csi-powermax/Chart.yaml) file. kubeVersion: “>= 1.23.0-0 < 1.27.0-0”.
+- If you are using a complex Kubernetes version like “v1.24.3-mirantis-1”, use this kubeVersion check in the [helm Chart](https://github.com/dell/helm-charts/blob/main/charts/csi-powermax/Chart.yaml) file: kubeVersion: “>= 1.24.0-0 < 1.29.0-0”.
- User should provide all boolean values with double-quotes. This applies only for values.yaml. Example: “true”/“false”.
- The controllerCount parameter value should be <= the number of nodes in the Kubernetes cluster; otherwise, the install script fails.
- The endpoint should not have any special character at the end apart from the port number.
@@ -348,44 +373,7 @@ A wide set of annotated storage class manifests has been provided in the `sample
Starting with CSI PowerMax v1.7.0, `dell-csi-helm-installer` will not create any Volume Snapshot Class during the driver installation. There is a sample Volume Snapshot Class manifest present in the _samples/volumesnapshotclass_ folder. Please use this sample to create a new Volume Snapshot Class to create Volume Snapshots.
## Sample values file
-The following sections have useful snippets from `values.yaml` file which provides more information on how to configure the CSI PowerMax driver along with CSI PowerMax ReverseProxy in various modes
-
-### CSI PowerMax driver with Proxy in Linked mode
-In this mode, the CSI PowerMax ReverseProxy acts as a `passthrough` for the RESTAPI calls and only provides limited functionality
-such as rate limiting, backup Unisphere server. The CSI PowerMax driver is still responsible for the authentication with the Unisphere server.
-
-The first endpoint in the list of management servers is the primary Unisphere server and if you provide a second endpoint, then
-it will be considered as the backup Unisphere's endpoint.
-
-```yaml
-global:
- defaultCredentialsSecret: powermax-creds
- storageArrays:
- - storageArrayId: "000000000001"
- - storageArrayId: "000000000002"
- managementServers:
- - endpoint: https://primary-unisphere:8443
- skipCertificateValidation: false
- certSecret: primary-cert
- limits:
- maxActiveRead: 5
- maxActiveWrite: 4
- maxOutStandingRead: 50
- maxOutStandingWrite: 50
- - endpoint: https://backup-unisphere:8443
-
-# "csireverseproxy" refers to the subchart csireverseproxy
-csireverseproxy:
- # Set enabled to true if you want to use proxy
- image: dellemc/csipowermax-reverseproxy:v2.4.0
- tlsSecret: csirevproxy-tls-secret
- deployAsSidecar: true
- port: 2222
- mode: Linked
-```
-
->Note: Since the driver is still responsible for authentication when used with Proxy in `Linked` mode, the credentials for both
-> primary and backup Unisphere need to be the same.
+The following sections contain useful snippets from the `values.yaml` file, which provide more information on how to configure the CSI PowerMax driver along with the CSI PowerMax ReverseProxy in StandAlone mode.
### CSI PowerMax driver with Proxy in StandAlone mode
This is the most advanced configuration, which provides the capability to connect to multiple Unisphere servers.
@@ -423,7 +411,6 @@ global:
# "csireverseproxy" refers to the subchart csireverseproxy
csireverseproxy:
- image: dellemc/csipowermax-reverseproxy:v2.4.0
tlsSecret: csirevproxy-tls-secret
deployAsSidecar: true
port: 2222
diff --git a/content/v1/csidriver/installation/helm/powerstore.md b/content/v1/csidriver/installation/helm/powerstore.md
index 4a77564107..8d6b272e53 100644
--- a/content/v1/csidriver/installation/helm/powerstore.md
+++ b/content/v1/csidriver/installation/helm/powerstore.md
@@ -6,22 +6,11 @@ description: >
The CSI Driver for Dell PowerStore can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-powerstore/tree/master/dell-csi-helm-installer).
-The controller section of the Helm chart installs the following components in a _Deployment_ in the specified namespace:
-- CSI Driver for Dell PowerStore
-- Kubernetes External Provisioner, which provisions the volumes
-- Kubernetes External Attacher, which attaches the volumes to the containers
-- (Optional) Kubernetes External Snapshotter, which provides snapshot support
-- (Optional) Kubernetes External Resizer, which resizes the volume
-
-The node section of the Helm chart installs the following component in a _DaemonSet_ in the specified namespace:
-- CSI Driver for Dell PowerStore
-- Kubernetes Node Registrar, which handles the driver registration
-
## Prerequisites
The following are requirements to be met before installing the CSI Driver for Dell PowerStore:
- Install Kubernetes or OpenShift (see [supported versions](../../../../csidriver/#features-and-capabilities))
-- Install Helm 3
+- Install Helm 3.x
- If you plan to use the Fibre Channel, iSCSI, NVMe/TCP, or NVMe/FC protocol, refer to the _Fibre Channel requirements_, _Set up the iSCSI Initiator_, or _Set up the NVMe Initiator_ sections below. You can use NFS volumes without any FC, iSCSI, NVMe/TCP, or NVMe/FC configuration.
> You can use the Fibre Channel, iSCSI, NVMe/TCP, or NVMe/FC protocol, but you do not need all four.
@@ -33,13 +22,13 @@ The following are requirements to be met before installing the CSI Driver for De
- You can access your cluster with kubectl and helm.
- Ensure that your nodes support mounting NFS volumes.
-### Install Helm 3.0
+### Install Helm 3.x
-Install Helm 3.0 on the master node before you install the CSI Driver for Dell PowerStore.
+Install Helm 3.x on the master node before you install the CSI Driver for Dell PowerStore.
**Steps**
- Run the command to install Helm 3.0.
+ Run the command to install Helm 3.x.
```bash
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
```
@@ -97,6 +86,53 @@ Set up Linux multipathing as follows:
- Enable `user_friendly_names` and `find_multipaths` in the `multipath.conf` file.
- Ensure that the multipath command for `multipath.conf` is available on all Kubernetes nodes.
+#### multipathd `MachineConfig`
+
+If you are installing a CSI Driver which requires the Linux native multipath software, _multipathd_, follow the instructions below.
+
+To enable multipathd on Red Hat CoreOS nodes, you need to prepare a working configuration encoded in base64.
+
+```bash
+echo 'defaults {
+user_friendly_names yes
+find_multipaths yes
+}
+blacklist {
+}' | base64 -w0
+```
+
+Use the base64-encoded string output in the `source` section of the following `MachineConfig` YAML file:
+```yaml
+apiVersion: machineconfiguration.openshift.io/v1
+kind: MachineConfig
+metadata:
+ name: workers-multipath-conf-default
+ labels:
+ machineconfiguration.openshift.io/role: worker
+spec:
+ config:
+ ignition:
+ version: 3.2.0
+ storage:
+ files:
+ - contents:
+ source: data:text/plain;charset=utf-8;base64,ZGVmYXVsdHMgewp1c2VyX2ZyaWVuZGx5X25hbWVzIHllcwpmaW5kX211bHRpcGF0aHMgeWVzCn0KCmJsYWNrbGlzdCB7Cn0K
+ verification: {}
+ filesystem: root
+ mode: 400
+ path: /etc/multipath.conf
+```
+After deploying this `MachineConfig` object, CoreOS starts the multipath service automatically.
+You can check the status of the multipath service by entering the following command on each worker node:
+`sudo multipath -ll`
+
+If the above command is not successful, ensure that the `/etc/multipath.conf` file is present and configured properly. Once the file has been configured correctly, enable the multipath service by running the following command:
+`sudo /sbin/mpathconf --enable --with_multipathd y`
+
+Finally, restart the service with:
+`sudo systemctl restart multipathd`
+
+For additional information, refer to the official documentation on multipath configuration.
+
### (Optional) Volume Snapshot Requirements
For detailed snapshot setup procedure, [click here.](../../../../snapshots/#optional-volume-snapshot-requirements)
@@ -147,7 +183,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
## Install the Driver
**Steps**
-1. Run `git clone -b v2.8.0 https://github.com/dell/csi-powerstore.git` to clone the git repository.
+1. Run `git clone -b v2.9.1 https://github.com/dell/csi-powerstore.git` to clone the git repository.
2. Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace csi-powerstore` to create a new one. "csi-powerstore" is just an example. You can choose any name for the namespace,
but make sure to use the same namespace throughout the installation.
3. Edit `samples/secret/secret.yaml` file and configure connection information for your PowerStore arrays changing following parameters:
@@ -161,7 +197,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
- *nfsAcls* (Optional): defines the permissions, POSIX mode bits or NFSv4 ACLs, to be set on the NFS target mount directory.
      NFSv4 ACLs are supported for NFSv4 shares on NFSv4-enabled NAS servers only. POSIX ACLs are not supported and only POSIX mode bits are supported for NFSv3 shares.
- Add more blocks similar to above for each PowerStore array if necessary.
+ Add more blocks similar to the above for each PowerStore array if necessary. If the replication feature is enabled, ensure the secret includes all the PowerStore arrays involved in replication.
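For reference, a two-array secret might look like the following sketch (field names follow the driver's `samples/secret/secret.yaml`; all values are illustrative):
```yaml
arrays:
  # Both arrays participating in replication must be listed here
  - endpoint: "https://10.0.0.1/api/rest"   # PowerStore management endpoint
    globalID: "PS000000000001"
    username: "user"
    password: "password"
    skipCertificateValidation: true
    isDefault: true
    blockProtocol: "ISCSI"
  - endpoint: "https://10.0.0.2/api/rest"   # replication partner array
    globalID: "PS000000000002"
    username: "user"
    password: "password"
    skipCertificateValidation: true
    blockProtocol: "FC"
```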
### User Privileges
The username specified in `secret.yaml` must be from the authentication providers of PowerStore. The user must have the correct user role to perform the actions. The minimum requirement is **Storage Operator**.
@@ -174,12 +210,13 @@ CRDs should be configured during replication prepare stage with repctl as descri
> If you do not specify the `arrayID` parameter in the storage class, then the array that was specified as the default is used for provisioning volumes.
6. Download the default values.yaml file
```bash
- cd dell-csi-helm-installer && wget -O my-powerstore-settings.yaml https://github.com/dell/helm-charts/raw/csi-powerstore-2.8.0/charts/csi-powerstore/values.yaml
+ cd dell-csi-helm-installer && wget -O my-powerstore-settings.yaml https://github.com/dell/helm-charts/raw/csi-powerstore-2.9.1/charts/csi-powerstore/values.yaml
```
7. Edit the newly created values file (`vi my-powerstore-settings.yaml`) and provide values for the following parameters:
| Parameter | Description | Required | Default |
|-----------|-------------|----------|---------|
+| images | List all the images used by the CSI driver and CSM. If you use a private repository, change the registries accordingly. | Yes | "" |
| logLevel | Defines CSI driver log level | No | "debug" |
| logFormat | Defines CSI driver log format | No | "JSON" |
| externalAccess | Defines additional entries for hostAccess of NFS volumes; a single IP address and a subnet are valid entries | No | " " |
@@ -204,13 +241,11 @@ CRDs should be configured during replication prepare stage with repctl as descri
| node.tolerations | Defines tolerations that would be applied to node daemonset | Yes | " " |
| fsGroupPolicy | Defines which FS Group policy mode to be used, Supported modes `None, File and ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
| controller.vgsnapshot.enabled | Allows to enable/disable the volume group snapshot feature | No | "true" |
-| images.driverRepository | To use an image from custom repository | No | dockerhub |
| version | To use any driver version | No | Latest driver version |
| allowAutoRoundOffFilesystemSize | Allows the controller to round off filesystem to 3Gi which is the minimum supported value | No | false |
| storageCapacity.enabled | Allows to enable/disable storage capacity tracking feature | No | true
| storageCapacity.pollInterval | Configure how often the driver checks for changed capacity | No | 5m
| podmon.enabled | Allows to enable/disable [Resiliency](../../../../resiliency/deployment#powerstore-specific-recommendations) feature | No | false
-| podmon.image | Sidecar image for resiliency | No | -
8. Install the driver using the `csi-install.sh` bash script by running
```bash
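# Assumed invocation (the diff hunk is truncated here); run from the
# dell-csi-helm-installer directory and adjust the namespace and values file
# to match your setup.
./csi-install.sh --namespace csi-powerstore --values ./my-powerstore-settings.yaml
```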
diff --git a/content/v1/csidriver/installation/helm/unity.md b/content/v1/csidriver/installation/helm/unity.md
index e7e0f12538..6b9cefabdf 100644
--- a/content/v1/csidriver/installation/helm/unity.md
+++ b/content/v1/csidriver/installation/helm/unity.md
@@ -6,20 +6,6 @@ description: >
The CSI Driver for Dell Unity XT can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-unity/tree/master/dell-csi-helm-installer).
-The controller section of the Helm chart installs the following components in a _Deployment_:
-
-- CSI Driver for Unity XT
-- Kubernetes External Provisioner, which provisions the volumes
-- Kubernetes External Attacher, which attaches the volumes to the containers
-- Kubernetes External Snapshotter, which provides snapshot support
-- Kubernetes External Resizer, which resizes the volume
-- Kubernetes External Health Monitor, which provides volume health status
-
-The node section of the Helm chart installs the following component in a _DaemonSet_:
-
-- CSI Driver for Unity XT
-- Kubernetes Node Registrar, which handles the driver registration
-
## Prerequisites
Before you install the CSI Driver for Unity XT, verify that the requirements mentioned in this topic are installed and configured.
@@ -92,7 +78,7 @@ Install CSI Driver for Unity XT using this procedure.
* As a prerequisite for running this procedure, you must have downloaded the files, including the Helm chart, from the source [git repository](https://github.com/dell/csi-unity) with the command
```bash
- git clone -b v2.8.0 https://github.com/dell/csi-unity.git
+ git clone -b v2.9.1 https://github.com/dell/csi-unity.git
```
* In the top-level dell-csi-helm-installer directory, there should be two scripts, `csi-install.sh` and `csi-uninstall.sh`.
* Ensure the _unity_ namespace exists in the Kubernetes cluster. Use the `kubectl create namespace unity` command to create the namespace if it is not present.
@@ -112,15 +98,16 @@ Procedure
2. Get the required values.yaml using the command below:
```bash
-cd dell-csi-helm-installer && wget -O my-unity-settings.yaml https://github.com/dell/helm-charts/raw/csi-unity-2.8.0/charts/csi-unity/values.yaml
+cd dell-csi-helm-installer && wget -O my-unity-settings.yaml https://github.com/dell/helm-charts/raw/csi-unity-2.9.1/charts/csi-unity/values.yaml
```
3. Edit `values.yaml` to set the following parameters for your installation:
-
- The following table lists the primary configurable parameters of the Unity XT driver chart and their default values. More detailed information can be found in the [`values.yaml`](https://github.com/dell/helm-charts/blob/main/charts/csi-unity/values.yaml) file in this repository.
+
+ The following table lists the primary configurable parameters of the Unity XT driver chart and their default values. More detailed information can be found in the [`values.yaml`](https://github.com/dell/helm-charts/blob/csi-unity-2.9.1/charts/csi-unity/values.yaml) file in this repository.
| Parameter | Description | Required | Default |
| --------- | ----------- | -------- |-------- |
+ | images | List all the images used by the CSI driver and CSM. If you use a private repository, change the registries accordingly. | Yes | "" |
| logLevel | LogLevel is used to set the logging level of the driver | No | info |
| allowRWOMultiPodAccess | Flag to enable multiple pods to use the same PVC on the same node with RWO access mode. | No | false |
| kubeletConfigDir | Specify kubelet config dir path | Yes | /var/lib/kubelet |
@@ -129,7 +116,6 @@ cd dell-csi-helm-installer && wget -O my-unity-settings.yaml https://github.com/
| certSecretCount | Represents the number of certificate secrets, which the user is going to create for SSL authentication. (unity-cert-0..unity-cert-n). The minimum value should be 1. | No | 1 |
| imagePullPolicy | The default pull policy is IfNotPresent which causes the Kubelet to skip pulling an image if it already exists. | Yes | IfNotPresent |
 | podmon.enabled | Enables the service that monitors failing jobs and sends notifications | No | false |
- | podmon.image| pod man image name | No | - |
| tenantName | Tenant name added while adding host entry to the array | No | |
| fsGroupPolicy | Defines which FS Group policy mode to be used, Supported modes `None, File and ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
| storageCapacity.enabled | Enable/Disable storage capacity tracking | No | true |
@@ -341,7 +327,7 @@ cd dell-csi-helm-installer && wget -O my-unity-settings.yaml https://github.com/
**Syntax**:
```bash
- git clone -b csi-unity-2.8.0 https://github.com/dell/helm-charts
+ git clone -b csi-unity-2.9.1 https://github.com/dell/helm-charts
helm install <release-name> dell/container-storage-modules -n <namespace> --version <chart-version> -f <values-file>
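# Hypothetical invocation with the placeholders filled in; the release name
# and namespace are illustrative, and the chart version must match your
# driver release.
helm install unity dell/container-storage-modules -n unity --version <chart-version> -f my-unity-settings.yaml
```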
diff --git a/content/v1/csidriver/installation/offline/_index.md b/content/v1/csidriver/installation/offline/_index.md
index bb46d37b88..8b12c65308 100644
--- a/content/v1/csidriver/installation/offline/_index.md
+++ b/content/v1/csidriver/installation/offline/_index.md
@@ -4,8 +4,7 @@ linktitle: Offline Installer
description: Offline Installation of Dell CSI Storage Providers
---
-The `csi-offline-bundle.sh` script can be used to create a package usable for offline installation of the Dell CSI Storage Providers, via either Helm
-or the Dell CSI Operator.
+The `csi-offline-bundle.sh` script can be used to create a package usable for offline installation of the Dell CSI Storage Providers, via either Helm or the Dell CSM Operator.
This includes the following drivers:
* [PowerFlex](https://github.com/dell/csi-vxflexos)
@@ -14,8 +13,9 @@ This includes the following drivers:
* [PowerStore](https://github.com/dell/csi-powerstore)
* [Unity XT](https://github.com/dell/csi-unity)
-As well as the Dell CSI Operator
-* [Dell CSI Operator](https://github.com/dell/dell-csi-operator)
+As well as the Dell CSM Operator.
+* [Dell CSM Operator](https://github.com/dell/csm-operator)
+ - Directions for offline installation can be found [here](../../../deployment/csmoperator/#building-an-offline-bundle).
## Dependencies
@@ -50,93 +50,93 @@ To perform an offline installation of a driver or the Operator, the following st
This needs to be performed on a Linux system with access to the Internet, as a git repo will need to be cloned and container images pulled from public registries.
To build an offline bundle, the following steps are needed:
-1. Perform a `git clone` of the desired repository. For a helm-based install, the specific driver repo should be cloned. For an Operator based deployment, the Dell CSI Operator repo should be cloned
+1. Perform a `git clone` of the desired repository. For a helm-based install, the specific driver repo should be cloned. For an Operator based deployment, the Dell CSM Operator repo should be cloned
2. Run the `csi-offline-bundle.sh` script with an argument of `-c` in order to create an offline bundle
- For Helm installs, the `csi-offline-bundle.sh` script will be found in the `dell-csi-helm-installer` directory
- - For Operator installs, the `csi-offline-bundle.sh` script will be found in the `scripts` directory
+ - For Operator installs, the `csm-offline-bundle.sh` script will be found in the `scripts` directory
The script will perform the following steps:
- - Determine required images by parsing either the driver Helm charts (if run from a cloned CSI Driver git repository) or the Dell CSI Operator configuration files (if run from a clone of the Dell CSI Operator repository)
+ - Determine required images by parsing either the driver Helm charts (if run from a cloned CSI Driver git repository) or the Dell CSM Operator configuration files (if run from a clone of the Dell CSM Operator repository)
- Perform an image `pull` of each image required
- Save all required images to a file by running `docker save` or `podman save`
 - Build a `tar.gz` file containing the images as well as the files required to install the driver and/or Operator
The resulting offline bundle file can be copied to another machine, if necessary, to gain access to the desired image registry.
-For example, here is the output of a request to build an offline bundle for the Dell CSI Operator:
+For example, here is the output of a request to build an offline bundle for the Dell CSM Operator:
```bash
-git clone -b v1.12.0 https://github.com/dell/dell-csi-operator.git
+git clone -b v1.4.3 https://github.com/dell/csm-operator.git
```
```bash
-cd dell-csi-operator/scripts
+cd csm-operator
```
```bash
-./csi-offline-bundle.sh -c
+bash scripts/csm-offline-bundle.sh -c
```
```
+*
+* Building image manifest file
+
+ Processing file /root/csm-operator/operatorconfig/driverconfig/common/default.yaml
+ Processing file /root/csm-operator/bundle/manifests/dell-csm-operator.clusterserviceversion.yaml
+
*
* Pulling and saving container images
- dellemc/csi-isilon:v2.5.0
- dellemc/csi-isilon:v2.6.0
- dellemc/csi-isilon:v2.7.0
- dellemc/csipowermax-reverseproxy:v2.4.0
- dellemc/csi-powermax:v2.3.1
- dellemc/csi-powermax:v2.4.0
- dellemc/csi-powermax:v2.5.0
- dellemc/csi-powerstore:v2.6.0
- dellemc/csi-powerstore:v2.7.0
- dellemc/csi-powerstore:v2.8.0
- dellemc/csi-unity:v2.3.0
- dellemc/csi-unity:v2.4.0
- dellemc/csi-unity:v2.5.0
- dellemc/csi-vxflexos:v2.6.0
- dellemc/csi-vxflexos:v2.7.0
- dellemc/csi-vxflexos:v2.8.0
- dellemc/dell-csi-operator:v1.12.0
- dellemc/sdc:3.6
- dellemc/sdc:3.6.0.6
- dellemc/sdc:3.6.1
- docker.io/busybox:1.32.0
- ...
- ...
+ dellemc/csi-isilon:v2.9.1
+ dellemc/csi-metadata-retriever:v1.6.1
+ dellemc/csipowermax-reverseproxy:v2.8.1
+ dellemc/csi-powermax:v2.9.1
+ dellemc/csi-powerstore:v2.9.1
+ dellemc/csi-unity:v2.8.1
+ dellemc/csi-vxflexos:v2.9.2
+ dellemc/csm-authorization-sidecar:v1.9.1
+ dellemc/csm-metrics-powerflex:v1.5.0
+ dellemc/csm-metrics-powerscale:v1.2.0
+ dellemc/csm-topology:v1.5.0
+ dellemc/dell-csi-replicator:v1.7.1
+ dellemc/dell-replication-controller:v1.7.0
+ dellemc/sdc:4.5
+ docker.io/dellemc/dell-csm-operator:v1.4.3
+ gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0
+ nginxinc/nginx-unprivileged:1.20
+ otel/opentelemetry-collector:0.42.0
+ registry.k8s.io/sig-storage/csi-attacher:v4.3.0
+ registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.9.0
+ registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.8.0
+ registry.k8s.io/sig-storage/csi-provisioner:v3.5.0
+ registry.k8s.io/sig-storage/csi-resizer:v1.8.0
+ registry.k8s.io/sig-storage/csi-snapshotter:v6.2.2
*
* Copying necessary files
- /root/dell-csi-operator/driverconfig
- /root/dell-csi-operator/deploy
- /root/dell-csi-operator/samples
- /root/dell-csi-operator/scripts
- /root/dell-csi-operator/OLM.md
- /root/dell-csi-operator/README.md
- /root/dell-csi-operator/LICENSE
+ /root/csm-operator/deploy
+ /root/csm-operator/operatorconfig
+ /root/csm-operator/samples
+ /root/csm-operator/scripts
+ /root/csm-operator/README.md
+ /root/csm-operator/LICENSE
*
* Compressing release
- dell-csi-operator-bundle/
- dell-csi-operator-bundle/driverconfig/
- dell-csi-operator-bundle/driverconfig/config.yaml
- dell-csi-operator-bundle/driverconfig/isilon_v230_v121.json
- dell-csi-operator-bundle/driverconfig/isilon_v230_v122.json
- dell-csi-operator-bundle/driverconfig/isilon_v230_v123.json
- dell-csi-operator-bundle/driverconfig/isilon_v230_v124.json
- dell-csi-operator-bundle/driverconfig/isilon_v240_v121.json
- dell-csi-operator-bundle/driverconfig/isilon_v240_v122.json
- dell-csi-operator-bundle/driverconfig/isilon_v240_v123.json
- dell-csi-operator-bundle/driverconfig/isilon_v240_v124.json
- dell-csi-operator-bundle/driverconfig/isilon_v250_v123.json
- dell-csi-operator-bundle/driverconfig/isilon_v250_v124.json
- dell-csi-operator-bundle/driverconfig/isilon_v250_v125.json
- dell-csi-operator-bundle/driverconfig/powermax_v230_v121.json
- ...
- ...
+dell-csm-operator-bundle/
+dell-csm-operator-bundle/deploy/
+dell-csm-operator-bundle/deploy/operator.yaml
+dell-csm-operator-bundle/deploy/crds/
+dell-csm-operator-bundle/deploy/crds/storage.dell.com_containerstoragemodules.yaml
+dell-csm-operator-bundle/deploy/olm/
+dell-csm-operator-bundle/deploy/olm/operator_community.yaml
+...
+...
+dell-csm-operator-bundle/README.md
+dell-csm-operator-bundle/LICENSE
*
* Complete
-Offline bundle file is: /root/dell-csi-operator/dell-csi-operator-bundle.tar.gz
+Offline bundle file is: /root/csm-operator/dell-csm-operator-bundle.tar.gz
```
@@ -148,6 +148,7 @@ To prepare for the driver or Operator installation, the following steps need to
1. Copy the offline bundle file created from the previous step to a system with access to an image registry available to your Kubernetes/OpenShift cluster
2. Expand the bundle file by running `tar xvfz <bundle-file>`
3. Run the `csi-offline-bundle.sh` script and supply the `-p` option as well as the path to the internal registry with the `-r` option
+ - For Operator installs, the `csm-offline-bundle.sh` script will be found in the `scripts` directory
The script will then perform the following steps:
- Load the required container images into the local system
@@ -156,24 +157,28 @@ The script will then perform the following steps:
- Modify the Helm charts or Operator configuration to refer to the newly tagged/pushed images
-An example of preparing the bundle for installation (192.168.75.40:5000 refers to an image registry accessible to Kubernetes/OpenShift):
+An example of preparing the bundle for installation for the Dell CSM Operator:
```bash
-tar xvfz dell-csi-operator-bundle.tar.gz
+tar xvfz dell-csm-operator-bundle.tar.gz
```
```
-dell-csi-operator-bundle/
-dell-csi-operator-bundle/samples/
+dell-csm-operator-bundle/
+dell-csm-operator-bundle/deploy/
+dell-csm-operator-bundle/deploy/operator.yaml
+dell-csm-operator-bundle/deploy/crds/
+dell-csm-operator-bundle/deploy/crds/storage.dell.com_containerstoragemodules.yaml
+dell-csm-operator-bundle/deploy/olm/
+dell-csm-operator-bundle/deploy/olm/operator_community.yaml
...
-
...
-dell-csi-operator-bundle/LICENSE
-dell-csi-operator-bundle/README.md
+dell-csm-operator-bundle/README.md
+dell-csm-operator-bundle/LICENSE
```
```bash
-cd dell-csi-operator-bundle
+cd dell-csm-operator-bundle
```
```bash
-./csi-offline-bundle.sh -p -r localregistry:5000/csi-operator
+bash scripts/csm-offline-bundle.sh -p -r localregistry:5000/dell-csm-operator/
```
```
Preparing a offline bundle for installation
@@ -181,93 +186,43 @@ Preparing a offline bundle for installation
*
* Loading docker images
- 5b1fa8e3e100: Loading layer [==================================================>] 3.697MB/3.697MB
- e20ed4c73206: Loading layer [==================================================>] 17.22MB/17.22MB
- Loaded image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.6.0
- d72a74c56330: Loading layer [==================================================>] 3.031MB/3.031MB
- f2d2ab12e2a7: Loading layer [==================================================>] 48.08MB/48.08MB
- Loaded image: k8s.gcr.io/sig-storage/csi-snapshotter-v6.1.0
- 417cb9b79ade: Loading layer [==================================================>] 3.062MB/3.062MB
- 61fefb35ccee: Loading layer [==================================================>] 16.88MB/16.88MB
- Loaded image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.1
- 7a5b9c0b4b14: Loading layer [==================================================>] 3.031MB/3.031MB
- 1555ad6e2d44: Loading layer [==================================================>] 49.86MB/49.86MB
- Loaded image: k8s.gcr.io/sig-storage/csi-attacher-v4.0.0
- 2de1422d5d2d: Loading layer [==================================================>] 54.56MB/54.56MB
- Loaded image: k8s.gcr.io/sig-storage/csi-resizer-v1.6.0
- 25a1c1010608: Loading layer [==================================================>] 54.54MB/54.54MB
- Loaded image: k8s.gcr.io/sig-storage/csi-snapshotter-v6.0.1
- 07363fa84210: Loading layer [==================================================>] 3.062MB/3.062MB
- 5227e51ea570: Loading layer [==================================================>] 54.92MB/54.92MB
- Loaded image: k8s.gcr.io/sig-storage/csi-attacher-v3.5.0
- cfb5cbeabdb2: Loading layer [==================================================>] 55.38MB/55.38MB
- Loaded image: k8s.gcr.io/sig-storage/csi-resizer-v1.5.0
- ...
- ...
+Loaded image: docker.io/dellemc/csi-powerstore:v2.9.1
+Loaded image: docker.io/dellemc/csi-isilon:v2.9.1
+...
+...
+Loaded image: registry.k8s.io/sig-storage/csi-resizer:v1.8.0
+Loaded image: registry.k8s.io/sig-storage/csi-snapshotter:v6.2.2
*
* Tagging and pushing images
- dellemc/dell-csi-operator:v1.12.0 -> localregistry:5000/csi-operator/dell-csi-operator:v1.12.0
- dellemc/csi-isilon:v2.3.0 -> localregistry:5000/csi-operator/csi-isilon:v2.3.0
- dellemc/csi-isilon:v2.4.0 -> localregistry:5000/csi-operator/csi-isilon:v2.4.0
- dellemc/csi-isilon:v2.5.0 -> localregistry:5000/csi-operator/csi-isilon:v2.5.0
- dellemc/csipowermax-reverseproxy:v2.4.0 -> localregistry:5000/csi-operator/csipowermax-reverseproxy:v2.4.0
- dellemc/csi-powermax:v2.3.1 -> localregistry:5000/csi-operator/csi-powermax:v2.3.1
- dellemc/csi-powermax:v2.4.0 -> localregistry:5000/csi-operator/csi-powermax:v2.4.0
- dellemc/csi-powermax:v2.5.0 -> localregistry:5000/csi-operator/csi-powermax:v2.5.0
- dellemc/csi-powerstore:v2.6.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.6.0
- dellemc/csi-powerstore:v2.7.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.7.0
- dellemc/csi-powerstore:v2.8.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.8.0
- dellemc/csi-unity:v2.3.0 -> localregistry:5000/csi-operator/csi-unity:v2.3.0
- dellemc/csi-unity:v2.4.0 -> localregistry:5000/csi-operator/csi-unity:v2.4.0
- dellemc/csi-unity:v2.5.0 -> localregistry:5000/csi-operator/csi-unity:v2.5.0
- dellemc/csi-vxflexos:v2.6.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.6.0
- dellemc/csi-vxflexos:v2.7.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.7.0
- dellemc/csi-vxflexos:v2.8.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.8.0
- dellemc/sdc:3.6 -> localregistry:5000/csi-operator/sdc:3.6
- dellemc/sdc:3.6.0.6 -> localregistry:5000/csi-operator/sdc:3.6.0.6
- dellemc/sdc:3.6.1 -> localregistry:5000/csi-operator/sdc:3.6.1
- docker.io/busybox:1.32.0 -> localregistry:5000/csi-operator/busybox:1.32.0
+ dellemc/csi-isilon:v2.8.0 -> localregistry:5000/dell-csm-operator/csi-isilon:v2.8.0
+ dellemc/csi-metadata-retriever:v1.5.0 -> localregistry:5000/dell-csm-operator/csi-metadata-retriever:v1.5.0
...
...
+ registry.k8s.io/sig-storage/csi-resizer:v1.8.0 -> localregistry:5000/dell-csm-operator/csi-resizer:v1.8.0
+ registry.k8s.io/sig-storage/csi-snapshotter:v6.2.2 -> localregistry:5000/dell-csm-operator/csi-snapshotter:v6.2.2
*
-* Preparing operator files within /root/dell-csi-operator-bundle
-
- changing: dellemc/dell-csi-operator:v1.12.0 -> localregistry:5000/csi-operator/dell-csi-operator:v1.12.0
- changing: dellemc/csi-isilon:v2.3.0 -> localregistry:5000/csi-operator/csi-isilon:v2.3.0
- changing: dellemc/csi-isilon:v2.4.0 -> localregistry:5000/csi-operator/csi-isilon:v2.4.0
- changing: dellemc/csi-isilon:v2.5.0 -> localregistry:5000/csi-operator/csi-isilon:v2.5.0
- changing: dellemc/csipowermax-reverseproxy:v2.4.0 -> localregistry:5000/csi-operator/csipowermax-reverseproxy:v2.4.0
- changing: dellemc/csi-powermax:v2.3.1 -> localregistry:5000/csi-operator/csi-powermax:v2.3.1
- changing: dellemc/csi-powermax:v2.4.0 -> localregistry:5000/csi-operator/csi-powermax:v2.4.0
- changing: dellemc/csi-powermax:v2.5.0 -> localregistry:5000/csi-operator/csi-powermax:v2.5.0
- changing: dellemc/csi-powerstore:v2.6.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.6.0
- changing: dellemc/csi-powerstore:v2.7.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.7.0
- changing: dellemc/csi-powerstore:v2.8.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.8.0
- changing: dellemc/csi-unity:v2.3.0 -> localregistry:5000/csi-operator/csi-unity:v2.3.0
- changing: dellemc/csi-unity:v2.4.0 -> localregistry:5000/csi-operator/csi-unity:v2.4.0
- changing: dellemc/csi-unity:v2.5.0 -> localregistry:5000/csi-operator/csi-unity:v2.5.0
- changing: dellemc/csi-vxflexos:v2.6.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.6.0
- changing: dellemc/csi-vxflexos:v2.7.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.7.0
- changing: dellemc/csi-vxflexos:v2.8.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.8.0
- changing: dellemc/sdc:3.6 -> localregistry:5000/csi-operator/sdc:3.6
- changing: dellemc/sdc:3.6.0.6 -> localregistry:5000/csi-operator/sdc:3.6.0.6
- changing: dellemc/sdc:3.6.1 -> localregistry:5000/csi-operator/sdc:3.6.1
- changing: docker.io/busybox:1.32.0 -> localregistry:5000/csi-operator/busybox:1.32.0
+* Preparing files within /root/dell-csm-operator-bundle
+
+ changing: dellemc/csi-isilon:v2.8.0 -> localregistry:5000/dell-csm-operator/csi-isilon:v2.8.0
+ changing: dellemc/csi-metadata-retriever:v1.5.0 -> localregistry:5000/dell-csm-operator/csi-metadata-retriever:v1.5.0
...
...
-
+ changing: registry.k8s.io/sig-storage/csi-resizer:v1.8.0 -> localregistry:5000/dell-csm-operator/csi-resizer:v1.8.0
+ changing: registry.k8s.io/sig-storage/csi-snapshotter:v6.2.2 -> localregistry:5000/dell-csm-operator/csi-snapshotter:v6.2.2
+
*
* Complete
+
```
### Perform either a Helm installation or Operator installation
-Now that the required images are available and the Helm Charts/Operator configuration updated, you can proceed by following the usual installation procedure as documented either via [Helm](../helm) or [Operator](../operator/#manual-installation).
+Now that the required images are available and the Helm Charts/Operator configuration updated, you can proceed by following the usual installation procedure as documented either via [Helm](../helm) or [Operator](../../../deployment/csmoperator/#installation).
*NOTES:*
1. Offline bundle installation is only supported with manual installs, i.e. without using Operator Lifecycle Manager.
-2. Installation should be done using the files that are obtained after unpacking the offline bundle (dell-csi-operator-bundle.tar.gz) as the image tags in the manifests are modified to point to the internal registry.
+2. Installation should be done using the files that are obtained after unpacking the offline bundle (dell-csm-operator-bundle.tar.gz) as the image tags in the manifests are modified to point to the internal registry.
3. The offline bundle installs the operator in the `default` namespace via the install.sh script. Make sure that the current context in the kubeconfig file has the namespace set to `default`.
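For example, you can point the current context at the `default` namespace with:
```bash
kubectl config set-context --current --namespace=default
```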
diff --git a/content/v1/csidriver/installation/operator/_index.md b/content/v1/csidriver/installation/operator/_index.md
index fa57b0520f..1a22f14052 100644
--- a/content/v1/csidriver/installation/operator/_index.md
+++ b/content/v1/csidriver/installation/operator/_index.md
@@ -7,446 +7,5 @@ description: >
---
{{% pageinfo color="primary" %}}
-The Dell CSI Operator is no longer actively maintained or supported. Dell CSI Operator has been replaced with [Dell CSM Operator](https://dell.github.io/csm-docs/docs/deployment/csmoperator/). If you are currently using Dell CSI Operator, refer to the [operator migration documentation](https://dell.github.io/csm-docs/docs/csidriver/installation/operator/operator_migration/) to migrate from Dell CSI Operator to Dell CSM Operator.
+The Dell CSI Operator is no longer actively maintained or supported. Dell CSI Operator has been replaced with [Dell CSM Operator](https://dell.github.io/csm-docs/docs/deployment/csmoperator/). If you are currently using Dell CSI Operator, refer to the [operator migration documentation](operator_migration/) to migrate from Dell CSI Operator to Dell CSM Operator.
{{% /pageinfo %}}
-
-The Dell CSI Operator is a Kubernetes Operator, which can be used to install and manage the CSI Drivers provided by Dell for various storage platforms. This operator is available as a community operator for upstream Kubernetes and can be deployed using OperatorHub.io. It is also available as a certified operator for OpenShift clusters and can be deployed using the OpenShift Container Platform. Both these methods of installation use OLM (Operator Lifecycle Manager). The operator can also be deployed manually.
-
-## Prerequisites
-### (Optional) Volume Snapshot Requirements
-
-On Upstream Kubernetes clusters, ensure that to install
-* VolumeSnapshot CRDs - Install v1 VolumeSnapshot CRDs
-* External Volume Snapshot Controller
-
-For detailed snapshot setup procedure, [click here.](../../../snapshots/#optional-volume-snapshot-requirements)
-
->NOTE: That step can be skipped with OpenShift.
-
-## Installation
-Dell CSI Operator has been tested and qualified with
-- Upstream Kubernetes or OpenShift (see [supported versions](../../../csidriver/#features-and-capabilities))
-
-### Before you begin
-If you have installed an old version of the `dell-csi-operator` which was available with the name _CSI Operator_, please refer to this [section](#replacing-csi-operator-with-dell-csi-operator) before continuing.
-
-### Full list of CSI Drivers and versions supported by the Dell CSI Operator
-| CSI Driver | Version | ConfigVersion | Kubernetes Version | OpenShift Version |
-| ------------------ | --------- | -------------- | -------------------- | --------------------- |
-| CSI PowerMax | 2.5.0 | v2.5.0 | 1.23, 1.24, 1.25 | 4.10, 4.10 EUS, 4.11 |
-| CSI PowerMax | 2.6.0 | v2.6.0 | 1.24, 1.25, 1.26 | 4.10, 4.10 EUS, 4.11 |
-| CSI PowerMax | 2.7.0 | v2.7.0 | 1.25, 1.26, 1.27 | 4.11, 4.12, 4.12 EUS |
-| CSI PowerFlex | 2.5.0 | v2.5.0 | 1.23, 1.24, 1.25 | 4.10, 4.10 EUS, 4.11 |
-| CSI PowerFlex | 2.6.0 | v2.6.0 | 1.24, 1.25, 1.26 | 4.10, 4.10 EUS, 4.11 |
-| CSI PowerFlex | 2.7.0 | v2.7.0 | 1.25, 1.26, 1.27 | 4.11, 4.12 EUS, 4.12 |
-| CSI PowerScale | 2.5.0 | v2.5.0 | 1.23, 1.24, 1.25 | 4.10, 4.10 EUS, 4.11 |
-| CSI PowerScale | 2.6.0 | v2.6.0 | 1.24, 1.25, 1.26 | 4.10, 4.10 EUS, 4.11 |
-| CSI PowerScale | 2.7.0 | v2.7.0 | 1.25, 1.26, 1.27 | 4.11, 4.12, 4.12 EUS |
-| CSI Unity XT | 2.5.0 | v2.5.0 | 1.23, 1.24, 1.25 | 4.10, 4.10 EUS, 4.11 |
-| CSI Unity XT | 2.6.0 | v2.6.0 | 1.24, 1.25, 1.26 | 4.10, 4.10 EUS, 4.11 |
-| CSI Unity XT | 2.7.0 | v2.7.0 | 1.25, 1.26, 1.27 | 4.11, 4.12, 4.12 EUS |
-| CSI PowerStore | 2.5.0 | v2.5.0 | 1.23, 1.24, 1.25 | 4.10, 4.10 EUS, 4.11 |
-| CSI PowerStore | 2.6.0 | v2.6.0 | 1.24, 1.25, 1.26 | 4.10, 4.10 EUS, 4.11 |
-| CSI PowerStore | 2.7.0 | v2.7.0 | 1.25, 1.26, 1.27 | 4.11, 4.12, 4.12 EUS |
-
-
-
-**Dell CSI Operator can be installed via OLM (Operator Lifecycle Manager) and manual installation.**
-
-### Installation Using Operator Lifecycle Manager
-`dell-csi-operator` can be installed using Operator Lifecycle Manager (OLM) on upstream Kubernetes clusters & Red Hat OpenShift Clusters.
-The installation process involves the creation of a `Subscription` object either via the _OperatorHub_ UI or using `kubectl/oc`. While creating the `Subscription` you can set the Approval strategy for the `InstallPlan` for the Operator to -
-* _Automatic_ - If you want the Operator to be automatically installed or upgraded (once an upgrade becomes available)
-* _Manual_ - If you want a Cluster Administrator to manually review and approve the `InstallPlan` for installation/upgrades
-
-**NOTE**: The recommended version of OLM for upstream Kubernetes is **`v0.18.3`**.
-
-#### Pre-Requisite for installation with OLM
-Please run the following commands for creating the required `ConfigMap` before installing the `dell-csi-operator` using OLM.
-#Replace operator-namespace in the below command with the actual namespace where the operator will be deployed by OLM
-```bash
-git clone -b v1.12.0 https://github.com/dell/dell-csi-operator.git
-cd dell-csi-operator
-tar -czf config.tar.gz driverconfig/
-kubectl create configmap dell-csi-operator-config --from-file config.tar.gz -n
-```
-##### Upstream Kubernetes
-- For installing via OperatorHub.io on Kubernetes, go to the [OperatorHub page](../../partners/operator/).
-##### Red Hat OpenShift Clusters
-- For installing via OpenShift with the Operator, go to the [OpenShift page](../../partners/redhat/).
-
-### Manual Installation
-
-#### Steps
-
->**Skip step 1 for "offline bundle installation" and continue using the workspace created by untar of dell-csi-operator-bundle.tar.gz.**
-1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.12.0 https://github.com/dell/dell-csi-operator.git`.
-2. cd dell-csi-operator
-3. Run `bash scripts/install.sh` to install the operator.
-
-{{< imgproc non-olm-1.jpg Resize "2500x" >}}{{< /imgproc >}}
-
-4. Run the command `oc get pods -n dell-csi-operator` to validate the installation. If completed successfully, you should be able to see the operator-related pod in the 'dell-csi-operator' namespace.
-
-{{< imgproc non-olm-2.jpg Resize "3500x800" >}}{{< /imgproc >}}
-
-## Custom Resource Definitions
-As part of the Dell CSI Operator installation, a CRD representing each driver installation is also installed.
-List of CRDs which are installed in API Group `storage.dell.com`
-* csipowermax
-* csiunity
-* csivxflexos
-* csiisilon
-* csipowerstore
-* csipowermaxrevproxy
-
-For installation of the supported drivers, a `CustomResource` has to be created in your cluster.
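-
-Before creating a `CustomResource`, you can quickly confirm that these CRDs were registered by the operator (a simple check, assuming only a working kubeconfig):
-```bash
-# List the Dell CRDs installed in the storage.dell.com API group
-kubectl get crds | grep storage.dell.com
-```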
-
-## Pre-Requisites for installation of the CSI Drivers
-
-### Pre-requisites for upstream Kubernetes Clusters
-On upstream Kubernetes clusters, make sure to install
-* VolumeSnapshot CRDs
-  * On clusters running v1.25, v1.26, and v1.27, make sure to install v1 VolumeSnapshot CRDs
-* External Volume Snapshot Controller with the correct version
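-
-As an illustration, the snapshot CRDs and the snapshot controller can be installed from the upstream external-snapshotter repository; the commands below are a sketch assuming the v6.2.1 release and should be matched to the version your driver supports.
-```bash
-git clone -b v6.2.1 https://github.com/kubernetes-csi/external-snapshotter.git
-cd external-snapshotter
-# Install the v1 VolumeSnapshot CRDs
-kubectl kustomize client/config/crd | kubectl create -f -
-# Install the external snapshot controller
-kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
-```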
-
-### Pre-requisites for Red Hat OpenShift Clusters
-#### iSCSI
-If you are installing a CSI driver that is going to use iSCSI as the transport protocol, please follow the instructions below.
-In Red Hat OpenShift clusters, you can create a `MachineConfig` object using the console or `oc` to ensure that the iSCSI daemon starts on all the Red Hat CoreOS nodes. Here is an example of a `MachineConfig` object:
-
-```yaml
-apiVersion: machineconfiguration.openshift.io/v1
-kind: MachineConfig
-metadata:
- name: 99-iscsid
- labels:
- machineconfiguration.openshift.io/role: worker
-spec:
- config:
- ignition:
- version: 3.2.0
- systemd:
- units:
- - name: "iscsid.service"
- enabled: true
-```
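-
-To deploy it, a sketch assuming the manifest above is saved as 99-iscsid.yaml:
-```bash
-oc apply -f 99-iscsid.yaml
-# The worker MachineConfigPool will roll the change out to the nodes
-oc get machineconfigpool worker
-```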
-Once the `MachineConfig` object has been deployed, CoreOS will ensure that `iscsid.service` starts automatically.
-
-Alternatively, you can check the status of the iSCSI service by entering the following command on each worker node in the cluster:
-
-```bash
-sudo systemctl status iscsid
-```
-
-The service should be up and running (i.e., it should be in the active state).
-
-If the `iscsid.service` is not running, then perform the following steps on each worker node in the cluster:
-1. Log in to the worker nodes and check whether the file /etc/iscsi/initiatorname.iscsi has been created properly
-2. If the file doesn't exist or doesn't contain a valid iSCSI IQN, make sure it exists with valid entries
-3. Ensure that the iscsid service is running - enable it with `sudo systemctl enable iscsid` and restart it with `sudo systemctl restart iscsid` if necessary.
-Note: If your worker nodes are running Red Hat CoreOS, make sure that automatic iSCSI login at boot is configured. Please contact Red Hat for more details.
-
-#### MultiPath
-If you are installing a CSI driver that requires the Linux native multipath software - _multipathd_ - please follow the instructions below.
-
-To enable multipathd on RedHat CoreOS nodes you need to prepare a working configuration encoded in base64.
-
-```bash
-echo 'defaults {
-user_friendly_names yes
-find_multipaths yes
-}
-blacklist {
-}' | base64 -w0
-```
-
-Use the base64-encoded string output in the following `MachineConfig` yaml file (under the `source` section).
-```yaml
-apiVersion: machineconfiguration.openshift.io/v1
-kind: MachineConfig
-metadata:
- name: workers-multipath-conf-default
- labels:
- machineconfiguration.openshift.io/role: worker
-spec:
- config:
- ignition:
- version: 3.2.0
- storage:
- files:
- - contents:
- source: data:text/plain;charset=utf-8;base64,ZGVmYXVsdHMgewp1c2VyX2ZyaWVuZGx5X25hbWVzIHllcwpmaW5kX211bHRpcGF0aHMgeWVzCn0KCmJsYWNrbGlzdCB7Cn0K
- verification: {}
- filesystem: root
- mode: 400
- path: /etc/multipath.conf
-```
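-
-To deploy it, a sketch assuming the manifest above is saved as workers-multipath-conf-default.yaml:
-```bash
-oc apply -f workers-multipath-conf-default.yaml
-oc get machineconfigpool worker
-```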
-After deploying this `MachineConfig` object, CoreOS will start the multipath service automatically.
-Alternatively, you can check the status of the multipath service by entering the following command on each worker node:
-`sudo multipath -ll`
-
-If the above command is not successful, ensure that the /etc/multipath.conf file is present and configured properly. Once the file has been configured correctly, enable the multipath service by running the following command:
-`sudo /sbin/mpathconf --enable --with_multipathd y`
-
-Finally, restart the service by running the following command:
-`sudo systemctl restart multipathd`
-
-For additional information, refer to the official multipath configuration documentation.
-
-## Installing CSI Driver via Operator
-CSI Drivers can be installed by creating a `CustomResource` object in your cluster.
-
-Sample manifest files for each driver `CustomResourceDefinition` have been provided in the `samples` folder to help with the installation of the drivers.
-These files follow the naming convention
-```bash
- {driver name}_{driver version}_k8s_{k8s version}.yaml
-```
-Or
-```bash
- {driver name}_{driver version}_ops_{OpenShift version}.yaml
-```
-For example:
-* samples/powermax_v270_k8s_127.yaml <- To install the CSI PowerMax driver v2.7.0 on a Kubernetes 1.27 cluster
-* samples/powermax_v270_ops_412.yaml <- To install the CSI PowerMax driver v2.7.0 on an OpenShift 4.12 cluster
-
-Copy the correct sample file and edit the mandatory and any optional parameters specific to your driver installation by following the instructions [here](#modify-the-driver-specification).
->NOTE: A detailed explanation of the various mandatory and optional fields in the CustomResource is available [here](#custom-resource-specification). Please make sure to read through and understand the various fields.
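-
-For instance, preparing a PowerMax CR from the bundled sample could look like the sketch below (the target file name powermax-cr.yaml is illustrative):
-```bash
-cp samples/powermax_v270_k8s_127.yaml powermax-cr.yaml
-# Edit the namespace, name, and mandatory parameters before creating the CR
-vi powermax-cr.yaml
-```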
-
-Run the following command to install the CSI driver.
-```bash
-kubectl create -f <driver-cr.yaml>
-```
-
-**Note**: If you are using an OLM based installation, the example manifests are available in the `OperatorHub` UI.
-You can edit these manifests and install the driver using the `OperatorHub` UI.
-
-### Verifying the installation
-Once the driver Custom Resource has been created, you can verify the installation.
-
-* Check if the driver CR was created successfully
-
- For example, if you installed the PowerMax driver:
- ```bash
- kubectl get csipowermax -n <driver-namespace>
- ```
-* Check the status of the Custom Resource to verify if the driver installation was successful
-
-If the driver-namespace was set to _test-powermax_, and the name of the driver is _powermax_, then run the command `kubectl get csipowermax/powermax -n test-powermax -o yaml` to get the details of the Custom Resource.
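-
-To inspect just the state, a sketch assuming the operator reports it under `.status.state`:
-```bash
-kubectl get csipowermax/powermax -n test-powermax -o jsonpath='{.status.state}'
-```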
-
-Note: If the _state_ of the `CustomResource` is _Running_ then all the driver pods have been successfully installed. If the _state_ is _SuccessFul_, then it means the driver deployment was successful but some driver pods may not be in a _Running_ state.
-Please refer to the _Troubleshooting_ section [here](../../troubleshooting/operator) if you encounter any issues during installation.
-
-## Update CSI Drivers
-The CSI Drivers installed by the Dell CSI Operator can be updated like any Kubernetes resource. This can be achieved in various ways, including:
-
-* Modifying the installation directly via `kubectl edit`
- ```bash
- kubectl get <custom-resource-type> -n <driver-namespace>
- ```
- For example, if the Unity XT driver is installed, run this command to get the object name of kind CSIUnity:
- ```bash
- # Replace <driver-namespace> with the namespace where the Unity XT driver is installed
- kubectl get csiunity -n <driver-namespace>
- ```
- Use the object name in the `kubectl edit` command:
- ```bash
- kubectl edit <custom-resource-type>/<object-name> -n <driver-namespace>
- ```
- For example, if the object is of kind CSIUnity:
- ```bash
- # Replace <object-name> with the object name of kind CSIUnity
- kubectl edit csiunity/<object-name> -n <driver-namespace>
- ```
- and modify the installation. The usual fields to edit are the driver version, the sidecars, and the environment variables.
-
-* Modify the API object in place via the `kubectl patch` command.
- For example, if you want to patch the Unity XT driver deployment to have two replicas, run this command to get the deployment:
- ```bash
- kubectl get deployments -n <driver-namespace>
- ```
- To patch the deployment with your patch object inline, run this command:
- ```bash
- # Replace <deployment-name> with the name of the deployment
- kubectl patch deploy/<deployment-name> -n <driver-namespace> -p '{"spec":{"replicas": 2}}'
- ```
- To patch the deployment with your patch file, run this command:
- ```bash
- # Replace <deployment-name> with the name of the deployment
- kubectl patch deployment <deployment-name> -n <driver-namespace> --patch-file patch-file.yaml
- ```
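-
- As an illustration, a hypothetical patch-file.yaml that raises the controller replica count could be created as follows:
- ```bash
- cat > patch-file.yaml <<EOF
- spec:
-   replicas: 2
- EOF
- ```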
-
-
-To create a patch file or edit deployments, refer [here](https://github.com/dell/dell-csi-operator/tree/master/samples) for driver versions and environment variables and [here](https://github.com/dell/dell-csi-operator/tree/master/driverconfig/config.yaml) for sidecar versions.
-The latest versions of the drivers could have additional environment variables or sidecars.
-
-The below notes explain some of the general items to take care of.
-
-**NOTES:**
-1. If you are trying to upgrade the CSI driver from an older version, make sure to modify the _configVersion_ field if required.
- ```yaml
- driver:
- configVersion: v2.7.0
- ```
-2. The Volume Health Monitoring feature is optional and, by default, is disabled for drivers installed via the operator.
-   To enable this feature, modify the blocks below while upgrading the driver. To get the volume health state, add the
-   external-health-monitor sidecar in the sidecar section and set the `value` under controller to true and the `value`
-   under node to true, as shown below:
- i. Add controller and node section as below:
- ```yaml
- controller:
- envs:
- - name: X_CSI_HEALTH_MONITOR_ENABLED
- value: "true"
- dnsPolicy: ClusterFirstWithHostNet
- node:
- envs:
- - name: X_CSI_HEALTH_MONITOR_ENABLED
- value: "true"
- ```
- ii. Update the sidecar versions and add external-health-monitor sidecar if you want to enable health monitor of CSI volumes from Controller plugin:
- ```yaml
- sideCars:
- - args:
- - --volume-name-prefix=csiunity
- - --default-fstype=ext4
- image: k8s.gcr.io/sig-storage/csi-provisioner:v3.4.0
- imagePullPolicy: IfNotPresent
- name: provisioner
- - args:
- - --snapshot-name-prefix=csiunitysnap
- image: k8s.gcr.io/sig-storage/csi-snapshotter:v6.2.1
- imagePullPolicy: IfNotPresent
- name: snapshotter
- - args:
- - --monitor-interval=60s
- image: gcr.io/k8s-staging-sig-storage/csi-external-health-monitor-controller:v0.8.0
- imagePullPolicy: IfNotPresent
- name: external-health-monitor
- - image: k8s.gcr.io/sig-storage/csi-attacher:v4.2.0
- imagePullPolicy: IfNotPresent
- name: attacher
- - image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.6.3
- imagePullPolicy: IfNotPresent
- name: registrar
- - image: k8s.gcr.io/sig-storage/csi-resizer:v1.7.0
- imagePullPolicy: IfNotPresent
- name: resizer
- ```
-3. A ConfigMap needs to be created with the command `kubectl create -f configmap.yaml` using the following yaml file.
-```yaml
-apiVersion: v1
-kind: ConfigMap
-metadata:
- name: unity-config-params
- namespace: unity
-data:
- driver-config-params.yaml: |
- CSI_LOG_LEVEL: "info"
- ALLOW_RWO_MULTIPOD_ACCESS: "false"
- MAX_UNITY_VOLUMES_PER_NODE: "0"
- SYNC_NODE_INFO_TIME_INTERVAL: "15"
- TENANT_NAME: ""
-```
-
-**NOTE:** `Replicas` in the driver CR file should not be greater than or equal to the number of worker nodes when upgrading the driver. If the `Replicas` count is not less than the worker node count, some of the driver controller pods would land in a pending state, and the upgrade will not be successful. Driver controller pods go into a pending state because they have anti-affinity to each other and cannot be scheduled on nodes where a driver controller pod is already running. Refer to https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity for more details.
-
-**NOTE:** Do not try to update the operator by modifying the original `CustomResource` manifest file and running the `kubectl apply -f` command. As part of the driver installation, the Operator sets some annotations on the `CustomResource` object which are further utilized in some workflows (like detecting upgrade of drivers). If you run the `kubectl apply -f` command to update the driver, these annotations are overwritten and this may lead to failures.
-
-**NOTE:** From v1.4.0 onwards, Dell CSI Operator does not support the creation of `StorageClass` and `VolumeSnapshotClass` objects. Although these fields are still present in the various driver `CustomResourceDefinitions`, they would be ignored by the operator. These fields will be removed from the `CustomResourceDefinitions` in a future release. If `StorageClass` and `VolumeSnapshotClass` need to be retained, you should upgrade the driver as per the recommended way noted above.
-`StorageClass` and `VolumeSnapshotClass` would not be retained on driver uninstallation.
-
-### Supported modifications
-* Changing environment variable values for driver
-* Adding (supported) environment variables
-* Updating the image of the driver
-## Limitations
-* The Dell CSI Operator can't manage any existing driver installed using Helm charts. If you already have installed one of the Dell CSI drivers in your cluster and want to use the operator based deployment, uninstall the driver and then redeploy the driver following the installation procedure described.
-* The Dell CSI Operator is not fully compliant with the OperatorHub React UI elements and some of the Custom Resource fields may show up as invalid or unsupported in the OperatorHub GUI. To get around this problem, use kubectl/oc commands to get details about the Custom Resource (CR). This issue will be fixed in upcoming releases of the Dell CSI Operator.
-
-
-## Custom Resource Specification
-Each CSI Driver installation is represented by a Custom Resource.
-
-The specification for the Custom Resource is the same for all the drivers.
-Below is a list of all the mandatory and optional fields in the Custom Resource specification
-
-### Mandatory fields
-**configVersion** - Configuration version - Refer full list of supported driver for finding out the appropriate config version [here](#full-list-of-csi-drivers-and-versions-supported-by-the-dell-csi-operator)
-**replicas** - Number of replicas for the controller plugin - Must be set to 1 for all drivers (the latest driver versions allow a value higher than 1; see the note below the sample specification)
-**dnsPolicy** - Determines the dnsPolicy for the node daemonset. Accepted values are `Default`, `ClusterFirst`, `ClusterFirstWithHostNet`, `None`
-**common**
-This field is mandatory and is used to specify common properties for both controller and the node plugin
-* image - driver container image
-* imagePullPolicy - Image Pull Policy of the driver image
-* envs - List of environment variables and their values
-### Optional fields
-**controller** - List of environment variables and values which are applicable only for controller
-**node** - List of environment variables and values which are applicable only for node
-**sideCars** - Specification for CSI sidecar containers.
-**authSecret** - Name of the secret holding credentials for use by the driver. If not specified, the default secret *-creds must exist in the same namespace as the driver
-**tlsCertSecret** - Name of the TLS cert secret for use by the driver. If not specified, a secret *-certs must exist in the same namespace as the driver
-
-**forceUpdate**
-Boolean value which can be set to `true` in order to force update the status of the CSI Driver
-
-**tolerations**
-List of tolerations that should be applied to the driver StatefulSet/Deployment and DaemonSet
-Set it separately in the controller and node sections if you want separate sets of tolerations for them
-
-**nodeSelector**
-Used to specify node selectors for the driver StatefulSet/Deployment and DaemonSet
-
-**fsGroupPolicy**
-Defines which FS group policy mode is to be used. Supported modes: None, File, and ReadWriteOnceWithFSType
-
-Here is a sample specification annotated with comments to explain each field
-```yaml
-apiVersion: storage.dell.com/v1
-kind: CSIPowerMax # Type of the driver
-metadata:
- name: test-powermax # Name of the driver
- namespace: test-powermax # Namespace where driver is installed
-spec:
- driver:
- # Used to specify configuration version
- configVersion: v3 # Refer the table containing the full list of supported drivers to find the appropriate config version
- replicas: 1
- forceUpdate: false # Set to true in case you want to force an update of driver status
- common: # All common specification
- image: "dellemc/csi-powermax:v1.4.0.000R" #driver image for a particular release
- imagePullPolicy: IfNotPresent
- envs:
- - name: X_CSI_POWERMAX_ENDPOINT
- value: "https://0.0.0.0:8443/"
- - name: X_CSI_K8S_CLUSTER_PREFIX
- value: "XYZ"
-```
-You can set the field ***replicas*** to a higher number than `1` for the latest driver versions.
-
-Note - The `image` field should point to the correct image tag for the version of the driver you are installing.
-For example, if you wish to install v2.7.0 of the CSI PowerMax driver, use the image tag `dellemc/csi-powermax:v2.7.0`.
-
-### SideCars
-Although the sidecars field in the driver specification is optional, it is **strongly** recommended not to modify any details related to sidecars provided (if present) in the sample manifests. The only exception to this is modifications requested by the documentation, for example, filling in blank IPs or other such system-specific data. Any modifications not specifically requested by the documentation should only be done after consulting with Dell support.
-
-### Modify the driver specification
-* Choose the correct configVersion. Refer to the table containing the full list of supported drivers and versions.
-* Provide the namespace (in metadata section) where you want to install the driver.
-* Provide a name (in metadata section) for the driver. This will be the name of the Custom Resource.
-* Edit the values for mandatory configuration parameters specific to your installation.
-* Edit/Add any values for optional configuration parameters to customize your installation.
-* If you are installing the latest versions of the CSI drivers, the default number of replicas is set to 2. You can increase/decrease this value.
-
-### StorageClass and VolumeSnapshotClass
-
-#### New Installations
-You should not provide any `StorageClass` or `VolumeSnapshotClass` details during driver installation. The sample files for all the drivers have been updated to reflect this change. Even if these details are there in the sample files, `StorageClass` or `VolumeSnapshotClass` will not be created.
-
-#### What happens to my existing StorageClass & VolumeSnapshotClass objects
-* In case you are upgrading an existing driver installation by using kubectl edit or by patching the object in place, any existing objects will remain as is. If you added more objects as part of the upgrade, then this request will be ignored by the Operator.
-* If you uninstall the older driver, then any `StorageClass` or `VolumeSnapshotClass` objects will be deleted.
-* An uninstall followed by an install of the driver would also result in `StorageClass` and `VolumeSnapshotClass` getting deleted and not being created again.
-
-**NOTE:** For more information on pre-requisites and parameters, please refer to the sub-pages below for each driver.
-
-**NOTE:** Storage Classes and Volume Snapshot Classes would no longer be created during the installation of the driver via an operator from v1.4.0 and higher.
-
diff --git a/content/v1/csidriver/installation/operator/isilon.md b/content/v1/csidriver/installation/operator/isilon.md
deleted file mode 100644
index 6b5afb58e6..0000000000
--- a/content/v1/csidriver/installation/operator/isilon.md
+++ /dev/null
@@ -1,202 +0,0 @@
----
-title: PowerScale
-description: >
- Installing CSI Driver for PowerScale via Operator
----
-{{% pageinfo color="primary" %}}
-The Dell CSI Operator is no longer actively maintained or supported. Dell CSI Operator has been replaced with [Dell CSM Operator](https://dell.github.io/csm-docs/docs/deployment/csmoperator/). If you are currently using Dell CSI Operator, refer to the [operator migration documentation](https://dell.github.io/csm-docs/docs/csidriver/installation/operator/operator_migration/) to migrate from Dell CSI Operator to Dell CSM Operator.
-{{% /pageinfo %}}
-
-## Installing CSI Driver for PowerScale via Operator
-
-The CSI Driver for Dell PowerScale can be installed via the Dell CSI Operator.
-
-To deploy the Operator, follow the instructions available [here](../).
-
-There are sample manifests provided which can be edited to do an easy installation of the driver. Note that the deployment of the driver using the operator does not use any Helm charts, and the installation and configuration parameters will be slightly different from the ones specified via the Helm installer.
-
-Kubernetes Operators make it easy to deploy and manage the entire lifecycle of complex Kubernetes applications. Operators use Custom Resource Definitions (CRDs) that represent the application and use custom controllers to manage them.
-
-**Note**: MKE (Mirantis Kubernetes Engine) does not support the installation of CSI-PowerScale via Operator.
-
-### Listing installed drivers with the CSI Isilon CRD
-Users can query for the CSI-PowerScale driver using the following command:
-```bash
-kubectl get csiisilon --all-namespaces
-```
-
-### Install Driver
-
-1. Create namespace.
-
- Execute `kubectl create namespace isilon` to create the isilon namespace (if not already present). Note that the namespace can be any user-defined name, in this example, we assume that the namespace is 'isilon'.
-2. Create *isilon-creds* secret by using secret.yaml file format only.
-
- 2.1 Create a yaml file called secret.yaml with the following content:
- ```yaml
- isilonClusters:
- # logical name of PowerScale Cluster
- - clusterName: "cluster1"
-
- # username for connecting to PowerScale OneFS API server
- # Default value: None
- username: "user"
-
- # password for connecting to PowerScale OneFS API server
- password: "password"
-
- # HTTPS endpoint of the PowerScale OneFS API server
- # Default value: None
- # Examples: "1.2.3.4", "https://1.2.3.4", "https://abc.myonefs.com"
- endpoint: "1.2.3.4"
-
- # Is this a default cluster (would be used by storage classes without ClusterName parameter)
- # Allowed values:
- # true: mark this cluster config as default
- # false: mark this cluster config as not default
- # Default value: false
- isDefault: true
-
- # Specify whether the PowerScale OneFS API server's certificate chain and host name should be verified.
- # Allowed values:
- # true: skip OneFS API server's certificate verification
- # false: verify OneFS API server's certificates
- # Default value: default value specified in values.yaml
- # skipCertificateValidation: true
-
- # The base path for the volumes to be created on PowerScale cluster
- # This will be used if a storage class does not have the IsiPath parameter specified.
- # Ensure that this path exists on PowerScale cluster.
- # Allowed values: unix absolute path
- # Default value: default value specified in values.yaml
- # Examples: "/ifs/data/csi", "/ifs/engineering"
- # isiPath: "/ifs/data/csi"
-
- # The permissions for isi volume directory path
- # This will be used if a storage class does not have the IsiVolumePathPermissions parameter specified.
- # Allowed values: valid octal mode number
- # Default value: "0777"
- # Examples: "0777", "777", "0755"
- # isiVolumePathPermissions: "0777"
-
- - clusterName: "cluster2"
- username: "user"
- password: "password"
- endpoint: "1.2.3.4"
- endpointPort: "8080"
- ```
-
- Replace the values for the given keys as per your environment. After creating secret.yaml, the following command can be used to create the secret:
- ```bash
- kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml
- ```
-
- Use the following command to replace or update the secret
-
- ```bash
- kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml -o yaml --dry-run=client | kubectl replace -f -
- ```
-
- **Note**: The user needs to validate the YAML syntax and array-related key/values while replacing the isilon-creds secret.
- The driver will continue to use previous values in case an error is found in the YAML file.
-
-3. Create isilon-certs-n secret.
- Please refer [this section](../../helm/isilon/#certificate-validation-for-onefs-rest-api-calls) for creating cert-secrets.
-
- If certificate validation is skipped, an empty secret must be created. Example: empty-secret.yaml
-
- ```yaml
- apiVersion: v1
- kind: Secret
- metadata:
- name: isilon-certs-0
- namespace: isilon
- type: Opaque
- data:
- cert-0: ""
- ```
- Execute command:
- ```bash
- kubectl create -f empty-secret.yaml
- ```
-
-4. Create a CR (Custom Resource) for PowerScale using the sample files provided
- [here](https://github.com/dell/dell-csi-operator/tree/master/samples).
-5. Users should configure the parameters in the CR. The following table lists the primary configurable parameters of the PowerScale driver and their default values:
-
- | Parameter | Description | Required | Default |
- | --------- | ----------- | -------- |-------- |
- | dnsPolicy | Determines the DNS Policy of the Node service | Yes | ClusterFirstWithHostNet |
- | fsGroupPolicy | Defines which FS Group policy mode to be used, Supported modes `None, File and ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
- | storageCapacity | Enable/Disable storage capacity tracking feature | No | true |
- | X_CSI_MAX_PATH_LIMIT | Defines the maximum length of path for a volume | No | 192 |
- | ***Common parameters for node and controller*** |
- | CSI_ENDPOINT | The UNIX socket address for handling gRPC calls | No | /var/run/csi/csi.sock |
- | X_CSI_ISI_SKIP_CERTIFICATE_VALIDATION | Specifies whether SSL security needs to be enabled for communication between PowerScale and CSI Driver | No | true |
- | X_CSI_ISI_PATH | Base path for the volumes to be created | Yes | |
- | X_CSI_ALLOWED_NETWORKS | Custom networks for PowerScale export. List of networks that can be used for NFS I/O traffic, CIDR format should be used | No | empty |
- | X_CSI_ISI_AUTOPROBE | To enable auto probing for driver | No | true |
- | X_CSI_ISI_NO_PROBE_ON_START | Indicates whether the controller/node should probe during initialization | Yes | |
- | X_CSI_ISI_VOLUME_PATH_PERMISSIONS | The permissions for isi volume directory path | Yes | 0777 |
- | X_CSI_ISI_AUTH_TYPE | Indicates the authentication method to be used. If set to 1 then it follows as session-based authentication else basic authentication | No | 0 |
- | ***Controller parameters*** |
- | X_CSI_MODE | Driver starting mode | No | controller |
- | X_CSI_ISI_ACCESS_ZONE | Name of the access zone a volume can be created in | No | System |
- | X_CSI_ISI_QUOTA_ENABLED | To enable SmartQuotas | Yes | |
- | nodeSelector | Define node selection constraints for pods of controller deployment | No | |
- | X_CSI_HEALTH_MONITOR_ENABLED | Enable/Disable health monitor of CSI volumes from Controller plugin. Provides details of volume status and volume condition. As a prerequisite, external-health-monitor sidecar section should be uncommented in samples which would install the sidecar | No | false |
- | ***Node parameters*** |
- | X_CSI_MAX_VOLUMES_PER_NODE | Specify the default value for the maximum number of volumes that the controller can publish to the node | Yes | 0 |
- | X_CSI_MODE | Driver starting mode | No | node |
- | X_CSI_HEALTH_MONITOR_ENABLED | Enable/Disable health monitor of CSI volumes from node plugin. Provides details of volume usage | No | false |
- | ***Side car parameters*** |
- | leader-election-lease-duration | Duration, that non-leader candidates will wait to force acquire leadership | No | 20s |
- | leader-election-renew-deadline | Duration, that the acting leader will retry refreshing leadership before giving up | No | 15s |
- | leader-election-retry-period | Duration, the LeaderElector clients should wait between tries of actions | No | 5s |
-
-6. Execute the following command to create PowerScale custom resource:
- ```bash
- kubectl create -f <input_sample_file.yaml>
- ```
- This command will deploy the CSI-PowerScale driver in the namespace specified in the input YAML file.
-
-**Note**:
- 1. From CSI-PowerScale v1.6.0 and higher, the StorageClass and VolumeSnapshotClass will **not** be created as part of driver deployment. The user has to create the StorageClass and VolumeSnapshotClass (a sketch is shown below).
- 2. "Kubelet config dir path" is not yet configurable for Operator-based driver installation.
- 3. Also, the snapshotter and resizer sidecars are not optional; they are installed by default with the driver.
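-
-For reference, a minimal sketch of a PowerScale StorageClass is shown below; the provisioner name matches the driver, while the metadata name and parameter values are illustrative and must be adjusted to your array:
-```bash
-kubectl apply -f - <<EOF
-apiVersion: storage.k8s.io/v1
-kind: StorageClass
-metadata:
-  name: isilon
-provisioner: csi-isilon.dellemc.com
-reclaimPolicy: Delete
-allowVolumeExpansion: true
-parameters:
-  # AccessZone and IsiPath are example values; use your cluster's settings
-  AccessZone: System
-  IsiPath: /ifs/data/csi
-EOF
-```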
-
-## Volume Health Monitoring
-This feature was introduced in CSI Driver for PowerScale version 2.1.0.
-
-### Operator based installation
-
-The Volume Health Monitoring feature is optional and, by default, is disabled for drivers installed via the operator.
-To enable this feature, add the block below to the driver manifest before installing the driver; this installs the external health monitor sidecar. To get the volume health state, the `value` under controller should be set to true, as seen below. To get the volume stats, the `value` under node should be set to true.
-
- ```yaml
- # Uncomment the following to install 'external-health-monitor' sidecar to enable health monitor of CSI volumes from Controller plugin.
- # Also set the env variable controller.envs.X_CSI_HEALTH_MONITOR_ENABLED to "true".
- # - name: external-health-monitor
- # args: ["--monitor-interval=60s"]
-
- # Install the 'external-health-monitor' sidecar accordingly.
- # Allowed values:
- # true: enable checking of health condition of CSI volumes
- # false: disable checking of health condition of CSI volumes
- # Default value: false
- controller:
- envs:
- - name: X_CSI_HEALTH_MONITOR_ENABLED
- value: "true"
- node:
- envs:
- # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from node plugin - volume usage
- # Allowed values:
- # true: enable checking of health condition of CSI volumes
- # false: disable checking of health condition of CSI volumes
- # Default value: false
- - name: X_CSI_HEALTH_MONITOR_ENABLED
- value: "true"
- ```
diff --git a/content/v1/csidriver/installation/operator/non-olm-1.jpg b/content/v1/csidriver/installation/operator/non-olm-1.jpg
deleted file mode 100644
index 3cc966646a..0000000000
Binary files a/content/v1/csidriver/installation/operator/non-olm-1.jpg and /dev/null differ
diff --git a/content/v1/csidriver/installation/operator/non-olm-2.jpg b/content/v1/csidriver/installation/operator/non-olm-2.jpg
deleted file mode 100644
index 404060afa3..0000000000
Binary files a/content/v1/csidriver/installation/operator/non-olm-2.jpg and /dev/null differ
diff --git a/content/v1/csidriver/installation/operator/operator_migration.md b/content/v1/csidriver/installation/operator/operator_migration.md
index 5a05bbf640..3dc33a07f1 100644
--- a/content/v1/csidriver/installation/operator/operator_migration.md
+++ b/content/v1/csidriver/installation/operator/operator_migration.md
@@ -9,10 +9,10 @@ description: >
{{
}}
>NOTE: Sample files refer to the latest version for each platform. If you do not want to upgrade, please find your preferred version in the [csm-operator repository](https://github.com/dell/csm-operator/blob/main/samples).
@@ -30,8 +30,8 @@ description: >
kubectl -n openshift-operators get CSIUnity/test-unity -o yaml
```
2. Map and update the settings from the CR in step 1 to the relevant CSM Operator CR
- - As the yaml content may differ, ensure the values held in the step 1 CR backup are present in the new CR before installing the new driver
- - Ex: spec.driver.fsGroupPolicy in [PowerMax 2.6 for CSI Operator](https://github.com/dell/dell-csi-operator/blob/main/samples/powermax_v260_k8s_126.yaml#L17C5-L17C18) maps to spec.driver.csiDriverSpec.fSGroupPolicy in [PowerMax 2.6 for CSM Operator](https://github.com/dell/csm-operator/blob/main/samples/storage_csm_powermax_v260.yaml#L28C7-L28C20)
+ - As the yaml content may differ, ensure the values held in the step 1 CR backup are present in the new CR before installing the new driver. CR Samples table provided above can be used to compare and map the differences in attributes between Dell CSI Operator and CSM Operator CRs
+ - Ex: spec.driver.fsGroupPolicy in [CSI Operator](https://github.com/dell/dell-csi-operator/blob/main/samples/) maps to spec.driver.csiDriverSpec.fSGroupPolicy in [CSM Operator](https://github.com/dell/csm-operator/blob/main/samples/)
3. Retain (or do not delete) the secret, namespace, storage classes, and volume snapshot classes from the original deployment as they will be re-used in the CSM operator deployment
4. Uninstall the CR from the CSI Operator
```
@@ -69,9 +69,9 @@ description: >
- Select *Create instance* under the provided Container Storage Module API
- Use the CR backup from step 1 to manually map desired settings to the new CSI driver
- As the yaml content may differ, ensure the values held in the step 1 CR backup are present in the new CR before installing the new driver
- - Ex: spec.driver.fsGroupPolicy in [PowerMax 2.6 for CSI Operator](https://github.com/dell/dell-csi-operator/blob/main/samples/powermax_v260_k8s_126.yaml#L17C5-L17C18) maps to spec.driver.csiDriverSpec.fSGroupPolicy in [PowerMax 2.6 for CSM Operator](https://github.com/dell/csm-operator/blob/main/samples/storage_csm_powermax_v260.yaml#L28C7-L28C20)
+ - Ex: spec.driver.fsGroupPolicy in [PowerMax 2.7 for CSI Operator](https://github.com/dell/dell-csi-operator/blob/main/samples/powermax_v270_k8s_127.yaml#L17C5-L17C18) maps to spec.driver.csiDriverSpec.fSGroupPolicy in [PowerMax 2.7 for CSM Operator](https://github.com/dell/csm-operator/blob/main/samples/storage_csm_powermax_v270.yaml#L28C7-L28C20)
>NOTE: Uninstallation of the driver and the Operator is non-disruptive for mounted volumes. Nonetheless, you cannot create new volumes or snapshots, or move a Pod.
## Testing
-To test that the new installation is working, please follow the steps outlined [here](../../test) for your specific driver.
\ No newline at end of file
+To test that the new installation is working, please follow the steps outlined [here](../../test) for your specific driver.
diff --git a/content/v1/csidriver/installation/operator/powerflex.md b/content/v1/csidriver/installation/operator/powerflex.md
deleted file mode 100644
index a4de8f46e6..0000000000
--- a/content/v1/csidriver/installation/operator/powerflex.md
+++ /dev/null
@@ -1,295 +0,0 @@
----
-title: PowerFlex
-description: >
- Installing CSI Driver for PowerFlex via Operator
----
-{{% pageinfo color="primary" %}}
-The Dell CSI Operator is no longer actively maintained or supported. Dell CSI Operator has been replaced with [Dell CSM Operator](https://dell.github.io/csm-docs/docs/deployment/csmoperator/). If you are currently using Dell CSI Operator, refer to the [operator migration documentation](https://dell.github.io/csm-docs/docs/csidriver/installation/operator/operator_migration/) to migrate from Dell CSI Operator to Dell CSM Operator.
-CSM 1.7.1 is applicable to Helm-based installations of the PowerFlex driver.
-{{% /pageinfo %}}
-
-
-## Installing CSI Driver for PowerFlex via Operator
-
-The CSI Driver for Dell PowerFlex can be installed via the Dell CSI Operator.
-
-To deploy the Operator, follow the instructions available [here](../).
-
-There are sample manifests provided which can be edited to do an easy installation of the driver. Note that the deployment of the driver using the operator does not use any Helm charts. The installation and configuration parameters will be slightly different from the ones specified via the Helm installer.
-
-Kubernetes Operators make it easy to deploy and manage the entire lifecycle of complex Kubernetes applications. Operators use Custom Resource Definitions (CRDs) that represent the application and use custom controllers to manage them.
-
-### Prerequisites:
-- If multipath is configured, ensure CSI-PowerFlex volumes are blacklisted by multipathd. See the [troubleshooting section](../../../troubleshooting/powerflex) for details.
-#### SDC Deployment for Operator
-- This feature deploys the SDC kernel modules on all nodes with the help of an init container.
-- For OS versions that are not supported, also perform the manual SDC deployment steps given below. Refer to https://hub.docker.com/r/dellemc/sdc for supported versions.
-- **Note:** When the driver is created, MDM value for initContainers in driver CR is set by the operator from mdm attributes in the driver configuration file,
- secret.yaml. An example of secret.yaml is below in this document. Do not set MDM value for initContainers in the driver CR file manually.
-- **Note:** To use an sdc-binary module from the customer FTP site:
-  - Create a secret, sdc-repo-secret.yaml, to contain the credentials for the private repo. To generate the base64 encoding of a credential:
- ```bash
-    echo -n <credential> | base64 -i
- ```
- secret sample to use:
- ```yaml
- apiVersion: v1
- kind: Secret
- metadata:
- name: sdc-repo-creds
- namespace: vxflexos
- type: Opaque
- data:
- # set username to the base64 encoded username, sdc default is
-      username: <base64-encoded-username>
- # set password to the base64 encoded password, sdc default is
-      password: <base64-encoded-password>
- ```
-  - Create the secret for the FTP site by using the command `kubectl create -f sdc-repo-secret.yaml`.
-  - Optionally, enable the sdc monitor by uncommenting the sidecar section in the manifest yaml. Please note the following:
-    - **If using the sidecar**, edit the value fields under HOST_PID and MDM by filling the empty quotes with the host PID and the MDM IPs.
-    - **If not using the sidecar**, leave it commented out -- otherwise, the empty fields will cause errors.
-##### Example CR: [config/samples/vxflex_v270_ops_412.yaml](https://github.com/dell/dell-csi-operator/blob/main/samples/vxflex_v270_ops_411.yaml)
-```yaml
- sideCars:
- # Comment the following section if you don't want to run the monitoring sidecar
- - name: sdc-monitor
- envs:
- - name: HOST_PID
- value: "1"
- - name: MDM
- value: ""
- - name: external-health-monitor
- args: ["--monitor-interval=60s"]
- initContainers:
- - image: dellemc/sdc:3.6
- imagePullPolicy: IfNotPresent
- name: sdc
- envs:
- - name: MDM
- value: "10.x.x.x,10.x.x.x"
- ```
- *Note:* Please comment out the sdc-monitor sidecar section if you are not using it. Blank values for MDM will result in errors. Do not comment out the external-health-monitor argument.
-
-### Manual SDC Deployment
-
-For detailed PowerFlex installation procedure, see the _Dell PowerFlex Deployment Guide_. Install the PowerFlex SDC as follows:
-
-**Steps**
-
-1. Download the PowerFlex SDC from [Dell Online support](https://www.dell.com/support). The filename is EMC-ScaleIO-sdc-*.rpm, where * is the SDC name corresponding to the PowerFlex installation version.
-2. Export the shell variable _MDM_IP_ in a comma-separated list using `export MDM_IP=xx.xxx.xx.xx,xx.xxx.xx.xx`, where xxx represents the actual IP address in your environment. This list contains the IP addresses of the MDMs.
-3. Install the SDC per the _Dell PowerFlex Deployment Guide_:
- - For Red Hat Enterprise Linux and CentOS, run `rpm -iv ./EMC-ScaleIO-sdc-*.x86_64.rpm`, where * is the SDC name corresponding to the PowerFlex installation version.
-4. To add more MDM_IPs for multi-array support, run `/opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip 10.xx.xx.xx,10.xx.xx.xx`
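-
-To verify the SDC configuration afterwards, the same utility can list the registered MDMs:
-```bash
-/opt/emc/scaleio/sdc/bin/drv_cfg --query_mdms
-```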
-
-### Install Driver
-
-1. Create namespace:
- Run the `kubectl create namespace <driver-namespace>` command using the desired name to create the namespace.
-2. Prepare the secret.yaml for driver configuration.
-
- Example: secret.yaml
-
- ```yaml
- # Username for accessing PowerFlex system.
- # Required: true
- - username: "admin"
- # Password for accessing PowerFlex system.
- # Required: true
- password: "password"
- # System name/ID of PowerFlex system.
- # Required: true
- systemID: "ID1"
- # REST API gateway HTTPS endpoint/PowerFlex Manager public IP for PowerFlex system.
- # Required: true
- endpoint: "https://127.0.0.1"
- # Determines if the driver is going to validate certs while connecting to PowerFlex REST API interface.
- # Allowed values: true or false
- # Required: true
- # Default value: true
- skipCertificateValidation: true
- # indicates if this array is the default array
- # needed for backwards compatibility
- # only one array is allowed to have this set to true
- # Required: false
- # Default value: false
- isDefault: true
- # defines the MDM(s) that SDC should register with on start.
- # Allowed values: a list of IP addresses or hostnames separated by comma.
- # Required: true
- # Default value: none
- mdm: "10.0.0.1,10.0.0.2"
- # Defines all system names used to create powerflex volumes
- # Required: false
- # Default value: none
- AllSystemNames: "name1,name2"
- - username: "admin"
- password: "Password123"
- systemID: "ID2"
- endpoint: "https://127.0.0.2"
- skipCertificateValidation: true
- mdm: "10.0.0.3,10.0.0.4"
- AllSystemNames: "name1,name2"
- ```
-
- After editing the file, run the following command to create a secret called `vxflexos-config`
- ```bash
- kubectl create secret generic vxflexos-config -n --from-file=config=secret.yaml
- ```
-
- Use the following command to replace or update the secret:
-
- ```bash
- kubectl create secret generic vxflexos-config -n --from-file=config=secret.yaml -o yaml --dry-run=client | kubectl replace -f -
- ```
-
- *Note:*
-
-  - System ID, MDM configuration, etc. are now taken directly from secret.yaml. Any MDM value provided in input_sample_file.yaml will be overridden by the MDM values in secret.yaml.
-  - Please still provide MDM values in input_sample_file.yaml; they will be overridden as described above.
-
-3. Create a Custom Resource (CR) for PowerFlex using the sample files provided [here](https://github.com/dell/dell-csi-operator/tree/master/samples).
-4. Users should configure the parameters in the CR. The following table lists the primary configurable parameters of the PowerFlex driver and their default values:
-
- | Parameter | Description | Required | Default |
- | --------- | ----------- | -------- |-------- |
- | replicas | Controls the number of controller pods you deploy. If the number of controller pods is greater than the number of available nodes, excess pods will remain in a pending state. The default is 2, which allows for controller high availability. | Yes | 2 |
- | fsGroupPolicy | Defines which FS Group policy mode to be used, Supported modes `None, File and ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
- | ***Common parameters for node and controller*** |
- | X_CSI_VXFLEXOS_ENABLELISTVOLUMESNAPSHOT | Enable list volume operation to include snapshots (since creating a volume from a snap actually results in a new snap) | No | false |
- | X_CSI_VXFLEXOS_ENABLESNAPSHOTCGDELETE | Enable this to automatically delete all snapshots in a consistency group when a snap in the group is deleted | No | false |
- | X_CSI_DEBUG | To enable debug mode | No | true |
- | X_CSI_ALLOW_RWO_MULTI_POD_ACCESS | Setting allowRWOMultiPodAccess to "true" will allow multiple pods on the same node to access the same RWO volume. This behavior conflicts with the CSI specification version 1.3. NodePublishVolume description that requires an error to be returned in this case. However, some other CSI drivers support this behavior and some customers desire this behavior. Customers use this option at their own risk. | No | false |
-5. Execute the `kubectl create -f <input_sample_file.yaml>` command to create the PowerFlex custom resource. This command will deploy the CSI-PowerFlex driver.
- - Example CR for PowerFlex Driver
- ```yaml
- apiVersion: storage.dell.com/v1
- kind: CSIVXFlexOS
- metadata:
- name: test-vxflexos
- namespace: test-vxflexos
- spec:
- driver:
- configVersion: v2.7.0
- replicas: 1
- dnsPolicy: ClusterFirstWithHostNet
- forceUpdate: false
- fsGroupPolicy: File
- common:
- image: "dellemc/csi-vxflexos:v2.7.0"
- imagePullPolicy: IfNotPresent
- envs:
- - name: X_CSI_VXFLEXOS_ENABLELISTVOLUMESNAPSHOT
- value: "false"
- - name: X_CSI_VXFLEXOS_ENABLESNAPSHOTCGDELETE
- value: "false"
- - name: X_CSI_DEBUG
- value: "true"
- - name: X_CSI_ALLOW_RWO_MULTI_POD_ACCESS
- value: "false"
- sideCars:
- # comment the following section if you don't want to run the monitoring sidecar
- - name: sdc-monitor
- envs:
- - name: HOST_PID
- value: "1"
- - name: MDM
- value: ""
-
- # Uncomment the following to install 'external-health-monitor' sidecar to enable health monitor of CSI volumes from Controller plugin.
- # Also set the env variable controller.envs.X_CSI_HEALTH_MONITOR_ENABLED to "true".
- # - name: external-health-monitor
- # args: ["--monitor-interval=60s"]
-
- controller:
- envs:
- # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from Controller plugin - volume condition.
- # Install the 'external-health-monitor' sidecar accordingly.
- # Allowed values:
- # true: enable checking of health condition of CSI volumes
- # false: disable checking of health condition of CSI volumes
- # Default value: false
- - name: X_CSI_HEALTH_MONITOR_ENABLED
- value: "false"
-
- node:
- envs:
- # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from node plugin - volume usage
- # Allowed values:
- # true: enable checking of health condition of CSI volumes
- # false: disable checking of health condition of CSI volumes
- # Default value: false
- - name: X_CSI_HEALTH_MONITOR_ENABLED
- value: "false"
-
- # X_CSI_MAX_VOLUMES_PER_NODE: Defines the maximum PowerFlex volumes that can be created per node
- # Allowed values: Any value greater than or equal to 0
- # If value is 0 then the orchestrator decides how many volumes can be published by the controller to
- # the node
- # Default value: "0"
- - name: X_CSI_MAX_VOLUMES_PER_NODE
- value: "0"
-
- initContainers:
- - image: dellemc/sdc:3.6.1
- imagePullPolicy: IfNotPresent
- name: sdc
- envs:
- - name: MDM
- value: "10.xx.xx.xx,10.xx.xx.xx" #provide MDM value
-
- ---
- apiVersion: v1
- kind: ConfigMap
- metadata:
- name: vxflexos-config-params
- namespace: test-vxflexos
- data:
- driver-config-params.yaml: |
- CSI_LOG_LEVEL: "debug"
- CSI_LOG_FORMAT: "TEXT"
- ```
- ### Pre-Requisite for installation with OLM
- Please run the following commands to create the required ConfigMap before installing the dell-csi-operator using OLM.
- ```bash
- # Replace <operator-namespace> in the command below with the actual namespace where the operator will be deployed by OLM
- git clone https://github.com/dell/dell-csi-operator.git
- cd dell-csi-operator
- tar -czf config.tar.gz driverconfig/
- kubectl create configmap dell-csi-operator-config --from-file config.tar.gz -n <operator-namespace>
- ```
-
-## Volume Health Monitoring
- The Volume Health Monitoring feature is optional and, by default, is disabled for drivers installed via the operator.
-
- To enable this feature, add the block below to the driver manifest before installing the driver; this installs the external
- health monitor sidecar. To get the volume health state, the value under controller should be set to true, as seen below. To
- get the volume stats, the value under node should be set to true.
-
- ```yaml
- # Uncomment the following to install 'external-health-monitor' sidecar to enable health monitor of CSI volumes from Controller plugin.
- # Also set the env variable controller.envs.X_CSI_HEALTH_MONITOR_ENABLED to "true".
- # - name: external-health-monitor
- # args: ["--monitor-interval=60s"]
-
- # Install the 'external-health-monitor' sidecar accordingly.
- # Allowed values:
- # true: enable checking of health condition of CSI volumes
- # false: disable checking of health condition of CSI volumes
- # Default value: false
- controller:
- envs:
- - name: X_CSI_HEALTH_MONITOR_ENABLED
- value: "true"
- node:
- envs:
- # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from node plugin - volume usage
- # Allowed values:
- # true: enable checking of health condition of CSI volumes
- # false: disable checking of health condition of CSI volumes
- # Default value: false
- - name: X_CSI_HEALTH_MONITOR_ENABLED
- value: "true"
- ```
-
diff --git a/content/v1/csidriver/installation/operator/powermax.md b/content/v1/csidriver/installation/operator/powermax.md
deleted file mode 100644
index bba41cf45e..0000000000
--- a/content/v1/csidriver/installation/operator/powermax.md
+++ /dev/null
@@ -1,440 +0,0 @@
----
-title: PowerMax
-description: >
- Installing CSI Driver for PowerMax via Operator
----
-{{% pageinfo color="primary" %}}
-The Dell CSI Operator is no longer actively maintained or supported. Dell CSI Operator has been replaced with [Dell CSM Operator](https://dell.github.io/csm-docs/docs/deployment/csmoperator/). If you are currently using Dell CSI Operator, refer to the [operator migration documentation](https://dell.github.io/csm-docs/docs/csidriver/installation/operator/operator_migration/) to migrate from Dell CSI Operator to Dell CSM Operator.
-
-{{% /pageinfo %}}
-{{% pageinfo color="primary" %}} Linked Proxy mode for CSI reverse proxy is no longer actively maintained or supported. It will be deprecated in CSM 1.9. It is highly recommended that you use stand alone mode going forward. {{% /pageinfo %}}
-
-## Installing CSI Driver for PowerMax via Operator
-
-CSI Driver for Dell PowerMax can be installed via the Dell CSI Operator.
-
-To deploy the Operator, follow the instructions available [here](../).
-
-There are sample manifests provided which can be edited to do an easy installation of the driver. Please note that the deployment of the driver using the operator does not use any Helm charts and the installation and configuration parameters will be slightly different from the ones specified via the Helm installer.
-
-Kubernetes Operators make it easy to deploy and manage the entire lifecycle of complex Kubernetes applications. Operators use Custom Resource Definitions (CRDs) that represent the application and use custom controllers to manage them.
-
-### Prerequisite
-
-#### Fibre Channel Requirements
-
-CSI Driver for Dell PowerMax supports Fibre Channel communication. Ensure that the following requirements are met before you install CSI Driver:
-- Zoning of the Host Bus Adapters (HBAs) to the Fibre Channel port director must be completed.
-- Ensure that the HBA WWNs (initiators) appear on the list of initiators that are logged into the array.
-- If the number of volumes that will be published to nodes is high, then configure the maximum number of LUNs for your HBAs on each node. See the appropriate HBA document to configure the maximum number of LUNs.
-
-#### iSCSI Requirements
-
-The CSI Driver for Dell PowerMax supports iSCSI connectivity. These requirements are applicable for the nodes that use iSCSI initiator to connect to the PowerMax arrays.
-
-Set up the iSCSI initiators as follows:
-- All Kubernetes nodes must have the _iscsi-initiator-utils_ package installed.
-- Ensure that the iSCSI initiators are available on all the nodes where the driver node plugin will be installed.
-- Kubernetes nodes should have access (network connectivity) to an iSCSI director on the Dell PowerMax array that has IP interfaces. Manually create IP routes for each node that connects to the Dell PowerMax if required.
-- Ensure that the iSCSI initiators on the nodes are not a part of any existing Host (Initiator Group) on the Dell PowerMax array.
-- The CSI Driver needs the port group names containing the required iSCSI director ports. These port groups must be set up on each Dell PowerMax array. All the port group names supplied to the driver must exist on each Dell PowerMax with the same name.
-
-For more information about configuring iSCSI, see [Dell Host Connectivity guide](https://www.delltechnologies.com/asset/zh-tw/products/storage/technical-support/docu5128.pdf).
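-
-To spot-check a node against these requirements, a sketch for RHEL-family worker nodes:
-```bash
-# Confirm the initiator utilities are installed and an IQN is configured
-rpm -q iscsi-initiator-utils
-cat /etc/iscsi/initiatorname.iscsi
-sudo systemctl status iscsid
-```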
-
-#### Auto RDM for vSphere over FC requirements
-
-The CSI Driver for Dell PowerMax supports auto RDM for vSphere over FC. These requirements are applicable for clusters deployed on ESX/ESXi in a virtualized environment.
-
-Set up the environment as follows:
-
-- Requires VMware vCenter management software to manage all ESX/ESXis where the cluster is hosted.
-
-- Add all FC array ports zoned to the ESX/ESXis to a port group where the cluster is hosted.
-
-- Add initiators from all ESX/ESXis to a host (initiator group)/host group (cascaded initiator group) where the cluster is hosted.
-- Create a secret which contains vCenter privileges. Follow the steps [here](#support-for-auto-rdm-for-vsphere-over-fc) to create the same.
-
->Note: Host group support in a vSphere environment is only available with the csm-operator.
-
-#### Linux multipathing requirements
-
-CSI Driver for Dell PowerMax supports Linux multipathing. Configure Linux multipathing before installing the CSI Driver.
-
-Set up Linux multipathing as follows:
-
-- All the nodes must have the _Device Mapper Multipathing_ package installed.
-  *NOTE:* When this package is installed, it creates a multipath configuration file located at `/etc/multipath.conf`. Please ensure that this file always exists.
-- Enable multipathing using `mpathconf --enable --with_multipathd y`
-- Enable `user_friendly_names` and `find_multipaths` in the `multipath.conf` file.
-
-As a best practice, use these options to help the operating system and the multipathing software detect path changes efficiently:
-```text
-path_grouping_policy multibus
-path_checker tur
-features "1 queue_if_no_path"
-path_selector "round-robin 0"
-no_path_retry 10
-```
-
-#### PowerPath for Linux requirements
-
-CSI Driver for Dell PowerMax supports PowerPath for Linux. Configure Linux PowerPath before installing the CSI Driver.
-
-Follow this procedure to set up PowerPath for Linux:
-
-- All the nodes must have the PowerPath package installed. Download the PowerPath archive for the environment from [Dell Online Support](https://www.dell.com/support/home/en-in/product-support/product/powerpath-for-linux/drivers).
-- Untar the PowerPath archive, copy the RPM package into a temporary folder, and install PowerPath using `rpm -ivh DellEMCPower.LINUX--..x86_64.rpm`
-- Start the PowerPath service using `systemctl start PowerPath`
-
->Note: Do not install Dell PowerPath if native multipath software is already installed, as the two cannot co-exist.
-
-#### Create secret for client-side TLS verification (Optional)
-Create a secret named powermax-certs in the namespace where the CSI PowerMax driver will be installed. This is an optional step and is only required if you are setting the env variable X_CSI_POWERMAX_SKIP_CERTIFICATE_VALIDATION to false. See the detailed documentation on how to create this secret [here](../../helm/powermax#certificate-validation-for-unisphere-rest-api-calls).
-
-
-### Install Driver
-
-1. Create namespace:
- Run `kubectl create namespace <driver-namespace>` using the desired name to create the namespace.
-2. Create PowerMax credentials:
- Create a file called powermax-creds.yaml with the following content:
- ```yaml
- apiVersion: v1
- kind: Secret
- metadata:
- name: powermax-creds
- # Replace driver-namespace with the namespace where driver is being deployed
- namespace: <driver-namespace>
- type: Opaque
- data:
- # set username to the base64 encoded username
- username: <base64-encoded-username>
- # set password to the base64 encoded password
- password: <base64-encoded-password>
- # Uncomment the following key if you wish to use ISCSI CHAP authentication (v1.3.0 onwards)
- # chapsecret:
- ```
- Replace the values for the username and password parameters. These values can be obtained using base64 encoding as described in the following example:
- ```bash
- echo -n "myusername" | base64
- echo -n "mypassword" | base64
- # If mychapsecret is the ISCSI CHAP secret
- echo -n "mychapsecret" | base64
-
- ```
- Run the `kubectl create -f powermax-creds.yaml` command to create the secret.
-3. Create a Custom Resource (CR) for PowerMax using the sample files provided [here](https://github.com/dell/dell-csi-operator/tree/master/samples).
-4. Users should configure the parameters in the CR. The following table lists the primary configurable parameters of the PowerMax driver and their default values:
-
- | Parameter | Description | Required | Default |
- | --------- | ----------- | -------- |-------- |
- | replicas | Controls the number of controller Pods you deploy. If the number of controller Pods is greater than the number of available nodes, excess Pods will become stuck in a pending state. The default is 2, which allows for Controller high availability. | Yes | 2 |
- | fsGroupPolicy | Defines which FS Group policy mode to be used, Supported modes `None, File and ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
- | storageCapacity | Helps the scheduler to schedule the pod on a node satisfying the topology constraints, only if the requested capacity is available on the storage array | - | true |
- | ***Common parameters for node and controller*** |
- | X_CSI_K8S_CLUSTER_PREFIX | Define a prefix that is appended to all resources created in the array; unique per K8s/CSI deployment; max length - 3 characters | Yes | XYZ |
- | X_CSI_POWERMAX_ENDPOINT | IP address of the Unisphere for PowerMax | Yes | https://0.0.0.0:8443 |
- | X_CSI_TRANSPORT_PROTOCOL | Choose which transport protocol to use (ISCSI, FC, auto or None) | Yes | auto |
- | X_CSI_POWERMAX_PORTGROUPS |List of comma-separated port groups (ISCSI only). Example: "PortGroup1,PortGroup2" | No | - |
- | X_CSI_MANAGED_ARRAYS | List of comma-separated array ID(s) which will be managed by the driver | Yes | - |
- | X_CSI_POWERMAX_PROXY_SERVICE_NAME | Name of CSI PowerMax ReverseProxy service. | Yes | powermax-reverseproxy |
- | X_CSI_GRPC_MAX_THREADS | Number of concurrent grpc requests allowed per client | No | 4 |
-   | X_CSI_IG_MODIFY_HOSTNAME | Change any existing host names. When nodenametemplate is set, it changes the name to the specified format; otherwise it uses the driver's default host name format. | No | false |
- | X_CSI_IG_NODENAME_TEMPLATE | Provide a template for the CSI driver to use while creating the Host/IG on the array for the nodes in the cluster. It is of the format a-b-c-%foo%-xyz where foo will be replaced by host name of each node in the cluster. | No | - |
- | X_CSI_POWERMAX_DRIVER_NAME | Set custom CSI driver name. For more details on this feature see the related [documentation](../../../features/powermax/#custom-driver-name) | No | - |
- | X_CSI_HEALTH_MONITOR_ENABLED | Enable/Disable health monitor of CSI volumes from Controller and Node plugin. Provides details of volume status, usage and volume condition. As a prerequisite, external-health-monitor sidecar section should be uncommented in samples which would install the sidecar | No | false |
- | X_CSI_VSPHERE_ENABLED | Enable VMware virtualized environment support via RDM | No | false |
- | X_CSI_VSPHERE_PORTGROUP | Existing portGroup that driver will use for vSphere | Yes | "" |
- | X_CSI_VSPHERE_HOSTNAME | Existing host(initiator group)/host group(cascaded initiator group) that driver will use for vSphere | Yes | "" |
- | X_CSI_VCenter_HOST | URL/endpoint of the vCenter where all the ESX are present | Yes | "" |
- | ***Node parameters***|
- | X_CSI_POWERMAX_ISCSI_ENABLE_CHAP | Enable ISCSI CHAP authentication. For more details on this feature see the related [documentation](../../../features/powermax/#iscsi-chap) | No | false |
-   | X_CSI_TOPOLOGY_CONTROL_ENABLED | Enable/Disable topology control. It filters out the arrays and associated transport protocols available to each node and creates topology keys based on such user input. | No | false |
- | X_CSI_MAX_VOLUMES_PER_NODE | Enable volume limits. It specifies the maximum number of volumes that can be created on a node. | Yes | 0 |
-
-5. Execute the following command to create the PowerMax custom resource: `kubectl create -f `. This command deploys the CSI PowerMax driver.
-
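-To verify the deployment, check that the driver pods are up and running (the namespace placeholder below is the namespace created in step 1):
-```bash
-kubectl get pods -n <driver-namespace>
-```
-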
-**Note** - If the CSI driver is being installed using the OCP UI, create these two ConfigMaps manually using the command `oc create -f `
-1. ConfigMap named powermax-config-params
- ```yaml
- apiVersion: v1
- kind: ConfigMap
- metadata:
- name: powermax-config-params
- namespace: test-powermax
- data:
- driver-config-params.yaml: |
- CSI_LOG_LEVEL: "debug"
- CSI_LOG_FORMAT: "JSON"
- ```
- 2. ConfigMap named node-topology-config
- ```yaml
- apiVersion: v1
- kind: ConfigMap
- metadata:
- name: node-topology-config
- namespace: test-powermax
- data:
- topologyConfig.yaml: |
- allowedConnections:
- - nodeName: "node1"
- rules:
- - "000000000001:FC"
- - "000000000002:FC"
- - nodeName: "*"
- rules:
- - "000000000002:FC"
- deniedConnections:
- - nodeName: "node2"
- rules:
- - "000000000002:*"
- - nodeName: "node3"
- rules:
- - "*:*"
-
- ```
-
-
-
-### CSI PowerMax ReverseProxy
-
-CSI PowerMax ReverseProxy is a component that is installed along with the CSI PowerMax driver. For more details on this feature see the related [documentation](../../../features/powermax#csi-powermax-reverse-proxy).
-
-A Deployment and a ClusterIP Service will be created by the dell-csi-operator.
-
-#### Pre-requisites
-Create a TLS secret that holds an SSL certificate and a private key which is required by the reverse proxy server.
-Use a tool such as `openssl` to generate this secret using the example below:
-
-```bash
-openssl genrsa -out tls.key 2048
-openssl req -new -x509 -sha256 -key tls.key -out tls.crt -days 3650
-kubectl create secret -n tls revproxy-certs --cert=tls.crt --key=tls.key
-kubectl create secret -n tls csirevproxy-tls-secret --cert=tls.crt --key=tls.key
-```
-
-#### Set the following parameters in the CSI PowerMaxReverseProxy Spec
-* **tlsSecret** : Provide the name of the TLS secret. If using the above example, it should be set to `revproxy-certs`
-* **config** : This section contains the details of the Reverse Proxy configuration
-* **mode** : This value is set to `Linked` by default. Do not change this value
-* **linkConfig** : This section contains the configuration of the `Linked` mode
-* **primary** : This section holds details for the primary Unisphere which the Reverse Proxy will connect to
-* **backup** : This optional section holds details for a backup Unisphere which the Reverse Proxy can connect to if the primary Unisphere is unreachable
-* **url** : URL of the Unisphere server
-* **skipCertificateValidation**: This setting determines if the client-side Unisphere certificate validation is required
-* **certSecret**: Name of the secret that holds the CA certificates used to sign the Unisphere SSL certificate. Mandatory if skipCertificateValidation is set to `false`
-* **standAloneConfig** : This section contains the configuration of the `StandAlone` mode. Refer to the sample below for the detailed config
-
->Note: Only one of the `Linked` or `StandAlone` configurations needs to be supplied. The appropriate `mode` needs to be set in the spec as well.
-
-Here is a sample manifest with each field annotated. A copy of this manifest is provided in the `samples` folder.
-```yaml
-apiVersion: storage.dell.com/v1
-kind: CSIPowerMaxRevProxy
-metadata:
- name: powermax-reverseproxy # <- Name of the CSIPowerMaxRevProxy object
- namespace: test-powermax # <- Set the namespace to where you will install the CSI PowerMax driver
-spec:
- # Image for CSI PowerMax ReverseProxy
- image: dellemc/csipowermax-reverseproxy:v2.3.0 # <- CSI PowerMax Reverse Proxy image
- imagePullPolicy: Always
- # TLS secret which contains SSL certificate and private key for the Reverse Proxy server
- tlsSecret: csirevproxy-tls-secret
- config:
- mode: Linked
- linkConfig:
- primary:
- url: https://0.0.0.0:8443 #Unisphere URL
- skipCertificateValidation: true # This setting determines if client side Unisphere certificate validation is to be skipped
- certSecret: "" # Provide this value if skipCertificateValidation is set to false
- backup: # This is an optional field and lets you configure a backup unisphere which can be used by proxy server
- url: https://0.0.0.0:8443 #Unisphere URL
- skipCertificateValidation: true
- standAloneConfig: # Set mode to "StandAlone" in order to use this config
- storageArrays:
- - storageArrayId: "000000000001"
- # Unisphere server managing the PowerMax array
- primaryURL: https://unisphere-1-addr:8443
- # proxyCredentialSecrets are used by the clients of the proxy to connect to it
- # If using proxy in the stand alone mode, then the driver must be provided the same secret.
- # The format of the proxy credential secret are exactly the same as the unisphere credential secret
- # For using the proxy with the driver, use the same proxy credential secrets for
- # all the managed storage arrays
- proxyCredentialSecrets:
- - proxy-creds
- - storageArrayId: "000000000002"
- primaryURL: https://unisphere-2-addr:8443
- # An optional backup Unisphere server managing the same array
- # This can be used by the proxy to fall back to in case the primary
- # Unisphere is inaccessible temporarily
-      backupURL: https://unisphere-3-addr:8443
- proxyCredentialSecrets:
- - proxy-creds
- managementServers:
- - url: https://unisphere-1-addr:8443
- # Secret containing the credentials of the Unisphere server
-      arrayCredentialSecret: unisphere-1-creds
- skipCertificateValidation: true
- - url: https://unisphere-2-addr:8443
-      arrayCredentialSecret: unisphere-2-creds
- skipCertificateValidation: true
- - url: https://unisphere-3-addr:8443
-      arrayCredentialSecret: unisphere-3-creds
- skipCertificateValidation: true
-
-```
-
-#### Installation
-Copy the sample file `powermax_reverseproxy.yaml` from the `samples` folder, or use the sample available in the `OperatorHub` UI.
-Edit and input all required parameters, and then use the `OperatorHub` UI or run the following command to install the CSI PowerMax Reverse Proxy service:
-```bash
- kubectl create -f powermax_reverseproxy.yaml
-```
-You can query for the deployment and service created as part of the installation using the following commands:
-```bash
- kubectl get deployment -n
- kubectl get svc -n
-```
-There is a new sample file - `powermax_revproxy_standalone_with_driver.yaml` in the `samples` folder which enables installation of
-CSI PowerMax ReverseProxy in `StandAlone` mode along with the CSI PowerMax driver. This mode enables the CSI PowerMax driver to connect
-to multiple Unisphere servers for managing multiple PowerMax arrays. Please follow the same steps described above to install ReverseProxy
-with this new sample file.
-
-## Dynamic Logging Configuration
-
-This feature is introduced in CSI Driver for PowerMax version 2.0.0.
-
-### Operator based installation
-As part of driver installation, a ConfigMap with the name `powermax-config-params` is created using the manifest located in the sample file. This ConfigMap contains an attribute `CSI_LOG_LEVEL` which specifies the current log level of the CSI driver. To set the default/initial log level, the user can set this field during driver installation.
-
-To update the log level dynamically, edit the ConfigMap `powermax-config-params` and update `CSI_LOG_LEVEL` to the desired log level.
-```bash
-kubectl edit configmap -n powermax powermax-config-params
-```
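-
-For a non-interactive update, the same change can be made with `kubectl patch` (a sketch, assuming the `powermax` namespace used above and a target level of `info`; the patch replaces the whole `driver-config-params.yaml` string, so include every key you need):
-```bash
-kubectl patch configmap powermax-config-params -n powermax --type merge \
-  -p '{"data":{"driver-config-params.yaml":"CSI_LOG_LEVEL: \"info\"\nCSI_LOG_FORMAT: \"JSON\""}}'
-```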
-### Sample CRD file for PowerMax
-You can find the sample CRD file [here](https://github.com/dell/dell-csi-operator/blob/main/samples/powermax_v260_k8s_126.yaml)
-
->Note:
- - `Kubelet config dir path` is not yet configurable in case of Operator based driver installation.
- - The snapshotter and resizer sidecars are not optional; they are installed by default with the driver.
-
-## Volume Health Monitoring
-This feature is introduced in CSI Driver for PowerMax version 2.2.0.
-
-### Operator based installation
-The Volume Health Monitoring feature is optional and is disabled by default when the driver is installed via the operator.
-
-To enable this feature, set `X_CSI_HEALTH_MONITOR_ENABLED` to `true` in the driver manifest under the controller and node sections. Also, install the `external-health-monitor` sidecar from the `sideCars` section for the controller plugin.
-To get the volume health state, the `value` under the controller section should be set to true as shown below. To get the volume stats, the `value` under the node section should be set to true.
-```yaml
- # Install the 'external-health-monitor' sidecar accordingly.
- # Allowed values:
- # true: enable checking of health condition of CSI volumes
- # false: disable checking of health condition of CSI volumes
- # Default value: false
- controller:
- envs:
- - name: X_CSI_HEALTH_MONITOR_ENABLED
- value: "true"
- node:
- envs:
- # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from node plugin - volume usage
- # Allowed values:
- # true: enable checking of health condition of CSI volumes
- # false: disable checking of health condition of CSI volumes
- # Default value: false
- - name: X_CSI_HEALTH_MONITOR_ENABLED
- value: "true"
-```
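-
-Once enabled, abnormal volume conditions surface as Kubernetes events on the affected objects. A quick way to check (the namespace is illustrative):
-```bash
-kubectl get events -n <namespace> --field-selector involvedObject.kind=PersistentVolumeClaim
-```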
-
-## Support for custom topology keys
-
-This feature is introduced in CSI Driver for PowerMax version 2.3.0.
-
-### Operator based installation
-
-Support for custom topology keys is optional and by default this feature is disabled for drivers when installed via operator.
-
-X_CSI_TOPOLOGY_CONTROL_ENABLED provides a way to filter topology keys on a node based on array and transport protocol. If enabled, the user can create custom topology keys by editing the `node-topology-config` ConfigMap.
-
-1. To enable this feature, set `X_CSI_TOPOLOGY_CONTROL_ENABLED` to `true` in the driver manifest under node section.
-
-```yaml
- # X_CSI_TOPOLOGY_CONTROL_ENABLED provides a way to filter topology keys on a node based on array and transport protocol
- # if enabled, user can create custom topology keys by editing node-topology-config configmap.
- # Allowed values:
- # true: enable the filtration based on config map
- # false: disable the filtration based on config map
- # Default value: false
- - name: X_CSI_TOPOLOGY_CONTROL_ENABLED
- value: "false"
-```
-2. Edit the sample ConfigMap "node-topology-config" present in the [sample CRD](#sample-crd-file-for-powermax) with appropriate values:
-
-    | Parameter | Description |
-    |-----------|-------------|
-    | allowedConnections | List of node, array, and protocol info for the allowed configuration |
-    | allowedConnections.nodeName | Name of the node on which the given rules apply |
-    | allowedConnections.rules | List of StorageArrayID:TransportProtocol pairs |
-    | deniedConnections | List of node, array, and protocol info for the denied configuration |
-    | deniedConnections.nodeName | Name of the node on which the given rules apply |
-    | deniedConnections.rules | List of StorageArrayID:TransportProtocol pairs |
-
-
- >Note: Name of the configmap should always be `node-topology-config`.
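-
-   After editing, apply the ConfigMap so the driver can pick up the changes (the file name below is illustrative; the namespace comes from the manifest's metadata):
-   ```bash
-   kubectl apply -f node-topology-config.yaml
-   ```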
-
-## Support for auto RDM for vSphere over FC
-
-This feature is introduced in CSI Driver for PowerMax version 2.5.0.
-
-### Operator based installation
-Support for auto RDM for vSphere over FC is optional and is disabled by default when the driver is installed via the operator.
-
-To enable this feature, set `X_CSI_VSPHERE_ENABLED` to `true` in the driver manifest under controller and node section.
-
-```yaml
-# VMware/vSphere virtualization support
-  # set X_CSI_VSPHERE_ENABLED to true, if you want to enable VMware virtualized environment support via RDM
- # Allowed values:
- # "true" - vSphere volumes are enabled
- # "false" - vSphere volumes are disabled
- # Default value: "false"
- - name: "X_CSI_VSPHERE_ENABLED"
- value: "false"
- # X_CSI_VSPHERE_PORTGROUP: An existing portGroup that driver will use for vSphere
- # recommended format: csi-x-VC-PG, x can be anything of user choice
- # Allowed value: valid existing port group on the array
- # Default value: ""
- - name: "X_CSI_VSPHERE_PORTGROUP"
- value: ""
-  # X_CSI_VSPHERE_HOSTNAME: An existing host(initiator group)/ host group(cascaded initiator group) that driver will use for vSphere
- # this host/host group should contain initiators from all the ESXs/ESXi host where the cluster is deployed
- # recommended format: csi-x-VC-HN, x can be anything of user choice
-  # Allowed value: valid existing host(initiator group)/ host group(cascaded initiator group) on the array
- # Default value: ""
- - name: "X_CSI_VSPHERE_HOSTNAME"
- value: ""
-```
-Edit the section of the driver manifest that contains the sample for the following `Secret`, filling in the required values.
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
- name: vcenter-creds
- # Set driver namespace
- namespace: test-powermax
-type: Opaque
-data:
- # set username to the base64 encoded username
- username: YWRtaW4=
- # set password to the base64 encoded password
- password: YWRtaW4=
-```
-These values can be obtained using base64 encoding as described in the following example:
-```bash
-echo -n "myusername" | base64
-echo -n "mypassword" | base64
-```
-where *myusername* and *mypassword* are credentials for a user with vCenter privileges.
diff --git a/content/v1/csidriver/installation/operator/powerstore.md b/content/v1/csidriver/installation/operator/powerstore.md
deleted file mode 100644
index efda0d0b6f..0000000000
--- a/content/v1/csidriver/installation/operator/powerstore.md
+++ /dev/null
@@ -1,207 +0,0 @@
----
-title: PowerStore
-description: >
- Installing CSI Driver for PowerStore via Operator
----
-{{% pageinfo color="primary" %}}
-The Dell CSI Operator is no longer actively maintained or supported. Dell CSI Operator has been replaced with [Dell CSM Operator](https://dell.github.io/csm-docs/docs/deployment/csmoperator/). If you are currently using Dell CSI Operator, refer to the [operator migration documentation](https://dell.github.io/csm-docs/docs/csidriver/installation/operator/operator_migration/) to migrate from Dell CSI Operator to Dell CSM Operator.
-
-{{% /pageinfo %}}
-
-## Installing CSI Driver for PowerStore via Operator
-
-The CSI Driver for Dell PowerStore can be installed via the Dell CSI Operator.
-
-To deploy the Operator, follow the instructions available [here](../).
-
-There are sample manifests provided which can be edited to do an easy installation of the driver.
-Note: The deployment of the driver using the operator does not use any Helm charts. The installation and configuration parameters will be slightly different from the ones specified via the Helm installer.
-
-Kubernetes Operators make it easy to deploy and manage the entire lifecycle of complex Kubernetes applications. Operators use Custom Resource Definitions (CRD) which represents the application and use custom controllers to manage them.
-
-### Install Driver
-
-1. Create namespace:
-
- Run `kubectl create namespace ` using the desired name to create the namespace.
-2. Create PowerStore array connection config:
-
- Create a file called `config.yaml` with the following content
- ```yaml
- arrays:
- - endpoint: "https://10.0.0.1/api/rest" # full URL path to the PowerStore API
- globalID: "unique" # unique id of the PowerStore array
- username: "user" # username for connecting to API
- password: "password" # password for connecting to API
- skipCertificateValidation: true # indicates if client side validation of (management)server's certificate can be skipped
- isDefault: true # treat current array as a default (would be used by storage classes without arrayID parameter)
- blockProtocol: "auto" # what SCSI transport protocol use on node side (FC, ISCSI, NVMeTCP, NVMeFC, None, or auto)
- nasName: "nas-server" # what NAS should be used for NFS volumes
- nfsAcls: "0777" # (Optional) defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory.
- # NFSv4 ACls are supported for NFSv4 shares on NFSv4 enabled NAS servers only. POSIX ACLs are not supported and only POSIX mode bits are supported for NFSv3 shares.
- ```
- Change the parameters with relevant values for your PowerStore array.
- Add more blocks similar to above for each PowerStore array if necessary.
- ### User Privileges
- The username specified in `config.yaml` must be from the authentication providers of PowerStore. The user must have the correct user role to perform the actions. The minimum requirement is **Storage Operator**.
-
-3. Create Kubernetes secret:
-
-   Create a file called `secret.yaml` in the same folder as `config.yaml` with the following content:
- ```yaml
- apiVersion: v1
- kind: Secret
- metadata:
- name: powerstore-config
- namespace:
- type: Opaque
- data:
- config: CONFIG_YAML
- ```
-
-   Combine both files and create the Kubernetes secret by running the following command:
- ```bash
-
- sed "s/CONFIG_YAML/`cat config.yaml | base64 -w0`/g" secret.yaml | kubectl apply -f -
- ```
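-
-   You can verify that the secret was created and inspect the encoded config (the namespace is the one used above):
-   ```bash
-   kubectl get secret powerstore-config -n <driver-namespace> -o jsonpath='{.data.config}' | base64 -d
-   ```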
-
-4. Create a Custom Resource (CR) for PowerStore using the sample files provided [here](https://github.com/dell/dell-csi-operator/tree/master/samples).
-
-Below is a sample CR:
-
-```yaml
-apiVersion: storage.dell.com/v1
-kind: CSIPowerStore
-metadata:
- name: test-powerstore
- namespace: test-powerstore
-spec:
- driver:
- configVersion: v2.7.0
- replicas: 2
- dnsPolicy: ClusterFirstWithHostNet
- forceUpdate: false
- fsGroupPolicy: ReadWriteOnceWithFSType
- storageCapacity: true
- common:
- image: "dellemc/csi-powerstore:v2.7.0"
- imagePullPolicy: IfNotPresent
- envs:
- - name: X_CSI_POWERSTORE_NODE_NAME_PREFIX
- value: "csi"
- - name: X_CSI_FC_PORTS_FILTER_FILE_PATH
- value: "/etc/fc-ports-filter"
- sideCars:
- - name: external-health-monitor
- args: ["--monitor-interval=60s"]
- - name: provisioner
- args: ["--capacity-poll-interval=5m"]
-
- controller:
- envs:
- - name: X_CSI_HEALTH_MONITOR_ENABLED
- value: "false"
- - name: X_CSI_NFS_ACLS
- value: "0777"
- nodeSelector:
- node-role.kubernetes.io/master: ""
- tolerations:
- - key: "node-role.kubernetes.io/master"
- operator: "Exists"
- effect: "NoSchedule"
-
- node:
- envs:
- - name: "X_CSI_POWERSTORE_ENABLE_CHAP"
- value: "true"
- - name: X_CSI_HEALTH_MONITOR_ENABLED
- value: "false"
- - name: X_CSI_POWERSTORE_MAX_VOLUMES_PER_NODE
- value: "0"
- nodeSelector:
- node-role.kubernetes.io/worker: ""
-
- tolerations:
- - key: "node-role.kubernetes.io/worker"
- operator: "Exists"
- effect: "NoSchedule"
----
-apiVersion: v1
-kind: ConfigMap
-metadata:
- name: powerstore-config-params
- namespace: test-powerstore
-data:
- driver-config-params.yaml: |
- CSI_LOG_LEVEL: "debug"
- CSI_LOG_FORMAT: "JSON"
-```
-
-5. Users must configure the parameters in the CR. The following table lists the primary configurable parameters of the PowerStore driver and their default values:
-
-| Parameter | Description | Required | Default |
-| --------- | ----------- | -------- |-------- |
-| replicas | Controls the number of controller pods you deploy. If the number of controller pods is greater than the number of available nodes, the excess pods will remain in a pending state until new nodes are available for scheduling. The default is 2, which allows for Controller high availability. | Yes | 2 |
-| namespace | Specifies the namespace where the driver will be installed | Yes | "test-powerstore" |
-| fsGroupPolicy | Defines which FS Group policy mode is to be used. Supported modes: `None`, `File`, `ReadWriteOnceWithFSType` | No |"ReadWriteOnceWithFSType"|
-| storageCapacity | Enable/Disable storage capacity tracking feature | No | true |
-| ***Common parameters for node and controller*** |
-| X_CSI_POWERSTORE_NODE_NAME_PREFIX | Prefix to add to each node registered by the CSI driver | Yes | "csi-node"
-| X_CSI_FC_PORTS_FILTER_FILE_PATH | To set path to the file which provides a list of WWPN which should be used by the driver for FC connection on this node | No | "/etc/fc-ports-filter" |
-| ***Controller parameters*** |
-| X_CSI_POWERSTORE_EXTERNAL_ACCESS | allows specifying additional entries for hostAccess of NFS volumes. Both single IP address and subnet are valid entries | No | " "|
-| X_CSI_NFS_ACLS | Defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory. | No | "0777" |
-| ***Node parameters*** |
-| X_CSI_POWERSTORE_ENABLE_CHAP | Set to true if you want to enable iSCSI CHAP feature | No | false |
-| X_CSI_POWERSTORE_MAX_VOLUMES_PER_NODE | Specify the default value for the maximum number of volumes that the controller can publish to the node | No | 0 |
-
-6. Execute the following command to create the PowerStore custom resource: `kubectl create -f `. This command deploys the CSI PowerStore driver.
-   - After the driver is installed, you can check the condition of the driver pods by running `kubectl get all -n `
-
-## Volume Health Monitoring
-
-The Volume Health Monitoring feature is optional and is disabled by default when the driver is installed via the operator.
-To enable this feature, add the block below to the driver manifest before installing the driver. This ensures the external
-health monitor sidecar is installed. To get the volume health state, the value under the controller section should be set to true as
-shown below. To get the volume stats, the value under the node section should be set to true.
- ```yaml
- sideCars:
- # Uncomment the following to install 'external-health-monitor' sidecar to enable health monitor of CSI volumes from Controller plugin.
- # Also set the env variable controller.envs.X_CSI_HEALTH_MONITOR_ENABLED to "true".
- - name: external-health-monitor
- args: ["--monitor-interval=60s"]
- controller:
- envs:
- # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from Controller plugin- volume status, volume condition.
- # Install the 'external-health-monitor' sidecar accordingly.
- # Allowed values:
- # true: enable checking of health condition of CSI volumes
- # false: disable checking of health condition of CSI volumes
- # Default value: false
- - name: X_CSI_HEALTH_MONITOR_ENABLED
- value: "false"
- node:
- envs:
- # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from node plugin- volume usage, volume condition
- # Allowed values:
- # true: enable checking of health condition of CSI volumes
- # false: disable checking of health condition of CSI volumes
- # Default value: false
- - name: X_CSI_HEALTH_MONITOR_ENABLED
- value: "false"
- ```
-
-## Dynamic Logging Configuration
-
-This feature is introduced in CSI Driver for PowerStore version 2.0.0.
-
-### Operator based installation
-As part of driver installation, a ConfigMap with the name `powerstore-config-params` is created using the manifest located in the sample file. This ConfigMap contains the attributes `CSI_LOG_LEVEL`, which specifies the current log level of the CSI driver, and `CSI_LOG_FORMAT`, which specifies the current log format of the CSI driver. To set the default/initial log level and format, the user can set these fields during driver installation.
-
-To update the log level and format dynamically, edit the ConfigMap `powerstore-config-params` and update `CSI_LOG_LEVEL` to the desired log level and `CSI_LOG_FORMAT` to the desired log format.
-```bash
-kubectl edit configmap -n csi-powerstore powerstore-config-params
-```
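-
-To confirm the values currently in effect, read them back from the ConfigMap (a sketch, assuming the `csi-powerstore` namespace used above):
-```bash
-kubectl get configmap powerstore-config-params -n csi-powerstore -o jsonpath='{.data.driver-config-params\.yaml}'
-```
-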
-**Note** :
- 1. "Kubelet config dir path" is not yet configurable in case of Operator based driver installation.
- 2. The snapshotter and resizer sidecars are not optional; they are installed by default with the driver.
diff --git a/content/v1/csidriver/installation/operator/unity.md b/content/v1/csidriver/installation/operator/unity.md
deleted file mode 100644
index ff5a411a24..0000000000
--- a/content/v1/csidriver/installation/operator/unity.md
+++ /dev/null
@@ -1,255 +0,0 @@
----
-title: Unity XT
-description: >
- Installing CSI Driver for Unity XT via Operator
----
-
-{{% pageinfo color="primary" %}}
-The Dell CSI Operator is no longer actively maintained or supported. Dell CSI Operator has been replaced with [Dell CSM Operator](https://dell.github.io/csm-docs/docs/deployment/csmoperator/). If you are currently using Dell CSI Operator, refer to the [operator migration documentation](https://dell.github.io/csm-docs/docs/csidriver/installation/operator/operator_migration/) to migrate from Dell CSI Operator to Dell CSM Operator.
-
-{{% /pageinfo %}}
-
-
-## CSI Driver for Unity XT
-### Pre-requisites
-#### Create secret to store Unity XT credentials
-Create a namespace called unity (it can be any user-defined name, but the commands in this section assume that the namespace is unity).
-Prepare the secret.yaml for driver configuration.
-The following table lists driver configuration parameters for multiple storage arrays.
-
-| Parameter | Description | Required | Default |
-| --------- | ----------- | -------- |-------- |
-| username | Username for accessing Unity XT system | true | - |
-| password | Password for accessing Unity XT system | true | - |
-| endpoint | REST API gateway HTTPS endpoint Unity XT system| true | - |
-| arrayId | ArrayID for Unity XT system | true | - |
-| isDefault | An array having isDefault=true is for backward compatibility. This parameter should occur once in the list. | true | - |
-| skipCertificateValidation | Determines if the driver is going to validate unisphere certs while connecting to the Unisphere REST API interface | true | true |
-
-Ex: secret.yaml
-
-```yaml
-
- storageArrayList:
- - arrayId: "APM00******1"
- username: "user"
- password: "password"
- endpoint: "https://10.1.1.1/"
- skipCertificateValidation: true
- isDefault: true
-
- - arrayId: "APM00******2"
- username: "user"
- password: "password"
- endpoint: "https://10.1.1.2/"
- skipCertificateValidation: true
-
-```
-
-```bash
-
-kubectl create secret generic unity-creds -n unity --from-file=config=secret.yaml
-```
-
-Use the following command to replace or update the secret
-
-```bash
-
-kubectl create secret generic unity-creds -n unity --from-file=config=secret.yaml -o yaml --dry-run | kubectl replace -f -
-```
-
-**Note**: The user needs to validate the YAML syntax and array-related key/values while replacing the unity-creds secret.
-The driver will continue to use the previous values if an error is found in the YAML file.
-
-#### Create secret for client side TLS verification
-
-Please refer to the detailed documentation on how to create this secret [here](../../helm/unity/#certificate-validation-for-unisphere-rest-api-calls).
-
-If certificate validation is skipped, an empty secret must be created. Example: empty-secret.yaml
-
-```yaml
- apiVersion: v1
- kind: Secret
- metadata:
- name: unity-certs-0
- namespace: unity
- type: Opaque
- data:
- cert-0: ""
-```
-Execute command: ```kubectl create -f empty-secret.yaml```
-
-
-### Modify/Set the following *optional* environment variables
-
-Users should configure the parameters in the CR. The following table lists the primary configurable parameters of the Unity driver and their default values:
-
- | Parameter | Description | Required | Default |
- | ----------------------------------------------- | --------------------------------------------------------------------------- | -------- | --------------------- |
- | ***Common parameters for node and controller*** | | | |
- | CSI_ENDPOINT | Specifies the HTTP endpoint for Unity XT. | No | /var/run/csi/csi.sock |
- | X_CSI_UNITY_ALLOW_MULTI_POD_ACCESS | Flag to enable multiple pods use same pvc on same node with RWO access mode | No | false |
- | ***Controller parameters*** | | | |
- | X_CSI_MODE | Driver starting mode | No | controller |
- | X_CSI_UNITY_AUTOPROBE | To enable auto probing for driver | No | true |
- | X_CSI_HEALTH_MONITOR_ENABLED | Enable/Disable health monitor of CSI volumes from Controller plugin | No | |
- | ***Node parameters*** | | | |
- | X_CSI_MODE | Driver starting mode | No | node |
- | X_CSI_ISCSI_CHROOT | Path to which the driver will chroot before running any iscsi commands | No | /noderoot |
- | X_CSI_HEALTH_MONITOR_ENABLED | Enable/Disable health monitor of CSI volumes from Node plugin | No | |
-
-### Example CR for Unity XT
-Refer to the samples [here](https://github.com/dell/dell-csi-operator/tree/master/samples). Below is an example CR:
-```yaml
-apiVersion: storage.dell.com/v1
-kind: CSIUnity
-metadata:
- name: unity
- namespace: unity
-spec:
- driver:
- configVersion: v2.7.0
- replicas: 2
- dnsPolicy: ClusterFirstWithHostNet
- forceUpdate: false
- common:
- image: "dellemc/csi-unity:v2.7.0"
- imagePullPolicy: IfNotPresent
- sideCars:
- - name: provisioner
- args: ["--volume-name-prefix=csiunity","--default-fstype=ext4"]
- - name: snapshotter
- args: ["--snapshot-name-prefix=csiunitysnap"]
- # Enable/Disable health monitor of CSI volumes from node plugin. Provides details of volume usage.
- # - name: external-health-monitor
- # args: ["--monitor-interval=60s"]
-
- controller:
- envs:
- # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from Controller plugin - volume condition.
- # Install the 'external-health-monitor' sidecar accordingly.
- # Allowed values:
- # true: enable checking of health condition of CSI volumes
- # false: disable checking of health condition of CSI volumes
- # Default value: false
- - name: X_CSI_HEALTH_MONITOR_ENABLED
- value: "false"
-
- # nodeSelector: Define node selection constraints for controller pods.
- # For the pod to be eligible to run on a node, the node must have each
- # of the indicated key-value pairs as labels.
- # Leave as blank to consider all nodes
- # Allowed values: map of key-value pairs
- # Default value: None
- nodeSelector:
-      # Uncomment if nodes you wish to use have the node-role.kubernetes.io/control-plane taint
- # node-role.kubernetes.io/control-plane: ""
-
- # tolerations: Define tolerations for the controllers, if required.
- # Leave as blank to install controller on worker nodes
- # Default value: None
- tolerations:
- # Uncomment if nodes you wish to use have the node-role.kubernetes.io/control-plane taint
- # - key: "node-role.kubernetes.io/control-plane"
- # operator: "Exists"
- # effect: "NoSchedule"
-
- node:
- envs:
- # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from node plugin - volume usage
- # Allowed values:
- # true: enable checking of health condition of CSI volumes
- # false: disable checking of health condition of CSI volumes
- # Default value: false
- - name: X_CSI_HEALTH_MONITOR_ENABLED
- value: "false"
- # nodeSelector: Define node selection constraints for node pods.
- # For the pod to be eligible to run on a node, the node must have each
- # of the indicated key-value pairs as labels.
- # Leave as blank to consider all nodes
- # Allowed values: map of key-value pairs
- # Default value: None
- nodeSelector:
- # Uncomment if nodes you wish to use have the node-role.kubernetes.io/control-plane taint
- # node-role.kubernetes.io/control-plane: ""
-
- # tolerations: Define tolerations for the node daemonset, if required.
- # Default value: None
- tolerations:
- # Uncomment if nodes you wish to use have the node-role.kubernetes.io/control-plane taint
- # - key: "node-role.kubernetes.io/control-plane"
- # operator: "Exists"
- # effect: "NoSchedule"
- # - key: "node.kubernetes.io/memory-pressure"
- # operator: "Exists"
- # effect: "NoExecute"
- # - key: "node.kubernetes.io/disk-pressure"
- # operator: "Exists"
- # effect: "NoExecute"
- # - key: "node.kubernetes.io/network-unavailable"
- # operator: "Exists"
- # effect: "NoExecute"
-
----
-apiVersion: v1
-kind: ConfigMap
-metadata:
- name: unity-config-params
- namespace: unity
-data:
- driver-config-params.yaml: |
- CSI_LOG_LEVEL: "info"
- ALLOW_RWO_MULTIPOD_ACCESS: "false"
- MAX_UNITY_VOLUMES_PER_NODE: "0"
- SYNC_NODE_INFO_TIME_INTERVAL: "15"
- TENANT_NAME: ""
-```
-
-## Dynamic Logging Configuration
-
-### Operator based installation
-As part of driver installation, a ConfigMap with the name `unity-config-params` is created using the manifest located in the sample file. This ConfigMap contains an attribute `CSI_LOG_LEVEL` which specifies the current log level of the CSI driver. To set the default/initial log level, the user can set this field during driver installation.
-
-To update the log level dynamically, edit the ConfigMap `unity-config-params` and update `CSI_LOG_LEVEL` to the desired log level.
-```bash
-kubectl edit configmap -n unity unity-config-params
-```
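-
-To verify the current settings, you can inspect the ConfigMap:
-```bash
-kubectl describe configmap unity-config-params -n unity
-```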
-
-**Note** :
- 1. The log level cannot be updated dynamically through the `logLevel` attribute in the secret object.
- 2. "Kubelet config dir path" is not yet configurable in case of Operator based driver installation.
- 3. The snapshotter and resizer sidecars are not optional; they are installed by default with the driver.
-
-## Volume Health Monitoring
-
-### Operator based installation
-
-The Volume Health Monitoring feature is optional and is disabled by default when the driver is installed via the operator.
-To enable this feature, add the block below to the driver manifest before installing the driver. This ensures the external health monitor sidecar is installed. To get the volume health state, the `value` under the controller section should be set to true as shown below. To get the volume stats, the `value` under the node section should be set to true.
-```yaml
- # Uncomment the following to install 'external-health-monitor' sidecar to enable health monitor of CSI volumes from Controller plugin.
- # Also set the env variable controller.envs.X_CSI_ENABLE_VOL_HEALTH_MONITOR to "true".
- # - name: external-health-monitor
- # args: ["--monitor-interval=60s"]
-
- controller:
- envs:
- # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from Controller plugin- volume status, volume condition.
- # Install the 'external-health-monitor' sidecar accordingly.
- # Allowed values:
- # true: enable checking of health condition of CSI volumes
- # false: disable checking of health condition of CSI volumes
- # Default value: false
- - name: X_CSI_HEALTH_MONITOR_ENABLED
- value: "false"
-
- node:
- envs:
- # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from node plugin - volume usage
- # Allowed values:
- # true: enable checking of health condition of CSI volumes
- # false: disable checking of health condition of CSI volumes
- # Default value: false
- - name: X_CSI_HEALTH_MONITOR_ENABLED
- value: "false"
-```
diff --git a/content/v1/csidriver/installation/test/certcsi.md b/content/v1/csidriver/installation/test/certcsi.md
index 205bb57dcb..49f72232af 100644
--- a/content/v1/csidriver/installation/test/certcsi.md
+++ b/content/v1/csidriver/installation/test/certcsi.md
@@ -7,52 +7,144 @@ description: Tool to validate Dell CSI Drivers
Cert-CSI is a tool to validate Dell CSI Drivers. It contains various test suites to validate the drivers.
## Installation
-To install this tool you can download one of binary files located in [RELEASES](https://github.com/dell/cert-csi/releases)
-You can build the tool by cloning the repository and running this command:
+There are three methods of installing `cert-csi`.
+
+1. [Download the executable from the latest GitHub release](#download-release-linux).
+2. [Pull the container image from DockerHub](#pull-the-container-image).
+3. [Build the executable or container image locally](#building-locally).
+
+> The executable from the GitHub release only supports Linux. Non-Linux users must build the `cert-csi` executable [locally](#building-locally).
+
+### Download Release (Linux)
+
+1. Download the latest release of the cert-csi zip file.
+
+```bash
+curl -LO https://github.com/dell/cert-csi/releases/download/v1.3.1/cert-csi-v1.3.1.zip
+```
+
+2. Unzip the file.
+
+```bash
+unzip cert-csi-v1.3.1.zip
+chmod +x ./cert-csi-v1.3.1
+```
+
+3. Install cert-csi-v1.3.1 as cert-csi.
+
```bash
-make build
+sudo install -o root -g root -m 0755 cert-csi-v1.3.1 /usr/local/bin/cert-csi
```
-You can also build a docker container image by running this command:
+If you do not have root access on the target system, you can still install cert-csi to the ~/.local/bin directory:
+
```bash
-docker build -t cert-csi .
+chmod +x cert-csi-v1.3.1
+mkdir -p ~/.local/bin
+mv ./cert-csi-v1.3.1 ~/.local/bin/cert-csi
+# and then append (or prepend) ~/.local/bin to $PATH, for example:
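+export PATH="$HOME/.local/bin:$PATH"  # takes effect for the current shell; add to your shell profile to persist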
```
+### Pull The Container Image
+
+ {{< tabs name="pulling-cert-csi-image" >}}
+ {{% tab name="Docker" %}}
+
+ ```bash
+ docker pull dellemc/cert-csi:v1.3.1
+ ```
+
+ {{% /tab %}}
+ {{% tab name="Podman" %}}
+
+ ```bash
+ podman pull dellemc/cert-csi:v1.3.1
+ ```
+
+ {{% /tab %}}
+ {{< /tabs >}}
+
+### Building Locally
+#### Prerequisites
+- [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
+- [Go](https://go.dev/doc/install) (if building the executable)
+- Podman or Docker (if building the container image)
+
+1. Clone the repository
+
+```bash
+git clone -b "v1.3.1" https://github.com/dell/cert-csi.git && cd cert-csi
+```
+
+2. Build cert-csi
+
+{{< tabs name="build-cert-csi" >}}
+{{% tab name="Executable" %}}
+
+```bash
+ make build # the cert-csi executable will be in the working directory
+ chmod +x ./cert-csi # if building on *nix machine
+```
+
+{{% /tab %}}
+{{% tab name="Container Image" %}}
+
+```bash
+ # uses podman if available, otherwise uses docker. The resulting image is tagged cert-csi:latest
+ make docker
+```
+
+{{% /tab %}}
+{{< /tabs >}}
+
+### Optional
+
If you want to collect csi-driver resource usage metrics, then please provide the namespace where it can be found and install the metric-server using this command (kubectl is required):
```bash
make install-ms
```
-[FOR UNIX] If you want to build and install the tool to your $PATH and enable the **auto-completion** feature, then run this command:
+## Running Cert-CSI
+
+{{< tabs name="running-cert-csi" >}}
+{{% tab name="Executable" %}}
```bash
-make install-nix
+ cert-csi --help
+```
+{{% /tab %}}
+{{% tab name="Docker" %}}
+```bash
+ docker run --rm -it -v ~/.kube/config:/root/.kube/config dellemc/cert-csi:v1.3.1 --help
+```
+{{% /tab %}}
+{{% tab name="Podman" %}}
+```bash
+ podman run --rm -it -v ~/.kube/config:/root/.kube/config dellemc/cert-csi:v1.3.1 --help
```
-> Alternatively, you can install the metric-server by following the instructions at https://github.com/kubernetes-incubator/metrics-server
-## Running Cert-CSI
+{{% /tab %}}
+{{< /tabs >}}
-To get information on how to use the program, you can use built-in help. If you're using a UNIX-like system and enabled _auto-completion feature_ while installing the tool, then you can use shell's built-in auto-completion to navigate through program's subcommands and flags interactively by just pressing TAB.
+> The following sections, which show how to execute the various test suites, use the executable for brevity. For executions requiring special behavior, such as mounting file arguments into the container image, this is noted for the relevant command.
-To run cert-csi, you have to point your environment to a kube cluster. This allows you to receive dynamically formatted suggestions from your cluster.
-For example if you press TAB while passing --storageclass (or --sc) argument, the tool will parse all existing Storage Classes from your cluster and suggest them as an input for you.
+> Log files are located in the `logs` directory in the working directory of cert-csi.\
+> Report files are located in the default `$HOME/.cert-csi/reports` directory.\
+> Database (SQLite) file for test suites is `.db` in the working directory of cert-csi.\
+> Database (SQLite) file for functional test suites is `cert-csi-functional.db` in the working directory of cert-csi.
-> To run a docker container your command should look something like this
-> ```bash
->
-> docker run --rm -it -v ~/.kube/config:/root/.kube/config -v $(pwd):/app/cert-csi cert-csi
-> ```
+> NOTE: If using the container image, these files will be inside the container. If you are interested in these files, it is recommended to use the executable.
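+
+If you need the report files when running from the container image, one workaround (a sketch, assuming the default report path shown above) is to mount a host directory over the report location:
+
+```bash
+mkdir -p reports
+docker run --rm -it -v ~/.kube/config:/root/.kube/config -v $(pwd)/reports:/root/.cert-csi/reports dellemc/cert-csi:v1.3.1 test vio --sc <storage-class>
+```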
-## Driver Certification
+## Run All Test Suites
-You can use cert-csi to launch a certification test run against multiple storage classes to check if the driver adheres to advertised capabilities.
+You can use cert-csi to launch a test run against multiple storage classes to check if the driver adheres to advertised capabilities.
### Preparing Config
-To run the certification test you need to provide `.yaml` config with storage classes and their capabilities. You can use `example-certify-config.yaml` as an example.
+To run the test suites, you need to provide a `.yaml` config with storage classes and their capabilities. You can use `example-certify-config.yaml` as an example.
-Example:
+Template:
```yaml
storageClasses:
- name: # storage-class-name (ex. powerstore)
@@ -62,313 +154,571 @@ storageClasses:
clone: # is volume cloning supported (true or false)
snapshot: # is volume snapshotting supported (true or false)
RWX: # is ReadWriteMany volume access mode supported for non RawBlock volumes (true or false)
- volumeHealth: false # set this to enable the execution of the VolumeHealthMetricsSuite.
+ volumeHealth: # set this to enable the execution of the VolumeHealthMetricsSuite (true or false)
# Make sure to enable healthMonitor for the driver's controller and node pods before running this suite. It is recommended to use a smaller interval time for this sidecar and pass the required arguments.
- VGS: false # set this to enable the execution of the VolumeGroupSnapSuite.
+ VGS: # set this to enable the execution of the VolumeGroupSnapSuite (true or false)
# Additionally, make sure to provide the necessary required arguments such as volumeSnapshotClass, vgs-volume-label, and any others as needed.
- RWOP: false # set this to enable the execution of the MultiAttachSuite with the AccessMode set to ReadWriteOncePod.
- ephemeral: # if exists, then run EphemeralVolumeSuite
- driver: # driver name for EphemeralVolumeSuite
+ RWOP: # set this to enable the execution of the MultiAttachSuite with the AccessMode set to ReadWriteOncePod (true or false)
+ ephemeral: # if exists, then run EphemeralVolumeSuite. See the Ephemeral Volumes suite section for example Volume Attributes
+ driver: # driver name for EphemeralVolumeSuite (e.g., csi-vxflexos.dellemc.com)
fstype: # fstype for EphemeralVolumeSuite
- volumeAttributes: # volume attrs for EphemeralVolumeSuite.
+ volumeAttributes: # volume attrs for EphemeralVolumeSuite.
attr1: # volume attr for EphemeralVolumeSuite
attr2: # volume attr for EphemeralVolumeSuite
+ capacityTracking:
+      driverNamespace: # namespace where the driver is installed
+ pollInterval: # duration to poll capacity (e.g., 2m)
```
-### Launching Certification Test Run
+Driver-specific examples:
-After preparing a certification configuration file, you can launch certification by running
-```bash
-cert-csi certify --cert-config
-Optional Params:
- --vsc: volume snapshot class, required if you specified snapshot capability
- --timeout: set the timeout value for certification suites
- --no-metrics: disables metrics aggregation (set if you encounter k8s performance issues)
- --path: path to folder where reports will be created (if not specified ~/.cert-csi/ will be used)
-```
+ {{< tabs name="certify-config-examples" >}}
+ {{% tab name="CSI PowerFlex" %}}
-## Functional Tests
+```yaml
+storageClasses:
+ - name: vxflexos
+ minSize: 8Gi
+ rawBlock: true
+ expansion: true
+ clone: true
+ snapshot: true
+ RWX: false
+ ephemeral:
+      driver: csi-vxflexos.dellemc.com
+ fstype: ext4
+ volumeAttributes:
+ volumeName: "my-ephemeral-vol"
+ size: "8Gi"
+ storagepool: "sample"
+ systemID: "sample"
+ - name: vxflexos-nfs
+ minSize: 8Gi
+ rawBlock: false
+ expansion: true
+ clone: true
+ snapshot: true
+ RWX: true
+ RWOP: true
+ ephemeral:
+ driver: csi-vxflexos.dellemc.com
+ fstype: "nfs"
+ volumeAttributes:
+ volumeName: "my-ephemeral-vol"
+ size: "8Gi"
+ storagepool: "sample"
+ systemID: "sample"
+ capacityTracking:
+      driverNamespace: vxflexos
+ pollInterval: 2m
+```
+
+ {{% /tab %}}
+ {{% tab name="CSI PowerScale" %}}
-### Running Individual Suites
-#### Volume/PVC Creation
+```yaml
+storageClasses:
+ - name: isilon
+ minSize: 8Gi
+ rawBlock: false
+ expansion: true
+ clone: true
+ snapshot: true
+ RWX: false
+ ephemeral:
+ driver: csi-isilon.dellemc.com
+ fstype: nfs
+ volumeAttributes:
+ size: "10Gi"
+ ClusterName: "sample"
+ AccessZone: "sample"
+ IsiPath: "/ifs/data/sample"
+ IsiVolumePathPermissions: "0777"
+ AzServiceIP: "192.168.2.1"
+```
+
+ {{% /tab %}}
+ {{% tab name="CSI PowerMax" %}}
-To run volume or PVC creation test suite, run the command:
-```bash
-cert-csi functional-test volume-creation --sc -n 5
-Optional Params:
---custom-name : To give custom name for PVC while creating only 1 PVC
---size : To give custom size, possible values for size in Gi/Mi
---access-mode : To set custom access-modes, possible values - ReadWriteOnce,ReadOnlyMany and ReadWriteMany
---block : To create raw block volumes
-```
+```yaml
+storageClasses:
+ - name: powermax-iscsi
+ minSize: 5Gi
+ rawBlock: true
+ expansion: true
+ clone: true
+ snapshot: true
+ capacityTracking:
+      driverNamespace: powermax
+ pollInterval: 2m
+ - name: powermax-nfs
+ minSize: 5Gi
+ rawBlock: false
+ expansion: true
+ clone: true
+ snapshot: true
+ RWX: true
+ RWOP: true
+ capacityTracking:
+      driverNamespace: powermax
+ pollInterval: 2m
+```
+
+ {{% /tab %}}
+
+ {{% tab name="CSI PowerStore" %}}
-#### Provisioning/Pod creation
+```yaml
+storageClasses:
+ - name: powerstore
+ minSize: 5Gi
+ rawBlock: true
+ expansion: true
+ clone: true
+ snapshot: true
+ RWX: false
+ ephemeral:
+ driver: csi-powerstore.dellemc.com
+ fstype: ext4
+ volumeAttributes:
+ arrayID: "arrayid"
+ protocol: iSCSI
+ size: 5Gi
+ - name: powerstore-nfs
+ minSize: 5Gi
+ rawBlock: false
+ expansion: true
+ clone: true
+ snapshot: true
+ RWX: true
+ RWOP: true
+ ephemeral:
+ driver: csi-powerstore.dellemc.com
+ fstype: "nfs"
+ volumeAttributes:
+ arrayID: "arrayid"
+ protocol: NFS
+ size: 5Gi
+ nasName: "nas-server"
+ capacityTracking:
+ driverNamespace: powerstore
+ pollInterval: 2m
+```
+
+ {{% /tab %}}
+ {{% tab name="CSI Unity" %}}
+
+```yaml
+storageClasses:
+ - name: unity-iscsi
+ minSize: 3Gi
+ rawBlock: true
+ expansion: true
+ clone: false
+ snapshot: true
+ RWX: false
+ ephemeral:
+ driver: csi-unity.dellemc.com
+ fstype: ext4
+ volumeAttributes:
+ arrayId: "array-id"
+ storagePool: pool-name
+      protocol: iSCSI
+ size: 5Gi
+ - name: unity-nfs
+ minSize: 3Gi
+ rawBlock: false
+ expansion: true
+ clone: false
+ snapshot: true
+ RWX: true
+ RWOP: true
+ ephemeral:
+ driver: csi-unity.dellemc.com
+ fstype: "nfs"
+ volumeAttributes:
+ arrayId: "array-id"
+ storagePool: pool-name
+ protocol: NFS
+ size: 5Gi
+ nasServer: "nas-server"
+ nasName: "nas-name"
+ capacityTracking:
+ driverNamespace: unity
+ pollInterval: 2m
+```
+
+ {{% /tab %}}
+ {{< /tabs >}}
+
+### Launching Test Run
+1. Executes the [VolumeIO](#volume-io) suite.
+2. Executes the [Scaling](#scalability) suite.
+3. If `storageClasses.clone` is `true`, executes the [Volume Cloning](#volume-cloning) suite.
+4. If `storageClasses.expansion` is `true`, executes the [Volume Expansion](#volume-expansion) suite.
+5. If `storageClasses.expansion` is `true` and `storageClasses.rawBlock` is `true`, executes the [Volume Expansion](#volume-expansion) suite with raw block volumes.
+6. If `storageClasses.snapshot` is `true`, executes the [Snapshot](#snapshots) suite and the [Replication](#replication) suite.
+7. If `storageClasses.rawBlock` is `true`, executes the [Multi-Attach Volume](#multi-attach-volume) suite with raw block volumes.
+8. If `storageClasses.rwx` is `true`, executes the [Multi-Attach Volume](#multi-attach-volume) suite. (Storage class must be NFS.)
+9. If `storageClasses.volumeHealth` is `true`, executes the [Volume Health Metrics](#volume-health-metrics) suite.
+10. If `storageClasses.rwop` is `true`, executes the [Multi-Attach Volume](#multi-attach-volume) suite with the volume access mode `ReadWriteOncePod`.
+11. If `storageClasses.ephemeral` exists, executes the [Ephemeral Volumes](#ephemeral-volumes) suite.
+12. If `storageClasses.vgs` is `true`, executes the [Volume Group Snapshot](#volume-group-snapshots) suite.
+13. If `storageClasses.capacityTracking` exists, executes the [Storage Class Capacity Tracking](#storage-capacity-tracking) suite.
+
+> NOTE: For testing/debugging purposes, it can be useful to use the `--no-cleanup` flag so resources do not get deleted.
+
+> NOTE: If you are using CSI PowerScale with [SmartQuotas](../../../features/powerscale/#usage-of-smartquotas-to-limit-storage-consumption) disabled, the `Volume Expansion` suite is expected to timeout due to the way PowerScale provisions storage. Set `storageClasses.expansion` to `false` to skip this suite.
+
+```bash
+cert-csi certify --cert-config --vsc
+```
+
+Omit the `--vsc` argument if snapshot capabilities are disabled.
-To run volume provisioning or pod creation test suite, run the command:
```bash
-cert-csi functional-test provisioning --sc
+cert-csi certify --cert-config
Optional Params:
---volumeNumber : number of volumes to attach to each pod
---podNumber : number of pod to create
---podName : To give custom name for pod while creating only 1 pod
---block : To create raw block volumes and attach it to pods
---vol-access-mode: To set volume access modes
+ --vsc: volume snapshot class, required if you specified snapshot capability
```
-#### Running Volume Deletion suite
+Run `cert-csi certify -h` for more options.
+
+If you are using the container image, the `cert-config` file must be mounted into the container. Assuming your `cert-config` file is `/home/user/example-certify-config.yaml`, here are examples of how to execute this suite with the container image.
-To run volume delete test suite, run the command:
+{{< tabs name="running-container-certify" >}}
+{{% tab name="Docker" %}}
```bash
-cert-csi functional-test volume-deletion
---pvc-name value : PVC name to delete
---pvc-namespace : PVC namespace where PVC is present
+ docker run --rm -it -v ~/.kube/config:/root/.kube/config -v /home/user/example-certify-config.yaml:/example-certify-config.yaml dellemc/cert-csi:v1.3.1 certify --cert-config /example-certify-config.yaml --vsc
```
-
-#### Running Pod Deletion suite
-
-To run pod deletion test suite, run the command:
+{{% /tab %}}
+{{% tab name="Podman" %}}
```bash
-cert-csi functional-test pod-deletion
---pod-name : Pod name to delete
---pod-namespace : Pod namespace where pod is present
+ podman run --rm -it -v ~/.kube/config:/root/.kube/config -v /home/user/example-certify-config.yaml:/example-certify-config.yaml dellemc/cert-csi:v1.3.1 certify --cert-config /example-certify-config.yaml --vsc
```
-#### Running Cloned Volume deletion suite
+{{% /tab %}}
+{{< /tabs >}}
-To run cloned volume deletion test suite, run the command:
-```bash
-cert-csi functional-test clone-volume-deletion
---clone-volume-name : Volume name to delete
-```
+### Running Individual Test Suites
-#### Multi Attach Volume Tests
+> NOTE: For testing/debugging purposes, it can be useful to use the `--no-cleanup` flag so resources do not get deleted.
+
+#### Volume I/O
+1. Creates the namespace `volumeio-test-*` where resources will be created.
+2. Creates Persistent Volume Claims.
+3. If the specified storage class binding mode is not `WaitForFirstConsumer`, waits for Persistent Volume Claims to be bound to Persistent Volumes.
+4. For each Persistent Volume Claim, executes the following workflow concurrently:
+ 1. Creates a Pod to consume the Persistent Volume Claim.
+ 2. Writes data to the volume and verifies the checksum of the data.
+ 3. Deletes the Pod.
+ 4. Waits for the associated Volume Attachment to be deleted.
-To run multi-attach volume test suite, run the command:
```bash
-cert-csi functional-test multi-attach-vol --sc
---pods : Number of pods to create
---block : To create raw block volume
+cert-csi test vio --sc
```
-#### Ephemeral volumes suite
-
-To run ephemeral volume test suite, run the command:
-```bash
+Run `cert-csi test vio -h` for more options.
-cert-csi functional-test ephemeral-volume --driver --attr ephemeral-config.properties
---pods : Number of pods to create
---pod-name : To create pods with custom name
---attr : CSI volume attributes file name
---fs-type: FS Type can be specified
+#### Scalability
+1. Creates the namespace `scale-test-*` where resources will be created.
+2. Creates a StatefulSet.
+3. Scales up the StatefulSet.
+4. Scales down the StatefulSet to zero.
-Sample ephemeral-config.properties (key/value pair)
-arrayId=arr1
-protocol=iSCSI
-size=5Gi
+```bash
+cert-csi test scaling --sc
```
-#### Storage Capacity Tracking Suite
+Run `cert-csi test scaling -h` for more options.
-To run storage capacity tracking test suite, run the command:
-```bash
+#### Snapshots
+1. Creates the namespace `snap-test-*` where resources will be created.
+2. Creates Persistent Volume Claim.
+3. If the specified storage class binding mode is not `WaitForFirstConsumer`, waits for Persistent Volume Claim to be bound to Persistent Volumes.
+4. Create Pod to consume the Persistent Volume Claim.
+5. Writes data to the volume.
+6. Deletes the Pod.
+7. Creates a Volume Snapshot from the Persistent Volume Claim.
+8. Waits for the Volume Snapshot to be Ready.
+9. Creates a new Persistent Volume Claim from the Volume Snapshot.
+10. Creates a new Pod to consume the new Persistent Volume Claim.
+11. Verifies the checksum of the data.
-cert-csi functional-test capacity-tracking --sc --drns --pi
-Optional Params:
---vs : volume size to be created
+```bash
+cert-csi test snap --sc --vsc
```
-### Other Options
+Run `cert-csi test snap -h` for more options.
-#### Generating tabular report from DB
+#### Volume Group Snapshots
+1. Creates the namespace `vgs-snap-test-*` where resources will be created.
+2. Creates Persistent Volume Claims.
+3. If the specified storage class binding mode is not `WaitForFirstConsumer`, waits for the Persistent Volume Claims to be bound to Persistent Volumes.
+4. Create Pods to consume the Persistent Volume Claims.
+5. Creates Volume Group Snapshot.
+6. Waits for Volume Group Snapshot state to be COMPLETE.
-To generate tabular report from the database, run the command:
-```bash
-cert-csi -db functional-report -tabular
-Example: cert-csi -db ./test.db functional-report -tabular
-```
-> Note: DB is mandatory parameter
+> Note: Volume Group Snapshots are only supported by CSI PowerFlex and CSI PowerStore.
-#### Generating XML report from DB
+#### Multi-Attach Volume
+1. Creates the namespace `mas-test-*` where resources will be created.
+2. Creates a Persistent Volume Claim.
+3. Creates a Pod to consume the Persistent Volume Claim.
+4. Waits for the Pod to be in the Ready state.
+5. Creates additional Pods to consume the same Persistent Volume Claim.
+6. Waits for the Pods to be in the Ready state.
+7. Writes data to the volumes on the Pods and verifies the checksum of the data.
-To generate XML report from the database, run the command:
```bash
-cert-csi -db functional-report -xml
-Example: cert-csi -db ./test.db functional-report -xml
+cert-csi test multi-attach-vol --sc <storage class>
```
-> Note: DB is mandatory parameter
-#### Including Array configuration file
+> The storage class must be an NFS storage class. Otherwise, raw block volumes must be used.
```bash
-# Array properties sample (array-config.properties)
-arrayIPs: 192.168.1.44
-name: Unity
-user: root
-password: test-password
-arrayIds: arr-1
+cert-csi test multi-attach-vol --sc <storage class> --block
```
-### Screenshots
-
-Tabular Report example
+Run `cert-csi test multi-attach-vol -h` for more options.
-![img9](../img/tabularReport.png)
+#### Replication
+1. Creates the namespace `replication-suite-*` where resources will be created.
+2. Creates Persistent Volume Claims.
+3. Creates Pods to consume the Persistent Volume Claims.
+4. Waits for Pods to be in the Ready state.
+5. Creates a Volume Snapshot from each Persistent Volume Claim.
+6. Waits for the Volume Snapshots to be Ready.
+7. Creates Persistent Volume Claims from the Volume Snapshots.
+8. Creates Pods to consume the Persistent Volume Claims.
+9. Waits for Pods to be in the Ready state.
+10. Verifies the replication group name on the Persistent Volume Claims.
-## Kubernetes End-To-End Tests
-All Kubernetes end to end tests require that you provide the driver config based on the storage class you want to test and the version of the kubernetes you want to test against. These are the mandatory parameters that you can provide in command like..
```bash
- --driver-config and --version "v1.25.0"
- ```
-
-### Running kubernetes end-to-end tests
-
-To run kubernetes end-to-end tests, run the command:
-```bash
-
-cert-csi k8s-e2e --config --driver-config --focus --timeout --version < version of k8s Ex: "v1.25.0"> --skip-tests --skip
+cert-csi test replication --sc <storage class> --vsc <volume snapshot class>
```
-### Kubernetes end-to-end reporting
+Run `cert-csi test replication -h` for more options.
-- All the reports generated by kubernetes end-to-end tests will be under `$HOME/reports` directory by default if user doesn't mention the report path.
-- Kubernetes end to end tests Execution log file will be placed under `$HOME/reports/execution_[storage class name].log`
-- Cert-CSI logs will be present in the execution directory `info.log` , `error.log`
+#### Volume Cloning
+1. Creates the namespace `clonevolume-suite-*` where resources will be created.
+2. Creates Persistent Volume Claims.
+3. Creates Pods to consume the Persistent Volume Claims.
+4. Waits for Pods to be in the Ready state.
+5. Creates Persistent Volume Claims using the volumes from step 2 as the source.
+6. Creates Pods to consume the Persistent Volume Claims.
+7. Waits for Pods to be in the Ready state.
-### Test config files format
-- #### [driver-config](https://github.com/dell/cert-csi/blob/main/pkg/utils/testdata/config-nfs.yaml)
-- #### [ignore-tests](https://github.com/dell/cert-csi/blob/main/pkg/utils/ignore.yaml)
+```bash
+cert-csi test clone-volume --sc <storage class>
+```
-### Example Commands
-- ```bash
+Run `cert-csi test clone-volume -h` for more options.
- cert-csi k8s-e2e --config "/root/.kube/config" --driver-config "/root/e2e_config/config-nfs.yaml" --focus "External.Storage.*" --timeout "2h" --version "v1.25.0" --skip-tests "/root/e2e_config/ignore.yaml"
- ```
-- ```bash
+#### Volume Expansion
+1. Creates the namespace `volume-expansion-suite-*` where resources will be created.
+2. Creates Persistent Volume Claims.
+3. Creates Pods to consume the Persistent Volume Claims.
+4. Waits for Pods to be in the Ready state.
+5. Expands the size in the Persistent Volume Claims.
+6. Verifies that the volumes mounted to the Pods were expanded.
- ./cert-csi k8s-e2e --config "/root/.kube/config" --driver-config "/root/e2e_config/config-iscsi.yaml" --focus "External.Storage.*" --timeout "2h" --version "v1.25.0" --focus-file "capacity.go"
- ```
+> Raw block volumes cannot be verified since there is no filesystem.
-## Performance Tests
+> If you are using CSI PowerScale with [SmartQuotas](../../../features/powerscale/#usage-of-smartquotas-to-limit-storage-consumption) disabled, the `Volume Expansion` suite is expected to timeout due to the way PowerScale provisions storage.
-All performance tests require that you provide a storage class that you want to test. You can provide multiple storage classes in one command. For example,
```bash
-... --sc --sc ...
+cert-csi test expansion --sc <storage class>
```
-### Running Individual Suites
-#### Running Volume Creation test suite
+Run `cert-csi test expansion -h` for more options.
+
+#### Blocksnap suite
+1. Creates the namespace `block-snap-test-*` where resources will be created.
+2. Creates a Persistent Volume Claim.
+3. If the specified storage class binding mode is not `WaitForFirstConsumer`, waits for the Persistent Volume Claim to be bound to a Persistent Volume.
+4. Creates a Pod to consume the Persistent Volume Claim.
+5. Writes data to the volume.
+6. Creates a Volume Snapshot from the Persistent Volume Claim.
+7. Waits for the Volume Snapshot to be Ready.
+8. Creates a Persistent Volume Claim with raw block volume mode from the Volume Snapshot.
+9. Creates a Pod to consume the Persistent Volume Claim.
+10. Mounts the raw block volume and verifies the checksum of the data.
-To run volume creation test suite, run the command:
```bash
-cert-csi test volume-creation --sc -n 25
+cert-csi test blocksnap --sc <storage class> --vsc <volume snapshot class>
```
-#### Running Provisioning test suite
+Run `cert-csi test blocksnap -h` for more options.
+
+#### Volume Health Metrics
+1. Creates the namespace `volume-health-metrics-*` where resources will be created.
+2. Creates a Persistent Volume Claim.
+3. Creates a Pod to consume the Persistent Volume Claim.
+4. Waits for the Pod to be in the Ready state.
+5. Verifies that ControllerGetVolume and NodeGetVolumeStats are being executed in the controller and node pods, respectively.
-To run volume provisioning test suite, run the command:
```bash
-cert-csi test provisioning --sc --podNum 1 --volNum 10
+cert-csi test volumehealthmetrics --sc <storage class> --driver-ns <driver namespace>
```
-#### Running Scalability test suite
+Run `cert-csi test volumehealthmetrics -h` for more options.
-To run scalability test suite, run the command:
-```bash
-cert-csi test scaling --sc --replicas 5
-```
+> Note: Make sure to enable healthMonitor for the driver's controller and node pods before running this suite. It is recommended to use a smaller interval time for this sidecar.
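+
+As a sketch, for Helm-based installations this typically means setting the health monitor options in the driver's values.yaml before deployment (the exact key names vary by driver chart, so check your driver's values.yaml):
+```yaml
+# assumed layout, modeled on the Dell driver Helm charts
+controller:
+  healthMonitor:
+    enabled: true
+    interval: 60s
+node:
+  healthMonitor:
+    enabled: true
+```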
-#### Running VolumeIO test suite
+#### Ephemeral Volumes
+1. Creates the namespace `functional-test` where resources will be created.
+2. Creates Pods with one ephemeral inline volume each.
+3. Waits for Pods to be in the Ready state.
+4. Writes data to the volume on each Pod.
+5. Verifies the checksum of the data.
-To run volumeIO test suite, run the command:
```bash
-cert-csi test vio --sc --chainNumber 5 --chainLength 20
+cert-csi test ephemeral-volume --driver <driver name> --attr ephemeral-config.properties
```
-#### Running Snap test suite
+Run `cert-csi test ephemeral-volume -h` for more options.
-To run volume snapshot test suite, run the command:
-```bash
-cert-csi test snap --sc --vsc
-```
+> `--driver` is the name of a CSI Driver from the output of `kubectl get csidriver` (e.g., csi-vxflexos.dellemc.com).
+> This suite does not delete resources on success.
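+
+For example, to list the CSI driver names available on the cluster:
+```bash
+kubectl get csidriver
+```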
-#### Running Multi-attach volume suite
+If you are using the container image, the `attr` file must be mounted into the container. Assuming your `attr` file is `/home/user/ephemeral-config.properties`, here are examples of how to execute this suite with the container image.
-To run multi-attach volume test suite, run the command:
+{{< tabs name="running-container-ephemeral-volume" >}}
+{{% tab name="Docker" %}}
```bash
-cert-csi test multi-attach-vol --sc --podNum 3
+ docker run --rm -it -v ~/.kube/config:/root/.kube/config -v /home/user/ephemeral-config.properties:/ephemeral-config.properties dellemc/cert-csi:v1.3.1 test ephemeral-volume --driver <driver name> --attr /ephemeral-config.properties
```
+{{% /tab %}}
+{{% tab name="Podman" %}}
```bash
-
-cert-csi test multi-attach-vol --sc --podNum 3 --block # to use raw block volumes
+ podman run --rm -it -v ~/.kube/config:/root/.kube/config -v /home/user/ephemeral-config.properties:/ephemeral-config.properties dellemc/cert-csi:v1.3.1 test ephemeral-volume --driver <driver name> --attr /ephemeral-config.properties
```
-#### Running Replication test suite
-
-To run replication test suite, run the command:
-```bash
+{{% /tab %}}
+{{< /tabs >}}
-cert-csi test replication --sc --pn 1 --vn 5 --vsc
-```
+Sample ephemeral-config.properties (key/value pairs):
+ {{< tabs name="volume-attributes-examples" >}}
+ {{% tab name="CSI PowerFlex" %}}
+
+ ```yaml
+ volumeName: "my-ephemeral-vol"
+ size: "10Gi"
+ storagepool: "sample"
+ systemID: "sample"
+ ```
-#### Running Volume Cloning test suite
+ {{% /tab %}}
+ {{% tab name="CSI PowerScale" %}}
-To run volume cloning test suite, run the command:
-```bash
-cert-csi test clone-volume --sc --pn 1 --vn 5
-```
+ ```yaml
+ size: "10Gi"
+ ClusterName: "sample"
+ AccessZone: "sample"
+ IsiPath: "/ifs/data/sample"
+ IsiVolumePathPermissions: "0777"
+ AzServiceIP: "192.168.2.1"
+ ```
-#### Running Volume Expansion test suite
+ {{% /tab %}}
+ {{% tab name="CSI PowerStore" %}}
-To run volume expansion test, run the command:
-```bash
+ ```yaml
+ size: "10Gi"
+ arrayID: "sample"
+ nasName: "sample"
+ nfsAcls: "0777"
+ ```
-cert-csi test expansion --sc --pn 1 --vn 5 --iSize 8Gi --expSize 16Gi
+ {{% /tab %}}
+ {{% tab name="CSI Unity" %}}
+
+ ```yaml
+ size: "10Gi"
+ arrayID: "sample"
+ protocol: iSCSI
+ thinProvisioned: "true"
+ isDataReductionEnabled: "false"
+ tieringPolicy: "1"
+ storagePool: pool_2
+ nasName: "sample"
+ ```
-cert-csi test expansion --sc --pn 1 --vn 5 # `iSize` and `expSize` default to 3Gi and 6Gi respectively
+ {{% /tab %}}
+ {{< /tabs >}}
-cert-csi test expansion --sc --pn 1 --vn 5 --block # to create block volumes
-```
+#### Storage Capacity Tracking
+1. Creates the namespace `functional-test` where resources will be created.
+2. Creates a duplicate of the provided storage class using the prefix `capacity-tracking`.
+3. Waits for the associated CSIStorageCapacity object to be created.
+4. Deletes the duplicate storage class.
+5. Waits for the associated CSIStorageCapacity to be deleted.
+6. Sets the capacity of the CSIStorageCapacity of the provided storage class to zero.
+7. Creates Pod with a volume using the provided storage class.
+8. Verifies that the Pod is in the Pending state.
+9. Waits for storage capacity to be polled by the driver.
+10. Waits for Pod to be Running.
-#### Running Blocksnap suite
+> Storage class must use volume binding mode `WaitForFirstConsumer`.\
+> This suite does not delete resources on success.
-To run block snapshot test suite, run the command:
```bash
-cert-csi test blocksnap --sc --vsc
+cert-csi functional-test capacity-tracking --sc <storage class> --drns <driver namespace>
```
-#### Volume Health Metric Suite
+Run `cert-csi functional-test capacity-tracking -h` for more options.
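+
+To observe the CSIStorageCapacity objects that this suite creates and deletes, you can list them with kubectl:
+```bash
+kubectl get csistoragecapacities -A
+```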
+
+### Running Longevity mode
-To run the volume health metric test suite, run the command:
```bash
+cert-csi test <test suite> --sc <storage class> --longevity <duration>
+```
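+For example, a sketch that keeps the Volume I/O suite running for four days (assuming the flag accepts a duration such as `4d`; verify with `cert-csi test vio -h`):
+```bash
+# assumed duration format; check cert-csi test vio -h
+cert-csi test vio --sc <storage class> --longevity 4d
+```
+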
+### Use configurable container images
-cert-csi test volumehealthmetrics --sc --driver-ns --podNum --volNum
+To use custom images for creating containers, pass an image config YAML file as an argument. The YAML file should list the Linux (test) and postgres image names with their corresponding image URLs. For example:
+
+```yaml
+images:
+ - test: "docker.io/centos:centos7" # change this to your url
+ postgres: "docker.io/bitnami/postgresql:11.8.0-debian-10-r72" # change this to your url
```
+To use this feature, run cert-csi with the option `--image-config /path/to/config.yaml` along with any other arguments.
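+
+For example, combined with one of the suites above:
+```bash
+cert-csi test vio --sc <storage class> --image-config /path/to/config.yaml
+```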
-> Note: Make sure to enable healthMonitor for the driver's controller and node pods before running this suite. It is recommended to use a smaller interval time for this sidecar.
-#### Ephemeral volumes suite
+## Kubernetes End-To-End Tests
+All Kubernetes end-to-end tests require a driver config based on the storage class you want to test and the version of Kubernetes you want to test against. These mandatory parameters are provided on the command line:
+```bash
+  --driver-config <path to driver config> --version "v1.25.0"
+```
-To run the ephemeral volume test suite, run the command:
+### Running Kubernetes end-to-end tests
+To run Kubernetes end-to-end tests, run the command:
```bash
-cert-csi test ephemeral-volume --driver --attr ephemeral-config.properties
---pods : Number of pods to create
---pod-name : Create pods with custom name
---attr : File name for the CSI volume attributes file (required)
---fs-type: FS Type
-Sample ephemeral-config.properties (key/value pair)
-arrayId=arr1
-protocol=iSCSI
-size=5Gi
+cert-csi k8s-e2e --config <kube config> --driver-config <driver config> --focus <focus string> --timeout <timeout> --version <version of k8s, e.g. "v1.25.0"> --skip-tests <skip tests file> --skip <skip string>
```
-### Running Longevity mode
+### Kubernetes end-to-end reporting
-To run longevity test suite, run the command:
-```bash
+- By default, all reports generated by the Kubernetes end-to-end tests are placed under the `$HOME/reports` directory unless a report path is specified.
+- The Kubernetes end-to-end test execution log file is placed under `$HOME/reports/execution_[storage class name].log`.
+- Cert-CSI logs (`info.log`, `error.log`) are present in the execution directory.
-cert-csi test --sc --longevity
-```
+### Test config files format
+- #### [driver-config](https://github.com/dell/cert-csi/blob/main/pkg/utils/testdata/config-nfs.yaml)
+- #### [ignore-tests](https://github.com/dell/cert-csi/blob/main/pkg/utils/ignore.yaml)
+
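+The driver config follows the manifest format of the upstream Kubernetes external storage end-to-end test driver. As a rough sketch of its shape (field values here are assumptions; see the linked config-nfs.yaml for the authoritative example):
+```yaml
+StorageClass:
+  FromExistingClassName: "<storage class>"   # existing storage class to test against
+SnapshotClass:
+  FromName: true
+DriverInfo:
+  Name: "csi-driver-name.dellemc.com"        # placeholder driver name
+  SupportedSizeRange:
+    Min: 5Gi
+    Max: 16Ti
+```
+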
+### Example Commands
+- ```bash
+
+ cert-csi k8s-e2e --config "/root/.kube/config" --driver-config "/root/e2e_config/config-nfs.yaml" --focus "External.Storage.*" --timeout "2h" --version "v1.25.0" --skip-tests "/root/e2e_config/ignore.yaml"
+ ```
+- ```bash
+
+ ./cert-csi k8s-e2e --config "/root/.kube/config" --driver-config "/root/e2e_config/config-iscsi.yaml" --focus "External.Storage.*" --timeout "2h" --version "v1.25.0" --focus-file "capacity.go"
+ ```
### Interacting with DB
@@ -384,6 +734,16 @@ Report types:
--tabular: tidy html report with basic run information
```
+To generate a tabular report from the database, run the command:
+```bash
+cert-csi -db ./cert-csi-functional.db functional-report -tabular
+```
+
+To generate an XML report from the database, run the command:
+```bash
+cert-csi -db ./cert-csi-functional.db functional-report -xml
+```
+
#### Customizing report folder
To specify test report folder path, use --path option as follows:
@@ -470,10 +830,14 @@ Text report example
![img7](../img/textReport.png)
+Tabular Report example
+
+![img9](../img/tabularReport.png)
+
### HTML report example
![img8](../img/HTMLReport.png)
### Resource usage example chart
-![img9](../img/resourceUsage.png)
\ No newline at end of file
+![img9](../img/resourceUsage.png)
diff --git a/content/v1/csidriver/partners/_index.md b/content/v1/csidriver/partners/_index.md
deleted file mode 100644
index 2bdf4ff845..0000000000
--- a/content/v1/csidriver/partners/_index.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-title: "Our Ecosystem Partners"
-Description: "Our Ecosystem Partners"
-weight: 9
----
-
-
diff --git a/content/v1/csidriver/partners/docker.md b/content/v1/csidriver/partners/docker.md
deleted file mode 100644
index e7950bf7f7..0000000000
--- a/content/v1/csidriver/partners/docker.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-title: "MKE"
-Description: "About Mirantis Kubernetes Engine"
----
-
-The Dell CSI Drivers support Docker Enterprise Edition (EE) and deployment on clusters bootstrapped with Mirantis Kubernetes Engine (MKE).
-
-The installation process for the drivers on such clusters remains the same as the installation process on regular Kubernetes clusters.
-
-On MKE-based clusters, kubectl may not be installed by default, it is important that kubectl is installed prior to the installation of the driver.
-
-The worker nodes on MKE-backed clusters may run any of the OS which we support with upstream clusters.
-
-## MKE UI Examples
-
-![](../first.png)
-
-![](../second.png)
-
-![](../third.png)
diff --git a/content/v1/csidriver/partners/driver1.PNG b/content/v1/csidriver/partners/driver1.PNG
deleted file mode 100644
index 4b53e325cd..0000000000
Binary files a/content/v1/csidriver/partners/driver1.PNG and /dev/null differ
diff --git a/content/v1/csidriver/partners/driver2.PNG b/content/v1/csidriver/partners/driver2.PNG
deleted file mode 100644
index 5f4d7fdcb9..0000000000
Binary files a/content/v1/csidriver/partners/driver2.PNG and /dev/null differ
diff --git a/content/v1/csidriver/partners/driver3.png b/content/v1/csidriver/partners/driver3.png
deleted file mode 100644
index 4add7b1e66..0000000000
Binary files a/content/v1/csidriver/partners/driver3.png and /dev/null differ
diff --git a/content/v1/csidriver/partners/first.png b/content/v1/csidriver/partners/first.png
deleted file mode 100644
index e4fb7a0a2d..0000000000
Binary files a/content/v1/csidriver/partners/first.png and /dev/null differ
diff --git a/content/v1/csidriver/partners/oc1.PNG b/content/v1/csidriver/partners/oc1.PNG
deleted file mode 100644
index 9dda936937..0000000000
Binary files a/content/v1/csidriver/partners/oc1.PNG and /dev/null differ
diff --git a/content/v1/csidriver/partners/oc2.PNG b/content/v1/csidriver/partners/oc2.PNG
deleted file mode 100644
index 13565835a7..0000000000
Binary files a/content/v1/csidriver/partners/oc2.PNG and /dev/null differ
diff --git a/content/v1/csidriver/partners/oc3.PNG b/content/v1/csidriver/partners/oc3.PNG
deleted file mode 100644
index 273d1c72a2..0000000000
Binary files a/content/v1/csidriver/partners/oc3.PNG and /dev/null differ
diff --git a/content/v1/csidriver/partners/oc4.PNG b/content/v1/csidriver/partners/oc4.PNG
deleted file mode 100644
index 44a56472f3..0000000000
Binary files a/content/v1/csidriver/partners/oc4.PNG and /dev/null differ
diff --git a/content/v1/csidriver/partners/oc5.PNG b/content/v1/csidriver/partners/oc5.PNG
deleted file mode 100644
index e225ae8ca9..0000000000
Binary files a/content/v1/csidriver/partners/oc5.PNG and /dev/null differ
diff --git a/content/v1/csidriver/partners/operator.md b/content/v1/csidriver/partners/operator.md
deleted file mode 100644
index 1b4a5fffd2..0000000000
--- a/content/v1/csidriver/partners/operator.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-title: "OperatorHub.io"
-linkTitle: "OperatorHub.io"
-weight: 3
-description: Installing the Dell CSI Operator via OperatorHub.io
----
-
-Users can install the Dell CSI Operator via [Operatorhub.io](https://operatorhub.io/) on Kubernetes. The following outlines the process to do so:
-
-**Steps**
-1. Search *dell* in the storage category in [Operatorhub.io](https://operatorhub.io/?keyword=dell).
-
-![](../ophub1.png)
-
-2. Click Dell Operator.
-
-![](../ophub2.png)
-
-3. Check the desired version is selected and click _Install_. Follow the provided instructions.
-
-![](../ophub3.png)
-
-## Install CSI Drivers via Operator
-
-Proceed to [this link](../../installation/operator/#installing-csi-driver-via-operator) for further installing the driver using Operator
\ No newline at end of file
diff --git a/content/v1/csidriver/partners/ophub1.png b/content/v1/csidriver/partners/ophub1.png
deleted file mode 100644
index b86e59cd20..0000000000
Binary files a/content/v1/csidriver/partners/ophub1.png and /dev/null differ
diff --git a/content/v1/csidriver/partners/ophub2.png b/content/v1/csidriver/partners/ophub2.png
deleted file mode 100644
index 2094ebd6c2..0000000000
Binary files a/content/v1/csidriver/partners/ophub2.png and /dev/null differ
diff --git a/content/v1/csidriver/partners/ophub3.png b/content/v1/csidriver/partners/ophub3.png
deleted file mode 100644
index 84773431cf..0000000000
Binary files a/content/v1/csidriver/partners/ophub3.png and /dev/null differ
diff --git a/content/v1/csidriver/partners/rancher.md b/content/v1/csidriver/partners/rancher.md
deleted file mode 100644
index d509db9522..0000000000
--- a/content/v1/csidriver/partners/rancher.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: "RKE"
-Description: "About Rancher Kubernetes Engine"
----
-
-The Dell CSI Drivers support Rancher Kubernetes Engine (RKE) v1.4.1.
-
-The installation process for the drivers on such clusters remains the same as the installation process on regular Kubernetes clusters. Installation on this cluster is done using helm and via Operator has not been qualified.
-
-## RKE Examples
-
-![](../rancher1.PNG)
diff --git a/content/v1/csidriver/partners/rancher1.PNG b/content/v1/csidriver/partners/rancher1.PNG
deleted file mode 100644
index 55c933e513..0000000000
Binary files a/content/v1/csidriver/partners/rancher1.PNG and /dev/null differ
diff --git a/content/v1/csidriver/partners/redhat.md b/content/v1/csidriver/partners/redhat.md
deleted file mode 100644
index 28299fe9d4..0000000000
--- a/content/v1/csidriver/partners/redhat.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-title: "Red Hat OpenShift"
-linkTitle: "Red Hat OpenShift"
-weight: 3
-description: >
- Installing the certified Dell CSI Operator on OpenShift
----
-The Dell CSI Drivers support Red Hat OpenShift. Please see the [Supported Platforms](../../#features-and-capabilities) table for more details.
-
-The CSI drivers can be installed via Helm charts or Dell CSI Operator. The Dell CSI Operator allows for easy installation of the driver via the Openshift UI. The process to install the Operator via the OpenShift UI can be found below.
-
-## Install Operator via the OpenShift UI
-
-**Steps**
-
-1. Type "Dell" in the OperatorHub section under Operators, to get the list of available Dell CSI Operators.
-
-![](../oc1.PNG)
-
-2. Check the version you want to install from the list, you can check the details by clicking it.
-
-![](../oc2.PNG)
-
-3. Once selected, click "Install" to proceed with the installation process.
-
-![](../oc3.PNG)
-
-4. You can verify the list of available operators by selecting the "Installed Operator" section.
-
-![](../oc4.PNG)
-
-5. Select the Dell CSI Operator for further details.
-
-![](../oc5.PNG)
-
-## Install CSI Drivers via Operator
-
-**Steps**
-
-1. Select the particular CSI driver which you want to install, as seen in step 5 above. In this example, CSI Unity is selected.
-
-![](../driver1.PNG)
-
-2. After clicking the "Create CSIUnity" option in the above snippet, you can set relevant parameters in your yaml file, as shown below. Refer to the [driver install pages for the Dell CSI Operator](../../installation/operator/#installing-csi-driver-via-operator) for information on the parameters.
-
-![](../driver2.PNG)
-
-3. You can check the driver installed and node and controller pods running in the Pods section under Workloads.
-
-![](../driver3.png)
diff --git a/content/v1/csidriver/partners/second.png b/content/v1/csidriver/partners/second.png
deleted file mode 100644
index 0cd8375231..0000000000
Binary files a/content/v1/csidriver/partners/second.png and /dev/null differ
diff --git a/content/v1/csidriver/partners/tanzu.md b/content/v1/csidriver/partners/tanzu.md
deleted file mode 100644
index b08f75c3ad..0000000000
--- a/content/v1/csidriver/partners/tanzu.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: "VMware Tanzu"
-Description: "About VMware Tanzu basic"
----
-
-The CSI Driver for Dell Unity XT, PowerScale and PowerStore supports VMware Tanzu. The deployment of these Tanzu clusters is done using the VMware Tanzu supervisor cluster and the supervisor namespace.
-
-Currently, VMware Tanzu 7.0 with normal configuration(without NAT) supports Kubernetes 1.22.
-The CSI driver can be installed on this cluster using Helm. Installation of CSI drivers in Tanzu via Operator has not been qualified.
-
-To login to the Tanzu cluster, download kubectl and kubectl vsphere binaries to any of the system
-
-Refer: https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-0F6E45C4-3CB1-4562-9370-686668519FCA.html
-
-Connect to the VCenter using kubectl vSphere commands as shown below.
-```bash
-
-kubectl vsphere login --insecure-skip-tls-verify --vsphere-username vSphere username --server=https:/// -v 5
-```
-Once login is done to the Tanzu cluster, the installation of CSI driver is done using kubectl binary similar to how we do for other systems.
-
-## Tanzu example
-
-![](../tanzu1.JPG)
-
-![](../tanzu2.JPG)
-
-![](../tanzu3.JPG)
-
-![](../tanzu4.JPG)
diff --git a/content/v1/csidriver/partners/tanzu1.JPG b/content/v1/csidriver/partners/tanzu1.JPG
deleted file mode 100644
index 78bdf6b437..0000000000
Binary files a/content/v1/csidriver/partners/tanzu1.JPG and /dev/null differ
diff --git a/content/v1/csidriver/partners/tanzu2.JPG b/content/v1/csidriver/partners/tanzu2.JPG
deleted file mode 100644
index 5b3ddb1ce0..0000000000
Binary files a/content/v1/csidriver/partners/tanzu2.JPG and /dev/null differ
diff --git a/content/v1/csidriver/partners/tanzu3.JPG b/content/v1/csidriver/partners/tanzu3.JPG
deleted file mode 100644
index f7acf62a9a..0000000000
Binary files a/content/v1/csidriver/partners/tanzu3.JPG and /dev/null differ
diff --git a/content/v1/csidriver/partners/tanzu4.JPG b/content/v1/csidriver/partners/tanzu4.JPG
deleted file mode 100644
index e74c8ff06d..0000000000
Binary files a/content/v1/csidriver/partners/tanzu4.JPG and /dev/null differ
diff --git a/content/v1/csidriver/partners/third.png b/content/v1/csidriver/partners/third.png
deleted file mode 100644
index 528158f3bb..0000000000
Binary files a/content/v1/csidriver/partners/third.png and /dev/null differ
diff --git a/content/v1/csidriver/release/operator.md b/content/v1/csidriver/release/operator.md
deleted file mode 100644
index 966b9c836f..0000000000
--- a/content/v1/csidriver/release/operator.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Operator
-description: Release notes for Dell CSI Operator
----
-
-## Release Notes - Dell CSI Operator 1.12.0
-{{% pageinfo color="primary" %}}
-The Dell CSI Operator is no longer actively maintained or supported. Dell CSI Operator has been replaced with [Dell CSM Operator](https://dell.github.io/csm-docs/docs/deployment/csmoperator/). If you are currently using Dell CSI Operator, refer to the [operator migration documentation](https://dell.github.io/csm-docs/docs/csidriver/installation/operator/operator_migration/) to migrate from Dell CSI Operator to Dell CSM Operator.
-
-{{% /pageinfo %}}
-
-### New Features/Changes
-
-- [Added support to Kubernetes 1.27](https://github.com/dell/csm/issues/761)
-- [Added support to Openshift 4.12](https://github.com/dell/csm/issues/571)
-- [Added Storage Capacity Tracking support for CSI-PowerScale](https://github.com/dell/csm/issues/824)
-- [Migrated image registry from k8s.gcr.io to registry.k8s.io](https://github.com/dell/csm/issues/744)
-- [Allow user to set Quota limit parameters from the PVC request in CSI PowerScale](https://github.com/dell/csm/issues/742)
-
->**Note:** There will be a delay in certification of Dell CSI Operator 1.12.0 and it will not be available for download from the Red Hat OpenShift certified catalog right away. The operator will still be available for download from the Red Hat OpenShift Community Catalog soon after the 1.12.0 release.
-
-### Fixed Issues
-
-- [CHAP is set to true in the CSI-PowerStore sample file in CSI Operator](https://github.com/dell/csm/issues/812)
-- [Vsphere credentials for vsphere secrets is expected when vsphere enable is set to false in CSI PowerMax](https://github.com/dell/csm/issues/799)
-
-### Known Issues
-There are no known issues in this release.
-
-### Support
-The Dell CSI Operator image is available on Docker Hub and is officially supported by Dell.
-For any CSI operator and driver issues, questions or feedback, please follow our [support process](../../../support/).
diff --git a/content/v1/csidriver/release/powerflex.md b/content/v1/csidriver/release/powerflex.md
index d28dfc2198..805a047174 100644
--- a/content/v1/csidriver/release/powerflex.md
+++ b/content/v1/csidriver/release/powerflex.md
@@ -3,21 +3,35 @@ title: PowerFlex
description: Release notes for PowerFlex CSI driver
---
-## Release Notes - CSI PowerFlex v2.8.0
+## Release Notes - CSI PowerFlex v2.9.2
+
+
### New Features/Changes
-- [#724 - [FEATURE]: CSM support for Openshift 4.13](https://github.com/dell/csm/issues/724)
-- [#763 - [FEATURE]: CSI-PowerFlex 4.0 NFS support](https://github.com/dell/csm/issues/763)
-- [#876 - [FEATURE]: CSI 1.5 spec support -StorageCapacityTracking](https://github.com/dell/csm/issues/876)
-- [#878 - [FEATURE]: CSI 1.5 spec support: Implement Volume Limits](https://github.com/dell/csm/issues/878)
-- [#885 - [FEATURE]: SDC 3.6.1 support](https://github.com/dell/csm/issues/885)
+- [#947 - [FEATURE]: Support for Kubernetes 1.28](https://github.com/dell/csm/issues/947)
+- [#1066 - [FEATURE]: Support for Openshift 4.14](https://github.com/dell/csm/issues/1066)
+- [#1067 - [FEATURE]: Support For PowerFlex 4.5](https://github.com/dell/csm/issues/1067)
+- [#851 - [FEATURE]: Helm Chart Enhancement - Container Images Configurable in values.yaml](https://github.com/dell/csm/issues/851)
+- [#905 - [FEATURE]: Add support for CSI Spec 1.6](https://github.com/dell/csm/issues/905)
+- [#996 - [FEATURE]: Dell CSI to Dell CSM Operator Migration Process](https://github.com/dell/csm/issues/996)
### Fixed Issues
-- [#916 - [BUG]: Remove references to deprecated io/ioutil package](https://github.com/dell/csm/issues/916)
+- [#1011 - [BUG]: PowerFlex RWX volume no option to configure the nfs export host access ip address.](https://github.com/dell/csm/issues/1011)
+- [#1014 - [BUG]: Missing error check for os.Stat call during volume publish](https://github.com/dell/csm/issues/1014)
+- [#1020 - [BUG]: CSI-PowerFlex: SDC Rename fails when configuring multiple arrays in the secret](https://github.com/dell/csm/issues/1020)
+- [#1030 - [BUG]: Comment out duplicate entries in the sample secret.yaml file](https://github.com/dell/csm/issues/1030)
+- [#1050 - [BUG]: NFS Export gets deleted when one pod is deleted from the multiple pods consuming the same PowerFlex RWX NFS volume](https://github.com/dell/csm/issues/1050)
+- [#1054 - [BUG]: The PowerFlex Dockerfile is incorrectly labeling the version as 2.7.0 for the 2.8.0 version.](https://github.com/dell/csm/issues/1054)
+- [#1057 - [BUG]: CSI Driver - issue with creation volume from 1 of the worker nodes](https://github.com/dell/csm/issues/1057)
+- [#1058 - [BUG]: CSI Health monitor for Node missing for CSM PowerFlex in Operator samples](https://github.com/dell/csm/issues/1058)
+- [#1061 - [BUG]: Golint is not installing with go get command](https://github.com/dell/csm/issues/1061)
+- [#1110 - [BUG]: Multi Controller defect - sidecars timeout](https://github.com/dell/csm/issues/1110)
+- [#1103 - [BUG]: CSM Operator doesn't apply fSGroupPolicy value to CSIDriver Object](https://github.com/dell/csm/issues/1103)
+- [#1152 - [BUG]: CSI driver changes to facilitate SDC brownfield deployments](https://github.com/dell/csm/issues/1152)
### Known Issues
@@ -27,7 +41,7 @@ description: Release notes for PowerFlex CSI driver
| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround: 1. Force delete the pod running on the node that went down 2. Delete the volumeattachment to the node that went down. Now the volume can be attached to the new node. |
| sdc:3.6.0.6 is causing issues while installing the csi-powerflex driver on ubuntu,RHEL8.3 | Workaround: Change the powerflexSdc to sdc:3.6 in values.yaml https://github.com/dell/csi-powerflex/blob/72b27acee7553006cc09df97f85405f58478d2e4/helm/csi-vxflexos/values.yaml#L13 |
| sdc:3.6.1 is causing issues while installing the csi-powerflex driver on ubuntu. | Workaround: Change the powerflexSdc to sdc:3.6 in values.yaml https://github.com/dell/csi-powerflex/blob/72b27acee7553006cc09df97f85405f58478d2e4/helm/csi-vxflexos/values.yaml#L13 |
-A CSI ephemeral pod may not get created in OpenShift 4.13 and fail with the error `"error when creating pod: the pod uses an inline volume provided by CSIDriver csi-unity.dellemc.com, and the namespace has a pod security enforcement level that is lower than privileged."` | This issue occurs because OpenShift 4.13 introduced the CSI Volume Admission plugin to restrict the use of a CSI driver capable of provisioning CSI ephemeral volumes during pod admission. Therefore, an additional label `security.openshift.io/csi-ephemeral-volume-profile` in [csidriver.yaml](https://github.com/dell/helm-charts/blob/csi-unity-2.8.0/charts/csi-unity/templates/csidriver.yaml) file with the required security profile value should be provided. Follow [OpenShift 4.13 documentation for CSI Ephemeral Volumes](https://docs.openshift.com/container-platform/4.13/storage/container_storage_interface/ephemeral-storage-csi-inline.html) for more information. |
+A CSI ephemeral pod may not get created in OpenShift 4.13 and fail with the error `"error when creating pod: the pod uses an inline volume provided by CSIDriver csi-vxflexos.dellemc.com, and the namespace has a pod security enforcement level that is lower than privileged."` | This issue occurs because OpenShift 4.13 introduced the CSI Volume Admission plugin to restrict the use of a CSI driver capable of provisioning CSI ephemeral volumes during pod admission. Therefore, an additional label `security.openshift.io/csi-ephemeral-volume-profile` in [csidriver.yaml](https://github.com/dell/helm-charts/blob/csi-vxflexos-2.9.1/charts/csi-vxflexos/templates/csidriver.yaml) file with the required security profile value should be provided. Follow [OpenShift 4.13 documentation for CSI Ephemeral Volumes](https://docs.openshift.com/container-platform/4.13/storage/container_storage_interface/ephemeral-storage-csi-inline.html) for more information. |
| If the volume limit is exhausted and there are pending pods and PVCs due to `exceed max volume count`, the pending PVCs will be bound to PVs and the pending pods will be scheduled to nodes when the driver pods are restarted. | It is advised not to have any pending pods or PVCs once the volume limit per node is exhausted on a CSI Driver. There is an open issue reported with kubenetes at https://github.com/kubernetes/kubernetes/issues/95911 with the same behavior. |
| The PowerFlex Dockerfile is incorrectly labeling the version as 2.7.0 for the 2.8.0 version. | Describe the driver pod using ```kubectl describe pod $podname -n vxflexos``` to ensure v2.8.0 is installed. |
diff --git a/content/v1/csidriver/release/powermax.md b/content/v1/csidriver/release/powermax.md
index 4c4ca10a66..2c5ccb9cd1 100644
--- a/content/v1/csidriver/release/powermax.md
+++ b/content/v1/csidriver/release/powermax.md
@@ -3,29 +3,36 @@ title: PowerMax
description: Release notes for PowerMax CSI driver
---
-## Release Notes - CSI PowerMax v2.8.0
+## Release Notes - CSI PowerMax v2.9.1
-{{% pageinfo color="primary" %}} Linked Proxy mode for CSI reverse proxy is no longer actively maintained or supported. It will be deprecated in CSM 1.9. It is highly recommended that you use stand alone mode going forward. {{% /pageinfo %}}
+>Note: Auto SRDF group creation is currently not supported in PowerMaxOS 10.1 (6079) Arrays.
> Note: Starting from CSI v2.4.0, Only Unisphere 10.0 REST endpoints are supported. It is mandatory that Unisphere should be updated to 10.0. Please find the instructions [here.](https://dl.dell.com/content/manual34878027-dell-unisphere-for-powermax-10-0-0-installation-guide.pdf?language=en-us&ps=true)
>Note: File Replication for PowerMax is currently not supported
-
### New Features/Changes
-- [#724 - [FEATURE]: CSM support for Openshift 4.13](https://github.com/dell/csm/issues/724)
-- [#861 - [FEATURE]: CSM for PowerMax file support ](https://github.com/dell/csm/issues/861)
-- [#876 - [FEATURE]: CSI 1.5 spec support -StorageCapacityTracking](https://github.com/dell/csm/issues/876)
-- [#877 - [FEATURE]: Make standalone helm chart available from helm repository : https://dell.github.io/dell/helm-charts](https://github.com/dell/csm/issues/877)
-- [#878 - [FEATURE]: CSI 1.5 spec support: Implement Volume Limits](https://github.com/dell/csm/issues/878)
-- [#922 - [FEATURE]: Use ubi9 micro as base image](https://github.com/dell/csm/issues/922)
-- [#937 - [FEATURE]: Google Anthos 1.15 support for PowerMax](https://github.com/dell/csm/issues/937)
+- [#947 - [FEATURE]: Support for Kubernetes 1.28](https://github.com/dell/csm/issues/947)
+- [#1066 - [FEATURE]: Support for Openshift 4.14](https://github.com/dell/csm/issues/1066)
+- [#851 - [FEATURE]: Helm Chart Enhancement - Container Images Configurable in values.yaml](https://github.com/dell/csm/issues/851)
+- [#905 - [FEATURE]: Add support for CSI Spec 1.6](https://github.com/dell/csm/issues/905)
+- [#991 - [FEATURE]:Remove linked proxy mode for PowerMax](https://github.com/dell/csm/issues/991)
+- [#996 - [FEATURE]: Dell CSI to Dell CSM Operator Migration Process](https://github.com/dell/csm/issues/996)
+- [#1062 - [FEATURE]: CSM PowerMax: Support PowerMax v10.1 ](https://github.com/dell/csm/issues/1062)
### Fixed Issues
-- [#916 - [BUG]: Remove references to deprecated io/ioutil package](https://github.com/dell/csm/issues/916)
+- [#983 - [BUG]: storageCapacity can be set in unsupported CSI Powermax with CSM Operator](https://github.com/dell/csm/issues/983)
+- [#1014 - [BUG]: Missing error check for os.Stat call during volume publish](https://github.com/dell/csm/issues/1014)
+- [#1037 - [BUG]: Document instructions update: Either Multi-Path or the Power-Path software should be enabled for PowerMax ](https://github.com/dell/csm/issues/1037)
+- [#1051 - [BUG]: make docker command is failing with error ](https://github.com/dell/csm/issues/1051)
+- [#1053 - [BUG]: make gosec is erroring out - Repos PowerMax,PowerStore,PowerScale (gosec is installed)](https://github.com/dell/csm/issues/1053)
+- [#1056 - [BUG]: Missing runtime dependencies reference in PowerMax README file.](https://github.com/dell/csm/issues/1056)
+- [#1061 - [BUG]: Golint is not installing with go get command](https://github.com/dell/csm/issues/1061)
+- [#1110 - [BUG]: Multi Controller defect - sidecars timeout](https://github.com/dell/csm/issues/1110)
+- [#1103 - [BUG]: CSM Operator doesn't apply fSGroupPolicy value to CSIDriver Object](https://github.com/dell/csm/issues/1103)
### Known Issues
@@ -34,6 +41,8 @@ description: Release notes for PowerMax CSI driver
| Unable to update Host: A problem occurred modifying the host resource | This issue occurs when the nodes do not have unique hostnames or when an IP address/FQDN with same sub-domains are used as hostnames. The workaround is to use unique hostnames or FQDN with unique sub-domains|
| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround: 1. Force delete the pod running on the node that went down 2. Delete the volumeattachment to the node that went down. Now the volume can be attached to the new node |
| If the volume limit is exhausted and there are pending pods and PVCs due to `exceed max volume count`, the pending PVCs will be bound to PVs and the pending pods will be scheduled to nodes when the driver pods are restarted. | It is advised not to have any pending pods or PVCs once the volume limit per node is exhausted on a CSI Driver. There is an open issue reported with kubenetes at https://github.com/kubernetes/kubernetes/issues/95911 with the same behavior. |
+| Automatic SRDF group creation is failing with "Unable to get Remote Port on SAN for Auto SRDF" for PowerMaxOS 10.1 arrays | Create the SRDF Group and add it to the storage class |
+| [Node stage is failing with error "wwn for FC device not found"](https://github.com/dell/csm/issues/1070)| This is an intermittent issue, rebooting the node will resolve this issue |
### Note:
- Support for Kubernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode will not be available in Openshift environment as Openshift doesn't support enabling of alpha features for Production Grade clusters.
diff --git a/content/v1/csidriver/release/powerscale.md b/content/v1/csidriver/release/powerscale.md
index cff4e6c1bd..2aed2774f7 100644
--- a/content/v1/csidriver/release/powerscale.md
+++ b/content/v1/csidriver/release/powerscale.md
@@ -4,26 +4,33 @@ description: Release notes for PowerScale CSI driver
---
-## Release Notes - CSI Driver for PowerScale v2.8.0
-
+## Release Notes - CSI Driver for PowerScale v2.9.1
### New Features/Changes
-- [#724 - [FEATURE]: CSM support for Openshift 4.13](https://github.com/dell/csm/issues/724)
-- [#877 - [FEATURE]: Make standalone helm chart available from helm repository : https://dell.github.io/dell/helm-charts](https://github.com/dell/csm/issues/877)
-- [#950 - [FEATURE]: PowerScale 9.5.0.4 support](https://github.com/dell/csm/issues/950)
-- [#967 - [FEATURE]: SLES15 SP4 support in csi powerscale](https://github.com/dell/csm/issues/967)
-- [#922 - [FEATURE]: Use ubi9 micro as base image](https://github.com/dell/csm/issues/922)
+- [#947 - [FEATURE]: Support for Kubernetes 1.28](https://github.com/dell/csm/issues/947)
+- [#1066 - [FEATURE]: Support for Openshift 4.14](https://github.com/dell/csm/issues/1066)
+- [#851 - [FEATURE]: Helm Chart Enhancement - Container Images Configurable in values.yaml](https://github.com/dell/csm/issues/851)
+- [#905 - [FEATURE]: Add support for CSI Spec 1.6](https://github.com/dell/csm/issues/905)
+- [#996 - [FEATURE]: Dell CSI to Dell CSM Operator Migration Process](https://github.com/dell/csm/issues/996)
### Fixed Issues
-- [#916 - [BUG]: Remove references to deprecated io/ioutil package](https://github.com/dell/csm/issues/916)
-- [#487 - [BUG]: Powerscale CSI driver RO PVC-from-snapshot wrong zone](https://github.com/dell/csm/issues/487)
+- [#771 - [BUG]: Gopowerscale unit test fails](https://github.com/dell/csm/issues/771)
+- [#990 - [BUG]: X_CSI_AUTH_TYPE cannot be set in CSM Operator](https://github.com/dell/csm/issues/990)
+- [#999 - [BUG]: Volume health fails because it looks to a wrong path](https://github.com/dell/csm/issues/999)
+- [#1014 - [BUG]: Missing error check for os.Stat call during volume publish](https://github.com/dell/csm/issues/1014)
+- [#1046 - [BUG]:Is cert-csi expansion expected to successfully run with enableQuota: false on PowerScale?](https://github.com/dell/csm/issues/1046)
+- [#1053 - [BUG]: make gosec is erroring out - Repos PowerMax,PowerStore,PowerScale (gosec is installed)](https://github.com/dell/csm/issues/1053)
+- [#1061 - [BUG]: Golint is not installing with go get command](https://github.com/dell/csm/issues/1061)
+- [#1110 - [BUG]: Multi Controller defect - sidecars timeout](https://github.com/dell/csm/issues/1110)
+- [#1103 - [BUG]: CSM Operator doesn't apply fSGroupPolicy value to CSIDriver Object](https://github.com/dell/csm/issues/1103)
### Known Issues
| Issue | Resolution or workaround, if known |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Storage capacity tracking does not return `MaximumVolumeSize` parameter. PowerScale is purely NFS based meaning it has no actual volumes. Therefore `MaximumVolumeSize` cannot be implemented if there is no volume creation. | CSI PowerScale 2.9.1 is compliant with CSI 1.6 specification since the field `MaximumVolumeSize` is optional. |
| If the length of the nodeID exceeds 128 characters, the driver fails to update the CSINode object and installation fails. This is due to a limitation set by CSI spec which doesn't allow nodeID to be greater than 128 characters. | The CSI PowerScale driver uses the hostname for building the nodeID which is set in the CSINode resource object, hence we recommend not having very long hostnames in order to avoid this issue. This current limitation of 128 characters is likely to be relaxed in future Kubernetes versions as per this issue in the community: https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/issues/581
**Note:** In kubernetes 1.22 this limit has been relaxed to 192 characters. |
| If some older NFS exports /terminated worker nodes still in NFS export client list, CSI driver tries to add a new worker node it fails (For RWX volume). | User need to manually clean the export client list from old entries to make successful addition of new worker nodes. |
| Delete namespace that has PVCs and pods created with the driver. The External health monitor sidecar crashes as a result of this operation. | Deleting the namespace deletes the PVCs first and then removes the pods in the namespace. This brings a condition where pods exist without their PVCs and causes the external-health-monitor sidecar to crash. This is a known issue and has been reported at https://github.com/kubernetes-csi/external-health-monitor/issues/100 |
diff --git a/content/v1/csidriver/release/powerstore.md b/content/v1/csidriver/release/powerstore.md
index 7f2d831a08..8d1f0f110b 100644
--- a/content/v1/csidriver/release/powerstore.md
+++ b/content/v1/csidriver/release/powerstore.md
@@ -3,22 +3,25 @@ title: PowerStore
description: Release notes for PowerStore CSI driver
---
-## Release Notes - CSI PowerStore v2.8.0
-
-
+## Release Notes - CSI PowerStore v2.9.1
### New Features/Changes
-- [#724 - [FEATURE]: CSM support for Openshift 4.13](https://github.com/dell/csm/issues/724)
-- [#877 - [FEATURE]: Make standalone helm chart available from helm repository : https://dell.github.io/dell/helm-charts](https://github.com/dell/csm/issues/877)
-- [#878 - [FEATURE]: CSI 1.5 spec support: Implement Volume Limits](https://github.com/dell/csm/issues/878)
-- [#879 - [FEATURE]: Configurable Volume Attributes use recommended naming convention /](https://github.com/dell/csm/issues/879)
-- [#922 - [FEATURE]: Use ubi9 micro as base image](https://github.com/dell/csm/issues/922)
+- [#947 - [FEATURE]: Support for Kubernetes 1.28](https://github.com/dell/csm/issues/947)
+- [#1066 - [FEATURE]: Support for Openshift 4.14](https://github.com/dell/csm/issues/1066)
+- [#851 - [FEATURE]: Helm Chart Enhancement - Container Images Configurable in values.yaml](https://github.com/dell/csm/issues/851)
+- [#905 - [FEATURE]: Add support for CSI Spec 1.6](https://github.com/dell/csm/issues/905)
+- [#996 - [FEATURE]: Dell CSI to Dell CSM Operator Migration Process](https://github.com/dell/csm/issues/996)
+- [#1031 - [FEATURE]: Update to the latest UBI Micro image for CSM](https://github.com/dell/csm/issues/1031)
### Fixed Issues
-- [#916 - [BUG]: Remove references to deprecated io/ioutil package](https://github.com/dell/csm/issues/916)
-- [#928 - [BUG]: PowerStore Replication - Delete RG request hangs](https://github.com/dell/csm/issues/928)
+- [#1006 - [BUG]: Too many login sessions in gopowerstore client causes unexpected session termination in UI](https://github.com/dell/csm/issues/1006)
+- [#1014 - [BUG]: Missing error check for os.Stat call during volume publish](https://github.com/dell/csm/issues/1014)
+- [#1053 - [BUG]: make gosec is erroring out - Repos PowerMax,PowerStore,PowerScale (gosec is installed)](https://github.com/dell/csm/issues/1053)
+- [#1061 - [BUG]: Golint is not installing with go get command](https://github.com/dell/csm/issues/1061)
+- [#1108 - [BUG]: Volumes failing to mount when customer using NVMeTCP on Powerstore](https://github.com/dell/csm/issues/1108)
+- [#1103 - [BUG]: CSM Operator doesn't apply fSGroupPolicy value to CSIDriver Object](https://github.com/dell/csm/issues/1103)
### Known Issues
@@ -32,6 +35,7 @@ description: Release notes for PowerStore CSI driver
| If an ephemeral pod is not being created in OpenShift 4.13 and is failing with the error "error when creating pod: the pod uses an inline volume provided by CSIDriver csi-powerstore.dellemc.com, and the namespace has a pod security enforcement level that is lower than privileged." | This issue occurs because OpenShift 4.13 introduced the CSI Volume Admission plugin to restrict the use of a CSI driver capable of provisioning CSI ephemeral volumes during pod admission (https://docs.openshift.com/container-platform/4.13/storage/container_storage_interface/ephemeral-storage-csi-inline.html). Therefore, an additional label "security.openshift.io/csi-ephemeral-volume-profile" needs to be added to the CSIDriver object to support inline ephemeral volumes. |
| In OpenShift 4.13, the root user is not allowed to perform write operations on NFS shares, when root squashing is enabled. | The workaround for this issue is to disable root squashing by setting allowRoot: "true" in the NFS storage class. |
| If the volume limit is exhausted and there are pending pods and PVCs due to `exceed max volume count`, the pending PVCs will be bound to PVs, and the pending pods will be scheduled to nodes when the driver pods are restarted. | It is advised not to have any pending pods or PVCs once the volume limit per node is exhausted on a CSI Driver. There is an open issue reported with Kubenetes at https://github.com/kubernetes/kubernetes/issues/95911 with the same behavior. |
+| If two separate networks are configured for ISCSI and NVMeTCP, the driver may encounter difficulty identifying the second network (e.g., NVMeTCP). | This is a known issue, and the workaround involves creating a single network on the array to serve both ISCSI and NVMeTCP purposes. |
### Note:
diff --git a/content/v1/csidriver/release/unity.md b/content/v1/csidriver/release/unity.md
index cd27378100..1c5bcfe8dc 100644
--- a/content/v1/csidriver/release/unity.md
+++ b/content/v1/csidriver/release/unity.md
@@ -3,21 +3,21 @@ title: Unity XT
description: Release notes for Unity XT CSI driver
---
-## Release Notes - CSI Unity XT v2.8.0
-
-
+## Release Notes - CSI Unity XT v2.9.1
### New Features/Changes
-- [#724 - [FEATURE]: CSM support for Openshift 4.13](https://github.com/dell/csm/issues/724)
-- [#876 - [FEATURE]: CSI 1.5 spec support -StorageCapacityTracking](https://github.com/dell/csm/issues/876)
-- [#877 - [FEATURE]: Make standalone helm chart available from helm repository : https://dell.github.io/dell/helm-charts](https://github.com/dell/csm/issues/877)
-- [#891 - [FEATURE]: Enhancing Unity XT driver to handle API requests after the sessionIdleTimeOut in STIG mode](https://github.com/dell/csm/issues/891)
+- [#947 - [FEATURE]: Support for Kubernetes 1.28](https://github.com/dell/csm/issues/947)
+- [#1066 - [FEATURE]: Support for Openshift 4.14](https://github.com/dell/csm/issues/1066)
+- [#851 - [FEATURE]: Helm Chart Enhancement - Container Images Configurable in values.yaml](https://github.com/dell/csm/issues/851)
+- [#905 - [FEATURE]: Add support for CSI Spec 1.6](https://github.com/dell/csm/issues/905)
+- [#996 - [FEATURE]: Dell CSI to Dell CSM Operator Migration Process](https://github.com/dell/csm/issues/996)
### Fixed Issues
-- [#849 - [BUG]: CSI driver does not verify iSCSI initiators on the array correctly](https://github.com/dell/csm/issues/849)
-- [#916 - [BUG]: Remove references to deprecated io/ioutil package](https://github.com/dell/csm/issues/916)
+- [#1014 - [BUG]: Missing error check for os.Stat call during volume publish](https://github.com/dell/csm/issues/1014)
+- [#1110 - [BUG]: Multi Controller defect - sidecars timeout](https://github.com/dell/csm/issues/1110)
+- [#1103 - [BUG]: CSM Operator doesn't apply fSGroupPolicy value to CSIDriver Object](https://github.com/dell/csm/issues/1103)
### Known Issues
diff --git a/content/v1/csidriver/troubleshooting/operator.md b/content/v1/csidriver/troubleshooting/operator.md
deleted file mode 100644
index 24770fb06b..0000000000
--- a/content/v1/csidriver/troubleshooting/operator.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Dell CSI Operator
-description: Troubleshooting Dell CSI Operator
----
-
----
-* Before installing the drivers, Dell CSI Operator tries to validate the Custom Resource being created. If some mandatory environment variables are missing or there is a type mismatch, then the Operator will report an error during the reconciliation attempts.
-
- Because of this, the status of the Custom Resource will change to "Failed" and the error captured in the "ErrorMessage" field in the status.
-
- For example - If the PowerMax driver was installed in the namespace test-powermax and has the name powermax, then run the command `kubectl get csipowermax/powermax -n test-powermax -o yaml` to get the Custom Resource details.
-
- If there was an error while installing the driver, then you would see a status like this:
-
- ```yaml
- status:
- status:
- errorMessage: mandatory Env - X_CSI_K8S_CLUSTER_PREFIX not specified in user spec
- state: Failed
- ```
-
- The state of the Custom Resource can also change to `Failed` because of any other prohibited updates or any failure while installing the driver. In order to recover from this failure, fix the error in the manifest and update/patch the Custom Resource
-
-* After an update to the driver, the controller pod may not have the latest desired specification.
-
- This happens when the controller pod was in a failed state before applying the update. Even though the Dell CSI Operator updates the pod template specification for the StatefulSet, the StatefulSet controller does not apply the update to the pod. This happens because of the unique nature of StatefulSets where the controller tries to retain the last known working state.
-
- To get around this problem, the Dell CSI Operator forces an update of the pod specification by deleting the older pod. In case the Dell CSI Operator fails to do so, delete the controller pod to force an update of the controller pod specification
-
-* The Status of the CSI Driver Custom Resource shows the state of the driver pods after installation. This state will not be updated automatically if there are any changes to the driver pods outside any Operator operations.
-
- At times because of inconsistencies in fetching data from the Kubernetes cache, the state of some driver pods may not be updated correctly in the status. To force an update of the state, you can update the Custom Resource forcefully by setting forceUpdate to true. If all the driver pods are in the `Available` State, then the state of the Custom Resource will be updated as `Running`
diff --git a/content/v1/csidriver/troubleshooting/powerflex.md b/content/v1/csidriver/troubleshooting/powerflex.md
index 62d7ba0aca..0951b42c81 100644
--- a/content/v1/csidriver/troubleshooting/powerflex.md
+++ b/content/v1/csidriver/troubleshooting/powerflex.md
@@ -29,6 +29,7 @@ description: Troubleshooting PowerFlex Driver
| In version v2.6.0, the driver is crashing because the External Health Monitor sidecar crashes when a persistent volume is not found. | This is a known issue reported at [kubernetes-csi/external-health-monitor#100](https://github.com/kubernetes-csi/external-health-monitor/issues/100). |
| In version v2.6.0, when a cluster node goes down, the block volumes attached to the node cannot be attached to another node. | This is a known issue reported at [kubernetes-csi/external-attacher#215](https://github.com/kubernetes-csi/external-attacher/issues/215). Workaround: 1. Force delete the pod running on the node that went down. 2. Delete the pod's persistent volume attachment on the node that went down. Now the volume can be attached to the new node.
A CSI ephemeral pod may not get created in OpenShift 4.13 and fail with the error `"error when creating pod: the pod uses an inline volume provided by CSIDriver csi-vxflexos.dellemc.com, and the namespace has a pod security enforcement level that is lower than privileged."` | This issue occurs because OpenShift 4.13 introduced the CSI Volume Admission plugin to restrict the use of a CSI driver capable of provisioning CSI ephemeral volumes during pod admission. Therefore, an additional label `security.openshift.io/csi-ephemeral-volume-profile` in [csidriver.yaml](https://github.com/dell/helm-charts/blob/csi-vxflexos-2.8.0/charts/csi-vxflexos/templates/csidriver.yaml) file with the required security profile value should be provided. Follow [OpenShift 4.13 documentation for CSI Ephemeral Volumes](https://docs.openshift.com/container-platform/4.13/storage/container_storage_interface/ephemeral-storage-csi-inline.html) for more information. |
+| Standby controller pod is in CrashLoopBackOff state | Scale down the replica count of the controller pod's deployment to 1 using ```kubectl scale deployment <deployment-name> --replicas=1 -n <namespace>``` (see the example below). |
->*Note*: `vxflexos-controller-*` is the controller pod that acquires leader lease
+>
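For illustration, here is a sketch of that workaround against a hypothetical PowerFlex install; the deployment name and namespace (`vxflexos-controller`, `vxflexos`) are assumptions and should be replaced with the values from your cluster:

```bash
# Scale the controller deployment down to a single replica so the failing
# standby pod is removed (names below are assumptions, not fixed values).
kubectl scale deployment vxflexos-controller --replicas=1 -n vxflexos

# Confirm that one controller replica is running.
kubectl get deployment vxflexos-controller -n vxflexos
```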
diff --git a/content/v1/csidriver/troubleshooting/powermax.md b/content/v1/csidriver/troubleshooting/powermax.md
index 88e61470b4..502913ef3a 100644
--- a/content/v1/csidriver/troubleshooting/powermax.md
+++ b/content/v1/csidriver/troubleshooting/powermax.md
@@ -18,4 +18,5 @@ description: Troubleshooting PowerMax Driver
| CreateHost failed with error `initiator is already part of different host.` | Update modifyHostName to true in values.yaml Or Remove the initiator from existing host |
| `kubectl logs powermax-controller-<pod-suffix> -n <namespace>` driver logs says connection refused and the reverseproxy logs says "Failed to setup server.(secrets \"secret-name\" not found)" | Make sure the given secret exists on the cluster |
| nodestage is failing with error `Error invalid IQN Target iqn.EMC.0648.SE1F` | 1. Update the initiator name to the full default name, e.g., iqn.1993-08.org.debian:01:e9afae962192 2. Ensure that the iSCSI initiators are available on all the nodes where the driver node plugin will be installed and that they use the full default name. |
-| Volume mount is failing on few OS(ex:VMware Virtual Platform) during node publish with error `wrong fs type, bad option, bad superblock` | 1. Check the multipath configuration(if enabled) 2. Edit Vm Advanced settings->hardware and add the param `disk.enableUUID=true` and reboot the node |
+| Volume mount is failing on a few operating systems (e.g., VMware Virtual Platform) during node publish with error `wrong fs type, bad option, bad superblock` | 1. Check the multipath configuration (if enabled) 2. Edit VM Advanced settings -> hardware, add the parameter `disk.enableUUID=true`, and reboot the node |
+| Standby controller pod is in CrashLoopBackOff state | Scale down the replica count of the controller pod's deployment to 1 using ```kubectl scale deployment <deployment-name> --replicas=1 -n <namespace>``` |
diff --git a/content/v1/csidriver/troubleshooting/powerscale.md b/content/v1/csidriver/troubleshooting/powerscale.md
index d2f1e75667..9d783f7693 100644
--- a/content/v1/csidriver/troubleshooting/powerscale.md
+++ b/content/v1/csidriver/troubleshooting/powerscale.md
@@ -12,9 +12,11 @@ Here are some installation failures that might be encountered and how to mitigat
|The `kubectl logs isilon-controller-0 -n isilon -c driver` logs shows the driver error: **create volume failed, Access denied. create directory as requested** | This situation can happen when the user who created the base path is different from the user configured for the driver. Make sure the user used to deploy the CSI driver has enough rights on the base path (i.e., isiPath) to perform all operations. |
|Volume/filesystem is allowed to mount by any host in the network, though that host is not a part of the export of that particular volume under /ifs directory | "Dell PowerScale: OneFS NFS Design Considerations and Best Practices": There is a default shared directory (ifs) of OneFS, which lets clients running Windows, UNIX, Linux, or Mac OS X access the same directories and files. It is recommended to disable the ifs shared directory in a production environment and create dedicated NFS exports and SMB shares for your workload. |
| Creating snapshot fails if the parameter IsiPath in volume snapshot class and related storage class is not the same. The driver uses the incorrect IsiPath parameter and tries to locate the source volume due to the inconsistency. | Ensure IsiPath in VolumeSnapshotClass yaml and related storageClass yaml are the same. |
-| While deleting a volume, if there are files or folders created on the volume that are owned by different users. If the Isilon credentials used are for a nonprivileged Isilon user, the delete volume action fails. It is due to the limitation in Linux permission control. | To perform the delete volume action, the user account must be assigned a role that has the privilege ISI_PRIV_IFS_RESTORE. The user account must have the following set of privileges to ensure that all the CSI Isilon driver capabilities work properly: * ISI_PRIV_LOGIN_PAPI * ISI_PRIV_NFS * ISI_PRIV_QUOTA * ISI_PRIV_SNAPSHOT * ISI_PRIV_IFS_RESTORE * ISI_PRIV_NS_IFS_ACCESS In some cases, ISI_PRIV_BACKUP is also required, for example, when files owned by other users have mode bits set to 700. |
+| While deleting a volume, if there are files or folders on the volume that are owned by different users and the Isilon credentials used are for a nonprivileged Isilon user, the delete volume action fails due to a limitation in Linux permission control. | To perform the delete volume action, the user account must be assigned a role that has the privilege ISI_PRIV_IFS_RESTORE. The user account must have the following set of privileges to ensure that all the CSI Isilon driver capabilities work properly: * ISI_PRIV_LOGIN_PAPI * ISI_PRIV_NFS * ISI_PRIV_QUOTA * ISI_PRIV_SNAPSHOT * ISI_PRIV_IFS_RESTORE * ISI_PRIV_NS_IFS_ACCESS * ISI_PRIV_STATISTICS In some cases, ISI_PRIV_BACKUP is also required, for example, when files owned by other users have mode bits set to 700. |
| If the hostname is mapped to loopback IP in /etc/hosts file, and pods are created using 1.3.0.1 release, after upgrade to driver version 1.4.0 or later there is a possibility of "localhost" as a stale entry in export | Recommended setup: User should not map a hostname to loopback IP in /etc/hosts file |
| Driver node pod is in "CrashLoopBackOff" because the generated "Node ID" does not have a proper FQDN. | This might be due to the "dnsPolicy" implemented on the driver node pod, which may differ across networks.
This parameter is configurable in both the Helm and Operator installers, and the user can try different "dnsPolicy" values according to the environment.|
| The `kubectl logs isilon-controller-0 -n isilon -c driver` logs shows the driver **Authentication failed. Trying to re-authenticate** when using Session-based authentication | The issue has been resolved from OneFS 9.3 onwards. For OneFS versions prior to 9.3 with session-based authentication, either SmartConnect can be created against a single node of Isilon, or the CSI Driver can be installed/pointed to a particular node of the Isilon; alternatively, basic authentication can be used by setting isiAuthType in `values.yaml` to 0 |
| When an attempt is made to create more than one ReadOnly PVC from the same volume snapshot, the second and subsequent requests result in PVCs in state `Pending`, with a warning `another RO volume from this snapshot is already present`. This is because the driver allows only one RO volume from a specific snapshot at any point in time. This is to allow faster creation(within a few seconds) of a RO PVC from a volume snapshot irrespective of the size of the volume snapshot. | Wait for the deletion of the first RO PVC created from the same volume snapshot. |
-|Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.22.0 < 1.25.0 which is incompatible with Kubernetes V1.22.11-mirantis-1 | If you are using an extended Kubernetes version, please see the [helm Chart](https://github.com/dell/csi-powerscale/blob/main/helm/csi-isilon/Chart.yaml) and use the alternate kubeVersion check that is provided in the comments. Please note that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported.|
+|Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.22.0 < 1.25.0 which is incompatible with Kubernetes V1.22.11-mirantis-1 | If you are using an extended Kubernetes version, please see the [helm Chart](https://github.com/dell/helm-charts/blob/main/charts/csi-isilon/Chart.yaml) and use the alternate kubeVersion check that is provided in the comments. Please note that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported.|
+| Standby controller pod is in CrashLoopBackOff state | Scale down the replica count of the controller pod's deployment to 1 using ```kubectl scale deployment <deployment-name> --replicas=1 -n <namespace>``` |
+| Driver install fails because of an incompatible helm values file specified in ```dell-csi-helm-installer``` - expected: v2.9.x, found: v2.8.0. | Change the driver version in each file in ```dell/csi-powerscale/dell-csi-helm-installer``` from 2.8.0 to 2.9.x (see the sketch below). |
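One hedged way to make that change across the installer scripts; the exact file set and target version (2.9.1 here) are assumptions, so inspect the files before and after editing:

```bash
# Sketch: update the hard-coded driver version string in every file under
# the installer directory (the 2.9.1 target version is an assumption).
cd dell/csi-powerscale/dell-csi-helm-installer
grep -rl '2.8.0' . | xargs sed -i 's/2\.8\.0/2.9.1/g'
```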
diff --git a/content/v1/csidriver/troubleshooting/powerstore.md b/content/v1/csidriver/troubleshooting/powerstore.md
index a788b011e6..9121f8c0ea 100644
--- a/content/v1/csidriver/troubleshooting/powerstore.md
+++ b/content/v1/csidriver/troubleshooting/powerstore.md
@@ -12,4 +12,10 @@ description: Troubleshooting PowerStore Driver
| If the NVMeFC pod is not getting created and the host loses the ssh connection, causing the driver pods to go into an error state | Remove the nvme_tcp module from the host in case of an NVMeFC connection |
| When a node goes down, the block volumes attached to the node cannot be attached to another node | 1. Force delete the pod running on the node that went down 2. Delete the volumeattachment to the node that went down. Now the volume can be attached to the new node. |
| If the pod creation for NVMe takes time when the connections between the host and the array are more than 2 and considerable volumes are mounted on the host | Reduce the number of connections between the host and the array to 2. |
-|Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.22.0 < 1.25.0 which is incompatible with Kubernetes V1.22.11-mirantis-1 | If you are using an extended Kubernetes version, please see the [helm Chart](https://github.com/dell/helm-charts/blob/main/charts/csi-powerstore/Chart.yaml) and use the alternate kubeVersion check that is provided in the comments. Please note that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported.|
\ No newline at end of file
+|Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.22.0 < 1.25.0 which is incompatible with Kubernetes V1.22.11-mirantis-1 | If you are using an extended Kubernetes version, please see the [helm Chart](https://github.com/dell/helm-charts/blob/main/charts/csi-powerstore/Chart.yaml) and use the alternate kubeVersion check that is provided in the comments. Please note that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported.|
+| If two separate networks are configured for iSCSI and NVMeTCP, the driver may encounter difficulty identifying the second network (e.g., NVMeTCP). | This is a known issue, and the workaround involves creating a single network on the array to serve both iSCSI and NVMeTCP purposes. |
+| Unable to provision PVCs via the driver | Ensure that the NAS name matches the one provided on the array side. |
+| Unable to install or upgrade the driver | Ensure that the firewall is configured to grant adequate permissions for downloading images from the registry. |
+| Faulty paths in the multipath | Ensure that the configuration of the multipath is correct and connectivity to the underlying hardware is intact. |
+| Unable to install or upgrade the driver due to a minimum Kubernetes or OpenShift version | Currently, CSM only supports the n, n-1, and n-2 versions of Kubernetes and OpenShift. If you still want to continue with an existing version, update `verify.sh` to continue. |
+| Volumes are not getting deleted on the array when PVs are deleted | Ensure `persistentVolumeReclaimPolicy` is set to Delete (see the check below). |
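For the last item, a quick way to inspect the reclaim policy of every PersistentVolume:

```bash
# List each PV with its reclaim policy; volumes that should be removed from
# the array when the PV is deleted must show "Delete".
kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM_POLICY:.spec.persistentVolumeReclaimPolicy
```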
diff --git a/content/v1/csidriver/troubleshooting/unity.md b/content/v1/csidriver/troubleshooting/unity.md
index 6a380b5754..e3a180e923 100644
--- a/content/v1/csidriver/troubleshooting/unity.md
+++ b/content/v1/csidriver/troubleshooting/unity.md
@@ -14,3 +14,4 @@ description: Troubleshooting Unity XT Driver
| PVC creation fails on a fresh cluster with **iSCSI** and **NFS** protocols alone enabled with error **failed to provision volume with StorageClass "unity-iscsi": error generating accessibility requirements: no available topology found**. | This is because iSCSI initiator login takes longer than the node pod startup time. This can be overcome by bouncing the driver node pods in the cluster with **kubectl get pods -n unity --no-headers=true \| awk '/unity-/{print $1}'\| xargs kubectl delete -n unity pod** |
| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.24.0 < 1.29.0 which is incompatible with Kubernetes 1.24.6-mirantis-1` | If you are using an extended Kubernetes version, see the helm Chart at `helm/csi-unity/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. *Note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. |
| When a node goes down, the block volumes attached to the node cannot be attached to another node | 1. Force delete the pod running on the node that went down 2. Delete the VolumeAttachment to the node that went down. Now the volume can be attached to the new node. |
+| Standby controller pod is in CrashLoopBackOff state | Scale down the replica count of the controller pod's deployment to 1 using ```kubectl scale deployment <deployment-name> --replicas=1 -n <namespace>``` |
diff --git a/content/v1/csidriver/uninstall/_index.md b/content/v1/csidriver/uninstall/_index.md
index 3133966b9c..5667a4122f 100644
--- a/content/v1/csidriver/uninstall/_index.md
+++ b/content/v1/csidriver/uninstall/_index.md
@@ -29,14 +29,7 @@ Options:
-h Help
```
-## Uninstall a CSI driver installed via Dell CSI Operator
+## Uninstall a CSI driver installed via Dell CSM Operator
-For uninstalling any CSI drivers deployed by the Dell CSI Operator, just delete the respective Custom Resources.
-This can be done using OperatorHub GUI by deleting the CR or via kubectl.
-
-For example - To uninstall the driver installed via the operator, delete the Custom Resource(CR)
+For uninstalling any CSI drivers deployed by the Dell CSM Operator, refer to the instructions [here](../../deployment/csmoperator/drivers/#uninstall-csi-driver).
-#Replace driver-type, driver-name and driver-namespace with their respective values
-```bash
-kubectl delete / -n
-```
diff --git a/content/v1/csidriver/upgradation/drivers/isilon.md b/content/v1/csidriver/upgradation/drivers/isilon.md
index c0b37f9221..b278408360 100644
--- a/content/v1/csidriver/upgradation/drivers/isilon.md
+++ b/content/v1/csidriver/upgradation/drivers/isilon.md
@@ -6,21 +6,21 @@ tags:
weight: 1
Description: Upgrade PowerScale CSI driver
---
-You can upgrade the CSI Driver for Dell PowerScale using Helm or Dell CSI Operator.
+You can upgrade the CSI Driver for Dell PowerScale using Helm or Dell CSM Operator.
-## Upgrade Driver from version 2.7.0 to 2.8.0 using Helm
+## Upgrade Driver from version 2.8.0 to 2.9.1 using Helm
**Note:** While upgrading the driver via Helm, the controllerCount variable in myvalues.yaml can be at most one less than the number of worker nodes.
### Steps
-1. Clone the repository using `git clone -b v2.8.0 https://github.com/dell/csi-powerscale.git`
+1. Clone the repository using `git clone -b v2.9.1 https://github.com/dell/csi-powerscale.git`
2. Change to directory dell-csi-helm-installer to install the Dell PowerScale `cd dell-csi-helm-installer`
3. Download the default values.yaml using following command:
```bash
- wget -O my-isilon-settings.yaml https://raw.githubusercontent.com/dell/helm-charts/csi-isilon-2.8.0/charts/csi-isilon/values.yaml
+ wget -O my-isilon-settings.yaml https://raw.githubusercontent.com/dell/helm-charts/csi-isilon-2.9.1/charts/csi-isilon/values.yaml
```
Edit the _my-isilon-settings.yaml_ as per the requirements.
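With the values file edited, the remaining step is to re-run the bundled installer in upgrade mode; a minimal sketch, assuming the driver runs in the `isilon` namespace:

```bash
# Run the install script in upgrade mode with the edited values file
# (the "isilon" namespace is an assumption; use your driver's namespace).
bash csi-install.sh --namespace isilon --values ./my-isilon-settings.yaml --upgrade
```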
diff --git a/content/v1/csidriver/upgradation/drivers/operator.md b/content/v1/csidriver/upgradation/drivers/operator.md
deleted file mode 100644
index 5924444a80..0000000000
--- a/content/v1/csidriver/upgradation/drivers/operator.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: "Dell CSI Operator"
-tags:
- - upgrade
- - csi-driver
-weight: 1
-Description: Upgrade Dell CSI Operator
----
-
-{{% pageinfo color="primary" %}}
-The Dell CSI Operator is no longer actively maintained or supported. It will be deprecated in CSM 1.9. It is highly recommended that you use [CSM Operator](../../../../deployment/csmoperator) going forward.
-{{% /pageinfo %}}
-
-To upgrade Dell CSI Operator, perform the following steps.
-Dell CSI Operator can be upgraded based on the supported platforms in one of the 2 ways:
-1. Using script (for non-OLM based installation)
-2. Using Operator Lifecycle Manager (OLM)
-
-
-### Using Installation Script
-1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.12.0 https://github.com/dell/dell-csi-operator.git`.
-2. cd dell-csi-operator
-3. Execute `bash scripts/install.sh --upgrade`. This command will install the latest version of the operator.
-
-### Using OLM
-The upgrade of the Dell CSI Operator is done via Operator Lifecycle Manager.
-
-The `Update approval` (**`InstallPlan`** in OLM terms) strategy plays a role while upgrading dell-csi-operator on OpenShift. This option can be set during installation of dell-csi-operator on OpenShift via the console and can be either set to `Manual` or `Automatic`.
- - If the **`Update approval`** is set to `Automatic`, OpenShift automatically detects whenever the latest version of dell-csi-operator is available in the **`Operator hub`**, and upgrades it to the latest available version.
- - If the upgrade policy is set to `Manual`, OpenShift notifies of an available upgrade. This notification can be viewed by the user in the **`Installed Operators`** section of the OpenShift console. Clicking on the hyperlink to `Approve` the installation would trigger the dell-csi-operator upgrade process.
-
-**NOTE**: The recommended version of OLM for Upstream Kubernetes is **`v0.18.3`** when upgrading operator to `v1.12.0`.
-
diff --git a/content/v1/csidriver/upgradation/drivers/powerflex.md b/content/v1/csidriver/upgradation/drivers/powerflex.md
index 4890384a19..4e51d479ba 100644
--- a/content/v1/csidriver/upgradation/drivers/powerflex.md
+++ b/content/v1/csidriver/upgradation/drivers/powerflex.md
@@ -8,11 +8,11 @@ weight: 1
Description: Upgrade PowerFlex CSI driver
---
-You can upgrade the CSI Driver for Dell PowerFlex using Helm or Dell CSI Operator.
+You can upgrade the CSI Driver for Dell PowerFlex using Helm or Dell CSM Operator.
-## Update Driver from v2.7.1 to v2.8 using Helm
+## Update Driver from v2.8.0 to v2.9.2 using Helm
**Steps**
-1. Run `git clone -b v2.8.0 https://github.com/dell/csi-powerflex.git` to clone the git repository and get the v2.8.0 driver.
+1. Run `git clone -b v2.9.2 https://github.com/dell/csi-powerflex.git` to clone the git repository and get the v2.9.2 driver.
2. You need to create secret.yaml with the configuration of your system.
Check this section in installation documentation: [Install the Driver](../../../installation/helm/powerflex#install-the-driver)
3. Update the myvalues file as needed.
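The upgrade is then completed by re-running the installer with the upgrade flag; a sketch assuming the default `vxflexos` namespace:

```bash
# Sketch: re-run the installer in upgrade mode from the installer directory
# (the namespace and values file name are assumptions).
cd dell-csi-helm-installer
bash csi-install.sh --namespace vxflexos --values ./myvalues.yaml --upgrade
```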
diff --git a/content/v1/csidriver/upgradation/drivers/powermax.md b/content/v1/csidriver/upgradation/drivers/powermax.md
index 9672a34819..e76bb61eb2 100644
--- a/content/v1/csidriver/upgradation/drivers/powermax.md
+++ b/content/v1/csidriver/upgradation/drivers/powermax.md
@@ -8,7 +8,7 @@ weight: 1
Description: Upgrade PowerMax CSI driver
---
-You can upgrade CSI Driver for Dell PowerMax using Helm or Dell CSI Operator.
+You can upgrade CSI Driver for Dell PowerMax using Helm or Dell CSM Operator.
**Note:** CSI Driver for PowerMax v2.4.0 requires 10.0 REST endpoint support of Unisphere.
### Updating the CSI Driver to use 10.0 Unisphere
@@ -16,10 +16,10 @@ You can upgrade CSI Driver for Dell PowerMax using Helm or Dell CSI Operator.
1. Upgrade the Unisphere to have 10.0 endpoint support.Please find the instructions [here.](https://dl.dell.com/content/manual34878027-dell-unisphere-for-powermax-10-0-0-installation-guide.pdf?language=en-us&ps=true)
2. Update the `my-powermax-settings.yaml` to have endpoint with 10.0 support.
-## Update Driver from v2.7 to v2.8 using Helm
+## Update Driver from v2.8 to v2.9.1 using Helm
**Steps**
-1. Run `git clone -b v2.8.0 https://github.com/dell/csi-powermax.git` to clone the git repository and get the driver.
+1. Run `git clone -b v2.9.1 https://github.com/dell/csi-powermax.git` to clone the git repository and get the driver.
2. Update the values file as needed.
3. Run the `csi-install` script with the option _\-\-upgrade_ by running:
```bash
diff --git a/content/v1/csidriver/upgradation/drivers/powerstore.md b/content/v1/csidriver/upgradation/drivers/powerstore.md
index fddeeefe09..28204eaa21 100644
--- a/content/v1/csidriver/upgradation/drivers/powerstore.md
+++ b/content/v1/csidriver/upgradation/drivers/powerstore.md
@@ -9,12 +9,12 @@ Description: Upgrade PowerStore CSI driver
You can upgrade the CSI Driver for Dell PowerStore using Helm.
-## Update Driver from v2.7 to v2.8 using Helm
+## Update Driver from v2.8.0 to v2.9.1 using Helm
Note: While upgrading the driver via Helm, the controllerCount variable in myvalues.yaml can be at most one less than the number of worker nodes.
**Steps**
-1. Run `git clone -b v2.8.0 https://github.com/dell/csi-powerstore.git` to clone the git repository and get the driver.
+1. Run `git clone -b v2.9.1 https://github.com/dell/csi-powerstore.git` to clone the git repository and get the driver.
2. Edit `samples/secret/secret.yaml` file and configure connection information for your PowerStore arrays changing the following parameters:
- *endpoint*: defines the full URL path to the PowerStore API.
- *globalID*: specifies what storage cluster the driver should use
@@ -38,7 +38,7 @@ Note: While upgrading the driver via helm, controllerCount variable in myvalues.
kubectl create secret generic powerstore-config -n csi-powerstore --from-file=config=secret.yaml
```
-5. Download the default values.yaml file `cd dell-csi-helm-installer && wget -O my-powerstore-settings.yaml https://github.com/dell/helm-charts/raw/csi-powerstore-2.8.0/charts/csi-powerstore/values.yaml` and update parameters as per the requirement.
+5. Download the default values.yaml file `cd dell-csi-helm-installer && wget -O my-powerstore-settings.yaml https://github.com/dell/helm-charts/raw/csi-powerstore-2.9.1/charts/csi-powerstore/values.yaml` and update parameters as per the requirement.
6. Run the `csi-install` script with the option _\-\-upgrade_ by running:
```bash
diff --git a/content/v1/csidriver/upgradation/drivers/unity.md b/content/v1/csidriver/upgradation/drivers/unity.md
index efab1b693f..3b8c0108d9 100644
--- a/content/v1/csidriver/upgradation/drivers/unity.md
+++ b/content/v1/csidriver/upgradation/drivers/unity.md
@@ -7,7 +7,7 @@ weight: 1
Description: Upgrade Unity XT CSI driver
---
-You can upgrade the CSI Driver for Dell Unity XT using Helm or Dell CSI Operator.
+You can upgrade the CSI Driver for Dell Unity XT using Helm or Dell CSM Operator.
**Note:**
1. User has to re-create existing custom-storage classes (if any) according to the latest format.
@@ -20,9 +20,9 @@ You can upgrade the CSI Driver for Dell Unity XT using Helm or Dell CSI Operator
Preparing myvalues.yaml is the same as explained in the install section.
-To upgrade the driver from csi-unity v2.7.0 to csi-unity v2.8.0
+To upgrade the driver from csi-unity v2.8.0 to csi-unity v2.9.1
-1. Get the latest csi-unity v2.8.0 code from Github using `git clone -b v2.8.0 https://github.com/dell/csi-unity.git`.
+1. Get the latest csi-unity v2.9.1 code from Github using `git clone -b v2.9.1 https://github.com/dell/csi-unity.git`.
2. Copy the helm/csi-unity/values.yaml to the new location csi-unity/dell-csi-helm-installer and rename it to myvalues.yaml. Customize settings for installation by editing myvalues.yaml as needed.
3. Navigate to the csi-unity/dell-csi-helm-installer folder and execute this command:
```bash
diff --git a/content/v1/deployment/_index.md b/content/v1/deployment/_index.md
index f9ae6b5cb6..69be54cedd 100644
--- a/content/v1/deployment/_index.md
+++ b/content/v1/deployment/_index.md
@@ -2,7 +2,7 @@
title: "Deployment"
linkTitle: "Deployment"
no_list: true
-description: Deployment of CSM for Replication
+description: Deployment of CSM
weight: 1
---
@@ -30,39 +30,32 @@ The Container Storage Modules and the required CSI Drivers can each be deployed
[...More on installation instructions](csminstallationwizard)
{{< /card >}}
{{< card header="[Dell CSI Drivers Installation via offline installer](../csidriver/installation/offline)"
- footer="[Offline installation for all drivers](../csidriver/installation/offline)">}}
- Both Helm and Dell CSI opetor supports offline installation of the Dell CSI Storage Providers via `csi-offline-bundle.sh` script by creating a usable package.
+ footer="[Offline installation for all drivers](../csidriver/installation/offline) [Offline installation with Operator](csmoperator/#offline-bundle-installation-on-a-cluster-without-olm)">}}
+ Both Helm and the Dell CSM operator support offline installation of the Dell CSI Storage Providers via the `csi-offline-bundle.sh` or `csm-offline-bundle.sh` script, respectively, by creating a usable package.
[...More on installation instructions](../csidriver/installation/offline)
{{< /card >}}
{{< /cardpane >}}
-{{< cardpane >}}
- {{< card header="[Dell CSI Drivers Installation via operator](../csidriver/installation/operator)"
- footer="Installs [PowerStore](../csidriver/installation/operator/powerstore/) [PowerMax](../csidriver/installation/operator/powermax/) [PowerScale](../csidriver/installation/operator/isilon/) [PowerFlex](../csidriver/installation/operator/powerflex/) [Unity XT](../csidriver/installation/operator/unity/)">}}
- Dell CSI Operator is a Kubernetes Operator, which can be used to install and manage the CSI Drivers provided by Dell for various storage platforms. This operator is available as a community operator for upstream Kubernetes and can be deployed using OperatorHub.io. It is also available as a certified operator for OpenShift clusters and can be deployed using the OpenShift Container Platform. Both these methods of installation use OLM (Operator Lifecycle Manager). The operator can also be deployed manually.
- [...More on installation instructions](../csidriver/installation/operator)
- {{< /card >}}
-{{< /cardpane >}}
{{< cardpane >}}
{{< card header="[Dell Container Storage Module for Observability](../observability/deployment)"
footer="Installs Observability Module">}}
- CSM for Observability can be deployed either via Helm or CSM for Observability Installer or CSM for Observability Offline Installer
+ CSM for Observability can be deployed via Helm, the CSM operator, the CSM for Observability Installer, or the CSM for Observability Offline Installer
[...More on installation instructions](../observability/deployment)
{{< /card >}}
{{< card header="[Dell Container Storage Module for Authorization](../authorization/deployment)"
footer="Installs Authorization Module">}}
- CSM Authorization can be installed by using the provided Helm v3 charts on Kubernetes platforms.
+ CSM Authorization can be installed on Kubernetes platforms by using the provided Helm v3 charts or the CSM operator.
[...More on installation instructions](../authorization/deployment)
{{< /card >}}
{{< /cardpane >}}
{{< cardpane >}}
{{< card header="[Dell Container Storage Module for Resiliency](../resiliency/deployment)"
footer="Installs Resiliency Module">}}
- CSI drivers that support Helm chart installation allow CSM for Resiliency to be _optionally_ installed by variables in the chart. It can be updated via _podmon_ block specified in the _values.yaml_
+ CSI drivers that support Helm chart installation allow CSM for Resiliency to be _optionally_ installed by variables in the chart. It can be updated via the _podmon_ block specified in the _values.yaml_. It can also be installed via the CSM operator.
[...More on installation instructions](../resiliency/deployment)
{{< /card >}}
{{< card header="[Dell Container Storage Module for Replication](../replication/deployment)"
footer="Installs Replication Module">}}
- Replication module can be installed by installing repctl,Container Storage Modules (CSM) for Replication Controller,CSI driver after enabling replication.
+ The Replication module can be installed by installing repctl, the Container Storage Modules (CSM) for Replication Controller, and the CSI driver after enabling replication. It can also be installed via the CSM operator.
[...More on installation instructions](../replication/deployment)
{{< /card >}}
{{< /cardpane >}}
diff --git a/content/v1/deployment/csminstallationwizard/_index.md b/content/v1/deployment/csminstallationwizard/_index.md
index 6d9b4d13d9..ad5c047b52 100644
--- a/content/v1/deployment/csminstallationwizard/_index.md
+++ b/content/v1/deployment/csminstallationwizard/_index.md
@@ -13,14 +13,19 @@ The [Dell Container Storage Modules Installation Wizard](./src/index.html) is a
| CSI Driver | Version | Helm | Operator |
| ------------------ | --------- | ------ | --------- |
+| CSI PowerStore | 2.9.1 |✔️ |✔️ |
| CSI PowerStore | 2.8.0 |✔️ |✔️ |
| CSI PowerStore | 2.7.0 |✔️ |✔️ |
+| CSI PowerMax | 2.9.1 |✔️ |✔️ |
| CSI PowerMax | 2.8.0 |✔️ |✔️ |
-| CSI PowerMax | 2.7.0 |✔️ |✔️ |
+| CSI PowerMax | 2.7.0 |✔️ |✔️ |
+| CSI PowerFlex | 2.9.1 |✔️ |❌ |
| CSI PowerFlex | 2.8.0 |✔️ |❌ |
| CSI PowerFlex | 2.7.0 |✔️ |❌ |
+| CSI PowerScale | 2.9.1 |✔️ |✔️ |
| CSI PowerScale | 2.8.0 |✔️ |✔️ |
| CSI PowerScale | 2.7.0 |✔️ |✔️ |
+| CSI Unity XT | 2.9.1 |✔️ |❌ |
| CSI Unity XT | 2.8.0 |✔️ |❌ |
| CSI Unity XT | 2.7.0 |✔️ |❌ |
@@ -30,9 +35,9 @@ The [Dell Container Storage Modules Installation Wizard](./src/index.html) is a
| CSM Modules | Version |
| ---------------------| --------- |
-| CSM Observability | 1.6.0 |
-| CSM Replication | 1.6.0 |
-| CSM Resiliency | 1.7.0 |
+| CSM Observability | 1.7.0 |
+| CSM Replication | 1.7.1 |
+| CSM Resiliency | 1.8.1 |
## Installation
@@ -92,7 +97,7 @@ The [Dell Container Storage Modules Installation Wizard](./src/index.html) is a
```terminal
helm install dell/container-storage-modules -n --version -f
- Example: helm install powerstore dell/container-storage-modules -n csi-powerstore --version 1.1.0 -f values.yaml
+ Example: helm install powerstore dell/container-storage-modules -n csi-powerstore --version 1.2.1 -f values.yaml
```
## Installation Using Operator
diff --git a/content/v1/deployment/csminstallationwizard/release/_index.md b/content/v1/deployment/csminstallationwizard/release/_index.md
index cf735551df..eb9869453a 100644
--- a/content/v1/deployment/csminstallationwizard/release/_index.md
+++ b/content/v1/deployment/csminstallationwizard/release/_index.md
@@ -5,21 +5,18 @@ weight: 5
description: Release notes for CSM Installation Wizard
---
-## Release Notes - CSM Installation Wizard 1.1.0
+## Release Notes - CSM Installation Wizard 1.2.1
-### New Features/Changes
-- Added operator mode of installation for CSI-PowerStore, CSI-PowerMax, CSI-PowerScale and the supported modules
-- Helm and Operator based manifest file generation is supported for CSM-1.7 and CSM 1.8 releases
+### New Features/Changes
-- Volume Limit and Storage Capacity Tracking features have been added.
-- Rename SDC and approve SDC feature added for CSM-1.7 and CSM-1.8 for CSI-PowerFlex driver.
-- NFS volume feature added for CSM-1.8 for CSI-PowerFlex driver.
+- [#947 - [FEATURE]: Support for Kubernetes 1.28](https://github.com/dell/csm/issues/947)
+- [#1066 - [FEATURE]: Support for Openshift 4.14](https://github.com/dell/csm/issues/1066)
### Fixed Issues
-- [#959 - [BUG]: Resiliency fields in the generated values.yaml should be uncommented when resiliency is enabled](https://github.com/dell/csm/issues/959)
+- [#1022 - [BUG]: CSM Installation wizard is issuing the warnings that are false positives ](https://github.com/dell/csm/issues/1022)
### Known Issues
diff --git a/content/v1/deployment/csminstallationwizard/src/csm-versions/default-values.properties b/content/v1/deployment/csminstallationwizard/src/csm-versions/default-values.properties
index 150565a2a4..5d1e1241a0 100644
--- a/content/v1/deployment/csminstallationwizard/src/csm-versions/default-values.properties
+++ b/content/v1/deployment/csminstallationwizard/src/csm-versions/default-values.properties
@@ -1,4 +1,4 @@
-csmVersion=1.8.0
+csmVersion=1.9.3
imageRepository=dellemc
controllerCount=1
nodeSelectorLabel=node-role.kubernetes.io/control-plane:
@@ -9,4 +9,4 @@ certSecretCount=1
pollRate=60
driverPodLabel=dell-storage
arrayThreshold=3
-maxVolumesPerNode=0
\ No newline at end of file
+maxVolumesPerNode=0
diff --git a/content/v1/deployment/csminstallationwizard/src/index.html b/content/v1/deployment/csminstallationwizard/src/index.html
index 6b87ad0fda..53fb2133b0 100644
--- a/content/v1/deployment/csminstallationwizard/src/index.html
+++ b/content/v1/deployment/csminstallationwizard/src/index.html
@@ -1,4 +1,9 @@
+
@@ -72,7 +77,7 @@
}}
## Supported CSI Drivers
diff --git a/content/v1/observability/deployment/_index.md b/content/v1/observability/deployment/_index.md
index 6254b3ec80..8d8862c3b9 100644
--- a/content/v1/observability/deployment/_index.md
+++ b/content/v1/observability/deployment/_index.md
@@ -6,11 +6,12 @@ description: >
Dell Container Storage Modules (CSM) for Observability Deployment
---
-CSM for Observability can be deployed in one of three ways:
+CSM for Observability can be deployed in one of four ways:
- [Helm](./helm)
- [CSM for Observability Installer](./online)
- [CSM for Observability Offline Installer](./offline)
+- [Operator](./operator)
## Post Installation Dependencies
diff --git a/content/v1/observability/deployment/helm.md b/content/v1/observability/deployment/helm.md
index ece13b209e..2256cbb0ca 100644
--- a/content/v1/observability/deployment/helm.md
+++ b/content/v1/observability/deployment/helm.md
@@ -10,7 +10,7 @@ The Container Storage Modules (CSM) for Observability Helm chart bootstraps an O
## Prerequisites
-- Helm 3.3
+- Helm 3.x
- The deployment of one or more [supported](../../#supported-csi-drivers) Dell CSI drivers
## Install the CSM for Observability Helm Chart
diff --git a/content/v1/observability/deployment/offline.md b/content/v1/observability/deployment/offline.md
index bf5076f945..c3991c7d3a 100644
--- a/content/v1/observability/deployment/offline.md
+++ b/content/v1/observability/deployment/offline.md
@@ -10,7 +10,7 @@ The following instructions can be followed when a Helm chart will be installed i
## Prerequisites
-- Helm 3.3
+- Helm 3.x
- The deployment of one or more [supported](../#supported-csi-drivers) Dell CSI drivers
### Dependencies
diff --git a/content/v1/observability/deployment/online.md b/content/v1/observability/deployment/online.md
index ed41777d86..465626a8ee 100644
--- a/content/v1/observability/deployment/online.md
+++ b/content/v1/observability/deployment/online.md
@@ -35,7 +35,7 @@ If the Authorization module is enabled for the CSI drivers installed in the same
## Prerequisites
-- Helm 3.3
+- Helm 3.x
- The deployment of one or more [supported](../#supported-csi-drivers) Dell CSI drivers
## Online Installer
diff --git a/content/v1/observability/deployment/operator.md b/content/v1/observability/deployment/operator.md
new file mode 100644
index 0000000000..15ef496192
--- /dev/null
+++ b/content/v1/observability/deployment/operator.md
@@ -0,0 +1,11 @@
+---
+title: Operator
+linktitle: Operator
+description: >
+ Dell Technologies (Dell) Container Storage Modules (CSM) for Observability Operator deployment
+---
+
+The CSM Observability module for supported Dell CSI Drivers can be installed via the Dell CSM Operator.
+To deploy the Operator, follow the instructions available [here](../../../deployment/csmoperator/#installation).
+
+To install CSM Observability via the Dell CSM Operator, follow the instructions [here](../../../deployment/csmoperator/modules/observability).
\ No newline at end of file
diff --git a/content/v1/observability/release/_index.md b/content/v1/observability/release/_index.md
index 7e8b4d695f..e51b6e51ad 100644
--- a/content/v1/observability/release/_index.md
+++ b/content/v1/observability/release/_index.md
@@ -6,17 +6,25 @@ Description: >
Dell Container Storage Modules (CSM) release notes for observability
---
-## Release Notes - CSM Observability 1.6.0
+## Release Notes - CSM Observability 1.7.0
+
+
+
+
+
### New Features/Changes
-- [#724 - [FEATURE]: CSM support for Openshift 4.13](https://github.com/dell/csm/issues/724)
-- [#922 - [FEATURE]: Use ubi9 micro as base image](https://github.com/dell/csm/issues/922)
+- [#947 - [FEATURE]: Support for Kubernetes 1.28](https://github.com/dell/csm/issues/947)
+- [#1066 - [FEATURE]: Support for Openshift 4.14](https://github.com/dell/csm/issues/1066)
+- [#996 - [FEATURE]: Dell CSI to Dell CSM Operator Migration Process](https://github.com/dell/csm/issues/996)
+- [#1031 - [FEATURE]: Update to the latest UBI Micro image for CSM](https://github.com/dell/csm/issues/1031)
+- [#1062 - [FEATURE]: CSM PowerMax: Support PowerMax v10.1 ](https://github.com/dell/csm/issues/1062)
### Fixed Issues
-- [#916 - [BUG]: Remove references to deprecated io/ioutil package](https://github.com/dell/csm/issues/916)
+- [#1019 - [BUG]: karavi-metrics-powerscale pod gets an segmentation violation error during start](https://github.com/dell/csm/issues/1019)
### Known Issues
diff --git a/content/v1/observability/troubleshooting/_index.md b/content/v1/observability/troubleshooting/_index.md
index 2eec45bb44..5d1313c6a8 100644
--- a/content/v1/observability/troubleshooting/_index.md
+++ b/content/v1/observability/troubleshooting/_index.md
@@ -270,10 +270,4 @@ To resolve this, leave the CSM namespace in place after a failed installation, a
helm delete karavi-observability --namespace [CSM_NAMESPACE]
```
-Then delete the namespace `kubectl delete ns [CSM_NAMESPACE]`. Wait until namespace is fully deleted, recreate the namespace, and reinstall Observability again.
-
-### Other issues and workarounds
-
-| Symptoms | Prevention, Resolution or Workaround |
-| --- | --- |
-| karavi-metrics pod crashes for all the supported platforms whenever there are PVs without claim in the cluster | Work around is to create PVCs using the PVs which are not in bound state or delete the PVs without claims |
+Then delete the namespace `kubectl delete ns [CSM_NAMESPACE]`. Wait until namespace is fully deleted, recreate the namespace, and reinstall Observability again.
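Sketched as commands, using the document's `[CSM_NAMESPACE]` placeholder; `kubectl wait` is one way to block until the deletion completes:

```bash
kubectl delete ns [CSM_NAMESPACE]
# Block until the namespace object is fully gone before recreating it.
kubectl wait --for=delete ns/[CSM_NAMESPACE] --timeout=120s
kubectl create ns [CSM_NAMESPACE]
```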
diff --git a/content/v1/observability/upgrade/_index.md b/content/v1/observability/upgrade/_index.md
index 79f7536e06..95e716efe6 100644
--- a/content/v1/observability/upgrade/_index.md
+++ b/content/v1/observability/upgrade/_index.md
@@ -28,7 +28,7 @@ helm search repo dell
```
```
NAME CHART VERSION APP VERSION DESCRIPTION
-dell/karavi-observability 1.6.0 1.6.0 CSM for Observability is part of the [Container...
+dell/karavi-observability 1.7.0 1.7.0 CSM for Observability is part of the [Container...
```
>Note: If using cert-manager CustomResourceDefinitions older than v1.5.3, delete the old CRDs and install v1.5.3 of the CRDs prior to upgrade. See [Prerequisites](../deployment/helm#prerequisites) for location of CRDs.
diff --git a/content/v1/references/cli/_index.md b/content/v1/references/cli/_index.md
index 511121eca9..f27b4925cf 100644
--- a/content/v1/references/cli/_index.md
+++ b/content/v1/references/cli/_index.md
@@ -885,6 +885,14 @@ dellctl images --component csi-vxflexos
```
```
Driver/Module Image Supported Orchestrator Versions Sidecar Images
+dellemc/csi-vxflexos:v2.9.0 k8s1.28,k8s1.27,k8s1.26,ocp4.14,ocp4.13 registry.k8s.io/sig-storage/csi-attacher:v4.4.2
+ registry.k8s.io/sig-storage/csi-provisioner:v3.6.2
+ registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.10.0
+ registry.k8s.io/sig-storage/csi-snapshotter:v6.3.2
+ registry.k8s.io/sig-storage/csi-resizer:v1.9.2
+ registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.1
+ dellemc/sdc:4.5
+
dellemc/csi-vxflexos:v2.8.0 k8s1.27,k8s1.26,k8s1.25,ocp4.13,ocp4.12 registry.k8s.io/sig-storage/csi-attacher:v4.3.0
registry.k8s.io/sig-storage/csi-provisioner:v3.5.0
registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.9.0
@@ -901,14 +909,6 @@ dellemc/csi-vxflexos:v2.7.0 k8s1.27,k8s1.26,k8s1.25,ocp4.12,ocp4.11 registry
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.8.0
dellemc/sdc:3.6.0.6
-dellemc/csi-vxflexos:v2.6.0 k8s1.26,k8s1.25,k8s1.24,ocp4.11,ocp4.10 registry.k8s.io/sig-storage/csi-attacher:v4.0.0
- registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
- registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
- registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
- registry.k8s.io/sig-storage/csi-resizer:v1.6.0
- registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
- dellemc/sdc:3.6.0.6
-
```
```bash
@@ -916,19 +916,19 @@ dellctl images --component csm-authorization
```
```
Driver/Module Image Supported Orchestrator Versions Sidecar Images
-dellemc/csm-authorization-sidecar:v1.8.0 k8s1.27,k8s1.26,k8s1.25 jetstack/cert-manager-cainjector:v1.6.1
+dellemc/csm-authorization-sidecar:v1.9.0 k8s1.28,k8s1.27,k8s1.26 jetstack/cert-manager-cainjector:v1.6.1
jetstack/cert-manager-controller:v1.6.1
jetstack/cert-manager-webhook:v1.6.1
ingress-nginx/controller:v1.4.0
ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343
-dellemc/csm-authorization-sidecar:v1.7.0 k8s1.27,k8s1.26,k8s1.25 jetstack/cert-manager-cainjector:v1.6.1
+dellemc/csm-authorization-sidecar:v1.8.0 k8s1.27,k8s1.26,k8s1.25 jetstack/cert-manager-cainjector:v1.6.1
jetstack/cert-manager-controller:v1.6.1
jetstack/cert-manager-webhook:v1.6.1
ingress-nginx/controller:v1.4.0
ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343
-dellemc/csm-authorization-sidecar:v1.6.0 k8s1.26,k8s1.25,k8s1.24 jetstack/cert-manager-cainjector:v1.6.1
+dellemc/csm-authorization-sidecar:v1.7.0 k8s1.27,k8s1.26,k8s1.25 jetstack/cert-manager-cainjector:v1.6.1
jetstack/cert-manager-controller:v1.6.1
jetstack/cert-manager-webhook:v1.6.1
ingress-nginx/controller:v1.4.0
diff --git a/content/v1/replication/_index.md b/content/v1/replication/_index.md
index e42b46366e..a0ff4dd4e6 100644
--- a/content/v1/replication/_index.md
+++ b/content/v1/replication/_index.md
@@ -37,12 +37,8 @@ CSM for Replication provides the following capabilities:
{{
}}
>Note: File Replication for PowerMax is currently not supported
diff --git a/content/v1/replication/deployment/install-operator.md b/content/v1/replication/deployment/install-operator.md
new file mode 100644
index 0000000000..62c711db85
--- /dev/null
+++ b/content/v1/replication/deployment/install-operator.md
@@ -0,0 +1,11 @@
+---
+title: Installation using Operator
+linktitle: Installation using Operator
+description: >
+ Dell Technologies (Dell) Container Storage Modules (CSM) for Replication Operator deployment
+---
+
+The CSM Replication module for supported Dell CSI Drivers can be installed via the Dell CSM Operator.
+To deploy the Operator, follow the instructions available [here](../../../deployment/csmoperator/#installation).
+
+To install CSM Replication via the Dell CSM Operator, follow the instructions [here](../../../deployment/csmoperator/modules/replication).
\ No newline at end of file
diff --git a/content/v1/replication/deployment/install-repctl.md b/content/v1/replication/deployment/install-repctl.md
index 9c8f1f677f..9715a917a4 100644
--- a/content/v1/replication/deployment/install-repctl.md
+++ b/content/v1/replication/deployment/install-repctl.md
@@ -13,14 +13,14 @@ Before you begin, make sure you have the repctl tool available.
You can download a pre-built repctl binary from our [Releases](https://github.com/dell/csm-replication/releases) page.
```shell
-wget https://github.com/dell/csm-replication/releases/download/v1.6.0/repctl-linux-amd64
+wget https://github.com/dell/csm-replication/releases/download/v1.7.1/repctl-linux-amd64
mv repctl-linux-amd64 repctl
chmod +x repctl
```
Alternatively, if you want to build the binary yourself, you can follow these steps:
```shell
-git clone -b v1.6.0 https://github.com/dell/csm-replication.git
+git clone -b v1.7.1 https://github.com/dell/csm-replication.git
cd csm-replication/repctl
make build
```
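Either way, a quick sanity check confirms the binary runs; the help output lists the subcommands available in your release:

```bash
./repctl --help
```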
diff --git a/content/v1/replication/deployment/install-script.md b/content/v1/replication/deployment/install-script.md
index 4791173498..a01226fd5d 100644
--- a/content/v1/replication/deployment/install-script.md
+++ b/content/v1/replication/deployment/install-script.md
@@ -9,11 +9,11 @@ description: Installation of CSM for Replication using script (Helm chart)
> **_NOTE:_** These steps should be repeated on all Kubernetes clusters where you want to configure replication.
```shell
-git clone -b v1.6.0 https://github.com/dell/csm-replication.git
+git clone -b v1.7.1 https://github.com/dell/csm-replication.git
cd csm-replication
kubectl create ns dell-replication-controller
# Download and modify the default values.yaml file if you wish to customize your deployment in any way
-wget -O myvalues.yaml https://raw.githubusercontent.com/dell/helm-charts/csm-replication-1.6.0/charts/csm-replication/values.yaml
+wget -O myvalues.yaml https://raw.githubusercontent.com/dell/helm-charts/csm-replication-1.7.1/charts/csm-replication/values.yaml
bash scripts/install.sh --values ./myvalues.yaml
```
>Note: The current installation method allows you to specify custom `<hostname>:<IP>` entries to be appended to the controller's `/etc/hosts` file. It can be useful if the controller is being deployed in a private environment where DNS is not set up properly, but the kubernetes clusters use an FQDN as the API server's address.
diff --git a/content/v1/replication/deployment/powerflex.md b/content/v1/replication/deployment/powerflex.md
index 967070cca6..c2fb027620 100644
--- a/content/v1/replication/deployment/powerflex.md
+++ b/content/v1/replication/deployment/powerflex.md
@@ -58,13 +58,11 @@ Here is an example of how that would look:
# Set this to true to enable replication
replication:
enabled: true
- image: dellemc/dell-csi-replicator:v1.6.0
replicationContextPrefix: "powerflex"
replicationPrefix: "replication.storage.dell.com"
...
```
-You can leave other parameters like `image`, `replicationContextPrefix`, and
-`replicationPrefix` as they are.
+You can leave other parameters like `replicationContextPrefix` and `replicationPrefix` as they are.
After enabling the replication module you can continue to install the CSI driver
for PowerFlex following the usual installation procedure, just ensure you've added
diff --git a/content/v1/replication/deployment/powermax.md b/content/v1/replication/deployment/powermax.md
index a8c7710147..158965cf58 100644
--- a/content/v1/replication/deployment/powermax.md
+++ b/content/v1/replication/deployment/powermax.md
@@ -74,12 +74,11 @@ Here is an example of what that would look like:
# Set this to true to enable replication
replication:
enabled: true
- image: dellemc/dell-csi-replicator:v1.6.0
replicationContextPrefix: "powermax"
replicationPrefix: "replication.storage.dell.com"
...
```
-You can leave other parameters like `image`, `replicationContextPrefix`, and `replicationPrefix` as they are.
+You can leave other parameters like `replicationContextPrefix` and `replicationPrefix` as they are.
After enabling the replication module you can continue to install the CSI driver for PowerMax following
usual installation procedure, just ensure you've added necessary array connection information to secret.
diff --git a/content/v1/replication/deployment/powerscale.md b/content/v1/replication/deployment/powerscale.md
index 60279f4918..0ed574c078 100644
--- a/content/v1/replication/deployment/powerscale.md
+++ b/content/v1/replication/deployment/powerscale.md
@@ -64,12 +64,11 @@ controller:
# replication: allows to configure replication
replication:
enabled: true
- image: dellemc/dell-csi-replicator:v1.6.0
replicationContextPrefix: "powerscale"
replicationPrefix: "replication.storage.dell.com"
...
```
-You can leave other parameters like `image`, `replicationContextPrefix`, and `replicationPrefix` as they are.
+You can leave other parameters like `replicationContextPrefix` and `replicationPrefix` as they are.
After enabling the replication module, you can continue to install the CSI driver for PowerScale following the usual installation procedure. Just ensure you've added the necessary array connection information to the Kubernetes secret for the PowerScale driver.
diff --git a/content/v1/replication/deployment/powerstore.md b/content/v1/replication/deployment/powerstore.md
index ec0431e5c2..8d4c380a23 100644
--- a/content/v1/replication/deployment/powerstore.md
+++ b/content/v1/replication/deployment/powerstore.md
@@ -58,12 +58,11 @@ controller:
# replication: allows to configure replication
replication:
enabled: true
- image: dellemc/dell-csi-replicator:v1.6.0
replicationContextPrefix: "powerstore"
replicationPrefix: "replication.storage.dell.com"
...
```
-You can leave other parameters like `image`, `replicationContextPrefix`, and `replicationPrefix` as they are.
+You can leave other parameters like `replicationContextPrefix` and `replicationPrefix` as they are.
After enabling the replication module you can continue to install the CSI driver for PowerStore following
usual installation procedure, just ensure you've added necessary array connection information to secret.
diff --git a/content/v1/replication/migration/migrating-volumes-diff-array.md b/content/v1/replication/migration/migrating-volumes-diff-array.md
index 6444d41259..cf32d1eb9e 100644
--- a/content/v1/replication/migration/migrating-volumes-diff-array.md
+++ b/content/v1/replication/migration/migrating-volumes-diff-array.md
@@ -42,7 +42,7 @@ kubectl create -f deploy/replicationcrds.all.yaml
## Installing Driver With sidecars
-Dell-csi-migrator and dell-csi-node-rescanner sidecars are installed alongside with the driver, the user can enable it in the driver's myvalues.yaml file.
+The dell-csi-migrator and dell-csi-node-rescanner sidecars are installed alongside the driver; the user can enable them in the driver's `myvalues.yaml` file.
#### Sample:
@@ -55,10 +55,6 @@ Dell-csi-migrator and dell-csi-node-rescanner sidecars are installed alongside w
# Default value: "false"
migration:
enabled: true
- # Change this to use any specific version of the dell-csi-migrator sidecar
- # Default value: None
- nodeRescanSidecarImage: dellemc/dell-csi-node-rescanner:v1.0.0
- image: dellemc/dell-csi-migrator:v1.1.0
# migrationPrefix: Determine if migration is enabled
# Default value: "migration.storage.dell.com"
# Examples: "migration.storage.dell.com"
diff --git a/content/v1/replication/release/_index.md b/content/v1/replication/release/_index.md
index fda380f66a..bbfe6dafad 100644
--- a/content/v1/replication/release/_index.md
+++ b/content/v1/replication/release/_index.md
@@ -6,23 +6,26 @@ Description: >
Dell Container Storage Modules (CSM) release notes for replication
---
-## Release Notes - CSM Replication 1.6.0
+## Release Notes - CSM Replication 1.7.1
+
+
+
+
+
### New Features/Changes
-- [#724 - [FEATURE]: CSM support for Openshift 4.13](https://github.com/dell/csm/issues/724)
-- [#877 - [FEATURE]: Make standalone helm chart available from helm repository : https://dell.github.io/dell/helm-charts](https://github.com/dell/csm/issues/877)
+- [#947 - [FEATURE]: Support for Kubernetes 1.28](https://github.com/dell/csm/issues/947)
+- [#1066 - [FEATURE]: Support for Openshift 4.14](https://github.com/dell/csm/issues/1066)
+- [#996 - [FEATURE]: Dell CSI to Dell CSM Operator Migration Process](https://github.com/dell/csm/issues/996)
+- [#1031 - [FEATURE]: Update to the latest UBI Micro image for CSM](https://github.com/dell/csm/issues/1031)
+- [#1062 - [FEATURE]: CSM PowerMax: Support PowerMax v10.1 ](https://github.com/dell/csm/issues/1062)
### Fixed Issues
-- [#916 - [BUG]: Remove references to deprecated io/ioutil package](https://github.com/dell/csm/issues/916)
-- [#928 - [BUG]: PowerStore Replication - Delete RG request hangs](https://github.com/dell/csm/issues/928)
-- [#968 - [BUG]: Creating StorageClass for replication failed with unmarshal error](https://github.com/dell/csm/issues/968)
+- [#988 - [BUG]: CSM Operator fails to install CSM Replication on the remote cluster](https://github.com/dell/csm/issues/988)
+- [#1002 - [BUG]: CSM Replication - secret file requirement for both sites not documented ](https://github.com/dell/csm/issues/1002)
### Known Issues
-
-| Github ID | Description |
-| --------------------------------------------- | ------------------------------------------------------------------ |
-| [753](https://github.com/dell/csm/issues/753) | **PowerScale:** When Persistent Volumes (PVs) are created with quota enabled on CSM versions 1.6.0 and before, an incorrect quota gets set for the target side read-only PVs/directories based on the consumed non-zero source size instead of the assigned quota of the source. This can create issues when the user performs failover and wants to write data to the failed over site. If lower quota limit is set, no new writes can be performed on the target side post failover. **Workaround** using PowerScale cluster CLI or UI: For each Persistent Volume on the source kubernetes cluster, 1. Get the quota assigned for the directory on the source PowerScale cluster. The path to the directory information can be obtained from the specification field of the Persistent Volume object. 2. Verify the quota of the target directory on the target PowerScale cluster. If incorrect quota is set, update the quota on the target directory with the same information as on the source. If no quota is set, create a quota for the target directory. |
\ No newline at end of file
diff --git a/content/v1/replication/upgrade.md b/content/v1/replication/upgrade.md
index f5e9ec901c..4e3d496a34 100644
--- a/content/v1/replication/upgrade.md
+++ b/content/v1/replication/upgrade.md
@@ -45,7 +45,7 @@ On PowerScale systems, an additional step is needed when upgrading to CSM Replic
Make sure the appropriate release branch is available on the machine performing the upgrade by running:
```bash
-git clone -b v1.6.0 https://github.com/dell/csm-replication.git
+git clone -b v1.7.1 https://github.com/dell/csm-replication.git
```
### Upgrading with Helm
diff --git a/content/v1/resiliency/_index.md b/content/v1/resiliency/_index.md
index 246992cfe6..a72b8b35b6 100644
--- a/content/v1/resiliency/_index.md
+++ b/content/v1/resiliency/_index.md
@@ -41,10 +41,8 @@ CSM for Resiliency provides the following capabilities:
{{
}}
## Supported CSI Drivers
@@ -203,4 +201,4 @@ A three tier testing methodology is used for CSM for Resiliency:
1. Unit testing with high coverage (>90% statement) tests the program logic and is especially used to test the error paths by injecting faults.
2. An integration test describes test scenarios in Gherkin that sets up specific testing scenarios executed against a Kubernetes test cluster. The tests use ranges for many of the parameters to add an element of "chaos testing".
-3. Script based testing supports longevity testing in a Kubernetes cluster. For example, one test repeatedly fails three different lists of nodes in succession and is used to fail 1/3 of the cluster's worker nodes on a cyclic basis and repeat indefinitely. This test collect statistics on length of time for pod evacuation, pod recovery, and node cleanup.
\ No newline at end of file
+3. Script-based testing supports longevity testing in a Kubernetes cluster. For example, one test repeatedly fails three different lists of nodes in succession and is used to fail 1/3 of the cluster's worker nodes on a cyclic basis and repeat indefinitely. This test collects statistics on the length of time for pod evacuation, pod recovery, and node cleanup.
diff --git a/content/v1/resiliency/deployment/_index.md b/content/v1/resiliency/deployment/_index.md
new file mode 100644
index 0000000000..54a2e85459
--- /dev/null
+++ b/content/v1/resiliency/deployment/_index.md
@@ -0,0 +1,7 @@
+---
+title: "Deployment"
+linkTitle: "Deployment"
+weight: 1
+Description: >
+ Installation for Dell Container Storage Module (CSM) for Resiliency
+---
diff --git a/content/v1/resiliency/deployment.md b/content/v1/resiliency/deployment/helm.md
similarity index 96%
rename from content/v1/resiliency/deployment.md
rename to content/v1/resiliency/deployment/helm.md
index 1e580917d9..cd5e8a35af 100644
--- a/content/v1/resiliency/deployment.md
+++ b/content/v1/resiliency/deployment/helm.md
@@ -1,12 +1,12 @@
---
-title: Deployment
-linktitle: Deployment
+title: Helm
+linktitle: Helm
weight: 3
description: >
Dell Container Storage Modules (CSM) for Resiliency installation
---
-CSM for Resiliency is installed as part of the Dell CSI driver installation. The drivers can be installed either by a _helm chart_ or by the _Dell CSI Operator_. Currently, only _Helm chart_ installation is supported.
+CSM for Resiliency is installed as part of the Dell CSI driver installation.
For information on the PowerFlex CSI driver, see [PowerFlex CSI Driver](https://github.com/dell/csi-powerflex).
@@ -26,7 +26,6 @@ The drivers that support Helm chart installation allow CSM for Resiliency to be
# Enable this feature only after contacting support for additional information
podmon:
enabled: true
- image: dellemc/podmon:v1.3.0
controller:
args:
- "--csisock=unix:/var/run/csi/csi.sock"
@@ -50,7 +49,7 @@ podmon:
To install CSM for Resiliency with the driver, the following changes are required:
1. Enable CSM for Resiliency by changing the podmon.enabled boolean to true. This will enable both controller-podmon and node-podmon.
-2. Specify the podmon image to be used as podmon.image.
+2. If you need to change the registry, specify the podmon image to be used in `images.podmon` (see the sketch after this list).
3. Specify arguments to controller-podmon in the podmon.controller.args block. See "Podmon Arguments" below. Note that some arguments are required. Note that the arguments supplied to controller-podmon are different from those supplied to node-podmon.
4. Specify arguments to node-podmon in the podmon.node.args block. See "Podmon Arguments" below. Note that some arguments are required. Note that the arguments supplied to controller-podmon are different from those supplied to node-podmon.
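For example, a values fragment overriding the default image might look like the following sketch (the registry and tag here are placeholders, not defaults):

```yaml
images:
  # podmon: the podmon container image to use, e.g. one mirrored into a private registry
  podmon: "registry.example.com/dellemc/podmon:v1.x.x"
```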
@@ -59,7 +58,6 @@ To install CSM for Resiliency with the driver, the following changes are require
| Argument | Required | Description | Applicability |
|-|-|-|-|
| enabled | Required | Boolean "true" enables CSM for Resiliency installation with the driver in a helm installation. | top level |
-| image | Required | Must be set to a repository where the podmon image can be pulled. | controller & node |
| mode | Required | Must be set to "controller" for controller-podmon and "node" for node-podmon. | controller & node |
| csisock | Required | This should be left as set in the helm template for the driver. For controller: `-csisock=unix:/var/run/csi/csi.sock` For node it will vary depending on the driver's identity: `-csisock=unix:/var/lib/kubelet/plugins` `/vxflexos.emc.dell.com/csi_sock` | controller & node |
| leaderelection | Required | Boolean value that should be set true for controller and false for node. The default value is true. | controller & node |
@@ -79,7 +77,6 @@ Here is a typical installation used for testing:
```yaml
podmon:
- image: dellemc/podmon
enabled: true
controller:
args:
@@ -110,7 +107,6 @@ Here is a typical installation used for testing:
```yaml
podmon:
- image: dellemc/podmon
enabled: true
controller:
args:
@@ -141,7 +137,6 @@ Here is a typical installation used for testing:
```yaml
podmon:
- image: dellemc/podmon
enabled: true
controller:
args:
@@ -174,7 +169,6 @@ Here is a typical installation used for testing:
```yaml
podmon:
enabled: true
- image: dellemc/podmon
controller:
args:
- "--csisock=unix:/var/run/csi/csi.sock"
diff --git a/content/v1/resiliency/deployment/operator.md b/content/v1/resiliency/deployment/operator.md
new file mode 100644
index 0000000000..55fa6d2e59
--- /dev/null
+++ b/content/v1/resiliency/deployment/operator.md
@@ -0,0 +1,11 @@
+---
+title: Operator
+linktitle: Operator
+description: >
+ Dell Technologies (Dell) Container Storage Modules (CSM) for Resiliency Operator deployment
+---
+
+The CSM Resiliency module for supported Dell CSI Drivers can be installed via the Dell CSM Operator.
+To deploy the Operator, follow the instructions available [here](../../../deployment/csmoperator/#installation).
+
+To install CSM Resiliency via the Dell CSM Operator, follow the instructions [here](../../../deployment/csmoperator/modules/resiliency).
\ No newline at end of file
diff --git a/content/v1/resiliency/release/_index.md b/content/v1/resiliency/release/_index.md
index 0af98b6a2a..f85176810c 100644
--- a/content/v1/resiliency/release/_index.md
+++ b/content/v1/resiliency/release/_index.md
@@ -6,18 +6,24 @@ Description: >
Dell Container Storage Modules (CSM) release notes for resiliency
---
-## Release Notes - CSM Resiliency 1.7.0
+## Release Notes - CSM Resiliency 1.8.1
+
### New Features/Changes
-- [#724 - [FEATURE]: CSM support for Openshift 4.13](https://github.com/dell/csm/issues/724)
-- [#922 - [FEATURE]: Use ubi9 micro as base image](https://github.com/dell/csm/issues/922)
+- [#947 - [FEATURE]: Support for Kubernetes 1.28](https://github.com/dell/csm/issues/947)
+- [#1066 - [FEATURE]: Support for Openshift 4.14](https://github.com/dell/csm/issues/1066)
+- [#996 - [FEATURE]: Dell CSI to Dell CSM Operator Migration Process](https://github.com/dell/csm/issues/996)
+- [#1031 - [FEATURE]: Update to the latest UBI Micro image for CSM](https://github.com/dell/csm/issues/1031)
### Fixed Issues
-- [#916 - [BUG]: Remove references to deprecated io/ioutil package](https://github.com/dell/csm/issues/916)
### Known Issues
diff --git a/content/v1/resiliency/upgrade.md b/content/v1/resiliency/upgrade.md
index e532c8108f..ed7c9ca9cd 100644
--- a/content/v1/resiliency/upgrade.md
+++ b/content/v1/resiliency/upgrade.md
@@ -6,7 +6,7 @@ description: >
Dell Container Storage Modules (CSM) for Resiliency upgrade
---
-CSM for Resiliency can be upgraded as part of the Dell CSI driver upgrade process. The drivers can be upgraded either by a _helm chart_ or by the _Dell CSI Operator_. Currently, only _Helm chart_ upgrade is supported for CSM for Resiliency.
+CSM for Resiliency can be upgraded as part of the Dell CSI driver upgrade process. The drivers can be upgraded either by a _helm chart_ or by the _Dell CSM Operator_. Currently, only _Helm chart_ upgrade is supported for CSM for Resiliency.
For information on the PowerFlex CSI driver upgrade process, see [PowerFlex CSI Driver](../../csidriver/upgradation/drivers/powerflex).
diff --git a/content/v1/secure/encryption/_index.md b/content/v1/secure/encryption/_index.md
index 10c87f0ef2..440c171c56 100644
--- a/content/v1/secure/encryption/_index.md
+++ b/content/v1/secure/encryption/_index.md
@@ -68,11 +68,8 @@ the CSI driver must be restarted to pick up the change.
{{
}}
### PowerScale
diff --git a/content/v1/secure/encryption/deployment.md b/content/v1/secure/encryption/deployment.md
index 023dac9a3d..db6b188cb6 100644
--- a/content/v1/secure/encryption/deployment.md
+++ b/content/v1/secure/encryption/deployment.md
@@ -5,7 +5,7 @@ weight: 1
Description: >
Deployment
---
-Encryption for Dell Container Storage Modules is enabled via the Dell CSI driver installation. The drivers can be installed either by a Helm chart or by the Dell CSI Operator.
+Encryption for Dell Container Storage Modules is enabled via the Dell CSI driver installation. The drivers can be installed either by a Helm chart or by the Dell CSM Operator.
In the tech preview release, Encryption can only be enabled via Helm chart installation.
Except for additional Encryption related configuration outlined on this page,
@@ -33,9 +33,6 @@ encryption:
# pluginName: The name of the provisioner to use for encrypted volumes.
pluginName: "sec-isilon.dellemc.com"
- # image: Encryption driver image name.
- image: "dellemc/csm-encryption:v0.3.0"
-
# logLevel: Log level of the encryption driver.
# Allowed values: "error", "warning", "info", "debug", "trace".
logLevel: "error"
diff --git a/content/v1/secure/encryption/release/_index.md b/content/v1/secure/encryption/release/_index.md
index c89b327c55..b2376e4be6 100644
--- a/content/v1/secure/encryption/release/_index.md
+++ b/content/v1/secure/encryption/release/_index.md
@@ -8,9 +8,7 @@ Description: >
### New Features/Changes
-- [Technical preview release](https://github.com/dell/csm/issues/437)
-- Kubernetes 1.26 support.
-- Security updates.
+- Supports the latest version of CSM.
### Fixed Issues
diff --git a/content/v1/securitypolicy/_index.md b/content/v1/securitypolicy/_index.md
index 5d3b3fb6c5..06223eb64f 100644
--- a/content/v1/securitypolicy/_index.md
+++ b/content/v1/securitypolicy/_index.md
@@ -6,9 +6,10 @@ Description: >
Dell Container Storage Modules (CSM) Security Policy
---
+# Reporting Security Issues/Vulnerabilities
-The CSM services/repositories are inspected for security vulnerabilities via [gosec](https://github.com/securego/gosec),
-Instructions for reporting a vulnerability can be found on the [Dell Vulnerability Policy](https://www.dell.com/support/contents/en-in/article/product-support/self-support-knowledgebase/security-antivirus/alerts-vulnerabilities/dell-vulnerability-response-policy#:~:text=To%20report%20a%20security%20vulnerability%20or%20issue%20in%20Dell.com,instructions%20to%20reproduce%20the%20issue)
+The Dell Container Storage Modules team and community take security bugs seriously. We sincerely appreciate your efforts to responsibly disclose your findings.
+To report a security issue, please submit the security advisory form ["Report a Vulnerability"](https://github.com/dell/csm/security/advisories/new).
-CSM recommends to stay on the [lastest release](https://github.com/dell/csm/releases/latest) of Dell Container Storage Modules to take advantage of new features, enhancements, bug fixes, and security fixes.
+>CSM recommends staying on the [latest release](https://github.com/dell/csm/releases/latest) of Dell Container Storage Modules to take advantage of new features, enhancements, bug fixes, and security fixes.
diff --git a/content/v2/_index.md b/content/v2/_index.md
index 2de8c56267..012ffcef40 100644
--- a/content/v2/_index.md
+++ b/content/v2/_index.md
@@ -3,9 +3,10 @@
title: "Documentation"
linkTitle: "Documentation"
---
+
{{% pageinfo color="primary" %}}
This document version is no longer actively maintained. The site that you are currently viewing is an archived snapshot. For up-to-date documentation, see the [latest version](/csm-docs/)
-CSM 1.7.1 is applicable to helm based installations of PowerFlex driver.
+The CSM Authorization RPM will be deprecated in a future release. It is highly recommended that you use CSM Authorization Helm deployment or CSM Operator going forward.
{{% /pageinfo %}}
The Dell Technologies (Dell) Container Storage Modules (CSM) enables simple and consistent integration and automation experiences, extending enterprise storage capabilities to Kubernetes for cloud-native stateful applications. It reduces management complexity so developers can independently consume enterprise storage with ease and automate daily operations such as provisioning, snapshotting, replication, observability, authorization, application mobility, encryption, and resiliency.
@@ -61,13 +62,13 @@ CSM is made up of multiple components including modules (enterprise capabilities
{{< /card >}}
{{< /cardpane >}}
-## CSM Modules Support Matrix for Dell CSI Drivers
+## CSM Modules Support Matrix for Dell CSI Drivers
-| CSM Module | CSI PowerFlex v2.7.1 | CSI PowerScale v2.7.0 | CSI PowerStore v2.7.0 | CSI PowerMax v2.7.0 | CSI Unity XT v2.7.0 |
+| CSM Module | CSI PowerFlex v2.8.0 | CSI PowerScale v2.8.0 | CSI PowerStore v2.8.0 | CSI PowerMax v2.8.0 | CSI Unity XT v2.8.0 |
| ----------------------------------------------------------- | -------------------- | --------------------- | --------------------- | ------------------- | ------------------- |
-| [**Authorization**](authorization/) v1.7.0 | ✔️ | ✔️ | ❌ | ✔️ | ❌ |
-| [**Observability**](observability/) v1.5.0 | ✔️ | ✔️ | ✔️ | ✔️ | ❌ |
-| [**Replication**](replication/) v1.5.0 | ✔️ | ✔️ | ✔️ | ✔️ | ❌ |
-| [**Resiliency**](resiliency/) v1.6.0 | ✔️ | ✔️ | ✔️ | ❌ | ✔️ |
+| [**Authorization**](authorization/) v1.8.0 | ✔️ | ✔️ | ❌ | ✔️ | ❌ |
+| [**Observability**](observability/) v1.6.0 | ✔️ | ✔️ | ✔️ | ✔️ | ❌ |
+| [**Replication**](replication/) v1.6.0 | ✔️ | ✔️ | ✔️ | ✔️ | ❌ |
+| [**Resiliency**](resiliency/) v1.7.0 | ✔️ | ✔️ | ✔️ | ❌ | ✔️ |
| [**Encryption**](secure/encryption) v0.4.0 | ❌ | ✔️ | ❌ | ❌ | ❌ |
| [**Application Mobility**](applicationmobility/) v0.4.0 | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
diff --git a/content/v2/applicationmobility/release.md b/content/v2/applicationmobility/release.md
deleted file mode 100644
index 4e70a4effc..0000000000
--- a/content/v2/applicationmobility/release.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-title: "Release Notes"
-linkTitle: "Release Notes"
-weight: 5
-Description: >
- Release Notes
----
-
-## Release Notes - CSM Application Mobility 0.3.0
-### New Features/Changes
-
-There are no new features in this release
-
-### Fixed Issues
-
-- [CSM app-mobility can delete restores but they pop back up after 10 seconds.](https://github.com/dell/csm/issues/690)
-- [dellctl crashes on a "backup get" when a trailing "/" is added to the namespace](https://github.com/dell/csm/issues/691)
-
-### Known Issues
-
-There are no known issues in this release.
-
-
-## Release Notes - CSM Application Mobility 0.2.0
-### New Features/Changes
-
-- [Scheduled Backups for Application Mobility](https://github.com/dell/csm/issues/551)
-
-### Fixed Issues
-
-There are no fixed issues in this release.
-
-### Known Issues
-
-There are no known issues in this release.
-
-## Release Notes - CSM Application Mobility 0.1.0
-### New Features/Changes
-
-- [Technical preview release](https://github.com/dell/csm/issues/449)
-- Clone stateful application workloads and application data to other clusters, either on-premise or in the cloud
-- Supports Restic as a data mover for application data
-
-### Fixed Issues
-
-There are no fixed issues in this release.
-
-### Known Issues
-
-There are no known issues in this release.
diff --git a/content/v2/applicationmobility/release/_index.md b/content/v2/applicationmobility/release/_index.md
new file mode 100644
index 0000000000..f09efc282a
--- /dev/null
+++ b/content/v2/applicationmobility/release/_index.md
@@ -0,0 +1,22 @@
+---
+title: "Release Notes"
+linkTitle: "Release Notes"
+weight: 5
+Description: >
+ Release Notes
+---
+
+## Release Notes - CSM Application Mobility 0.3.0
+
+### New Features/Changes
+
+There are no new features in this release.
+
+### Fixed Issues
+
+- [CSM app-mobility can delete restores but they pop back up after 10 seconds.](https://github.com/dell/csm/issues/690)
+- [dellctl crashes on a "backup get" when a trailing "/" is added to the namespace](https://github.com/dell/csm/issues/691)
+
+### Known Issues
+
+There are no known issues in this release.
diff --git a/content/v2/authorization/Backup and Restore/rpm/_index.md b/content/v2/authorization/Backup and Restore/rpm/_index.md
index bd537c9514..b90a8e76b5 100644
--- a/content/v2/authorization/Backup and Restore/rpm/_index.md
+++ b/content/v2/authorization/Backup and Restore/rpm/_index.md
@@ -5,6 +5,10 @@ description: >
Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization RPM backup and restore
---
+{{% pageinfo color="primary" %}}
+The CSM Authorization RPM is no longer actively maintained or supported. It will be deprecated in CSM 2.0. It is highly recommended that you use CSM Authorization Helm deployment or CSM Operator going forward.
+{{% /pageinfo %}}
+
## Roles
Role data is stored in the `common` Config Map in the underlying `k3s` deployment.
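A sketch of backing up this role data with the embedded k3s tooling (the `karavi` namespace is an assumption based on a default RPM deployment; adjust to your environment):

```bash
# Dump the role data so it can be restored later (namespace assumed to be karavi)
k3s kubectl -n karavi get configmap common -o yaml > common-roles-backup.yaml
```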
diff --git a/content/v2/authorization/_index.md b/content/v2/authorization/_index.md
index 6efb28f95f..f11031b38e 100644
--- a/content/v2/authorization/_index.md
+++ b/content/v2/authorization/_index.md
@@ -43,7 +43,7 @@ The following diagram shows a high-level overview of CSM for Authorization with
{{
}}
## Roles and Responsibilities
diff --git a/content/v2/authorization/cli.md b/content/v2/authorization/cli.md
index 157b39cd82..e395ee58ca 100644
--- a/content/v2/authorization/cli.md
+++ b/content/v2/authorization/cli.md
@@ -6,6 +6,10 @@ description: >
Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization CLI
---
+{{% pageinfo color="primary" %}}
+The CSM Authorization karavictl CLI is no longer actively maintained or supported. It will be deprecated in CSM 2.0.
+{{% /pageinfo %}}
+
karavictl is a command-line interface (CLI) used to interact with and manage your Container Storage Modules (CSM) Authorization deployment.
This document outlines all karavictl commands, their intended use, options that can be provided to alter their execution, and expected output from those commands.
diff --git a/content/v2/authorization/configuration/powerflex/_index.md b/content/v2/authorization/configuration/powerflex/_index.md
index 2656d863e0..406013bd61 100644
--- a/content/v2/authorization/configuration/powerflex/_index.md
+++ b/content/v2/authorization/configuration/powerflex/_index.md
@@ -119,8 +119,8 @@ Given a setup where Kubernetes, a storage system, and the CSM for Authorization
enabled: true
# sidecarProxyImage: the container image used for the csm-authorization-sidecar.
- # Default value: dellemc/csm-authorization-sidecar:v1.7.0
- sidecarProxyImage: dellemc/csm-authorization-sidecar:v1.7.0
+ # Default value: dellemc/csm-authorization-sidecar:v1.8.0
+ sidecarProxyImage: dellemc/csm-authorization-sidecar:v1.8.0
# proxyHost: hostname of the csm-authorization server
# Default value: None
@@ -156,10 +156,10 @@ Given a setup where Kubernetes, a storage system, and the CSM for Authorization
- name: authorization
# enable: Enable/Disable csm-authorization
enabled: true
- configVersion: v1.7.0
+ configVersion: v1.8.0
components:
- name: karavi-authorization-proxy
- image: dellemc/csm-authorization-sidecar:v1.7.0
+ image: dellemc/csm-authorization-sidecar:v1.8.0
envs:
# proxyHost: hostname of the csm-authorization server
- name: "PROXY_HOST"
diff --git a/content/v2/authorization/configuration/powermax/_index.md b/content/v2/authorization/configuration/powermax/_index.md
index cfb8b6e0e1..22aadfadbf 100644
--- a/content/v2/authorization/configuration/powermax/_index.md
+++ b/content/v2/authorization/configuration/powermax/_index.md
@@ -85,8 +85,8 @@ Create the karavi-authorization-config secret using this command:
enabled: true
# sidecarProxyImage: the container image used for the csm-authorization-sidecar.
- # Default value: dellemc/csm-authorization-sidecar:v1.7.0
- sidecarProxyImage: dellemc/csm-authorization-sidecar:v1.7.0
+ # Default value: dellemc/csm-authorization-sidecar:v1.8.0
+ sidecarProxyImage: dellemc/csm-authorization-sidecar:v1.8.0
# proxyHost: hostname of the csm-authorization server
# Default value: None
diff --git a/content/v2/authorization/configuration/powerscale/_index.md b/content/v2/authorization/configuration/powerscale/_index.md
index d6c452d033..5e0ca63c16 100644
--- a/content/v2/authorization/configuration/powerscale/_index.md
+++ b/content/v2/authorization/configuration/powerscale/_index.md
@@ -127,8 +127,8 @@ kubectl -n isilon create secret generic karavi-authorization-config --from-file=
enabled: true
# sidecarProxyImage: the container image used for the csm-authorization-sidecar.
- # Default value: dellemc/csm-authorization-sidecar:v1.7.0
- sidecarProxyImage: dellemc/csm-authorization-sidecar:v1.7.0
+ # Default value: dellemc/csm-authorization-sidecar:v1.8.0
+ sidecarProxyImage: dellemc/csm-authorization-sidecar:v1.8.0
# proxyHost: hostname of the csm-authorization server
# Default value: None
@@ -162,10 +162,10 @@ kubectl -n isilon create secret generic karavi-authorization-config --from-file=
- name: authorization
# enable: Enable/Disable csm-authorization
enabled: true
- configVersion: v1.7.0
+ configVersion: v1.8.0
components:
- name: karavi-authorization-proxy
- image: dellemc/csm-authorization-sidecar:v1.6.0
+ image: dellemc/csm-authorization-sidecar:v1.8.0
envs:
# proxyHost: hostname of the csm-authorization server
- name: "PROXY_HOST"
diff --git a/content/v2/authorization/deployment/_index.md b/content/v2/authorization/deployment/_index.md
index 5ff8a907d1..e3f383d6a5 100644
--- a/content/v2/authorization/deployment/_index.md
+++ b/content/v2/authorization/deployment/_index.md
@@ -8,4 +8,8 @@ tags:
- csm-authorization
---
+{{% pageinfo color="primary" %}}
+The CSM Authorization RPM will be deprecated in a future release. It is highly recommended that you use CSM Authorization Helm deployment or CSM Operator going forward.
+{{% /pageinfo %}}
+
Installation information for CSM Authorization can be found in this section.
diff --git a/content/v2/authorization/deployment/helm/_index.md b/content/v2/authorization/deployment/helm/_index.md
index 8ef6f29c26..95f30d6e7b 100644
--- a/content/v2/authorization/deployment/helm/_index.md
+++ b/content/v2/authorization/deployment/helm/_index.md
@@ -5,6 +5,10 @@ description: >
Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization Helm deployment
---
+{{% pageinfo color="primary" %}}
+The CSM Authorization karavictl CLI is no longer actively maintained or supported. It will be deprecated in CSM 2.0.
+{{% /pageinfo %}}
+
CSM Authorization can be installed by using the provided Helm v3 charts on Kubernetes platforms.
The following CSM Authorization components are installed in the specified namespace:
diff --git a/content/v2/authorization/deployment/rpm/_index.md b/content/v2/authorization/deployment/rpm/_index.md
index 8309fddd1a..b60cca62f2 100644
--- a/content/v2/authorization/deployment/rpm/_index.md
+++ b/content/v2/authorization/deployment/rpm/_index.md
@@ -6,6 +6,10 @@ description: >
Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization RPM deployment
---
+{{% pageinfo color="primary" %}}
+The CSM Authorization RPM will be deprecated in a future release. It is highly recommended that you use CSM Authorization Helm deployment or CSM Operator going forward.
+{{% /pageinfo %}}
+
This section outlines the deployment steps for Container Storage Modules (CSM) for Authorization. The deployment of CSM for Authorization is handled in 2 parts:
- Deploying the CSM for Authorization proxy server, to be controlled by storage administrators
- Configuring one to many [supported](../../../authorization#supported-csi-drivers) Dell CSI drivers with CSM for Authorization
diff --git a/content/v2/authorization/design.md b/content/v2/authorization/design.md
index 5d383a5114..2e763582b0 100644
--- a/content/v2/authorization/design.md
+++ b/content/v2/authorization/design.md
@@ -5,7 +5,6 @@ weight: 1
description: >
Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization design
---
-
Container Storage Modules (CSM) for Authorization is designed as a service mesh solution and consists of many internal components that work together in concert to achieve its overall functionality.
This document provides an overview of the major components, including how they fit together and pointers to implementation details.
@@ -263,4 +262,4 @@ The following otel exporters are used:
* ```bash
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp
- ```
\ No newline at end of file
+ ```
diff --git a/content/v2/authorization/release/_index.md b/content/v2/authorization/release/_index.md
index 5383614d75..a64bec93ca 100644
--- a/content/v2/authorization/release/_index.md
+++ b/content/v2/authorization/release/_index.md
@@ -6,18 +6,19 @@ Description: >
Dell Container Storage Modules (CSM) release notes for authorization
---
-## Release Notes - CSM Authorization 1.7.0
+## Release Notes - CSM Authorization 1.8.0
+
### New Features/Changes
-- CSM Authorization karavictl requires an admin token. ([#725](https://github.com/dell/csm/issues/725))
-- CSM support for Kubernetes 1.27. ([#761](https://github.com/dell/csm/issues/761))
-- CSM 1.7 release specific changes. ([#743](https://github.com/dell/csm/issues/743))
-- CSM Authorization encryption for secrets in K3S. ([#774](https://github.com/dell/csm/issues/774))
-
-### Bugs
-- Authorization should have sample CRD for every supported version in csm-operator. ([#826](https://github.com/dell/csm/issues/826))
-- Improve CSM Operator Authorization documentation. ([#800](https://github.com/dell/csm/issues/800))
-- CSM Authorization doesn't write the status code on error for csi-powerscale. ([#787](https://github.com/dell/csm/issues/787))
-- Authorization RPM installation should use nogpgcheck for k3s-selinux package. ([#772](https://github.com/dell/csm/issues/772))
-- CSM Authorization - karavictl generate token should output valid yaml. ([#767](https://github.com/dell/csm/issues/767))
+- [#922 - [FEATURE]: Use ubi9 micro as base image](https://github.com/dell/csm/issues/922)
+
+### Fixed Issues
+
+- [#895 - [BUG]: Update CSM Authorization karavictl CLI flag descriptions](https://github.com/dell/csm/issues/895)
+- [#916 - [BUG]: Remove references to deprecated io/ioutil package](https://github.com/dell/csm/issues/916)
+
+### Known Issues
+
+There are no known issues in this release.
diff --git a/content/v2/authorization/troubleshooting.md b/content/v2/authorization/troubleshooting.md
index 53e2b787d8..664e73b98e 100644
--- a/content/v2/authorization/troubleshooting.md
+++ b/content/v2/authorization/troubleshooting.md
@@ -6,6 +6,10 @@ Description: >
Troubleshooting guide
---
+{{% pageinfo color="primary" %}}
+The CSM Authorization RPM will be deprecated in a future release. It is highly recommended that you use CSM Authorization Helm deployment or CSM Operator going forward.
+{{% /pageinfo %}}
+
## RPM Deployment
- [The Failure of Building an Authorization RPM](#The-Failure-of-Building-an-Authorization-RPM)
- [Running `karavictl tenant` commands result in an HTTP 504 error](#running-karavictl-tenant-commands-result-in-an-http-504-error)
@@ -179,4 +183,4 @@ kubectl -n rollout restart deploy/proxy-server
```bash
kubectl -n rollout restart deploy/vxflexos-controller
kubectl -n rollout restart daemonSet/vxflexos-node
-```
\ No newline at end of file
+```
diff --git a/content/v2/authorization/uninstallation.md b/content/v2/authorization/uninstallation.md
index c81fb3c639..0200e9d51d 100644
--- a/content/v2/authorization/uninstallation.md
+++ b/content/v2/authorization/uninstallation.md
@@ -6,6 +6,10 @@ description: >
Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization Uninstallation
---
+{{% pageinfo color="primary" %}}
+The CSM Authorization RPM will be deprecated in a future release. It is highly recommended that you use CSM Authorization Helm deployment or CSM Operator going forward.
+{{% /pageinfo %}}
+
This section outlines the uninstallation steps for Container Storage Modules (CSM) for Authorization.
## Uninstalling the RPM
@@ -24,4 +28,4 @@ rpm -e
## Uninstalling the sidecar-proxy in the CSI Driver
-To uninstall the sidecar-proxy in the CSI Driver, [uninstall](../../csidriver/uninstall) the driver and [reinstall](../../deployment) the driver using the original configuration secret.
\ No newline at end of file
+To uninstall the sidecar-proxy in the CSI Driver, [uninstall](../../csidriver/uninstall) the driver and [reinstall](../../deployment) the driver using the original configuration secret.
diff --git a/content/v2/authorization/upgrade.md b/content/v2/authorization/upgrade.md
index 17a282aecd..a585c16933 100644
--- a/content/v2/authorization/upgrade.md
+++ b/content/v2/authorization/upgrade.md
@@ -6,6 +6,10 @@ description: >
Upgrade Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization
---
+{{% pageinfo color="primary" %}}
+The CSM Authorization RPM will be deprecated in a future release. It is highly recommended that you use CSM Authorization Helm deployment or CSM Operator going forward.
+{{% /pageinfo %}}
+
This section outlines the upgrade steps for Container Storage Modules (CSM) for Authorization. The upgrade of CSM for Authorization is handled in 2 parts:
- Upgrading the CSM for Authorization proxy server
- Upgrading the Dell CSI drivers with CSM for Authorization enabled
@@ -59,4 +63,4 @@ To rollback the rpm package on the system, run the below command:
```bash
rpm -Uvh --oldpackage karavi-authorization-.x86_64.rpm --nopreun --nopostun
-```
\ No newline at end of file
+```
diff --git a/content/v2/cosidriver/_index.md b/content/v2/cosidriver/_index.md
new file mode 100644
index 0000000000..97c354c9d7
--- /dev/null
+++ b/content/v2/cosidriver/_index.md
@@ -0,0 +1,57 @@
+---
+title: "COSI Driver"
+linkTitle: "COSI Driver"
+description: About Dell Technologies (Dell) COSI Driver
+weight: 3
+---
+
+The COSI Driver by Dell implements an interface between a [COSI (spec v1alpha1)](https://container-object-storage-interface.github.io/docs/)-enabled Container Orchestrator and Dell Storage Arrays. It is a plug-in that is installed into Kubernetes to provide object storage using Dell storage systems.
+
+Dell COSI Driver is a multi-backend driver, meaning that it can connect to multiple Object Storage Platform (OSP) Instances and provide access to them using the same COSI interface.
+
+## Features and capabilities
+
+### Supported Container Orchestrator Platforms
+
+> ℹ️ **NOTE:** During technical preview, no certification is performed. The platforms listed below were tested by developers using the integration test suite.
+
+{{
}}
+| Area | Core Features | Implementation level | Status | Details |
+|:------------------|:-----------------------|:-----------------------:|:---------------:|---------------------------------------------------------------------------------------------|
+| Provisioning | _Create Bucket_ | Minimum Viable Product | ✅ Done | Bucket is created using default settings. |
+| | | Brownfield provisioning | ✅ Done | Bucket is created based on existing bucket in Object Storage Provisioner. |
+| | | Advanced provisioning | 📝 Design draft | Extra (non-default) parameters for bucket provisioning are controlled from the BucketClass. |
+| | _Delete Bucket_ | Minimum Viable Product | ✅ Done | Bucket is deleted. |
+| Access Management | _Grant Bucket Access_ | Minimum Viable Product | ✅ Done | Full access is granted for given bucket. |
+| | | Advanced permissions | 📝 Design draft | More control over permission is done through BucketAccessClass. |
+| | _Revoke Bucket Access_ | Minimum Viable Product | ✅ Done | Access is revoked. |
+{{
}}
diff --git a/content/v2/cosidriver/features/objectscale.md b/content/v2/cosidriver/features/objectscale.md
new file mode 100644
index 0000000000..8fbaf5fd33
--- /dev/null
+++ b/content/v2/cosidriver/features/objectscale.md
@@ -0,0 +1,351 @@
+---
+title: ObjectScale
+linktitle: ObjectScale
+weight: 1
+Description: Code features for ObjectScale COSI Driver
+---
+
+> **Notational Conventions**
+>
+> The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" are to be interpreted as described in [RFC 2119](http://tools.ietf.org/html/rfc2119) (Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997).
+
+Fields are specified by their path. Consider the following examples:
+
+1. A field specified by the path `spec.authenticationType=IAM` is reflected in the resource's YAML as follows:
+
+```yaml
+spec:
+ authenticationType: IAM
+```
+
+2. A field specified by the path `spec.protocols=[Azure,GCS]` is reflected in the resource's YAML as follows:
+
+```yaml
+spec:
+ protocols:
+ - Azure
+ - GCS
+```
+
+## Prerequisites
+
+To use the COSI Driver on the ObjectScale platform, the following components MUST be deployed to your cluster:
+
+- Kubernetes Container Object Storage Interface CRDs
+- Container Object Storage Interface Controller
+
+> ℹ️ **NOTE:** Use [the official COSI guide](https://container-object-storage-interface.github.io/docs/deployment-guide#quick-start) to deploy the required components.
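+At the time of writing, the upstream quick start installed these components with kustomize-based manifests along the following lines (the repository paths are taken from the linked guide and may change; treat this as a sketch and follow the guide for the authoritative steps):
+
+```sh
+# Install the COSI CRDs and the COSI controller from the upstream repositories
+kubectl create -k github.com/kubernetes-sigs/container-object-storage-interface-api
+kubectl create -k github.com/kubernetes-sigs/container-object-storage-interface-controller
+```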
+
+## Kubernetes Objects
+
+### Bucket
+
+`Bucket` represents a bucket or its equivalent in the storage backend. Generally, it should be created only in the brownfield provisioning scenario. The following is a sample manifest of a `Bucket` resource:
+
+```yaml
+apiVersion: objectstorage.k8s.io/v1alpha1
+kind: Bucket
+metadata:
+ name: my-bucket
+spec:
+ driverName: cosi.dellemc.com
+ bucketClassName: my-bucket-class
+ bucketClaim: my-bucket-claim
+ deletionPolicy: Delete
+ protocols:
+ - S3
+ parameters:
+ id: "my.objectscale"
+```
+
+#### `spec.existingBucketID`
+
+`existingBucketID` is an optional field that contains the unique id of the bucket in ObjectScale. This field should be used to specify a bucket that has been created outside of COSI.
+Because the driver supports multiple arrays and multiple ObjectStores from one instance, the `existingBucketID` needs to have a format of: `-`, e.g. `my.objectscale-existing-bucket`.
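+In a `Bucket` manifest this field sits under `spec`; a minimal sketch reusing the sample values above:
+
+```yaml
+spec:
+  # connection id and bucket name joined by a hyphen, as described above
+  existingBucketID: my.objectscale-existing-bucket
+```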
+
+### Bucket Claim
+
+`BucketClaim` represents a claim to provision a `Bucket`. The following is a sample manifest for creating a `BucketClaim` resource:
+
+```yaml
+apiVersion: objectstorage.k8s.io/v1alpha1
+kind: BucketClaim
+metadata:
+ name: my-bucketclaim
+ namespace: my-namespace
+spec:
+ bucketClassName: my-bucketclass
+ protocols: [ 'S3' ]
+```
+
+#### Unsupported options
+
+- `spec.protocols=[Azure,GCS]` - Protocols are the set of data APIs this bucket is required to support. Of the protocols specified by COSI (`v1alpha1`), the Dell ObjectScale platform only supports the S3 protocol. Protocols `Azure` and `GCS` MUST NOT be used.
+
+### Bucket Class
+
+Installation of the ObjectScale COSI driver does not create a `BucketClass` resource. `BucketClass` represents a class of `Bucket` resources with similar characteristics.
+Dell COSI Driver is a multi-backend driver, meaning that a specific `BucketClass` should be created for every platform. The `BucketClass` resource should contain the name of the multi-backend driver and the `parameters.id` of the specific Object Storage Platform.
+
+The default sample is shown below:
+
+```yaml
+apiVersion: objectstorage.k8s.io/v1alpha1
+kind: BucketClass
+metadata:
+ name: my-bucketclass
+driverName: cosi.dellemc.com
+deletionPolicy: Delete
+parameters:
+ id: "my.objectscale"
+```
+
+#### `deletionPolicy`
+
+> ⚠ **WARNING:** This field is case sensitive, and bucket deletion will fail if the policy is not set exactly to *Delete* or *Retain*.
+
+`deletionPolicy` in `BucketClass` resource is used to specify how COSI should handle deletion of the bucket. There are two possible values:
+- **Retain**: Indicates that the bucket should not be deleted from the object store. The underlying bucket is not cleaned up when the Bucket object is deleted. With this option, the bucket is unreachable from Kubernetes level.
+- **Delete**: Indicates that the bucket should be permanently deleted from the object store once all the workloads accessing this bucket are done. The underlying bucket is cleaned up when the Bucket object is deleted.
+
+#### `emptyBucket`
+
+The `emptyBucket` field is set in the configuration YAML file passed to the chart during COSI driver installation. If it is set to `true`, the bucket will be emptied before deletion. If it is set to `false`, ObjectScale cannot delete a non-empty bucket and will return an error.
+
+`emptyBucket` has no effect when Deletion Policy is set to `Retain`.
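+As a sketch, the corresponding fragment of the configuration file passed to the chart (see the [Helm Chart configuration](../../installation/configuration_file)) looks like this:
+
+```yaml
+connections:
+- objectscale:
+    id: my.objectscale
+    # emptyBucket: empty the bucket before deletion instead of failing on a non-empty bucket
+    emptyBucket: true
+```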
+
+### Bucket Access Class
+
+Installation of the ObjectScale COSI driver does not create a `BucketAccessClass` resource. `BucketAccessClass` represents a class of `BucketAccess` resources with similar characteristics.
+Dell COSI Driver is a multi-backend driver, meaning that a specific `BucketAccessClass` should be created for every platform. The `BucketAccessClass` resource should contain the name of the multi-backend driver and the `parameters.id` of the specific Object Storage Platform.
+The default sample is shown below:
+
+```yaml
+apiVersion: objectstorage.k8s.io/v1alpha1
+kind: BucketAccessClass
+metadata:
+ name: my-bucketaccessclass
+driverName: cosi.dellemc.com
+authenticationType: Key
+parameters:
+ id: "my.objectscale"
+```
+
+#### `authenticationType`
+
+> ⚠ **WARNING:** This field is case sensitive, and granting access will fail if it is not set exactly to *Key* or *IAM*.
+
+`authenticationType` denotes the style of authentication. The only supported option for COSI Driver is `Key`.
+
+#### Unsupported options
+
+- `authenticationType=IAM` - denotes the style of authentication. The `IAM` value MUST NOT be used, because IAM style authentication is not supported.
+
+### Bucket Access
+
+The `BucketAccess` resource represents an access request to generate a `Secret` that will allow you to access the object storage. The following is a sample manifest for creating a `BucketAccess` resource:
+
+```yaml
+apiVersion: objectstorage.k8s.io/v1alpha1
+kind: BucketAccess
+metadata:
+ name: my-bucketaccess
+ namespace: my-namespace
+spec:
+ bucketClaimName: my-bucketclaim
+ protocol: S3
+ bucketAccessClassName: my-bucketaccessclass
+ credentialsSecretName: my-s3-secret
+```
+
+#### `spec.protocol`
+
+> ⚠ **WARNING:** This field is case sensitive, and provisioning will fail if the protocol is not set exactly to *S3*.
+
+`spec.protocol` is the name of the Protocol that this access credential is supposed to support.
+
+#### Unsupported options
+
+- `spec.serviceAccountName=...` - is the name of the serviceAccount that COSI will map to the object storage provider service account when IAM-style authentication is specified. Since IAM-style authentication is not supported, this field is also unsupported.
+- `spec.protocol=...` - Protocols are the set of data APIs this bucket is required to support. Of the protocols specified by COSI (`v1alpha1`), the Dell ObjectScale platform only supports the `S3` protocol. Protocols `Azure` and `GCS` MUST NOT be used.
+
+## Provisioning Buckets
+
+Each bucket is provisioned using default options:
+
+{{
}}
+| Category | Parameter | Description | Default |
+|-------------|---------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------|
+| Policy | | | No policy applied on the bucket. |
+| Controls | | | No additional controls have been setup for the bucket. |
| Controls | Versioning | 1. Enabling versioning allows maintaining multiple versions of the same object in the same bucket. 2. Bucket Versioning can't be disabled when Object Lock is enabled. | Off |
+| Controls | Object Lock | Enabling object lock allows objects to be locked or protected from deletion or overwrite, for a fixed amount of time or indefinitely, depending on the configuration. | Off |
+| Controls | Quotas | 1. Block writes at Quota: Represents a hard quota that prevents bucket writes when total object count/size is reached. 2. Notification at Quota: Represents a soft quota value which triggers a notification when total object count/size is reached. | Off |
+| Controls | Encryption | If encryption is turned on bucket data will be saved in encrypted form. | Off |
+| Event Rules | | The notifications will be sent to the destination when the selected type of events occur on the bucket. | No event rules configured for the bucket. |
+| Events | All | | Off |
+| Events | `s3:ObjectCreated:Put` | | Off |
+| Events | `s3:ObjectCreated:Copy` | | Off |
+| Events | `s3:ObjectCreated:CompleteMultipartUpload` | | Off |
+| Events | `s3:ObjectCreated:*` | | Off |
+| Events | `s3:ObjectRemoved:Delete` | | Off |
+| Events | `s3:ObjectRemoved:DeleteMarkerCreated` | | Off |
+| Events | `s3:ObjectRemoved:*` | | Off |
+| Events | `s3:Replication:OperationFailedReplication` | | Off |
| Event Rules | Prefix | Event rules are applied to object names with the given prefix. | Off |
| Event Rules | Suffix | Event rules are applied to object names with the given suffix. | Off |
+| Event Rules | Send To | The notification destination used to send notification for selected events. | Off |
+{{
}}
+
+### Kubernetes Administrator Steps
+
+The first step before you can start provisioning object storage is to create a `BucketClass`. The `BucketClass` is an object that defines the provisioning and management characteristics of `Bucket` resources. It acts as an abstraction layer between users (such as applications or pods) and the underlying object storage infrastructure. `BucketClass` allows you to dynamically provision and manage `Buckets` in a consistent and automated manner.
+
+The following example shows how to create a `BucketClass`:
+
+```sh
+cat <<EOF | kubectl apply -f -
+apiVersion: objectstorage.k8s.io/v1alpha1
+kind: BucketClass
+metadata:
+  name: my-bucketclass
+driverName: cosi.dellemc.com
+deletionPolicy: Delete
+parameters:
+  id: "my.objectscale"
+EOF
+```
+
+> ℹ️ **NOTE:** remember to replace _my-namespace_, _my-bucketclass_ and _my-bucketclaim_ with actual values.
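+With the `BucketClass` in place, a user requests a bucket by creating a `BucketClaim` (greenfield provisioning). The following is a minimal sketch based on the sample manifest shown earlier:
+
+```sh
+cat <<EOF | kubectl apply -f -
+apiVersion: objectstorage.k8s.io/v1alpha1
+kind: BucketClaim
+metadata:
+  name: my-bucketclaim
+  namespace: my-namespace
+spec:
+  bucketClassName: my-bucketclass
+  protocols: [ 'S3' ]
+EOF
+```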
+
+#### Brownfield Provisioning
+
+_Brownfield Provisioning_ means using an existing bucket that may already contain data. This differs slightly from _Greenfield Provisioning_, as we need to create both the `Bucket` and the `BucketClaim` manually.
+
+The following example shows how to create `Bucket` and `BucketClaim` for brownfield provisioning.
+
+```sh
+cat <<EOF | kubectl apply -f -
+apiVersion: objectstorage.k8s.io/v1alpha1
+kind: Bucket
+metadata:
+  name: existing-bucket-name
+spec:
+  driverName: cosi.dellemc.com
+  bucketClassName: my-bucketclass
+  bucketClaim: my-bucketclaim
+  deletionPolicy: Retain
+  existingBucketID: my.objectscale-existing-bucket-name
+  protocols:
+    - S3
+  parameters:
+    id: "my.objectscale"
+---
+apiVersion: objectstorage.k8s.io/v1alpha1
+kind: BucketClaim
+metadata:
+  name: my-bucketclaim
+  namespace: my-namespace
+spec:
+  bucketClassName: my-bucketclass
+  existingBucketName: existing-bucket-name
+  protocols: [ 'S3' ]
+EOF
+```
+
+> ℹ️ **NOTE:** remember to replace _my-namespace_, _existing-bucket-name_ and _my-bucketclaim_ with actual values.
+
+## Deleting Buckets
+
+There are a few crucial details regarding bucket deletion. The first is the `deletionPolicy`, which is used to specify how COSI should handle deletion of a bucket; it is found in the `BucketClass` resource and can be set to `Delete` or `Retain`. The second is the `emptyBucket` field in the [Helm Chart configuration](../../installation/configuration_file).
+
+The following example shows how to delete a `BucketClaim`.
+
+```sh
+kubectl --namespace=my-namespace delete bucketclaim my-bucketclaim
+```
+
+> ℹ️ **NOTE:** remember to replace _my-namespace_ and _my-bucketclaim_ with actual values.
+
+## Granting Access
+
+### Kubernetes Administrator Steps
+
+The first step before you start granting access to the object storage for your application is to create a `BucketAccessClass`. The `BucketAccessClass` is an object that defines the access management characteristics of `Bucket` resources. It acts as an abstraction layer between users (such as applications or pods) and the underlying object storage infrastructure. `BucketAccessClass` allows you to dynamically grant access to `Buckets` in a consistent and automated manner.
+
+The following example shows how to create a `BucketAccessClass`:
+
+```sh
+cat <<EOF | kubectl apply -f -
+apiVersion: objectstorage.k8s.io/v1alpha1
+kind: BucketAccessClass
+metadata:
+  name: my-bucketaccessclass
+driverName: cosi.dellemc.com
+authenticationType: Key
+parameters:
+  id: "my.objectscale"
+EOF
+```
+
+> ⚠ **WARNING:** only full access granting is supported.
+
+The underlying workflow for granting access to the object storage primitive is:
+
+- the user is added to a particular account in ObjectScale;
+- the bucket policy is modified to reflect that the user has gained permissions for the bucket;
+- an access key for the user is added to ObjectScale.
+
+The following example shows how to grant an access using `BucketAccess` resource:
+
+```sh
+cat <<EOF | kubectl apply -f -
+apiVersion: objectstorage.k8s.io/v1alpha1
+kind: BucketAccess
+metadata:
+  name: my-bucketaccess
+  namespace: my-namespace
+spec:
+  bucketClaimName: my-bucketclaim
+  protocol: S3
+  bucketAccessClassName: my-bucketaccessclass
+  credentialsSecretName: my-s3-secret
+EOF
+```
+
+> ℹ️ **NOTE:** remember to replace _my-namespace_, _my-bucketaccessclass_, _my-bucketclaim_, _my-s3-secret_ and _my-bucketaccess_ with actual values.
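+Once access is granted, the generated `Secret` can be consumed by a workload like any other Kubernetes Secret. A minimal sketch (the pod name, image, and mount path are assumptions, not part of the driver):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: my-app
+  namespace: my-namespace
+spec:
+  containers:
+  - name: app
+    image: my-app:latest
+    volumeMounts:
+    - name: cosi-credentials
+      # the credentials from the generated Secret are exposed to the application under this path
+      mountPath: /data/cosi
+  volumes:
+  - name: cosi-credentials
+    secret:
+      secretName: my-s3-secret
+```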
+
+## Revoking Access
+
+This feature revokes a user's previously granted access to a particular bucket.
+When a resource of kind `BucketAccess` is removed from Kubernetes, it triggers the following process:
+
+- the access key is removed from ObjectScale;
+- the bucket policy is modified to reflect that the user has lost permissions for the bucket;
+- the user is removed from ObjectScale.
+
+The following example shows how to revoke a `BucketAccess`:
+
+```sh
+kubectl --namespace=my-namespace delete bucketaccess my-bucketaccess
+```
+
+> ℹ️ **NOTE:** remember to replace _my-namespace_ and _my-bucketaccess_ with actual values.
diff --git a/content/v2/cosidriver/installation/_index.md b/content/v2/cosidriver/installation/_index.md
new file mode 100644
index 0000000000..b82a377fcd
--- /dev/null
+++ b/content/v2/cosidriver/installation/_index.md
@@ -0,0 +1,6 @@
+---
+title: "Installation"
+linkTitle: "Installation"
+weight: 4
+description: Process of installation
+---
\ No newline at end of file
diff --git a/content/v2/cosidriver/installation/configuration_file.md b/content/v2/cosidriver/installation/configuration_file.md
new file mode 100644
index 0000000000..8864eba93e
--- /dev/null
+++ b/content/v2/cosidriver/installation/configuration_file.md
@@ -0,0 +1,320 @@
+---
+title: Configuration File
+linktitle: Configuration File
+weight: 1
+Description: Description of configuration file for ObjectScale
+---
+
+> **Notational Conventions**
+>
+> The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" are to be interpreted as described in [RFC 2119](http://tools.ietf.org/html/rfc2119) (Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997).
+
+## Dell COSI Driver Configuration Schema
+
+This configuration file is used to specify the settings for the Dell COSI Driver, which is responsible for managing connections to the Dell ObjectScale platform. The configuration file is written in YAML, is based on a JSON schema, and adheres to that schema's specification.
+
+YAML files can have comments, which are lines in the file that begin with the `#` character. Comments can be used to provide context and explanations for the data in the file, and they are ignored by parsers when reading the YAML data.
+
+## Configuration file example
+
+```yaml
+# This is an example of a configuration file. You MUST edit the file before using it in your environment.
+
+# List of connections to object storage platforms used for object storage provisioning.
+connections:
+
+# Configuration specific to the Dell ObjectScale platform.
+- objectscale:
+
+ # Default, unique identifier for the single connection.
+ #
+ # It MUST NOT contain any hyphens '-'.
+ #
+ # REQUIRED
+ id: example.id
+
+ # Credentials used for authentication to object storage provider.
+ #
+ # REQUIRED
+ credentials:
+
+ # Username used to login to ObjectScale Management API
+ #
+ # REQUIRED
+ username: testuser
+
+ # Password used to login to ObjectScale Management API
+ #
+ # REQUIRED
+ password: testpassword
+
+ # Namespace associated with the user/tenant that is allowed to access the bucket.
+ # It can be retrieved from the ObjectScale Portal, under the Accounts tab.
+ #
+ # How to:
+ # 1. Log in to the ObjectScale Portal;
+ # 2. Select the Accounts tab in the panel on the left side of your screen;
+ # 3. You should now see a list of accounts. Select one of the values from the column called 'Account ID'.
+ #
+ # REQUIRED
+ namespace: osaia3382ab190a7a3df
+
+ # The ID of the ObjectScale the driver should communicate with.
+ # It can be retrieved from the ObjectScale Portal, under the ObjectScale tab.
+ #
+ # How to:
+ # 1. Log in to the ObjectScale Portal;
+ # 2. From the menu on the left side of the screen, select the 'Administration' tab;
+ # 3. Expand the 'Administration' tab and select 'ObjectScale';
+ # 4. Select the 'Federation' tab;
+ # 5. In the table you will see a value under the 'ObjectScale ID' column.
+ #
+ # REQUIRED
+ objectscale-id: osci809ccd51aade874b
+
+ # The ID of the Objectstore under specific ObjectScale, with which the driver should communicate.
+ # It can be retrieved from the ObjectScale Portal, under the ObjectScale tab.
+ #
+ # How to:
+ # 1. Log in to the ObjectScale Portal;
+ # 2. From the menu on the left side of the screen, select the 'Administration' tab;
+ # 3. Expand the 'Administration' tab and select 'ObjectScale';
+ # 4. Select one of the object stores visible in the table, and click its name;
+ # 5. You should see the 'Summary' of that object store.
+ # 6. In the 'General' section, you will see a value under the 'Object store ID' column.
+ #
+ # REQUIRED
+ objectstore-id: ostibd2054393c389b1a
+
+ # Endpoint of the ObjectScale Gateway Internal service.
+ # It can be retrieved from the ObjectScale Portal, under the ObjectScale tab.
+ #
+ # How to:
+ # 1. Log in to the ObjectScale Portal;
+ # 2. From the menu on the left side of the screen, select the 'Administration' tab;
+ # 3. Expand the 'Administration' tab and select 'ObjectScale';
+ # 4. Select the 'Federation' tab;
+ # 5. In the table you will see one or more values; expand the selected value;
+ # 6. In the table, you will now see the 'External Endpoint' value associated with 'objectscale-gateway-internal'.
+ #
+ # Valid values:
+ # - https://:443
+ # - https://
+ #
+ # REQUIRED
+ objectscale-gateway: https://gateway.objectscale.test:443
+
+ # Endpoint of the ObjectScale ObjectStore Management Gateway service.
+ # It can be retrieved from the ObjectScale Portal, under the ObjectScale tab.
+ #
+ # How to:
+ # 1. Log in to the ObjectScale Portal;
+ # 2. From the menu on the left side of the screen, select the 'Administration' tab;
+ # 3. Expand the 'Administration' tab and select 'ObjectScale';
+ # 4. Select one of the object stores visible in the table, and click its name;
+ # 5. You should see the 'Summary' of that object store.
+ # 6. In the 'Management Service details' section, you will see a value under the 'IP address' column.
+ #
+ # Valid values:
+ # - https://:4443
+ # - https://
+ #
+ # REQUIRED
+ objectstore-gateway: https://gateway.objectstore.test:4443
+
+ # Identity and Access Management (IAM) API specific field.
+ # It points to the region in which object storage provider is installed.
+ #
+ # OPTIONAL
+ region: us-east-1
+
+ # Indicates if the contents of the bucket should be emptied as part of the deletion process
+ #
+ # Possible values:
+ # - true - bucket will be emptied during the deletion.
+ # - false - default - deletion of bucket will fail if the bucket is not empty.
+ # All contents of the bucket must be cleared manually.
+ #
+ # OPTIONAL
+ emptyBucket: false
+
+ # Protocols supported by the connection
+ #
+ # Valid values:
+ # s3 (property)
+ #
+ # REQUIRED
+ protocols:
+
+ # S3 configuration
+ #
+ # REQUIRED
+ s3:
+
+ # Endpoint of the S3 service.
+ # It can be retrieved from the ObjectScale Portal, under the ObjectScale tab.
+ #
+ # How to:
+ # 1. Log in to the ObjectScale Portal;
+ # 2. From the menu on the left side of the screen, select the 'Administration' tab;
+ # 3. Expand the 'Administration' tab and select 'ObjectScale';
+ # 4. Select one of the object stores visible in the table, and click its name;
+ # 5. You should see the 'Summary' of that object store.
+ # 6. In the 'S3 Service details' section, you will see a value under the 'IP address' column.
+ #
+ # Valid values:
+ # - https://:443
+ # - https://
+ # - http://:80
+ # - http://
+ #
+ # REQUIRED
+ endpoint: https://s3.objectstore.test
+
+ # TLS configuration details
+ #
+ # REQUIRED
+ tls:
+
+ # Controls whether a client verifies the server's certificate chain and host name.
+ #
+ # Possible values:
+ # - true - default
+ # - false
+ #
+ # REQUIRED
+ insecure: false
+
+ # Base64 encoded content of the root certificate authority file.
+ #
+ # How To:
+ # 1. Fetch the certificate from the ObjectScale:
+ # $ openssl s_client -showcerts -connect [ObjectScale IP] </dev/null | openssl x509 -outform PEM > root.crt
+ # 2. Encode the data using the following commands:
+ # $ cat root.crt | base64 > root.crt.b64
+ # 3. Open the 'root.crt.b64' file, copy its contents, and paste them into the configuration file
+ #
+ # REQUIRED:
+ # + if insecure is set to false
+ root-cas: |-
+ LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUU1RENDQXN5Z0F3SUJBZ0lCQVRBTkJna3Fo
+ a2lHOXcwQkFRc0ZBREFTTVJBd0RnWURWUVFERXdkMFpYTjAKTFdOaE1CNFhEVEl6TURNeE56RXlN
+ ek16TTFvWERUSTBNRGt4TnpFeU5ETXpNRm93RWpFUU1BNEdBMVVFQXhNSApkR1Z6ZEMxallUQ0NB
+ aUl3RFFZSktvWklodmNOQVFFQkJRQURnZ0lQQURDQ0Fnb0NnZ0lCQU9oUmc1Um95UXdxCmVtQ1VN
+ TDU3cXVLSXJjMWZXdGdlSGRpbVRSamFsVERQMStqYUhGeG56d2M1MTRwOWNLNzcxRWZ2bDRjZW9Q
+ VWsKWnRhNSsxckRxdlBkd25BMnE2TXI5cFB2aWQyRkRiZVZPdXNIaHNQSG1kMDVxa1pnNGNXUGdp
+ eXlSM3BmNTF0bApVYkxyNU1tL0FIK0JvRHVMbFo1UG5SVUw1b0hFd1hQa3BXc0UyMXJDc2xSdmJv
+ WWZJYlplUzlsOHhlYURMVmdDCk53UmFHRjgxTFpoZjVrTDA0SXJUV0dETzdlbVF0S2tpN0dSZ1Ex
+ bHIxRHR3SXZpa0puakhBeEJiOTJ3WDN1WnoKcGdMQksxU2RsUlY1bjY2VTZtUklzMGo1MkVyTG1h
+ TDdUSHJxRVZHRXNvczFIbEZFQ2NJMlNhQjZZdmltaTdZawpmT1lOS2NPaE5BcXlXcWhlUERHQ0dq
+ d3l4RHR3OWN2Z2FJSTlTOFFUa2w5Z1JiL056dFlMREptejlEYXZiRWNjCjRDelZBdUVmdUVtWUNi
+ aFRrUVUyWitZczlKdXgwdmc4WXFFTExlRzlNZHc1cmZJQkkwNmRMRDVkU0JUVFc1Y08KYjRNN0h1
+ ODhrZUdIWnlNZXU2cVMyR2czUUFTVEM3RkpFcWFYTkRDc095aCs2Uk14UnkyZy9idEZMRm5VdmlG
+ QQo0NktKZHk0QWVjOEpXVkc1OFlLYkd2QlJrekkzY1BNWE1oWFpDS3pZb0tnUWoxMnFOMWM0SkVp
+ TUFPK2F2ZW9RCjB0dnJmd3MxMlF3d3ZIZm40SCtYVnlDZGpMcDE5dlhlY0FSRFJyaGlkRW1CbEFD
+ cVJVdTFLSGhzejZ2TmxzUzIKSlZiWU9BYW5ISzYzNzdYT211OUthL2x1TmxSVDdmckxBZ01CQUFH
+ alJUQkRNQTRHQTFVZER3RUIvd1FFQXdJQgpCakFTQmdOVkhSTUJBZjhFQ0RBR0FRSC9BZ0VBTUIw
+ R0ExVWREZ1FXQkJSbDk4cG1valVUQ3RZb3phTDl6L0hSCmJIUkdkREFOQmdrcWhraUc5dzBCQVFz
+ RkFBT0NBZ0VBNUVxL09ocGs0MUxLa3R2ajlmSWNFWXI5Vi9QUit6Z1UKQThRTUtvOVBuZSsrdW1N
+ dEZEK1R1M040b1lxV0srTmg0ajdHTm80RFJXejNnbWpZdUlDdklhMGo5emppT1FkWgo1Q2xVQkdk
+ YUlScFoyRG5CblBUM2tJbnhSd0hmU0JMVlVTRXRTcXh4YkV2dk5LWkZWY0lsRUV5ODZodnJ5OUZD
+ CjhFOWRXWEw5VDhMd29uVXpqSjBxZ242cGRjNHpjdEtUMDFjaDQvWGw2UjBVQkR5Q1NoSGFyU29C
+ eTkvSk1NTXIKajBoeEZSN3Izb052a2N3QWl6T1RsQ3BWdTZaNHF2cng3NndCc0hIanV6elNiODJL
+ dUxnelJUNElWbjFjbzRrVQpSaTlBRkNaRlh6QklaQlEwTUZ6NU03bzJkN0ovN3ZMOFhYRlhwWlpy
+ K3RibWE1L3BCSmZhcXliK3FPRXViWGdUCjFsSDZGeFNVcWt0TktQNlZoeWdQY2ZSMlR4YWtHZ0cw
+ Ny9qVWZWRmhpVXM5aFBlejh6Sjg2RWMrd283VEVQbEsKSlRnMHZmMDM4MTROR3ZuWmlpTnBFWVBM
+ S0ZhcHlDMWJONVdFTGFTWFVBaVFPZDJjK01xVHAyN21vV1RZa29TOApzRFczRTMraEN6c1djdmFY
+ RW1nMjZJTjQybmVUWFBuNS9QajNpcUVoT0pQYkJsY3l6dDBZL1BYeU1jR3JtbUs1CkhxOUMzTndl
+ VUV3M09rY09BOXlCdC9kLzZ5S3c3QmovSlFQZGI0aDlWWjNGN09wemFpeXQ5cFhvSXRQMHNUSHUK
+ S2ZKbDBCRUFYV29SR2lWM2EyeUlUcGp0a0pkQVBoS0xpSkkrWWowZEVEU05WZnlENFhJTXdQSmpV
+ eFpsd2FROQorQUtkVDFBdlplbz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
+
+ # Base64 encoded content of the client's certificate file.
+ #
+ # How To:
+ # Assuming the client certificate file is named 'client.crt', you can obtain the data using the following commands:
+ # cat client.crt | base64 > client.crt.b64
+ # You can then open the 'client.crt.b64' file, copy its contents, and paste them into the configuration file
+ #
+ # It is required only if the server requires client authentication.
+ # It is mutually required if the field client-key has a value.
+ #
+ # REQUIRED:
+ # + if insecure is set to false
+ # AND
+ # + the client-key field is not empty
+ client-cert: |-
+ LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVORENDQWh5Z0F3SUJBZ0lSQU9JSlZ2NnB3
+ a0lIK0p1NTNKSEFuam93RFFZSktvWklodmNOQVFFTEJRQXcKRWpFUU1BNEdBMVVFQXhNSGRHVnpk
+ QzFqWVRBZUZ3MHlNekF6TVRjeE1qTTJNelphRncweU5EQTVNVGN4TWpRegpNamxhTUJFeER6QU5C
+ Z05WQkFNVEJtTnNhV1Z1ZERDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDCkFRb0Nn
+ Z0VCQU5LVFNHeEEyV2RyNmtCR0N3RjY5c1JVZElPV0xqeTUvN3QyRktKWDVVenNyMDlFWW9tS0sr
+ bVQKdWF2eWJIMWhsbTYzdG5kb3VFOHFIQnVhYmYvUGIzSlRTQ0twR0NRdHR2NmQzeGc3MHFZVWIx
+ cUZKT2o5andlNgpRZW0xb2RIVFpLc0xMc2J1N1Fzei91MGtseUovMHNYcFQ5K2JXK1M0OHMrL3pK
+ dHNDR21SdVhlRjE2Y1FqOWErCkFFejNqVzhrdExMYi9nS25GWGRSS2FiY2RWLzNzN2RLNWx0SXpS
+ ZlRvUWw0bzBpckpOa3Z4eXIrYUtMMTR4NUQKc3g2Wm9DUHJhRFYrWWlRS0ZSenFjQ1RYcWdRb3BY
+ LzFINFRMV3RkeG14M25IdmhZdzB0VlBZSXZsa245NmpJUwpKdVE2K1VMbVAzZDNzNWJadlhQeUZD
+ bENKSENxaWZNQ0F3RUFBYU9CaFRDQmdqQU9CZ05WSFE4QkFmOEVCQU1DCkE3Z3dIUVlEVlIwbEJC
+ WXdGQVlJS3dZQkJRVUhBd0VHQ0NzR0FRVUZCd01DTUIwR0ExVWREZ1FXQkJTRWVIOTEKVnBhdDlV
+ SWlrRUdkc1ljdUI2dWxOakFmQmdOVkhTTUVHREFXZ0JSbDk4cG1valVUQ3RZb3phTDl6L0hSYkhS
+ RwpkREFSQmdOVkhSRUVDakFJZ2daamJHbGxiblF3RFFZSktvWklodmNOQVFFTEJRQURnZ0lCQUQv
+ TnZVNWRSajlHCmMzYzVVQ3VLcDI4U2lacjAySE40M091WU5QVlk4L1c5QnZUSk5yMXRPRDFscnhE
+ eFMzTkpVdzdGaTNidmU5enMKSzA0a09peUxpVjRLd0g2eitpVm8xZU9GUzJLd1BRaGxsaDlobVBB
+ dXZ4Zm5Fd2k2ZEdXZm5nNExmQ1FvbXFkTgpmbkFCODJBbTViZTBubGJvaGdLcFJUWnVBZjR4dVY4
+ SWxlQ1pjVHdFL1hBbERhNVhHaDNvWlE3REYrQnFLSkNUCk1pYS9MT0JPYXRoRVh5ZGJmbndOUUhy
+ UWlQZzk4c2NMc3FTZEFQMFNGYjMrMmdscFJZT1JrQlFvOWRoa1pGZXkKc2tUakVhbk9YaUhqWldq
+ aXZRS2Z2WEUvK1l2eGpCcEJqREE2NnYyeUgzSlJqZEM5ZTR2cnE2R0t6VXZML3ltOQpVOGdVWnho
+ L2ZmeFp4TVA5UmxXajQ0U1NGUVpZNGxUNFF5U2lteFpGdVBTamwzV29QME12UHVvUzFUUzhQUk5s
+ CnVGeXBVell5SEtlbHpLUnRJZmlnWG9XQi9uR2hSV0RMN2FZS0xYZWRIU0ZrdXBmZm9YM1hHQThM
+ ZVAwQ01PaEsKUUJaUkxIeXU0VjhvRG1lakFIcFoyVjlpY2E1emtmcnJWVXFvSzF1VjYvdHd3cEZG
+ WDErN0w1bk0ybDJDQWxvegpaVHFUZzNCdVdYd2VkYzZQbkpuU2xQSDNadFhqcGFJUWhXdU85TUlG
+ WFVtVFBlSkZ2WGxKeWRsdUxtMlQzanVqCldiVENGcEhyMXBrMGk3K1J4ZVRBcFY0RTk2S09DOXEw
+ ZGREOG1waTM0cnkyZjFmQ2RZekhQM0s4bW5od3BPWmkKaG1Xd3VWVDV3em5kVWVBRGNWYUY2UlhU
+ UENKSElLd24KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
+
+ # Base64 encoded content of the client's key file.
+ #
+ # How To:
+ # Assuming the client key file is named 'client.key', you can obtain the data using the following commands:
+ # cat client.key | base64 > client.key.b64
+ # You can then open the 'client.key.b64' file, copy its contents, and paste them into the configuration file
+ #
+ # It is required only if the server requires client authentication.
+ # It is mutually required with the client-cert field: if client-cert has a value, this field must be set as well.
+ #
+ # REQUIRED:
+ # + if insecure is set to false
+ # AND
+ # + the client-cert field is not empty
+ client-key: |-
+ LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBMHBOSWJFRFpa
+ MnZxUUVZTEFYcjJ4RlIwZzVZdVBMbi91M1lVb2xmbFRPeXZUMFJpCmlZb3I2Wk81cS9Kc2ZXR1di
+ cmUyZDJpNFR5b2NHNXB0Lzg5dmNsTklJcWtZSkMyMi9wM2ZHRHZTcGhSdldvVWsKNlAyUEI3cEI2
+ YldoMGROa3F3c3V4dTd0Q3pQKzdTU1hJbi9TeGVsUDM1dGI1TGp5ejcvTW0yd0lhWkc1ZDRYWApw
+ eENQMXI0QVRQZU5ieVMwc3R2K0FxY1ZkMUVwcHR4MVgvZXp0MHJtVzBqTkY5T2hDWGlqU0tzazJT
+ L0hLdjVvCm92WGpIa096SHBtZ0krdG9OWDVpSkFvVkhPcHdKTmVxQkNpbGYvVWZoTXRhMTNHYkhl
+ Y2UrRmpEUzFVOWdpK1cKU2YzcU1oSW01RHI1UXVZL2QzZXpsdG05Yy9JVUtVSWtjS3FKOHdJREFR
+ QUJBb0lCQUJFSVVzSlcySDd5RHFlVwpRc3VpMjVUejA5elU1L2FIZ1BUenp5VjJnSmloU0dqYitq
+ QnYyYTl5QUlHMUFTdC9Ha0RvWVR6MVhuc2d4OWMvCnZZZ0VpbG92L0ZTNVlyZUNieHZYUHpWaG1W
+ OVBwZFlua04yN3JMY09UTWlQcFlBb1hpc3JvMlA1N1hpTGd5SkIKWkd3bzlLNkhlYXQza0k1R20z
+ Vk1hVXRsQ0tVcE84cUwzcEZ4S1AwMVVwbGh6ZjhMbXJpTUJQMDlxdFFJejBydQpiR1l5eUdVdk9a
+ a0RKZFJycmlSWGJWK0RNMFlmbVpqU1Q4aEI0UDlsOEhwMEZRNUp2TWVGREpzRFFaZjVBZnJmClFI
+ WE55SlFUeTNTeXJ1bGd5N0p4MGY1T2JpVWRMRWViQVRpN3VLR3Y5UEZRRUJmSzdFdE4vZ1ZibGsx
+ MzRzNUIKWEhkNXU1a0NnWUVBNDBVMjhONko4QXIwY2puYnNLUUJtOGhURWlJSjk3TEJPOU5kOTlJ
+ M1dJYklZVzIzVE5wVwo0M2R4K1JHelA4eVMzYzZhN00wbzR1dUl6TXFDSkV3cVNJUjAvVGZaWWdx
+ cGtwcFZPalp2VFdCUDFtSUlKUFpwCll1SFk0UVRJdkdhcVFNNnFWQXA4MW9YdXoxTmNmQWpTLzNJ
+ Z1BWdGVZeDNKd0pmNWVqenZQclVDZ1lFQTdUSEwKR3VCTWpqTWVhaWk1ZU1sU1BndkYxMHJISUs0
+ RzZCZUJDTFFXU2ViNmNOT2x2a1RaOTNqdlFiWko1L3JBTGNWNgpaTVdqbWY5Tkl0NWdDdyt2K2dM
+ Qm9BZXM3WEk2K2Rpdk1DYXE0dUFmWkhJWjBYbXpIOGx1a0o5ZUtyK2NyR2FzClNhWkdKRnlyQTZz
+ WGdOc1ZJUm85RkFsR3V1dGZnd2hSUmo1eFp3Y0NnWUVBZ241MWcyeGtDMTVlNlU5clkwdG8KV1Fo
+ M0dreE5LTnFNdFVzeUExL0N3NlB3WG5EZTlOUFJYQjV6WkszVEhHamNVMXVUL1MvM3NBUEpzcno4
+ YU5jSwoyRVNsMzljM2pHSE82QXlScnpFZVMzRm5waEwzMWpGZVpaYUVMdi9PT3M5QUpxSURqdW5P
+ c0dhS3JxU1F6KzlKCko3OWgzNWtjNHhCeGpaSTFmd2lKM3BrQ2dZRUFwUnBOMkExYy9IWlVxMnho
+ ZmRRVXJSK2d2TFZPV2s4SWU3RXcKbmhCTW0zQnR6dTlqcFVkanVVQ3l1YmpiUk9CanVQaUdzM0pt
+ NktDdTNxQ1BsZU43aUxrMmNlQWwzTG53bDB6ZQoxTk4xaTZxWjcxOEUzYXlxcEd1ZnpJZENFdHVC
+ Z1BlTzRVMGQ4ZDJYSkZ5SlphWVoxUXJnalB2UUFmZ29hWnIyCmg4Q2JTeTBDZ1lFQW1VQ3BqR0JW
+ MGNpVnlmUXNmOGdsclNOdWx6NzBiaVJWQzVSeno0dVJEMkhsYVM2eC8wc0IKQzltSUhpdWgwR0Zp
+ dEVFRlg4TzdlZ1ppNWJKMGFuQWYyakk1R1RnTjJOYzFpVlZnWldxcHh2aXpuckpKcENSYgpaejB0
+ M2thTkkyNjg0WTNxS2JxeG8ramRNK05hMG1qd2ErTEFOcEdCUDNwb2c0RHJ4eTNNSFdZPQotLS0t
+ LUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
+```
diff --git a/content/v2/cosidriver/installation/helm.md b/content/v2/cosidriver/installation/helm.md
new file mode 100644
index 0000000000..27ff87a921
--- /dev/null
+++ b/content/v2/cosidriver/installation/helm.md
@@ -0,0 +1,74 @@
+---
+title: "COSI Driver installation using Helm"
+linkTitle: "Using Helm"
+weight: 2
+Description: Installation of COSI Driver using Helm
+---
+
+The COSI Driver for Dell ObjectScale can be deployed by using the provided Helm v3 charts on a Kubernetes platform.
+
+The Helm chart installs the following components in a _Deployment_ in the specified namespace:
+- COSI Driver for ObjectScale
+
+> **Notational Conventions**
+>
+> The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" are to be interpreted as described in [RFC 2119](http://tools.ietf.org/html/rfc2119) (Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997).
+
+## Dependencies
+
+Installing the COSI Driver using Helm requires a few utilities to be installed on the system running the installation.
+
+{{
}}
+| Dependency | Usage |
+|------------|----------------------------------------------------------------------------------------------------------------------|
+| `kubectl` | Kubectl is used to validate that the Kubernetes system meets the requirements of the driver. |
+| `helm` | Helm v3 is used as the deployment tool for Charts. Go [here](https://helm.sh/docs/intro/install/) to install Helm 3. |
+{{
}}
+
+> ℹ️ **NOTE:**
+> To use these tools, a valid `KUBECONFIG` is required. Ensure that either a valid configuration is in the default location, or that the `KUBECONFIG` environment variable points to a valid configuration before using these tools.
+
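+For example, a quick way to check that the tools can reach the cluster (the path shown is only the conventional default kubeconfig location):
+
+```sh
+# Point the tools at a valid kubeconfig and verify connectivity
+export KUBECONFIG="$HOME/.kube/config"
+kubectl cluster-info
+```
+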
+## Prerequisites
+
+- Install Kubernetes (see [supported versions](../../../cosidriver/#features-and-capabilities))
+
+## Install the Driver
+
+**Steps**
+1. Run `git clone -b main https://github.com/dell/helm-charts.git` to clone the git repository.
+2. Ensure that you have created the namespace where you want to install the driver. You can run `kubectl create namespace dell-cosi` to create a new one. The use of _dell-cosi_ as the namespace is just an example. You can choose any name for the namespace.
+3. Copy _charts/cosi/values.yaml_ to a new file named _my-cosi-values.yaml_, to customize settings for installation.
+4. Create a new file called _my-cosi-configuration.yaml_, and copy into it the settings available on the [Configuration File](./configuration_file.md) page.
+5. Edit *my-cosi-values.yaml* to set the following parameters for your installation:
+ The following table lists the primary configurable parameters of the COSI driver Helm chart and their default values. More detailed information can be found in the [`values.yaml`](https://github.com/dell/helm-charts/blob/master/charts/cosi/values.yaml) file in this repository.
+
+{{
}}
+ | Parameter | Description | Required | Default |
+ |------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------:|--------------------------------------------------------------------------------|
+ | provisioner.logLevel | The logging level for the COSI driver provisioner. | yes | `4` |
+ | provisioner.logFormat | The logging format for the COSI driver provisioner. | yes | `"text"` |
+ | provisioner.image.repository | COSI driver provisioner container image repository. | yes | `"docker.io/dell/cosi"` |
+ | provisioner.image.tag | COSI driver provisioner container image tag. | yes | `"v0.1.0"` |
+ | provisioner.image.pullPolicy | COSI driver provisioner container image pull policy. Maps 1-to-1 with [Kubernetes image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). | yes | `"IfNotPresent"` |
+ | sidecar.verbosity | The logging verbosity for the COSI driver sidecar, higher values are more verbose, possible values are integers from _-2,147,483,648_ to _2,147,483,647_. Generally the range used is between -4 and 12. However, there may be cases where numbers outside that range might provide more information. For additional information, refer to the [COSI sidecar documentation](https://github.com/kubernetes-sigs/container-object-storage-interface-provisioner-sidecar). | yes | `5` |
+ | sidecar.image.repository | COSI driver sidecar container image repository. | yes | `"gcr.io/k8s-staging-sig-storage/objectstorage-sidecar/objectstorage-sidecar"` |
+ | sidecar.image.tag | COSI driver sidecar container image tag. | yes | `"v20230130-v0.1.0-24-gc0cf995"` |
+ | sidecar.image.pullPolicy | COSI driver sidecar container image pull policy. Maps 1-to-1 with [Kubernetes image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). | yes | `"IfNotPresent"` |
+ | configuration.create | Specifies whether a secret with the driver configuration should be created. If set to false, you must set the `configuration.secretName` field to an existing configuration secret name. | yes | `true` |
+ | configuration.secretName | Name can be used to specify an existing secret name to use for the driver configuration or override the generated name. | no | `"cosi-config"` |
+ | configuration.data | Data should be provided when installing the chart; it is used to create the Secret with the driver configuration. `configuration.create` must be set to `true` for this to work. | no | `""` |
+{{
}}
+
+> ℹ️ **NOTE:**
+> - Whenever the *configuration.secretName* or *configuration.data* parameter changes in *my-cosi-values.yaml*, the user needs to reinstall the driver.
+
+6. Install the driver by running the following command (assuming that the current working directory is _charts_, and that _my-cosi-values.yaml_ and _my-cosi-configuration.yaml_ are also present in the _charts_ directory).
+
+```sh
+helm install dell-cosi ./cosi --namespace=dell-cosi --values ./my-cosi-values.yaml --set-file configuration.data=./my-cosi-configuration.yaml
+```
+
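+As a quick sanity check after the install, you can confirm that the release is deployed and the driver pod is running (namespace and release name taken from the example above):
+
+```sh
+helm list --namespace dell-cosi
+kubectl get pods --namespace dell-cosi
+```
+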
+## Bucket Classes, Bucket Access Classes
+
+The COSI driver for Dell ObjectScale version 1.2 does not create any _Bucket Classes_ or _Bucket Access Classes_ as part of the driver installation. Sample class manifests are available at `samples/bucketclass/objectscale.yaml` and `samples/bucketaccessclass/objectscale.yaml`. Use these sample manifests to create _Bucket Classes_ and _Bucket Access Classes_ to provision storage. Remember to uncomment/update the manifests as per your requirements.
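+
+For example, once the sample manifests have been edited, they can be applied with kubectl (paths are relative to the driver repository, as referenced above):
+
+```sh
+kubectl create -f samples/bucketclass/objectscale.yaml
+kubectl create -f samples/bucketaccessclass/objectscale.yaml
+```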
diff --git a/content/v2/csidriver/_index.md b/content/v2/csidriver/_index.md
index 3138a459bb..fd1fc283d7 100644
--- a/content/v2/csidriver/_index.md
+++ b/content/v2/csidriver/_index.md
@@ -6,56 +6,64 @@ description: About Dell Technologies (Dell) CSI Drivers
weight: 3
---
-The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes-csi.github.io/docs/) (CSI spec v1.5) enabled Container Orchestrator (CO) and Dell Storage Arrays. It is a plug-in that is installed into Kubernetes to provide persistent storage using Dell storage system.
+The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes-csi.github.io/docs/) (CSI spec v1.5) enabled Container Orchestrator (CO) and Dell Storage Arrays. It is a plug-in that is installed into Kubernetes to provide persistent storage using the Dell storage system.
![CSI Architecture](Architecture_Diagram.png)
## Features and capabilities
### Supported Operating Systems/Container Orchestrator Platforms
+
{{
}}
+
+>Note: To connect to a PowerFlex 4.5 array, the SDC image will need to be changed to dellemc/sdc:4.5.
+>- If using helm to install, you will need to make this change in your values.yaml file. See [helm install documentation](https://dell.github.io/csm-docs/docs/csidriver/installation/helm/powerflex/) for details.
+>- If using CSM-Operator to install, you will need to make this change in your samples file. See [operator install documentation](https://dell.github.io/csm-docs/docs/deployment/csmoperator/drivers/powerflex/) for details.
+
### Backend Storage Details
{{
}}
diff --git a/content/v2/csidriver/features/powerflex.md b/content/v2/csidriver/features/powerflex.md
index bdbebfa149..d64e298051 100644
--- a/content/v2/csidriver/features/powerflex.md
+++ b/content/v2/csidriver/features/powerflex.md
@@ -469,7 +469,7 @@ Dynamic array configuration change detection is only used for properties of an e
To add a new array to the secret, or to alter an array's mdm field, you must run `csi-install.sh` with `--upgrade` option to update the MDM key in secret and restart the node pods.
```bash
cd /dell-csi-helm-installer
-./csi-install.sh --upgrade --namespace vxflexos --values ../helm/csi-vxflexos/values.yaml
+./csi-install.sh --namespace vxflexos --values ./myvalues.yaml --upgrade
kubectl delete pods --all -n vxflexos
```
@@ -750,3 +750,170 @@ node:
> NOTE: Currently, the CSI-PowerFlex driver only supports GUID for the restricted SDC mode.
If SDC approval is denied, then provisioning of the volume will not be attempted and an appropriate error message is reported in the logs/events so the user is informed.
+
+## Volume Limit
+
+The CSI Driver for Dell PowerFlex allows users to specify the maximum number of PowerFlex volumes that can be used on a node.
+
+The user can set the volume limit for a node by creating the node label `max-vxflexos-volumes-per-node` and specifying the volume limit for that node:
+ `kubectl label node <node_name> max-vxflexos-volumes-per-node=<volume_limit>`
+
+The user can also set the volume limit for all the nodes in the cluster by specifying the same value for the `maxVxflexosVolumesPerNode` attribute in the values.yaml file.
+
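+A minimal example, assuming a worker node named `worker-1` and a limit of 20 volumes (both values are placeholders):
+
+```bash
+# Label the node so the driver publishes at most 20 PowerFlex volumes to it
+kubectl label node worker-1 max-vxflexos-volumes-per-node=20
+```
+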
+>**NOTE:** To reflect the changes after setting the value either via the node label or in the values.yaml file, the user has to bounce the driver controller and node pods using the command `kubectl get pods -n vxflexos --no-headers=true | awk '/vxflexos-/{print $1}'| xargs kubectl delete -n vxflexos pod`.
If the value is set both by the node label and the values.yaml file, the node label value takes precedence; to fall back to the values.yaml value, the user has to remove the node label.
The default value of `maxVxflexosVolumesPerNode` is 0.
If `maxVxflexosVolumesPerNode` is set to zero, then the Container Orchestrator decides how many volumes of this type can be published by the controller to the node.
The volume limit specified in the `maxVxflexosVolumesPerNode` attribute applies to all the nodes in the cluster for which the node label `max-vxflexos-volumes-per-node` is not set.
+
+## NFS volume support
+Starting with version 2.8, the CSI driver for PowerFlex supports NFS volumes on PowerFlex storage systems version 4.0.x.
+
+The CSI driver supports the following operations for NFS volumes:
+
+* Creation and deletion of an NFS volume with RWO/RWX/ROX access modes.
+* Support for tree quotas during volume creation.
+* Expansion of the size of an NFS volume.
+* Creation and deletion of snapshots of an NFS volume while retaining file permissions.
+* Creation of an NFS volume from a snapshot.
+
+To enable NFS volume operations in the CSI driver, a few new keys have been introduced which need to be set before performing NFS volume operations.
+* `nasName`: defines the NAS server name that should be used for NFS volumes.
+* `enableQuota`: when enabled, sets a quota limit for a newly provisioned NFS volume.
+
+> NOTE:
+> * `nasName`
+>   * nasName is a mandatory parameter and has to be provided in the secret yaml; otherwise it is an error state and will be captured in the driver logs.
+>   * nasName can be given at the storage class level as well.
+>   * If specified in both the secret and the storage class, precedence is given to the storage class value.
+>   * If nasName is not given in the secret, regardless of whether it is specified in the storage class, it is an error state and will be captured in the driver logs.
+>   * If the PowerFlex storage system v4.0.x is configured with only block capabilities, the user is required to give the default value for nasName as "none".
+
+The user has to update the `secret.yaml`, `values.yaml` and `storageclass-nfs.yaml` with the above keys, as shown below:
+
+[`samples/secret.yaml`](https://github.com/dell/csi-powerflex/blob/main/samples/secret.yaml)
+```yaml
+- username: "admin"
+ password: "Password123"
+ systemID: "2b11bb111111bb1b"
+ endpoint: "https://127.0.0.2"
+ skipCertificateValidation: true
+ isDefault: true
+ mdm: "10.0.0.3,10.0.0.4"
+ nasName: "nas-server"
+```
+
+[`samples/storageclass/storageclass-nfs.yaml`](https://github.com/dell/csi-powerflex/blob/main/samples/storageclass/storageclass-nfs.yaml)
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: vxflexos-nfs
+provisioner: csi-vxflexos.dellemc.com
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+parameters:
+ storagepool: "pool2" # Insert Storage pool
+ systemID: # Insert System ID
+ csi.storage.k8s.io/fstype: nfs
+ nasName: "nas-server"
+# path: /csi
+# softLimit: "80"
+# gracePeriod: "86400"
+volumeBindingMode: WaitForFirstConsumer
+allowedTopologies:
+- matchLabelExpressions:
+ - key: csi-vxflexos.dellemc.com/<systemID>-nfs # Insert System ID
+ values:
+ - "true"
+```
+
+[`helm/csi-vxflexos/values.yaml`](https://github.com/dell/csi-powerflex/blob/main/helm/csi-vxflexos/values.yaml)
+```yaml
+...
+enableQuota: false
+...
+```
+
+## Usage of Quotas to Limit Storage Consumption for NFS volumes
+Starting with version 2.8, the CSI driver for PowerFlex supports enabling tree quotas to limit capacity for NFS volumes. To use the quota feature, the user can set the boolean value `enableQuota` in values.yaml.
+
+To enable quota for NFS volumes, make the following edits to [values.yaml](https://github.com/dell/csi-powerflex/blob/main/helm/csi-vxflexos/values.yaml) file:
+```yaml
+...
+...
+# enableQuota: a boolean that, when enabled, will set quota limit for a newly provisioned NFS volume.
+# Allowed values:
+# true: set quota for volume
+# false: do not set quota for volume
+# Optional: true
+# Default value: none
+enableQuota: true
+...
+...
+```
+
+For example, suppose the user creates a PVC with 3Gi of storage and quotas have already been enabled in the PowerFlex system for the specified volume (a matching PVC sketch follows the two scenarios below).
+
+When `enableQuota` is set to `true`
+
+* The driver sets the hard limit of the PVC to 3Gi.
+* The user adds 2Gi of data to the PVC (by logging into the pod). It works as expected.
+* The user tries to add 2Gi more data.
+* The driver doesn't allow the user to add more data, as the total data would be 4Gi and the PVC limit is 3Gi.
+* The user can expand the volume from 3Gi to 6Gi. The driver allows it and sets the hard limit of the PVC to 6Gi.
+* The user retries adding 2Gi more data (which previously errored out).
+* The driver accepts the data.
+
+When `enableQuota` is set to `false`
+
+* The driver doesn't set any hard limit on the PVC created.
+* The user adds 2Gi of data to the PVC, which has a limit of 3Gi. It works as expected.
+* The user tries to add 2Gi more data. Now the total size of the data is 4Gi.
+* The driver allows the user to add more data irrespective of the initial PVC size (since no quota is set against this PVC).
+* The user can expand the volume from the initial size of 3Gi to 4Gi or more. The driver allows it.
+
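+A PVC matching the scenario above might look like the following sketch (the claim name is illustrative; the storage class is the `vxflexos-nfs` example from this page):
+
+```bash
+cat <<EOF | kubectl create -f -
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: quota-demo-pvc
+spec:
+  accessModes:
+    - ReadWriteMany
+  resources:
+    requests:
+      storage: 3Gi
+  storageClassName: vxflexos-nfs
+EOF
+```
+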
+If the enableQuota feature is set, the user can also set other tree quota parameters, such as the soft limit, soft grace period, and path, using the storage class yaml file.
+
+* `path`: relative path to the root of the associated NFS volume.
+* `softLimit`: soft limit set on the quota, specified as a percentage of the PVC size.
+* `gracePeriod`: grace period of the quota in seconds; must be specified along with softLimit. The soft limit can be exceeded until the grace period expires.
+
+> NOTE:
+> * `hardLimit` is set to the same size as the PVC.
+> * When a volume with quota enabled is expanded, the hardLimit and softLimit are recalculated by the driver with respect to the new PVC size.
+> * `softLimit` cannot be set to an unlimited value (0); otherwise it would become greater than the hardLimit (PVC size).
+> * `softLimit` should be less than 100%, since the hardLimit will be set to 100% (PVC size) internally by the driver.
+
+### Storage Class Example with Quota Limit Parameters
+[`samples/storageclass/storageclass-nfs.yaml`](https://github.com/dell/csi-powerflex/blob/main/samples/storageclass/storageclass-nfs.yaml)
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: vxflexos-nfs
+provisioner: csi-vxflexos.dellemc.com
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+parameters:
+ storagepool: "pool2" # Insert Storage pool
+ systemID: # Insert System ID
+ csi.storage.k8s.io/fstype: nfs
+ nasName: "nas-server"
+ path: /csi
+ softLimit: "80"
+ gracePeriod: "86400"
+volumeBindingMode: WaitForFirstConsumer
+allowedTopologies:
+ - matchLabelExpressions:
+ - key: csi-vxflexos.dellemc.com/<systemID>-nfs # Insert System ID
+ values:
+ - "true"
+```
+
+## Storage Capacity Tracking
+CSI-PowerFlex driver version 2.8.0 and above supports Storage Capacity Tracking.
+
+This feature helps the scheduler to make more informed choices about where to schedule pods which depend on unbound volumes with late binding (aka "wait for first consumer"). Pods will be scheduled on a node (satisfying the topology constraints) only if the requested capacity is available on the storage array.
+If such a node is not available, the pods stay in the Pending state and are not scheduled.
+
+Without storage capacity tracking, pods get scheduled on a node satisfying the topology constraints. If the required capacity is not available, volume attachment to the pods fails, and pods remain in ContainerCreating state. Storage capacity tracking eliminates unnecessary scheduling of pods when there is insufficient capacity.
+
+The attribute `storageCapacity.enabled` in `values.yaml` can be used to enable/disable the feature during driver installation using Helm. It is set to true by default. To configure how often the driver checks for changed capacity, set the `storageCapacity.pollInterval` attribute. If the driver is installed via the operator, this interval can be configured in the sample file provided [here](https://github.com/dell/csm-operator/blob/main/samples/storage_csm_powerflex_v280.yaml) by editing the `--capacity-poll-interval` argument present in the provisioner sidecar.
\ No newline at end of file
diff --git a/content/v2/csidriver/features/powermax.md b/content/v2/csidriver/features/powermax.md
index 6c96638475..bf3f94fe07 100644
--- a/content/v2/csidriver/features/powermax.md
+++ b/content/v2/csidriver/features/powermax.md
@@ -32,6 +32,8 @@ snapshot:
>Note: From v1.7, the CSI PowerMax driver installation process will no longer create VolumeSnapshotClass.
> If you want to create VolumeSnapshots, then create a VolumeSnapshotClass using the sample provided in the _csi-powermax/samples/volumesnapshotclass_ folder
+>Note: Snapshots for File in PowerMax are currently not supported.
+
### Creating Volume Snapshots
The following is a sample manifest for creating a Volume Snapshot using the **v1** snapshot APIs:
```yaml
@@ -611,4 +613,33 @@ vSphere:
>Note: Replication is not supported with this feature.
>Limitations of RDM can be referred [here.](https://configmax.esp.vmware.com/home)
>Supported number of RDM Volumes per VM is 60 as per the limitations.
->RDMs should not be added/removed manually from vCenter on any of the cluster VMs.
+>RDMs should not be added/removed manually from vCenter on any of the cluster VMs.
+
+## Storage Capacity Tracking
+
+CSI PowerMax driver version 2.8.0 and above supports Storage Capacity Tracking.
+
+This feature helps the scheduler to make more informed choices about where to start pods that depend on unbound volumes with late binding (aka “wait for first consumer”). Nodes that satisfy the topology constraints and have the requested capacity available on the storage array will be available for scheduling the pods; otherwise, the pods stay in the Pending state. The external-provisioner makes one GetCapacity() call per storage class present on the cluster to get the AvailableCapacity for the array specified in the storage class that matches the array mentioned during driver deployment.
+
+Without storage capacity tracking, pods get scheduled on a node satisfying the topology constraints. If the required capacity is not available, volume attachment to the pods fails, and pods remain in the ContainerCreating state. Storage capacity tracking eliminates unnecessary scheduling of pods when there is insufficient capacity.
+
+Storage capacity can be tracked by setting the attribute `storageCapacity.enabled` to true in values.yaml (set to true by default) during driver installation. To configure how often the driver checks for changed capacity, set the `storageCapacity.pollInterval` attribute (set to 5m by default). If the driver is installed via the operator, this interval can be configured in the sample file provided [here](https://github.com/dell/csm-operator/blob/main/samples) by editing the `--capacity-poll-interval` argument present in the provisioner sidecar.
+
+>Note: This feature requires Kubernetes v1.24 and above and will be automatically disabled in lower versions of Kubernetes.
+
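+Once the feature is enabled, the capacity information published by the driver can be inspected through the standard Kubernetes API objects, for example:
+
+```bash
+# List the CSIStorageCapacity objects created by the external-provisioner
+kubectl get csistoragecapacities -A
+```
+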
+
+## Volume Limits
+
+The CSI Driver for Dell PowerMax allows users to specify the maximum number of PowerMax volumes that can be created on a node.
+
+The user can set the volume limit for a node by creating the node label `max-powermax-volumes-per-node` and specifying the volume limit for that node:
+ `kubectl label node <node_name> max-powermax-volumes-per-node=<volume_limit>`
+
+The user can also set the volume limit for all the nodes in the cluster by specifying the same value for the `maxPowerMaxVolumesPerNode` attribute in values.yaml. If the driver is installed via the operator, this attribute can be modified in the sample file provided [here](https://github.com/dell/csm-operator/blob/main/samples) by editing the `X_CSI_MAX_VOLUMES_PER_NODE` parameter.
+
+This feature is also supported for limiting volume provisioning on Kubernetes clusters running on vSphere (VMware hypervisor) via the RDM mechanism. The user can set `vSphere.enabled` to true and set volume limits to positive values less than or equal to 60, either via node labels or in the values.yaml file.
+
+
+>**NOTE:** The default value of `maxPowerMaxVolumesPerNode` is 0. If `maxPowerMaxVolumesPerNode` is set to zero, then the CO shall decide how many volumes of this type can be published by the controller to the node.
The volume limit specified in the `maxPowerMaxVolumesPerNode` attribute applies to all the nodes in the cluster for which the node label `max-powermax-volumes-per-node` is not set.
 The supported maximum number of RDM volumes per VM is 60, as per the limitations. If the value is set both by the node label and the values.yaml file, the node label value takes precedence; to fall back to the values.yaml value, the user has to remove the node label.
+
diff --git a/content/v2/csidriver/features/powerscale.md b/content/v2/csidriver/features/powerscale.md
index 70d581efed..31b6198e81 100644
--- a/content/v2/csidriver/features/powerscale.md
+++ b/content/v2/csidriver/features/powerscale.md
@@ -21,7 +21,7 @@ You can use existing volumes from the PowerScale array as Persistent Volumes in
1. Open your volume in OneFS, and take a note of the volume-id.
2. Create PersistentVolume and use this volume-id as a volumeHandle in the manifest. Modify other parameters according to your needs.
-3. In the following example, the PowerScale cluster accessZone is assumed as 'System', storage class as 'isilon', cluster name as 'pscale-cluster' and volume's internal name as 'isilonvol'. The volume-handle should be in the format of =_=_==_=_==_=_=
+3. In the following example, the PowerScale cluster accessZone is assumed as 'System', storage class as 'isilon', cluster name as 'pscale-cluster' and volume's internal name as 'isilonvol'. The volume-handle should be in the format of `<volume name>=_=_=<export ID>=_=_=<access zone>=_=_=<cluster name>`
4. If Quotas are enabled in the driver, it is required to add the Quota ID to the description of the NFS export in this format:
`CSI_QUOTA_ID:sC-kAAEAAAAAAAAAAAAAQEpVAAAAAAAA`
5. Quota ID can be identified by querying the PowerScale system.
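+
+For example, the quota ID can be looked up with the OneFS CLI (a sketch, assuming shell access to the PowerScale cluster; the `--format` option availability may vary by OneFS version):
+
+```bash
+# List quotas, including their IDs and paths, as JSON
+isi quota quotas list --format=json
+```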
diff --git a/content/v2/csidriver/features/powerstore.md b/content/v2/csidriver/features/powerstore.md
index 5a42df9aa3..f0ec008a05 100644
--- a/content/v2/csidriver/features/powerstore.md
+++ b/content/v2/csidriver/features/powerstore.md
@@ -425,6 +425,17 @@ kubectl get nodes --show-labels
For any additional information about the topology, see the [Kubernetes Topology documentation](https://kubernetes-csi.github.io/docs/topology.html).
+## Volume Limits
+
+The CSI Driver for Dell PowerStore allows users to specify the maximum number of PowerStore volumes that can be used on a node.
+
+The user can set the volume limit for a node by creating the node label `max-powerstore-volumes-per-node` and specifying the volume limit for that node:
+ `kubectl label node <node_name> max-powerstore-volumes-per-node=<volume_limit>`
+
+The user can also set the volume limit for all the nodes in the cluster by specifying the same value for the `maxPowerstoreVolumesPerNode` attribute in values.yaml during Helm installation. In the case of driver installed via the operator, this attribute can be modified in the sample yaml file for PowerStore, which is located at https://github.com/dell/csm-operator/blob/main/samples/ by editing the `X_CSI_POWERSTORE_MAX_VOLUMES_PER_NODE` parameter.
+
+>**NOTE:** The default value of `maxPowerstoreVolumesPerNode` is 0. If `maxPowerstoreVolumesPerNode` is set to zero, then the CO shall decide how many volumes of this type can be published by the controller to the node.
The volume limit specified in the `maxPowerstoreVolumesPerNode` attribute is applicable to all the nodes in the cluster for which the node label `max-powerstore-volumes-per-node` is not set.
+
## Reuse PowerStore hostname
@@ -709,9 +720,9 @@ metadata:
name: pvc1
namespace: default
labels:
- description: DB-volume
- appliance_id: A1
- volume_group_id: f5f9dbbd-d12f-463e-becb-2e6d0a85405e
+ csi.dell.com/description: DB-volume
+ csi.dell.com/appliance_id: A1
+ csi.dell.com/volume_group_id: f5f9dbbd-d12f-463e-becb-2e6d0a85405e
spec:
accessModes:
- ReadWriteOnce
@@ -728,7 +739,7 @@ This is the list of all the attributes supported by PowerStore CSI driver:
| Block Volume | NFS Volume |
| --- | --- |
-| description appliance_id volume_group_id protection_policy_id performance_policy_id app_type app_type_other
| csi.dell.com/description csi.dell.com/config_type csi.dell.com/access_policy csi.dell.com/locking_policy csi.dell.com/folder_rename_policy csi.dell.com/is_async_mtime_enabled csi.dell.com/protection_policy_id csi.dell.com/file_events_publishing_mode csi.dell.com/host_io_size csi.dell.com/flr_attributes.flr_create.mode csi.dell.com/flr_attributes.flr_create.default_retention csi.dell.com/flr_attributes.flr_create.maximum_retention csi.dell.com/flr_attributes.flr_create.minimum_retention |
@@ -738,6 +749,8 @@ This is the list of all the attributes supported by PowerStore CSI driver:
>Configurable Volume Attributes feature is supported with Helm.
+>The prefix `csi.dell.com/` has been added to the attributes starting with CSI PowerStore driver version 2.8.0.
+
## Storage Capacity Tracking
CSI PowerStore driver version 2.5.0 and above supports Storage Capacity Tracking.
@@ -750,4 +763,4 @@ The attribute `storageCapacity.enabled` in `my-powerstore-settings.yaml` can be
To configure how often driver checks for changed capacity set `storageCapacity.pollInterval` attribute. In case of driver installed via operator, this interval can be configured in the sample files provided [here](https://github.com/dell/dell-csi-operator/tree/master/samples) by editing the `capacity-poll-interval` argument present in the `provisioner` sidecar.
**Note:**
->This feature requires kubernetes v1.24 and above and will be automatically disabled in lower version of kubernetes.
\ No newline at end of file
+>This feature requires kubernetes v1.24 and above and will be automatically disabled in lower version of kubernetes.
diff --git a/content/v2/csidriver/features/unity.md b/content/v2/csidriver/features/unity.md
index 2be1899c6d..bf06822b52 100644
--- a/content/v2/csidriver/features/unity.md
+++ b/content/v2/csidriver/features/unity.md
@@ -492,6 +492,16 @@ This feature:
```
By default this is disabled in CSI Driver for Unity XT. You will have to set the `healthMonitor.enable` flag for controller, node or for both in `values.yaml` to get the volume stats and volume condition.
+## Storage Capacity Tracking
+CSI for Unity XT driver version 2.8.0 and above supports Storage Capacity Tracking.
+
+This feature helps the scheduler to make more informed choices about where to schedule pods which depend on unbound volumes with late binding (aka "wait for first consumer"). Pods will be scheduled on a node (satisfying the topology constraints) only if the requested capacity is available on the storage array.
+If such a node is not available, the pods stay in the Pending state and are not scheduled.
+
+Without storage capacity tracking, pods get scheduled on a node satisfying the topology constraints. If the required capacity is not available, volume attachment to the pods fails, and pods remain in ContainerCreating state. Storage capacity tracking eliminates unnecessary scheduling of pods when there is insufficient capacity.
+
+The attribute `storageCapacity.enabled` in `values.yaml` can be used to enable/disable the feature during driver installation using Helm. It is set to true by default. To configure how often the driver checks for changed capacity, set the `storageCapacity.pollInterval` attribute. If the driver is installed via the operator, this interval can be configured in the sample file provided [here](https://github.com/dell/csm-operator/blob/main/samples/storage_csm_unity_v280.yaml) by editing the `--capacity-poll-interval` argument present in the provisioner sidecar.
+
## Dynamic Logging Configuration
### Helm based installation
diff --git a/content/v2/csidriver/installation/helm/isilon.md b/content/v2/csidriver/installation/helm/isilon.md
index 8b9539c35d..db951501f2 100644
--- a/content/v2/csidriver/installation/helm/isilon.md
+++ b/content/v2/csidriver/installation/helm/isilon.md
@@ -82,6 +82,12 @@ node:
enabled: false
```
+*NOTE*: To enable this feature on an existing driver, or while upgrading the driver version, follow either of these ways (see the example after this list):
+
+1. Reinstall the driver
+2. Upgrade the driver with the `--upgrade` option
+
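+For example, the upgrade path reuses the installer script with the `--upgrade` flag (values file name taken from the install steps below):
+
+```bash
+cd dell-csi-helm-installer && ./csi-install.sh --namespace isilon --values my-isilon-settings.yaml --upgrade
+```
+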
### (Optional) Replication feature Requirements
Applicable only if you decided to enable the Replication feature in `values.yaml`
@@ -101,17 +107,17 @@ CRDs should be configured during replication prepare stage with repctl as descri
**Steps**
-1. Run `git clone -b v2.7.0 https://github.com/dell/csi-powerscale.git` to clone the git repository.
+1. Run `git clone -b v2.8.0 https://github.com/dell/csi-powerscale.git` to clone the git repository.
2. Ensure that you have created the namespace where you want to install the driver. You can run `kubectl create namespace isilon` to create a new one. The use of "isilon" as the namespace is just an example. You can choose any name for the namespace.
3. Collect information from the PowerScale Systems like IP address, IsiPath, username, and password. Make a note of the value for these parameters as they must be entered in the *secret.yaml*.
-4. Copy *the helm/csi-isilon/values.yaml* into a new location with name say *my-isilon-settings.yaml*, to customize settings for installation.
+4. Download the default values file with `wget -O my-isilon-settings.yaml https://raw.githubusercontent.com/dell/helm-charts/csi-isilon-2.8.0/charts/csi-isilon/values.yaml`, run from the `dell-csi-helm-installer` directory, to customize settings for installation.
5. Edit *my-isilon-settings.yaml* to set the following parameters for your installation:
The following table lists the primary configurable parameters of the PowerScale driver Helm chart and their default values. More detailed information can be
- found in the [`values.yaml`](https://github.com/dell/csi-powerscale/blob/master/helm/csi-isilon/values.yaml) file in this repository.
+ found in the [`values.yaml`](https://github.com/dell/helm-charts/blob/csi-isilon-2.8.0/charts/csi-isilon/values.yaml) file in this repository.
| Parameter | Description | Required | Default |
| --------- | ----------- | -------- |-------- |
-| driverRepository | Set to give the repository containing the driver image (used as part of the image name). | Yes | dellemc |
+ | driverRepository | Set to give the repository containing the driver image (used as part of the image name). | Yes | dellemc |
| logLevel | CSI driver log level | No | "debug" |
| certSecretCount | Defines the number of certificate secrets, which the user is going to create for SSL authentication. (isilon-cert-0..isilon-cert-(n-1)); Minimum value should be 1.| Yes | 1 |
| [allowedNetworks](../../../features/powerscale/#support-custom-networks-for-nfs-io-traffic) | Defines the list of networks that can be used for NFS I/O traffic, CIDR format must be used. | No | [ ] |
@@ -164,17 +170,16 @@ CRDs should be configured during replication prepare stage with repctl as descri
| **encryption** | [Encryption](../../../../secure/encryption/deployment) is an optional feature to apply encryption to CSI volumes. | - | - |
| enabled | A boolean that enables/disables Encryption feature. | No | false |
| image | Encryption driver image name. | No | "dellemc/csm-encryption:v0.3.0" |
-
+
*NOTE:*
- ControllerCount parameter value must not exceed the number of nodes in the Kubernetes cluster. Otherwise, some of the controller pods remain in a "Pending" state till new nodes are available for scheduling. The installer exits with a WARNING on the same.
- Whenever the *certSecretCount* parameter changes in *my-isilon-setting.yaml* user needs to reinstall the driver.
- In order to enable authorization, there should be an authorization proxy server already installed.
- - If you are using a custom image, check the *version* and *driverRepository* fields in *my-isilon-setting.yaml* to make sure that they are pointing to the correct image repository and driver version. These two fields are spliced together to form the image name, as shown here: /csi-isilon:v
-
-
+ - If you are using a custom image, check the *version* and *driverRepository* fields in *my-isilon-setting.yaml* to make sure that they are pointing to the correct image repository and driver version. These two fields are spliced together to form the image name, as shown here: `<driverRepository>/csi-isilon:<version>`
+
6. Edit following parameters in samples/secret/secret.yaml file and update/add connection/authentication information for one or more PowerScale clusters.
-
+
| Parameter | Description | Required | Default |
| --------- | ----------- | -------- |-------- |
| clusterName | Logical name of PowerScale cluster against which volume CRUD operations are performed through this secret. | Yes | - |
@@ -206,24 +211,24 @@ CRDs should be configured during replication prepare stage with repctl as descri
Create isilon-creds secret using the following command:
`kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml`
-
+
*NOTE:*
- If any key/value is present in all *my-isilon-settings.yaml*, *secret*, and storageClass, then the values provided in storageClass parameters take precedence.
- The user has to validate the yaml syntax and array-related key/values while replacing or appending the isilon-creds secret. The driver will continue to use previous values in case of an error found in the yaml file.
- For the key isiIP/endpoint, the user can give either an IP address or an FQDN. Also, the user can prefix the value with 'https' (for example, https://192.168.1.1).
- The *isilon-creds* secret has a *mountEndpoint* parameter which should only be updated and used when [Authorization](../../../../authorization) is enabled.
-
+
7. Install OneFS CA certificates by following the instructions from the next section, if you want to validate the OneFS API server's certificates. If not, create an empty secret using the following command; an empty secret must be created for the successful installation of the CSI Driver for Dell PowerScale.
```bash
kubectl create -f empty-secret.yaml
```
This command will create a new secret called `isilon-certs-0` in isilon namespace.
-
-8. Install the driver using `csi-install.sh` bash script by running
+
+8. Install the driver using the `csi-install.sh` bash script and the default values file by running
```bash
- cd ../dell-csi-helm-installer && ./csi-install.sh --namespace isilon --values ../helm/my-isilon-settings.yaml
- ```
-(assuming that the current working directory is 'helm' and my-isilon-settings.yaml is also present under 'helm' directory)
+ cd dell-csi-helm-installer && wget -O my-isilon-settings.yaml https://raw.githubusercontent.com/dell/helm-charts/csi-isilon-2.8.0/charts/csi-isilon/values.yaml &&
+ ./csi-install.sh --namespace isilon --values my-isilon-settings.yaml
+ ```
## Certificate validation for OneFS REST API calls
@@ -269,7 +274,7 @@ kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.y
## Storage Classes
-The CSI driver for Dell PowerScale version 1.5 and later, `dell-csi-helm-installer` does not create any storage classes as part of the driver installation. A sample storage class manifest is available at `samples/storageclass/isilon.yaml`. Use this sample manifest to create a storageclass to provision storage; uncomment/ update the manifest as per the requirements.
+The CSI driver for Dell PowerScale version 1.5 and later, `dell-csi-helm-installer` does not create any storage classes as part of the driver installation. A sample storage class manifest is available at `samples/storageclass/isilon.yaml`. Use this sample manifest to create a storageclass to provision storage; uncomment/ update the manifest as per the requirements.
### What happens to my existing storage classes?
@@ -292,9 +297,9 @@ Deleting a storage class has no impact on a running Pod with mounted PVCs. You c
**Steps to create secondary storage class:**
-There are samples storage class yaml files available under `samples/storageclass`. These can be copied and modified as needed.
+There are samples storage class yaml files available under `samples/storageclass`. These can be copied and modified as needed.
-1. Copy the `storageclass.yaml` to `second_storageclass.yaml` ( This is just an example, you can rename to file you require. )
+1. Copy the `storageclass.yaml` to `second_storageclass.yaml` (This is just an example, you can rename to file you require.)
2. Edit the `second_storageclass.yaml` yaml file and update following parameters:
- Update the `name` parameter to the name you require
```yaml
@@ -308,14 +313,14 @@ There are samples storage class yaml files available under `samples/storageclass
password: "Password"
endpoint: "10.X.X.X"
endpointPort: "8080
- ```
+ ```
- Use same clusterName ↑ in the `second_storageclass.yaml`
```yaml
# Optional: true
ClusterName: "cluster2"
```
-- *Note*: These are two essential parameters that you need to change in the "second_storageclass.yaml" file and other parameters that you change as required.
-3. Save the `second_storageclass.yaml` file
+- *Note*: These are two essential parameters that you need to change in the "second_storageclass.yaml" file and other parameters that you change as required.
+3. Save the `second_storageclass.yaml` file
4. Create your 2nd storage class by using `kubectl`:
```bash
     kubectl create -f second_storageclass.yaml
@@ -352,6 +357,7 @@ Mount Re-tries handles below scenarios:
- No such file or directory (NFSv4)
*Sample*:
+
```
level=error clusterName=powerscale runid=10 msg="mount failed: exit status 32
mounting arguments: -t nfs -o rw XX.XX.XX.XX:/ifs/data/csi/k8s-ac7b91962d /var/lib/kubelet/pods/9f72096a-a7dc-4517-906c-20697f9d7375/volumes/kubernetes.io~csi/k8s-ac7b91962d/mount
diff --git a/content/v2/csidriver/installation/helm/powerflex.md b/content/v2/csidriver/installation/helm/powerflex.md
index ceb23e56f5..cd57eb886a 100644
--- a/content/v2/csidriver/installation/helm/powerflex.md
+++ b/content/v2/csidriver/installation/helm/powerflex.md
@@ -49,7 +49,7 @@ Verify that zero padding is enabled on the PowerFlex storage pools that will be
### Install PowerFlex Storage Data Client
The CSI Driver for PowerFlex requires you to have installed the PowerFlex Storage Data Client (SDC) on all Kubernetes nodes which run the node portion of the CSI driver.
-SDC could be installed automatically by CSI driver install on Kubernetes nodes with OS platform which support automatic SDC deployment; for Red Hat CoreOS (RHCOS), RHEL 7.9, RHEL 8.4, RHEL 8.6. On Kubernetes nodes with OS version not supported by automatic install, you must perform the Manual SDC Deployment steps [below](#manual-sdc-deployment).
+SDC could be installed automatically by CSI driver install on Kubernetes nodes with OS platform which support automatic SDC deployment; for Red Hat CoreOS (RHCOS), RHEL 7.9, RHEL 8.6. On Kubernetes nodes with OS version not supported by automatic install, you must perform the Manual SDC Deployment steps [below](#manual-sdc-deployment).
Refer to https://hub.docker.com/r/dellemc/sdc for supported OS versions.
*NOTE:* To install the CSI driver for PowerFlex with automated SDC deployment, you need the two packages below on the worker nodes.
@@ -70,6 +70,12 @@ For detailed PowerFlex installation procedure, see the [Dell PowerFlex Deploymen
- For Red Hat Enterprise Linux and CentOS, run `rpm -iv ./EMC-ScaleIO-sdc-*.x86_64.rpm`, where * is the SDC name corresponding to the PowerFlex installation version.
4. To add more MDM_IP for multi-array support, run `/opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip 10.xx.xx.xx.xx,10.xx.xx.xx`
+#### Installation Wizard prerequisite: secret update
+When the driver is installed using values generated by the installation wizard, the user needs to update the secret for the driver by patching the MDM keys, as follows:
+
+**Steps**
+* `echo -n '<MDM_IPs>' | base64`
+* `kubectl patch secret vxflexos-config -n vxflexos -p "{\"data\": { \"MDM\": \"<base64-encoded MDM IPs>\"}}"`
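+
+Putting the two steps together, with the example MDM IPs used elsewhere on this page:
+
+```bash
+# Encode the MDM IPs and patch them into the driver secret in one go
+# (-w0 disables line wrapping; available in GNU coreutils base64)
+MDM_B64=$(echo -n '10.0.0.3,10.0.0.4' | base64 -w0)
+kubectl patch secret vxflexos-config -n vxflexos -p "{\"data\": { \"MDM\": \"${MDM_B64}\"}}"
+```
+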
### (Optional) Volume Snapshot Requirements
For detailed snapshot setup procedure, [click here.](../../../../snapshots/#optional-volume-snapshot-requirements)
@@ -77,7 +83,7 @@ For detailed PowerFlex installation procedure, see the [Dell PowerFlex Deploymen
## Install the Driver
**Steps**
-1. Run `git clone -b v2.7.1 https://github.com/dell/csi-powerflex.git` to clone the git repository.
+1. Run `git clone -b v2.8.0 https://github.com/dell/csi-powerflex.git` to clone the git repository.
2. A namespace for the driver is expected prior to running the command below. If one is not created already, you can run `kubectl create namespace vxflexos` to create a new one.
Note that the namespace can be any user-defined name that follows the conventions for namespaces outlined by Kubernetes. In this example we assume that the namespace is 'vxflexos'
@@ -96,6 +102,7 @@ Note that the namespace can be any user-defined name that follows the convention
| skipCertificateValidation | Determines if the driver is going to validate certs while connecting to PowerFlex REST API interface. | true | true |
| isDefault | An array having isDefault=true is for backward compatibility. This parameter should occur once in the list. | false | false |
| mdm | mdm defines the MDM(s) that SDC should register with on start. This should be a list of MDM IP addresses or hostnames separated by comma. | true | - |
+ | nasName | nasName defines which NAS server should be used for NFS volumes. NFS volumes are supported on arrays with version 4.0.x. | false | none |
Example: `samples/secret.yaml`
@@ -107,6 +114,17 @@ Note that the namespace can be any user-defined name that follows the convention
skipCertificateValidation: true
isDefault: true
mdm: "10.0.0.3,10.0.0.4"
+```
+Example: `samples/secret.yaml` for PowerFlex storage system v4.0.x
+```yaml
+- username: "admin"
+ password: "Password123"
+ systemID: "2b11bb111111bb1b"
+ endpoint: "https://127.0.0.2"
+ skipCertificateValidation: true
+ isDefault: true
+ mdm: "10.0.0.3,10.0.0.4"
+ nasName : "nasServer"
```
*NOTE: To use multiple arrays, copy and paste section above for each array. Make sure isDefault is set to true for only one array.*
@@ -132,7 +150,7 @@ Use the below command to replace or update the secret:
- "insecure" parameter has been changed to "skipCertificateValidation" as insecure is deprecated and will be removed from use in config.yaml or secret.yaml in a future release. Users can continue to use any one of "insecure" or "skipCertificateValidation" for now. The driver would return an error if both parameters are used.
- Please note that log configuration parameters from v1.5 will no longer work in v2.0 and higher. Please refer to the [Dynamic Logging Configuration](../../../features/powerflex#dynamic-logging-configuration) section in Features for more information.
- If the user is using complex K8s version like "v1.21.3-mirantis-1", use this kubeVersion check in helm/csi-unity/Chart.yaml file.
- kubeVersion: ">= 1.21.0-0 < 1.28.0-0"
+ kubeVersion: ">= 1.21.0-0 < 1.29.0-0"
5. Default logging options are set during Helm install. To see possible configuration options, see the [Dynamic Logging Configuration](../../../features/powerflex#dynamic-logging-configuration) section in Features.
@@ -140,20 +158,22 @@ Use the below command to replace or update the secret:
6. If using automated SDC deployment:
- Check the SDC container image is the correct version for your version of PowerFlex.
-7. Copy the default values.yaml file
+7. Download the default values.yaml file
```bash
- cd helm && cp csi-vxflexos/values.yaml myvalues.yaml
- ```
+ cd dell-csi-helm-installer && wget -O myvalues.yaml https://github.com/dell/helm-charts/raw/csi-vxflexos-2.8.0/charts/csi-vxflexos/values.yaml
+ ```
+ >Note: To connect to a PowerFlex 4.5 array, edit the powerflexSdc parameter in your values.yaml file to use dellemc/sdc:4.5:
+ >`powerflexSdc: dellemc/sdc:4.5`
-8. If you are using a custom image, check the `version` and `driverRepository` fields in `myvalues.yaml` to make sure that they are pointing to the correct image repository and driver version. These two fields are spliced together to form the image name, as shown here: `/csi-vxflexos:v`
+8. If you are using a custom image, check the `version` and `driverRepository` fields in `myvalues.yaml` to make sure that they are pointing to the correct image repository and driver version. These two fields are spliced together to form the image name, as shown here: `<driverRepository>/csi-vxflexos:v<version>`
9. Look over all the other fields `myvalues.yaml` and fill in/adjust any as needed. All the fields are described here:
| Parameter | Description | Required | Default |
| ------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ------- |
-| version | Set to verify the values file version matches driver version and used to pull the image as part of the image name. | Yes | 2.7.1 |
+| version | Set to verify the values file version matches driver version and used to pull the image as part of the image name. | Yes | 2.8.0 |
| driverRepository | Set to give the repository containing the driver image (used as part of the image name). | Yes | dellemc |
-| powerflexSdc | Set to give the location of the SDC image used if automatic SDC deployment is being utilized. | Yes | dellemc/sdc:3.6.0.6 |
+| powerflexSdc | Set to give the location of the SDC image used if automatic SDC deployment is being utilized. | Yes | dellemc/sdc:3.6.1 |
| certSecretCount | Represents the number of certificate secrets, which the user is going to create for SSL authentication. | No | 0 |
| logLevel | CSI driver log level. Allowed values: "error", "warn"/"warning", "info", "debug". | Yes | "debug" |
| logFormat | CSI driver log format. Allowed values: "TEXT" or "JSON". | Yes | "TEXT" |
@@ -164,6 +184,7 @@ Use the below command to replace or update the secret:
| enablesnapshotcgdelete | A boolean that, when enabled, will delete all snapshots in a consistency group everytime a snap in the group is deleted. | Yes | false |
| enablelistvolumesnapshot | A boolean that, when enabled, will allow list volume operation to include snapshots (since creating a volume from a snap actually results in a new snap). It is recommended this be false unless instructed otherwise. | Yes | false |
| allowRWOMultiPodAccess | Setting allowRWOMultiPodAccess to "true" will allow multiple pods on the same node to access the same RWO volume. This behavior conflicts with the CSI specification version 1.3. NodePublishVolume description that requires an error to be returned in this case. However, some other CSI drivers support this behavior and some customers desire this behavior. Customers use this option at their own risk. | Yes | false |
+| enableQuota | A boolean that, when enabled, will set quota limit for a newly provisioned NFS volume. | No | false |
| **controller** | This section allows the configuration of controller-specific parameters. To maximize the number of available nodes for controller pods, see this section. For more details on the new controller pod configurations, see the [Features section](../../../features/powerflex#controller-ha) for Powerflex specifics. | - | - |
| volumeNamePrefix | Set so that volumes created by the driver have a default prefix. If one PowerFlex/VxFlex OS system is servicing several different Kubernetes installations or users, these prefixes help you distinguish them. | Yes | "k8s" |
| controllerCount | Set to deploy multiple controller instances. If the controller count is greater than the number of available nodes, excess pods remain in a pending state. It should be greater than 0. You can increase the number of available nodes by configuring the "controller" section in your values.yaml. For more details on the new controller pod configurations, see the [Features section](../../../features/powerflex#controller-ha) for Powerflex specifics. | Yes | 2 |
@@ -182,6 +203,9 @@ Use the below command to replace or update the secret:
| enabled | A boolean that enable/disable rename SDC feature. | No | false |
| prefix | Defines a string for the prefix of the SDC. | No | " " |
| approveSDC.enabled | A boolean that enable/disable SDC approval feature. | No | false |
+| **storageCapacity** | Enable/Disable storage capacity tracking | - | - |
+| enabled | A boolean that enables/disables storage capacity tracking feature. | Yes | true |
+| pollInterval | Configure how often the driver checks for changed capacity | No | 5m |
| **monitor** | This section allows the configuration of the SDC monitoring pod. | - | - |
| enabled | Set to enable the usage of the monitoring pod. | Yes | false |
| hostNetwork | Set whether the monitor pod should run on the host network or not. | Yes | true |
@@ -199,7 +223,7 @@ Use the below command to replace or update the secret:
| skipCertificateValidation | A boolean that enables/disables certificate validation of the csm-authorization proxy server. | No | true |
-10. Install the driver using `csi-install.sh` bash script by running `cd dell-csi-helm-installer && ./csi-install.sh --namespace vxflexos --values ../helm/myvalues.yaml`. You may modify the release name with the `--release` arg. If arg is not provided, release will be named `vxflexos` by default.
+10. Install the driver using `csi-install.sh` bash script by running `cd dell-csi-helm-installer && ./csi-install.sh --namespace vxflexos --values myvalues.yaml`. You may modify the release name with the `--release` arg. If arg is not provided, release will be named `vxflexos` by default.
Alternatively, to do a helm install solely with Helm charts (without shell scripts), refer to `helm/README.md`.
*NOTE:*
@@ -286,15 +310,16 @@ Upgrading from an older version of the driver: The storage classes will be delet
**Steps to create storage class:**
There are samples storage class yaml files available under `samples/storageclass`. These can be copied and modified as needed.
-1. Edit `storageclass.yaml` if you need ext4 filesystem and `storageclass-xfs.yaml` if you want xfs filesystem.
+1. Edit `storageclass.yaml` if you need an ext4 filesystem, `storageclass-xfs.yaml` if you want an xfs filesystem, and `storageclass-nfs.yaml` if you need an NFS filesystem.
2. Replace `` with the storage pool you have.
3. Replace `` with the system ID you have. Note there are two appearances in the file.
-4. Edit `storageclass.kubernetes.io/is-default-class` to true if you want to set it as default, otherwise false.
-5. Save the file and create it by using `kubectl create -f storageclass.yaml` or `kubectl create -f storageclass-xfs.yaml`
+4. Edit `storageclass.kubernetes.io/is-default-class` to true if you want to set it as default, otherwise false.
+5. If using `storageclass-nfs.yaml`, replace `"nas-server"` with the name of your NAS server.
+6. Save the file and create it by using `kubectl create -f storageclass.yaml` / `kubectl create -f storageclass-xfs.yaml` / `kubectl create -f storageclass-nfs.yaml`
*NOTE*:
- At least one storage class is required for one array.
-- If you uninstall the driver and reinstall it, you can still face errors if any update in the `values.yaml` file leads to an update of the storage class(es):
+- If you uninstall the driver and reinstall it, you can still face errors if any update in the `myvalues.yaml` file leads to an update of the storage class(es):
```
Error: cannot patch "" with kind StorageClass: StorageClass.storage.k8s.io "" is invalid: parameters: Forbidden: updates to parameters are forbidden
diff --git a/content/v2/csidriver/installation/helm/powermax.md b/content/v2/csidriver/installation/helm/powermax.md
index 23b7bf4637..26970fadee 100644
--- a/content/v2/csidriver/installation/helm/powermax.md
+++ b/content/v2/csidriver/installation/helm/powermax.md
@@ -31,6 +31,7 @@ The following requirements must be met before installing CSI Driver for Dell Pow
- Install Helm 3
- Fibre Channel requirements
- iSCSI requirements
+- NFS requirements
- Auto RDM for vSphere over FC requirements
- Certificate validation for Unisphere REST API calls
- Mount propagation is enabled on container runtime that is being used
@@ -45,7 +46,7 @@ CSI PowerMax Reverse Proxy is an HTTPS server and has to be configured with an S
The certificate and key are provided to the proxy via a Kubernetes TLS secret (in the same namespace). The SSL certificate must be an X.509 certificate encoded in PEM format. The certificates can be obtained via a Certificate Authority or can be self-signed and generated by a tool such as openssl.
-Starting from v2.7.0 , these secrets will be created automatically using the below tls.key and tls.cert contents provided in values.yaml file.
+Starting from v2.7.0, these secrets will be created automatically using the following tls.key and tls.cert contents provided in the my-powermax-settings.yaml file.
For this, we need to install cert-manager using the below command, which manages the certs and secrets.
```bash
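# Example cert-manager install (manifest version assumed, not taken from the original):
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.yaml
```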
@@ -90,6 +91,13 @@ Set up the iSCSI initiators as follows:
For more information about configuring iSCSI, see [Dell Host Connectivity guide](https://www.delltechnologies.com/asset/zh-tw/products/storage/technical-support/docu5128.pdf).
+### NFS requirements
+
+CSI Driver for Dell PowerMax supports NFS communication. Ensure that the following requirements are met before you install CSI Driver:
+- Configure the NFS network. Please refer [here](https://dl.dell.com/content/manual57826791-dell-powermax-file-protocol-guide.pdf?language=en-us&ps=true) for more details.
+- Set up the PowerMax Embedded Management guest to access Unisphere for PowerMax.
+- Create the NAS server. Please refer [here](https://dl.dell.com/content/manual55638050-dell-powermax-file-quick-start-guide.pdf?language=en-us&ps=true) for more details.
+
### Auto RDM for vSphere over FC requirements
The CSI Driver for Dell PowerMax supports auto RDM for vSphere over FC. These requirements are applicable for the clusters deployed on ESX/ESXi using a virtualized environment.
@@ -177,7 +185,6 @@ Set up the PowerPath for Linux as follows:
```bash
systemctl start PowerPath
```
-
>Note: Do not install Dell PowerPath if multi-path software is already installed, as they cannot co-exist with native multi-path software.
### (Optional) Volume Snapshot Requirements
@@ -185,7 +192,7 @@ Set up the PowerPath for Linux as follows:
### (Optional) Replication feature Requirements
-Applicable only if you decided to enable the Replication feature in `values.yaml`
+Applicable only if you decided to enable the Replication feature in `my-powermax-settings.yaml`
```yaml
replication:
@@ -201,7 +208,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
**Steps**
-1. Run `git clone -b v2.7.0 https://github.com/dell/csi-powermax.git` to clone the git repository. This will include the Helm charts and dell-csi-helm-installer scripts.
+1. Run `git clone -b v2.8.0 https://github.com/dell/csi-powermax.git` to clone the git repository. This will include the Helm charts and dell-csi-helm-installer scripts.
2. Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace powermax` to create a new one
3. Edit the `samples/secret/secret.yaml` file, to point to the correct namespace, and replace the values for the username and password parameters.
These values can be obtained using base64 encoding as described in the following example:
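A minimal sketch of that base64 step (example credentials, not values from the original):
```bash
# Encode the username and password before placing them in secret.yaml
echo -n "myusername" | base64
echo -n "mypassword" | base64
```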
@@ -214,9 +221,9 @@ CRDs should be configured during replication prepare stage with repctl as descri
```bash
kubectl create -f samples/secret/secret.yaml
```
-5. Copy the default values.yaml file
+5. Download the default values.yaml file
```bash
- cd helm && cp csi-powermax/values.yaml my-powermax-settings.yaml
+ cd dell-csi-helm-installer && wget -O my-powermax-settings.yaml https://github.com/dell/helm-charts/raw/csi-powermax-2.8.0/charts/csi-powermax/values.yaml
```
6. Ensure that Unisphere has 10.0 REST endpoint support by clicking on Unisphere -> Help (?) -> About in the Unisphere for PowerMax GUI.
7. Edit the newly created file and provide values for the following parameters
@@ -260,6 +267,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
| version | Current version of the driver. Don't modify this value as this value will be used by the install script. | Yes | v2.3.0 |
| images | Defines the container images used by the driver. | - | - |
| driverRepository | Defines the registry of the container image used for the driver. | Yes | dellemc |
+| maxPowerMaxVolumesPerNode | Specifies the maximum number of volumes that can be created on a node. | Yes | 0 |
| **controller** | Allows configuration of the controller-specific parameters.| - | - |
| controllerCount | Defines the number of csi-powerscale controller pods to deploy to the Kubernetes release| Yes | 2 |
| volumeNamePrefix | Defines a string prefix for the names of PersistentVolumes created | Yes | "k8s" |
@@ -300,6 +308,9 @@ CRDs should be configured during replication prepare stage with repctl as descri
| image | Image for dell-csi-replicator sidecar. | No | " " |
| replicationContextPrefix | enables side cars to read required information from the volume context | No | powermax |
| replicationPrefix | Determine if replication is enabled | No | replication.storage.dell.com |
+| **storageCapacity** | An optional feature that enables storage capacity tracking and helps the scheduler check whether the requested capacity is available on the PowerMax array before allocating volumes to nodes.| - | - |
+| enabled | A boolean that enables/disables the storage capacity tracking feature. | - | true |
+| pollInterval | Configures how often the external-provisioner polls the driver to detect changed capacity | - | 5m |
| **vSphere**| This section refers to the configuration options for VMware virtualized environment support via RDM | - | - |
| enabled | A boolean that enables/disables VMware virtualized environment support. | No | false |
| fcPortGroup | Existing portGroup that driver will use for vSphere. | Yes | "" |
@@ -310,7 +321,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
8. Install the driver using `csi-install.sh` bash script by running
```bash
- cd ../dell-csi-helm-installer && ./csi-install.sh --namespace powermax --values ../helm/my-powermax-settings.yaml
+ cd ../dell-csi-helm-installer && ./csi-install.sh --namespace powermax --values ./my-powermax-settings.yaml
```
9. Or you can also install the driver using standalone helm chart using the command
```bash
diff --git a/content/v2/csidriver/installation/helm/powerstore.md b/content/v2/csidriver/installation/helm/powerstore.md
index c535f7fce7..4a77564107 100644
--- a/content/v2/csidriver/installation/helm/powerstore.md
+++ b/content/v2/csidriver/installation/helm/powerstore.md
@@ -147,7 +147,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
## Install the Driver
**Steps**
-1. Run `git clone -b v2.7.0 https://github.com/dell/csi-powerstore.git` to clone the git repository.
+1. Run `git clone -b v2.8.0 https://github.com/dell/csi-powerstore.git` to clone the git repository.
2. Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace csi-powerstore` to create a new one. "csi-powerstore" is just an example. You can choose any name for the namespace.
But make sure to align to the same namespace during the whole installation.
3. Edit the `samples/secret/secret.yaml` file and configure connection information for your PowerStore arrays by changing the following parameters:
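For orientation, a minimal sketch of the kind of array entry this file holds; the field names below follow the csi-powerstore samples and should be verified against your copy of `secret.yaml`:
```yaml
arrays:
    # Assumed example entry; replace every value with your array's details.
  - endpoint: "https://10.0.0.1/api/rest"   # PowerStore management endpoint
    globalID: "PS000000000001"              # unique identifier of the array
    username: "admin"
    password: "password"
    skipCertificateValidation: true
    isDefault: true
    blockProtocol: "ISCSI"
```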
@@ -172,9 +172,9 @@ CRDs should be configured during replication prepare stage with repctl as descri
5. Create storage classes using ones from `samples/storageclass` folder as an example and apply them to the Kubernetes cluster by running `kubectl create -f <path_to_storageclass_yaml>`
> If you do not specify `arrayID` parameter in the storage class then the array that was specified as the default would be used for provisioning volumes.
-6. Copy the default values.yaml file
+6. Download the default values.yaml file
```bash
- cd dell-csi-helm-installer && cp ../helm/csi-powerstore/values.yaml ./my-powerstore-settings.yaml
+ cd dell-csi-helm-installer && wget -O my-powerstore-settings.yaml https://github.com/dell/helm-charts/raw/csi-powerstore-2.8.0/charts/csi-powerstore/values.yaml
```
7. Edit the newly created values file and provide values for the following parameters `vi my-powerstore-settings.yaml`:
@@ -184,6 +184,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
| logFormat | Defines CSI driver log format | No | "JSON" |
| externalAccess | Defines additional entries for hostAccess of NFS volumes, single IP address and subnet are valid entries | No | " " |
| kubeletConfigDir | Defines kubelet config path for cluster | Yes | "/var/lib/kubelet" |
+| maxPowerstoreVolumesPerNode | Defines the default value for maximum number of volumes that the controller can publish to the node. If the value is zero, then CO shall decide how many volumes of this type can be published by the controller to the node. This limit is applicable to all the nodes in the cluster for which the node label 'max-powerstore-volumes-per-node' is not set. | No | 0 |
| imagePullPolicy | Policy to determine if the image should be pulled prior to starting the container. | Yes | "IfNotPresent" |
| nfsAcls | Defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory. | No | "0777" |
| connection.enableCHAP | Defines whether the driver should use CHAP for iSCSI connections or not | No | False |
diff --git a/content/v2/csidriver/installation/helm/unity.md b/content/v2/csidriver/installation/helm/unity.md
index c14d324eed..e7e0f12538 100644
--- a/content/v2/csidriver/installation/helm/unity.md
+++ b/content/v2/csidriver/installation/helm/unity.md
@@ -92,7 +92,7 @@ Install CSI Driver for Unity XT using this procedure.
* As a pre-requisite for running this procedure, you must have downloaded the files, including the Helm chart, from the source [git repository](https://github.com/dell/csi-unity) with the command
```bash
- git clone -b v2.7.0 https://github.com/dell/csi-unity.git
+ git clone -b v2.8.0 https://github.com/dell/csi-unity.git
```
* In the top-level dell-csi-helm-installer directory, there should be two scripts, `csi-install.sh` and `csi-uninstall.sh`.
* Ensure _unity_ namespace exists in Kubernetes cluster. Use the `kubectl create namespace unity` command to create the namespace if the namespace is not present.
@@ -101,34 +101,39 @@ Install CSI Driver for Unity XT using this procedure.
Procedure
-1. Collect information from the Unity XT Systems like Unique ArrayId, IP address, username, and password. Make a note of the value for these parameters as they must be entered in the `secret.yaml` and `myvalues.yaml` file.
+1. Collect information from the Unity XT Systems like unique ArrayId, IP address, username, and password. Make a note of the value for these parameters as they must be entered in the `secret.yaml` and `myvalues.yaml` file.
**Note**:
* ArrayId corresponds to the serial number of Unity XT array.
* Unity XT Array username must have role as Storage Administrator to be able to perform CRUD operations.
* If the user is using a complex K8s version like "v1.24.6-mirantis-1", use this kubeVersion check in helm/csi-unity/Chart.yaml file.
- kubeVersion: ">= 1.24.0-0 < 1.28.0-0"
+ kubeVersion: ">= 1.24.0-0 < 1.29.0-0"
-2. Copy the `helm/csi-unity/values.yaml` into a file named `myvalues.yaml` in the same directory of `csi-install.sh`, to customize settings for installation.
+2. Get the required values.yaml using the command below:
+
+```bash
+cd dell-csi-helm-installer && wget -O my-unity-settings.yaml https://github.com/dell/helm-charts/raw/csi-unity-2.8.0/charts/csi-unity/values.yaml
+```
+
+3. Edit `my-unity-settings.yaml` to set the following parameters for your installation:
-3. Edit `myvalues.yaml` to set the following parameters for your installation:
-
The following table lists the primary configurable parameters of the Unity XT driver chart and their default values. More detailed information can be found in the [`values.yaml`](https://github.com/dell/helm-charts/blob/main/charts/csi-unity/values.yaml) file in this repository.
| Parameter | Description | Required | Default |
| --------- | ----------- | -------- |-------- |
- | version | helm version | true | - |
- | logLevel | LogLevel is used to set the logging level of the driver | true | info |
- | allowRWOMultiPodAccess | Flag to enable multiple pods to use the same PVC on the same node with RWO access mode. | false | false |
+ | logLevel | LogLevel is used to set the logging level of the driver | No | info |
+ | allowRWOMultiPodAccess | Flag to enable multiple pods to use the same PVC on the same node with RWO access mode. | No | false |
| kubeletConfigDir | Specify kubelet config dir path | Yes | /var/lib/kubelet |
- | syncNodeInfoInterval | Time interval to add node info to the array. Default 15 minutes. The minimum value should be 1 minute. | false | 15 |
- | maxUnityVolumesPerNode | Maximum number of volumes that controller can publish to the node. | false | 0 |
- | certSecretCount | Represents the number of certificate secrets, which the user is going to create for SSL authentication. (unity-cert-0..unity-cert-n). The minimum value should be 1. | false | 1 |
+ | syncNodeInfoInterval | Time interval to add node info to the array. Default 15 minutes. The minimum value should be 1 minute. | No | 15 |
+ | maxUnityVolumesPerNode | Maximum number of volumes that controller can publish to the node. | No | 0 |
+ | certSecretCount | Represents the number of certificate secrets, which the user is going to create for SSL authentication. (unity-cert-0..unity-cert-n). The minimum value should be 1. | No | 1 |
| imagePullPolicy | The default pull policy is IfNotPresent which causes the Kubelet to skip pulling an image if it already exists. | Yes | IfNotPresent |
- | podmon.enabled | service to monitor failing jobs and notify | false | - |
- | podmon.image| pod man image name | false | - |
+ | podmon.enabled | service to monitor failing jobs and notify | No | false |
+ | podmon.image | podmon image name | No | - |
| tenantName | Tenant name added while adding host entry to the array | No | |
| fsGroupPolicy | Defines which FS Group policy mode to be used, Supported modes `None, File and ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
+ | storageCapacity.enabled | Enable/Disable storage capacity tracking | No | true |
+ | storageCapacity.pollInterval | Configure how often the driver checks for changed capacity | No | 5m |
| **controller** | Allows configuration of the controller-specific parameters.| - | - |
| controllerCount | Defines the number of csi-unity controller pods to deploy to the Kubernetes release| Yes | 2 |
| volumeNamePrefix | Defines a string prefix for the names of PersistentVolumes created | Yes | "k8s" |
@@ -185,12 +190,12 @@ Procedure
| Parameter | Description | Required | Default |
| ------------------------- | ---------------------------------------------- | -------- |-------- |
- | storageArrayList.username | Username for accessing Unity XT system | true | - |
- | storageArrayList.password | Password for accessing Unity XT system | true | - |
- | storageArrayList.endpoint | REST API gateway HTTPS endpoint Unity XT system| true | - |
- | storageArrayList.arrayId | ArrayID for Unity XT system | true | - |
- | storageArrayList.skipCertificateValidation | "skipCertificateValidation " determines if the driver is going to validate unisphere certs while connecting to the Unisphere REST API interface. If it is set to false, then a secret unity-certs has to be created with an X.509 certificate of CA which signed the Unisphere certificate. | true | true |
- | storageArrayList.isDefault| An array having isDefault=true or isDefault=true will be considered as the default array when arrayId is not specified in the storage class. This parameter should occur only once in the list. | true | - |
+ | storageArrayList.username | Username for accessing Unity XT system | Yes | - |
+ | storageArrayList.password | Password for accessing Unity XT system | Yes | - |
+ | storageArrayList.endpoint | REST API gateway HTTPS endpoint Unity XT system| Yes | - |
+ | storageArrayList.arrayId | ArrayID for Unity XT system | Yes | - |
+ | storageArrayList.skipCertificateValidation | `skipCertificateValidation` determines if the driver is going to validate Unisphere certs while connecting to the Unisphere REST API interface. If it is set to false, then a secret unity-certs has to be created with an X.509 certificate of the CA which signed the Unisphere certificate. | Yes | true |
+ | storageArrayList.isDefault| An array having isDefault=true will be considered as the default array when arrayId is not specified in the storage class. This parameter should occur only once in the list. | Yes | - |
Example: secret.yaml
@@ -208,6 +213,7 @@ Procedure
password: "password"
endpoint: "https://10.1.1.2/"
skipCertificateValidation: true
+ isDefault: false
```
Use the following command to create a new secret unity-creds from `secret.yaml` file.
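For reference, creating the secret typically takes the following form (an assumption, matching the secret name and namespace used above):
```bash
# Create the unity-creds secret from the edited secret.yaml
kubectl create secret generic unity-creds -n unity --from-file=config=secret.yaml
```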
@@ -243,6 +249,7 @@ Procedure
password: "password"
endpoint: "https://10.1.1.2/"
skipCertificateValidation: true
+ isDefault: false
```
**Note:** Parameters "allowRWOMultiPodAccess" and "syncNodeInfoInterval" have been enabled for configuration in values.yaml and this helps users to dynamically change these values without the need for driver re-installation.
@@ -326,25 +333,20 @@ Procedure
**Note**:
To install nightly or latest csi driver build using bash script use this command:
```bash
- /csi-install.sh --namespace unity --values ./myvalues.yaml --version latest
+ /csi-install.sh --namespace unity --values ./myvalues.yaml --version latest --helm-charts-version <version>
```
-8. You can also install the driver using standalone helm chart by running helm install command, first using the --dry-run flag to
- confirm various parameters are as desired. Once the parameters are validated, run the command without the --dry-run flag.
- Note: This example assumes that the user is at repo root helm folder i.e csi-unity/helm.
+8. You can also install the driver using the standalone helm chart by cloning the centralized helm charts repository and running the helm install command as shown.
**Syntax**:
```bash
- helm install --dry-run --values --namespace
- ```
- `` - namespace of the driver installation.
- `` - unity in case of unity-creds and unity-certs-0 secrets.
- `` - Path of the helm directory.
- e.g:
- ```bash
- helm install --dry-run --values ./csi-unity/myvalues.yaml --namespace unity unity ./csi-unity
- ```
+ git clone -b csi-unity-2.8.0 https://github.com/dell/helm-charts
+
+ helm install <release-name> dell/container-storage-modules -n <namespace> --version <helm-chart-version> -f <values-file>
+
+ Example: helm install unity dell/container-storage-modules -n csi-unity --version 1.0.1 -f values.yaml
+ ```
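Installing from `dell/container-storage-modules` assumes the Dell Helm repository has already been added to the client; if it has not, standard Helm usage (an assumption, not shown above) is:
```bash
helm repo add dell https://dell.github.io/helm-charts
helm repo update
```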
## Certificate validation for Unisphere REST API calls
diff --git a/content/v2/csidriver/installation/offline/_index.md b/content/v2/csidriver/installation/offline/_index.md
index 52043d99f0..bb46d37b88 100644
--- a/content/v2/csidriver/installation/offline/_index.md
+++ b/content/v2/csidriver/installation/offline/_index.md
@@ -20,10 +20,10 @@ As well as the Dell CSI Operator
## Dependencies
Multiple Linux-based systems may be required to create and process an offline bundle for use.
-* One Linux-based system, with internet access, will be used to create the bundle. This involved the user cloning a git repository hosted on github.com and then invoking a script that utilizes `docker` or `podman` to pull and save container images to file.
+* One Linux-based system, with Internet access, will be used to create the bundle. This involves the user cloning a git repository hosted on github.com and then invoking a script that utilizes `docker` or `podman` to pull and save container images to file.
* One Linux-based system, with access to an image registry, to invoke a script that uses `docker` or `podman` to restore container images from file and push them to a registry
-If one Linux system has both internet access and access to an internal registry, that system can be used for both steps.
+If one Linux system has both Internet access and access to an internal registry, that system can be used for both steps.
Preparing an offline bundle requires the following utilities:
@@ -47,7 +47,7 @@ To perform an offline installation of a driver or the Operator, the following st
### Building an offline bundle
-This needs to be performed on a Linux system with access to the internet as a git repo will need to be cloned, and container images pulled from public registries.
+This needs to be performed on a Linux system with access to the Internet as a git repo will need to be cloned, and container images pulled from public registries.
To build an offline bundle, the following steps are needed:
1. Perform a `git clone` of the desired repository. For a helm-based install, the specific driver repo should be cloned. For an Operator based deployment, the Dell CSI Operator repo should be cloned
@@ -84,18 +84,19 @@ cd dell-csi-operator/scripts
dellemc/csi-powermax:v2.3.1
dellemc/csi-powermax:v2.4.0
dellemc/csi-powermax:v2.5.0
- dellemc/csi-powerstore:v2.5.0
dellemc/csi-powerstore:v2.6.0
dellemc/csi-powerstore:v2.7.0
+ dellemc/csi-powerstore:v2.8.0
dellemc/csi-unity:v2.3.0
dellemc/csi-unity:v2.4.0
dellemc/csi-unity:v2.5.0
- dellemc/csi-vxflexos:v2.5.0
dellemc/csi-vxflexos:v2.6.0
dellemc/csi-vxflexos:v2.7.0
+ dellemc/csi-vxflexos:v2.8.0
dellemc/dell-csi-operator:v1.12.0
dellemc/sdc:3.6
dellemc/sdc:3.6.0.6
+ dellemc/sdc:3.6.1
docker.io/busybox:1.32.0
...
...
@@ -215,17 +216,18 @@ Preparing a offline bundle for installation
dellemc/csi-powermax:v2.3.1 -> localregistry:5000/csi-operator/csi-powermax:v2.3.1
dellemc/csi-powermax:v2.4.0 -> localregistry:5000/csi-operator/csi-powermax:v2.4.0
dellemc/csi-powermax:v2.5.0 -> localregistry:5000/csi-operator/csi-powermax:v2.5.0
- dellemc/csi-powerstore:v2.5.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.5.0
dellemc/csi-powerstore:v2.6.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.6.0
dellemc/csi-powerstore:v2.7.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.7.0
+ dellemc/csi-powerstore:v2.8.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.8.0
dellemc/csi-unity:v2.3.0 -> localregistry:5000/csi-operator/csi-unity:v2.3.0
dellemc/csi-unity:v2.4.0 -> localregistry:5000/csi-operator/csi-unity:v2.4.0
dellemc/csi-unity:v2.5.0 -> localregistry:5000/csi-operator/csi-unity:v2.5.0
- dellemc/csi-vxflexos:v2.5.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.5.0
dellemc/csi-vxflexos:v2.6.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.6.0
dellemc/csi-vxflexos:v2.7.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.7.0
+ dellemc/csi-vxflexos:v2.8.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.8.0
dellemc/sdc:3.6 -> localregistry:5000/csi-operator/sdc:3.6
dellemc/sdc:3.6.0.6 -> localregistry:5000/csi-operator/sdc:3.6.0.6
+ dellemc/sdc:3.6.1 -> localregistry:5000/csi-operator/sdc:3.6.1
docker.io/busybox:1.32.0 -> localregistry:5000/csi-operator/busybox:1.32.0
...
...
@@ -241,17 +243,18 @@ Preparing a offline bundle for installation
changing: dellemc/csi-powermax:v2.3.1 -> localregistry:5000/csi-operator/csi-powermax:v2.3.1
changing: dellemc/csi-powermax:v2.4.0 -> localregistry:5000/csi-operator/csi-powermax:v2.4.0
changing: dellemc/csi-powermax:v2.5.0 -> localregistry:5000/csi-operator/csi-powermax:v2.5.0
- changing: dellemc/csi-powerstore:v2.5.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.5.0
changing: dellemc/csi-powerstore:v2.6.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.6.0
changing: dellemc/csi-powerstore:v2.7.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.7.0
+ changing: dellemc/csi-powerstore:v2.8.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.8.0
changing: dellemc/csi-unity:v2.3.0 -> localregistry:5000/csi-operator/csi-unity:v2.3.0
changing: dellemc/csi-unity:v2.4.0 -> localregistry:5000/csi-operator/csi-unity:v2.4.0
changing: dellemc/csi-unity:v2.5.0 -> localregistry:5000/csi-operator/csi-unity:v2.5.0
- changing: dellemc/csi-vxflexos:v2.5.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.5.0
changing: dellemc/csi-vxflexos:v2.6.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.6.0
changing: dellemc/csi-vxflexos:v2.7.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.7.0
+ changing: dellemc/csi-vxflexos:v2.8.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.8.0
changing: dellemc/sdc:3.6 -> localregistry:5000/csi-operator/sdc:3.6
changing: dellemc/sdc:3.6.0.6 -> localregistry:5000/csi-operator/sdc:3.6.0.6
+ changing: dellemc/sdc:3.6.1 -> localregistry:5000/csi-operator/sdc:3.6.1
changing: docker.io/busybox:1.32.0 -> localregistry:5000/csi-operator/busybox:1.32.0
...
...
diff --git a/content/v2/csidriver/installation/operator/_index.md b/content/v2/csidriver/installation/operator/_index.md
index a086c56ba2..fa57b0520f 100644
--- a/content/v2/csidriver/installation/operator/_index.md
+++ b/content/v2/csidriver/installation/operator/_index.md
@@ -35,7 +35,7 @@ If you have installed an old version of the `dell-csi-operator` which was availa
| ------------------ | --------- | -------------- | -------------------- | --------------------- |
| CSI PowerMax | 2.5.0 | v2.5.0 | 1.23, 1.24, 1.25 | 4.10, 4.10 EUS, 4.11 |
| CSI PowerMax | 2.6.0 | v2.6.0 | 1.24, 1.25, 1.26 | 4.10, 4.10 EUS, 4.11 |
-| CSI PowerMax | 2.7.0 | v2.7.0 | 1.25, 1.26, 1.27 | 4.11, 4.12 EUS, 4.12 |
+| CSI PowerMax | 2.7.0 | v2.7.0 | 1.25, 1.26, 1.27 | 4.11, 4.12, 4.12 EUS |
| CSI PowerFlex | 2.5.0 | v2.5.0 | 1.23, 1.24, 1.25 | 4.10, 4.10 EUS, 4.11 |
| CSI PowerFlex | 2.6.0 | v2.6.0 | 1.24, 1.25, 1.26 | 4.10, 4.10 EUS, 4.11 |
| CSI PowerFlex | 2.7.0 | v2.7.0 | 1.25, 1.26, 1.27 | 4.11, 4.12 EUS, 4.12 |
@@ -206,8 +206,8 @@ Or
{driver name}_{driver version}_ops_{OpenShift version}.yaml
```
For example:
-* samples/powermax_v270_k8s_126.yaml* <- To install CSI PowerMax driver v2.7.0 on a Kubernetes 1.26 cluster
-* samples/powermax_v270_ops_411.yaml* <- To install CSI PowerMax driver v2.7.0 on an OpenShift 4.11 cluster
+* *samples/powermax_v270_k8s_127.yaml* <- To install CSI PowerMax driver v2.7.0 on a Kubernetes 1.27 cluster
+* *samples/powermax_v270_ops_412.yaml* <- To install CSI PowerMax driver v2.7.0 on an OpenShift 4.12 cluster
Copy the correct sample file and edit the mandatory & any optional parameters specific to your driver installation by following the instructions [here](#modify-the-driver-specification)
>NOTE: A detailed explanation of the various mandatory and optional fields in the CustomResource is available [here](#custom-resource-specification). Please make sure to read through and understand the various fields.
diff --git a/content/v2/csidriver/installation/operator/operator_migration.md b/content/v2/csidriver/installation/operator/operator_migration.md
new file mode 100644
index 0000000000..5a05bbf640
--- /dev/null
+++ b/content/v2/csidriver/installation/operator/operator_migration.md
@@ -0,0 +1,77 @@
+---
+title: CSI to CSM Operator Migration
+description: >
+ Migrating from CSI Operator to CSM Operator
+---
+
+## CR Sample Files
+
+{{
}}
+>NOTE: Sample files refer to the latest version for each platform. If you do not want to upgrade, please find your preferred version in the [csm-operator repository](https://github.com/dell/csm-operator/blob/main/samples).
+
+## Migration Steps
+
+1. Save the CR yaml file of the current CSI driver to preserve the settings. Use the following commands in your cluster to get the CR:
+ ```
+ kubectl -n <namespace> get <driver-type>
+ kubectl -n <namespace> get <driver-type>/<driver-name> -o yaml
+ ```
+ Example for CSI Unity:
+ ```
+ kubectl -n openshift-operators get CSIUnity
+ kubectl -n openshift-operators get CSIUnity/test-unity -o yaml
+ ```
+2. Map and update the settings from the CR in step 1 to the relevant CSM Operator CR
+ - As the yaml content may differ, ensure the values held in the step 1 CR backup are present in the new CR before installing the new driver
+ - Ex: spec.driver.fsGroupPolicy in [PowerMax 2.6 for CSI Operator](https://github.com/dell/dell-csi-operator/blob/main/samples/powermax_v260_k8s_126.yaml#L17C5-L17C18) maps to spec.driver.csiDriverSpec.fSGroupPolicy in [PowerMax 2.6 for CSM Operator](https://github.com/dell/csm-operator/blob/main/samples/storage_csm_powermax_v260.yaml#L28C7-L28C20)
+3. Retain (or do not delete) the secret, namespace, storage classes, and volume snapshot classes from the original deployment as they will be re-used in the CSM operator deployment
+4. Uninstall the CR from the CSI Operator
+ ```
+ kubectl delete <driver-type>/<driver-name> -n <namespace>
+ ```
+5. Uninstall the CSI Operator itself
+ - Instructions can be found [here](../../../../deployment/csmoperator/#uninstall)
+6. Install the CSM Operator
+ - Instructions can be found [here](../../../../deployment/csmoperator/#installation)
+7. Install the CR updated in step 2
+ - Instructions can be found [here](../#installing-csi-driver-via-operator)
+>NOTE: Uninstallation of the driver and the Operator is non-disruptive for mounted volumes. Nonetheless, you cannot create new volumes or snapshots, or move a Pod.
+
+## OpenShift Web Console Migration Steps
+
+1. Save the CR yaml file of the current CSI driver to preserve the settings (for use in step 6). Use the following commands in your cluster to get the CR:
+ ```
+ kubectl -n <namespace> get <driver-type>
+ kubectl -n <namespace> get <driver-type>/<driver-name> -o yaml
+ ```
+ Example for CSI Unity:
+ ```
+ kubectl -n openshift-operators get CSIUnity
+ kubectl -n openshift-operators get CSIUnity/test-unity -o yaml
+ ```
+2. Retain (or do not delete) the secret, namespace, storage classes, and volume snapshot classes from the original deployment as they will be re-used in the CSM operator deployment
+3. Delete the CSI driver through the CSI Operator in the OpenShift Web Console
+ - Find the CSI operator under *Operators* -> *Installed Operators*
+ - Select the *Dell CSI Operator* and find your installed CSI driver under *All instances*
+4. Uninstall the CSI Operator in the OpenShift Web Console
+5. Install the CSM Operator in the OpenShift Web Console
+ - Search for *Dell* in the OperatorHub
+ - Select *Dell Container Storage Modules* and install
+6. Install the CSI driver through the CSM Operator in the OpenShift Web Console
+ - Select *Create instance* under the provided Container Storage Module API
+ - Use the CR backup from step 1 to manually map desired settings to the new CSI driver
+ - As the yaml content may differ, ensure the values held in the step 1 CR backup are present in the new CR before installing the new driver
+ - Ex: spec.driver.fsGroupPolicy in [PowerMax 2.6 for CSI Operator](https://github.com/dell/dell-csi-operator/blob/main/samples/powermax_v260_k8s_126.yaml#L17C5-L17C18) maps to spec.driver.csiDriverSpec.fSGroupPolicy in [PowerMax 2.6 for CSM Operator](https://github.com/dell/csm-operator/blob/main/samples/storage_csm_powermax_v260.yaml#L28C7-L28C20)
+>NOTE: Uninstallation of the driver and the Operator is non-disruptive for mounted volumes. Nonetheless, you cannot create new volumes or snapshots, or move a Pod.
+
+## Testing
+
+To test that the new installation is working, please follow the steps outlined [here](../../test) for your specific driver.
\ No newline at end of file
diff --git a/content/v2/csidriver/installation/operator/powerflex.md b/content/v2/csidriver/installation/operator/powerflex.md
index 61a94669b2..a4de8f46e6 100644
--- a/content/v2/csidriver/installation/operator/powerflex.md
+++ b/content/v2/csidriver/installation/operator/powerflex.md
@@ -5,7 +5,6 @@ description: >
---
{{% pageinfo color="primary" %}}
The Dell CSI Operator is no longer actively maintained or supported. Dell CSI Operator has been replaced with [Dell CSM Operator](https://dell.github.io/csm-docs/docs/deployment/csmoperator/). If you are currently using Dell CSI Operator, refer to the [operator migration documentation](https://dell.github.io/csm-docs/docs/csidriver/installation/operator/operator_migration/) to migrate from Dell CSI Operator to Dell CSM Operator.
-
CSM 1.7.1 is applicable to helm based installations of PowerFlex driver.
{{% /pageinfo %}}
@@ -172,7 +171,7 @@ For detailed PowerFlex installation procedure, see the _Dell PowerFlex Deploymen
namespace: test-vxflexos
spec:
driver:
- configVersion: v2.6.0
+ configVersion: v2.7.0
replicas: 1
dnsPolicy: ClusterFirstWithHostNet
forceUpdate: false
@@ -224,8 +223,16 @@ For detailed PowerFlex installation procedure, see the _Dell PowerFlex Deploymen
- name: X_CSI_HEALTH_MONITOR_ENABLED
value: "false"
+ # X_CSI_MAX_VOLUMES_PER_NODE: Defines the maximum PowerFlex volumes that can be created per node
+ # Allowed values: Any value greater than or equal to 0
+ # If value is 0 then the orchestrator decides how many volumes can be published by the controller to
+ # the node
+ # Default value: "0"
+ - name: X_CSI_MAX_VOLUMES_PER_NODE
+ value: "0"
+
initContainers:
- - image: dellemc/sdc:3.6.0.6
+ - image: dellemc/sdc:3.6.1
imagePullPolicy: IfNotPresent
name: sdc
envs:
diff --git a/content/v2/csidriver/installation/operator/powermax.md b/content/v2/csidriver/installation/operator/powermax.md
index 8ba63a4653..bba41cf45e 100644
--- a/content/v2/csidriver/installation/operator/powermax.md
+++ b/content/v2/csidriver/installation/operator/powermax.md
@@ -130,6 +130,7 @@ Create a secret named powermax-certs in the namespace where the CSI PowerMax dri
| --------- | ----------- | -------- |-------- |
| replicas | Controls the number of controller Pods you deploy. If controller Pods are greater than the number of available nodes, excess Pods will become stuck in pending. The default is 2 which allows for Controller high availability. | Yes | 2 |
| fsGroupPolicy | Defines which FS Group policy mode to be used, Supported modes `None, File and ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
+ | storageCapacity | Helps the scheduler to schedule the pod on a node satisfying the topology constraints, only if the requested capacity is available on the storage array | - | true |
| ***Common parameters for node and controller*** |
| X_CSI_K8S_CLUSTER_PREFIX | Define a prefix that is appended to all resources created in the array; unique per K8s/CSI deployment; max length - 3 characters | Yes | XYZ |
| X_CSI_POWERMAX_ENDPOINT | IP address of the Unisphere for PowerMax | Yes | https://0.0.0.0:8443 |
@@ -149,6 +150,7 @@ Create a secret named powermax-certs in the namespace where the CSI PowerMax dri
| ***Node parameters***|
| X_CSI_POWERMAX_ISCSI_ENABLE_CHAP | Enable ISCSI CHAP authentication. For more details on this feature see the related [documentation](../../../features/powermax/#iscsi-chap) | No | false |
 | X_CSI_TOPOLOGY_CONTROL_ENABLED | Enable/Disable topology control. It filters out arrays, associated transport protocol available to each node and creates topology keys based on any such user input. | No | false |
+ | X_CSI_MAX_VOLUMES_PER_NODE | Enable volume limits. It specifies the maximum number of volumes that can be created on a node. | Yes | 0 |
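For illustration, a sketch of how this node parameter might be set in the CR; the structure is assumed, mirroring the PowerStore CR snippet later in this changeset:
```yaml
node:
  envs:
    # X_CSI_MAX_VOLUMES_PER_NODE: maximum volumes that can be created on this node;
    # 0 lets the container orchestrator decide.
    - name: X_CSI_MAX_VOLUMES_PER_NODE
      value: "0"
```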
5. Execute the following command to create the PowerMax custom resource:`kubectl create -f `. The above command will deploy the CSI-PowerMax driver.
@@ -208,7 +210,8 @@ Use a tool such as `openssl` to generate this secret using the example below:
openssl genrsa -out tls.key 2048
openssl req -new -x509 -sha256 -key tls.key -out tls.crt -days 3650
kubectl create secret -n <namespace> tls revproxy-certs --cert=tls.crt --key=tls.key
-kubectl create secret -n tls csirevproxy-tls-secret --cert=tls.crt --key=tls.key
+kubectl create secret -n <namespace> tls csirevproxy-tls-secret --cert=tls.crt --key=tls.key
```
#### Set the following parameters in the CSI PowerMaxReverseProxy Spec
diff --git a/content/v2/csidriver/installation/operator/powerstore.md b/content/v2/csidriver/installation/operator/powerstore.md
index 59616afe38..efda0d0b6f 100644
--- a/content/v2/csidriver/installation/operator/powerstore.md
+++ b/content/v2/csidriver/installation/operator/powerstore.md
@@ -116,6 +116,8 @@ spec:
value: "true"
- name: X_CSI_HEALTH_MONITOR_ENABLED
value: "false"
+ - name: X_CSI_POWERSTORE_MAX_VOLUMES_PER_NODE
+ value: "0"
nodeSelector:
node-role.kubernetes.io/worker: ""
@@ -151,6 +153,7 @@ data:
| X_CSI_NFS_ACLS | Defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory. | No | "0777" |
| ***Node parameters*** |
| X_CSI_POWERSTORE_ENABLE_CHAP | Set to true if you want to enable iSCSI CHAP feature | No | false |
+| X_CSI_POWERSTORE_MAX_VOLUMES_PER_NODE | Specify the default value for the maximum number of volumes that the controller can publish to the node | No | 0 |
6. Execute the following command to create PowerStore custom resource:`kubectl create -f `. The above command will deploy the CSI-PowerStore driver.
- After that, the driver should be installed; you can check the condition of the driver pods by running `kubectl get all -n <driver-namespace>`
diff --git a/content/v2/csidriver/installation/operator/unity.md b/content/v2/csidriver/installation/operator/unity.md
index fce4ba71a9..ff5a411a24 100644
--- a/content/v2/csidriver/installation/operator/unity.md
+++ b/content/v2/csidriver/installation/operator/unity.md
@@ -122,7 +122,7 @@ spec:
args: ["--snapshot-name-prefix=csiunitysnap"]
# Enable/Disable health monitor of CSI volumes from node plugin. Provides details of volume usage.
# - name: external-health-monitor
- # args: ["--monitor-interval=60s"]
+ # args: ["--monitor-interval=60s"]
controller:
envs:
diff --git a/content/v2/csidriver/installation/test/certcsi.md b/content/v2/csidriver/installation/test/certcsi.md
index 04d791ba88..205bb57dcb 100644
--- a/content/v2/csidriver/installation/test/certcsi.md
+++ b/content/v2/csidriver/installation/test/certcsi.md
@@ -62,6 +62,11 @@ storageClasses:
clone: # is volume cloning supported (true or false)
snapshot: # is volume snapshotting supported (true or false)
RWX: # is ReadWriteMany volume access mode supported for non RawBlock volumes (true or false)
+ volumeHealth: false # set this to enable the execution of the VolumeHealthMetricsSuite.
+ # Make sure to enable healthMonitor for the driver's controller and node pods before running this suite. It is recommended to use a smaller interval time for this sidecar and pass the required arguments.
+ VGS: false # set this to enable the execution of the VolumeGroupSnapSuite.
+ # Additionally, make sure to provide the necessary required arguments such as volumeSnapshotClass, vgs-volume-label, and any others as needed.
+ RWOP: false # set this to enable the execution of the MultiAttachSuite with the AccessMode set to ReadWriteOncePod.
ephemeral: # if exists, then run EphemeralVolumeSuite
driver: # driver name for EphemeralVolumeSuite
fstype: # fstype for EphemeralVolumeSuite
@@ -331,6 +336,32 @@ To run block snapshot test suite, run the command:
cert-csi test blocksnap --sc <storage class> --vsc <volume snapshot class>
```
+#### Volume Health Metric Suite
+
+To run the volume health metric test suite, run the command:
+```bash
+cert-csi test volumehealthmetrics --sc <storage class> --driver-ns <driver namespace> --podNum <number of pods> --volNum <number of volumes>
+```
+
+> Note: Make sure to enable healthMonitor for the driver's controller and node pods before running this suite. It is recommended to use a smaller interval time for this sidecar.
+
+#### Ephemeral volumes suite
+
+To run the ephemeral volume test suite, run the command:
+```bash
+cert-csi test ephemeral-volume --driver <driver name> --attr ephemeral-config.properties
+--pods : Number of pods to create
+--pod-name : Create pods with custom name
+--attr : File name for the CSI volume attributes file (required)
+--fs-type: FS Type
+
+Sample ephemeral-config.properties (key/value pair)
+arrayId=arr1
+protocol=iSCSI
+size=5Gi
+```
+
### Running Longevity mode
To run longevity test suite, run the command:
diff --git a/content/v2/csidriver/installation/test/powerflex.md b/content/v2/csidriver/installation/test/powerflex.md
index 60890928ed..deb58f77d4 100644
--- a/content/v2/csidriver/installation/test/powerflex.md
+++ b/content/v2/csidriver/installation/test/powerflex.md
@@ -138,3 +138,91 @@ spec:
```
*NOTE:* The _spec.dataSource_ clause, specifies a source _VolumeSnapshot_ named _pvol0-snap1_ which matches the snapshot's name in `snap1.yaml`.
+
+## Test creating NFS volumes
+**Steps**
+
+1. Navigate to the test/helm directory, which contains the `starttest.sh` script and the _1vol-nfs_ directory. This directory contains a simple Helm chart that will deploy a pod that uses one PowerFlex volume with the NFS filesystem type.
+
+*NOTE:*
+- Helm tests are designed assuming users are using the _storageclass_ name: _vxflexos-nfs_. If your _storageclass_ names differ from these values, please update the templates in 1vol-nfs accordingly (located in `test/helm/1vol-nfs/templates` directory). You can use `kubectl get sc` to check for the _storageclass_ names.
+
+2. Run `sh starttest.sh 1vol-nfs` to deploy the pod. You should see the following:
+```
+Normal Scheduled default-scheduler, Successfully assigned helmtest-vxflexos/vxflextest-0 to worker-1-zwfjtd4eoblkg.domain
+Normal SuccessfulAttachVolume attachdetach-controller, AttachVolume.Attach succeeded for volume "k8s-e279d47296"
+Normal Pulled 13s kubelet, Successfully pulled image "docker.io/centos:latest" in 791.117427ms (791.125522ms including waiting)
+Normal Created 13s kubelet, Created container test
+Normal Started 13s kubelet, Started container test
+10.x.x.x:/k8s-e279d47296 8388608 1582336 6806272 19% /data0
+10.x.x.x:/k8s-e279d47296 on /data0 type nfs4 (rw,relatime,vers=4.2,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.x.x.x,local_lock=none,addr=10.x.x.x)
+```
+3. To stop the test, run `sh stoptest.sh 1vol-nfs`. This script deletes the pods and the volumes depending on the retention setting you have configured.
+
+**Results**
+
+An outline of this workflow is described below:
+1. The _1vol-nfs_ helm chart contains one PersistentVolumeClaim definition in `pvc0.yaml`. It is referenced by the `test.yaml` which creates the pod. The contents of the `pvc0.yaml` file are described below:
+```yaml
+kind: PersistentVolumeClaim
+apiVersion: v1
+metadata:
+ name: pvol0
+ namespace: helmtest-vxflexos
+spec:
+ accessModes:
+ - ReadWriteOnce
+ volumeMode: Filesystem
+ resources:
+ requests:
+ storage: 8Gi
+ storageClassName: vxflexos-nfs
+```
+
+2. The _volumeMode: Filesystem_ requires a mounted file system, and the _resources.requests.storage_ of 8Gi requires an 8 GiB volume. In this case, the _storageClassName: vxflexos-nfs_ directs the system to use a storage class named _vxflexos-nfs_. This step yields a mounted _nfs_ file system. You can create the _vxflexos-nfs_ storage class by using the yaml located in samples/storageclass.
+3. To see the volumes you created, run `kubectl get persistentvolumeclaim -n helmtest-vxflexos` and `kubectl describe persistentvolumeclaim -n helmtest-vxflexos`.
+>*NOTE:* For more information about Kubernetes objects like _StatefulSet_ and _PersistentVolumeClaim_ see [Kubernetes documentation: Concepts](https://kubernetes.io/docs/concepts/).
+
+## Test restoring NFS volume from snapshot
+Test the restore operation workflow to restore NFS volume from a snapshot.
+
+**Prerequisites**
+
+Ensure that you have stopped any previous test instance before performing this procedure.
+
+**Steps**
+
+1. Run `sh snaprestoretest-nfs.sh` to start the test.
+
+This script deploys the _1vol-nfs_ example, creates a snapshot of _pvol0_, and then updates the deployed helm chart from the updated directory _1vols+restore-nfs_. This adds an additional volume that is created from the snapshot.
+
+*NOTE:*
+- Helm tests are designed assuming users are using the _storageclass_ name: _vxflexos-nfs_. If your _storageclass_ names differ from these values, update the templates for 1vols+restore-nfs accordingly (located in `test/helm/1vols+restore-nfs/template` directory). You can use `kubectl get sc` to check for the _storageclass_ names.
+- Helm tests are designed assuming users are using the _snapshotclass_ name: _vxflexos-snapclass_. If your _snapshotclass_ name differs from the default values, update `snap1.yaml` accordingly.
+
+**Results**
+
+An outline of this workflow is described below:
+1. The snapshot is taken using `snap1.yaml`.
+2. _Helm_ is called to upgrade the deployment with a new definition, which is found in the _1vols+restore-nfs_ directory. The `csi-vxflexos/test/helm/1vols+restore-nfs/templates` directory contains the newly created `createFromSnap.yaml` file. The script then creates a _PersistentVolumeClaim_, which is a volume that is dynamically created from the snapshot. Then the helm deployment is upgraded to contain the newly created third volume. In other words, when the `snaprestoretest-nfs.sh` creates a new volume with data from the snapshot, the restore operation is tested. The contents of the `createFromSnap.yaml` are described below:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: restorepvc
+ namespace: helmtest-vxflexos
+spec:
+ storageClassName: vxflexos-nfs
+ dataSource:
+ name: pvol0-snap1
+ kind: VolumeSnapshot
+ apiGroup: snapshot.storage.k8s.io
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 8Gi
+```
+
+*NOTE:* The _spec.dataSource_ clause, specifies a source _VolumeSnapshot_ named _pvol0-snap1_ which matches the snapshot's name in `snap1.yaml`.
\ No newline at end of file
diff --git a/content/v2/csidriver/release/powerflex.md b/content/v2/csidriver/release/powerflex.md
index 6ac2ca691d..d28dfc2198 100644
--- a/content/v2/csidriver/release/powerflex.md
+++ b/content/v2/csidriver/release/powerflex.md
@@ -3,15 +3,22 @@ title: PowerFlex
description: Release notes for PowerFlex CSI driver
---
-## Release Notes - CSI PowerFlex v2.7.1
+## Release Notes - CSI PowerFlex v2.8.0
+
+
### New Features/Changes
-- [K8 1.27 support added.](https://github.com/dell/csm/issues/743)
-- [OCP 4.12 support added](https://github.com/dell/csm/issues/743)
-- [CSM Operator: Support install of Resiliency module](https://github.com/dell/csm/issues/739)
+
+- [#724 - [FEATURE]: CSM support for Openshift 4.13](https://github.com/dell/csm/issues/724)
+- [#763 - [FEATURE]: CSI-PowerFlex 4.0 NFS support](https://github.com/dell/csm/issues/763)
+- [#876 - [FEATURE]: CSI 1.5 spec support -StorageCapacityTracking](https://github.com/dell/csm/issues/876)
+- [#878 - [FEATURE]: CSI 1.5 spec support: Implement Volume Limits](https://github.com/dell/csm/issues/878)
+- [#885 - [FEATURE]: SDC 3.6.1 support](https://github.com/dell/csm/issues/885)
### Fixed Issues
-- [Fix the offline helm installation failure](https://github.com/dell/csm/issues/868)
+
+- [#916 - [BUG]: Remove references to deprecated io/ioutil package](https://github.com/dell/csm/issues/916)
+
### Known Issues
| Issue | Workaround |
@@ -19,10 +26,11 @@ description: Release notes for PowerFlex CSI driver
| Delete namespace that has PVCs and pods created with the driver. The External health monitor sidecar crashes as a result of this operation.| Deleting the namespace deletes the PVCs first and then removes the pods in the namespace. This brings a condition where pods exist without their PVCs and causes the external-health-monitor sidecar to crash. This is a known issue and has been reported at https://github.com/kubernetes-csi/external-health-monitor/issues/100|
| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround: 1. Force delete the pod running on the node that went down 2. Delete the volumeattachment to the node that went down. Now the volume can be attached to the new node. |
| sdc:3.6.0.6 is causing issues while installing the csi-powerflex driver on ubuntu,RHEL8.3 | Workaround: Change the powerflexSdc to sdc:3.6 in values.yaml https://github.com/dell/csi-powerflex/blob/72b27acee7553006cc09df97f85405f58478d2e4/helm/csi-vxflexos/values.yaml#L13 |
-
+| sdc:3.6.1 is causing issues while installing the csi-powerflex driver on Ubuntu. | Workaround: Change the powerflexSdc to sdc:3.6 in values.yaml https://github.com/dell/csi-powerflex/blob/72b27acee7553006cc09df97f85405f58478d2e4/helm/csi-vxflexos/values.yaml#L13 |
+| A CSI ephemeral pod may not get created in OpenShift 4.13 and fail with the error `"error when creating pod: the pod uses an inline volume provided by CSIDriver csi-unity.dellemc.com, and the namespace has a pod security enforcement level that is lower than privileged."` | This issue occurs because OpenShift 4.13 introduced the CSI Volume Admission plugin to restrict the use of a CSI driver capable of provisioning CSI ephemeral volumes during pod admission. Therefore, an additional label `security.openshift.io/csi-ephemeral-volume-profile` in [csidriver.yaml](https://github.com/dell/helm-charts/blob/csi-unity-2.8.0/charts/csi-unity/templates/csidriver.yaml) file with the required security profile value should be provided. Follow [OpenShift 4.13 documentation for CSI Ephemeral Volumes](https://docs.openshift.com/container-platform/4.13/storage/container_storage_interface/ephemeral-storage-csi-inline.html) for more information. |
+| If the volume limit is exhausted and there are pending pods and PVCs due to `exceed max volume count`, the pending PVCs will be bound to PVs and the pending pods will be scheduled to nodes when the driver pods are restarted. | It is advised not to have any pending pods or PVCs once the volume limit per node is exhausted on a CSI Driver. There is an open issue reported with kubernetes at https://github.com/kubernetes/kubernetes/issues/95911 with the same behavior. |
+| The PowerFlex Dockerfile is incorrectly labeling the version as 2.7.0 for the 2.8.0 version. | Describe the driver pod using ```kubectl describe pod $podname -n vxflexos``` to ensure v2.8.0 is installed. |
### Note:
- Support for Kubernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode will not be available in Openshift environment as Openshift doesn't support enabling of alpha features for Production Grade clusters.
-
-- CSI-PowerFlex v2.7.1 is applicable only for helm based installations.
diff --git a/content/v2/csidriver/release/powermax.md b/content/v2/csidriver/release/powermax.md
index 7ec91a65c2..4c4ca10a66 100644
--- a/content/v2/csidriver/release/powermax.md
+++ b/content/v2/csidriver/release/powermax.md
@@ -3,27 +3,29 @@ title: PowerMax
description: Release notes for PowerMax CSI driver
---
-## Release Notes - CSI PowerMax v2.7.0
+## Release Notes - CSI PowerMax v2.8.0
{{% pageinfo color="primary" %}} Linked Proxy mode for CSI reverse proxy is no longer actively maintained or supported. It will be deprecated in CSM 1.9. It is highly recommended that you use stand alone mode going forward. {{% /pageinfo %}}
> Note: Starting from CSI v2.4.0, Only Unisphere 10.0 REST endpoints are supported. It is mandatory that Unisphere should be updated to 10.0. Please find the instructions [here.](https://dl.dell.com/content/manual34878027-dell-unisphere-for-powermax-10-0-0-installation-guide.pdf?language=en-us&ps=true)
+>Note: File Replication for PowerMax is currently not supported.
+
+
+
### New Features/Changes
-- [Added support for OpenShift 4.12](https://github.com/dell/csm/issues/571)
-- [Added support for PowerMax v10.0.1 array](https://github.com/dell/csm/issues/760)
-- [Migrated image registry from k8s.gcr.io to registry.k8s.io](https://github.com/dell/csm/issues/744)
-- [Added support for Amazon EKS Anywhere](https://github.com/dell/csm/issues/825)
-- [Added support for Kubernetes 1.27](https://github.com/dell/csm/issues/761)
-- [Added support for read only mount option for block volumes](https://github.com/dell/csm/issues/792)
-- [Added support for host groups for vSphere environment](https://github.com/dell/csm/issues/746)
-- [Added support to delete volumes on target array when it is set to Delete in storage class](https://github.com/dell/csm/issues/801)
-- [Added support for setting up QoS parameters for throttling performance and bandwidth at Storage Group level](https://github.com/dell/csm/issues/726)
-- [Added support for CSM Operator for PowerMax Driver](https://github.com/dell/csm/issues/769)
-- [Added support to create reverseproxy certs automatically](https://github.com/dell/csm/issues/819)
+
+- [#724 - [FEATURE]: CSM support for Openshift 4.13](https://github.com/dell/csm/issues/724)
+- [#861 - [FEATURE]: CSM for PowerMax file support ](https://github.com/dell/csm/issues/861)
+- [#876 - [FEATURE]: CSI 1.5 spec support -StorageCapacityTracking](https://github.com/dell/csm/issues/876)
+- [#877 - [FEATURE]: Make standalone helm chart available from helm repository : https://dell.github.io/dell/helm-charts](https://github.com/dell/csm/issues/877)
+- [#878 - [FEATURE]: CSI 1.5 spec support: Implement Volume Limits](https://github.com/dell/csm/issues/878)
+- [#922 - [FEATURE]: Use ubi9 micro as base image](https://github.com/dell/csm/issues/922)
+- [#937 - [FEATURE]: Google Anthos 1.15 support for PowerMax](https://github.com/dell/csm/issues/937)
### Fixed Issues
-There are no fixed issues in this release.
+
+- [#916 - [BUG]: Remove references to deprecated io/ioutil package](https://github.com/dell/csm/issues/916)
### Known Issues
@@ -31,7 +33,7 @@ There are no fixed issues in this release.
|-------|------------|
| Unable to update Host: A problem occurred modifying the host resource | This issue occurs when the nodes do not have unique hostnames or when an IP address/FQDN with same sub-domains are used as hostnames. The workaround is to use unique hostnames or FQDN with unique sub-domains|
| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround: 1. Force delete the pod running on the node that went down 2. Delete the volumeattachment to the node that went down. Now the volume can be attached to the new node |
-
+| If the volume limit is exhausted and there are pending pods and PVCs due to `exceed max volume count`, the pending PVCs will be bound to PVs and the pending pods will be scheduled to nodes when the driver pods are restarted. | It is advised not to have any pending pods or PVCs once the volume limit per node is exhausted on a CSI Driver. There is an open issue reported with kubernetes at https://github.com/kubernetes/kubernetes/issues/95911 with the same behavior. |
### Note:
- Support for Kubernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode will not be available in Openshift environment as Openshift doesn't support enabling of alpha features for Production Grade clusters.
diff --git a/content/v2/csidriver/release/powerscale.md b/content/v2/csidriver/release/powerscale.md
index 70790a4eea..cff4e6c1bd 100644
--- a/content/v2/csidriver/release/powerscale.md
+++ b/content/v2/csidriver/release/powerscale.md
@@ -4,21 +4,24 @@ description: Release notes for PowerScale CSI driver
---
-## Release Notes - CSI Driver for PowerScale v2.7.0
+## Release Notes - CSI Driver for PowerScale v2.8.0
+
### New Features/Changes
-- [Allow user to set Quota limit parameters from the PVC request in CSI PowerScale](https://github.com/dell/csm/issues/742)
-- [CSI Spec 1.5: Storage capacity tracking feature](https://github.com/dell/csm/issues/824)
-- [Added support for Kubernetes 1.27](https://github.com/dell/csm/issues/761)
-- [Added support for OpenShift 4.12](https://github.com/dell/csm/issues/571)
-- [Migrated image registry from k8s.gcr.io to registry.k8s.io](https://github.com/dell/csm/issues/744)
-- [CSM Operator: Support install of Resiliency module](https://github.com/dell/csm/issues/739)
+- [#724 - [FEATURE]: CSM support for Openshift 4.13](https://github.com/dell/csm/issues/724)
+- [#877 - [FEATURE]: Make standalone helm chart available from helm repository : https://dell.github.io/dell/helm-charts](https://github.com/dell/csm/issues/877)
+- [#950 - [FEATURE]: PowerScale 9.5.0.4 support](https://github.com/dell/csm/issues/950)
+- [#967 - [FEATURE]: SLES15 SP4 support in csi powerscale](https://github.com/dell/csm/issues/967)
+- [#922 - [FEATURE]: Use ubi9 micro as base image](https://github.com/dell/csm/issues/922)
### Fixed Issues
+- [#916 - [BUG]: Remove references to deprecated io/ioutil package](https://github.com/dell/csm/issues/916)
+- [#487 - [BUG]: Powerscale CSI driver RO PVC-from-snapshot wrong zone](https://github.com/dell/csm/issues/487)
### Known Issues
+
| Issue | Resolution or workaround, if known |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| If the length of the nodeID exceeds 128 characters, the driver fails to update the CSINode object and installation fails. This is due to a limitation set by CSI spec which doesn't allow nodeID to be greater than 128 characters. | The CSI PowerScale driver uses the hostname for building the nodeID which is set in the CSINode resource object, hence we recommend not having very long hostnames in order to avoid this issue. This current limitation of 128 characters is likely to be relaxed in future Kubernetes versions as per this issue in the community: https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/issues/581
**Note:** In kubernetes 1.22 this limit has been relaxed to 192 characters. |
@@ -26,8 +29,9 @@ description: Release notes for PowerScale CSI driver
| Delete namespace that has PVCs and pods created with the driver. The External health monitor sidecar crashes as a result of this operation. | Deleting the namespace deletes the PVCs first and then removes the pods in the namespace. This brings a condition where pods exist without their PVCs and causes the external-health-monitor sidecar to crash. This is a known issue and has been reported at https://github.com/kubernetes-csi/external-health-monitor/issues/100 |
| fsGroupPolicy may not work as expected without root privileges for NFS only https://github.com/kubernetes/examples/issues/260 | To get the desired behavior set "RootClientEnabled" = "true" in the storage class parameter |
| Driver logs shows "VendorVersion=2.3.0+dirty" | Update the driver to csi-powerscale 2.4.0 |
+| PowerScale 9.5.0: driver installation fails with session-based authentication, "HTTP/1.1 401 Unauthorized" | Fix is available in PowerScale >= 9.5.0.4 |
+| If the volume limit is exhausted and there are pending pods and PVCs due to `exceed max volume count`, the pending PVCs will be bound to PVs and the pending pods will be scheduled to nodes when the driver pods are restarted. | It is advised not to have any pending pods or PVCs once the volume limit per node is exhausted on a CSI Driver. There is an open issue reported with Kubernetes at https://github.com/kubernetes/kubernetes/issues/95911 with the same behavior. |
-
-### Note:
+### Note
- Support for Kubernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode will not be available in Openshift environment as Openshift doesn't support enabling of alpha features for Production Grade clusters.
diff --git a/content/v2/csidriver/release/powerstore.md b/content/v2/csidriver/release/powerstore.md
index c18b7246bf..7f2d831a08 100644
--- a/content/v2/csidriver/release/powerstore.md
+++ b/content/v2/csidriver/release/powerstore.md
@@ -3,22 +3,22 @@ title: PowerStore
description: Release notes for PowerStore CSI driver
---
-## Release Notes - CSI PowerStore v2.7.0
+## Release Notes - CSI PowerStore v2.8.0
+
### New Features/Changes
-- [CSI PowerStore - Add support for PowerStore Medusa (v3.5) array](https://github.com/dell/csm/issues/735)
-- [Allow FQDN for the endpoint in CSI-PowerStore](https://github.com/dell/csm/issues/731)
-- [CSM Operator: Support install of Resiliency module](https://github.com/dell/csm/issues/739)
-- [Migrate image registry from k8s.gcr.io to registry.k8s.io](https://github.com/dell/csm/issues/744)
-- [CSM support for Kubernetes 1.27](https://github.com/dell/csm/issues/761)
-- [Add upgrade support of csi-powerstore driver in CSM-Operator](https://github.com/dell/csm/issues/805)
-- [CSM support for Openshift 4.12](https://github.com/dell/csm/issues/571)
-- [Update to the latest UBI/UBI Micro images for CSM](https://github.com/dell/csm/issues/843)
+
+- [#724 - [FEATURE]: CSM support for Openshift 4.13](https://github.com/dell/csm/issues/724)
+- [#877 - [FEATURE]: Make standalone helm chart available from helm repository : https://dell.github.io/dell/helm-charts](https://github.com/dell/csm/issues/877)
+- [#878 - [FEATURE]: CSI 1.5 spec support: Implement Volume Limits](https://github.com/dell/csm/issues/878)
+- [#879 - [FEATURE]: Configurable Volume Attributes use recommended naming convention /](https://github.com/dell/csm/issues/879)
+- [#922 - [FEATURE]: Use ubi9 micro as base image](https://github.com/dell/csm/issues/922)
### Fixed Issues
-- [Storage Capacity Tracking not working in CSI-PowerStore when installed using CSM Operator](https://github.com/dell/csm/issues/823)
-- [CHAP is set to true in the CSI-PowerStore sample file in CSI Operator](https://github.com/dell/csm/issues/812)
-- [Unable to delete application pod when CSI PowerStore is installed using CSM Operator](https://github.com/dell/csm/issues/785)
+
+- [#916 - [BUG]: Remove references to deprecated io/ioutil package](https://github.com/dell/csm/issues/916)
+- [#928 - [BUG]: PowerStore Replication - Delete RG request hangs](https://github.com/dell/csm/issues/928)
### Known Issues
@@ -28,7 +28,10 @@ description: Release notes for PowerStore CSI driver
| fsGroupPolicy may not work as expected without root privileges for NFS only https://github.com/kubernetes/examples/issues/260 | To get the desired behavior set allowRoot: "true" in the storage class parameter |
| If the NVMeFC pod is not getting created and the host loses the ssh connection, causing the driver pods to go to error state | Remove the nvme_tcp module from the host in case of NVMeFC connection |
| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround: 1. Force delete the pod running on the node that went down 2. Delete the volumeattachment to the node that went down. Now the volume can be attached to the new node. |
-| When driver node pods enter CrashLoopBackOff and PVC remains in pending state with one of the following events: 1. failed to provision volume with StorageClass ``: error generating accessibility requirements: no available topology found 2. waiting for a volume to be created, either by external provisioner "csi-powerstore.dellemc.com" or manually created by system administrator. | Check whether all array details present in the secret file are valid and remove any invalid entries if present. Redeploy the driver.
+| When driver node pods enter CrashLoopBackOff and PVC remains in pending state with one of the following events: 1. failed to provision volume with StorageClass ``: error generating accessibility requirements: no available topology found 2. waiting for a volume to be created, either by external provisioner "csi-powerstore.dellemc.com" or manually created by system administrator. | Check whether all array details present in the secret file are valid and remove any invalid entries if present. Redeploy the driver. |
+| If an ephemeral pod is not being created in OpenShift 4.13 and is failing with the error "error when creating pod: the pod uses an inline volume provided by CSIDriver csi-powerstore.dellemc.com, and the namespace has a pod security enforcement level that is lower than privileged." | This issue occurs because OpenShift 4.13 introduced the CSI Volume Admission plugin to restrict the use of a CSI driver capable of provisioning CSI ephemeral volumes during pod admission (https://docs.openshift.com/container-platform/4.13/storage/container_storage_interface/ephemeral-storage-csi-inline.html). Therefore, an additional label "security.openshift.io/csi-ephemeral-volume-profile" needs to be added to the CSIDriver object to support inline ephemeral volumes. A hedged example follows this table. |
+| In OpenShift 4.13, the root user is not allowed to perform write operations on NFS shares, when root squashing is enabled. | The workaround for this issue is to disable root squashing by setting allowRoot: "true" in the NFS storage class. |
+| If the volume limit is exhausted and there are pending pods and PVCs due to `exceed max volume count`, the pending PVCs will be bound to PVs, and the pending pods will be scheduled to nodes when the driver pods are restarted. | It is advised not to have any pending pods or PVCs once the volume limit per node is exhausted on a CSI Driver. There is an open issue reported with Kubernetes at https://github.com/kubernetes/kubernetes/issues/95911 with the same behavior. |
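+
+For the OpenShift 4.13 inline ephemeral volume row above, a hedged example of adding the label to the CSIDriver object; the `privileged` profile value is an assumption, so use the profile your cluster policy allows:
+
+```bash
+# Label the PowerStore CSIDriver object so inline ephemeral volumes pass pod admission.
+kubectl label csidriver/csi-powerstore.dellemc.com security.openshift.io/csi-ephemeral-volume-profile=privileged
+```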
### Note:
diff --git a/content/v2/csidriver/release/unity.md b/content/v2/csidriver/release/unity.md
index 7168afe87f..cd27378100 100644
--- a/content/v2/csidriver/release/unity.md
+++ b/content/v2/csidriver/release/unity.md
@@ -3,20 +3,21 @@ title: Unity XT
description: Release notes for Unity XT CSI driver
---
-## Release Notes - CSI Unity XT v2.7.0
+## Release Notes - CSI Unity XT v2.8.0
+
### New Features/Changes
-- [Migrated image registry from k8s.gcr.io to registry.k8s.io](https://github.com/dell/csm/issues/744)
-- [Added support for OpenShift 4.12](https://github.com/dell/csm/issues/571)
-- [Added support for Kubernetes 1.27](https://github.com/dell/csm/issues/761)
-- [Added support for K3s on Debian OS](https://github.com/dell/csm/issues/798)
-- [Added support for Unisphere 5.3.0 array](https://github.com/dell/csm/issues/842)
+- [#724 - [FEATURE]: CSM support for Openshift 4.13](https://github.com/dell/csm/issues/724)
+- [#876 - [FEATURE]: CSI 1.5 spec support -StorageCapacityTracking](https://github.com/dell/csm/issues/876)
+- [#877 - [FEATURE]: Make standalone helm chart available from helm repository : https://dell.github.io/dell/helm-charts](https://github.com/dell/csm/issues/877)
+- [#891 - [FEATURE]: Enhancing Unity XT driver to handle API requests after the sessionIdleTimeOut in STIG mode](https://github.com/dell/csm/issues/891)
### Fixed Issues
-There are no fixed issues in this release.
-
+- [#849 - [BUG]: CSI driver does not verify iSCSI initiators on the array correctly](https://github.com/dell/csm/issues/849)
+- [#916 - [BUG]: Remove references to deprecated io/ioutil package](https://github.com/dell/csm/issues/916)
### Known Issues
@@ -26,7 +27,8 @@ There are no fixed issues in this release.
| NFS Clone - Resize of the snapshot is not supported by Unity XT Platform, however the user should never try to resize the cloned NFS volume.| Currently, when the driver takes a clone of NFS volume, it succeeds but if the user tries to resize the NFS volumesnapshot, the driver will throw an error.|
| Delete namespace that has PVCs and pods created with the driver. The External health monitor sidecar crashes as a result of this operation.| Deleting the namespace deletes the PVCs first and then removes the pods in the namespace. This brings a condition where pods exist without their PVCs and causes the external-health-monitor sidecar to crash. This is a known issue and has been reported at https://github.com/kubernetes-csi/external-health-monitor/issues/100|
| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround: 1. Force delete the pod running on the node that went down 2. Delete the VolumeAttachment to the node that went down. Now the volume can be attached to the new node. |
-| CSI driver does not verify iSCSI initiators on the array correctly when iSCSI initiator names are not in lowercase - After any node reboot, the driver pod on that rebooted node goes into a failed state, failing to find the iSCSI initiator on the array | Work around is to rename host iSCSI initiators to lowercase and reboot the respective worker node. The CSI driver pod will spin off successfully. Example: Rename "iqn.2000-11.com.DEMOWORKERNODE01:1a234b56cd78" to "iqn.2000-11.com.demoworkernode01:1a234b56cd78" in lowercase.
+| A CSI ephemeral pod may not get created in OpenShift 4.13 and fail with the error `"error when creating pod: the pod uses an inline volume provided by CSIDriver csi-unity.dellemc.com, and the namespace has a pod security enforcement level that is lower than privileged."` | This issue occurs because OpenShift 4.13 introduced the CSI Volume Admission plugin to restrict the use of a CSI driver capable of provisioning CSI ephemeral volumes during pod admission. Therefore, an additional label `security.openshift.io/csi-ephemeral-volume-profile` in [csidriver.yaml](https://github.com/dell/helm-charts/blob/csi-unity-2.8.0/charts/csi-unity/templates/csidriver.yaml) file with the required security profile value should be provided. Follow [OpenShift 4.13 documentation for CSI Ephemeral Volumes](https://docs.openshift.com/container-platform/4.13/storage/container_storage_interface/ephemeral-storage-csi-inline.html) for more information. |
+| If the volume limit is exhausted and there are pending pods and PVCs due to `exceed max volume count`, the pending PVCs will be bound to PVs and the pending pods will be scheduled to nodes when the driver pods are restarted. | It is advised not to have any pending pods or PVCs once the volume limit per node is exhausted on a CSI Driver. There is an open issue reported with Kubernetes at https://github.com/kubernetes/kubernetes/issues/95911 with the same behavior. |
### Note:
- Support for Kubernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode will not be available in Openshift environment as Openshift doesn't support enabling of alpha features for Production Grade clusters.
diff --git a/content/v2/csidriver/troubleshooting/powerflex.md b/content/v2/csidriver/troubleshooting/powerflex.md
index aca2c190b9..62d7ba0aca 100644
--- a/content/v2/csidriver/troubleshooting/powerflex.md
+++ b/content/v2/csidriver/troubleshooting/powerflex.md
@@ -18,15 +18,17 @@ description: Troubleshooting PowerFlex Driver
| The `kubectl logs -n vxflexos vxflexos-controller-* driver` logs show `x509: certificate signed by unknown authority` |A self-signed certificate is used for the PowerFlex array. See [certificate validation for PowerFlex Gateway](../../installation/helm/powerflex/#certificate-validation-for-powerflex-gateway-rest-api-calls)|
| When you run the command `kubectl apply -f snapclass-v1.yaml`, you get the error `error: unable to recognize "snapclass-v1.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"` | Check to make sure that the v1 snapshotter CRDs are installed, and not the v1beta1 CRDs, which are no longer supported. |
| The controller pod is stuck and producing errors such as: `Failed to watch *v1.VolumeSnapshotContent: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)` | Make sure that v1 snapshotter CRDs and v1 snapclass are installed, and not v1beta1, which is no longer supported. |
-| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.21.0 <= 1.27.0 which is incompatible with Kubernetes V1.21.11-mirantis-1` | If you are using an extended Kubernetes version, please see the helm Chart at `helm/csi-vxflexos/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. *Please note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. |
+| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.21.0 <= 1.28.0 which is incompatible with Kubernetes V1.21.11-mirantis-1` | If you are using an extended Kubernetes version, see the helm Chart at `helm/csi-vxflexos/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. Note: this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. |
| Volume metrics are missing | Enable [Volume Health Monitoring](../../features/powerflex#volume-health-monitoring) |
| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround: 1. Force delete the pod running on the node that went down 2. Delete the volumeattachment to the node that went down. Now the volume can be attached to the new node. |
| CSI-PowerFlex volumes cannot mount; are being recognized as multipath devices | CSI-PowerFlex does not support multipath; to fix: 1. Remove any multipath mapping involving a powerflex volume with `multipath -f ` 2. Blacklist CSI-PowerFlex volumes in multipath config file |
| When attempting a driver upgrade, you see: ```spec.fsGroupPolicy: Invalid value: "xxx": field is immutable``` | You cannot upgrade between drivers with different fsGroupPolicies. See [upgrade documentation](../../upgradation/drivers/powerflex) for more details |
| When accessing a ROX mode PVC in OpenShift where the worker nodes run as a non-root user, you see ```Permission denied``` while accessing the PVC mount location from the pod. | Set the ```securityContext``` for the ROX mode PVC pod as below, as it defines privileges for the pods or containers (see the sketch after this table).
securityContext: runAsUser: 0 runAsGroup: 0 |
| After installing version v2.6.0 of the driver using the default `powerflexSdc` image, sdc:3.6.0.6, the vxflexos-node pods are in an `Init:CrashLoopBackOff` state. This issue can happen on hosts that require the SDC to be installed manually. Automatic SDC is only supported on Red Hat CoreOS (RHCOS), RHEL 7.9, RHEL 8.4, RHEL 8.6. | The SDC is already installed. Change the `images.powerflexSdc` value to an empty value in the [values](https://github.com/dell/csi-powerflex/blob/72b27acee7553006cc09df97f85405f58478d2e4/helm/csi-vxflexos/values.yaml#L13) and re-install. |
+| After installing version v2.8.0 of the driver using the default `powerflexSdc` image, sdc:3.6.1, the vxflexos-node pods are in an `Init:CrashLoopBackOff` state. This issue can happen on hosts that require the SDC to be installed manually. Automatic SDC is only supported on Red Hat CoreOS (RHCOS), RHEL 7.9, RHEL 8.4, RHEL 8.6. | The SDC is already installed. Change the `images.powerflexSdc` value to an empty value in the [values](https://github.com/dell/csi-powerflex/blob/72b27acee7553006cc09df97f85405f58478d2e4/helm/csi-vxflexos/values.yaml#L13) and re-install. |
| In version v2.6.0, the driver is crashing because the External Health Monitor sidecar crashes when a persistent volume is not found. | This is a known issue reported at [kubernetes-csi/external-health-monitor#100](https://github.com/kubernetes-csi/external-health-monitor/issues/100). |
| In version v2.6.0, when a cluster node goes down, the block volumes attached to the node cannot be attached to another node. | This is a known issue reported at [kubernetes-csi/external-attacher#215](https://github.com/kubernetes-csi/external-attacher/issues/215). Workaround: 1. Force delete the pod running on the node that went down. 2. Delete the pod's persistent volume attachment on the node that went down. Now the volume can be attached to the new node. |
+| A CSI ephemeral pod may not get created in OpenShift 4.13 and fail with the error `"error when creating pod: the pod uses an inline volume provided by CSIDriver csi-vxflexos.dellemc.com, and the namespace has a pod security enforcement level that is lower than privileged."` | This issue occurs because OpenShift 4.13 introduced the CSI Volume Admission plugin to restrict the use of a CSI driver capable of provisioning CSI ephemeral volumes during pod admission. Therefore, an additional label `security.openshift.io/csi-ephemeral-volume-profile` in [csidriver.yaml](https://github.com/dell/helm-charts/blob/csi-vxflexos-2.8.0/charts/csi-vxflexos/templates/csidriver.yaml) file with the required security profile value should be provided. Follow [OpenShift 4.13 documentation for CSI Ephemeral Volumes](https://docs.openshift.com/container-platform/4.13/storage/container_storage_interface/ephemeral-storage-csi-inline.html) for more information. |
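+
+For the ROX-mode `Permission denied` row in the table above, a minimal sketch of a pod that mounts an existing ROX PVC with the suggested `securityContext`; the pod name `rox-demo` and claim name `rox-pvc` are hypothetical:
+
+```bash
+# Create a pod running as root so the ROX mount location is readable (names are hypothetical).
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: rox-demo
+spec:
+  securityContext:
+    runAsUser: 0
+    runAsGroup: 0
+  containers:
+    - name: app
+      image: busybox
+      command: ["sleep", "3600"]
+      volumeMounts:
+        - name: data
+          mountPath: /data
+  volumes:
+    - name: data
+      persistentVolumeClaim:
+        claimName: rox-pvc
+        readOnly: true
+EOF
+```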
>*Note*: `vxflexos-controller-*` is the controller pod that acquires leader lease
diff --git a/content/v2/csidriver/troubleshooting/powerscale.md b/content/v2/csidriver/troubleshooting/powerscale.md
index ca638b8755..d2f1e75667 100644
--- a/content/v2/csidriver/troubleshooting/powerscale.md
+++ b/content/v2/csidriver/troubleshooting/powerscale.md
@@ -17,5 +17,4 @@ Here are some installation failures that might be encountered and how to mitigat
| Driver node pod is in "CrashLoopBackOff" as "Node ID" generated is not with proper FQDN. | This might be due to "dnsPolicy" implemented on the driver node pod which may differ with different networks.
This parameter is configurable in both helm and Operator installer and the user can try with different "dnsPolicy" according to the environment.|
| The `kubectl logs isilon-controller-0 -n isilon -c driver` logs shows the driver **Authentication failed. Trying to re-authenticate** when using Session-based authentication | The issue has been resolved from OneFS 9.3 onwards, for OneFS versions prior to 9.3 for session-based authentication either smart connect can be created against a single node of Isilon or CSI Driver can be installed/pointed to a particular node of the Isilon else basic authentication can be used by setting isiAuthType in `values.yaml` to 0 |
| When an attempt is made to create more than one ReadOnly PVC from the same volume snapshot, the second and subsequent requests result in PVCs in state `Pending`, with a warning `another RO volume from this snapshot is already present`. This is because the driver allows only one RO volume from a specific snapshot at any point in time. This is to allow faster creation(within a few seconds) of a RO PVC from a volume snapshot irrespective of the size of the volume snapshot. | Wait for the deletion of the first RO PVC created from the same volume snapshot. |
-| While attaching a ReadOnly PVC from a volume snapshot to a pod, the mount operation will fail with error `mounting ... failed, reason given by server: No such file or directory`, if RO volume's access zone(non System access zone) on Isilon is configured with a dedicated service IP(which is same as `AzServiceIP` storage class parameter). This operation results in accessing the snapshot base directory(`/ifs`) and results in overstepping the RO volume's access zone's base directory, which the OneFS doesn't allow. | Provide a service ip that belongs to RO volume's access zone which set the highest level `/ifs` as its zone base directory. |
-|Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.22.0 < 1.25.0 which is incompatible with Kubernetes V1.22.11-mirantis-1 | If you are using an extended Kubernetes version, please see the [helm Chart](https://github.com/dell/helm-charts/blob/main/charts/csi-isilon/Chart.yaml) and use the alternate kubeVersion check that is provided in the comments. Please note that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported.|
+|Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.22.0 < 1.25.0 which is incompatible with Kubernetes V1.22.11-mirantis-1 | If you are using an extended Kubernetes version, please see the [helm Chart](https://github.com/dell/csi-powerscale/blob/main/helm/csi-isilon/Chart.yaml) and use the alternate kubeVersion check that is provided in the comments. Please note that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported.|
diff --git a/content/v2/csidriver/troubleshooting/unity.md b/content/v2/csidriver/troubleshooting/unity.md
index b1b97b184c..6a380b5754 100644
--- a/content/v2/csidriver/troubleshooting/unity.md
+++ b/content/v2/csidriver/troubleshooting/unity.md
@@ -12,5 +12,5 @@ description: Troubleshooting Unity XT Driver
| Dynamic array detection will not work in Topology based environment | Whenever a new array is added or removed, then the driver controller and node pod should be restarted with command **kubectl get pods -n unity --no-headers=true \| awk '/unity-/{print $1}'\| xargs kubectl delete -n unity pod** when **topology-based storage classes are used**. For dynamic array addition without topology, the driver will detect the newly added or removed arrays automatically|
| If source PVC is deleted when cloned PVC exists, then source PVC will be deleted in the cluster but on array, it will still be present and marked for deletion. | All the cloned PVC should be deleted in order to delete the source PVC from the array. |
| PVC creation fails on a fresh cluster with **iSCSI** and **NFS** protocols alone enabled with error **failed to provision volume with StorageClass "unity-iscsi": error generating accessibility requirements: no available topology found**. | This is because iSCSI initiator login takes longer than the node pod startup time. This can be overcome by bouncing the node pods in the cluster with **kubectl get pods -n unity --no-headers=true \| awk '/unity-/{print $1}'\| xargs kubectl delete -n unity pod** |
-| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.24.0 < 1.28.0 which is incompatible with Kubernetes 1.24.6-mirantis-1` | If you are using an extended Kubernetes version, please see the helm Chart at `helm/csi-unity/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. *Please note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. |
+| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.24.0 < 1.29.0 which is incompatible with Kubernetes 1.24.6-mirantis-1` | If you are using an extended Kubernetes version, see the helm Chart at `helm/csi-unity/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. *Note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. |
| When a node goes down, the block volumes attached to the node cannot be attached to another node | 1. Force delete the pod running on the node that went down 2. Delete the VolumeAttachment to the node that went down. Now the volume can be attached to the new node. |
diff --git a/content/v2/csidriver/upgradation/drivers/isilon.md b/content/v2/csidriver/upgradation/drivers/isilon.md
index 62c6673523..c0b37f9221 100644
--- a/content/v2/csidriver/upgradation/drivers/isilon.md
+++ b/content/v2/csidriver/upgradation/drivers/isilon.md
@@ -8,30 +8,31 @@ Description: Upgrade PowerScale CSI driver
---
You can upgrade the CSI Driver for Dell PowerScale using Helm or Dell CSI Operator.
-## Upgrade Driver from version 2.6.1 to 2.7.0 using Helm
-
+## Upgrade Driver from version 2.7.0 to 2.8.0 using Helm
**Note:** While upgrading the driver via helm, controllerCount variable in myvalues.yaml can be at most one less than the number of worker nodes.
-**Steps**
+### Steps
+
+1. Clone the repository using `git clone -b v2.8.0 https://github.com/dell/csi-powerscale.git`
-1. Clone the repository using `git clone -b v2.7.0 https://github.com/dell/csi-powerscale.git`, copy the helm/csi-isilon/values.yaml into a new location with a custom name say _my-isilon-settings.yaml_, to customize settings for installation. Edit _my-isilon-settings.yaml_ as per the requirements.
2. Change to directory dell-csi-helm-installer to install the Dell PowerScale `cd dell-csi-helm-installer`
-3. Upgrade the CSI Driver for Dell PowerScale using following command:
+3. Download the default values.yaml using the following command:
```bash
-
- ./csi-install.sh --namespace isilon --values ./my-isilon-settings.yaml --upgrade
+ wget -O my-isilon-settings.yaml https://raw.githubusercontent.com/dell/helm-charts/csi-isilon-2.8.0/charts/csi-isilon/values.yaml
```
+ Edit the _my-isilon-settings.yaml_ as per the requirements.
+4. Upgrade the CSI Driver for Dell PowerScale using the following command:
-## Upgrade using Dell CSI Operator:
-**Notes:**
-1. While upgrading the driver via operator, replicas count in sample CR yaml can be at most one less than the number of worker nodes.
-2. Upgrading the Operator does not upgrade the CSI Driver.
+ ```bash
+ ./csi-install.sh --namespace isilon --values ./my-isilon-settings.yaml --upgrade
+ ```
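+
+After the upgrade completes, a quick sanity check is to confirm the controller and node pods are running in the namespace used above:
+
+```bash
+kubectl get pods -n isilon
+```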
-To upgrade the driver:
+## Upgrade using Dell CSM Operator
-1. Please upgrade the Dell CSI Operator by following [here](./../operator).
-2. Once the operator is upgraded, to upgrade the driver, refer [here](./../../../installation/operator/#update-csi-drivers).
+**Note:** Upgrading the Operator does not upgrade the CSI Driver.
+1. Upgrade the Dell CSM Operator by following the instructions [here](../../../../deployment/csmoperator/#upgrade).
+2. Once the operator is upgraded, refer [here](../../../../deployment/csmoperator/#upgrade-driver-using-dell-csm-operator) to upgrade the driver.
diff --git a/content/v2/csidriver/upgradation/drivers/operator.md b/content/v2/csidriver/upgradation/drivers/operator.md
index 42d5690810..5924444a80 100644
--- a/content/v2/csidriver/upgradation/drivers/operator.md
+++ b/content/v2/csidriver/upgradation/drivers/operator.md
@@ -6,6 +6,11 @@ tags:
weight: 1
Description: Upgrade Dell CSI Operator
---
+
+{{% pageinfo color="primary" %}}
+The Dell CSI Operator is no longer actively maintained or supported. It will be deprecated in CSM 1.9. It is highly recommended that you use [CSM Operator](../../../../deployment/csmoperator) going forward.
+{{% /pageinfo %}}
+
To upgrade Dell CSI Operator, perform the following steps.
Dell CSI Operator can be upgraded based on the supported platforms in one of the 2 ways:
1. Using script (for non-OLM based installation)
diff --git a/content/v2/csidriver/upgradation/drivers/powerflex.md b/content/v2/csidriver/upgradation/drivers/powerflex.md
index 2f460d7596..4890384a19 100644
--- a/content/v2/csidriver/upgradation/drivers/powerflex.md
+++ b/content/v2/csidriver/upgradation/drivers/powerflex.md
@@ -7,18 +7,15 @@ tags:
weight: 1
Description: Upgrade PowerFlex CSI driver
---
-{{% pageinfo color="primary" %}}
-CSM 1.7.1 is applicable to helm based installations of PowerFlex driver.
-{{% /pageinfo %}}
You can upgrade the CSI Driver for Dell PowerFlex using Helm or Dell CSI Operator.
-## Update Driver from v2.6 to v2.7.1 using Helm
+## Update Driver from v2.7.1 to v2.8 using Helm
**Steps**
-1. Run `git clone -b v2.7.1 https://github.com/dell/csi-powerflex.git` to clone the git repository and get the v2.7.1 driver.
+1. Run `git clone -b v2.8.0 https://github.com/dell/csi-powerflex.git` to clone the git repository and get the v2.8.0 driver.
2. You need to create secret.yaml with the configuration of your system.
Check this section in installation documentation: [Install the Driver](../../../installation/helm/powerflex#install-the-driver)
-3. Update values file as needed.
+3. Update the myvalues.yaml file as needed.
4. Run the `csi-install` script with the option _\-\-upgrade_ by running:
```bash
@@ -32,7 +29,7 @@ You can upgrade the CSI Driver for Dell PowerFlex using Helm or Dell CSI Operato
./csi-install.sh --namespace vxflexos --values ./myvalues.yaml --upgrade
```
-- The logging configuration from v1.5 will not work in v2.1, since the log configuration parameters are now set in the values.yaml file located at helm/csi-vxflexos/values.yaml. Please set the logging configuration parameters in the values.yaml file.
+- The logging configuration from v1.5 will not work in v2.1, since the log configuration parameters are now set in the myvalues.yaml file located at dell-csi-helm-installer/myvalues.yaml; set the logging configuration parameters there.
- You cannot upgrade between drivers with different fsGroupPolicies. To check the current driver's fsGroupPolicy, use this command:
```bash
@@ -50,13 +47,7 @@ You can upgrade the CSI Driver for Dell PowerFlex using Helm or Dell CSI Operato
...
```
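+
+If that command is not at hand, a hedged alternative is to read the policy straight from the CSIDriver object; `csi-vxflexos.dellemc.com` is the usual PowerFlex driver name, so adjust it if yours differs:
+
+```bash
+# Print the fsGroupPolicy of the installed PowerFlex CSIDriver object (empty output means the field is unset).
+kubectl get csidriver csi-vxflexos.dellemc.com -o jsonpath='{.spec.fsGroupPolicy}'
+```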
-## Upgrade using Dell CSI Operator:
-**Note:** Upgrading the Operator does not upgrade the CSI Driver.
-
-1. Please upgrade the Dell CSI Operator by following [here](./../operator).
-2. Once the operator is upgraded, to upgrade the driver, refer [here](./../../../installation/operator/#update-csi-drivers).
-
## Upgrade using Dell CSM Operator:
**Note:** Upgrading the Operator does not upgrade the CSI Driver.
-1. Please upgrade the Dell CSM Operator by following [here](../../../../deployment/csmoperator/#to-upgrade-dell-csm-operator-perform-the-following-steps)
+1. Upgrade the Dell CSM Operator by following the instructions [here](../../../../deployment/csmoperator/#to-upgrade-dell-csm-operator-perform-the-following-steps).
2. Once the operator is upgraded, refer [here](../../../../deployment/csmoperator/#upgrade-driver-using-dell-csm-operator) to upgrade the driver.
diff --git a/content/v2/csidriver/upgradation/drivers/powermax.md b/content/v2/csidriver/upgradation/drivers/powermax.md
index af937753ab..9672a34819 100644
--- a/content/v2/csidriver/upgradation/drivers/powermax.md
+++ b/content/v2/csidriver/upgradation/drivers/powermax.md
@@ -16,10 +16,10 @@ You can upgrade CSI Driver for Dell PowerMax using Helm or Dell CSI Operator.
1. Upgrade the Unisphere to have 10.0 endpoint support. Please find the instructions [here](https://dl.dell.com/content/manual34878027-dell-unisphere-for-powermax-10-0-0-installation-guide.pdf?language=en-us&ps=true).
2. Update the `my-powermax-settings.yaml` to have endpoint with 10.0 support.
-## Update Driver from v2.6 to v2.7 using Helm
+## Update Driver from v2.7 to v2.8 using Helm
**Steps**
-1. Run `git clone -b v2.7.0 https://github.com/dell/csi-powermax.git` to clone the git repository and get the driver.
+1. Run `git clone -b v2.8.0 https://github.com/dell/csi-powermax.git` to clone the git repository and get the driver.
2. Update the values file as needed.
3. Run the `csi-install` script with the option _\-\-upgrade_ by running:
```bash
@@ -52,8 +52,8 @@ You can upgrade CSI Driver for Dell PowerMax using Helm or Dell CSI Operator.
```
-## Upgrade using Dell CSI Operator:
+## Upgrade using Dell CSM Operator:
**Note:** Upgrading the Operator does not upgrade the CSI Driver.
-1. Please upgrade the Dell CSI Operator by following [here](./../operator).
-2. Once the operator is upgraded, to upgrade the driver, refer [here](./../../../installation/operator/#update-csi-drivers).
\ No newline at end of file
+1. Upgrade the Dell CSM Operator by following the instructions [here](../../../../deployment/csmoperator/#to-upgrade-dell-csm-operator-perform-the-following-steps).
+2. Once the operator is upgraded, refer [here](../../../../deployment/csmoperator/#upgrade-driver-using-dell-csm-operator) to upgrade the driver.
diff --git a/content/v2/csidriver/upgradation/drivers/powerstore.md b/content/v2/csidriver/upgradation/drivers/powerstore.md
index 0b1b66e553..fddeeefe09 100644
--- a/content/v2/csidriver/upgradation/drivers/powerstore.md
+++ b/content/v2/csidriver/upgradation/drivers/powerstore.md
@@ -9,12 +9,12 @@ Description: Upgrade PowerStore CSI driver
You can upgrade the CSI Driver for Dell PowerStore using Helm.
-## Update Driver from v2.6 to v2.7 using Helm
+## Update Driver from v2.7 to v2.8 using Helm
Note: While upgrading the driver via helm, controllerCount variable in myvalues.yaml can be at most one less than the number of worker nodes.
**Steps**
-1. Run `git clone -b v2.7.0 https://github.com/dell/csi-powerstore.git` to clone the git repository and get the driver.
+1. Run `git clone -b v2.8.0 https://github.com/dell/csi-powerstore.git` to clone the git repository and get the driver.
2. Edit `samples/secret/secret.yaml` file and configure connection information for your PowerStore arrays changing the following parameters:
- *endpoint*: defines the full URL path to the PowerStore API.
- *globalID*: specifies what storage cluster the driver should use
@@ -32,28 +32,19 @@ Note: While upgrading the driver via helm, controllerCount variable in myvalues.
kubectl create -f
```
- >Storage classes created by v1.4/v2.0/v2.1/v2.2/v2.3/v2.4/v2.5/v2.6 driver will not be deleted, v2.7 driver will use default array to manage volumes provisioned with old storage classes. Thus, if you still have volumes provisioned by v1.4/v2.0/v2.1/v2.2/v2.3/v2.4/v2.5/v2.6 in your cluster then be sure to include the same array you have used for the v1.4/v2.0/v2.1/v2.2/v2.3/v2.4/v2.5/v2.6 driver and make it default in the `secret.yaml` file.
+ >Storage classes created by the v1.4/v2.0/v2.1/v2.2/v2.3/v2.4/v2.5/v2.6/v2.7 driver will not be deleted; the v2.8 driver will use the default array to manage volumes provisioned with old storage classes. Thus, if you still have volumes provisioned by v1.4/v2.0/v2.1/v2.2/v2.3/v2.4/v2.5/v2.6/v2.7 in your cluster, be sure to include the same array you used for that driver and make it the default in the `secret.yaml` file.
4. Create the secret by running
```bash
kubectl create secret generic powerstore-config -n csi-powerstore --from-file=config=secret.yaml
```
-5. Copy the default values.yaml file `cd dell-csi-helm-installer && cp ../helm/csi-powerstore/values.yaml ./my-powerstore-settings.yaml` and update parameters as per the requirement.
+5. Download the default values.yaml file `cd dell-csi-helm-installer && wget -O my-powerstore-settings.yaml https://github.com/dell/helm-charts/raw/csi-powerstore-2.8.0/charts/csi-powerstore/values.yaml` and update parameters as per the requirement.
6. Run the `csi-install` script with the option _\-\-upgrade_ by running:
```bash
./csi-install.sh --namespace csi-powerstore --values ./my-powerstore-settings.yaml --upgrade
```
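+
+A hedged post-upgrade check is to list the Helm release and confirm the driver pods are running; the release name reported by `helm list` depends on your original install:
+
+```bash
+helm list -n csi-powerstore
+kubectl get pods -n csi-powerstore
+```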
-## Upgrade using Dell CSI Operator:
-
-**Notes:**
-1. While upgrading the driver via operator, replicas count in sample CR yaml can be at most one less than the number of worker nodes.
-2. Upgrading the Operator does not upgrade the CSI Driver.
-
-
-1. Please upgrade the Dell CSI Operator by following [here](./../operator).
-2. Once the operator is upgraded, to upgrade the driver, refer [here](./../../../installation/operator/#update-csi-drivers).
## Upgrade using Dell CSM Operator:
**Note:** Upgrading the Operator does not upgrade the CSI Driver.
diff --git a/content/v2/csidriver/upgradation/drivers/unity.md b/content/v2/csidriver/upgradation/drivers/unity.md
index bdfdb3120b..efab1b693f 100644
--- a/content/v2/csidriver/upgradation/drivers/unity.md
+++ b/content/v2/csidriver/upgradation/drivers/unity.md
@@ -20,9 +20,9 @@ You can upgrade the CSI Driver for Dell Unity XT using Helm or Dell CSI Operator
Preparing myvalues.yaml is the same as explained in the install section.
-To upgrade the driver from csi-unity v2.6.0 to csi-unity v2.7.0
+To upgrade the driver from csi-unity v2.7.0 to csi-unity v2.8.0
-1. Get the latest csi-unity v2.7.0 code from Github using `git clone -b v2.7.0 https://github.com/dell/csi-unity.git`.
+1. Get the latest csi-unity v2.8.0 code from Github using `git clone -b v2.8.0 https://github.com/dell/csi-unity.git`.
2. Copy the helm/csi-unity/values.yaml to the new location csi-unity/dell-csi-helm-installer and rename it to myvalues.yaml. Customize settings for installation by editing myvalues.yaml as needed.
3. Navigate to the csi-unity/dell-csi-helm-installer folder and execute this command:
```bash
@@ -30,14 +30,10 @@ To upgrade the driver from csi-unity v2.6.0 to csi-unity v2.7.0
./csi-install.sh --namespace unity --values ./myvalues.yaml --upgrade
```
-### Using Operator
-
-**Notes:**
-1. While upgrading the driver via operator, replicas count in sample CR yaml can be at most one less than the number of worker nodes.
-2. Upgrading the Operator does not upgrade the CSI Driver.
-
-To upgrade the driver:
+### Upgrade using Dell CSM Operator:
+**Note:**
+Upgrading the Operator does not upgrade the CSI Driver.
-1. Please upgrade the Dell CSI Operator by following [here](./../operator).
-2. Once the operator is upgraded, to upgrade the driver, refer [here](./../../../installation/operator/#update-csi-drivers).
+1. Upgrade the Dell CSM Operator by following the instructions [here](../../../../deployment/csmoperator/#to-upgrade-dell-csm-operator-perform-the-following-steps).
+2. Once the operator is upgraded, refer [here](../../../../deployment/csmoperator/#upgrade-driver-using-dell-csm-operator) to upgrade the driver.
diff --git a/content/v2/deployment/_index.md b/content/v2/deployment/_index.md
index 87d3d98cc4..f9ae6b5cb6 100644
--- a/content/v2/deployment/_index.md
+++ b/content/v2/deployment/_index.md
@@ -5,9 +5,6 @@ no_list: true
description: Deployment of CSM for Replication
weight: 1
---
-{{% pageinfo color="primary" %}}
-CSM 1.7.1 is applicable to helm based installations of PowerFlex driver.
-{{% /pageinfo %}}
The Container Storage Modules along with the required CSI Drivers can each be deployed using CSM operator.
diff --git a/content/v2/deployment/csminstallationwizard/_index.md b/content/v2/deployment/csminstallationwizard/_index.md
index 8a6ebfcc0f..6d9b4d13d9 100644
--- a/content/v2/deployment/csminstallationwizard/_index.md
+++ b/content/v2/deployment/csminstallationwizard/_index.md
@@ -7,30 +7,37 @@ weight: 1
The [Dell Container Storage Modules Installation Wizard](./src/index.html) is a webpage that generates a manifest file for installing Dell CSI Drivers and its supported CSM Modules, based on input from the user. It generates a single manifest file to install both Dell CSI Drivers and its supported CSM Modules, thereby eliminating the need to download individual Helm charts for drivers and modules. The user can enable or disable the necessary modules through the UI, and a manifest file is generated accordingly without manually editing the helm charts.
->NOTE: The CSM Installation Wizard currently supports Helm based manifest file generation only.
+>NOTE: The CSM Installation Wizard supports Helm and Operator based manifest file generation.
## Supported Dell CSI Drivers
-| CSI Driver | Version |
-| ------------------ | --------- |
-| CSI PowerStore | 2.7.0 |
-| CSI PowerMax | 2.7.0 |
-| CSI PowerFlex | 2.7.1 |
-| CSI PowerScale | 2.7.0 |
-| CSI Unity XT | 2.7.0 |
+| CSI Driver | Version | Helm | Operator |
+| ------------------ | --------- | ------ | --------- |
+| CSI PowerStore | 2.8.0 |✔️ |✔️ |
+| CSI PowerStore | 2.7.0 |✔️ |✔️ |
+| CSI PowerMax | 2.8.0 |✔️ |✔️ |
+| CSI PowerMax | 2.7.0 |✔️ |✔️ |
+| CSI PowerFlex | 2.8.0 |✔️ |❌ |
+| CSI PowerFlex | 2.7.0 |✔️ |❌ |
+| CSI PowerScale | 2.8.0 |✔️ |✔️ |
+| CSI PowerScale | 2.7.0 |✔️ |✔️ |
+| CSI Unity XT | 2.8.0 |✔️ |❌ |
+| CSI Unity XT | 2.7.0 |✔️ |❌ |
+
+>NOTE: The Installation Wizard currently does not support operator-based manifest file generation for Unity XT and PowerFlex drivers.
## Supported Dell CSM Modules
| CSM Modules | Version |
| ---------------------| --------- |
-| CSM Observability | 1.5.0 |
-| CSM Replication | 1.5.0 |
-| CSM Resiliency | 1.6.0 |
+| CSM Observability | 1.6.0 |
+| CSM Replication | 1.6.0 |
+| CSM Resiliency | 1.7.0 |
## Installation
1. Open the [CSM Installation Wizard](./src/index.html).
-2. Select the `Installation Type` as `Helm`.
+2. Select the `Installation Type` as `Helm` or `Operator`.
3. Select the `Array`.
4. Enter the `Image Repository`. The default value is `dellemc`.
5. Select the `CSM Version`.
@@ -38,16 +45,16 @@ The [Dell Container Storage Modules Installation Wizard](./src/index.html) is a
7. If needed, modify the `Controller Pods Count`.
8. If needed, select `Install Controller Pods on Control Plane` and/or `Install Node Pods on Control Plane`.
9. Enter the `Namespace`. The default value is `csi-`.
-10. Click on `Generate YAML`.
+10. Click on `Generate YAML`.
11. A manifest file, `values.yaml`, will be generated and downloaded.
12. A section `Run the following commands to install` will be displayed.
13. Run the commands displayed to install Dell CSI Driver and Modules using the generated manifest file.
-## Install Helm Chart
+## Installation Using Helm Chart
**Steps**
->> NOTE: Ensure that the namespace and secrets are created before installing the Helm chart.
+>NOTE: Ensure that the namespace and secrets are created before installing the Helm chart.
1. Add the Dell Helm Charts repository.
@@ -58,26 +65,58 @@ The [Dell Container Storage Modules Installation Wizard](./src/index.html) is a
helm repo update
```
-2. Copy the downloaded values.yaml file.
+2. Copy the downloaded `values.yaml` file.
3. Look over all the fields in the generated `values.yaml` and fill in/adjust any as needed.
-4. For the Observability module, please refer [Observability](../../observability/deployment/#post-installation-dependencies) to install the post installation dependencies.
+>NOTE: The CSM Installation Wizard generates `values.yaml` with the minimal inputs required to install the CSM. To configure additional parameters in values.yaml, you can follow the steps outlined in [PowerStore](../../csidriver/installation/helm/powerstore/#install-the-driver), [PowerMax](../../csidriver/installation/helm/powermax/#install-the-driver), [PowerScale](../../csidriver/installation/helm/isilon/#install-the-driver), [PowerFlex](../../csidriver/installation/helm/powerflex/#install-the-driver), [Unity XT](../../csidriver/installation/helm/unity/#install-csi-driver), [Observability](../../observability/), [Replication](../../replication/), [Resiliency](../../resiliency/).
-5. If Authorization is enabled , please refer to [Authorization](../../authorization/deployment/helm/) for the installation and configuration of the Proxy Server.
+4. When the PowerFlex driver is installed using values generated by the Installation Wizard, update the driver secret by patching the MDM keys as follows (a concrete sketch with hypothetical values follows these steps):
->> NOTE: Only the Authorization sidecar is enabled by the CSM Installation Wizard. The Proxy Server has to be installed and configured separately.
+ ```terminal
+   echo -n '' | base64
+   kubectl patch secret vxflexos-config -n vxflexos -p "{\"data\": { \"MDM\": \"\"}}"
+ ```
+
+5. If Observability is checked in the wizard, refer to [Observability](../../observability/deployment/#post-installation-dependencies) to export metrics to Prometheus and load the Grafana dashboards.
-6. If the Volume Snapshot feature is enabled, please refer to [Volume Snapshot for PowerStore](../../csidriver/installation/helm/powerstore/#optional-volume-snapshot-requirements) and [Volume Snapshot for PowerMax](../../csidriver/installation/helm/powermax/#optional-volume-snapshot-requirements) to install the Volume Snapshot CRDs and the default snapshot controller.
+6. If Authorization is checked in the wizard, only the sidecar is enabled. Refer to [Authorization](../../authorization/deployment/helm/) to install and configure the CSM Authorization Proxy Server.
->> NOTE: The CSM Installation Wizard generates values.yaml with the minimal inputs required to install the CSM. To configure additional parameters in values.yaml, please follow the steps outlined in [PowerStore](../../csidriver/installation/helm/powerstore/#install-the-driver), [PowerMax](../../csidriver/installation/helm/powermax/#install-the-driver), [PowerScale](../../csidriver/installation/helm/isilon/#install-the-driver), [PowerFlex](../../csidriver/installation/helm/powerflex/#install-the-driver), [Unity XT](../../csidriver/installation/helm/unity/#install-csi-driver), [Observability](../../observability/), [Replication](../../replication/), [Resiliency](../../resiliency/).
+7. If Replication is checked in the wizard, refer to [Replication](../../replication/deployment/) on configuring communication between Kubernetes clusters.
-7. Install the Helm chart.
+8. If your Kubernetes distribution doesn't have the Volume Snapshot feature enabled, refer to [this section](../../snapshots) to install the Volume Snapshot CRDs and the default snapshot controller.
+
+9. Install the Helm chart.
On your terminal, run this command:
```terminal
-
helm install dell/container-storage-modules -n --version -f
- Example: helm install powerstore dell/container-storage-modules -n csi-powerstore --version 1.0.1 -f values.yaml
+ Example: helm install powerstore dell/container-storage-modules -n csi-powerstore --version 1.1.0 -f values.yaml
+ ```
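+
+For step 4 above, a concrete sketch with hypothetical MDM IPs; `10.0.0.1,10.0.0.2` stands in for the MDM IPs of your array:
+
+```terminal
+# Base64-encode the comma-separated MDM IPs (hypothetical values).
+MDM_B64=$(echo -n '10.0.0.1,10.0.0.2' | base64)
+# Patch the encoded value into the PowerFlex driver secret.
+kubectl patch secret vxflexos-config -n vxflexos -p "{\"data\": { \"MDM\": \"${MDM_B64}\"}}"
+```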
+
+## Installation Using Operator
+
+**Steps**
+
+>NOTE: Ensure that the csm-operator is installed and that the namespace, secrets, and `config.yaml` are created as prerequisites.
+
+1. Copy the downloaded `values.yaml` file.
+
+2. Look over all the fields in the generated `values.yaml` and fill in/adjust any as needed.
+
+>NOTE: The CSM Installation Wizard generates `values.yaml` with the minimal inputs required to install the CSM. To configure additional parameters in values.yaml, you can follow the steps outlined in [PowerStore](../csmoperator/drivers/powerstore), [PowerMax](../csmoperator/drivers/powermax), [PowerScale](../csmoperator/drivers/powerscale), [Resiliency](../csmoperator/modules/resiliency).
+
+3. If Observability is checked in the wizard, refer to [Observability](../csmoperator/modules/observability) to export metrics to Prometheus and load the Grafana dashboards.
+
+4. If Authorization is checked in the wizard, only the sidecar is enabled. Refer to [Authorization](../csmoperator/modules/authorization) to install and configure the CSM Authorization Proxy Server.
+
+5. If Replication is checked in the wizard, refer to [Replication](../csmoperator/modules/replication) for the necessary prerequisites required for this module.
+
+6. Apply the generated manifest file (a verification sketch follows these steps).
+
+ On your terminal, run this command:
+
+ ```terminal
+ kubectl create -f values.yaml
```
diff --git a/content/v2/deployment/csminstallationwizard/release/_index.md b/content/v2/deployment/csminstallationwizard/release/_index.md
new file mode 100644
index 0000000000..cf735551df
--- /dev/null
+++ b/content/v2/deployment/csminstallationwizard/release/_index.md
@@ -0,0 +1,29 @@
+---
+title: Release Notes
+linkTitle: "Release notes"
+weight: 5
+description: Release notes for CSM Installation Wizard
+---
+
+## Release Notes - CSM Installation Wizard 1.1.0
+
+### New Features/Changes
+
+- Added operator mode of installation for CSI-PowerStore, CSI-PowerMax, CSI-PowerScale, and the supported modules.
+- Helm and Operator based manifest file generation is supported for the CSM 1.7 and CSM 1.8 releases.
+- Volume Limit and Storage Capacity Tracking features have been added.
+- Rename SDC and approve SDC features added for CSM 1.7 and CSM 1.8 for the CSI-PowerFlex driver.
+- NFS volume feature added in CSM 1.8 for the CSI-PowerFlex driver.
+
+### Fixed Issues
+
+- [#959 - [BUG]: Resiliency fields in the generated values.yaml should be uncommented when resiliency is enabled](https://github.com/dell/csm/issues/959)
+
+### Known Issues
+
+There are no known issues in this release.
diff --git a/content/v2/deployment/csminstallationwizard/src/csm-versions/default-values.properties b/content/v2/deployment/csminstallationwizard/src/csm-versions/default-values.properties
index 1bf44e11eb..150565a2a4 100644
--- a/content/v2/deployment/csminstallationwizard/src/csm-versions/default-values.properties
+++ b/content/v2/deployment/csminstallationwizard/src/csm-versions/default-values.properties
@@ -1,4 +1,4 @@
-csmVersion=1.7.0
+csmVersion=1.8.0
imageRepository=dellemc
controllerCount=1
nodeSelectorLabel=node-role.kubernetes.io/control-plane:
@@ -6,3 +6,7 @@ taint=node-role.kubernetes.io/control-plane
volNamePrefix=csivol
snapNamePrefix=csi-snap
certSecretCount=1
+pollRate=60
+driverPodLabel=dell-storage
+arrayThreshold=3
+maxVolumesPerNode=0
\ No newline at end of file
diff --git a/content/v2/deployment/csminstallationwizard/src/index.html b/content/v2/deployment/csminstallationwizard/src/index.html
index ddac29478d..6b87ad0fda 100644
--- a/content/v2/deployment/csminstallationwizard/src/index.html
+++ b/content/v2/deployment/csminstallationwizard/src/index.html
@@ -43,10 +43,10 @@
}}
## Supported CSI Drivers
@@ -93,6 +93,7 @@ CSM for Observability provides Kubernetes administrators with the topology data
| Storage Pool | The storage pool name the volume/storage class is associated with |
| Storage System Volume Name | The name of the volume on the storage system that is associated with the persistent volume |
{{}}
+
## TLS Encryption
CSM for Observability deployment relies on [cert-manager](https://github.com/jetstack/cert-manager) to manage SSL certificates that are used to encrypt communication between various components. When [deploying CSM for Observability](./deployment), cert-manager is installed and configured automatically. The cert-manager components listed below will be installed alongside CSM for Observability.
diff --git a/content/v2/observability/deployment/offline.md b/content/v2/observability/deployment/offline.md
index e500f05abe..bf5076f945 100644
--- a/content/v2/observability/deployment/offline.md
+++ b/content/v2/observability/deployment/offline.md
@@ -6,9 +6,9 @@ description: >
Dell Container Storage Modules (CSM) for Observability Offline Installer
---
-The following instructions can be followed when a Helm chart will be installed in an environment that does not have an internet connection and will be unable to download the Helm chart and related Docker images.
+The following instructions can be followed when a Helm chart will be installed in an environment that does not have an Internet connection and will be unable to download the Helm chart and related Docker images.
-## Prerequisites
+## Prerequisites
- Helm 3.3
- The deployment of one or more [supported](../#supported-csi-drivers) Dell CSI drivers
@@ -17,10 +17,10 @@ The following instructions can be followed when a Helm chart will be installed i
Multiple Linux-based systems may be required to create and process an offline bundle for use.
-* One Linux-based system, with internet access, will be used to create the bundle. This involves the user invoking a script that utilizes `docker` to pull and save container images to file.
+* One Linux-based system, with Internet access, will be used to create the bundle. This involves the user invoking a script that utilizes `docker` to pull and save container images to file.
* One Linux-based system, with access to an image registry, to invoke a script that uses `docker` to restore container images from file and push them to a registry
-If one Linux system has both internet access and access to an internal registry, that system can be used for both steps.
+If one Linux system has both Internet access and access to an internal registry, that system can be used for both steps.
Preparing an offline bundle requires the following utilities:
@@ -58,7 +58,7 @@ To perform an offline installation of a Helm chart, the following steps should b
chmod +x offline-installer.sh
```
-3. Build the bundle by providing the Helm chart name as the argument. Below is a sample output that may be different on your machine.
+3. Build the bundle by providing the Helm chart name as the argument. Below is sample output; it may differ on your machine.
```bash
./offline-installer.sh -c dell/karavi-observability
@@ -75,11 +75,11 @@ To perform an offline installation of a Helm chart, the following steps should b
*
* Downloading and saving Docker images
- dellemc/csm-topology:v1.5.0
- dellemc/csm-metrics-powerflex:v1.5.0
- dellemc/csm-metrics-powerstore:v1.5.0
- dellemc/csm-metrics-powerscale:v1.2.0
- dellemc/csm-metrics-powermax:v1.0.0
+ dellemc/csm-topology:v1.6.0
+ dellemc/csm-metrics-powerflex:v1.6.0
+ dellemc/csm-metrics-powerstore:v1.6.0
+ dellemc/csm-metrics-powerscale:v1.3.0
+ dellemc/csm-metrics-powermax:v1.1.0
otel/opentelemetry-collector:0.42.0
nginxinc/nginx-unprivileged:1.20
@@ -106,15 +106,15 @@ To perform an offline installation of a Helm chart, the following steps should b
```bash
./offline-installer.sh -p :5000
```
- ```
+ ```
*
* Loading, tagging, and pushing Docker images to registry :5000/
- dellemc/csm-topology:v1.5.0 -> :5000/csm-topology:v1.5.0
- dellemc/csm-metrics-powerflex:v1.5.0 -> :5000/csm-metrics-powerflex:v1.5.0
- dellemc/csm-metrics-powerstore:v1.5.0 -> :5000/csm-metrics-powerstore:v1.5.0
- dellemc/csm-metrics-powerscale:v1.2.0 -> :5000/csm-metrics-powerscale:v1.2.0
- dellemc/csm-metrics-powermax:v1.0.0 -> :5000/csm-metrics-powerscale:v1.0.0
+ dellemc/csm-topology:v1.6.0 -> :5000/csm-topology:v1.6.0
+ dellemc/csm-metrics-powerflex:v1.6.0 -> :5000/csm-metrics-powerflex:v1.6.0
+ dellemc/csm-metrics-powerstore:v1.6.0 -> :5000/csm-metrics-powerstore:v1.6.0
+ dellemc/csm-metrics-powerscale:v1.3.0 -> :5000/csm-metrics-powerscale:v1.3.0
+ dellemc/csm-metrics-powermax:v1.1.0 -> :5000/csm-metrics-powermax:v1.1.0
otel/opentelemetry-collector:0.42.0 -> :5000/opentelemetry-collector:0.42.0
nginxinc/nginx-unprivileged:1.20 -> :5000/nginx-unprivileged:1.20
```
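+   As an optional sanity check, the Docker Registry v2 API can list the pushed repositories (the host `registry.example.com` is a hypothetical stand-in for your internal registry):
+   ```bash
+   curl -s http://registry.example.com:5000/v2/_catalog
+   ```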
@@ -131,7 +131,7 @@ To perform an offline installation of a Helm chart, the following steps should b
kubectl apply --validate=false -f cert-manager.crds.yaml
```
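+   You can verify that the cert-manager CRDs were registered:
+   ```bash
+   kubectl get crd | grep cert-manager.io
+   ```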
-3. Copy the CSI Driver Secret(s)
+3. Copy the CSI Driver Secret(s)
Copy the CSI Driver Secret from the namespace where the CSI Driver is installed to the namespace where CSM for Observability is to be installed.
@@ -140,7 +140,7 @@ To perform an offline installation of a Helm chart, the following steps should b
kubectl get secret vxflexos-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
-
+
If the CSI driver secret name is not the default `vxflexos-config`, use the following command to copy the secret:
```bash
@@ -154,7 +154,7 @@ To perform an offline installation of a Helm chart, the following steps should b
kubectl get configmap vxflexos-config-params -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
-
+
If the CSI driver configmap name is not the default `vxflexos-config-params`, use the following command to copy the configmap:
```bash
@@ -182,9 +182,9 @@ To perform an offline installation of a Helm chart, the following steps should b
__CSI Driver for PowerScale:__
```bash
- kubectl get secret isilon-creds -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+ kubectl get secret isilon-creds -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
-
+
If the CSI driver secret name is not the default `isilon-creds`, use the following command to copy the secret:
```bash
@@ -197,7 +197,7 @@ To perform an offline installation of a Helm chart, the following steps should b
kubectl get configmap isilon-config-params -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
-
+
If the CSI driver configmap name is not the default `isilon-config-params`, use the following command to copy the configmap:
```bash
@@ -208,11 +208,11 @@ To perform an offline installation of a Helm chart, the following steps should b
```bash
kubectl get secret karavi-authorization-config proxy-server-root-certificate proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: karavi-authorization-config/name: isilon-karavi-authorization-config/' | sed 's/name: proxy-server-root-certificate/name: isilon-proxy-server-root-certificate/' | sed 's/name: proxy-authz-tokens/name: isilon-proxy-authz-tokens/' | kubectl create -f -
- ```
+ ```
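+   This pipeline copies the three CSM for Authorization secrets in one pass and renames each with an `isilon-` prefix so they do not collide with another driver's copies in the CSM namespace.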
__CSI Driver for PowerMax:__
- Copy the configmap from the CSI Driver for Dell PowerMax namespace to the CSM namespace.
+ Copy the configmap from the CSI Driver for Dell PowerMax namespace to the CSM namespace.
__Note:__ Observability for PowerMax works only with [CSI PowerMax driver with Proxy in StandAlone mode](../../../csidriver/installation/helm/powermax/#csi-powermax-driver-with-proxy-in-standalone-mode).
```bash
@@ -221,9 +221,9 @@ To perform an offline installation of a Helm chart, the following steps should b
If the CSI driver configmap name is not the default `powermax-reverseproxy-config`, use the following command to copy the configmap:
```bash
-
+
kubectl get configmap [POWERMAX-REVERSEPROXY-CONFIG] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [POWERMAX-REVERSEPROXY-CONFIG]/name: powermax-reverseproxy-config/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
- ```
+ ```
Copy the secrets from the CSI Driver for Dell PowerMax namespace to the CSM namespace.
```bash
@@ -250,7 +250,7 @@ To perform an offline installation of a Helm chart, the following steps should b
kubectl get configmap powermax-config-params -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
- If the CSI driver configmap name is not the default `powermax-config-params`, please use the following command to copy configmap:
+ If the CSI driver configmap name is not the default `powermax-config-params`, use the following command to copy the configmap:
```bash
@@ -260,14 +260,14 @@ To perform an offline installation of a Helm chart, the following steps should b
```bash
kubectl get secret karavi-authorization-config proxy-server-root-certificate proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: karavi-authorization-config/name: powermax-karavi-authorization-config/' | sed 's/name: proxy-server-root-certificate/name: powermax-proxy-server-root-certificate/' | sed 's/name: proxy-authz-tokens/name: powermax-proxy-authz-tokens/' | kubectl create -f -
- ```
+ ```
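+   As with PowerScale above, the secrets are renamed with a `powermax-` prefix to avoid name collisions in the CSM namespace.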
-4. Now that the required images have been made available and the Helm chart's configuration updated with references to the internal registry location, installation can proceed by following the instructions that are documented within the Helm chart's repository.
+4. After the images have been made available and the Helm chart configuration is updated, follow the instructions within the Helm chart's repository to complete the installation.
- **Note:**
+ **Note:**
- Optionally, you could provide your own [configurations](../helm/#configuration). A sample values.yaml file is located [here](https://github.com/dell/helm-charts/blob/main/charts/karavi-observability/values.yaml).
- The default `values.yaml` is configured to deploy the CSM for Observability Topology service on install.
- - If CSM for Authorization is enabled for CSI PowerFlex, the `karaviMetricsPowerflex.authorization` parameters must be properly configured.
+ - If CSM for Authorization is enabled for CSI PowerFlex, the `karaviMetricsPowerflex.authorization` parameters must be properly configured.
- If CSM for Authorization is enabled for CSI PowerScale, the `karaviMetricsPowerscale.authorization` parameters must be properly configured.
- If CSM for Authorization is enabled for CSI PowerMax, the `karaviMetricsPowerMax.authorization` parameters must be properly configured.
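+   As a hedged illustration only (the `karavi` namespace and the extracted chart path `./helm/karavi-observability` are assumptions; the Helm chart repository documents the authoritative command), the final step typically looks like:
+   ```bash
+   helm install -n karavi --create-namespace karavi-observability ./helm/karavi-observability
+   ```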
@@ -283,4 +283,3 @@ To perform an offline installation of a Helm chart, the following steps should b
TEST SUITE: None
```
-
\ No newline at end of file
diff --git a/content/v2/observability/deployment/online.md b/content/v2/observability/deployment/online.md
index 174e44e2bd..ed41777d86 100644
--- a/content/v2/observability/deployment/online.md
+++ b/content/v2/observability/deployment/online.md
@@ -7,7 +7,7 @@ description: >
---
@@ -506,4 +541,4 @@
})();
-
\ No newline at end of file
+