Commit daaca0f

Merge branch 'main' into release-1.10.0

shanmydell committed Mar 7, 2024
2 parents a2a1233 + d2bfbee
Showing 33 changed files with 353 additions and 156 deletions.
4 changes: 3 additions & 1 deletion .github/CODEOWNERS
@@ -6,6 +6,7 @@
# be requested for review when someone opens a pull request.
# order is alphabetical for easier maintenance.
#
+ # Aaron Tye (atye)
# Bharath Sreekanth (bharathsreekanth)
# Deepak Ghivari (Deepak-Ghivari)
# Sean Gallacher (gallacher)
@@ -19,5 +20,6 @@
# Raymond Sedlock (rsedlock1958)
# Yamunadevi Shanmugam (shanmydell)
# Sharon Toll (sharont58)
+ # Shayna Finocchiaro (shaynafinocchiaro)

- * @bharathsreekanth @Deepak-Ghivari @gallacher @mareksuski-dell @mdutka-dell @mgandharva @mjsdell @prablr79 @rajendraindukuri @rajkumar-palani @rsedlock1958 @shanmydell @sharont58
+ * @atye @bharathsreekanth @Deepak-Ghivari @gallacher @mareksuski-dell @mdutka-dell @mgandharva @mjsdell @prablr79 @rajendraindukuri @rajkumar-palani @rsedlock1958 @shanmydell @sharont58 @shaynafinocchiaro
7 changes: 5 additions & 2 deletions .github/workflows/deploy.yml
@@ -4,7 +4,8 @@ on:
  push:
    branches:
      - main

+  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-latest
@@ -17,12 +18,14 @@ jobs:
      - name: Setup Hugo
        uses: peaceiris/actions-hugo@v2
        with:
-          hugo-version: 'latest'
+          hugo-version: '0.120.3'
          extended: true

      - name: Setup Go
        uses: actions/setup-go@v2
        with:
          go-version: '1.18.0'

      - name: Setup Node
        uses: actions/setup-node@v3
        with:
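
Taken together, the hunks pin Hugo to 0.120.3 and add a manual trigger; after this change, the workflow's trigger block reads:

```yaml
on:
  push:
    branches:
      - main
  workflow_dispatch:    # allows manual runs from the Actions tab
```
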
54 changes: 0 additions & 54 deletions .github/workflows/deploy1.yml

This file was deleted.

3 changes: 2 additions & 1 deletion config.toml
@@ -176,7 +176,7 @@ enable = false
url = "https://dell.github.io/csm-docs/docs/"

[[params.versions]]
- version = "v1.9.1"
+ version = "v1.9.3"
url = "https://dell.github.io/csm-docs/v1"

[[params.versions]]
@@ -205,3 +205,4 @@ enable = false
path = "github.com/google/docsy/dependencies"
disable = false

+ ignoreFiles = ['^content/docs/deployment/csminstallationwizard/src/index\.html$']
3 changes: 2 additions & 1 deletion content/docs/csidriver/features/powermax.md
@@ -311,7 +311,8 @@ For example, if `nodeNameTemplate` is _abc-%foo%-hostname_ and nodename is _work

## Controller HA

- Starting with version 1.5, the CSI PowerMax driver supports running multiple replicas of the controller Pod. At any time, only one controller Pod is active(leader), and the rest are on standby. In case of a failure, one of the standby Pods becomes active and takes the position of leader. This is achieved by using native leader election mechanisms utilizing `kubernetes leases`. Additionally by leveraging `pod anti-affinity`, no two-controller Pods are ever scheduled on the same node.
+ Starting with version 1.5, the CSI PowerMax driver supports running multiple replicas of the controller Pod.
+ Leader election applies only to the sidecar containers; the driver container runs in every controller Pod. In case of a failure, one of the standby Pods becomes active and takes over as leader. This is achieved by using native leader election mechanisms utilizing `kubernetes leases`. Additionally, by leveraging `pod anti-affinity`, no two controller Pods are ever scheduled on the same node.

To increase or decrease the number of controller Pods, edit the following value in the `values.yaml` file:
```yaml
controllerCount: 2   # assumed key for the controller replica count (snippet truncated in the original diff view)
```
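
The active/standby election state is recorded in a `coordination.k8s.io` Lease object. A sketch of what such a Lease can look like; the name, namespace, and holder identity below are illustrative placeholders, not the driver's actual values:

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: driver-csi-powermax-dellemc-com    # illustrative lease name
  namespace: powermax                      # illustrative namespace
spec:
  holderIdentity: powermax-controller-7b9c6d5f4-x2k8r   # Pod currently holding leadership
  leaseDurationSeconds: 15
  renewTime: "2024-03-07T10:15:30.000000Z"
```
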
3 changes: 1 addition & 2 deletions content/docs/csidriver/features/powerscale.md
@@ -289,8 +289,7 @@ spec:

## Controller HA

- CSI PowerScale driver version 1.4.0 and later supports running multiple replicas of the controller pod. At any time, only one controller pod is active(leader), and the rest are on standby.
- In case of a failure, one of the standby pods becomes active and takes the position of leader. This is achieved by using native leader election mechanisms utilizing `kubernetes leases`.
+ CSI PowerScale driver version 1.4.0 and later supports running multiple replicas of the controller pod. Leader election applies only to the sidecar containers; the driver container runs in every controller pod. In case of a failure, one of the standby pods becomes active and takes over as leader. This is achieved by using native leader election mechanisms utilizing `kubernetes leases`.

Additionally, by leveraging `pod anti-affinity`, no two controller pods are ever scheduled on the same node; a sketch of such a rule follows.
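
The rule has roughly this shape in the controller Deployment; a minimal sketch, assuming a placeholder label key `app` and value `isilon-controller` rather than the chart's actual labels:

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app                       # placeholder label key
              operator: In
              values:
                - isilon-controller          # placeholder label value
        topologyKey: kubernetes.io/hostname  # spreads replicas across nodes
```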

1 change: 0 additions & 1 deletion content/docs/csidriver/release/powerflex.md
@@ -34,7 +34,6 @@ description: Release notes for PowerFlex CSI driver
| A CSI ephemeral pod may not get created in OpenShift 4.13 and fail with the error `"error when creating pod: the pod uses an inline volume provided by CSIDriver csi-vxflexos.dellemc.com, and the namespace has a pod security enforcement level that is lower than privileged."` | This issue occurs because OpenShift 4.13 introduced the CSI Volume Admission plugin to restrict the use of a CSI driver capable of provisioning CSI ephemeral volumes during pod admission. Therefore, an additional label `security.openshift.io/csi-ephemeral-volume-profile` in the [csidriver.yaml](https://github.com/dell/helm-charts/blob/csi-vxflexos-2.10.0/charts/csi-vxflexos/templates/csidriver.yaml) file with the required security profile value should be provided; a sketch of the label placement follows this table. Follow the [OpenShift 4.13 documentation for CSI Ephemeral Volumes](https://docs.openshift.com/container-platform/4.13/storage/container_storage_interface/ephemeral-storage-csi-inline.html) for more information. |
| If the volume limit is exhausted and there are pending pods and PVCs due to `exceed max volume count`, the pending PVCs will be bound to PVs and the pending pods will be scheduled to nodes when the driver pods are restarted. | It is advised not to have any pending pods or PVCs once the volume limit per node is exhausted on a CSI Driver. There is an open issue reported with Kubernetes at https://github.com/kubernetes/kubernetes/issues/95911 with the same behavior. |
| The PowerFlex Dockerfile incorrectly labels the version as 2.7.0 in the 2.8.0 release. | Describe the driver pod using ```kubectl describe pod $podname -n vxflexos``` to ensure v2.8.0 is installed. |
- | Standby controller pod is in crashloopbackoff state | Scale down the replica count of the controller pod's deployment to 1 using ```kubectl scale deployment <deployment_name> --replicas=1 -n <driver_namespace>``` |
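
For the OpenShift 4.13 issue in the first row, the label is set on the `CSIDriver` object rendered from csidriver.yaml. A minimal sketch, assuming the `privileged` profile is appropriate for the cluster; the `spec` fields shown are illustrative:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: csi-vxflexos.dellemc.com
  labels:
    security.openshift.io/csi-ephemeral-volume-profile: privileged
spec:
  attachRequired: true    # illustrative; keep the chart's existing spec values
  podInfoOnMount: true
```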

### Note:

1 change: 0 additions & 1 deletion content/docs/csidriver/release/powermax.md
@@ -32,7 +32,6 @@ description: Release notes for PowerMax CSI driver
| If the volume limit is exhausted and there are pending pods and PVCs due to `exceed max volume count`, the pending PVCs will be bound to PVs and the pending pods will be scheduled to nodes when the driver pods are restarted. | It is advised not to have any pending pods or PVCs once the volume limit per node is exhausted on a CSI Driver. There is an open issue reported with Kubernetes at https://github.com/kubernetes/kubernetes/issues/95911 with the same behavior. |
| Automatic SRDF group creation is failing with "Unable to get Remote Port on SAN for Auto SRDF" for PowerMaxOS 10.1 arrays | Create the SRDF group and add it to the storage class |
| [Node stage is failing with error "wwn for FC device not found"](https://github.com/dell/csm/issues/1070) | This is an intermittent issue; rebooting the node resolves it |
- | Standby controller pod is in crashloopbackoff state | Scale down the replica count of the controller pod's deployment to 1 using ```kubectl scale deployment <deployment_name> --replicas=1 -n <driver_namespace>``` |
### Note:

- Support for Kubernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode will not be available in OpenShift environments, as OpenShift doesn't support enabling alpha features for production-grade clusters.
3 changes: 1 addition & 2 deletions content/docs/csidriver/release/powerscale.md
@@ -25,15 +25,14 @@ description: Release notes for PowerScale CSI driver

| Issue | Resolution or workaround, if known |
|-------|------------------------------------|
- | Storage capacity tracking does not return `MaximumVolumeSize` parameter. PowerScale is purely NFS based meaning it has no actual volumes. Therefore `MaximumVolumeSize` cannot be implemented if there is no volume creation. | CSI PowerScale 2.9.0 is compliant with CSI 1.6 specification since the field `MaximumVolumeSize` is optional. |
+ | Storage capacity tracking does not return `MaximumVolumeSize` parameter. PowerScale is purely NFS based meaning it has no actual volumes. Therefore `MaximumVolumeSize` cannot be implemented if there is no volume creation. | CSI PowerScale 2.9.1 is compliant with CSI 1.6 specification since the field `MaximumVolumeSize` is optional. |
| If the length of the nodeID exceeds 128 characters, the driver fails to update the CSINode object and installation fails. This is due to a limitation set by the CSI spec which doesn't allow nodeID to be greater than 128 characters. | The CSI PowerScale driver uses the hostname for building the nodeID which is set in the CSINode resource object, hence we recommend not having very long hostnames in order to avoid this issue. This current limitation of 128 characters is likely to be relaxed in future Kubernetes versions as per this issue in the community: https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/issues/581 <br><br> **Note:** In Kubernetes 1.22 this limit has been relaxed to 192 characters. |
| If entries for older NFS exports or terminated worker nodes remain in the NFS export client list, the CSI driver fails when it tries to add a new worker node (for RWX volumes). | Users need to manually remove the old entries from the export client list so that new worker nodes can be added successfully. |
| Delete namespace that has PVCs and pods created with the driver. The external health monitor sidecar crashes as a result of this operation. | Deleting the namespace deletes the PVCs first and then removes the pods in the namespace. This brings a condition where pods exist without their PVCs and causes the external-health-monitor sidecar to crash. This is a known issue and has been reported at https://github.com/kubernetes-csi/external-health-monitor/issues/100 |
| fsGroupPolicy may not work as expected without root privileges for NFS only<br/>https://github.com/kubernetes/examples/issues/260 | To get the desired behavior set "RootClientEnabled" = "true" in the storage class parameters; a sketch of the placement follows this table. |
| Driver logs show "VendorVersion=2.3.0+dirty" | Update the driver to csi-powerscale 2.4.0 |
| PowerScale 9.5.0, Driver installation fails with session based auth, "HTTP/1.1 401 Unauthorized" | Fix is available in PowerScale >= 9.5.0.4 |
| If the volume limit is exhausted and there are pending pods and PVCs due to `exceed max volume count`, the pending PVCs will be bound to PVs and the pending pods will be scheduled to nodes when the driver pods are restarted. | It is advised not to have any pending pods or PVCs once the volume limit per node is exhausted on a CSI Driver. There is an open issue reported with Kubernetes at https://github.com/kubernetes/kubernetes/issues/95911 with the same behavior. |
- | Standby controller pod is in crashloopbackoff state | Scale down the replica count of the controller pod's deployment to 1 using ```kubectl scale deployment <deployment_name> --replicas=1 -n <driver_namespace>``` |
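
For the fsGroupPolicy row, `RootClientEnabled` is passed as a StorageClass parameter. A minimal sketch with an illustrative class name and only the parameter from the table; real classes carry additional PowerScale parameters:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: isilon-nfs                # illustrative name
provisioner: csi-isilon.dellemc.com
parameters:
  RootClientEnabled: "true"       # enables root access on the NFS export
reclaimPolicy: Delete
```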

### Note

1 change: 0 additions & 1 deletion content/docs/csidriver/release/unity.md
@@ -29,7 +29,6 @@ description: Release notes for Unity XT CSI driver
| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround: <br /> 1. Force delete the pod running on the node that went down <br /> 2. Delete the VolumeAttachment to the node that went down. <br /> Now the volume can be attached to the new node. |
| A CSI ephemeral pod may not get created in OpenShift 4.13 and fail with the error `"error when creating pod: the pod uses an inline volume provided by CSIDriver csi-unity.dellemc.com, and the namespace has a pod security enforcement level that is lower than privileged."` | This issue occurs because OpenShift 4.13 introduced the CSI Volume Admission plugin to restrict the use of a CSI driver capable of provisioning CSI ephemeral volumes during pod admission. Therefore, an additional label `security.openshift.io/csi-ephemeral-volume-profile` in the [csidriver.yaml](https://github.com/dell/helm-charts/blob/csi-unity-2.8.0/charts/csi-unity/templates/csidriver.yaml) file with the required security profile value should be provided (see the CSIDriver sketch under the PowerFlex notes above). Follow the [OpenShift 4.13 documentation for CSI Ephemeral Volumes](https://docs.openshift.com/container-platform/4.13/storage/container_storage_interface/ephemeral-storage-csi-inline.html) for more information. |
| If the volume limit is exhausted and there are pending pods and PVCs due to `exceed max volume count`, the pending PVCs will be bound to PVs and the pending pods will be scheduled to nodes when the driver pods are restarted. | It is advised not to have any pending pods or PVCs once the volume limit per node is exhausted on a CSI Driver. There is an open issue reported with Kubernetes at https://github.com/kubernetes/kubernetes/issues/95911 with the same behavior. |
- | Standby controller pod is in crashloopbackoff state | Scale down the replica count of the controller pod's deployment to 1 using ```kubectl scale deployment <deployment_name> --replicas=1 -n <driver_namespace>``` |
### Note:

- Support for Kubernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode will not be available in OpenShift environments, as OpenShift doesn't support enabling alpha features for production-grade clusters.