[BUG]: csm object in success state when all csi-powerflex pods are failing due to bad secret credentials #1156
Labels
area/csm-operator
type/bug
Bug Description
When the csi-powerflex driver is installed with csm-operator and the login credentials in the vxflexos-config secret are bad, the driver pods go into CrashLoopBackOff/Error/other failing states, but the csm object remains in the Succeeded state.
Logs
[root@master-1-Zaglt7mQUY8Wg csm-operator]# kubectl get csm -A
NAMESPACE   NAME       CREATIONTIME   CSIDRIVERTYPE   CONFIGVERSION   STATE
vxflexos    vxflexos   72m            powerflex       v2.9.0          Succeeded
[root@master-1-Zaglt7mQUY8Wg csm-operator]# kubectl get pods -n vxflexos
NAME                                  READY   STATUS             RESTARTS       AGE
vxflexos-controller-99fc4778f-q2dss   0/5     CrashLoopBackOff   87 (35s ago)   73m
vxflexos-node-dbkkj                   0/2     CrashLoopBackOff   36 (72s ago)   73m
vxflexos-node-zx4wn                   0/2     CrashLoopBackOff   36 (42s ago)   73m
[root@master-1-Zaglt7mQUY8Wg csm-operator]#
Screenshots
No response
Additional Environment Information
No response
Steps to Reproduce
Create a vxflexos-config secret with, for example, a bad username, then deploy the csi-powerflex driver with csm-operator. The driver pods fail, but the csm state is still reported as Succeeded.
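A minimal reproduction sketch, assuming the usual csi-powerflex array-config format for the vxflexos-config secret; the systemID, endpoint, password, and CR sample filename are placeholders, not values from this report:

# Array config with an intentionally bad username (all values are placeholders)
cat <<EOF > config.yaml
- username: "wrong-user"
  password: "some-password"
  systemID: "0000000000000001"
  endpoint: "https://10.0.0.10"
  skipCertificateValidation: true
  isDefault: true
EOF

kubectl create namespace vxflexos
kubectl create secret generic vxflexos-config -n vxflexos --from-file=config=config.yaml

# Deploy the driver by applying a ContainerStorageModule custom resource
# (use the powerflex sample shipped with csm-operator; the filename here is an assumption)
kubectl apply -f storage_csm_powerflex.yaml

Within a few minutes the vxflexos pods enter CrashLoopBackOff while kubectl get csm still reports Succeeded, as shown in the logs above.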
Expected Behavior
When the driver pods are failing, the csm object's state should be Failed.
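For illustration, a hedged sketch of the kind of health check that would surface this mismatch: if any driver container is stuck waiting in CrashLoopBackOff, the csm object should not report Succeeded. The jsonpath query below is only an example check run from the CLI, not operator code:

# Print each vxflexos pod name with any waiting-state reason; flag CrashLoopBackOff
kubectl get pods -n vxflexos \
  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.containerStatuses[*].state.waiting.reason}{"\n"}{end}' \
  | grep -q CrashLoopBackOff \
  && echo "driver pods are failing: csm state should be Failed"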
CSM Driver(s)
CSI Powerflex v2.9.1
Installation Type
CSM-Operator v1.4.2
Container Storage Modules Enabled
No response
Container Orchestrator
Kubernetes v1.27.2
Operating System
RHEL 8.9