What happened:
Triggered a cluster creation in our private environment, and it failed with APIServerLoadBalancerReconciliationFailed and BackendAdditionFailed errors on the cluster CRD. On debugging, I noticed that the load balancer was still in the CREATING state and the backend-addition step was throwing an incorrectState error. The work-request for the load balancer also clearly failed at a later stage in our logs. Because of this, when we issue a delete command on the failed cluster, it never succeeds.
What you expected to happen:
Clusters should get deleted successfully without any impact due to dependent resources.
capioci should not create a machine without first checking that the load balancer status is ACTIVE.
If the load balancer creation fails and the load balancer is deleted, capioci should check the work-request (which will be in a FAILED state if the load balancer is not available), then proceed with the other deletions and delete the cluster successfully.
How to reproduce it (as minimally and precisely as possible):
We are not sure how to reproduce it on OCI directly, but I can briefly explain the steps in our environment.
Apply the cluster CRD YAML to create a cluster. Once LB creation starts, check whether the state checks happen multiple times before machine creation is triggered. This can be done by reducing the check interval in the code so that the CREATING state is observed on several passes. The load balancer may time out while stuck in CREATING before machine reconciliation proceeds.
Anything else we need to know?:
Environment:
CAPOCI version: v0.11.0
Cluster-API version (use clusterctl version): 1.4.0
Kubernetes version (use kubectl version): 1.25.7
Docker version (use docker info): N/A
OS (e.g. from /etc/os-release): Oracle Linux 8