
[BUG] Additional management v3 cluster created after upgrade #583

Closed
cpinjani opened this issue Jun 27, 2024 · 3 comments
cpinjani commented Jun 27, 2024

Description:
An additional management v3 cluster is created after upgrade, causing the subsequent sequential v3 auto-migration to fail.

What steps did you take and what happened?

  1. Install Turtles v0.8.0 (v3 option disabled) and import a CAPI cluster.

  2. Upgrade Turtles to the latest dev version, keeping settings at their defaults (v3 is auto-enabled to true).

  3. An additional management v3 cluster is created, along with a corresponding v1 cluster.

$ kubectl get cluster.provisioning.cattle.io -A
NAMESPACE       NAME            READY   KUBECONFIG
default         cluster1-capi   true    cluster1-capi-kubeconfig
fleet-default   c-7q8g7                 
fleet-local     local           true    local-kubeconfig

$ kubectl get cluster.management.cattle.io -A
NAME           AGE
c-7q8g7        92s
c-m-6tl52brj   7m6s
local          16m
  4. Update the Turtles settings to set managementv3-cluster-migration to true and let the operation complete.
     The migrated cluster is Unavailable in Rancher.

$ kubectl get cluster.management.cattle.io -A
NAME      AGE
c-7q8g7   6m20s
local     20m

$ kubectl get cluster.provisioning.cattle.io -A
NAMESPACE       NAME      READY   KUBECONFIG
fleet-default   c-7q8g7   true    c-7q8g7-kubeconfig
fleet-local     local     true    local-kubeconfig

What did you expect to happen?

No additional cluster creation after upgrade from v0.8.0

How to reproduce it?

Steps mentioned above.

Rancher Turtles version

Dev version - d7ecf2b
Rancher - v2.8-head

Anything else you would like to add?

When upgrading from v0.8.0, if both managementv3-cluster and managementv3-cluster-migration are set to true, the migration completes successfully.
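For reference, with a Helm-managed install this workaround amounts to flipping both feature flags in a single upgrade. The exact chart value paths below are an assumption based on the rancher-turtles chart layout and are not confirmed in this thread; adjust the release name and namespace to your environment.

```shell
# Hypothetical sketch: enable the v3 cluster feature and the v3 migration
# together in one upgrade, so no interim v1/v3 cluster pair is created.
# Value paths are assumed, not taken from this issue.
helm upgrade rancher-turtles turtles/rancher-turtles \
  --namespace rancher-turtles-system \
  --set rancherTurtles.features.managementv3-cluster.enabled=true \
  --set rancherTurtles.features.managementv3-cluster-migration.enabled=true
```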

Label(s) to be applied

/kind bug

@kkaempf kkaempf added the kind/bug Something isn't working label Jun 27, 2024
furkatgofurov7 (Contributor) commented
@cpinjani can you please try the latest version of Turtles and see if this is still the case?

@furkatgofurov7 furkatgofurov7 self-assigned this Jul 1, 2024

cpinjani commented Jul 2, 2024

Validated on v0.9.0; the issue no longer occurs. Filed a separate issue for the failure at Step 4 (#588).

Cluster details after upgrade:

$ kubectl get cluster.provisioning.cattle.io -A
NAMESPACE     NAME            READY   KUBECONFIG
default       cluster1-capi   true    cluster1-capi-kubeconfig
fleet-local   local           true    local-kubeconfig

$ kubectl get cluster.management.cattle.io -A
NAME           AGE
c-m-2j2gwvd4   14m
local          27m

@kkaempf kkaempf added this to the July release milestone Jul 2, 2024
furkatgofurov7 (Contributor) commented

@cpinjani I am closing this; steps 1-3 are no longer an issue after upgrading to the new release v0.9.0. Only Step 4 was, and that is tracked in a separate issue: #588

Please re-open if you face the same issue again in the future.

/close
