[SURE-8182] Rancher rolls back to the previous Launch Template version of an EKS cluster after updating it in AWS #469
Able to reproduce with steps:
Hi, this issue only started happening after upgrading to version >= 2.8.0. eksConfig:

```yaml
amazonCredentialSecret: cattle-global-data:cc-k2qss
displayName: dev01-dev
ebsCSIDriver: null
imported: true
kmsKey: null
kubernetesVersion: null
loggingTypes: null
nodeGroups: null
privateAccess: null
publicAccess: null
publicAccessSources: null
region: eu-west-1
secretsEncryption: null
securityGroups: null
serviceRole: ""
subnets: null
tags: null
```

We are creating EKS clusters with Terraform and using the rancher2 Terraform provider to import the EKS clusters into Rancher. So we have been seeing tags get reset, loggingTypes get reset, and Launch Templates get rolled back, because changes applied with Terraform are not fetched by Rancher; instead, Rancher forces whatever is currently configured in the UI. According to the Rancher docs:
From the docs, we understand that Rancher should sync with the AWS provider, and not the other way around. This is the proper approach: Rancher should not force the configuration set in the UI, but should always sync first with the provider state.
Hello @LefterisBanos, thanks for your detailed message. We have been investigating this issue and testing out different scenarios, and we've noticed that the paragraph from the Rancher docs that you're referring to does not align with the actual behavior of the controller in specific situations. The original description of the issue references Launch Templates because that's what we initiated the investigation with, but it can be extended to other parameters, as you mentioned. When a cluster is created in AWS and later imported into Rancher, the nillable fields of its config start out as nil and are not managed by Rancher. One of the scenarios we've tested involves creating a cluster via AWS, importing it into Rancher and then applying changes via the AWS Console. In this case, any modifications to fields that have previously been updated through Rancher (and are therefore managed) are rolled back to the values stored in Rancher.
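To illustrate with a hypothetical example (the values below are illustrative, not taken from an actual cluster): an imported cluster starts with loggingTypes unmanaged, and the field becomes managed once it is edited through Rancher:

```yaml
# Right after import: nil fields are unmanaged, so Rancher
# leaves the corresponding AWS-side settings alone.
eksConfig:
  imported: true
  loggingTypes: null   # unmanaged: AWS-side changes are kept
---
# After enabling audit logging in the Rancher UI, the field is
# non-nil and therefore managed: Rancher reconciles AWS back to
# this value, reverting changes made in the AWS Console.
eksConfig:
  imported: true
  loggingTypes:
    - audit
```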
We're updating the docs to reflect this behavior and prevent the current misunderstanding caused by the paragraph you quoted. Please let us know if you have any other concerns related to this.
Hi @salasberryfin, thank you for your comment.
So you mean that on step 4 of your tests, after applying changes via Rancher and then applying changes via the AWS Console, the changes at step 4 were rolled back? If that is the case, I am not sure I understand how this can be considered expected behaviour; it goes against IaC practices. It is clear that you can modify a cluster from the Rancher UI, but you cannot force Rancher to be the source of truth or to have higher priority over the AWS Console. This way, once you make any minor change from the Rancher UI, your IaC can no longer be used. Regarding IaC (Terraform), we consider that the AWS Console should always be the actual source of truth. Our use case is the following:
One more thing is that, it seems that … Thank you.
Let's continue with the regular communication process through an official support request.
Closing due to no response.
This issue has a priority; not sure why it is closed.
@LefterisBanos the "community issue" on GitHub is closed; the (internal) support request is still open.
Issue description:
If you have an EKS cluster created in AWS and then imported into Rancher, once you modify the Launch Template version for the Node Group in Rancher, any change you make to the version on the AWS side is rolled back by Rancher.
The customer sees this behavior in Rancher 2.8.2. I was able to reproduce this in 2.7.12. I thought perhaps the behavior is expected per the documentation: https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/sync-clusters
The AKSConfig, EKSConfig or GKEConfig represents the desired state. Nil values are ignored. Fields that are non-nil in the config object can be thought of as managed. When a cluster is created in Rancher, all fields are non-nil and therefore managed. When a pre-existing cluster is registered in Rancher all nillable fields are set to nil and aren’t managed. Those fields become managed once their value has been changed by Rancher.
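Concretely, in the Launch Template case (a hypothetical sketch: field names follow the eksConfig shape above, but the exact nodeGroups schema may vary between Rancher versions): once the Launch Template version is changed through Rancher, nodeGroups becomes non-nil and managed, so Rancher keeps enforcing the stored version against AWS:

```yaml
eksConfig:
  imported: true
  nodeGroups:
    - nodegroupName: ng-workers       # hypothetical node group name
      launchTemplate:
        id: lt-0abc123def456789a      # hypothetical Launch Template ID
        version: 3                    # managed: Rancher reverts AWS-side
                                      # bumps (e.g. to version 4) back to 3
```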