I have observed the Terraform Helm provider incorrectly saving state for apply runs that fail due to authorization issues.
When the Kubernetes API server rejects requests due to expired auth tokens, the Terraform run fails. This is fine and expected; it usually happens when there is more than 15 minutes between plan and apply.
However, on these failed runs, the Terraform state still gets updated as if there was no failure. That means I can't simply re-apply to fix the issue; I have to revert the changes, apply, then replay the changes to work around the invalid state.
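The issue doesn't include the provider configuration, so for context here is a minimal sketch of the kind of setup that hits this, assuming the Helm provider is pointed at EKS with a token from the aws_eks_cluster_auth data source (the cluster name and wiring are placeholders, not the actual files):

data "aws_eks_cluster" "this" {
  name = "my-cluster" # placeholder cluster name
}

data "aws_eks_cluster_auth" "this" {
  name = "my-cluster" # placeholder; returns a short-lived (~15 minute) token
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.this.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
    # The token is resolved when the plan runs, so by apply time it may already be expired.
    token                  = data.aws_eks_cluster_auth.this.token
  }
}

With that kind of wiring the credential is captured at plan time, which is why a gap of more than 15 minutes between plan and apply produces the Unauthorized failure described below.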
Debug Output
NOTE: In addition to Terraform debugging, please set HELM_DEBUG=1 to enable debugging info from helm.
The job ran in Terraform Cloud. Some output logs:
Terraform v1.1.9
on linux_amd64
Initializing plugins and modules...
helm_release.metrics_server: Modifying... [id=metrics-server]
╷
│ Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials
│
│ with helm_release.metrics_server,
│ on metrics-server.tf line 7, in resource "helm_release" "metrics_server":
│ 7: resource "helm_release" "metrics_server" {
│
╵
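The error points at metrics-server.tf line 7; the actual file isn't included in the issue, but a minimal helm_release along those lines might look like this (repository, chart, and namespace are assumptions, not the real configuration):

resource "helm_release" "metrics_server" {
  name       = "metrics-server"
  repository = "https://kubernetes-sigs.github.io/metrics-server/" # assumed chart source
  chart      = "metrics-server"
  namespace  = "kube-system" # assumed
  # Whatever change was planned here is what ends up recorded in state as applied,
  # even though the underlying Kubernetes API calls were rejected.
}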
Panic Output
Steps to Reproduce
1. In Terraform Cloud, do a plan run.
2. Wait 15 minutes for the temporary EKS token to expire (see the sketch after these steps for where that token typically comes from).
3. Click Apply.
4. Calls to the Kubernetes API fail as Unauthorized since the token expired.
5. The Helm provider fails with "Kubernetes cluster unreachable: the server has asked for the client to provide credentials".
6. But the Terraform state still gets updated as if there was no failure.
7. Rerunning the Terraform plan indicates there are "No changes", but that shouldn't be the case since the API server rejected the requests and they errored.
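For context on step 2: the roughly 15 minute lifetime is a property of the token EKS issues (for example via aws eks get-token or the aws_eks_cluster_auth data source). A hedged sketch, not the configuration from this issue, of an exec-based provider block that generates the token when the provider connects rather than at plan time:

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.this.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
    exec {
      # Invoked when the provider actually connects, so the token is fresh at apply time.
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", "my-cluster"] # placeholder cluster name
    }
  }
}

That avoids the Unauthorized error between plan and apply, but it does not change the core problem reported here: state should not be persisted when the apply fails.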
Expected Behavior
No state changes should have persisted
Actual Behavior
State changes persisted on the failed run
Important Factoids
It has happened to me multiple times in the past.
References
Community Note