Error: Failed to construct REST client on kubernetes_manifest resource #1775
That error message is saying that the client configuration for the K8s API is missing (or improperly configured). Can you please share your provider configuration?
I don't think this issue is a misconfiguration, because another provider works with the same connection settings:

provider "helm" {
  kubernetes {
    host                   = aws_eks_cluster.eks_cluster.endpoint
    cluster_ca_certificate = base64decode(aws_eks_cluster.eks_cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.default.token
  }
}

provider "kubernetes" {
  host                   = aws_eks_cluster.eks_cluster.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.eks_cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.default.token
}
@msfidelis Have you tried a different provider version? I have observed very odd behavior with the Kubernetes provider in some versions. In all my cases, pinning the provider to an older version has fixed my issues. Definitely not ideal, but I'd be interested to hear if other users are experiencing similar problems.

terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.11.0"
    }
  }
}
I am getting it too. This was also an issue with the beta version - ref hashicorp/terraform-provider-kubernetes-alpha#199. I tried @jrsdav's solution, but it didn't work for me. Could it have something to do with the cloud providers we are using? I am using Azure AKS, and I set that cluster up using ... Here is my provider setup:
Just reread the doc... it says: "This resource requires API access during planning time. This means the cluster has to be accessible at plan time and thus cannot be created in the same apply operation. We recommend only using this resource for custom resources or resources not yet fully supported by the provider." So I need to maintain two Terraform plans: one to set up the cluster, the other to throw my resources into it.
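For anyone splitting things up the same way, here is a minimal sketch of what the second plan could look like. It assumes an EKS cluster that already exists (created by the first plan) and a hypothetical cluster name "my-cluster"; none of these names come from the thread. Because the cluster is read through data sources from already-provisioned infrastructure, the endpoint and CA are known at plan time, which is what kubernetes_manifest needs.

# workloads/main.tf - second stage, run only after the cluster exists
data "aws_eks_cluster" "this" {
  name = "my-cluster" # assumed cluster name
}

data "aws_eks_cluster_auth" "this" {
  name = "my-cluster"
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}

resource "kubernetes_manifest" "some_manifest" {
  manifest = yamldecode(file("${path.module}/manifest.yaml"))
}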
FYI, same error here. My entire code:

terraform {
  required_version = ">= 1.0"
  required_providers {
    kubectl = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.12.1"
    }
  }
}

provider "kubernetes" {
  config_path    = "./kubeconfig.yaml"
  config_context = "my_context_name" # redacted
}

resource "kubernetes_manifest" "some_manifest" {
  manifest = yamldecode(file("manifest.yaml"))
}
❯ tf plan
╷
│ Error: Failed to construct REST client
│
│ with kubernetes_manifest.some_manifest,
│ on main2.tf line 17, in resource "kubernetes_manifest" "some_manifest":
│ 17: resource "kubernetes_manifest" "some_manifest" {
│
│ cannot create REST client: no client config
❯ tf version
Terraform v1.1.7
on linux_amd64
+ provider registry.terraform.io/hashicorp/kubernetes v2.11.0
Your version of Terraform is out of date! The latest version
is 1.2.5. You can update by downloading from https://www.terraform.io/downloads.html
@williamohara nailed the problem here. AFAICS, every person who reported seeing similar issues above configures the attributes of the provider block from a cluster that is created in the same apply, so there is no usable client configuration at plan time. This limitation stems from Terraform itself; the provider tries to push things as far as it can, but there is no way around needing access to schema from the API (Terraform is fundamentally a strongly-typed / schema-based system).
This is also explained in the closed issue #1625. It is curious, though, that I can use the kubectl_manifest alternative without issue, i.e. it plans without needing to make a k8s API connection. The diff in their usage is pretty minimal.
This is also related to:
I tested and can confirm the kubectl alternative you mentioned, https://github.com/gavinbunney/terraform-provider-kubectl, works just like you said @pfrydids.
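For reference, a minimal sketch of that kubectl-based alternative, assuming the same EKS cluster attributes used in the provider blocks earlier in the thread; the version constraint and file name are illustrative, not taken from the original comment:

terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.14.0"
    }
  }
}

provider "kubectl" {
  host                   = aws_eks_cluster.eks_cluster.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.eks_cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.default.token
  load_config_file       = false
}

# kubectl_manifest takes the manifest as a raw YAML string, so the provider
# does not need to fetch a schema from the API server at plan time.
resource "kubectl_manifest" "some_manifest" {
  yaml_body = file("manifest.yaml")
}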
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!
I hate making comments that are just noise and add nothing to the conversation, but doing so to placate stale-bot as this continues to be a real issue.
I'm considering porting https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs/resources/kubectl_manifest to CDKTF if I can work out how to do that. I wish there was a less crazy way to get this done; if anyone has ideas or wants to help I'm all ears :)

edit: if anyone is trying to deploy an ArgoCD by using a ... Because I'm now unblocked I won't be porting the kubectl_manifest resource to CDKTF.
As a workaround I have done the following on the first run, when Kubernetes is getting created:

locals {
  aks_cluster_available = false
}

resource "kubernetes_manifest" "some_manifest" {
  count    = local.aks_cluster_available ? 1 : 0
  manifest = yamldecode(file("manifest.yaml"))
}

resource "azurerm_kubernetes_cluster" "az_kubernetes_cluster" {
  location            = ""
  name                = ""
  resource_group_name = ""
  ..........
}

Once Kubernetes is available, we just have to switch the locals variable to true:

locals {
  aks_cluster_available = true
}
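A small variation on this workaround, not from the original comment: using an input variable instead of a local lets you flip the toggle from the command line (terraform apply -var="cluster_ready=true") without editing the code. The variable name here is a placeholder.

variable "cluster_ready" {
  description = "Set to true once the cluster exists so kubernetes_manifest resources get planned"
  type        = bool
  default     = false
}

resource "kubernetes_manifest" "some_manifest" {
  count    = var.cluster_ready ? 1 : 0
  manifest = yamldecode(file("manifest.yaml"))
}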
Will this be fixed by hashicorp/terraform#30937?
This issue is still present.
Yes, it's currently blocked by hashicorp/terraform#30937; they confirmed over there. It's actively being worked on though.
Terraform Version, Provider Version and Kubernetes Version
Affected Resource(s)
Terraform Configuration Files
Debug Output
https://gist.github.com/msfidelis/a85e6ec596ba4c762d8f3d3b76fa3aac
Steps to Reproduce
Expected Behavior
The resource should respect the provider configuration before constructing the client, like the other kubernetes provider resources.
Actual Behavior