Provider 1.11.0 fails to initialize #759

Closed
bowczarek opened this issue Feb 10, 2020 · 99 comments

@bowczarek commented Feb 10, 2020

Hi there, about 40 minutes ago the provider was updated and I started getting errors during its initialization:

Error: Failed to initialize config: stat /Users/xxx/.kube/config: no such file or directory

Downgrading the provider to 1.10.0 fixes that issue.

Using the newest TF, which is 0.12.20.

@ankushagarwal

The error message I was seeing was

Error: Failed to initialize config: invalid configuration: no configuration has been provided

  on kubernetes.tf line 1, in provider "kubernetes":
   1: provider "kubernetes" {

@alexsomesan (Member)

Can you please post provider configuration blocks here?
Redacted, of course.

@ankushagarwal commented Feb 10, 2020

TF AWS provider version: 2.48.0
TF version: v0.12.16

provider "kubernetes" {
  version                = "~> 1.10"
  host                   = aws_eks_cluster.mycluster.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.mycluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.mycluster.token
  load_config_file       = false
}

@immae1 commented Feb 10, 2020

Got the same error as @bowczarek with Terraform v0.11.14:

provider "kubernetes" {
    version = "~> 1.10.0"
    host                   = "${azurerm_kubernetes_cluster.k8scluster.kube_config.0.host}"
    client_certificate     = "${base64decode(azurerm_kubernetes_cluster.k8scluster.kube_config.0.client_certificate)}"
    client_key             = "${base64decode(azurerm_kubernetes_cluster.k8scluster.kube_config.0.client_key)}"
    cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.k8scluster.kube_config.0.cluster_ca_certificate)}"
}

@bowczarek (Author)

Yeah, nothing special:

data "google_client_config" "current" {}

provider "kubernetes" {
  host                   = google_container_cluster.service_cluster.endpoint
  token                  = data.google_client_config.current.access_token
  client_certificate     = base64decode(google_container_cluster.service_cluster.master_auth.0.client_certificate)
  client_key             = base64decode(google_container_cluster.service_cluster.master_auth.0.client_key)
  cluster_ca_certificate = base64decode(google_container_cluster.service_cluster.master_auth.0.cluster_ca_certificate)
}

@alexsomesan (Member)

@bowczarek Does setting load_config_file = false fix your issue?
It's explained here: https://www.terraform.io/docs/providers/kubernetes/index.html#statically-defined-credentials
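
For illustration, a minimal sketch of what that looks like with the GKE attributes from your config above (a sketch only; adjust the references to your own resources):

provider "kubernetes" {
  # Disable reading ~/.kube/config so only the statically defined credentials below are used.
  load_config_file       = false

  host                   = google_container_cluster.service_cluster.endpoint
  token                  = data.google_client_config.current.access_token
  cluster_ca_certificate = base64decode(google_container_cluster.service_cluster.master_auth.0.cluster_ca_certificate)
}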

@alexsomesan (Member)

Also, what's the reason for using both a token and client_certificate at the same time?

@ankushagarwal

I am using load_config_file = false and I am still running into an issue (which I hope has a similar root cause to the issue reported at the top).

@bowczarek (Author) commented Feb 10, 2020

@alexsomesan It seems like adding load_config_file = false fixed my problem, but others may still encounter it.

I still need to test applying some fake changes to see if it works there as well.

@dubb-b commented Feb 10, 2020

I have the same issue: mine will initialize, but the plan fails because the file cannot be found in the config dir. I have many clusters and use kubectx. The error is "file not found"; I just forced a revert back to 1.9.0 and everything is working normally. All I am doing at the end of the state is outputting the kube config.

Here is my provider:

provider "kubernetes" {
  version = "<= 1.9.0"
  host                   = module.eks-cluster.aws-eks-cluster-eks-cluster-endpoint
  cluster_ca_certificate = base64decode(module.eks-cluster.aws_eks_cluster-eks-cluster-certificate-authority-data)
  token                  = data.aws_eks_cluster_auth.cluster_auth.token
}

@alexsomesan (Member)

@dubb-b see my previous remark

@ruizink commented Feb 10, 2020

I'm having the same issue. In fact, this broke my live demo 😄
I'm running terraform v0.12.19 and kubectl v1.16.2.
This is my Terraform HCL snippet:

provider "kubernetes" {
  host = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token = data.aws_eks_cluster_auth.cluster.token
  load_config_file = "false"
  version = "~> 1.10"
}

@immae1 commented Feb 10, 2020

I tried with load_config_file = "false",

but now I get this error:

module.components.provider.kubernetes: Failed to initialize config: invalid configuration: no configuration has been provided

Again, my provider; I use TF version v0.11.14:

provider "kubernetes" {
    load_config_file = false
    host                   = "${azurerm_kubernetes_cluster.k8scluster.kube_config.0.host}"
    client_certificate     = "${base64decode(azurerm_kubernetes_cluster.k8scluster.kube_config.0.client_certificate)}"
    client_key             = "${base64decode(azurerm_kubernetes_cluster.k8scluster.kube_config.0.client_key)}"
    cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.k8scluster.kube_config.0.cluster_ca_certificate)}"
    
}

@alexsomesan (Member)

I'm looking into the EKS case.

@ruizink Sorry about your live demo. Pinning provider versions to known-good configurations is a best practice we encourage users to adopt. It would help prevent situations like this from happening in the future.
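
For example, a minimal sketch of such a pin (constraint syntax only; keep your existing credential attributes alongside it):

provider "kubernetes" {
  # Pessimistic constraint: allows 1.10.x patch releases but not 1.11.0.
  version = "~> 1.10.0"

  # ... credentials as in the examples above
}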

@alexsomesan (Member)

I just confirmed the following configuration to be valid and working, using minikube.

provider "kubernetes" {
  load_config_file = false

  host = "https://192.168.64.35:8443"
  client_certificate = file("/Users/alex/.minikube/client.crt")
  client_key = file("/Users/alex/.minikube/client.key")
  cluster_ca_certificate = file("/Users/alex/.minikube/ca.crt")
}

resource "kubernetes_namespace" "name" {
  metadata {
    name = "test"
  }
}

Please double-check that your interpolation expressions are resolving to valid values.
I'll double-check the EKS case next.
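
One way to do that (a sketch, using the GKE endpoint attribute from an earlier config; swap in your own expressions) is to surface the value as an output and inspect it with terraform output:

# Temporary debug output. Certificates, keys and tokens are sensitive,
# so only expose non-secret values like the endpoint this way.
output "debug_k8s_host" {
  value = google_container_cluster.service_cluster.endpoint
}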

@alexsomesan (Member)

@immae1 I tested the above configuration with TF 0.11.14 against minikube.
It works as expected:

~/test-creds » terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.


------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + kubernetes_namespace.name
      id:                          <computed>
      metadata.#:                  "1"
      metadata.0.generation:       <computed>
      metadata.0.name:             "test"
      metadata.0.resource_version: <computed>
      metadata.0.self_link:        <computed>
      metadata.0.uid:              <computed>


Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

------------------------------------------------------------
~/test-creds » terraform apply -auto-approve
kubernetes_namespace.name: Creating...
  metadata.#:                  "" => "1"
  metadata.0.generation:       "" => "<computed>"
  metadata.0.name:             "" => "test"
  metadata.0.resource_version: "" => "<computed>"
  metadata.0.self_link:        "" => "<computed>"
  metadata.0.uid:              "" => "<computed>"
kubernetes_namespace.name: Creation complete after 0s (ID: test)

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
------------------------------------------------------------
~/test-creds » terraform destroy -auto-approve
kubernetes_namespace.name: Refreshing state... (ID: test)
kubernetes_namespace.name: Destroying... (ID: test)
kubernetes_namespace.name: Destruction complete after 6s

Destroy complete! Resources: 1 destroyed.
------------------------------------------------------------

@Victorion

It seems that PR #690 broke config init (initializeConfiguration).

@Victorion

Example of a broken config (terraform-aws-eks):
https://github.com/terraform-aws-modules/terraform-aws-eks#usage-example

@Victorion

logs:

2020-02-10T18:38:24.413Z [DEBUG] plugin.terraform-provider-kubernetes_v1.11.0_x4: 2020/02/10 18:38:24 [DEBUG] Trying to load configuration from file
2020-02-10T18:38:24.413Z [DEBUG] plugin.terraform-provider-kubernetes_v1.11.0_x4: 2020/02/10 18:38:24 [DEBUG] Configuration file is: /Users/username/.kube/config
2020/02/10 18:38:24 [ERROR] <root>: eval: *terraform.EvalConfigProvider, err: Failed to initialize config: invalid configuration: no configuration has been provided
2020/02/10 18:38:24 [ERROR] <root>: eval: *terraform.EvalSequence, err: Failed to initialize config: invalid configuration: no configuration has been provided
2020/02/10 18:38:24 [ERROR] <root>: eval: *terraform.EvalOpFilter, err: Failed to initialize config: invalid configuration: no configuration has been provided
2020/02/10 18:38:24 [ERROR] <root>: eval: *terraform.EvalSequence, err: Failed to initialize config: invalid configuration: no configuration has been provided

@Markieta

Started having this issue today also.

Error: Failed to initialize config: invalid configuration: no configuration has been provided

  on modules/beta-private-cluster/auth.tf line 29, in provider "kubernetes":
  29: provider "kubernetes" {

Configuration looks like this:

provider "kubernetes" {
  load_config_file       = false
  host                   = "https://${local.cluster_endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(local.cluster_ca_certificate)
}

@v-morlock commented Feb 10, 2020

Having the same issue with GKE:

data "google_client_config" "primary" {}

provider "kubernetes" {
  load_config_file       = false
  host                   = google_container_cluster.primary.endpoint
  token                  = data.google_client_config.primary.access_token
  client_certificate     = base64decode(google_container_cluster.primary.master_auth.0.client_certificate)
  client_key             = base64decode(google_container_cluster.primary.master_auth.0.client_key)
  cluster_ca_certificate = base64decode(google_container_cluster.primary.master_auth.0.cluster_ca_certificate)
}

@Victorion commented Feb 10, 2020

Guys, don't spam :) Use version pinning:

version = "1.10.0" # Stable version

provider "kubernetes" {
  load_config_file       = false
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  version                = "1.10.0"
  
  alias                  = "override"
}

@adamrbennett commented Feb 10, 2020

Pinning to v1.10 works for me, but it's not a solution. A minor release should not introduce breaking changes (not to imply it was intended).

@xposix commented Feb 11, 2020

I'm having the same issue; I locked to 1.10.0 and it works now. This is my config:

data "aws_eks_cluster" "cluster" {
  name = module.eks-cluster.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-cluster.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "= 1.10.0"
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster.token
    load_config_file       = false
  }
  version = "~> 1.0.0"
}

@alecor191

@alexsomesan sure, here you go:

provider "kubernetes" {
  host                   = module.cluster.k8s_host
  client_certificate     = base64decode(module.cluster.k8s_client_certificate)
  client_key             = base64decode(module.cluster.k8s_client_key)
  cluster_ca_certificate = base64decode(module.cluster.k8s_cluster_ca_certificate)
  version                = "1.11.1"
}

resource "kubernetes_cluster_role_binding" "admins" {
  metadata {
    name = "aks-cluster-admins"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  subject {
    kind      = "Group"
    name      = var.admin_group
    api_group = "rbac.authorization.k8s.io"
  }

  subject {
    kind      = "ServiceAccount"
    name      = "default"
    namespace = "kube-system"
  }

  subject {
    kind      = "ServiceAccount"
    name      = "kubernetes-dashboard"
    namespace = "kube-system"
  }

  depends_on = [
    azurerm_kubernetes_cluster.aks
  ]
}

@alexsomesan (Member)

Thanks! If you set an output with the value of module.cluster.k8s_host, is it a valid URL format? Does it start with https://?

output "api_host" {
  value = module.cluster.k8s_host
}

@hazcod (Contributor) commented Feb 29, 2020

@alecor191

@alexsomesan yes, that's correct: the output of

output "api_host" {
  value = module.cluster.k8s_host
}

is

api_host = https://rg5168-aks1-a0e50f13.hcp.westeurope.azmk8s.io:443

Essentially we first create an Azure AKS cluster using TF and then use the output of it (like the host name) to configure the K8S provider.

@alexsomesan (Member)

@hazcod Thanks for the confirmation.

@alecor191 I'll be trying to reproduce your case, but I'm baffled by one thing: if you wrap your cluster provisioning resources in a module, why is your depends_on clause referencing a top-level resource?
Also, which version of Terraform are you on?

@alecor191

@alexsomesan sorry for the delay and not being more precise. I used TF v0.12.20.

Essentially what we have is setting up an AKS cluster + assigning roles in a shared module that is used by 3 "environments". In short, we have this setup:

We have /environments/cluster-test, /environments/cluster-qa, and /environments/cluster-prod folders for our environments. tf apply is called from these folders. Each contains a main.tf with the following:

provider "azurerm" {
  subscription_id = var.subscription_id
  tenant_id       = var.tenant_id
}

provider "kubernetes" {
  ... like in my previous message
}

// reference to the "shared" module containing the actual resources
module "cluster" {
  source                  = "../../modules/aks"
  ...
}

And then we have a shared /modules/aks folder containing a main.tf with the following resources:

resource "azurerm_kubernetes_cluster" "aks" {
  ... create the AKS cluster first
}

resource "kubernetes_cluster_role_binding" "admins" {
  ... like in my previous message

  depends_on = [
    azurerm_kubernetes_cluster.aks
  ]
}

with outputs.tf as follows (some of these outputs you can see being used in the provider "kubernetes" block in my previous message):

output "k8s_host" {
  value = azurerm_kubernetes_cluster.aks.kube_admin_config.0.host
}

output "k8s_client_certificate" {
  value     = azurerm_kubernetes_cluster.aks.kube_admin_config.0.client_certificate
  sensitive = true
}

output "k8s_client_key" {
  value     = azurerm_kubernetes_cluster.aks.kube_admin_config.0.client_key
  sensitive = true
}

output "k8s_cluster_ca_certificate" {
  value     = azurerm_kubernetes_cluster.aks.kube_admin_config.0.cluster_ca_certificate
  sensitive = true
}

output "k8s_kube_admin_config_raw" {
  value     = azurerm_kubernetes_cluster.aks.kube_admin_config_raw
  sensitive = true
}

I'm new to TF, so I may be missing something obvious; for example, whether it is a problem to reference outputs of the shared module's main.tf in the provider "kubernetes" block of our environment-specific main.tf, while the actual kubernetes_cluster_role_binding resource lives in the shared module's main.tf.

@Comradin commented Mar 5, 2020

The 1.11.1 release notes specifically mention this issue as fixed, but it's still open?

Am I missing something?

@alexsomesan (Member)

@Comradin you didn’t miss anything. I’m just keeping the issue open for a while to collect confirmations from users who reported here. This issue was only manifesting in peculiar setups, not all of which I can reproduce.

@adamrbennett

I tested this with an existing cluster (and existing Terraform state) as well as on a new cluster (with no Terraform state) and it all appears to be working as it did before. Thanks @alexsomesan!

For reference, this is my provider config:

data "aws_eks_cluster_auth" "main" {
  name = aws_eks_cluster.main.name
}

provider "kubernetes" {
  version                = "~> 1.10"
  host                   = aws_eks_cluster.main.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.main.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.main.token
  load_config_file       = false
}

Also, I don't think it should matter for this issue, but I also have a local-exec provisioner on my cluster resource to wait for the k8s API:

resource "aws_eks_cluster" "main" {
  name = "my-cluster"

  ...

  # local-exec will execute after a resource is created
  # this provisioner waits until the k8s API is healthy

  # this way, any other resources that depend on this one
  # will not be created until the k8s API is operational

  # curl should wait up to 5 minutes (default) to connect,
  # so the loop/sleep logic will only apply if curl times out
  # this effectively means that once curl is able to connect
  # it will wait up to 60 seconds for a healthy response
  provisioner "local-exec" {
    command = <<EOF
RETRIES=0
until curl -sk --fail ${self.endpoint}/healthz || [ $RETRIES -eq 6 ]; do
  echo "Waiting for EKS..."
  sleep 10
  RETRIES=$(($RETRIES+1))
done
EOF
  }
}

satadruroy pushed a commit to SUSE/cap-terraform that referenced this issue Mar 9, 2020
* Add node_count variable definition to gke/variables.tf

* Pin kubernetes provider version to 1.10.0

Workaround for
hashicorp/terraform-provider-kubernetes#759

Signed-by: Chris and Victor
@rajivreddy

Moving to 1.11 fixed the issue.

provider "kubernetes" {
  version                = "~> 1.11"
  host                   = eks_endpoint
  cluster_ca_certificate = eks_certificate_authority
  token                  = eks_token.token
  load_config_file       = false
}

@TBeijen commented Mar 13, 2020

Looks like 1.11.1 fixes my use case as well. I tried adding a new cluster to a project that already contained a cluster. Both 'from scratch' and existing clusters worked without flaws.

I use terraform-aws-eks, conditionally creating all k8s resources via a variable (the ones managed by the aforementioned module + some others), similar to this example.

So I roll out new clusters in 2 steps (one Terraform project). The first pass creates just the cluster without 'touching' anything related to the kubernetes provider. The second pass sets up the aws-auth configmaps and other bootstrapping.
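
A minimal sketch of that gating pattern (the variable and resource names here are made up for illustration):

variable "manage_k8s_resources" {
  description = "Set to true on the second pass, once the cluster exists."
  type        = bool
  default     = false
}

# Gated resource: only created when the flag is true, so the first pass
# never touches the kubernetes provider.
resource "kubernetes_namespace" "bootstrap" {
  count = var.manage_k8s_resources ? 1 : 0

  metadata {
    name = "bootstrap"
  }
}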

@aareet (Member) commented Apr 8, 2020

Sounds like this issue has been fixed. Does anyone have further reports of this still being an issue?

@alecor191 commented Apr 8, 2020

@aareet unfortunately I still have a 100% repro. I have an AKS cluster created with TF. I used kubernetes provider 1.10.0 in my TF files.

If I now just change the provider version to 1.11.1 and run terraform plan again, then I get the following:

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

module.appgateway.module.appgateway-frontoffice-api.data.azurerm_key_vault_secret.backend_https_certificate: Refreshing state...
module.appgateway.module.appgateway-backoffice-api.data.azurerm_key_vault_secret.backend_https_certificate: Refreshing state...
module.appgateway.module.appgateway-corporate-api.data.azurerm_key_vault_secret.backend_https_certificate: Refreshing state...
module.appgateway.module.appgateway-customer-api.data.azurerm_key_vault_secret.backend_https_certificate: Refreshing state...
azurerm_resource_group.aks: Refreshing state... 
module.appgateway.module.appgateway-customer-api.azurerm_public_ip.ag_public_ip: Refreshing state... 
module.cluster.azurerm_role_assignment.aks_rg_access: Refreshing state... 
module.appgateway.module.appgateway-frontoffice-api.azurerm_public_ip.ag_public_ip: Refreshing state... 
module.vnet.azurerm_virtual_network.cluster_net: Refreshing state... 
module.appgateway.module.appgateway-backoffice-api.azurerm_public_ip.ag_public_ip: Refreshing state... 
module.appgateway.module.appgateway-corporate-api.azurerm_public_ip.ag_public_ip: Refreshing state... 
module.vnet.azurerm_subnet.cluster_net_corporate_api_ag: Refreshing state... 
module.vnet.azurerm_subnet.cluster_net_k8s_pods_worker: Refreshing state... 
module.vnet.azurerm_subnet.cluster_net_k8s_pods_default: Refreshing state... 
module.vnet.azurerm_subnet.cluster_net_backoffice_ag: Refreshing state... 
module.vnet.azurerm_subnet.cluster_net_frontoffice_ag: Refreshing state... 
module.vnet.azurerm_subnet.cluster_net_customer_api_ag: Refreshing state... 
module.vnet.azurerm_subnet.cluster_net_lb: Refreshing state... 
module.cluster.azurerm_kubernetes_cluster.aks: Refreshing state... 
module.appgateway.module.appgateway-frontoffice-api.azurerm_application_gateway.ag: Refreshing state... 
module.appgateway.module.appgateway-backoffice-api.azurerm_application_gateway.ag: Refreshing state... 
module.appgateway.module.appgateway-customer-api.azurerm_application_gateway.ag: Refreshing state... 
module.cluster.azurerm_kubernetes_cluster_node_pool.aks_worker_node_pool: Refreshing state... 
module.cluster.kubernetes_cluster_role_binding.admins: Refreshing state... 
module.appgateway.module.appgateway-corporate-api.azurerm_application_gateway.ag: Refreshing state... 

Error: serializer for text/html; charset=utf-8 doesn't exist

Is there any way for me to provide some sort of diagnostic logs?

@ghost ghost removed the waiting-response label Apr 8, 2020
@alexsomesan (Member)

@alecor191 That's a weird error message. Can you do the same operation with the env var TF_LOG=TRACE set and share the output? It will be quite a hefty log, so maybe put it in a gist instead of pasting it.

@alecor191 commented Apr 9, 2020

@alexsomesan thanks for the hint. I re-ran terraform plan against the existing AKS cluster using that env variable and stored the relevant logs in this gist. Let me know if you need the whole log.

From what I see, the issue is that we're getting redirected because the request is unauthorized (TF400813: The user '' is not authorized to access this resource.). However, the result of the redirect is an HTML page that TF can't deal with, and thus it fails.

For context: I'm running the TF command as part of a CI pipeline in Azure DevOps Pipelines.

Update: I also ran terraform plan with trace logging enabled with provider version 1.10.0, and the difference is that with 1.10.0 there is no redirect due to auth. Instead, the request succeeds right away.

Both runs were executed on the same CI pipeline. The only difference between the two runs is the TF Kubernetes provider version.

Right before that K8S API call, I noticed this diff between 1.10.0 and 1.11.1:

1.10.0

[INFO] Unable to load config file as it doesn't exist at "/root/.kube/config"
[DEBUG] Enabling HTTP requests/responses tracing

1.11.1

[DEBUG] Trying to load configuration from file
[DEBUG] Configuration file is: /root/.kube/config
[WARN] Invalid provider configuration was supplied. Provider operations likely to fail.
[DEBUG] Enabling HTTP requests/responses tracing

As a workaround, I tried to create the kube config file as part of the CI pipeline and re-ran using 1.11.1. This time it worked. However, that was only OK because the AKS cluster already existed and I knew what to put in the kube config. But what if I create the cluster from scratch using TF? Before running TF there is no kube config I can set, as the cluster doesn't exist yet.

My understanding is that the Kubernetes provider should pick up the credentials from the AKS module outputs (in my case):

provider "kubernetes" {
  host                   = module.cluster.k8s_host
  client_certificate     = base64decode(module.cluster.k8s_client_certificate)
  client_key             = base64decode(module.cluster.k8s_client_key)
  cluster_ca_certificate = base64decode(module.cluster.k8s_cluster_ca_certificate)
  version                = "1.11.1"
}

However, from the logs above it seems the K8S provider wasn't really able to use those creds. I may be missing something obvious here, so I would be grateful for any pointer you can provide.
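
For comparison, a sketch of the same block with config-file loading disabled explicitly, as suggested earlier in this thread:

provider "kubernetes" {
  # Explicitly skip /root/.kube/config so only the module outputs are used.
  load_config_file       = false

  host                   = module.cluster.k8s_host
  client_certificate     = base64decode(module.cluster.k8s_client_certificate)
  client_key             = base64decode(module.cluster.k8s_client_key)
  cluster_ca_certificate = base64decode(module.cluster.k8s_cluster_ca_certificate)
  version                = "1.11.1"
}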

@Mahendrasiddappa commented Jun 1, 2020

Having the same issue with kubernetes provider version 1.11.0.

I cannot roll back to 1.10.0 because that version does not recognize the resource type "kubernetes_priority_class".

@Mahendrasiddappa

Error: Invalid resource type

  on .terraform/modules/eks_cluster/main.tf line 27, in resource "kubernetes_priority_class" "DS-priority":
  27: resource "kubernetes_priority_class" "DS-priority" {

The provider provider.kubernetes does not support resource type
"kubernetes_priority_class".

@aareet (Member) commented Jun 2, 2020

@Mahendrasiddappa this issue was filed against 1.11.0, and resolved in 1.11.1 - can you retry with 1.11.1?

@thorlarholm commented Jun 29, 2020

I'm running terraform in a docker container via Docker Desktop on Windows.

Restarting Docker itself (by restarting Docker Desktop) fixed my issue, similar to the one from @dubb-b.

Edit for more info: I usually run my terraform container detached, and let it live across laptop hibernation.

@aareet aareet added the bug label Jul 2, 2020
@ghost commented Aug 10, 2020

I used to encounter this issue and had to resort to v1.10. I just tried again in a new project and the issue seems to be fixed. See the terraform version output below:

Terraform v0.12.29
+ provider.google v3.33.0
+ provider.google-beta v3.33.0
+ provider.kubernetes v1.12.0

@ghost commented Oct 10, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked as resolved and limited conversation to collaborators Oct 10, 2020