Call to http://localhost/version with configured host and credentials #708
Hm. Now I've somehow even configured my local environment so that this happens. 🤷‍♂️
Happens for me as well. I changed a Kubernetes secret's metadata name from a plain string to an interpolated value... which resolves to the same string. The original has no issue; the interpolated one connects to localhost...
When the name is "vault-gcp", it's fine. In a new branch with the above code and the deployment name set to "vault", so that the resulting interpolation is again "vault-gcp", this fails with a connection to localhost. It seems TF/the provider treats this as some new/different instance of the resource which somehow does not belong to the configured Kubernetes cluster, so it probably falls back to the default "localhost" address.
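A minimal sketch of that scenario (the resource and variable names here are assumptions, not the original code):

```hcl
variable "deployment_name" {
  default = "vault"
}

resource "kubernetes_secret" "vault" {
  metadata {
    # name = "vault-gcp"                # plain string: works
    name = "${var.deployment_name}-gcp" # resolves to "vault-gcp" as well, yet connects to localhost
  }
}
```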
I have no interpolated values in the metadata, but I do in the spec. I have that in all Kubernetes resources, yet only the resource mentioned above has the problem (or it is the first one Terraform comes across before stopping, that could be as well). Quite the phenomenon, really. Any core developer around? 😉
Tried a workaround with a conditional:
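Presumably something along these lines (a guess at the conditional that was tried, reusing the hypothetical variable from the sketch above):

```hcl
resource "kubernetes_secret" "vault" {
  metadata {
    # use the literal string whenever the interpolation would produce it anyway
    name = var.deployment_name == "vault" ? "vault-gcp" : "${var.deployment_name}-gcp"
  }
}
```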
This way I wanted it to use the non-interpolated string directly in some cases, but it ended up with the same issue. My TF version is 0.12.18. I have the Kubernetes provider configured with host and config:
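Roughly this shape, assuming a GKE cluster since the stack is on GCP (the `google_container_cluster.main` resource is assumed to be defined elsewhere):

```hcl
data "google_client_config" "default" {}

provider "kubernetes" {
  load_config_file       = false
  host                   = "https://${google_container_cluster.main.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(google_container_cluster.main.master_auth[0].cluster_ca_certificate)
}
```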
Then I tried another workaround: defining two resources, one with the interpolation and one with the plain string, and then controlling which resource actually gets deployed with
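Presumably that switch was `count` (the resources[0] mentioned below suggests it); a sketch under that assumption:

```hcl
variable "use_interpolation" { default = true }
variable "deployment_name"   { default = "vault" }

resource "kubernetes_secret" "interpolated" {
  count = var.use_interpolation ? 1 : 0

  metadata {
    name = "${var.deployment_name}-gcp"
  }
}

resource "kubernetes_secret" "static" {
  count = var.use_interpolation ? 0 : 1

  metadata {
    name = "vault-gcp"
  }
}
```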
But this ended up with a new resources[0] even for the legacy deployment (where it is already deployed and I am trying to achieve zero changes on TF apply). So I would say the issue is that the existing Kubernetes provider config is somehow not respected for new resources...
I think it's interesting that it even tries to call via HTTP and not HTTPS, which I'd expect to be the default.
So it turns out that in my case I was also pointing to the wrong location in the bucket, where there was no tfstate. As most resources in GCP have the same ID as their name, terraform was able to find and refresh my whole stack even without state, except the Kubernetes secrets, where it connected to localhost because it had no state telling it where the cluster was... In EC2 that would probably blow up sooner, as resource IDs are quite different from resource names, and if you lose state you have a lot of trouble finding where everything is...
Okay, I found the problem for my case. This line here: if you remove all the
@alexsomesan @pdecat You added that line there while refactoring the whole client handling. Can you think of any implications that could cause this behaviour? It seems as if the
@pdecat You probably know how to do this. I just stumbled through the code. 😆 Are you able to provide a PR for that?
Or can you point me to how to implement that? Just replacing CustomizeDiff with CustomizeDiffFunc didn't work, at least. :)
Never mind, it won't work. Let me think of something else.
@dploeger Are you building the AKS resources from
Yes, I am. And that all worked until 12-9. I can't really grasp what changed then, because we didn't update or change anything there.
Good point, that's the most frequent issue when localhost is involved. The configuration is not available at the time the kubernetes provider is initialized.
Further question: is this happening when running TF in a Pod on the cluster?
Ummmm... I haven't tried that. Is that important? I'd have to set that up. I just tried locally. It also happens outside the container now.
I'm experiencing this with a module that nests other modules. Sometimes the child modules lose the provider configuration and the terraform config becomes un-applyable, but also un-destroyable! The parent creates a DigitalOcean Kubernetes cluster inside a module, then uses the output of the module to get a data source which configures the provider, e.g.
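The pattern looks roughly like this (attribute names as in the DigitalOcean provider; the module and output names are assumptions):

```hcl
data "digitalocean_kubernetes_cluster" "main" {
  name = module.cluster.cluster_name
}

provider "kubernetes" {
  load_config_file = false
  host             = data.digitalocean_kubernetes_cluster.main.endpoint
  token            = data.digitalocean_kubernetes_cluster.main.kube_config[0].token
  cluster_ca_certificate = base64decode(
    data.digitalocean_kubernetes_cluster.main.kube_config[0].cluster_ca_certificate
  )
}
```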
This provider is then used for a bunch of modules (which also contain modules) that then exhibit the localhost behavior (sometimes, but it seems deterministic between runs).
Any updates on this? I'm trying to upgrade from 7.0.1 to 8.2.0 of the EKS terraform module (https://github.com/terraform-aws-modules/terraform-aws-eks). I'm able to get through the initial import of the aws-auth configmap by using a local kubeconfig the first time (overriding load_config_file to true for the import), but subsequent plans always fail with a call to localhost. My provider config looks like
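Presumably close to the pattern that module documents (data source names and the module output are assumptions):

```hcl
data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}
```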
I'm happy to provide further information/logs/tests to get this issue resolved ASAP. I have tried provider versions 1.8.1, 1.9.0, 1.10.0 and 1.11.0 (1.11.0 gives me a different error corresponding to issue 759). I'm using terraform 0.12.20
Having the same issue, where I use the Scaleway Kapsule provider's kubeconfig output as input for my kubernetes terraform provider. Using a local kubeconfig does not resolve the issue during
@brpaz: so it works if you run it from the root module?
@hazcod yes, I had all my Terraform resources in ... But then I tried a fresh install (clean state and a new cluster provisioned from scratch) and it worked.
This might be related to hashicorp/terraform#24131
After reaching out to Terraform core, the issue above seems to indicate that it's a kubernetes provider issue: it's not handling unknown variables well.
I have narrowed this down to the following: if the kubernetes provider receives unknown values (because of a dependency), it should go through with the plan, because the values would normally be fulfilled in the apply phase. I think that's a better approach than just erroring out, as it does now.
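The failing shape, reduced to its essence (a sketch; the Scaleway resource schema here is an assumption from that provider's beta era):

```hcl
resource "scaleway_k8s_cluster_beta" "main" {
  name    = "demo"
  version = "1.17.0"
  cni     = "cilium"
}

provider "kubernetes" {
  load_config_file = false

  # Unknown at plan time while the cluster is being (re)created,
  # so the provider silently falls back to localhost.
  host  = scaleway_k8s_cluster_beta.main.kubeconfig[0].host
  token = scaleway_k8s_cluster_beta.main.kubeconfig[0].token
}
```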
This is really frustrating. If my Scaleway cluster is removed, I have to take the following manual steps:
I circumvented this with:

```hcl
provider "kubernetes" {
  # fixed to 1.10.0 because of https://github.com/terraform-providers/terraform-provider-kubernetes/issues/759
  version = "1.10.0"

  # set the variable in the root module or else we have a dependency issue
  token = module.scaleway.token
}
```
They're irrelevant files, just how the modules are organised in the git repo. :)
@liangyungong I still do not get how you can have AWS resources in the ... Your ...
That means there is a ... Can you check the content of that module?
There are many other modules in the same git repo, and they are irrelevant to the module that I use. Whenever I do
So the ...
I'm hitting this problem, but not with any modules.

```
$ terraform providers
.
├── provider.google ~> 3.13
├── provider.google-beta ~> 3.13
├── provider.kubernetes.xxx ~> 1.11.1
└── provider.kubernetes.yyy ~> 1.11.1
```

(Two separate kubernetes providers with aliases.) Is there a known workaround that doesn't involve winding back the kubernetes provider to 1.10? I need to be using 1.11 for other reasons.
Actually, my setup started working again after forcibly re-fetching credentials, though it was very confusing that it tried to contact localhost when the creds were bad.
Not sure if this is the same problem, but just in case: I hit the following. I had a kubernetes provider block looking a bit like this.
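A reconstruction of that kind of block, based on the description below (variable names are placeholders; the point is that two authentication methods are set at once):

```hcl
variable "cluster_host" {}
variable "cluster_username" {}
variable "cluster_password" {}
variable "cluster_token" {}

provider "kubernetes" {
  load_config_file = false
  host             = var.cluster_host
  username         = var.cluster_username
  password         = var.cluster_password
  token            = var.cluster_token # a second auth method alongside user/pass: ambiguous
}
```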
This failed in both 1.10 and 1.11. With 1.10, I got an error report explaining that I must set username and password or a bearer token, not both (fair enough). With 1.11, there was no error and it ignored the host, contacting localhost. If I removed username and password from the block, then it all worked (in both versions). That makes me think that a validation failure in 1.11 might lead to it dropping through with the host still set to localhost.
@plwhite The error you got in 1.10 was right, but not exhaustive, since client certificates are also an equivalent form of authentication. Better validation was introduced in 1.11; that's why you are not seeing that error anymore. The rule is to have exactly one of: token, user/pass, or client certificates. Having two of these, like in your example, is not deterministic (which one should be used to authenticate you?) and it looks like that's not being validated - we'll work on fixing that. However, the reason you're seeing the connection to localhost is likely because Terraform is unable to resolve the value for
@alexsomesan I was populating
The key aspect here is whether you are creating the azurerm cluster resource in the same apply run.
In the same apply run. Sometimes the Azure cluster already existed, and sometimes not (and was created by the apply run).
I experience similar issues with this setup:
Interestingly, everything works well if I run
FYI, my problem was also related to #708 (comment). My interesting observation, though, was:
Could someone explain to me why in one case it's necessary to set
Had the same issue with version "1.11.2". Solved it the following way:
Enjoy.
I have the issue with ... I validated that I do not have any other kubernetes provider configuration set that could override it. I'm still unsure, but it could be related to the fact that I'm using Terragrunt 🤷
I just tried reverting to version 1.10.0 of the provider. It worked; I managed to create the resources, but the next plan failed with:
I guess it is related to EKS RBAC, but how is it possible to not use the anonymous user without a kubeconfig?
I managed to make it work with
I think I understand what is happening here.
Getting the token on each provider call, as in the solution above, works just fine.
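One way to fetch a fresh token on every provider invocation is the provider's exec credential plugin (an assumption about the exact setup; the data sources are as in the EKS sketch earlier, and this needs a provider version with exec support):

```hcl
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  load_config_file       = false

  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", "my-cluster"]
  }
}
```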
I have the same issue when using Kubernetes provider > 1.10 (maybe related to #759). Provider version 1.10.0 works as expected; 1.11 and 1.12 do not work with the following config running inside a Kubernetes cluster:
Steps to reproduce:
Results in

```hcl
provider "kubernetes" {
  version = "~> 1.11"
}

resource "kubernetes_secret" "test" {
  metadata {
    name      = "test"
    namespace = "default"
  }

  data = {
    test = "data"
  }
}
```

I tried to configure the Kubernetes Provider using
@etwillbefine I wasn't able to reproduce the issue with the configuration you provided.
I'm going to close this issue as it's become a catch-all for credentials misconfigurations. |
Terraform Version
Terraform v0.12.17
Affected Resource(s)
Please list the resources as a list, for example:
Terraform Configuration Files
Debug Output
(The debug output is huge and I just pasted a relevant section of it. If you need more, I'll create a gist)
Expected Behavior
When running terraform in the `hashicorp/terraform` container, a `terraform plan` should run properly.

Actual Behavior
The plan errors out with the following error:
This only happens when running terraform in the container. When run locally, everything is fine (even when the local .kube directory is removed).
Steps to Reproduce
Please list the steps required to reproduce the issue, for example:
`terraform plan` or `terraform apply`
Important Factoids
Running terraform in the `hashicorp/terraform` image