
Discussion: Helm v3 & namespace automatic creation #399

Closed

pierresteiner opened this issue Feb 7, 2020 · 7 comments

pierresteiner commented Feb 7, 2020

First of all, this is not really a bug; we are seeking guidance on a big difference between v2 & v3: the removal of automatic namespace creation: linkerd/linkerd2#3211

We (like certainly several others) currently deploy our microservices independently with TF (based on the name of the branch). I don't see how this can still be achieved, because:

  • If we do not explicitly create the namespace (resource "kubernetes_namespace"), the deployment will fail
  • If we do explicitly create it, we will have conflicts, as each microservice will try to create it (and removal would be even worse...)

Does version 1.0.0 honor the removal of automatic namespace creation? (We haven't tested it yet, and found nothing explicit about it.) If so, how can we mitigate the issue mentioned above?

robinkb commented Feb 9, 2020

I don't know exactly what your deployment process looks like, but maybe you can use kubectl to create the namespace (if it does not exist) before running Terraform?
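
For example, a minimal sketch of such a pre-step (assuming the target namespace name is in the shell variable NS) that makes the creation idempotent:

kubectl get namespace "$NS" || kubectl create namespace "$NS"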

pierresteiner (Author) commented

Thanks for the proposal. While I can't say it wouldn't work, we would lose a lot of Terraform's advantages:

  • Tokens for the cluster would need to be managed both within & outside of TF
  • kubectl would need to be maintained & updated manually
  • ...

legal90 (Contributor) commented Feb 11, 2020

Hi @pierresteiner. I just want to add my 2 cents.
I'm sorry, but I don't think the helm provider should manage the namespace for a release.
Previously, automatic namespace creation happened only because Helm 2 did it.
That feature has been removed from Helm 3, so a Helm 3-compatible provider should not do it either.

IMO, the best choice here is the kubernetes_namespace resource from the kubernetes provider. I use it widely, at large scale, in my organization and have never had any issues with it. You can set an implicit or explicit dependency between kubernetes_namespace and helm_release to guarantee that the namespace is created first:

resource "kubernetes_namespace" "superset" {
  metadata {
    name = "superset"

    labels = {
      # ...
    }
  }
}

resource "helm_release" "superset" {
  namespace = kubernetes_namespace.superset.metadata.0.name # implicit dependency on kubernetes_namespace.superset
  name      = "superset"

  repository = "https://kubernetes-charts.storage.googleapis.com"
  chart      = "superset"
  version    = "1.1.7"

  depends_on = [kubernetes_namespace.superset] # an explicit way of declaring the same dependency
}

> as each microservice will try to create it (and removal would be even worse...)

Sorry, I might not fully understand your use case, but that doesn't look like proper behavior. Usually, a service should not try to manage the namespace it runs in. The namespace should be treated as runtime/infrastructure configuration and managed separately (for example, with kubernetes_namespace as shown above).

P.S. Anyway, thank you for raising this question. 👍
The above is just my personal opinion. Let's see what the maintainers and other community members say.

pierresteiner (Author) commented

@legal90 Thank you for your proposal; this works well for a monolith. But we have independent pipelines for different microservices (frontend, backend, ...) that all need to end up in the same namespace (named after the git branch).

One part of the solution would be to define one microservice as more important than the others (i.e. the backend). But this has unwanted side effects (a possible alternative is sketched after this list):

  • one cannot deploy the frontend without having deployed the backend first (in this trivial case that could make sense)
  • removing the backend would trash the whole namespace and remove the other microservices with it
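
One pattern that would keep the pipelines independent (a sketch, not something proposed in the thread: it assumes a separate, per-branch Terraform configuration owns the namespace and publishes its name through a remote state backend; the bucket, key, and variable names are hypothetical) is to read the namespace via terraform_remote_state in every microservice configuration:

# branch-infra/main.tf — applied once per branch, owns the namespace
resource "kubernetes_namespace" "branch" {
  metadata {
    name = var.branch_name
  }
}

output "namespace" {
  value = kubernetes_namespace.branch.metadata.0.name
}

# frontend/main.tf (and likewise backend/, ...) — reads the shared namespace
data "terraform_remote_state" "branch" {
  backend = "s3" # assumption: any remote backend works here
  config = {
    bucket = "example-tf-state" # hypothetical bucket
    key    = "branches/${var.branch_name}/terraform.tfstate"
    region = "eu-west-1"
  }
}

resource "helm_release" "frontend" {
  name      = "frontend"
  namespace = data.terraform_remote_state.branch.outputs.namespace
  chart     = "./charts/frontend" # hypothetical chart path
}

With this split, only the branch configuration creates and destroys the namespace, and the microservice pipelines can run in any order.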


mrkwtz commented Feb 12, 2020

Although I understand the sentiment from @legal90 that a service should not manage its own namespace, the reality is quite different, as @pierresteiner's use case shows. Sometimes namespaces are used for purely organizational purposes, and IMO it's OK then if the service creates and manages its own namespace.

But I agree this is not the right place (repo/project) for a discussion or a fix, because that was functionality Helm 2 provided, not the Terraform provider itself. It should be discussed in the Helm repository.

jrhouston (Contributor) commented Feb 12, 2020

Thanks for opening this discussion @pierresteiner.

> Does version 1.0.0 honor the removal of automatic namespace creation? (We haven't tested it yet, and found nothing explicit about it.)

Yes, the provider calls into the same package used by the Helm CLI, so you can expect the same behaviour.

> If so, how can we mitigate the issue mentioned above?

For the moment, the answer is to use the Terraform kubernetes provider, or kubectl, to create the namespace prior to install, as @legal90 and @robinkb have suggested.

There currently isn't a way to use Helm to create a release that manages its own namespace. However, it seems there is work in progress towards adding this; see this discussion for more details: helm/helm#6794
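
For readers finding this later: that work did land. Helm 3.2 added a --create-namespace install flag, and a subsequent release of this provider exposes a create_namespace argument on helm_release. A minimal sketch (the chart coordinates are hypothetical):

resource "helm_release" "example" {
  name             = "example"
  namespace        = "example"
  create_namespace = true # create the namespace if it does not already exist

  repository = "https://kubernetes-charts.storage.googleapis.com" # legacy repo URL from the thread
  chart      = "example"
}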

pierresteiner (Author) commented

Thanks @jrhouston for the precise answer. I will track progress on that issue, then.

ghost locked and limited conversation to collaborators Apr 27, 2020