Discussion: Helm v3 & namespace automatic creation #399
Comments
I don't know exactly what your deployment process looks like, but maybe you can use kubectl to create the namespace before the release is installed.
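A minimal sketch of how such a kubectl step could be wired into a Terraform run, assuming a `null_resource` with a `local-exec` provisioner and an `app_namespace` variable (both are illustrative assumptions, not something spelled out in this thread):

```hcl
variable "app_namespace" {
  type    = string
  default = "superset"
}

# Create the namespace out-of-band with kubectl. The dry-run/apply pipeline keeps
# the command idempotent, so re-runs do not fail when the namespace already exists
# (older kubectl versions use --dry-run instead of --dry-run=client).
resource "null_resource" "namespace" {
  triggers = {
    namespace = var.app_namespace
  }

  provisioner "local-exec" {
    command = "kubectl create namespace ${var.app_namespace} --dry-run=client -o yaml | kubectl apply -f -"
  }
}

resource "helm_release" "superset" {
  name       = "superset"
  repository = "https://kubernetes-charts.storage.googleapis.com"
  chart      = "superset"
  namespace  = var.app_namespace

  depends_on = [null_resource.namespace]
}
```

Note that a namespace created this way is not tracked in Terraform state, which is part of the trade-off raised in the next comment.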
Thanks for the proposal. While I cannot say that it won't work, we would lose a ton of the advantages of Terraform:
Hi @pierresteiner. I just want to add my 2 cents. IMO, the best choice for that is using the `kubernetes_namespace` resource from the Kubernetes provider:

```hcl
resource "kubernetes_namespace" "superset" {
  metadata {
    name = "superset"
    labels = {
      # ...
    }
  }
}

resource "helm_release" "superset" {
  # implicit dependency on `kubernetes_namespace.superset`
  namespace  = kubernetes_namespace.superset.metadata.0.name
  name       = "superset"
  repository = "https://kubernetes-charts.storage.googleapis.com"
  chart      = "superset"
  version    = "1.1.7"

  # another way of declaring the dependency on `kubernetes_namespace.superset`
  depends_on = [kubernetes_namespace.superset]
}
```
Sorry, I might not fully understand your use case, but that doesn't look like proper behavior. Usually, a service should not try to manage the namespace it's running in. The namespace should be treated as runtime / infrastructure configuration and managed separately (for example, with …). P.S. Anyway, thank you for raising this question. 👍
@legal90 Thank you for your proposal; this works well for a monolith. We have independent pipelines for different microservices (frontend, backend, ...) that need to end up in the same namespace (named after the git branch). One part of the solution would be to define one microservice as more important than the others (i.e. the backend), but this would have unwanted side effects:
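One way to accommodate this use case without promoting any single microservice, sketched under the assumption that a small, dedicated per-branch configuration can own the namespace (the `branch` variable and chart locations below are hypothetical):

```hcl
# Dedicated per-branch "infrastructure" configuration: the only place
# where the namespace is managed.
variable "branch" {
  type = string
}

resource "kubernetes_namespace" "branch" {
  metadata {
    name = var.branch
  }
}

# Each independent microservice pipeline (frontend, backend, ...) then only
# references the namespace by name and never manages it.
resource "helm_release" "backend" {
  name       = "backend"
  chart      = "backend"
  repository = "https://example.com/charts" # hypothetical chart repository
  namespace  = var.branch
}
```

This avoids declaring one microservice "more important" than the others, at the cost of one extra pipeline step that runs once per branch.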
Although I understand the sentiment from @legal90 that the service should not manage its own namespace, the reality is quite different, as @pierresteiner mentioned with his use case. Sometimes namespaces are used for purely organizational purposes, and IMO it's OK then if the service creates and manages its own namespace. But I agree, this is not the right place (repo/project) for a discussion or a fix, because this was functionality that Helm 2 provided, not the Terraform provider itself. It should be discussed in the Helm repository.
Thanks for opening this discussion @pierresteiner.
Yes, the provider calls out to the same package used by the Helm CLI, so you can expect the same behaviour.
For the moment the answer to this is to use the Terraform Kubernetes provider, or kubectl, to create the namespace prior to install, as @legal90 and @robinkb have suggested. There currently isn't a way to use Helm to create a release that manages its own namespace. However, it seems there is work in progress towards adding this. You can see this discussion for more details: helm/helm#6794
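For reference, if the work tracked in helm/helm#6794 eventually surfaces in the provider, a release managing its own namespace might look something like the sketch below; the `create_namespace` argument is an assumption about a later provider version and was not available at the time of this discussion:

```hcl
resource "helm_release" "superset" {
  name       = "superset"
  repository = "https://kubernetes-charts.storage.googleapis.com"
  chart      = "superset"
  namespace  = "superset"

  # Assumed argument mirroring `helm install --create-namespace`;
  # verify against the provider version you are actually running.
  create_namespace = true
}
```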
Thanks @jrhouston for the precise answer. I will track progress on that issue, then.
First of all, this is not really a bug; we are seeking guidance regarding a big difference between v2 & v3: the removal of automatic namespace creation (see linkerd/linkerd2#3211).
We (like certainly several others) currently deploy our microservices independently with Terraform (into a namespace based on the name of the branch). I don't see how this can be achieved now, because:
Does version 1.0.0 honor the removal of automatic namespace creation? We haven't tested it yet and found nothing explicit about it. Should that be the case, how can we mitigate the previously mentioned issue?