This repository has been archived by the owner on Jul 17, 2024. It is now read-only.

Issue upgrading tomcat 0.3.0 to 0.4.x #162

Closed
Nuru opened this issue Aug 14, 2020 · 30 comments
Labels
question Further information is requested

Comments

@Nuru

Nuru commented Aug 14, 2020

After using helm 2to3 convert <release>, the release is still not in a proper state for helm version 3 to use, because it is missing required metadata. If you re-apply the current chart and values, helm will fix it for you, but if you immediately try to update the release with something new, it fails with an error like:

Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. 
  Unable to continue with update: Deployment "web-api" in namespace "dev" exists and cannot be imported into the current release: 
  invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm";
  annotation validation error: missing key "meta.helm.sh/release-name": must be set to "web-api-dev"; 
  annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "dev"       

2to3 should automatically add the correct metadata when doing the convert. See Release Note at helm/helm#7649 (comment)
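
For anyone hitting this in the meantime, re-applying the exact chart and values that are currently deployed lets Helm 3 add the metadata itself, as mentioned above. A rough sketch (the release name, chart, version, and values file here are placeholders, not values from my environment):

# re-apply exactly what is already deployed so Helm 3 can fix the ownership metadata
helm3 upgrade <release> <chart> --version <currently-deployed-version> -f <current-values.yaml>

Only after that no-change upgrade succeeds should you attempt an upgrade that actually modifies resources.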

Note: This is the same starting place as #147, but that was closed when the OP found and fixed a different problem.

@dlipovetsky

@Nuru I could not reproduce the error by doing the following:

helm2 install --name foo stable/tomcat --version 0.4.0
helm3 2to3 convert foo
helm3 2to3 cleanup --release-cleanup --skip-confirmation
helm3 upgrade foo -v=10 --repo https://kubernetes-charts.storage.googleapis.com tomcat --version 0.4.1

The tomcat chart creates a Deployment and Service. I agree that 2to3 does not apply the meta.helm.sh/release-name and meta.helm.sh/release-namespace annotations to the Deployment or Service. I observed that the annotations were not present before I ran the upgrade command, and I observed that they were present after the upgrade command finished.

Can you please provide a sequence of commands that reproduces your error? (Alternatively, can you please explain why my sequence of commands would fail to reproduce the issue?)

Thank you!

@Nuru
Author

Nuru commented Aug 22, 2020

I published a script to add the needed metadata. After running helm 2to3 convert you can run the script against the release and it will do the update. It works by regenerating the manifest for the current release, modifying the metadata, and then using kubectl apply to apply the changes.
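
The rough shape of the approach, for anyone who wants to do it by hand rather than use the script (a sketch only, not necessarily how the published script does it; release name and namespace are illustrative, and it assumes the release has already been converted so helm3 get manifest works):

RELEASE=foo NAMESPACE=default
# regenerate the manifest for the current release and patch the Helm 3 ownership metadata onto the live objects
helm3 get manifest "$RELEASE" -n "$NAMESPACE" \
  | kubectl -n "$NAMESPACE" label -f - --overwrite app.kubernetes.io/managed-by=Helm
helm3 get manifest "$RELEASE" -n "$NAMESPACE" \
  | kubectl -n "$NAMESPACE" annotate -f - --overwrite \
      meta.helm.sh/release-name="$RELEASE" \
      meta.helm.sh/release-namespace="$NAMESPACE"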

@dlipovetsky I have not made an exhaustive or definitive study of exactly what causes the error. My impression is that if you use helm3 to apply the same chart with the same values as you last used with helm2, helm3 will simply make the metadata changes. The error arises when you try to make substantive changes to existing resources in the release at the same time.

You upgraded the stable/tomcat chart from version 0.4.0 to 0.4.1 using the same values. Try changing something substantial, like the internal port number, that changes the resources without wholesale replacing them. I have not tried this myself yet, but I expect this would produce the error:

helm2 install --name foo stable/tomcat --version 0.4.1
helm3 2to3 convert foo
helm3 2to3 cleanup --release-cleanup --skip-confirmation
helm3 upgrade foo --set service.internalPort=8800 -v=10 --repo https://kubernetes-charts.storage.googleapis.com tomcat --version 0.4.1

@Nuru
Author

Nuru commented Aug 23, 2020

@dlipovetsky so far I have not reproduced this under Kubernetes 1.16. Reviewing my logs I can see that previously this occurred under Kubernetes 1.15 when updating the apiVersion. So that would correspond to your test, except starting with Tomcat chart version 0.3.0:

helm2 install --name foo stable/tomcat --version 0.3.0
helm3 2to3 convert foo
helm3 2to3 cleanup --release-cleanup --skip-confirmation
helm3 upgrade foo -v=10 --repo https://kubernetes-charts.storage.googleapis.com tomcat --version 0.4.1

I have confirmed the above fails on a cluster running Kubernetes 1.15.10. I have confirmed that if the helm2 install is of 0.4.0 the upgrade to 0.4.1 succeeds even on Kubernetes 1.15. Unfortunately, you cannot install stable/tomcat chart version 0.3.0 in Kubernetes 1.16 because it uses API versions that are no longer available.

I have yet to find a chart with two versions that can both deploy on Kubernetes 1.16, one using Ingress apiVersion extensions/v1beta1 and the other networking.k8s.io/v1beta1, but that would be a similar situation you could test.

@hickeyma
Collaborator

@Nuru Thanks for raising this issue.

I will try my best to explain this by first providing some background on how it all fits together.

The meta.helm.sh annotations are new to Helm 3 (i.e. they were not in Helm 2, nor in Helm 3 prior to 3.2.0). The concept and the functionality to use them were added in PR helm/helm#7649 for Helm 3.2.0. The managed-by label already existed in Helm 2, but as a best-practice recommendation; it was not automatically added by Helm.

PR helm/helm#7649 was added in Helm 3.2.0 to provide functionality whereby orphaned Kubernetes resources (those not associated with a Helm release) in a cluster may, under certain circumstances, be associated with a Helm release. It is an optional capability and there have been some open issues with it, such as helm/helm#8350.

The helm-2to3 plugin's chief goal is that resources deployed by Helm 2 can then be managed by Helm 3. At its core is the mapping of the Helm 2 release object to a Helm 3 release object. In some circumstances it is a 1:1 mapping of properties and in other cases it requires some modification of the properties. This process includes annotations and labels, which are mapped as is. In other words, if they exist, they are mapped across as defined in the Helm 2 release object. This needs to be maintained as is, because it is part of the current state of the release. The plugin does NOT touch the deployed Kubernetes resources of the Helm release.
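
One way to see this for yourself on a test release (a sketch; the release, resource and namespace names are taken from the tomcat example in this issue and are only illustrative):

kubectl -n default get deployment tomcat -o yaml > before.yaml
helm3 2to3 convert tomcat
kubectl -n default get deployment tomcat -o yaml > after.yaml
diff before.yaml after.yaml   # should show no differences if the plugin leaves deployed resources untouched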

If you convert a release from Helm 2 to Helm 3 and then upgrade it using Helm 3 and its chart, it should upgrade as expected.

Re #147, the OP had a number of different issues which I tried to explain one by one, as they were independent of each other. His main problem seemed to be a CRD issue in the end, as he explained in #147 (comment). Sometimes a symptom shown by a Helm error is caused by a different, unrelated problem.

Getting back to your issue. Why does a resource already exist for you when you are trying to upgrade/install? That is the underlying question, and helm/helm#7649 is a potential solution if adopting the resource is what you want to do. If it is, then you need to add the meta.helm.sh annotations and the managed-by label to use this capability.
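
For the Deployment in your error message, adding them by hand would look something like this (a sketch, using the resource, namespace and release names from the error in the description):

kubectl -n dev label deployment web-api app.kubernetes.io/managed-by=Helm --overwrite
kubectl -n dev annotate deployment web-api --overwrite \
    meta.helm.sh/release-name=web-api-dev \
    meta.helm.sh/release-namespace=dev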

Let me know if this helps answer your issue.

@hickeyma hickeyma added the question Further information is requested label Aug 28, 2020
@Nuru
Author

Nuru commented Aug 31, 2020

@hickeyma wrote:

If you convert a release from Helm 2 to Helm 3 and then upgrade it using Helm 3 and its chart, it should upgrade as expected.

That is what I thought, but it did not happen that way, which is why I opened this issue.

Why does a resource already exist for you when you are trying to upgrade/install?

Why would a resource not exist when I am trying to upgrade? The usual case for an upgrade is to modify existing resources.

As I explained above, helm complains that a resource that should have been imported by helm 2to3 "exists and cannot be imported into the current release". From the error message, I concluded that this was because helm 2to3 failed to add required metadata (annotations), but perhaps the error lies elsewhere and this is just a symptom.

The key issue is that the upgrade of the converted release fails, and it has nothing to do with CRDs. I have given you a reproducible case under Kubernetes 1.15. The general issue seems to be with resources that were present in the helm2 version and would be deleted by the new chart applied with helm3. Somehow the conversion process is not sufficient for helm3 to be comfortable deleting those resources, because it is not sure it owns them.

@joejulian

joejulian commented Sep 3, 2020

Rephrasing, please correct me if I'm wrong.

A chart template implemented the label app.kubernetes.io/managed-by, which was not required in helm2 but is required in helm3, e.g. https://github.com/jetstack/cert-manager/blob/v0.10.0/deploy/charts/cert-manager/templates/deployment.yaml#L19.

Because that label was created with the value "Tiller", the 2to3 conversion keeps that label the same.

Helm 3 requires that label to be "Helm" or else it fails the ownership check when attempting to upgrade to a newer chart version.
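
A quick way to check what the label is currently set to on a converted release's resources (a sketch; the deployment name and namespace are placeholders):

kubectl -n <namespace> get deployment <name> \
    -o jsonpath='{.metadata.labels.app\.kubernetes\.io/managed-by}'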

@hickeyma
Collaborator

hickeyma commented Sep 3, 2020

@joejulian The label app.kubernetes.io/managed-by is not required by Helm in its charts, but it is recommended in both Helm 2 and Helm 3. It is set to the Release.Service property, which during deployment (install/upgrade) is Tiller on v2 and Helm on v3. The 2to3 plugin did not change this label because it is part of the rendered manifests, which are stored as one big blob of string. The reason for not changing anything in there is that it represents the last deployment of the chart, which was deployed on v2. The label does not need to be updated for migration. When the release is migrated and you run helm3 get manifest, you will see it still set to Tiller. If you upgrade the release using Helm 3, it will then be changed to Helm.

The label app.kubernetes.io/managed-by is used by the Helm binary (helm) in the functionality provided by PR helm/helm#7649 for Helm 3.2.0. This functionality is not mandatory for helm upgrades; it is optional. It is used in conjunction with the meta.helm.sh annotations to associate already-deployed Kubernetes resource(s) with a Helm release being deployed. This could be because the original Helm release was removed but the Kubernetes resources remain, now orphaned from any Helm release, and you want to make them part of the new release. It could also be a legitimate conflict because another Helm release uses the same Kubernetes resource name in the same namespace or globally.

This brings us to the issue raised by the OP (@Nuru ). Looking at the error message again from the description above:

Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. 
  Unable to continue with update: Deployment "web-api" in namespace "dev" exists and cannot be imported into the current release: 
  invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm";
  annotation validation error: missing key "meta.helm.sh/release-name": must be set to "web-api-dev"; 
  annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "dev"       

The Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. message is saying that when trying to upgrade the release, Helm finds a resource it does not expect to be present in the upgrade. Unfortunately, I have been unable to reproduce the issue and would appreciate steps where this can be reproduced consistently. I cannot explain why the resource exists outside of the release being updated, as it could depend on the cluster environment.

PR helm/helm#7649 functionality now kicks in as it provides the user with the option to associate this orphaned/unexplained resource to the release. For this to happen it is saying that the meta.helm.sh annotations and the label app.kubernetes.io/managed-by must be set accordingly.

I hope this helps shed more light on the issue; it is tricky to explain because the pieces involved are not one atomic entity but several separate parts.

@joejulian

I'm also thinking it may be unfixable. In the cert-manager chart linked above, they used that label as part of the selector. Any template that does this cannot be upgraded because the selector is immutable.

@Nuru
Author

Nuru commented Sep 12, 2020

@hickeyma wrote:

I have been unfortunately unable to reproduce the issue and would appreciate steps where this can be reproduced consistently.

I explained the steps to reproduce this consistently in #162 (comment). If you do not have a Kubernetes 1.15 cluster, you can create one on AWS with EKS for probably only a few dollars if you just leave it running long enough to run the tests. You can also create a local Kubernetes cluster using Docker Desktop, and I believe Docker Desktop version 2.2.0.5 for Mac or Windows will install Kubernetes 1.15 for you.
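
For example, with eksctl something like the following should give you a short-lived 1.15 test cluster (a sketch; the cluster name and node count are arbitrary, and it assumes eksctl is installed and that EKS still offers Kubernetes 1.15, which it did at the time of writing):

eksctl create cluster --name helm-2to3-test --version 1.15 --nodes 1
# ... run the reproduction steps ...
eksctl delete cluster --name helm-2to3-test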

Note that the chart used to reproduce the problem, stable/tomcat 0.3.0 -> 0.4.1, does not set or use the app.kubernetes.io/managed-by label. It does set heritage: Tiller.

@hickeyma
Collaborator

Thanks for responding @Nuru. I am able to reproduce it now. I am not sure why I was unable to previously; maybe it was because I didn't update the version in the upgrade. My apologies.

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:47:41Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.12", GitCommit:"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725", GitTreeState:"clean", BuildDate:"2020-09-14T08:06:34Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}

$ helm3 plugin list
NAME       	VERSION	DESCRIPTION                                                               
2to3       	0.6.0  	migrate and cleanup Helm v2 configuration and releases in-place to Helm v3

$ helm2 install --name tomcat stable/tomcat --version 0.3.0
NAME:   tomcat
LAST DEPLOYED: Thu Sep 24 11:51:10 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                     READY  STATUS    RESTARTS  AGE
tomcat-7dcf5886c9-2xxfx  0/1    Init:0/1  0         0s

==> v1/Service
NAME    TYPE          CLUSTER-IP    EXTERNAL-IP  PORT(S)       AGE
tomcat  LoadBalancer  10.111.21.76  <pending>    80:30244/TCP  0s

==> v1beta2/Deployment
NAME    READY  UP-TO-DATE  AVAILABLE  AGE
tomcat  0/1    1           0          0s


NOTES:
1. Get the application URL by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of by running 'kubectl get svc -w tomcat'
  export SERVICE_IP=$(kubectl get svc --namespace default tomcat -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  echo http://$SERVICE_IP:

$ helm 2to3 convert tomcat
2020/09/24 11:53:46 Release "tomcat" will be converted from Helm v2 to Helm v3.
2020/09/24 11:53:46 [Helm 3] Release "tomcat" will be created.
2020/09/24 11:53:46 [Helm 3] ReleaseVersion "tomcat.v1" will be created.
2020/09/24 11:53:46 [Helm 3] ReleaseVersion "tomcat.v1" created.
2020/09/24 11:53:46 [Helm 3] Release "tomcat" created.
2020/09/24 11:53:46 Release "tomcat" was converted successfully from Helm v2 to Helm v3.
2020/09/24 11:53:46 Note: The v2 release information still remains and should be removed to avoid conflicts with the migrated v3 release.
2020/09/24 11:53:46 v2 release information should only be removed using `helm 2to3` cleanup and when all releases have been migrated over.

$ helm3 upgrade tomcat stable/tomcat --version 0.4.1
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: Deployment "tomcat" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "tomcat"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"

@hickeyma
Collaborator

I am unsure yet why the Helm 3 upgrade is checking the meta.helm.sh annotations and the label app.kubernetes.io/managed-by, as they are not part of the stable/tomcat chart in either version. I have noticed that when you upgrade tomcat, it creates an additional pod and maintains the existing pod. Here is an example of the tomcat upgrade I did:

default              pod/tomcat-569ff5bf4d-4rsfr                         0/1     Pending   0          136m
default              pod/tomcat-7dcf5886c9-2xxfx                         1/1     Running   0          158m

I am unsure if this is expected behavior or not. This might be triggering the adopt functionality under the hood. This is something that will need to be investigated further in the Helm engine and with the tomcat Helm chart.

In the meantime, I recommend working around this issue by performing an upgrade which is identical to the previous deployment (install or upgrade). So in the case of stable/tomcat, this is what I did after conversion:

$ helm2 install --name tomcat stable/tomcat --version 0.3.0
NAME:   tomcat
LAST DEPLOYED: Thu Sep 24 11:51:10 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                     READY  STATUS    RESTARTS  AGE
tomcat-7dcf5886c9-2xxfx  0/1    Init:0/1  0         0s

==> v1/Service
NAME    TYPE          CLUSTER-IP    EXTERNAL-IP  PORT(S)       AGE
tomcat  LoadBalancer  10.111.21.76  <pending>    80:30244/TCP  0s

==> v1beta2/Deployment
NAME    READY  UP-TO-DATE  AVAILABLE  AGE
tomcat  0/1    1           0          0s

$ helm 2to3 convert tomcat
2020/09/24 11:53:46 Release "tomcat" will be converted from Helm v2 to Helm v3.
2020/09/24 11:53:46 [Helm 3] Release "tomcat" will be created.
2020/09/24 11:53:46 [Helm 3] ReleaseVersion "tomcat.v1" will be created.
2020/09/24 11:53:46 [Helm 3] ReleaseVersion "tomcat.v1" created.
2020/09/24 11:53:46 [Helm 3] Release "tomcat" created.
2020/09/24 11:53:46 Release "tomcat" was converted successfully from Helm v2 to Helm v3.
2020/09/24 11:53:46 Note: The v2 release information still remains and should be removed to avoid conflicts with the migrated v3 release.
2020/09/24 11:53:46 v2 release information should only be removed using `helm 2to3` cleanup and when all releases have been migrated over.

$ helm3 ls
NAME  	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART       	APP VERSION
tomcat	default  	1       	2020-09-24 11:51:10.962369258 +0000 UTC	deployed	tomcat-0.3.0	7.0        

$ helm3 upgrade tomcat stable/tomcat
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: Deployment "tomcat" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "tomcat"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"

$ helm3 upgrade tomcat stable/tomcat --version 0.3.0
Release "tomcat" has been upgraded. Happy Helming!
NAME: tomcat
LAST DEPLOYED: Thu Sep 24 12:12:35 2020
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of by running 'kubectl get svc -w tomcat'
  export SERVICE_IP=$(kubectl get svc --namespace default tomcat -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  echo http://$SERVICE_IP:

$ helm3 ls
NAME  	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART       	APP VERSION
tomcat	default  	2       	2020-09-24 12:12:35.785146187 +0000 UTC	deployed	tomcat-0.3.0	7.0  

$ helm3 upgrade tomcat stable/tomcat --version 0.4.1
Release "tomcat" has been upgraded. Happy Helming!
NAME: tomcat
LAST DEPLOYED: Thu Sep 24 12:13:01 2020
NAMESPACE: default
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of by running 'kubectl get svc -w tomcat'
  export SERVICE_IP=$(kubectl get svc --namespace default tomcat -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  echo http://$SERVICE_IP:

By repeating the helm upgrade, it syncs the converted release with Helm 3 adopt capability requirements.

I am hesitant for the plugin to make changes to the manifest blob as explained in #162 (comment) and #162 (comment). For now, I will open a doc PR against the plugin README to document this in the "Troubleshooting section".

Let me know what you think @Nuru.

@Nuru
Author

Nuru commented Sep 25, 2020

@hickeyma wrote:

I am hesitant for the plugin to make changes to the manifest blob as explained in #162 (comment) and #162 (comment). For now, I will open a doc PR against the plugin README to document this in the "Troubleshooting section".

Let me know what you think @Nuru.

This is a problem that people will likely run into before documentation will help. It is nothing specific to the tomcat chart, and for custom apps (what companies are building for their customers), it is often the case that repeating the exact helm2 deploy with helm3 is not practical.

As it stands, 2to3 does not fulfill its basic promise to convert a helm2 release to a helm3 release in this situation. I strongly encourage you to connect with whoever you need to on the Helm engine team to figure out the root cause of the issue and fix it either in helm or 2to3.

@hickeyma
Collaborator

This is a problem that people will likely run into before documentation will help.

If a person does, they can do the upgrade as described in #162 (comment). It will not block them.

It is nothing specific to the tomcat chart

I have yet to see a recurring pattern with other charts and in other scenarios. It is an edge case which will be investigated to see if it can be handled. The workaround is in place for the moment.

@Nuru
Author

Nuru commented Oct 6, 2020

@hickeyma

I am yet to see a recurring pattern with other charts and in other scenarios. It is an edge case which will be investigated to see if it can be handled. The workaround is in place for the moment.

The pattern I see is that the first update to a release after running helm2to3 (using helm3 for the first time on the release) will fail if the update would remove a resource. A common reason for an update to remove a resource is to replace it with a resource of the same general kind, but a newer API version. I believe this happens with tomcat as it replaces the apps/v1beta2 Deployment with the apps/v1 Deployment.
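
You can see this by rendering each chart version and looking at the apiVersion line just above the Deployment kind (a sketch; it assumes the stable repo is configured, as in the commands earlier in this thread):

helm3 template tomcat stable/tomcat --version 0.3.0 | grep -B1 'kind: Deployment'
helm3 template tomcat stable/tomcat --version 0.4.1 | grep -B1 'kind: Deployment'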

@hickeyma
Collaborator

hickeyma commented Oct 9, 2020

A common reason for an update to remove a resource is to replace it with a resource of the same general kind, but a newer API version. I believe this happens with tomcat as it replaces the apps/v1beta2 Deployment with the apps/v1 Deployment.

I think that is correct in that it is the reason for the unexpected situation in the cluster (i.e. the pod not being replaced), but I think it is Kubernetes itself that is not handling the patch request from Helm when helm upgrade is called. I will try to explain it from further investigation below.

@hickeyma
Collaborator

hickeyma commented Oct 9, 2020

Firstly, the Deployment template for v0.3.0:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "tomcat.fullname" . }}
  labels:
    app: {{ template "tomcat.name" . }}
[...]

and Deployment template for v0.4.1:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "tomcat.fullname" . }}
  labels:
    app: {{ template "tomcat.name" . }}

have different apiVersions, as you mentioned. Kubernetes 1.15 should be able to handle both the deprecated apps/v1beta2 and the supported apps/v1 APIs, as the deprecated API is not removed in that Kubernetes version.

On Kubernetes 1.15, when installing tomcat version 0.3.0 (using Helm 3 only), it deploys as expected with one pod. However, when upgrading to tomcat version 0.4.1, an additional pod is deployed, as follows:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:47:41Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.12", GitCommit:"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725", GitTreeState:"clean", BuildDate:"2020-09-14T08:06:34Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}

$ helm3 install tomcat stable/tomcat --version 0.3.0
NAME: tomcat
LAST DEPLOYED: Fri Oct  9 13:53:30 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of by running 'kubectl get svc -w tomcat'
  export SERVICE_IP=$(kubectl get svc --namespace default tomcat -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  echo http://$SERVICE_IP:

$ kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
tomcat-7dcf5886c9-mntpr   1/1     Running   0          100s

$ helm3 upgrade tomcat stable/tomcat --version 0.4.1
Release "tomcat" has been upgraded. Happy Helming!
NAME: tomcat
LAST DEPLOYED: Fri Oct  9 13:55:33 2020
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of by running 'kubectl get svc -w tomcat'
  export SERVICE_IP=$(kubectl get svc --namespace default tomcat -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  echo http://$SERVICE_IP:

$ kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
tomcat-569ff5bf4d-9hgdm   0/1     Pending   0          7s
tomcat-7dcf5886c9-mntpr   1/1     Running   0          2m10s

$ kubectl get deployments --all-namespaces
NAMESPACE            NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
default              tomcat                   1/1     1            1           13m
kube-system          coredns                  2/2     2            2           15d
kube-system          tiller-deploy            1/1     1            1           15d
local-path-storage   local-path-provisioner   1/1     1            1           15d

Two things seem strange to me here:

  • It didn't replace the previous deployment/pod. I would have expected Kubernetes to handle this on the fly.
  • The Deployment is apiVersion extensions/v1beta1 (which is neither of the apiVersions specified in the templates), as follows:
$ kubectl get deployment tomcat -o yaml | more
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
    meta.helm.sh/release-name: tomcat
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2020-10-09T13:53:31Z"
[..]
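
For what it is worth, on 1.15 the same Deployment object can be read back through any of the API versions the server still serves, which is why the apiVersion shown by a plain kubectl get need not match what the chart template specified. A sketch of checking this explicitly (the expected outputs in the comments are my assumption of how the API server responds):

kubectl get deployments.v1.apps tomcat -o jsonpath='{.apiVersion}{"\n"}'        # should print apps/v1
kubectl get deployments.v1beta2.apps tomcat -o jsonpath='{.apiVersion}{"\n"}'   # should print apps/v1beta2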

If I clean the previous deployment, install fresh with version 0.4.0 and then do an upgrade, it works as expected:

$ helm3 upgrade tomcat stable/tomcat --version 0.4.1
Release "tomcat" has been upgraded. Happy Helming!
NAME: tomcat
LAST DEPLOYED: Fri Oct  9 18:03:09 2020
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of by running 'kubectl get svc -w tomcat'
  export SERVICE_IP=$(kubectl get svc --namespace default tomcat -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  echo http://$SERVICE_IP:

$ kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
tomcat-569ff5bf4d-qb6rr   1/1     Running   0          3h12m

@hickeyma
Collaborator

hickeyma commented Oct 9, 2020

I think there will be an issue with any chart as follows:

  • where one version uses a deprecated APIVersion and another version uses a supported one
  • it is installed with the deprecated APIVersion and you then upgrade with the supported one in a Kubernetes cluster that still supports the deprecated API (e.g. 1.15)

Kubernetes will probably handle it in much the same manner as #162 (comment).

That is why I believe your issue is reproducible only as follows:

  • Kubernetes version which still supports deprecated API - in this case Deployment APIs in 1.15
  • Chart with deprecated API which is installed using Helm 2, converted to Helm 3 and then upgraded with chart with the supported API

@hickeyma
Collaborator

hickeyma commented Oct 9, 2020

My suggestion would be to protect users from trying to upgrade to updated APIs where the deprecated API is still supported, because of the way Kubernetes tries to handle it.
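
As a manual pre-flight check along those lines, one could diff the apiVersions rendered by the currently deployed chart version against those rendered by the target version before upgrading (a sketch in bash, using the tomcat versions from this issue):

diff \
  <(helm3 template tomcat stable/tomcat --version 0.3.0 | grep '^apiVersion:' | sort -u) \
  <(helm3 template tomcat stable/tomcat --version 0.4.1 | grep '^apiVersion:' | sort -u)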

@Nuru
Author

Nuru commented Oct 12, 2020

@hickeyma Your test in #162 (comment) is flawed because you used an unsupported version of kubectl (see kubectl version skew policy):

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:47:41Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.12", GitCommit:"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725", GitTreeState:"clean", BuildDate:"2020-09-14T08:06:34Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}

I suggest re-running your test with kubectl v1.16.15.

@hickeyma
Collaborator

@Nuru The test needs to be run in a cluster with Kubernetes 1.15, as the deprecated API is not removed in that version. Hence, I can show the issue happening when you try to upgrade a release that uses a deprecated API with a chart containing the new, supported API.

@hickeyma hickeyma changed the title Convert does not add required metadata Issue upgrading tomcat 0.3.0 to 0.4.x Nov 11, 2020
@hickeyma
Collaborator

Updated the issue title to better describe the issue. The analysis of the issue is described in #162 (comment).

The issue is not with the plugin but a side effect of how the Kubernetes API and the Helm chart interact. I am therefore closing the issue.

@rimusz
Collaborator

rimusz commented Nov 11, 2020

I agree with @hickeyma here as well, this plugin cannot cover all use cases.

@Nuru
Author

Nuru commented Nov 11, 2020

@rimusz I disagree that this is an edge case that should not be supported. At any given time, for any release of Kubernetes since at least 1.14, Kubernetes supports some deprecated APIs and some replacement APIs. This issue arises any time someone tries to simultaneously upgrade from helm 2 to helm 3 and from deprecated to newly supported APIs. These are likely to go together as part of generally preparing for an upgrade to a new version of Kubernetes.

@hickeyma
Collaborator

hickeyma commented Nov 12, 2020

I agree with you @Nuru that it is not an edge case and it should try and be supported. It is however not something that the plugin can solve as such.

It is an issue that occurs when you upgrade a release containing deprecated Kubernetes APIs with a new version of the tomcat chart that contains the newly supported APIs. The problem is that the previous pod is not updated and you end up with 2 versions of the pod.

I will re-open this and open an issue in Helm (sorry, I forgot to do this).

@hickeyma hickeyma reopened this Nov 12, 2020
@rimusz
Collaborator

rimusz commented Nov 12, 2020

I agree with @hickeyma here as well, this plugin cannot cover all use cases.
@Nuru I have updated my comment; the plugin has its limits as well.

I do not really agree with upgrading from helm 2 to helm 3 and from deprecated to newly supported APIs at the same time.
The recommended scenario is to make all upgrades with helm v2 first, using the latest chart version, and only then migrate to helm v3.
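
In other words, something like this order (a sketch, reusing the tomcat release and chart versions from this issue):

# bring the release fully up to date (including any newer chart APIs) while still on Helm v2
helm2 upgrade foo stable/tomcat --version 0.4.1
# only then migrate the release to Helm v3
helm3 2to3 convert foo
helm3 2to3 cleanup --release-cleanup --skip-confirmation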

@hickeyma
Collaborator

@Nuru I have raised helm/helm#9014 in Helm core.

@Nuru
Author

Nuru commented Nov 12, 2020

@rimusz

I do not really agreed on upgrade from helm 2 to helm 3 and from deprecated to newly supported APIs.
The recommend scenario is to make all upgrades with helm v2 first using the latest chart version, then only migrate to helm v3.

The problem will still occur if the next version of the chart upgrades the apiVersion.

@hickeyma I disagree with you changing the title of this issue. I was using the tomcat chart as an example. helm/helm#9014 (comment) cites "about half a dozen charts" that had this problem. I ran into the problem with several non-public charts that updated the Ingress apiVersion and supplied the Tomcat chart as a public example because I could not supply the private ones.

The error message:

Unable to continue with update: Deployment "web-api" in namespace "dev" exists and cannot be imported into the current release:

is from helm. Kubernetes does not have a concept of a "release". And it only happens with the conversion from helm v2 to v3.

@hickeyma
Collaborator

I disagree with you changing the title of this issue. I was using the tomcat chart as an example. helm/helm#9014 (comment) cites "about half a dozen charts" that had this problem. I ran into the problem with several non-public charts that updated the Ingress apiVersion and supplied the Tomcat chart as a public example because I could not supply the private ones.

I changed the title to better describe the problem. At this moment we are only able to reproduce this issue using the stable/tomcat chart. I can broaden the title when I receive other ways/charts of reproducing the issue.

The error message:

Unable to continue with update: Deployment "web-api" in namespace "dev" exists and cannot be imported into the current release:

is from helm. Kubernetes does not have a concept of a "release". And it only happens with the conversion from helm v2 to v3.

Yes, the error message is from Helm. Yes, you are seeing this error after a conversion. But the underlying problem, when you "peel back the onion", is fundamentally how Kubernetes resources containing deprecated APIs are upgraded by Kubernetes when helm upgrade is run. Unfortunately, the logic added in helm/helm#7649 tries to reconcile this additional resource (which should not exist) as part of the release. You see the error because the Helm 2 release does not contain the hooks (annotations and label) needed to reconcile it, and the error is expected because there is an errant resource in the cluster.

@hickeyma
Collaborator

The fundamental issue may have to do with just upgrading from one tomcat version to another: helm/helm#9014 (comment).

@hickeyma
Collaborator

Closing as repository is archived.
