
refact: namespace all labels in the helm charts #765

Closed
wants to merge 3 commits from the update-helm-labels branch

Conversation

wilmardo (Contributor)

Describe what this PR does

  • Removes the heritage and release labels
  • Renames:
    • app > app.kubernetes.io/name
    • chart > helm.sh/chart
    • component > app.kubernetes.io/component
  • Adds:
    • app.kubernetes.io/managed-by
    • app.kubernetes.io/instance
    • app.kubernetes.io/version

This results in the following labels:

    labels:
      app.kubernetes.io/component: provisioner
      app.kubernetes.io/instance: ceph-csi-rbd
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: ceph-csi-rbd
      app.kubernetes.io/version: canary
      helm.sh/chart: ceph-csi-rbd-1.3.0-canary
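
For illustration, a minimal sketch of how such labels could be generated from a chart helper template; the define and include names below are hypothetical and not necessarily what this PR uses:

    {{/* Common labels; "ceph-csi-rbd.name" is an assumed helper */}}
    {{- define "ceph-csi-rbd.labels" -}}
    app.kubernetes.io/name: {{ include "ceph-csi-rbd.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    {{- end -}}

The app.kubernetes.io/component label is workload-specific (provisioner, nodeplugin) and is set next to this helper in each template.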

Is there anything that requires special attention?

The change isn't backwards compatible: helm upgrade will fail since the selector labels are immutable once a Deployment or DaemonSet has been created:

# helm upgrade ceph-csi-rbd charts/ceph-csi-rbd --namespace storing
Error: UPGRADE FAILED: cannot patch "ceph-csi-rbd-nodeplugin" with kind DaemonSet: DaemonSet.apps "ceph-csi-rbd-nodeplugin" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/component":"nodeplugin", "app.kubernetes.io/instance":"ceph-csi-rbd", "app.kubernetes.io/managed-by":"Helm", "app.kubernetes.io/name":"ceph-csi-rbd", "app.kubernetes.io/version":"canary"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable && cannot patch "ceph-csi-rbd-provisioner" with kind Deployment: Deployment.apps "ceph-csi-rbd-provisioner" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/component":"provisioner", "app.kubernetes.io/instance":"ceph-csi-rbd", "app.kubernetes.io/managed-by":"Helm", "app.kubernetes.io/name":"ceph-csi-rbd", "app.kubernetes.io/version":"canary"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

This can be resolved by uninstalling the current release and redeploying with the same values, but that should be documented and this must be a major Helm chart release.
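
As an illustration (not part of this PR), the reinstall path could look roughly like this with Helm 3, reusing the release name and namespace from the error output above; the chart path and values file are assumptions:

    # Save the values of the running release so they can be reused
    helm get values ceph-csi-rbd --namespace storing > old-values.yaml
    # Remove the release that was deployed with the old labels
    helm uninstall ceph-csi-rbd --namespace storing
    # Reinstall the new chart version with the namespaced labels and the same values
    helm install ceph-csi-rbd charts/ceph-csi-rbd --namespace storing -f old-values.yaml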

Related issues

Fixes: #612

@Madhu-1 (Collaborator) commented Jan 2, 2020

If you are using the CephFS FUSE client or the rbd-nbd driver, this could be an issue, as we need to delete the DaemonSets of both CephFS and RBD (all mounts inside the DaemonSet containers will be gone and you will see "transport endpoint is not connected" issues), see #703.

We need to document the steps for how to do the upgrade in this case.
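
As a rough illustration of checking what would be affected before deleting the DaemonSets (the old label key component and the namespace storing are assumptions; adjust to your deployment):

    # List the cephfs/rbd nodeplugin pods and the nodes they run on
    kubectl get pods --namespace storing -l component=nodeplugin -o wide
    # Application pods whose volumes are served by a fuse/rbd-nbd mount on those
    # nodes will need to be restarted if the mounts are lost during the upgrade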

@wilmardo (Contributor, Author) commented Jan 14, 2020

@Madhu-1 Are the upgrade scenarios described in #770? Is there more documentation needed?

@Madhu-1 (Collaborator) commented Jan 14, 2020

@wilmardo we need to add a document for upgrading with the Helm charts.

@humblec (Collaborator) commented Mar 5, 2020

@wilmardo can you rebase this PR?

@wilmardo (Contributor, Author) commented Mar 5, 2020

@humblec Will do in the next couple of days :)

@humblec (Collaborator) commented Mar 10, 2020

@wilmardo I hope we are progressing on this :)

wilmardo force-pushed the update-helm-labels branch from b4b2e65 to 0ff7ca4 on March 10, 2020 11:28
wilmardo force-pushed the update-helm-labels branch from 0ff7ca4 to 90b18c3 on March 10, 2020 11:29
wilmardo changed the title from "refact: namespace all labels in the helm charts" to "WIP: refact: namespace all labels in the helm charts" on Mar 10, 2020
@wilmardo (Contributor, Author)

@Madhu-1 Thanks for the reminder, working on it as we speak. Rebased onto the current master but found some missed values while reviewing it myself; fixing those now!

@humblec (Collaborator) commented Mar 10, 2020

@wilmardo Thanks! :) If I'm correct, the last comment was meant for me.

wilmardo force-pushed the update-helm-labels branch 2 times, most recently from 23e4b7d to 7e0d0e0 on March 10, 2020 11:41
wilmardo changed the title from "WIP: refact: namespace all labels in the helm charts" to "refact: namespace all labels in the helm charts" on Mar 10, 2020
@humblec (Collaborator) commented Mar 12, 2020

@wilmardo maybe a little more effort to make the CI happy? :)

wilmardo force-pushed the update-helm-labels branch from 7e0d0e0 to 4dac8a0 on March 12, 2020 18:30
@wilmardo (Contributor, Author)

@humblec Done, didn't notice that the CI failed 👍

@humblec (Collaborator) commented Mar 13, 2020

@humblec Done, didn't notice that the CI failed +1

Cool, thanks. The PR looks good to me! LGTM.

@humblec (Collaborator) commented Mar 13, 2020

@Madhu-1 PTAL.

@Madhu-1 (Collaborator) commented Mar 13, 2020

This is a breaking change; we need an upgrade doc for this one and we cannot track it for 2.1.0.

nixpanic added the component/deployment (Helm chart, kubernetes templates and configuration Issues/PRs) label on Apr 18, 2020
nixpanic requested a review from Madhu-1 on April 23, 2020 12:28
@Madhu-1 (Collaborator) commented May 14, 2020

This is a breaking change; we need to see what we can do about it. @nixpanic, any suggestions from your side?

Madhu-1 added the breaking-change (This PR introduces a breaking change) label on May 14, 2020
@nixpanic (Member)

@wilmardo could you write down your recommendation for upgrading a current deployment to the new label format?

@humblec (Collaborator) commented May 14, 2020

@wilmardo ping .. revisit?

@wilmardo (Contributor, Author)

Yeah, busy with work and not that much time for open source at the moment.

I don't know if there is an easy upgrade path for this; Kubernetes treats these labels as immutable, so it will error when trying to upgrade.
I don't know if it is feasible for users to remove and reinstall?

@Madhu-1 (Collaborator) commented May 15, 2020

Yeah, busy with work and not that much time for open source at the moment.

I don't know if there is an easy upgrade path for this; Kubernetes treats these labels as immutable, so it will error when trying to upgrade.
I don't know if it is feasible for users to remove and reinstall?

Thanks for the reply @wilmardo, I have the same concern, as it's not feasible to reinstall the whole cluster.

@wilmardo (Contributor, Author)

Might be something for the 3.x.x release then?
I have deleted and redeployed my Ceph pods without problems in the past; since the PVs and PVCs keep existing for the current pods, I had no downtime.
It needs testing though, it has been some time since I did that, so my memory might be incorrect.

@Madhu-1 (Collaborator) commented May 15, 2020

Might be something for the 3.x.x release then?
I have deleted and redeployed my Ceph pods without problems in the past; since the PVs and PVCs keep existing for the current pods, I had no downtime.
It needs testing though, it has been some time since I did that, so my memory might be incorrect.

Looks like we can do it, but it will mean downtime for the user: application pods cannot use PVCs during the upgrade since we are deleting and recreating all the cephcsi pods. If we can add documentation for the upgrade, we can do it.
As you said, we need good testing for this, for both CephFS and RBD.
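
To make the testing mentioned here concrete, a rough sketch of what one might verify after deleting and redeploying both charts (the namespace storing is assumed from earlier in the thread; this is an illustration, not an official upgrade procedure):

    # After reinstalling both charts with the same values:
    kubectl get pods --namespace storing        # new provisioner/nodeplugin pods are Running
    kubectl get pvc --all-namespaces            # existing claims are still Bound
    kubectl get volumeattachment                # attachments for in-use volumes are still present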

Madhu-1 added this to the release-3.0.0 milestone on May 15, 2020
mergify bot (Contributor) commented May 21, 2020

This pull request now has conflicts with the target branch. Could you please resolve conflicts and force push the corrected changes? 🙏

@Madhu-1 (Collaborator) commented Jul 20, 2020

Moved this out of the release-v3.0.0 milestone.

Madhu-1 removed this from the release-3.0.0 milestone on Jul 20, 2020
stale bot commented Sep 26, 2020

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in a month if no further activity occurs. Thank you for your contributions.

stale bot added the wontfix (This will not be worked on) label on Sep 26, 2020
@Madhu-1 (Collaborator) commented Oct 12, 2020

Might be something for the 3.x.x release then?
I have deleted and redeployed my Ceph pods without problems in the past; since the PVs and PVCs keep existing for the current pods, I had no downtime.
It needs testing though, it has been some time since I did that, so my memory might be incorrect.

Looks like we can do it, but it will mean downtime for the user: application pods cannot use PVCs during the upgrade since we are deleting and recreating all the cephcsi pods. If we can add documentation for the upgrade, we can do it.
As you said, we need good testing for this, for both CephFS and RBD.

@wilmardo any comment on this one?

stale bot removed the wontfix (This will not be worked on) label on Oct 12, 2020
stale bot commented Dec 18, 2020

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in a month if no further activity occurs. Thank you for your contributions.

stale bot added the wontfix (This will not be worked on) label on Dec 18, 2020
stale bot commented Jan 24, 2021

This pull request has been automatically closed due to inactivity. Please re-open if these changes are still required.

stale bot closed this on Jan 24, 2021
Labels
  • breaking-change: This PR introduces a breaking change
  • component/deployment: Helm chart, kubernetes templates and configuration Issues/PRs
  • wontfix: This will not be worked on
Development

Successfully merging this pull request may close this issue: Update the helm labels to new app.kubernetes.io namespace