
Support syncing a set of labels from MachineDeployment/MachineSet/Machine to Nodes #493

Closed
sidharthsurana opened this issue Sep 6, 2018 · 108 comments · Fixed by #7331
Labels
  • area/api: Issues or PRs related to the APIs.
  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • kind/design: Categorizes issue or PR as related to design.
  • kind/proposal: Issues or PRs related to proposals.
  • lifecycle/active: Indicates that an issue or PR is actively being worked on by a contributor.
  • priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
  • triage/accepted: Indicates an issue or PR is ready to be actively worked on.

Comments

@sidharthsurana
Contributor

This issue is to add the capability of defining user-defined labels, annotations, and taints that are automatically created on the resulting nodes.

For example: the following MachineSet, when realized on any provider, should result in the two user-defined labels latency-sensitive: yes and service-class: gold being created on the nodes that belong to this MachineSet.

apiVersion: cluster.k8s.io/v1alpha1
kind: MachineSet
metadata:
  name: gold-workers
spec:
  replicas: 3
  selector:
    matchLabels:
      node-type: worker-node
  template:
    metadata:
      labels:
        node-type: worker-node
    spec:
      metadata:
        labels:
          latency-sensitive: "yes"  # quoted so YAML parses it as a string, not a boolean
          service-class: gold
      providerConfig:
      ...
      ...

The same mechanics should be extended for annotations as well as taints.
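As a rough sketch of how the same spec.metadata approach could carry those, the fragment below extends the template's spec with an annotations map and a taints list; these fields and values are illustrative only, not a settled API:

# spec.template.spec of the MachineSet above, extended with annotations and taints.
# Everything beyond the labels shown earlier is illustrative, not a confirmed API.
metadata:
  labels:
    latency-sensitive: "yes"
    service-class: gold
  annotations:
    example.com/maintenance-window: "sun-02:00"   # hypothetical annotation
taints:                                            # hypothetical taint list
  - key: dedicated
    value: gold
    effect: NoSchedule
providerConfig:
...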

@dgoodwin
Contributor

dgoodwin commented Sep 6, 2018

This would be nice, we had to write a little controller ourselves to get those applied.

@derekwaynecarr
Contributor

The list of labels, taints, and annotations needs to be additive (not authoritative).
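As an illustration of additive semantics (the node name and label values below are hypothetical): a Node that already carries labels set at registration time keeps them, and the sync only merges the machine-defined keys in.

# Node before sync: labels set by the kubelet at registration time.
apiVersion: v1
kind: Node
metadata:
  name: gold-workers-abc12          # hypothetical node name
  labels:
    kubernetes.io/hostname: gold-workers-abc12
    topology.kubernetes.io/zone: us-east-1a
---
# Node after an additive sync: the machine-defined labels are merged in,
# and nothing that was already on the Node is removed or overwritten.
apiVersion: v1
kind: Node
metadata:
  name: gold-workers-abc12
  labels:
    kubernetes.io/hostname: gold-workers-abc12
    topology.kubernetes.io/zone: us-east-1a
    latency-sensitive: "yes"
    service-class: gold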

@vikaschoudhary16
Contributor

@dgoodwin

This would be nice, we had to write a little controller ourselves to get those applied.

Wondering what the issue is with extending the existing machine controller in cluster-api to perform this task as well in its reconcile loop?

@roberthbailey
Contributor

/assign @hardikdr

@enxebre
Member

enxebre commented Jan 25, 2019

Related to #493 and #658. Also for reference kubernetes/kubernetes#73097

@ncdc
Contributor

ncdc commented Feb 28, 2019

@hardikdr are you currently working on this?

@sidharthsurana do you believe this is a hard requirement for v1alpha1, or could it slip to v1alpha2?

@sidharthsurana
Contributor Author

@ncdc I don't think this is a hard requirement from the cluster-api point of view for the v1alpha1 milestone, given that individual provider implementations can implement these independently as well. I opened this issue to see if we can figure out a provider-agnostic way of implementing this behavior.

@detiber
Member

detiber commented Feb 28, 2019

/milestone Next

@k8s-ci-robot modified the milestones: v1alpha1, Next Feb 28, 2019
@hardikdr
Member

hardikdr commented Apr 4, 2019

@ncdc Yes, I have started looking into it; I will soon provide a proposal doc for it.

@jichenjc
Contributor

Can #881 also be covered by this?

@gyliu513
Contributor

@hardikdr What is the progress on this? If you are not working on it, can I take it over?

@gyliu513
Contributor

@qiujian16 @xunpan @clyang82

@vincepri
Member

/area api

@k8s-ci-robot added the area/api label Jun 10, 2019
@hardikdr
Member

@gyliu513 I had prepared and presented the proposal doc, but there was no conclusion. The doc also has to be converted into the defined template.
I am more than happy to collaborate with you to push it forward.

@gyliu513
Contributor

Thanks @hardikdr, the doc looks good ;-)

@vincepri @detiber can you please post some comments on the design doc as well? I think this is a very important project for cluster lifecycle management.

@detiber
Member

detiber commented Jun 14, 2019

The linked doc proposes full management of Labels, Annotations, and Taints for Machines. It might be better to initially target only the initial state (Labels and Taints are already supported in this manner when using kubeadm for bootstrapping).
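For reference, a rough sketch of the kubeadm bootstrap-time approach mentioned above; the KubeadmConfigTemplate below uses a later bootstrap API version (bootstrap.cluster.x-k8s.io/v1beta1) and illustrative label and taint values:

apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: gold-workers
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          # Labels the kubelet applies when the node registers.
          kubeletExtraArgs:
            node-labels: "latency-sensitive=yes,service-class=gold"
          # Taints applied at registration time.
          taints:
            - key: dedicated
              value: gold
              effect: NoSchedule

This only covers the initial state: changing these values later does not update labels or taints on nodes that have already registered.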

@k8s-ci-robot added this to the v1.2 milestone Feb 3, 2022
@Karthik-K-N
Contributor

Hi team, we are waiting to use this feature. We also want this capability so that we can bring up a cluster with predefined labels on the nodes, where the label values are derived from some internal computation or VM/cloud-specific details.

@Arvinderpal If needed, I would like to contribute to it. Please let me know. Thank you

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label May 11, 2022
@sbueringer
Member

/remove-lifecycle stale
/lifecycle active

@k8s-ci-robot added the lifecycle/active label and removed the lifecycle/stale label May 11, 2022
@fabriziopandini added the triage/accepted label Jul 29, 2022
@fabriziopandini removed this from the v1.2 milestone Jul 29, 2022
@fabriziopandini removed the triage/accepted label Jul 29, 2022
@fabriziopandini
Member

/triage accepted

@deepakpunjabi

@hardikdr was this merged into CAPI?

@hardikdr
Member

hardikdr commented Dec 6, 2022

@sbueringer
Member

/reopen

given that the issue is only resolved after the implementation merges

@k8s-ci-robot reopened this Dec 21, 2022
@k8s-ci-robot
Contributor

@sbueringer: Reopened this issue.

In response to this:

/reopen

given that the issue is only resolved after the implementation merges

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@fabriziopandini
Member

given that the issue is only resolved after the implementation merges

What about linking the issues where we are tracking the implementation work breakdown and closing this one? Otherwise we are tracking this in two places.

@sbueringer
Member

sbueringer commented Dec 22, 2022

I wasn't aware of the umbrella issue at that time :)

Fine for me to close. Looks like the umbrella issue links directly to the PR instead of to this issue. But we can either just have it as a sub-task in the umbrella issue or create a new issue if we really need one.

/close
as it's covered by #7731 now

@k8s-ci-robot
Contributor

@sbueringer: Closing this issue.

In response to this:

I wasn't aware of the umbrella issue at that time :)

Fine for me to close. Looks like the umbrella issue links directly to the PR instead of to this issue. But we can either just have it as a sub-task in the umbrella issue or create a new one if necessary.

/close
as it's covered by #7731 now

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
