
Inter-pod affinity/anti-affinity #60

Closed
22 tasks
aronchick opened this issue Jul 24, 2016 · 31 comments
Assignees
Labels
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • sig/scheduling: Categorizes an issue or PR as relevant to SIG Scheduling.
  • stage/beta: Denotes an issue tracking an enhancement targeted for Beta status.
Milestone

Comments

@aronchick
Contributor

aronchick commented Jul 24, 2016

Design Doc: https://github.com/kubernetes/kubernetes/blob/master/docs/design/podaffinity.md

  • e.g. "put these pods in zone us-central1-a"
  • the predicate and priority functions are done; it was alpha in 1.2, with no change in 1.3
  • remaining work (low-priority, since nobody has been asking for it) is to implement the "RequiredDuringExecution" option, which means evicting a pod if node labels or the pod's affinity/anti-affinity request change such that the pod's affinity/anti-affinity is no longer satisfied
  • In theory we could move it to Beta in 1.4, but I think we should leave it as alpha for two reasons: (1) get more people using it so we can get feedback, and (2) it shares the same annotation (scheduler.alpha.kubernetes.io/affinity) with inter-pod affinity/anti-affinity (see below; a sketch of that annotation follows this list), and we definitely need to keep that one in alpha in 1.4
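
For readers unfamiliar with the alpha mechanism, here is a minimal sketch of how pod anti-affinity is expressed through that annotation; the JSON field names follow the design doc and the contemporaneous user guide, and the pod name, app: store label, and image are purely illustrative.

    apiVersion: v1
    kind: Pod
    metadata:
      name: store-pod                      # illustrative name
      labels:
        app: store                         # illustrative label this rule spreads on
      annotations:
        scheduler.alpha.kubernetes.io/affinity: >
          {
            "podAntiAffinity": {
              "requiredDuringSchedulingIgnoredDuringExecution": [
                {
                  "labelSelector": {
                    "matchExpressions": [
                      {"key": "app", "operator": "In", "values": ["store"]}
                    ]
                  },
                  "topologyKey": "kubernetes.io/hostname"
                }
              ]
            }
          }
    spec:
      containers:
      - name: store
        image: redis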

Progress Tracker

  • Before Alpha
    • Write and maintain draft quality doc
      • During development keep a doc up-to-date about the desired experience of the feature and how someone can try the feature in its current state. Think of it as the README of your new feature and a skeleton for the docs to be written before the Kubernetes release. Paste link to Google Doc: DOC-LINK
    • Design Approval
      • Design Proposal. This goes under docs/proposals. Doing a proposal as a PR allows line-by-line commenting from community, and creates the basis for later design documentation. Paste link to merged design proposal here: PROPOSAL-NUMBER
      • Initial API review (if API). Maybe same PR as design doc. PR-NUMBER
        • Any code that changes an API (/pkg/apis/...)
        • cc @kubernetes/api
      • Identify shepherd (your SIG lead and/or kubernetes-pm@googlegroups.com will be able to help you). My Shepherd is: replace.me@replaceme.com (and/or GH Handle)
        • A shepherd is an individual who will help acquaint you with the process of getting your feature into the repo, identify reviewers and provide feedback on the feature. They are not (necessarily) the code reviewer of the feature, or tech lead for the area.
        • The shepherd is not responsible for showing up to Kubernetes-PM meetings and/or communicating if the feature is on-track to make the release goals. That is still your responsibility.
      • Identify secondary/backup contact point. My Secondary Contact Point is: replace.me@replaceme.com (and/or GH Handle)
    • Write (code + tests + docs) then get them merged. ALL-PR-NUMBERS
      • Code needs to be disabled by default. Verified by code OWNERS
      • Minimal testing
      • Minimal docs
        • cc @kubernetes/docs on docs PR
        • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off
        • New apis: Glossary Section Item in the docs repo: kubernetes/kubernetes.github.io
      • Update release notes
  • Before Beta
    • Testing is sufficient for beta
    • User docs with tutorials
      • Updated walkthrough / tutorial in the docs repo: kubernetes/kubernetes.github.io
      • cc @kubernetes/docs on docs PR
      • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off
    • Thorough API review
      • cc @kubernetes/api
  • Before Stable
    • docs/proposals/foo.md moved to docs/design/foo.md
      • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off
    • Soak, load testing
    • detailed user docs and examples
      • cc @kubernetes/docs
      • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off

FEATURE_STATUS is used for feature tracking and to be updated by @kubernetes/feature-reviewers.
FEATURE_STATUS: IN_DEVELOPMENT

More advice:

Design

  • Once you get LGTM from a @kubernetes/feature-reviewers member, you can check this checkbox, and the reviewer will apply the "design-complete" label.

Coding

  • Use as many PRs as you need. Write tests in the same or different PRs, as is convenient for you.
  • As each PR is merged, add a comment to this issue referencing the PRs. Code goes in the http://github.com/kubernetes/kubernetes repository,
    and sometimes http://github.com/kubernetes/contrib, or other repos.
  • When you are done with the code, apply the "code-complete" label.
  • When the feature has user docs, please add a comment mentioning @kubernetes/feature-reviewers and they will
    check that the code matches the proposed feature and design, and that everything is done, and that there is adequate
    testing. They won't do detailed code review: that already happened when your PRs were reviewed.
    When that is done, you can check this box and the reviewer will apply the "code-complete" label.

Docs

  • Write user docs and get them merged in.
  • User docs go into http://github.com/kubernetes/kubernetes.github.io.
  • When the feature has user docs, please add a comment mentioning @kubernetes/docs.
  • When you get LGTM, you can check this checkbox, and the reviewer will apply the "docs-complete" label.
@aronchick aronchick added this to the v1.4 milestone Jul 24, 2016
@idvoretskyi
Member

cc @kubernetes/sig-scheduling

@idvoretskyi idvoretskyi added the sig/scheduling label Jul 25, 2016
@jberkus
Contributor

jberkus commented Jul 26, 2016

One thing which might make a common case of anti-affinity simpler is to allow expansion of the "spread" concept to an arbitrary label. That is, if I could say:

spread: { type: database }

That would let me express the idea of "don't run a pod with type: database on a node with any other pod of type: database", and thus allow a very simple way of expressing "don't put two Postgres pods on the same node, and don't put them on the same node as Cassandra or MySQL".

I'd expect that there are a number of cases where a specific class of applications tends to use the same resources. For example, I can imagine not wanting two busy http routers to go on the same node due to network competition, even if one is HAProxy and the other is Nginx.

@jberkus
Contributor

jberkus commented Jul 26, 2016

... continued:

One refinement of this is that I can imagine wanting user-controllable "weak" vs. "hard" spread rules. For example, in most of my Postgres deployments, I would rather be short one or two pods than put two Postgres pods on the same machine (a hard rule). On the other hand, for Etcd, I could imagine saying "don't put two pods from this class on the same node if you can help it", which would be a soft rule.

@davidopp
Member

Both of the things you mentioned are supported. See the design doc linked above (it originally had the wrong URL and I fixed it last night, so if you already looked, you may have read the wrong doc).

@jberkus
Contributor

jberkus commented Jul 26, 2016

Ah, I read the design doc and I couldn't find that particular feature. Keywords/lines?

@davidopp
Member

"don't run a pod with type: database on a node with any other pod of type: database"

See "Can only schedule P onto nodes that are running pods that satisfy P1. (Assumes all nodes have a label with key node and value specifying their node name.)". Then substitute

  • P1 is a label selector that expresses "pod of type: database"
  • Can only -> Cannot (by using PodAntiAffinity instead of PodAffinity)

"weak" vs. "hard" spread rule.

See the comment for PreferredDuringSchedulingIgnoredDuringExecution (that's the "soft" flavor), as compared to the other two, in the PodAffinity/PodAntiAffinity types in the API section of the doc.
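
To make the mapping concrete, here is a rough sketch of both flavors as they would appear in a pod spec's affinity section, using the PodAffinityTerm fields from the design doc; the type: database and type: etcd selectors are jberkus's examples, and the hostname topology key expresses "same node".

    affinity:
      podAntiAffinity:
        # hard rule: never co-schedule this pod with another "type: database" pod on the same node
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - {key: type, operator: In, values: ["database"]}
          topologyKey: kubernetes.io/hostname
        # soft rule: avoid nodes already running a "type: etcd" pod, if possible
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - {key: type, operator: In, values: ["etcd"]}
            topologyKey: kubernetes.io/hostname

At the time of this comment the same content would go into the scheduler.alpha.kubernetes.io/affinity annotation shown earlier rather than a first-class field.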

@jberkus
Contributor

jberkus commented Jul 26, 2016

Keen, thanks!

@alex-mohr

@davidopp says this is done.

@davidopp davidopp modified the milestones: v1.5, v1.4 Aug 17, 2016
@davidopp
Member

Sorry, I was misremembering what this issue is; the part of it described in #51 is done, but this one was not intended to be finished in 1.4. I've moved it to the 1.5 milestone.

@ivan4th

ivan4th commented Sep 20, 2016

Trying to implement pod (anti)affinity for DaemonSets too. Someone PTAL: kubernetes/kubernetes#31136

@davidopp
Member

davidopp commented Oct 1, 2016

Goal for 1.5 is to move this to Beta. More details in kubernetes/kubernetes#25319

@timothysc
Member

/cc @rrati @jayunit100

@idvoretskyi
Member

@wojtek-t can you clarify at which stage this feature is going to be delivered in 1.5? @davidopp has indicated beta, but in this conversation kubernetes/kubernetes#31136 (comment) I see some comments raising concerns.

@wojtek-t
Member

@idvoretskyi - most probably it won't get to beta, but that's not a final decision, from what I know.

@davidopp
Member

It's not going to be beta. There are a few features we recently decided to remove from the set we were going to move to beta in 1.5. I'll update the feature bugs shortly.

@timothysc
Member

The details can be found here: kubernetes/kubernetes#30819 and here: kubernetes/kubernetes#34508

The general gist is: annotations as a mechanism for alpha-to-beta-to-GA API promotion have a number of issues, and @kubernetes/sig-api-machinery is working on a "happy path", which is still TBD.

@davidopp
Member

Yes, what @timothysc said. I'm removing the beta-in-1.5 label and the 1.5 milestone.

@davidopp davidopp modified the milestones: next-milestone, v1.5 Oct 18, 2016
This was referenced Oct 18, 2016
@jimmycuadra

I have a use case for this feature which I don't think is covered by the current design, but please correct me if I'm wrong.

Imagine a simple example cluster with two nodes. I want to create deployments in this cluster with two pod replicas each. I want to require that the two pods are not on the same node. I can do this with pod anti-affinity based on a label like app=foo, but when I edit the deployment, creating a new replica set, the new pods can't be scheduled, because each node already has a pod with the label app=foo. I would have to change the deployment's labels and affinity rules each time I deploy.

What I really want is a way to require that pods with the same labels and the same pod-template-hash don't end up on the same node, but I don't think there's a way to express that in the current affinity system because there's no operator for "equal to the value of that label for this pod". In other words, I'd have to know the value of the pod-template-hash in advance somehow.

@davidopp
Member

davidopp commented Dec 4, 2016

My understanding of the way Deployments work for rolling update is that a second RS is created, initially with 0 replicas, and then the first RS is scaled down as the second RS is scaled up. So the total number of replicas across the two RSes is 2, except perhaps for transient conditions. Initially both are in the "old version," then one is in the "new version" and one is in the "old version", and finally both are in the "new version."

@smarterclayton
Contributor

smarterclayton commented Dec 5, 2016 via email

@davidopp
Member

davidopp commented Dec 5, 2016

Ah, thanks for the explanation.

As a workaround, could you put a pod label in your podTemplate with key "version" (or "generation", or something like that) and a value that is initially 0, plus a corresponding pod anti-affinity annotation with the same key/value pair, and then bump both values (label and anti-affinity annotation) each time you modify the podTemplate? The value could be the hash of everything in the podTemplate except this one field, in which case I think it's basically equivalent to the feature @jimmycuadra requested. (Though a simple version number you bump on each modification is simpler.)
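
A minimal sketch of that workaround as a Deployment, assuming the key is called version and using the field form of the affinity API (at the time the same content would go into the scheduler.alpha.kubernetes.io/affinity annotation); the apiVersion, the app: foo labels, names, and image are illustrative:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: foo
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: foo
      template:
        metadata:
          labels:
            app: foo
            version: "0"          # bump on every podTemplate change, or set to a hash of the rest of the template
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                  - {key: app, operator: In, values: ["foo"]}
                  - {key: version, operator: In, values: ["0"]}   # must be kept in sync with the label above
                topologyKey: kubernetes.io/hostname
          containers:
          - name: foo
            image: nginx

Because old and new pods then differ in the version value, the anti-affinity rule only repels pods from the same template revision, which is roughly the behavior @jimmycuadra asked for.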

@davidopp
Member

We will be moving this feature to beta in 1.6. Tracking issue is
kubernetes/kubernetes#25319

Current user guide documentation is here

@jimmycuadra

Any chance of the use case I mentioned being part of the roadmap for the stable release? The suggested workaround might be prone to error. It'd be great to have the server aware of the user's intent. If not, would this be considered for a future iteration of this API? In that case, should I open a new issue somewhere to track it?

@davidopp
Member

You can open a feature request in the kubernetes/kubernetes repo and link it to this issue. We could consider it if enough people want it. Personally I'd prefer if Deployment controller managed the label changes (i.e. automate the "workaround") and we didn't change the API for pod (anti-)affinity.

@ivan4th

ivan4th commented Jan 20, 2017

Given that inter-pod (anti-)affinity is going to be beta soon, can we get back to kubernetes/kubernetes#34543, maybe? It's a quirk (an unneeded dependency) that makes it hard to move inter-pod affinity into General Predicates, for instance.

@davidopp
Member

davidopp commented Jan 20, 2017

kubernetes/kubernetes#34543 (and moving to General Predicates) isn't an API change, so it's not strictly necessary for beta (i.e. it can be done after moving to beta).

Sorry we haven't reviewed that PR yet. We're working on getting more people up to speed on the scheduler code, but right now we only have the bandwidth to review things that are critical or trivial. I hope we'll get to it in the next couple of weeks.

Thanks for your patience...

@idvoretskyi idvoretskyi added the stage/beta label Jan 26, 2017
@idvoretskyi
Member

@davidopp any update on this feature? Docs and release notes are required (please provide them in the features spreadsheet).

@davidopp
Member

davidopp commented Mar 8, 2017

Updated spreadsheet with release note and link to documentation.

alena1108 pushed a commit to alena1108/kubernetes-package that referenced this issue Jul 25, 2017:
  "to workaround kubernetes/enhancements#60 (comment) when pods with anti affinity fail to be upgraded"
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Dec 21, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Jan 20, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

ingvagabund pushed a commit to ingvagabund/enhancements that referenced this issue Apr 2, 2020
Reorganize files into appropriate directories