Opaque integer resources #76

Closed · 9 of 22 tasks
ConnorDoyle opened this issue Aug 24, 2016 · 16 comments
Labels
sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling.

Comments

@ConnorDoyle (Contributor) commented Aug 24, 2016

This feature provides a simple way to advertise some amount of arbitrary countable resource for a node in a Kubernetes cluster. Users can consume these resources by including them in pod specs, and the scheduler takes them into account when placing pods on nodes.

Issues:

Related:

Progress Tracker

  • Before Alpha
    • Write and maintain draft quality doc
      • During development keep a doc up-to-date about the desired experience of the feature and how someone can try the feature in its current state. Think of it as the README of your new feature and a skeleton for the docs to be written before the Kubernetes release. Paste link to Google Doc: http://goo.gl/IoKYP1, http://goo.gl/xu697K
    • Design Approval
      • Design Proposal. This goes under docs/proposals. Doing a proposal as a PR allows line-by-line commenting from community, and creates the basis for later design documentation. Paste link to merged design proposal here: PROPOSAL-NUMBER
      • Initial API review (if API). Maybe same PR as design doc. PR 31652
        • Any code that changes an API (/pkg/apis/...)
        • cc @kubernetes/api
      • Identify shepherd (your SIG lead and/or kubernetes-pm@googlegroups.com will be able to help you). My Shepherd is: @timothysc
        • A shepherd is an individual who will help acquaint you with the process of getting your feature into the repo, identify reviewers and provide feedback on the feature. They are not (necessarily) the code reviewer of the feature, or tech lead for the area.
        • The shepherd is not responsible for showing up to Kubernetes-PM meetings and/or communicating if the feature is on-track to make the release goals. That is still your responsibility.
      • Identify secondary/backup contact point. My Secondary Contact Point is: @davidopp
    • Write (code + tests + docs) then get them merged. PR 31652
      • Code needs to be disabled by default. Verified by code OWNERS
      • Minimal testing
      • Minimal docs
        • cc @kubernetes/docs on docs PR (docs PR 1783)
        • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off
        • New apis: Glossary Section Item in the docs repo: kubernetes/kubernetes.github.io
      • Update release notes
  • Before Beta
    • Testing is sufficient for beta
    • User docs with tutorials
      • Updated walkthrough / tutorial in the docs repo: kubernetes/kubernetes.github.io
      • cc @kubernetes/docs on docs PR
      • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off
    • Thorough API review
      • cc @kubernetes/api
  • Before Stable
    • docs/proposals/foo.md moved to docs/design/foo.md
      • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off
    • Soak, load testing
    • detailed user docs and examples
      • cc @kubernetes/docs
      • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off

FEATURE_STATUS is used for feature tracking and to be updated by @kubernetes/feature-reviewers.
FEATURE_STATUS: IN_DEVELOPMENT

More advice:

Design

  • Once you get LGTM from a @kubernetes/feature-reviewers member, you can check this checkbox, and the reviewer will apply the "design-complete" label.

Coding

  • Use as many PRs as you need. Write tests in the same or different PRs, as is convenient for you.
  • As each PR is merged, add a comment to this issue referencing the PRs. Code goes in the http://github.com/kubernetes/kubernetes repository,
    and sometimes http://github.com/kubernetes/contrib, or other repos.
  • When you are done with the code, apply the "code-complete" label.
  • When the feature has user docs, please add a comment mentioning @kubernetes/feature-reviewers and they will
    check that the code matches the proposed feature and design, and that everything is done, and that there is adequate
    testing. They won't do detailed code review: that already happened when your PRs were reviewed.
    When that is done, you can check this box and the reviewer will apply the "code-complete" label.

Docs

  • Write user docs and get them merged in.
  • User docs go into http://github.com/kubernetes/kubernetes.github.io.
  • When the feature has user docs, please add a comment mentioning @kubernetes/docs.
  • When you get LGTM, you can check this checkbox, and the reviewer will apply the "docs-complete" label.
@ConnorDoyle (Contributor, Author) commented:

cc @davidopp @balajismaniam

@ConnorDoyle (Contributor, Author) commented:

@davidopp could you please mark this for 1.5?

@davidopp added this to the v1.5 milestone Sep 15, 2016
@idvoretskyi (Member) commented Sep 29, 2016

@ConnorDoyle any progress on this feature? Which SIG is responsible for it?

@ConnorDoyle (Contributor, Author) commented:

@idvoretskyi the current progress is mostly discussed in the in-flight PR (31652). The feature spans sig-scheduling and sig-node; it's on the 1.5 roadmaps for both. Since most of the work is in the scheduler, @timothysc has volunteered to shepherd the feature. Please let me know if you need more info.

@davidopp added the sig/scheduling label Oct 1, 2016
@idvoretskyi (Member) commented:

@ConnorDoyle thank you.

@idvoretskyi added the sig/node label Oct 13, 2016
k8s-github-robot pushed a commit to kubernetes/kubernetes that referenced this issue Oct 29, 2016
Automatic merge from submit-queue

## [PHASE 1] Opaque integer resource accounting.

This change provides a simple way to advertise some amount of arbitrary countable resource for a node in a Kubernetes cluster. Users can consume these resources by including them in pod specs, and the scheduler takes them into account when placing pods on nodes. See the example at the bottom of the PR description for more info.

Summary of changes:

- Defines opaque integer resources as any resource with prefix `pod.alpha.kubernetes.io/opaque-int-resource-`.
- Prevent kubelet from overwriting capacity.
- Handle opaque resources in scheduler.
- Validate integer-ness of opaque int quantities in API server.
- Tests for above.

Feature issue: kubernetes/enhancements#76

Design: http://goo.gl/IoKYP1

Issues:

#28312
#19082

Related:

#19080

CC @davidopp @timothysc @balajismaniam 

**Release note**:
```release-note
Added support for accounting opaque integer resources.

Allows cluster operators to advertise new node-level resources that would be
otherwise unknown to Kubernetes. Users can consume these resources in pod
specs just like CPU and memory. The scheduler takes care of the resource
accounting so that no more than the available amount is simultaneously
allocated to pods.
```

## Usage example

```sh
$ echo '[{"op": "add", "path": "pod.alpha.kubernetes.io~1opaque-int-resource-bananas", "value": "555"}]' | \
> http PATCH http://localhost:8080/api/v1/nodes/localhost.localdomain/status \
> Content-Type:application/json-patch+json
```

```http
HTTP/1.1 200 OK
Content-Type: application/json
Date: Thu, 11 Aug 2016 16:44:55 GMT
Transfer-Encoding: chunked

{
    "apiVersion": "v1",
    "kind": "Node",
    "metadata": {
        "annotations": {
            "volumes.kubernetes.io/controller-managed-attach-detach": "true"
        },
        "creationTimestamp": "2016-07-12T04:07:43Z",
        "labels": {
            "beta.kubernetes.io/arch": "amd64",
            "beta.kubernetes.io/os": "linux",
            "kubernetes.io/hostname": "localhost.localdomain"
        },
        "name": "localhost.localdomain",
        "resourceVersion": "12837",
        "selfLink": "/api/v1/nodes/localhost.localdomain/status",
        "uid": "2ee9ea1c-47e6-11e6-9fb4-525400659b2e"
    },
    "spec": {
        "externalID": "localhost.localdomain"
    },
    "status": {
        "addresses": [
            {
                "address": "10.0.2.15",
                "type": "LegacyHostIP"
            },
            {
                "address": "10.0.2.15",
                "type": "InternalIP"
            }
        ],
        "allocatable": {
            "alpha.kubernetes.io/nvidia-gpu": "0",
            "cpu": "2",
            "memory": "8175808Ki",
            "pods": "110"
        },
        "capacity": {
            "alpha.kubernetes.io/nvidia-gpu": "0",
            "pod.alpha.kubernetes.io/opaque-int-resource-bananas": "555",
            "cpu": "2",
            "memory": "8175808Ki",
            "pods": "110"
        },
        "conditions": [
            {
                "lastHeartbeatTime": "2016-08-11T16:44:47Z",
                "lastTransitionTime": "2016-07-12T04:07:43Z",
                "message": "kubelet has sufficient disk space available",
                "reason": "KubeletHasSufficientDisk",
                "status": "False",
                "type": "OutOfDisk"
            },
            {
                "lastHeartbeatTime": "2016-08-11T16:44:47Z",
                "lastTransitionTime": "2016-07-12T04:07:43Z",
                "message": "kubelet has sufficient memory available",
                "reason": "KubeletHasSufficientMemory",
                "status": "False",
                "type": "MemoryPressure"
            },
            {
                "lastHeartbeatTime": "2016-08-11T16:44:47Z",
                "lastTransitionTime": "2016-08-10T06:27:11Z",
                "message": "kubelet is posting ready status",
                "reason": "KubeletReady",
                "status": "True",
                "type": "Ready"
            },
            {
                "lastHeartbeatTime": "2016-08-11T16:44:47Z",
                "lastTransitionTime": "2016-08-10T06:27:01Z",
                "message": "kubelet has no disk pressure",
                "reason": "KubeletHasNoDiskPressure",
                "status": "False",
                "type": "DiskPressure"
            }
        ],
        "daemonEndpoints": {
            "kubeletEndpoint": {
                "Port": 10250
            }
        },
        "images": [],
        "nodeInfo": {
            "architecture": "amd64",
            "bootID": "1f7e95ca-a4c2-490e-8ca2-6621ae1eb5f0",
            "containerRuntimeVersion": "docker://1.10.3",
            "kernelVersion": "4.5.7-202.fc23.x86_64",
            "kubeProxyVersion": "v1.3.0-alpha.4.4285+7e4b86c96110d3-dirty",
            "kubeletVersion": "v1.3.0-alpha.4.4285+7e4b86c96110d3-dirty",
            "machineID": "cac4063395254bc89d06af5d05322453",
            "operatingSystem": "linux",
            "osImage": "Fedora 23 (Cloud Edition)",
            "systemUUID": "D6EE0782-5DEB-4465-B35D-E54190C5EE96"
        }
    }
}
```
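
The PATCH above was issued with HTTPie (the `http` command). For reference, an equivalent request with curl, assuming the same local API server address and node name as in the example, would be roughly (a sketch, not part of the original PR):

```sh
# Sketch: the same JSON-Patch request issued with curl instead of HTTPie.
# Assumes the API server at localhost:8080 and the node localhost.localdomain
# from the example above. "~1" is the JSON-Pointer escape for "/" inside the
# resource name.
curl --header "Content-Type: application/json-patch+json" \
     --request PATCH \
     --data '[{"op": "add", "path": "/status/capacity/pod.alpha.kubernetes.io~1opaque-int-resource-bananas", "value": "555"}]' \
     http://localhost:8080/api/v1/nodes/localhost.localdomain/status
```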

After patching, the kubelet's next sync fills in allocatable:

```
$ kubectl get node localhost.localdomain -o json | jq .status.allocatable
```

```json
{
  "alpha.kubernetes.io/nvidia-gpu": "0",
  "pod.alpha.kubernetes.io/opaque-int-resource-bananas": "555",
  "cpu": "2",
  "memory": "8175808Ki",
  "pods": "110"
}
```

Create two pods, one that needs a single banana and another that needs a truck load:

```
$ kubectl create -f chimp.yaml
$ kubectl create -f superchimp.yaml
```
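
The manifests themselves are not included in the PR description. A minimal sketch of what chimp.yaml might contain, reconstructed from the `kubectl describe` output below (superchimp.yaml would differ only in requesting far more bananas, e.g. 10Ki):

```sh
# Sketch of chimp.yaml, reconstructed from the describe output below:
# requests of 250m CPU, 32Mi memory, 1 banana; limits of 500m CPU, 64Mi
# memory, 3 bananas; image nginx:1.10 serving on port 80.
cat <<EOF > chimp.yaml
apiVersion: v1
kind: Pod
metadata:
  name: chimp
spec:
  containers:
  - name: nginx
    image: nginx:1.10
    ports:
    - containerPort: 80
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
        pod.alpha.kubernetes.io/opaque-int-resource-bananas: "1"
      limits:
        cpu: 500m
        memory: 64Mi
        pod.alpha.kubernetes.io/opaque-int-resource-bananas: "3"
EOF
```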

Inspect the scheduler result and pod status:

```
$ kubectl describe pods chimp
Name:           chimp
Namespace:      default
Node:           localhost.localdomain/10.0.2.15
Start Time:     Thu, 11 Aug 2016 19:58:46 +0000
Labels:         <none>
Status:         Running
IP:             172.17.0.2
Controllers:    <none>
Containers:
  nginx:
    Container ID:       docker://46ff268f2f9217c59cc49f97cc4f0f085d5ac0e251f508cc08938601117c0cec
    Image:              nginx:1.10
    Image ID:           docker://sha256:82e97a2b0390a20107ab1310dea17f539ff6034438099384998fd91fc540b128
    Port:               80/TCP
    Limits:
      cpu:                                      500m
      memory:                                   64Mi
      pod.alpha.kubernetes.io/opaque-int-resource-bananas:   3
    Requests:
      cpu:                                      250m
      memory:                                   32Mi
      pod.alpha.kubernetes.io/opaque-int-resource-bananas:   1
    State:                                      Running
      Started:                                  Thu, 11 Aug 2016 19:58:51 +0000
    Ready:                                      True
    Restart Count:                              0
    Volume Mounts:                              <none>
    Environment Variables:                      <none>
Conditions:
  Type          Status
  Initialized   True 
  Ready         True 
  PodScheduled  True 
No volumes.
QoS Class:      Burstable
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath           Type            Reason                  Message
  ---------     --------        -----   ----                            -------------           --------        ------                  -------
  9m            9m              1       {default-scheduler }                                    Normal          Scheduled               Successfully assigned chimp to localhost.localdomain
  9m            9m              2       {kubelet localhost.localdomain}                         Warning         MissingClusterDNS       kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  9m            9m              1       {kubelet localhost.localdomain} spec.containers{nginx}  Normal          Pulled                  Container image "nginx:1.10" already present on machine
  9m            9m              1       {kubelet localhost.localdomain} spec.containers{nginx}  Normal          Created                 Created container with docker id 46ff268f2f92
  9m            9m              1       {kubelet localhost.localdomain} spec.containers{nginx}  Normal          Started                 Started container with docker id 46ff268f2f92
```

```
$ kubectl describe pods superchimp
Name:           superchimp
Namespace:      default
Node:           /
Labels:         <none>
Status:         Pending
IP:
Controllers:    <none>
Containers:
  nginx:
    Image:      nginx:1.10
    Port:       80/TCP
    Requests:
      cpu:                                      250m
      memory:                                   32Mi
      pod.alpha.kubernetes.io/opaque-int-resource-bananas:   10Ki
    Volume Mounts:                              <none>
    Environment Variables:                      <none>
Conditions:
  Type          Status
  PodScheduled  False 
No volumes.
QoS Class:      Burstable
Events:
  FirstSeen     LastSeen        Count   From                    SubobjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                    -------------   --------        ------                  -------
  3m            1s              15      {default-scheduler }                    Warning         FailedScheduling        pod (superchimp) failed to fit in any node
fit failure on node (localhost.localdomain): Insufficient pod.alpha.kubernetes.io/opaque-int-resource-bananas
```
@spiffxp (Member) commented Nov 18, 2016

@ConnorDoyle @idvoretskyi this needs an alpha-in-1.5, beta-in-1.5, or stable-in-1.5 label if it's going to be included in release notes for kubernetes 1.5; it's listed as "alpha" in this spreadsheet

@spiffxp (Member) commented Dec 1, 2016

I'm removing the team/SIG-Node label given that SIG-Scheduling is listed as the owner in the 1.5 feature spreadsheet; if we're using labels to describe areas of overlap, then we need something else in these issues to identify the owner

@spiffxp removed the sig/node label Dec 1, 2016
@idvoretskyi (Member) commented:

@ConnorDoyle can you confirm that this item targets beta in 1.6?

@ConnorDoyle (Contributor, Author) commented:

OIR is deprecated as of v1.8 in favor of Extended Resources (this is already in the v1.8 release notes draft).
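
For reference, Extended Resources replace the pod.alpha.kubernetes.io/opaque-int-resource- prefix with a fully qualified resource name outside the kubernetes.io domain, advertised through the same node-status PATCH mechanism shown earlier. A sketch of the banana example under the newer scheme (the example.com domain is purely illustrative and not from this thread):

```sh
# Sketch only: the same capacity advertised as an Extended Resource (v1.8+).
# A fully qualified name such as example.com/bananas replaces the
# opaque-int-resource- prefix; "~1" is the JSON-Pointer escape for "/".
curl --header "Content-Type: application/json-patch+json" \
     --request PATCH \
     --data '[{"op": "add", "path": "/status/capacity/example.com~1bananas", "value": "555"}]' \
     http://localhost:8080/api/v1/nodes/localhost.localdomain/status
```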

@zhangxiaoyu-zidif commented:

@ConnorDoyle could you give me some links about Extended Resources?
Many thanks. =)

@ConnorDoyle (Contributor, Author) commented:

@zhangxiaoyu-zidif so far the best source of information is the PR -- docs to come. kubernetes/kubernetes#48922

@ConnorDoyle (Contributor, Author) commented:

@zhangxiaoyu-zidif docs PR is up kubernetes/website#5399

@zhangxiaoyu-zidif commented:

@ConnorDoyle Thanks a lot!
It's very helpful. =)

@ConnorDoyle (Contributor, Author) commented:

As mentioned above, this feature was deprecated in v1.8.
The PR to remove it in v1.9 is here: kubernetes/kubernetes#55103

@idvoretskyi (Member) commented:

@ConnorDoyle am I right that we may close this issue now (no further progress on the feature is expected)?

@ConnorDoyle (Contributor, Author) commented:

Correct — closing
