
downwardAPI volumes should also have hugepages-<size> exposed to the pod through resourceFieldRef. #85148

Closed
DeySouvik opened this issue Nov 12, 2019 · 15 comments · Fixed by #86102
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/node Categorizes an issue or PR as relevant to SIG Node. sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. triage/unresolved Indicates an issue that can not or will not be resolved.

Comments

@DeySouvik

What would you like to be added:
Currently, resourceFieldRef only exposes the cpu, memory, and ephemeral-storage limits and requests to the pod. Hugepages information is not retrievable through the same mechanism.
Pod manifest:
apiVersion: v1
kind: Pod
metadata:
  name: isbc
  namespace: development
  labels:
    name: isbc
  annotations:
    k8s.v1.cni.cncf.io/networks: pkt0-conf, pkt0-conf, pkt1-conf
spec:
  nodeName: "node2"
  hostname: "vsbc1"
  containers:
  - name: isbc
    image: artifact1.eng.sonusnet.com:8443/sbx-docker-prod-westford/isbc/sbc-image:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/bash", "-c", "sleep 2000000000000"]
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
      allowPrivilegeEscalation: false
    resources:
      limits:
        hugepages-2Mi: "2048Mi"
        memory: "16Gi"
        cpu: "8"
        ephemeral-storage: "100Gi"
    volumeMounts:
    - name: hugepage
      mountPath: /hugepages
    - name: podinfo
      mountPath: /etc/podinfo
      readOnly: false
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
  - name: podinfo
    downwardAPI:
      items:
      - path: "cpu"
        resourceFieldRef:
          containerName: isbc
          resource: limits.cpu
      - path: "memory"
        resourceFieldRef:
          containerName: isbc
          resource: limits.memory
          divisor: 1Mi
      - path: "storage"
        resourceFieldRef:
          containerName: isbc
          resource: limits.ephemeral-storage
          divisor: 1Mi
      - path: "hugepages"
        resourceFieldRef:
          containerName: isbc
          resource: limits.hugepages-2Mi
          divisor: 1Mi
      - path: "uid"
        fieldRef:
          fieldPath: metadata.uid

The above manifest fails validation because the value "limits.hugepages-2Mi" is not supported:
[root@node1 ~]# kubectl create -f abc_pod.yml
The Pod "isbc" is invalid:
* spec.volumes[1].downwardAPI.resourceFieldRef.resource: Unsupported value: "limits.hugepages-2Mi": supported values: "limits.cpu", "limits.ephemeral-storage", "limits.memory", "requests.cpu", "requests.ephemeral-storage", "requests.memory"
* spec.containers[0].volumeMounts[0].name: Not found: "podinfo"
[root@node1 ~]#
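
For completeness, the env-var form of the downward API uses the same resourceFieldRef and is subject to the same validation. If hugepages were accepted there as well, the equivalent container env entry would presumably look like this (a sketch only; the variable name is made up):

    env:
    - name: HUGEPAGES_2MI_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: isbc
          resource: limits.hugepages-2Mi
          divisor: 1Mi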

Why is this needed:
We run DPDK and other real-time applications that need to know how many hugepages are allocated to the pod. Both the limit and the request for hugepages-<size> should be reported back to the pod through resourceFieldRef in the downwardAPI. This will let the applications in the pod allocate the proper number of hugepages.
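
As a minimal sketch of how an application inside the pod could consume the value once exposed, assuming the /etc/podinfo mount from the manifest above (my-dpdk-app is a hypothetical binary; --socket-mem is a standard DPDK EAL option that takes megabytes):

    #!/bin/sh
    # The "hugepages" file holds limits.hugepages-2Mi divided by the 1Mi
    # divisor, i.e. the hugepage budget in MiB.
    HUGEPAGES_MB=$(cat /etc/podinfo/hugepages)
    # Pass the budget to the (hypothetical) DPDK application.
    exec my-dpdk-app --socket-mem "${HUGEPAGES_MB}"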

@DeySouvik DeySouvik added the kind/feature Categorizes issue or PR as related to a new feature. label Nov 12, 2019
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Nov 12, 2019
@DeySouvik
Author

@kubernetes/sig-node-feature-requests
@kubernetes/sig-service-catalog-feature-requests
@kubernetes/sig-scheduling-feature-requests
@kubernetes/sig-instrumentation-feature-requests
@kubernetes/sig-network-feature-requests
@kubernetes/sig-architecture-feature-requests
/wg resource-management

@k8s-ci-robot k8s-ci-robot added sig/node Categorizes an issue or PR as relevant to SIG Node. wg/resource-management sig/service-catalog Categorizes an issue or PR as relevant to SIG Service Catalog. sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. sig/instrumentation Categorizes an issue or PR as relevant to SIG Instrumentation. sig/network Categorizes an issue or PR as relevant to SIG Network. sig/architecture Categorizes an issue or PR as relevant to SIG Architecture. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Nov 12, 2019
@k8s-ci-robot
Contributor

@DeySouvik: Reiterating the mentions to trigger a notification:
@kubernetes/sig-node-feature-requests, @kubernetes/sig-service-catalog-feature-requests, @kubernetes/sig-scheduling-feature-requests, @kubernetes/sig-instrumentation-feature-requests, @kubernetes/sig-network-feature-requests, @kubernetes/sig-architecture-feature-requests

In response to this:

@kubernetes/sig-node-feature-requests
@kubernetes/sig-service-catalog-feature-requests
@kubernetes/sig-scheduling-feature-requests
@kubernetes/sig-instrumentation-feature-requests
@kubernetes/sig-network-feature-requests
@kubernetes/sig-architecture-feature-requests
/wg resource-management

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@athenabot

/triage unresolved

Comment /remove-triage unresolved when the issue is assessed and confirmed.

🤖 I am a bot run by vllry. 👩‍🔬

@k8s-ci-robot k8s-ci-robot added the triage/unresolved Indicates an issue that can not or will not be resolved. label Nov 12, 2019
@robscott
Member

It looks like this ended up with a few too many SIGs assigned, so I'm removing the ones that are not related. I think this may actually only be relevant to sig-node.

/remove-sig network service-catalog instrumentation architecture

@k8s-ci-robot k8s-ci-robot removed sig/network Categorizes an issue or PR as relevant to SIG Network. sig/service-catalog Categorizes an issue or PR as relevant to SIG Service Catalog. sig/instrumentation Categorizes an issue or PR as relevant to SIG Instrumentation. sig/architecture Categorizes an issue or PR as relevant to SIG Architecture. labels Nov 14, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 12, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 13, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@odinuge
Member

odinuge commented May 11, 2020

/reopen

@odinuge
Member

odinuge commented May 11, 2020

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label May 11, 2020
@k8s-ci-robot
Contributor

@odinuge: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot reopened this May 11, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 9, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 8, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
