What happened:
We have an admission webhook that injects a sidecar container.
The following error occurred while scheduling the pod:
message: 'admission webhook "vpod.kb.io" denied the request: [pod.spec.containers.tether-1.resources.requests.kubernetes.io/batch-cpu: Required value: request of container tether-1 does not have resource kubernetes.io/batch-cpu, pod.spec.containers.tether-1.resources.limits.kubernetes.io/batch-cpu: Required value: limit of container tether-1 does not have resource kubernetes.io/batch-cpu, pod.spec.containers.tether-1.resources.requests.kubernetes.io/batch-memory: Required value: request of container tether-1 does not have resource kubernetes.io/batch-memory, pod.spec.containers.tether-1.resources.limits.kubernetes.io/batch-memory: Required value: limit of container tether-1 does not have resource kubernetes.io/batch-memory, pod.spec.containers.tether-2.resources.requests.kubernetes.io/batch-cpu: Required value: request of container tether-2 does not have resource kubernetes.io/batch-cpu, pod.spec.containers.tether-2.resources.limits.kubernetes.io/batch-cpu: Required value: limit of container tether-2 does not have resource kubernetes.io/batch-cpu, pod.spec.containers.tether-2.resources.requests.kubernetes.io/batch-memory: Required value: request of container tether-2 does not have resource kubernetes.io/batch-memory, pod.spec.containers.tether-2.resources.limits.kubernetes.io/batch-memory: Required value: limit of container tether-2 does not have resource kubernetes.io/batch-memory]'
It seems that the sidecar containers injected by sidecarsets.apps.kruise.io conflict with the timing of Koordinator's pod mutation: the injected containers are missing the batch resources, so the validating webhook denies the pod. Has anyone encountered this situation? Is there a good way to handle it?
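One possible mitigation (a sketch, assuming the root cause is ordering between the Koordinator mutating webhook and the Kruise SidecarSet mutating webhook, and assuming the webhook names below, which are hypothetical): Kubernetes v1.15+ supports `reinvocationPolicy: IfNeeded` on a `MutatingWebhookConfiguration`, which tells the API server to call the webhook again if a later webhook (such as SidecarSet injection) modifies the pod, so the batch resources can also be applied to the injected containers.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  # Hypothetical name; check the actual configuration in your cluster.
  name: koordinator-mutating-webhook-configuration
webhooks:
  - name: mpod.koordinator.sh  # hypothetical webhook name
    # Re-invoke this webhook if a later mutating webhook (e.g. SidecarSet
    # injection) changes the pod, so injected sidecars also receive the
    # kubernetes.io/batch-cpu and kubernetes.io/batch-memory resources.
    reinvocationPolicy: IfNeeded
    # (clientConfig, rules, failurePolicy, etc. unchanged)
```

Note also that mutating webhooks are invoked in lexical order of their configuration names, so verifying which webhook runs first in your cluster is worth checking before relying on reinvocation.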
What you expected to happen:
Environment:
Koordinator version: v1.0.0
Kubernetes version (use kubectl version): v1.21.7
docker/containerd version: containerd 1.5.0
OS (e.g: cat /etc/os-release): Ubuntu 20.04.4 LTS
Kernel (e.g. uname -a): Linux 5.10.112-11.al8.x86_64 #1 SMP Tue May 24 16:05:50 CST 2022 x86_64 x86_64 x86_64 GNU/Linux
Anything else we need to know: