PTAL @shaloulcy
What happened:
![image](https://private-user-images.githubusercontent.com/40164590/345677920-f7305da7-7bb6-4cff-bf83-503c5bd87c82.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjAzODEzNDUsIm5iZiI6MTcyMDM4MTA0NSwicGF0aCI6Ii80MDE2NDU5MC8zNDU2Nzc5MjAtZjczMDVkYTctN2JiNi00Y2ZmLWJmODMtNTAzYzViZDg3YzgyLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA3MDclMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNzA3VDE5MzcyNVomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTFjN2E1MmJkMzNjODI4ZTM4NmUzMmUwNjI1OWY4Y2E4M2VkZTcwNjE3OWFiN2ViNGNkMDU3NjQwYWI5OWM1YTkmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.ehZEfUo2WJgVDcZh0EYJCCSshdqAUd1Koaa4T83Stq8)
![image](https://private-user-images.githubusercontent.com/40164590/345677939-07860dac-53dd-4841-85f5-67350961c20c.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjAzODEzNDUsIm5iZiI6MTcyMDM4MTA0NSwicGF0aCI6Ii80MDE2NDU5MC8zNDU2Nzc5MzktMDc4NjBkYWMtNTNkZC00ODQxLTg1ZjUtNjczNTA5NjFjMjBjLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA3MDclMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNzA3VDE5MzcyNVomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTE1YmJkY2FhYWI1N2VmZjJhY2U0YzhlMzA2Yzk1Mzc5MTU4MjYyY2Q1NmFhMDAwODZmMDY3MWU4NWNjZTRlNWQmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.Bf7z19AAe8EF1Q1KXXut8Bmn7AmQsk3ezcGXiqr3DBE)
```shell
kubectl create ns namespace1
kubectl create ns namespace2
```

Related configuration: create the queues. The queue configuration files are as follows.
root.yml

```yaml
apiVersion: scheduling.sigs.k8s.io/v1alpha1
kind: ElasticQuota
metadata:
  name: root
  labels:
    quota.scheduling.koordinator.sh/is-parent: "true"
    quota.scheduling.koordinator.sh/allow-lent-resource: "true"
spec:
  max:
    cpu: 2
    memory: 2Gi
  min:
    cpu: 2
    memory: 2Gi
```
queue-a.yml

```yaml
apiVersion: scheduling.sigs.k8s.io/v1alpha1
kind: ElasticQuota
metadata:
  name: a
  namespace: namespace1
  labels:
    quota.scheduling.koordinator.sh/parent: "root"
    quota.scheduling.koordinator.sh/is-parent: "false"
    quota.scheduling.koordinator.sh/allow-lent-resource: "true"
  annotations:
    quota.scheduling.koordinator.sh/shared-weight: '{"cpu":"1","memory":"1Gi"}'
spec:
  max:
    cpu: 2
    memory: 2Gi
  min:
    cpu: 1
    memory: 1Gi
```
queue-b.yaml

```yaml
apiVersion: scheduling.sigs.k8s.io/v1alpha1
kind: ElasticQuota
metadata:
  name: b
  namespace: namespace2
  labels:
    quota.scheduling.koordinator.sh/parent: "root"
    quota.scheduling.koordinator.sh/is-parent: "false"
    quota.scheduling.koordinator.sh/allow-lent-resource: "true"
  annotations:
    quota.scheduling.koordinator.sh/shared-weight: '{"cpu":"1","memory":"1Gi"}'
spec:
  max:
    cpu: 2
    memory: 2Gi
  min:
    cpu: 1
    memory: 1Gi
```
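To reproduce, the quota tree can be applied and checked like this (a sketch; the file names assume the snippets above were saved as shown, and the commands require a running cluster with the Koordinator ElasticQuota CRD installed):

```shell
# Apply the quota tree (root first, then the child queues).
kubectl apply -f root.yml
kubectl apply -f queue-a.yml
kubectl apply -f queue-b.yaml

# Confirm the ElasticQuota objects exist across namespaces.
kubectl get elasticquota -A
```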
First, run pod-a.yaml, with QoS class BE:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-a
  namespace: namespace1
  labels:
    quota.scheduling.koordinator.sh/name: "a"
    koordinator.sh/qosClass: BE # QoS class
spec:
  schedulerName: koord-scheduler
  priorityClassName: koord-batch
  containers:
  - image: kde1:8025/warehouse/nginx
    imagePullPolicy: IfNotPresent
    name: curlimage
    resources:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  restartPolicy: Always
```
Then run pod-d.yaml, with QoS class LSE:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-d
  namespace: namespace2
  labels:
    quota.scheduling.koordinator.sh/name: "b"
    koordinator.sh/qosClass: "LSE"
spec:
  schedulerName: koord-scheduler
  priorityClassName: koord-prod
  containers:
  - image: kde1:8025/warehouse/nginx
    imagePullPolicy: IfNotPresent
    name: curlimage
    resources:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  restartPolicy: Always
```
The result: both pod-a and pod-d are Running.
![image](https://private-user-images.githubusercontent.com/40164590/345679255-785a92c8-6820-4496-9178-d0ee0531dc10.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjAzODEzNDUsIm5iZiI6MTcyMDM4MTA0NSwicGF0aCI6Ii80MDE2NDU5MC8zNDU2NzkyNTUtNzg1YTkyYzgtNjgyMC00NDk2LTkxNzgtZDBlZTA1MzFkYzEwLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA3MDclMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNzA3VDE5MzcyNVomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTI0MmQzOWRmMzk2ZWYzYTUzNWExM2QzZTA3YTkwYTllNTE1NzM4YzcxOTUxM2M2Y2I5YzlhNjY3ZTJiM2JlZmYmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.kxKLxptNe3xbUzf9HPNE3G6ZAy8AJZtTF5AdU719I2Q)
Then run another pod, pod-e, also with QoS class LSE; it cannot preempt pod-a's resources.
![image](https://private-user-images.githubusercontent.com/40164590/345679527-885e2758-1085-4b77-85ed-f5a5818118a4.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjAzODEzNDUsIm5iZiI6MTcyMDM4MTA0NSwicGF0aCI6Ii80MDE2NDU5MC8zNDU2Nzk1MjctODg1ZTI3NTgtMTA4NS00Yjc3LTg1ZWQtZjVhNTgxODExOGE0LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA3MDclMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNzA3VDE5MzcyNVomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPThmMWM0ZmViZTMwNTMzNzAxYzMzZjUxMzc1YzI4OWYwMzNhYTUzMTI4MWYyYWExZjQ4Yjc3OTU4NWU1YmYxNWImWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.FVsq1ONf0p6oNzDiNLNWuFIjwTW1Gw-e99LgM04paB4)
pod-e.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-e
  namespace: namespace2
  labels:
    quota.scheduling.koordinator.sh/name: "b"
    koordinator.sh/qosClass: "LSE"
spec:
  schedulerName: koord-scheduler
  priorityClassName: koord-prod
  containers:
  - image: kde1:8025/warehouse/nginx
    imagePullPolicy: IfNotPresent
    name: curlimage
    resources:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  restartPolicy: Always
```
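For reference, totaling the CPU requests from the manifests above against the quota tree (a back-of-the-envelope check, not the scheduler's actual accounting) shows that admitting pod-e would push the root quota past its max:

```shell
# CPU requests by quota, taken from the pod specs above.
a_used=1            # pod-a (quota a) requests 1 CPU
b_used=$((1 + 1))   # pod-d + pod-e (quota b) each request 1 CPU

root_used=$((a_used + b_used))
root_max=2          # root ElasticQuota: max cpu 2

echo "quota a: ${a_used}, quota b: ${b_used}, root: ${root_used} (root max: ${root_max})"
```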
Why doesn't pod-e preempt pod-a's resources?
What you expected to happen:
Environment:
Anything else we need to know: