Support schedulerEstimator as a list in karmada helm chart #4368

Open
wengyao04 opened this issue Dec 4, 2023 · 8 comments · Fixed by #4358
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@wengyao04
Contributor

wengyao04 commented Dec 4, 2023

What would you like to be added:

  • Support schedulerEstimator as a list in karmada helm chart
  • add a command-line argument in karmada-scheduler.yaml to enable the estimator, e.g.
    - --enable-scheduler-estimator=true

  • Could we also make "helm.sh/hook-delete-policy": hook-succeeded configurable in post-install-job.yaml? We want to keep the job pod for debugging if the post-install job fails (see the sketch at the end of this comment).

Why is this needed:
Hi, we want to install the scheduler estimator using the helm chart, but the helm chart only supports one cluster: https://github.com/karmada-io/karmada/blob/master/charts/karmada/values.yaml#L748. I manually installed one for another member cluster. Could we make them a list?

Thank you !
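
To make the third point concrete, here is a rough sketch of the kind of knob we have in mind. The value name postInstallJob.hookDeletePolicy is only an illustration and does not exist in the chart today; the existing hook annotations would stay as they are.

## values.yaml (hypothetical key, for illustration only)
postInstallJob:
  ## keep the current behaviour by default; set to "" to keep the job pod around for debugging
  hookDeletePolicy: "hook-succeeded"

## post-install-job.yaml (excerpt): only render the delete policy when one is set
metadata:
  annotations:
    # existing hook annotations stay as they are
    {{- if .Values.postInstallJob.hookDeletePolicy }}
    "helm.sh/hook-delete-policy": {{ .Values.postInstallJob.hookDeletePolicy | quote }}
    {{- end }}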

@wengyao04 wengyao04 added the kind/feature Categorizes issue or PR as related to a new feature. label Dec 4, 2023
@chaosi-zju
Member

/assign

@chaosi-zju
Member

hi @wengyao04, as for the first problem, how about changing the helm chart as in this PR: #4358?

## karmada scheduler estimator
schedulerEstimator:
  ## @param schedulerEstimator.memberClusters each cluster requires an estimator component, fill in the information for each cluster
  memberClusters:
    ## @param schedulerEstimator.memberClusterInfo[0].clusterName the name of the member cluster
    - clusterName: ""
      ## @param schedulerEstimator.memberClusterInfo[0].replicaCount target replicas
      replicaCount: 1
      ## kubeconfig of the member cluster
      kubeconfig:
        ## @param schedulerEstimator.memberClusterInfo[0].kubeconfig.server apiserver of the member cluster
        server: ""
        ## @param schedulerEstimator.memberClusterInfo[0].kubeconfig.caCrt ca of the certificate
        caCrt: |
          -----BEGIN CERTIFICATE-----
          XXXXXXXXXXXXXXXXXXXXXXXXXXX
          -----END CERTIFICATE-----
        ## @param schedulerEstimator.memberClusterInfo[0].kubeconfig.crt crt of the certificate
        crt: |
          -----BEGIN CERTIFICATE-----
          XXXXXXXXXXXXXXXXXXXXXXXXXXX
          -----END CERTIFICATE-----
        ## @param schedulerEstimator.memberClusterInfo[0].kubeconfig.key key of the certificate
        key: |
          -----BEGIN RSA PRIVATE KEY-----
          XXXXXXXXXXXXXXXXXXXXXXXXXXX
          -----END RSA PRIVATE KEY-----
  ## @param schedulerEstimator.labels labels of the scheduler-estimator deployment
  labels:
    app: karmada-scheduler-estimator
  ## @param schedulerEstimator.podAnnotations annotations of the scheduler-estimator pods
  podAnnotations: {}
  ## @param schedulerEstimator.podLabels labels of the scheduler-estimator pods
  podLabels: {}
  ## @param image.registry karmada schedulerEstimator image registry
  ## @param image.repository karmada schedulerEstimator image repository
  ## @param image.tag karmada schedulerEstimator image tag (immutable tags are recommended)
  ## @param image.pullPolicy karmada schedulerEstimator image pull policy
  ## @param image.pullSecrets Specify docker-registry secret names as an array
  ##
  image:
    registry: docker.io
    repository: karmada/karmada-scheduler-estimator
    tag: *karmadaImageVersion
    pullPolicy: *karmadaImagePullPolicy
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## Example:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    pullSecrets: []
  ## @param schedulerEstimator.resources resource quota of the scheduler-estimator
  resources: {}
  # If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
  ## @param schedulerEstimator.nodeSelector node selector of the scheduler-estimator
  nodeSelector: {}
  ## @param schedulerEstimator.affinity affinity of the scheduler-estimator
  affinity: {}
  ## @param schedulerEstimator.tolerations tolerations of the scheduler-estimator
  tolerations: []
  # - key: node-role.kubernetes.io/master
  #   operator: Exists
  ## @param schedulerEstimator.strategy strategy of the scheduler-estimator
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 50%
  ## @param apiServer.podDisruptionBudget
  podDisruptionBudget: *podDisruptionBudget

## descheduler config
descheduler:
  ## @param descheduler.labels labels of the descheduler deployment
  labels:
    app: karmada-descheduler
  ## @param descheduler.replicaCount target replicas of the descheduler
  replicaCount: 2
  ## @param descheduler.podAnnotations annotations of the descheduler pods
  podAnnotations: {}
  ## @param descheduler.podLabels labels of the descheduler pods
  podLabels: {}
  ## @param image.registry karmada descheduler image registry
  ## @param image.repository karmada descheduler image repository
  ## @param image.tag karmada descheduler image tag (immutable tags are recommended)
  ## @param image.pullPolicy karmada descheduler image pull policy
  ## @param image.pullSecrets Specify docker-registry secret names as an array
  ##
  image:
    registry: docker.io
    repository: karmada/karmada-descheduler
    tag: *karmadaImageVersion
    pullPolicy: *karmadaImagePullPolicy
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## Example:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    pullSecrets: []
  ## @param descheduler.resources resource quota of the descheduler
  resources: {}
  # If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
  ## @param descheduler.nodeSelector node selector of the descheduler
  nodeSelector: {}
  ## @param descheduler.affinity affinity of the descheduler
  affinity: {}
  ## @param descheduler.tolerations tolerations of the descheduler
  tolerations: []
  # - key: node-role.kubernetes.io/master
  #   operator: Exists
  ## @param descheduler.strategy strategy of the descheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 50%
  ## @param descheduler.kubeconfig kubeconfig of the descheduler
  kubeconfig: karmada-kubeconfig
  ## @param apiServer.podDisruptionBudget
  podDisruptionBudget: *podDisruptionBudget
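
With that structure, adding estimators for more member clusters would just be a matter of appending entries to the list, the expectation being one scheduler-estimator Deployment per entry. For example (cluster names and server addresses below are placeholders):

schedulerEstimator:
  memberClusters:
    - clusterName: member1
      replicaCount: 1
      kubeconfig:
        server: "https://member1.example.com:6443"
        caCrt: |
          ...
        crt: |
          ...
        key: |
          ...
    - clusterName: member2
      replicaCount: 2
      kubeconfig:
        server: "https://member2.example.com:6443"
        caCrt: |
          ...
        crt: |
          ...
        key: |
          ...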

@RainbowMango RainbowMango moved this to Planned In Release 1.9 in Karmada Overall Backlog Dec 5, 2023
@RainbowMango RainbowMango moved this from Todo to In Progress in Karmada Release 1.9 Dec 5, 2023
@wengyao04
Contributor Author

Hi @chaosi-zju Thank you very much! It works perfectly for us!
BTW

Could we also add a command-line argument in karmada-scheduler.yaml to enable the estimator, like
- --enable-scheduler-estimator=true
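
For reference, the idea is roughly the following in the chart's karmada-scheduler deployment template. This is only a sketch: the surrounding arguments stand in for whatever the template already sets, and only the last flag is new.

# karmada-scheduler.yaml (excerpt, illustrative)
containers:
  - name: karmada-scheduler
    command:
      - /bin/karmada-scheduler
      - --kubeconfig=/etc/kubeconfig          # existing args unchanged
      - --enable-scheduler-estimator=true     # new: let the scheduler call the per-cluster estimators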

@RainbowMango
Member

Could we also add a command-line argument in karmada-scheduler.yaml to enable the estimator, like

  • --enable-scheduler-estimator=true

That makes sense to me. Would you like to send a PR for it?

@wengyao04
Contributor Author

wengyao04 commented Dec 5, 2023

Thanks @RainbowMango! Yes, we can send a PR for it.

@chaosi-zju
Member

hi @wengyao04, I am not quite sure whether PR #4393 is what your third request asks for.

Please help explain how you would prefer this to be configurable~

@RainbowMango
Member

RainbowMango commented Dec 11, 2023

/reopen
It seems there are two more tasks to address in this issue. The first one was completed by #4358.

@karmada-bot karmada-bot reopened this Dec 11, 2023
@karmada-bot
Collaborator

@RainbowMango: Reopened this issue.

In response to this:

/reopen
It seems there are two more tasks to address in this issue. The first one was completed by #4358.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@RainbowMango RainbowMango moved this from Done to In Progress in Karmada Release 1.9 Dec 11, 2023
@RainbowMango RainbowMango moved this from Planned In Release 1.9 to Accepted in Karmada Overall Backlog Jul 18, 2024