
Move prow jobs to use the community clusters #1273

Open
aramase opened this issue Jun 12, 2023 · 8 comments
Assignees
Labels
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@aramase
Member

aramase commented Jun 12, 2023

xref: kubernetes/test-infra#29722

@rjsadow

rjsadow commented Jun 13, 2023

Hey @aramase, do you know why the changes from kubernetes/test-infra#29473 prevented the jobs from finishing? A lot of folks are going to be making this transition, and your implementation looked on track with what I'd expect.

@rjsadow

rjsadow commented Jun 14, 2023

@aramase based on https://monitoring-eks.prow.k8s.io/d/96Q8oOOZk/builds?orgId=1&var-org=kubernetes-sigs&var-repo=secrets-store-csi-driver&var-job=pull-secrets-store-csi-driver-lint&var-build=All&from=1686575114078&to=1686578965140, it looks like the resource quotas should be significantly increased.

Recommendations from @xmudrii are 2-4 CPU and 4-8 GB of memory for linting jobs. If you merge with the new capacity values, you can monitor the dashboard above to see how the jobs perform.
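For reference, those capacity values would land in the job's container `resources` stanza in the test-infra presubmit config. A minimal sketch along these lines (cluster name, image tag, and the chosen values within the recommended range are illustrative, not taken from the actual PR; adjust based on the dashboard data):

```yaml
# Hypothetical sketch of the lint presubmit with the recommended
# capacity values (2-4 CPU, 4-8 GB memory). Community-cluster
# guidance generally asks jobs to declare both requests and limits.
presubmits:
  kubernetes-sigs/secrets-store-csi-driver:
    - name: pull-secrets-store-csi-driver-lint
      cluster: eks-prow-build-cluster  # assumed community cluster name
      spec:
        containers:
          - image: gcr.io/k8s-staging-test-infra/kubekins-e2e:latest-master  # placeholder tag
            command:
              - make
            args:
              - lint
            resources:
              requests:
                cpu: "4"
                memory: 8Gi
              limits:
                cpu: "4"
                memory: 8Gi
```

Setting requests equal to limits keeps the pod in the Guaranteed QoS class, which makes its behavior on a shared cluster more predictable and the dashboard numbers easier to interpret.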

@aramase
Member Author

aramase commented Jun 14, 2023

> @aramase based on https://monitoring-eks.prow.k8s.io/d/96Q8oOOZk/builds?orgId=1&var-org=kubernetes-sigs&var-repo=secrets-store-csi-driver&var-job=pull-secrets-store-csi-driver-lint&var-build=All&from=1686575114078&to=1686578965140, it looks like the resource quotas should be significantly increased.
>
> Recommendations from @xmudrii would be 2-4 CPU and 4-8 GB Mem for linting jobs. If you merge with new capacity values, you should be able to monitor the above dashboard to see how it's performing.

@rjsadow I'm happy to try the new resource limits; this issue was opened so we can follow up and move the jobs. The revert was done to unblock our patch release.

> Recommendations from @xmudrii would be 2-4 CPU and 4-8 GB Mem for linting jobs.

It might be good to document these recommendations for future reference.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 22, 2024
@aramase
Member Author

aramase commented Jan 22, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 22, 2024
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 21, 2024
@k8s-triage-robot

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 21, 2024
@aramase
Member Author

aramase commented Jun 4, 2024

/remove-lifecycle rotten
/lifecycle frozen
