This repository has been archived by the owner on Sep 12, 2023. It is now read-only.

Handle pod failures for all policies #189

Merged
merged 1 commit into from
Jun 9, 2022

Conversation

georgkaleido
Contributor

If a pod is in phase Failed, we have to create a new one.
Previously it was assumed the pod would restart due to a RestartPolicy at the pod level.
This doesn't work if the pod fails for a system reason.
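The distinction the PR description draws can be sketched in Go. This is a minimal, illustrative sketch using stand-in types (not the actual `corev1` API or the controller's real code): a pod-level `RestartPolicy` only governs container restarts inside a live pod, so a pod whose *phase* is `Failed` (e.g. evicted by the system) must be recreated by the controller regardless of policy.

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for corev1.PodPhase; names are
// illustrative only and do not reflect the actual kubeflow/common code.
type PodPhase string

const (
	PodRunning PodPhase = "Running"
	PodFailed  PodPhase = "Failed"
)

// needsRecreation sketches the idea behind this PR: once a pod reaches
// phase Failed, the kubelet will never restart it -- RestartPolicy only
// restarts containers within a pod that still exists -- so the job
// controller has to create a replacement pod itself, for every restart
// policy, not just ExitCode.
func needsRecreation(phase PodPhase) bool {
	return phase == PodFailed
}

func main() {
	fmt.Println(needsRecreation(PodFailed))  // system-failed pod: controller must recreate
	fmt.Println(needsRecreation(PodRunning)) // healthy pod: leave it alone
}
```

In the real controller the check would of course run against pods listed from the API server, but the core condition is this phase test rather than anything derived from the pod's RestartPolicy.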

@google-cla

google-cla bot commented Apr 8, 2022

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

For more information, open the CLA check for this pull request.

@google-oss-prow google-oss-prow bot requested a review from terrytangyuan April 8, 2022 12:30
georgkaleido added a commit to georgkaleido/training-operator that referenced this pull request Apr 8, 2022
Fixes kubeflow#1570
Together with kubeflow/common#189

There can be pod-level failures caused by the system, which would previously cause the entire job to fail for all policies except ExitCode.
@johnugeorge
Member

Can you do a rebase?

georgkaleido added a commit to georgkaleido/training-operator that referenced this pull request Jun 9, 2022
Fixes kubeflow#1570
Together with kubeflow/common#189

There can be pod-level failures caused by the system, which would previously cause the entire job to fail for all policies except ExitCode.
@georgkaleido
Contributor Author

@johnugeorge Done

@johnugeorge
Member

@georgkaleido Can you fix golint?

If a pod is in phase Failed, we have to create a new one.
Previously it was assumed the pod would restart due to a RestartPolicy at the pod level.
This doesn't work if the pod fails for a system reason.
@georgkaleido
Contributor Author

@johnugeorge done

google-oss-prow bot pushed a commit to kubeflow/training-operator that referenced this pull request Jun 9, 2022
Fixes #1570
Together with kubeflow/common#189

There can be pod-level failures caused by the system, which would previously cause the entire job to fail for all policies except ExitCode.
Member

@terrytangyuan terrytangyuan left a comment

Thanks!

/lgtm

@google-oss-prow

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: terrytangyuan

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@google-oss-prow google-oss-prow bot merged commit 8f0ddb5 into kubeflow:master Jun 9, 2022
djwhatle pushed a commit to djwhatle/common that referenced this pull request Jul 26, 2022
If a pod is in phase Failed, we have to create a new one.
Previously it was assumed the pod would restart due to a RestartPolicy at the pod level.
This doesn't work if the pod fails for a system reason.

(cherry picked from commit 8f0ddb5)
3 participants