
Add examples for three existing failure policy actions. #601

Open · jedwins1998 wants to merge 4 commits into main from adding-configurable-failure-policy-examples

Conversation

@jedwins1998 (Contributor)

Add examples for each of the following failure policy actions:

  1. FailJobSet,
  2. RestartJobSet,
  3. RestartJobSetAndIgnoreMaxRestarts.

Fixes #600.
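
For reference, a minimal sketch of what one of these examples could look like, assuming the JobSet v1alpha2 API; the maxRestarts value, replicated job spec, and container command here are illustrative rather than the exact contents of the files added in this PR:

apiVersion: jobset.x-k8s.io/v1alpha2
kind: JobSet
metadata:
  name: failjobset-action-example
spec:
  failurePolicy:
    maxRestarts: 3
    rules:
      # FailJobSet: when a child Job fails, fail the whole JobSet immediately
      # instead of restarting it, even though restarts would otherwise be allowed.
      - action: FailJobSet
  replicatedJobs:
  - name: leader
    replicas: 1
    template:
      spec:
        backoffLimit: 0
        completions: 1
        parallelism: 1
        template:
          spec:
            restartPolicy: Never
            containers:
            - name: leader
              image: bash:latest
              command: ["bash", "-c", "sleep 10 && exit 1"]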

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: jedwins1998
Once this PR has been reviewed and has the lgtm label, please assign danielvegamyhre for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the cncf-cla: yes (PR author has signed the CNCF CLA) and needs-ok-to-test (requires an org member to verify it is safe to test) labels on Jun 10, 2024
@k8s-ci-robot (Contributor)

Hi @jedwins1998. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added the size/L label (changes 100-499 lines, ignoring generated files) on Jun 10, 2024

netlify bot commented Jun 10, 2024

Deploy Preview for kubernetes-sigs-jobset canceled.

🔨 Latest commit: dad8964
🔍 Latest deploy log: https://app.netlify.com/sites/kubernetes-sigs-jobset/deploys/667b36f53cddad0008bffbb8

@danielvegamyhre (Contributor)

/ok-to-test

@k8s-ci-robot added the ok-to-test label (non-member PR verified by an org member as safe to test) and removed the needs-ok-to-test label on Jun 10, 2024
@@ -0,0 +1,60 @@
apiVersion: jobset.x-k8s.io/v1alpha2
Member

Do we also need to make changes in the site directory?
site/content/en/docs/tasks/_index.md
site/static/examples

@jedwins1998 (Contributor, Author)

I am afraid I am not quite sure what you mean. Can you explain more?

spec:
  failurePolicy:
    maxRestarts: 3
    rules:
@danielvegamyhre (Contributor) commented Jun 11, 2024

We should use OnJobFailureReasons with PodFailurePolicy in one of these examples

@jedwins1998 (Contributor, Author) commented Jun 13, 2024

I added an example using OnJobFailureReasons. Can you expand how you envision PodFailurePolicy being used?

Is the idea to use PodFailurePolicy as the chosen failure reason? If so, any reason to prefer it over the others?

@danielvegamyhre (Contributor) commented Jun 14, 2024

> Is the idea to use PodFailurePolicy as the chosen failure reason? If so, any reason to prefer it over the others?

Yep, the idea I have in mind is to show an example of how a user can configure a JobSet to address the host maintenance use case described in the KEP user stories. Basically, have the JobSet restart and not count against max restarts if it fails due to a host maintenance event, but fail and do count against max restarts if it fails for any other reason.

We should be able to do this by combining a JobSet failure policy rule and a Pod Failure Policy rule. At the time I thought container exit code would be the way to do this, but now I think the pod condition type DisruptionTarget is what we need to do.

On the Job template, something like this will fail the Job with a reason of "PodFailurePolicy":

  podFailurePolicy:
    rules:
    - action: FailJob
      onPodConditions:
      - type: DisruptionTarget

This will fail a Job if a pod fails with the condition type DisruptionTarget, and the Job will have a failure reason of PodFailurePolicy, allowing you to define JobSet failure policy rules appropriately.
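
On the JobSet side, the rule consuming that failure reason could look roughly like this sketch (the maxRestarts value is illustrative; the action and field names are the ones discussed later in this thread):

  failurePolicy:
    maxRestarts: 10
    rules:
      # Restart without counting against maxRestarts when the Job failed
      # because its pod failure policy matched (Job failure reason PodFailurePolicy).
      - action: RestartJobSetAndIgnoreMaxRestarts
        onJobFailureReasons:
        - PodFailurePolicy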

As a forewarning, you may run into a minor bug in an upstream k8s client package we depend on that requires the "onPodConditions" or "onExitCodes" fields to be set to an empty list if unused. You should be able to specify just one or the other, but if you see an error saying a field cannot be nil, you may have to set the unused one to an empty list. However, we've bumped our dependencies a few times since then, so hopefully the upstream fix is now included in the k8s packages we currently use.

@jedwins1998 (Contributor, Author)

Can you link documentation explaining how DisruptionTarget works, how to test it, and what the expected behavior is? I have not seen nor used it before.

In the meantime, I added an example using PodFailurePolicy as the selected job failure reason with the pod failure policy using onExitCodes as the selector.

@danielvegamyhre (Contributor) commented Jun 14, 2024

> In the meantime, I added an example using PodFailurePolicy as the selected job failure reason with the pod failure policy using onExitCodes as the selector.

Awesome

> Can you link documentation explaining how DisruptionTarget works, how to test it, and what the expected behavior is? I have not seen nor used it before.

The DisruptionTarget condition will be added in various scenarios where a pod is gracefully killed due to a disruption.

I think one way to test this is to taint the nodes the pods are running on, which will add the DisruptionTarget condition to the pods with reason DeletionByTaintManager. When a pod is killed and that triggers a Job failure, it should match the podFailurePolicy example I added above. That pod failure policy should then fail the Job with condition reason PodFailurePolicy, which will in turn trigger the JobSet failurePolicy rule for onJobFailureReasons: PodFailurePolicy.
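
For example, a manual test could look like the following; the node name and taint key/value are placeholders, not anything defined in this PR:

# Taint the node running the JobSet's pods with a NoExecute taint.
# The taint manager evicts the pods and adds the DisruptionTarget condition
# with reason DeletionByTaintManager, which the podFailurePolicy rule above
# then matches, failing the Job with reason PodFailurePolicy.
kubectl taint nodes <node-name> example-key=example-value:NoExecute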

Contributor

Did you get a chance to test this?

@jedwins1998 (Contributor, Author)

No, I have not had the chance to yet. I've been busy with other work; apologies for the delay.

@jedwins1998 (Contributor, Author)

I tried using the YAML [1], but I am seeing the following error

2024-07-23T22:43:25Z	ERROR	Reconciler error	{"controller": "jobset", "controllerGroup": "jobset.x-k8s.io", "controllerKind": "JobSet", "JobSet": {"name":"host-maintenance-event-model","namespace":"default"}, "namespace": "default", "name": "host-maintenance-event-model", "reconcileID": "4756e557-15b4-43b8-ab3e-817efc0ab633", "error": "job \"host-maintenance-event-model-leader-0\" creation failed with error: Job.batch \"host-maintenance-event-model-leader-0\" is invalid: [spec.podFailurePolicy.rules[0].onExitCodes.containerName: Invalid value: \"\": must be one of the container or initContainer names in the pod template, spec.podFailurePolicy.rules[0].onExitCodes.values: Invalid value: []int32(nil): at least one value is required, spec.podFailurePolicy.rules[0]: Invalid value: specifying both OnExitCodes and OnPodConditions is not supported]"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/controller/controller.go:329
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/controller/controller.go:266
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/controller/controller.go:22

in the logs for the jobset controller pod. It appears that setting both onPodConditions and onExitCodes is explicitly checked for and disallowed. This seems in line with the Kubernetes reference documentation [3]. Do we know of any other workarounds to the bug mentioned in comment [2]?

[1] The YAML in question:

apiVersion: jobset.x-k8s.io/v1alpha2
kind: JobSet
metadata:
  name: host-maintenance-event-model
spec:
  failurePolicy:
    maxRestarts: 50
    rules:
      - action: RestartJobSet
        targetReplicatedJobs:
        - leader
      - action: RestartJobSetAndIgnoreMaxRestarts
        onJobFailureReasons:
        - PodFailurePolicy
  replicatedJobs:
  - name: leader
    replicas: 1
    template:
      spec:
        # Set backoff limit to 0 so job will immediately fail if any pod fails.
        backoffLimit: 0
        completions: 2
        parallelism: 2
        template:
          spec:
            restartPolicy: Never
            containers:
            - name: leader
              image: bash:latest
              command:
              - bash
              - -xc
              - |
                echo "JOB_COMPLETION_INDEX=$JOB_COMPLETION_INDEX"
                if [[ "$JOB_COMPLETION_INDEX" == "0" ]]; then
                  for i in $(seq 10 -1 1)
                  do
                    echo "Sleeping in $i"
                    sleep 1
                  done
                  exit 1
                fi
                for i in $(seq 1 1000)
                do
                  echo "$i"
                  sleep 1
                done
        podFailurePolicy:
          rules:
            - action: FailJob
              onPodConditions: 
              - type: DisruptionTarget
              onExitCodes:
                containerName: ""
                operator: NotIn
                values: []
  - name: workers
    replicas: 1
    template:
      spec:
        backoffLimit: 0
        completions: 2
        parallelism: 2
        template:
          spec:
            containers:
            - name: worker
              image: bash:latest
              command:
              - bash
              - -xc
              - |
                sleep 1000

[2] #601 (comment)
[3] https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/job-v1/#:~:text=PodFailurePolicyRule%20describes%20how%20a%20pod%20failure%20is%20handled%20when%20the%20requirements%20are%20met.%20One%20of%20onExitCodes%20and%20onPodConditions%2C%20but%20not%20both%2C%20can%20be%20used%20in%20each%20rule.

Contributor

Yeah, when I started writing e2e tests for configurable failure policy, I discovered that this workaround (populated onExitCodes, empty onPodConditions):

        podFailurePolicy:
          rules:
            - action: FailJob
              onExitCodes:
                containerName: leader
                operator: NotIn
                values: [143]
              onPodConditions: []

doesn't work in reverse (empty onExitCodes, populated onPodConditions, like your example above):

        podFailurePolicy:
          rules:
            - action: FailJob
              onPodConditions: 
              - type: DisruptionTarget
              onExitCodes:
                containerName: ""
                operator: NotIn
                values: []

I created kubernetes/kubernetes#126040 to get this upstream k8s issue sorted out, and @mimowo has completed the fix and cherry-picks, but we missed the last patch release deadline, so it won't be available in an official k8s API release for us to use for about a month.

In the meantime, we could technically use a k8s feature branch like in #620, which I am thinking about doing but haven't decided on yet.

@jedwins1998 force-pushed the adding-configurable-failure-policy-examples branch from 37f5a92 to c08778c on June 13, 2024 at 22:06
Add examples for each of the following failure policy actions:
1. FailJobSet,
2. RestartJobSet,
3. RestartJobSetAndIgnoreMaxRestarts.
@jedwins1998 force-pushed the adding-configurable-failure-policy-examples branch from 721c42f to dad8964 on June 25, 2024 at 21:30
@danielvegamyhre self-assigned this on Jul 1, 2024
metadata:
  name: failjobset-action-example
spec:
  failurePolicy:
Contributor

Can you add a brief comment to the failurePolicy section of each example describing what behavior the user should expect to see when running the example?
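
For instance, the FailJobSet example's failurePolicy section could carry a short note along these lines (wording is only a suggestion):

spec:
  # Failure policy: any child Job failure matches the FailJobSet rule, so the
  # JobSet is marked Failed after the first Job failure instead of being restarted.
  failurePolicy:
    rules:
      - action: FailJobSet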

spec:
  failurePolicy:
    maxRestarts: 3
    rules:
Contributor

Did you get a chance to test this?

@danielvegamyhre added the tide/merge-method-squash label (PR should be squashed by tide when it merges) on Aug 10, 2024
Labels
cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
ok-to-test - Indicates a non-member PR verified by an org member that is safe to test.
size/L - Denotes a PR that changes 100-499 lines, ignoring generated files.
tide/merge-method-squash - Denotes a PR that should be squashed by tide when it merges.
Projects: None yet
Development: Successfully merging this pull request may close "Add Examples for Failure Policy Actions" (#600).
4 participants