Finally tasks should be triggered in case of missing results #4438
Thank you @jensh007 for this report 👍 It is designed such that:

Q. What happens to the Pipeline when the dependent task boskos-acquire either failed or was not executed, and the task result leased-resource is not initialized?

A. The finally task boskos-release is attempted and included in the list of skippedTasks, since the task result leased-resource is not initialized. The Pipeline controller logs this validation failure, including the param name leased-resource of the finally task boskos-release, the result reference leased-resource, and the result-producing task boskos-acquire. The Pipeline continues executing the rest of the finally tasks and exits with completion.

We are looking at this proposal to help solve your use case; it was demonstrated at the API WG: https://docs.google.com/document/d/17PodAxG8hV351fBhSu7Y_OIPhGTVgj6OJ2lPphYYRpU/edit#heading=h.qq31rcli4y3f The demo is available at the 16:30 mark. I haven't been able to work on it since then.
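For illustration only (not taken from this thread), here is a rough sketch of how such a skip can show up in the PipelineRun status for the boskos example above. The skippedTasks list is a real status field; the reason value shown is an assumption on my part (newer releases report a skipping reason, see TEP-0103 below):

```yaml
# Hypothetical excerpt of a PipelineRun status for the boskos example.
# The finally task lands under skippedTasks because its param references
# the result "leased-resource", which boskos-acquire never initialized.
status:
  skippedTasks:
    - name: boskos-release
      reason: Results   # assumed wording; the point is that the skip is recorded here, not surfaced as an error
```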
Thanks for sharing the proposal. I understand that it may be designed like this, but I consider this not very practical. I assume that in many cases finally tasks are used to, e.g., send notifications about build status. If something fails (and thus a result may be missing), you always want to get the notification and not a silent death... So I hope to see the enhancements soon :)
And one more comment: you should at least adapt the documentation (see the citation above). The statement that "finally tasks are guaranteed to be executed regardless of success or error" is wrong if this is by design.
Ran into this today, and the documentation is absolutely confusing. I was sitting here scratching my head as to why my finally task kept getting skipped. There was no error or other indicator that there was something wrong with the results.
TEP-0103 is addressing this missing piece, where the pipelineRun reports the reason why a task was skipped. @Yablargo what is your use case? What is the expectation, to run the finally task regardless of the missing results?
@pritidesai I actually post results, but since the task fails (I fail it intentionally; the QC failed), I can't read the results I posted when I get to the finally task. In a simplified example, I run a QC task that posts its results and then fails, followed by a finally task that needs to read them.
I see, thanks @Yablargo for the details! The pipelineRun does not publish results when a task fails, and that's the reason the finally task gets skipped; that is how we have designed it. I think this use case is more suitable for #3749, where the pipelineRun must publish results (if they were initialized) in case of a failure. Do you vote for #3749? wdyt?
I noticed you did vote for it 😄 #3749 (comment)
Yep, thanks! For right now I upload everything at the end of each TaskRun and pull it back down in the finally (well, I was already uploading anyway, I was just hoping to use /results to save some calls and task params). It works, but it would save some complexity to be able to just poll the /results.
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
Stale issues rot after 30d of inactivity. /lifecycle rotten Send feedback to tektoncd/plumbing.
This is not fixed yet 🙃 /remove-lifecycle rotten
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
Still not fixed and very annoying. /remove-lifecycle stale
Still exists on latest pipeline release: v0.44.0
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
Stale issues rot after 30d of inactivity. /lifecycle rotten Send feedback to tektoncd/plumbing.
It still exists.
This is still incredibly important, and, if I am being honest, this sort of thing is why I have moved on from Tekton.
I am very sorry to hear that! Yup, I can understand; we are working towards making this project more resilient. At the same time, any kind of contribution is highly appreciated. We have implemented a feature to allow producing a task result from a failed task. This is available since https://github.com/tektoncd/pipeline/releases/tag/v0.48.0. Please let me know if this does not work for you or if you are looking for something more. Thanks!
The use case mentioned in the description ⬆️ is now supported: https://github.com/tektoncd/pipeline/blob/main/test/task_results_from_failed_tasks_test.go
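As a hedged sketch of that feature (not copied from the linked test), a step can write its result file before exiting non-zero, and the result is then still available to consumers such as finally tasks. The task name, result name, and image below are illustrative:

```yaml
# Sketch: a Task that records a result and then deliberately fails.
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: fail-but-report    # illustrative name
spec:
  results:
    - name: status         # illustrative result name
  steps:
    - name: write-then-fail
      image: busybox
      script: |
        # Write the result before failing so the controller can still pick it up.
        echo -n "qc-failed" > $(results.status.path)
        exit 1
```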
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
Stale issues rot after 30d of inactivity. /lifecycle rotten Send feedback to tektoncd/plumbing.
Seems like it is fixed with the 0.56.0 release at least. I tried the reproducer today, and it works as expected.
$ kubectl get tr
NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME
resulttest-run-5bzmx-generator False Failed 107s 93s
resulttest-run-5bzmx-notification True Succeeded 93s 89s
$ tkn tr logs resulttest-run-5bzmx-notification
[use-result] foo
I'll go ahead and close this one.
Expected Behavior
I have a finally task that consumes results of previous tasks. The documentation states that finally tasks are guaranteed to be executed regardless of success or error.
The task producing the result fails with a nonzero exit code. I expect that the finally task gets triggered.
Actual Behavior
The finally task is not run; instead, it waits forever for results. It would be fine if the expected results are missing (and set to null), but the finally task should always be triggered.
Steps to Reproduce the Problem
resulttest
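The reproducer itself was not captured here; below is a minimal sketch of what a resulttest pipeline exercising this behavior could look like. The task and step names (generator, notification, use-result) and the value foo are taken from the TaskRun output shown above; the result name out, the param name msg, and the images are assumptions:

```yaml
apiVersion: tekton.dev/v1beta1   # API version in use around the reported v0.28.2
kind: Pipeline
metadata:
  name: resulttest
spec:
  tasks:
    - name: generator
      taskSpec:
        results:
          - name: out              # assumed result name
        steps:
          - name: produce
            image: busybox
            script: |
              echo -n "foo" > $(results.out.path)
              exit 1               # fail after the result has been written
  finally:
    - name: notification
      params:
        - name: msg
          value: $(tasks.generator.results.out)
      taskSpec:
        params:
          - name: msg
        steps:
          - name: use-result
            image: busybox
            script: |
              echo "$(params.msg)"
```

With the behavior described in this issue, notification is skipped because the result reference cannot be resolved; on the releases discussed above (v0.48.0 and later, verified on 0.56.0) it runs and prints foo, matching the log in the closing comment.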
Additional Info
There is a related section in the documentation about results after error, but it is not helpful here. Setting
onError: continue
completely hides the error, which is not what I want. Not sure, but this may be related to TEP-0058. Potentially another use case to be covered?
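For reference, onError is a step-level field; a minimal sketch of the setting being referred to (step name and image are illustrative):

```yaml
# With onError: continue the step's non-zero exit code is recorded but the task
# carries on and can still succeed, which is exactly the "hides the error"
# behavior described above.
steps:
  - name: may-fail
    image: busybox
    onError: continue
    script: |
      exit 1
```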
Kubernetes version:
Output of kubectl version:
Tekton Pipeline version:
Output of tkn version or kubectl get pods -n tekton-pipelines -l app=tekton-pipelines-controller -o=jsonpath='{.items[0].metadata.labels.version}':
Client version: 0.17.0
Pipeline version: v0.28.2
Dashboard version: v0.21.0