Handle k6 exit codes #75
So, I think this is because k6 exits with a non-zero exit code, and so the k6-operator will try to keep it going until it succeeds. We could probably add that to the CRD as an option, e.g. restart: never, and have k6-operator interpret that.
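A rough sketch of what that option could look like on the CRD side, as a hypothetical restartPolicy field on the K6 spec (the field name, type, and package are assumptions to illustrate the proposal, not an existing k6-operator option):

```go
package v1alpha1

// K6Spec sketches a hypothetical addition to the K6 custom resource.
// Only the proposed field is shown; the real spec has many more fields.
type K6Spec struct {
	// RestartPolicy would control what the operator does when a runner
	// pod exits non-zero: "Never" leaves the Job as failed, while
	// "OnFailure" would allow the pod to be rescheduled.
	RestartPolicy string `json:"restartPolicy,omitempty"`
}
```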
@b0nete thanks for opening the issue! I agree with @knechtionscoding that this happens because of a non-zero exit of k6 run. IMO, this shouldn't be the default behavior: if thresholds fail, that is a reason for someone to look into the SUT and the script and figure out what to do about it. So k6-operator shouldn't be restarting any pods on failing thresholds 🤔
Looking at https://github.com/grafana/k6/blob/master/errext/exitcodes/codes.go:
EDIT 17 Feb: updated the table with Simme's input and additional info.
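To make the mapping concrete, here is a minimal Go sketch of how an operator-side check could turn a k6 exit code into a reschedule decision. The numeric values are meant to mirror the constants in the linked codes.go, but both the values and the shouldReschedule policy below are illustrative assumptions, not k6-operator's actual logic:

```go
// Sketch: mapping k6 exit codes to a reschedule decision.
// Exit-code values are assumed from errext/exitcodes/codes.go;
// check that file for the authoritative list.
package main

import "fmt"

const (
	cleanExit            = 0
	thresholdsHaveFailed = 99
	genericTimeout       = 102
	genericEngine        = 103
	invalidConfig        = 104
	externalAbort        = 105
	scriptException      = 107
)

// shouldReschedule returns true only for failures that a fresh pod
// (preferably on another node) could plausibly fix.
func shouldReschedule(code int) bool {
	switch code {
	case cleanExit:
		return false // test passed, nothing to do
	case thresholdsHaveFailed, scriptException, invalidConfig, externalAbort:
		return false // user/SUT problem: rescheduling would just fail again
	case genericTimeout, genericEngine:
		return true // possibly environmental: give another node a chance
	default:
		return false // unknown codes: be conservative
	}
}

func main() {
	fmt.Println(shouldReschedule(thresholdsHaveFailed)) // false
	fmt.Println(shouldReschedule(genericTimeout))       // true
}
```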
Do note that I use the term reschedule rather than restart, though. Restarting the exact same pod would likely lead to another failure, but allowing k8s to destroy the pod and reschedule it (preferably even to another node) might not.
Good point! There should be a limit to the number of such restarts, though.
In PR #86, the backoff limit for runner jobs was set to 0: that disables all restarts, no matter the exit code. It's a partial solution to this issue. Cases where there should be a restart (as noted in the comments above) will have to be solved separately.
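For anyone reading along, the fix boils down to setting the Job's backoffLimit to 0 so the Job controller never retries a failed runner pod. A minimal sketch using the Kubernetes API types (the job construction here is simplified for illustration and is not the operator's real code):

```go
package runner

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newRunnerJob builds a runner Job with restarts disabled:
// backoffLimit 0 means the Job controller never retries a failed pod,
// regardless of the container's exit code; RestartPolicyNever also
// stops the kubelet from restarting the container in place.
func newRunnerJob(name, namespace, image string) *batchv1.Job {
	backoffLimit := int32(0)
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Spec: batchv1.JobSpec{
			BackoffLimit: &backoffLimit,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:    "k6",
						Image:   image,
						Command: []string{"k6", "run", "/test/test.js"},
					}},
				},
			},
		},
	}
}
```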
Any progress on this? It blocks usage of the operator for me, unfortunately. As a workaround, I'm thinking I could patch the job after the operator creates it.
Hi @jsravn, as described in the last comment before yours, this was partially fixed in 0cdcc9d as part of PR #86. I expected that PR to be merged by now, but it's being delayed due to other issues 😞 I'll pull out this specific commit with the backoff change tomorrow so that it can be merged in.
Was this merged in? @yorugac
What image is that? I tried v0.0.7rc4 (https://github.com/grafana/k6-operator/tree/v0.0.7rc4/config/default) and it doesn't have it. Is it ghcr.io/grafana/operator:latest, or do I build it myself?
No, you don't need to build it; it's available with the commit as the tag:
Connected issue in k6: grafana/k6#2804
Hi, I'm executing load tests in my Kubernetes cluster, but I have a problem when tests fail.
I need the tests to be executed only once: whether they succeed or fail, they should not be executed again.
Currently, if the tests run OK they are not executed again, but if a test threshold fails, a starter container is automatically created and launches another pod to try to run the test again.
I leave my config files here; I tried to set abortOnFail on the threshold and to use the abortTest() function, but the problem persists.
I think it is k6-operator behaviour; maybe you can help me.
This is my test file.
And this is my k6 definition.
I hope you can help me, thanks!