⚠️ CI failed ⚠️ #299
This one is a bit tricky to judge because it is quite noisy. Note that the runs that failed (the x's in the plot) are not used to compute the mean that we use for the analysis. This is running with runtime 0.0.4 (no packages changed, except maybe coiled), which leads me to think that this regression report is a result of noise. The only thing that gives me doubt is that on upstream, around the same date, we see an increase in durations, though not bad enough yet to call it a regression. @ian-r-rose do you have any thoughts?
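For context, here is a minimal sketch of how failed runs might be excluded before computing the mean used for the regression check. The column names, baseline, and threshold are assumptions for illustration only, not the actual benchmark code:

```python
import pandas as pd

# Hypothetical benchmark results; the "failed" runs are the x's in the plot.
runs = pd.DataFrame(
    {
        "duration_s": [41.2, 39.8, None, 43.1, None, 40.5],
        "status": ["passed", "passed", "failed", "passed", "failed", "passed"],
    }
)

# Failed runs are dropped before computing the mean used for the analysis.
passed = runs[runs["status"] == "passed"]
mean_duration = passed["duration_s"].mean()

# Illustrative regression check: flag if the new mean exceeds a baseline
# by more than some tolerance (the numbers here are made up).
baseline_s = 38.0
tolerance = 0.10  # 10%
is_regression = mean_duration > baseline_s * (1 + tolerance)

print(f"mean over passed runs: {mean_duration:.1f}s, regression? {is_regression}")
```

With a signal this noisy, a handful of slow (but passing) runs can push the mean over the threshold even when nothing actually regressed, which is why the excluded failures and the upstream comparison matter for judging this report.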
On upstream, even though it's not reported as a regression since the signal is very noisy, wall time seems to have improved since this was reported, although average memory seems to be increasing. @gjoseph92 any thoughts on this? I'm trying to assess whether I should close this or keep it open.
I wonder if this is due to
However, the movement is the opposite of what I'd expect. The first PR should have made runtime go down from the baseline, then the follow-up should have made it go back up to the baseline. Unless the "fix" really didn't work properly. I would also expect it to be affected by dask/distributed#6975. It's kind of unbelievable how much noisiness and variance there is on 0.0.4. I would say that this workload is simply unrunnable on those older versions.
Sure, but the plots in my latest comment are from upstream (see https://coiled.github.io/coiled-runtime/coiled-upstream-py3.9.html), and I see that wall time went down on 09/02 but average memory went up. Is this expected?
Workflow Run URL