Intermittent failures with "Signaled to exit!" message #591
Comments
Not sure if that's related, but here's a job that has now been killed twice with OOM: https://cirrus-ci.com/task/6598968113102848 It's just a few code formatters, and it never exceeded its allotted 256 MB before. (Worked fine with a manual re-run.)
I've only seen similar behaviour when the execution environment is killed due to OOM. Cirrus only detects OOMs for containers at the moment. The Cirrus agent is invoked inside a container or a VM and is therefore killed along with it in case of an OOM; logging (#93) should help get better visibility into your builds. From my experience, though, it's usually related to tools that don't respect the cgroup limits of containers. Quite a few tools are still not aware of being executed in a container and by default pick up the resources of the host rather than the container. Cirrus uses 32 CPU / 96 GB host VMs for running containers. Maybe your tool thinks it has that many resources instead of 1 CPU / 0.25 GB?
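A quick way to check whether a tool is seeing the host's resources instead of the container's is to compare what the standard APIs report against the cgroup limits. Here's a minimal sketch, assuming Linux with cgroup v1 or v2 mounted at the usual paths (the helper names are mine, not anything Cirrus provides):

```python
import os

def parse_limit(raw):
    """Parse a raw cgroup memory-limit string into bytes, or None if unlimited."""
    # cgroup v2 reports the literal string "max" for no limit;
    # cgroup v1 reports a number close to 2**63 instead.
    if raw == "max":
        return None
    value = int(raw)
    return None if value >= 2**62 else value

def cgroup_memory_limit():
    """Return the container's memory limit in bytes, or None if unlimited/unknown."""
    for path in (
        "/sys/fs/cgroup/memory.max",                    # cgroup v2
        "/sys/fs/cgroup/memory/memory.limit_in_bytes",  # cgroup v1
    ):
        try:
            raw = open(path).read().strip()
        except OSError:
            continue  # path not present on this system; try the next layout
        return parse_limit(raw)
    return None

# os.cpu_count() reports the host's CPUs, NOT the container's cgroup quota --
# this mismatch is exactly what trips up tools that size thread pools from it.
print(f"os.cpu_count(): {os.cpu_count()}, cgroup memory limit: {cgroup_memory_limit()}")
```

If a tool sizes its thread pools or heaps from `os.cpu_count()` (or the equivalent in its language) rather than the cgroup limits, it can easily overshoot a 256 MB container on a 96 GB host.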
We did have this issue, but I fixed it a while ago. I think I'll try to create a VM or something that emulates that many cores, and see if I can reproduce the issue there. Thanks!
Question
(I'm not convinced this is a bug in Cirrus, so I'm filing this as a question.)
For a few weeks now, my jobs have sometimes been failing with "Signaled to exit!" at the end of their log, like this one: https://cirrus-ci.com/task/6101281580253184 Most of the time this happens on FreeBSD, but sometimes with Docker on Linux as well: https://cirrus-ci.com/task/4998716163620864
I thought it might be caused by the OOM killer. Local builds in Docker succeed with 2 GB, so I used that as the limit on Cirrus. Still sporadic failures. I then raised the limit to 3 GB, and now 4 GB, and I'm still seeing failures.
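For reference, the limit being raised here is the per-task memory cap in `.cirrus.yml`. If I'm reading the Cirrus docs right, for a Linux container task it looks roughly like this (the image and script are just placeholders):

```yaml
task:
  container:
    image: debian:stable   # placeholder image
    cpu: 1
    memory: 4G             # the cap raised from 2G -> 3G -> 4G above
  build_script: make       # placeholder build step
```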
A similar issue was reported earlier (#137); it looks like that was a bug in Cirrus and it was resolved. So I guess I'm running into something different.
Is there anything I can do to debug this?