Feed unreliability affecting CI #55449
I couldn't figure out the best area label to add to this issue. If you have write permissions, please help me learn by adding exactly one area label.
Tagging subscribers to this area: @dotnet/runtime-infrastructure

Issue Details: Runfo Tracking Issue: Feed unreliability affecting CI
Build Result Summary
Are these hangs instances of this issue? https://dev.azure.com/dnceng/public/_build/results?buildId=1237740&view=ms.vss-test-web.build-test-results-tab&runId=36827102&resultId=110545&paneView=dotnet-dnceng.dnceng-build-release-tasks.helix-test-information-tab
…and nothing here.
At a glance, no. If they were, you would see this GitHub issue associated with the build in runfo; at the moment there are no associated issues: https://runfo.azurewebsites.net/view/build/?Number=1237740. Dug in a bit and it's hitting the Python pip error I saw in some other logs. Filed the following to track in core-eng.
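(For illustration only: the actual mitigation in this thread is the Arcade change mentioned further down, but a transient feed error during a pip install is the kind of failure usually handled by retrying. A minimal Python sketch of that pattern, with a placeholder package name:)

```python
import subprocess
import sys
import time

def pip_install_with_retries(package, attempts=3, delay=5.0):
    """Retry `pip install` to ride out transient feed/network failures."""
    for attempt in range(1, attempts + 1):
        result = subprocess.run(
            [sys.executable, "-m", "pip", "install", package],
            capture_output=True,
            text=True,
        )
        if result.returncode == 0:
            return
        print(f"attempt {attempt} failed:\n{result.stderr}", file=sys.stderr)
        if attempt < attempts:
            time.sleep(delay * attempt)  # simple linear backoff between attempts
    raise RuntimeError(f"pip install {package!r} failed after {attempts} attempts")

# Usage (the package name is purely a placeholder):
# pip_install_with_retries("some-package")
```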
Thanks @jaredpar. I see that dotnet/core-eng#13736 covers failures like "The tests all completed successfully but ended up failing with the following root error:", but it does not cover the OSX arm64 hangs where the test starts and then there is nothing in the log: https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-heads-main-050e73acf1cf4c5b91/JIT.CodeGenBringUpTests/console.ae514c4e.log?sv=2019-07-07&se=2021-08-03T16%3A52%3A11Z&sr=c&sp=rl&sig=ZfN3aMhfYLiFMMdWWmscAGvcn3aPJXnHHtkVWyCmHqk%3D. Do we need a separate issue for it?
@sandreenko yes, it seems like we should have a separate issue for that. Can you point me to some builds which hit that failure? I want to look at some of the other log files.
@hoyosjs do you have links to the AzDO builds that produced those logs?
This build has many cases; it is a scheduled run of main testing. This PR-triggered job also has many such failures: https://dev.azure.com/dnceng/public/_build/results?buildId=1237123&view=ms.vss-test-web.build-test-results-tab
Looking at the run's client.py log: it looks like the job exited but left a lot of child processes around, which were then killed; later there were issues uploading the core files (I believe that is what the log says). @MattGal or @ChadNedzlek would need to confirm.
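(A minimal sketch of what "left a lot of child processes around" means, assuming the `psutil` package is available; this is not code from the Helix client, and the PID is a placeholder:)

```python
import psutil

def leftover_children(root_pid):
    """List processes still running under a work item's root process."""
    try:
        root = psutil.Process(root_pid)
    except psutil.NoSuchProcess:
        return []  # the root already exited; its children were reparented
    return [(p.pid, p.name(), p.status())
            for p in root.children(recursive=True)
            if p.is_running()]

# Usage (12345 is a placeholder PID for the work item's root process):
# for pid, name, status in leftover_children(12345):
#     print(pid, name, status)
```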
Taking a look.
I looked at this. There are two annoying things going on:
To see this more clearly, try this Kusto query:
... and look closely at the overlap in start times for work items
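(The query itself was not preserved in this copy of the thread. As a rough stand-in for the idea, the Python sketch below checks exported work-item rows for execution windows that overlap on the same machine; the field names and sample rows are made up for illustration:)

```python
from collections import defaultdict
from datetime import datetime

# (machine, work_item, started, finished) -- made-up sample rows standing in
# for whatever the Kusto export actually provides.
rows = [
    ("osx-arm64-01", "WorkItemA", datetime(2021, 7, 8, 10, 0), datetime(2021, 7, 8, 10, 30)),
    ("osx-arm64-01", "WorkItemB", datetime(2021, 7, 8, 10, 10), datetime(2021, 7, 8, 10, 40)),
]

def overlapping_work_items(rows):
    """Report adjacent work items whose execution windows overlap on one machine."""
    by_machine = defaultdict(list)
    for machine, name, start, end in rows:
        by_machine[machine].append((start, end, name))
    overlaps = []
    for machine, items in by_machine.items():
        items.sort()
        for (s1, e1, n1), (s2, _e2, n2) in zip(items, items[1:]):
            if s2 < e1:  # the next item started before the previous one finished
                overlaps.append((machine, n1, n2))
    return overlaps

print(overlapping_work_items(rows))  # -> [('osx-arm64-01', 'WorkItemA', 'WorkItemB')]
```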
Mitigated with 3827b7f (based on the data that runfo shows). Keeping this open until the change gets into the 6.0 branches (via an Arcade update).
Closing as the fix is now part of every branch that showed the symptom (main, release/6.0*). |