some crates take long enough to make docs.rs look stuck #335
Comments
The queue has lagged behind because of the rapid release of many crates. The same thread both loads crates onto the queue and builds them, so if a particular section of the queue is taking a while, nothing else will get loaded until it's gone through its backlog. In fact, if you look now, you'll see that the queue has changed, and the builder is now stuck on more of them.
Out of curiosity, I downloaded stm32f4 0.7.0 from crates.io, un-tar'd it, and ran:

cargo doc --features "rt, stm32f401, stm32f407, stm32f413, stm32f469" --no-deps

...per the features docs.rs selects. Sure enough, it takes ~20 minutes on one CPU and wants 6.1 GB of RAM to produce 1.1 GB of docs. It looks like the project is aware of the issue: stm32-rs/stm32-rs#3

Should individual doc build jobs be canceled and reported as errors after some more reasonable time period?

Also, the queue has become hard to find; I think it was previously linked somewhere? Whether or not there is much advantage to a separate thread that polls for crate updates and pushes them onto the queue (sketched below), some reporting improvements might include:

This way I'd have been able to conclude that there wasn't some transient failure with my crate update.
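For illustration only, here is a minimal sketch of what separating the registry poller from the builder could look like. This is not docs.rs code; the channel-as-queue, the placeholder release names, and the build step are all hypothetical stand-ins, assuming only std threads and mpsc.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    // Channel standing in for the build queue.
    let (queue_tx, queue_rx) = mpsc::channel::<String>();

    // Poller thread: discovers new releases and enqueues them,
    // independently of how long any individual build takes.
    let poller = thread::spawn(move || {
        // Placeholder releases; a real poller would watch the registry index.
        for release in ["foo-1.0.0", "bar-0.2.1", "stm32f4-0.7.0"] {
            queue_tx.send(release.to_string()).unwrap();
            thread::sleep(Duration::from_secs(1)); // stand-in for a polling interval
        }
    });

    // Builder thread: drains the queue one crate at a time. A slow build
    // still delays later builds, but no longer delays queue intake or
    // the visibility of what is waiting.
    let builder = thread::spawn(move || {
        for release in queue_rx {
            println!("building docs for {release}");
            // a real build_docs(&release) would run here
        }
    });

    poller.join().unwrap();
    builder.join().unwrap();
}
```

Even with this split, a single builder still serializes the builds themselves; the gain is only that the queue (and whatever reporting sits on top of it) keeps moving while a long build runs.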
In case anyone finds this thread again today: the queue is currently stuck again, thanks to another long-running crate build.
Oh, wow! Isn't this a case of a few rare and exceptional crates interfering too regularly with steady operation for normal crates? If there were a timeout mechanism (and a way to cancel a build), wouldn't that be fairer to the majority of crates and make docs.rs more stable? With that in place, another potential accommodation would be to schedule a retry with a much longer timeout at some off-peak time, assuming there is an off-peak.
...or scheduling the retry with a longer timeout at a lower priority, as per your #344.
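As a rough illustration of the per-build timeout (and later retry) idea above, here is a sketch under assumptions: it is not how docs.rs actually builds, and build_docs_with_timeout is a hypothetical helper that simply wraps a cargo doc invocation.

```rust
use std::io;
use std::process::Command;
use std::thread;
use std::time::{Duration, Instant};

/// Hypothetical helper: run `cargo doc` in `crate_dir`, killing the build
/// if it exceeds `limit` so it can be reported as a failure (and perhaps
/// requeued later with a longer limit at lower priority).
fn build_docs_with_timeout(crate_dir: &str, limit: Duration) -> io::Result<bool> {
    let mut child = Command::new("cargo")
        .args(["doc", "--no-deps"])
        .current_dir(crate_dir)
        .spawn()?;

    let start = Instant::now();
    loop {
        // Non-blocking check whether the build has finished.
        if let Some(status) = child.try_wait()? {
            return Ok(status.success());
        }
        if start.elapsed() > limit {
            child.kill()?; // give up so the rest of the queue can proceed
            child.wait()?; // reap the killed process
            return Ok(false);
        }
        thread::sleep(Duration::from_secs(1));
    }
}
```

A real implementation would presumably also want to cap memory, not just wall-clock time, given that the stm32f4 build above needed over 6 GB of RAM.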
Rescheduling with lower priority post-#344 won't really help, since there's still only one builder: it'll still block all other builds while it's running. I feel like the easiest solution is #343, since these crates don't need to be built for more than one platform (I've asked their maintainers in the past if that would be an issue, and they said no), and it would speed up crate builds generally.
I'll close this issue for now:
Noticed this after waiting 4 hours for a crate update to build on docs.rs. The docs.rs queue has remained unchanged; the most recent docs are for stm32f7 (4 hours ago), and the last failure is riot-sys-0.2.2 (5 hours ago).
https://docs.rs/releases/queue
Meanwhile, crates.io has many updated crates since these:
Sampling suggests these haven't been built by docs.rs yet. Should they not at least have appeared in the queue by now?