Occasional hangs at updating registry #2090
Comments
This looks like resolution is going into a seemingly infinite loop... seems bad!
That could be it. Is there other information I can provide, such as more detailed logging? I'm not sure how to provide more info. Another piece of information that may help: I don't recall this being an issue before we locked down to specific versions of dependencies in our Cargo.toml file. I'll be updating to Rust 1.4.0 today; maybe cargo 0.5.0 has this fixed.
Dang, same issue.

```
$ rustc --version
rustc 1.4.0 (8ab8581f6 2015-10-27)
$ cargo --version
cargo 0.5.0-nightly (833b947 2015-09-13)

$ git checkout be07f36
$ cargo clean && cargo package
   Packaging rusoto v0.7.0 (file:///Users/matthewmayer/Documents/DualSpark/rust-aws)
   Verifying rusoto v0.7.0 (file:///Users/matthewmayer/Documents/DualSpark/rust-aws)
    Updating registry `https://github.com/rust-lang/crates.io-index`
```
Nah it's ok, I can reproduce locally with the latest nightly, so I'll try to get around to investigating soon!
Just thought I'd let you know we're getting this issue when trying to update and build glutin. Here is the error occurring within Travis, for example:
That's the output of the current stable OS X build. Strangely, however, the build passed without issues on both nightly OS X and stable Linux? 😕 This is the PR in which I first noticed the issue. I'm also currently hitting this trying to build glutin locally and haven't yet worked out a way around it.
Workaround: change the dependencies to "x.y.z" (which is the same as "^x.y.z") instead of locking them at exact versions with "=x.y.z". This is probably a better practice for a library anyway.
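In Cargo.toml terms, the workaround looks like this (the crate name and versions below are illustrative, not from the project in this issue):

```toml
[dependencies]
# Exact pin: only this one version can ever satisfy the requirement,
# which gives the resolver no room and can trigger the pathological
# search described in this issue.
# some-crate = "=0.7.0"

# Caret requirement (the default): "0.7.0" means "^0.7.0",
# i.e. >=0.7.0 and <0.8.0, which is friendlier for libraries.
some-crate = "0.7.0"
```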
Anything I can do to help work on this? Is whatever fix #2064 will yield going to solve this? Happy to do some investigation work if pointed in the right direction.
@matthewkmayer yeah I suspect that #2064 will likely yield a resolution for this, unfortunately other than that I'm not sure if there's an easy way to tackle this. One idea @wycats had was to bail out after more than N instances of recursion (where N is something like 20k) which would cause Cargo to generate an error, but that unfortunately still wouldn't be the greatest because these graphs do have solutions which can be reached quickly via one traversal, just not another.
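The "bail out after N steps" idea can be sketched with a toy backtracking search. This is a hypothetical, heavily simplified stand-in for Cargo's real resolver (the "constraint" here is just pairwise-distinct choices): it counts candidate activations and returns an error past a cap, so the user sees a failure instead of an apparent hang.

```rust
/// Toy backtracking "resolver": pick one candidate per slot such that
/// all chosen values are pairwise distinct. `steps` counts candidate
/// activations; past `max_steps` we give up with an error instead of
/// searching (possibly for a very long time) for a solution.
fn solve(
    slots: &[Vec<u32>],
    chosen: &mut Vec<u32>,
    steps: &mut u64,
    max_steps: u64,
) -> Result<bool, String> {
    if chosen.len() == slots.len() {
        return Ok(true); // every "dependency" has an activated "version"
    }
    for &candidate in &slots[chosen.len()] {
        *steps += 1;
        if *steps > max_steps {
            return Err("resolution took too many steps; giving up".into());
        }
        // Toy constraint: a value may only be used once.
        if chosen.contains(&candidate) {
            continue; // conflict: try the next candidate
        }
        chosen.push(candidate);
        if solve(slots, chosen, steps, max_steps)? {
            return Ok(true);
        }
        chosen.pop(); // backtrack: undo the activation
    }
    Ok(false)
}

fn main() {
    let slots = vec![vec![1, 2], vec![1, 2], vec![1, 2, 3]];
    let mut chosen = Vec::new();
    let mut steps = 0u64;
    let found = solve(&slots, &mut chosen, &mut steps, 20_000).unwrap();
    println!("solution: {chosen:?} (found={found}, steps={steps})");
}
```

As the comment thread notes, the drawback of a hard cap is that some graphs do have solutions reachable quickly via one traversal order but not another, so a cap can reject graphs that a better ordering would solve instantly.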
Globally optimize traversal in resolve

Currently when we're attempting to resolve a dependency graph we locally optimize the order in which we visit candidates for a resolution (most constrained first). Once a version is activated, however, it will add a whole mess of new dependencies that need to be activated to the global list, currently appended at the end.

This unfortunately can lead to pathological behavior. By always popping from the back and appending to the back of pending dependencies, super constrained dependencies in the front end up not getting visited for quite awhile. This in turn can cause Cargo to appear to hang for quite awhile as it's so aggressively backtracking.

This commit switches the list of dependencies-to-activate from a `Vec` to a `BinaryHeap`. The heap is sorted by the number of candidates for each dependency, with the least candidates first. This ends up massively cutting down on resolution times in practice whenever `=` dependencies are encountered, because they are resolved almost immediately instead of near the end if they're at the wrong place in the graph.

This alteration in traversal order ended up messing up the existing cycle detection, so that was removed entirely from resolution and moved to its own dedicated pass.

Closes #2090
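The `Vec`-to-`BinaryHeap` change can be illustrated with a minimal sketch. The dependency names and candidate counts below are made up for illustration; this is not Cargo's actual resolver code. `std::collections::BinaryHeap` is a max-heap, so the candidate count is wrapped in `Reverse` to pop the most constrained dependency first.

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

/// Return the order in which pending dependencies would be activated
/// when the queue is a heap keyed by candidate count (fewest first),
/// rather than a Vec popped from the back.
fn activation_order(deps: &[(&'static str, usize)]) -> Vec<&'static str> {
    // Reverse(count) turns the max-heap into a min-by-count heap.
    let mut queue: BinaryHeap<(Reverse<usize>, &'static str)> = BinaryHeap::new();
    for &(name, candidates) in deps {
        queue.push((Reverse(candidates), name));
    }
    let mut order = Vec::new();
    while let Some((_, name)) = queue.pop() {
        order.push(name);
    }
    order
}

fn main() {
    // A dependency pinned with `=x.y.z` has exactly one candidate, so
    // it is visited immediately instead of languishing at the back.
    let order = activation_order(&[("serde", 40), ("libc", 12), ("pinned-dep", 1)]);
    println!("{order:?}"); // most constrained first
}
```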
I seem to be hit by this: cargo is hanging on "updating registry" and pinning a core, on stable (1.18) and the latest nightly, on both Linux and Mac.
Update: I got it to work again by changing the newly added (to my
Hi.
It is back again :( I'm getting it on my server and on Travis CI builds :/ It happens on 0.19.0 and 0.21.0. I guess it could be a repo issue instead of a cargo one. Log: https://travis-ci.org/Nordgedanken/IMAPServer-rs/jobs/253160867
Also happening here inside a Docker container, although it might just be taking a very long time. It doesn't seem to take as long when running directly on the host system, though. (No VMs are involved, just Docker.)
If you think you're hitting this issue then it'd be great to get assistance in debugging this. If Cargo's hogging CPU then you're probably running into #4066. If Cargo's not hogging CPU it'd be great if you could attach a debugger. The backtrace of the main thread will very likely indicate that #4261 is the actual issue here.
I'm not too keen on trying to attach a debugger inside a Docker container, but it's happening right now and I don't see any significant CPU usage, so it's probably #4261. I'm on a janky workplace network, so it wouldn't surprise me if git operations sometimes take unreasonably long.
I tried to attach a debugger to the cargo process when it hung at "Updating registry". I also tried to get cargo from GitHub and build it with debugging symbols; unfortunately, my cargo from Homebrew then hung at "Updating registry" while building cargo, Inception-style. In case this (probably not meaningful) backtrace is helpful:
I suspect that one of the packages may be unavailable, but cargo continuously tries to access it.
I'm seeing `cargo build` and other cargo commands such as `test` and `package` sometimes start hanging at this step:

```
    Updating registry `https://github.com/rust-lang/crates.io-index`
```

Example of `cargo package`:

```
   Packaging rusoto v0.7.0 (file:///Users/matthewmayer/Documents/DualSpark/rust-aws)
   Archiving .gitignore
   Archiving .travis.yml
   Archiving AWS-CREDENTIALS.md
   Archiving Cargo.toml
   Archiving codegen/botocore_parser.py
   Archiving codegen/requirements.txt
   Archiving codegen/s3.json
   Archiving codegen/s3.rs
   Archiving codegen/sqs.json
   Archiving codegen/sqs.rs
   Archiving docgen.sh
   Archiving LICENSE
   Archiving README.md
   Archiving RELEASING.md
   Archiving s3-sample-creds
   Archiving src/bin/main.rs
   Archiving src/credentials.rs
   Archiving src/error.rs
   Archiving src/lib.rs
   Archiving src/params.rs
   Archiving src/regions.rs
   Archiving src/request.rs
   Archiving src/s3.rs
   Archiving src/sample-credentials
   Archiving src/signature.rs
   Archiving src/sqs.rs
   Archiving src/xmlutil.rs
   Archiving tests/sample-data/default_profile_credentials
   Archiving tests/sample-data/list_queues_with_queue.xml
   Archiving tests/sample-data/multiple_profile_credentials
   Archiving tests/sample-data/no_credentials
   Archiving tests/sample-data/s3_complete_multipart_upload.xml
   Archiving tests/sample-data/s3_get_buckets.xml
   Archiving tests/sample-data/s3_initiate_multipart_upload.xml
   Archiving tests/sample-data/s3_list_multipart_uploads.xml
   Archiving tests/sample-data/s3_list_multipart_uploads_no_multipart_uploads.xml
   Archiving tests/sample-data/s3_multipart_uploads_with_parts.xml
   Archiving tests/sample-data/s3_temp_redirect.xml
   Verifying rusoto v0.7.0 (file:///Users/matthewmayer/Documents/DualSpark/rust-aws)
    Updating registry `https://github.com/rust-lang/crates.io-index`
```
It was at the last line for 15 minutes, pinning a single core of my machine, until I gave up and ctrl-c'd it. Rerunning the cargo command finishes instantly.
I'm also sometimes seeing this behavior with our Travis CI builds. Travis automatically cancels the build after 10 minutes of waiting.
I think the easiest way of reproducing this issue is to check out Rusoto (https://github.com/DualSpark/rusoto) @ be07f36 and run this command:

```
cargo clean && cargo package
```