Occasional hangs at updating registry #2090

Closed
matthewkmayer opened this issue Oct 29, 2015 · 17 comments · Fixed by #2454

Comments

@matthewkmayer

I'm seeing cargo build and other cargo commands, such as test and package, sometimes hang at this step:

Updating registry `https://github.com/rust-lang/crates.io-index` 

Example of cargo package:

Packaging rusoto v0.7.0 (file:///Users/matthewmayer/Documents/DualSpark/rust-aws)
   Archiving .gitignore
   Archiving .travis.yml
   Archiving AWS-CREDENTIALS.md
   Archiving Cargo.toml
   Archiving codegen/botocore_parser.py
   Archiving codegen/requirements.txt
   Archiving codegen/s3.json
   Archiving codegen/s3.rs
   Archiving codegen/sqs.json
   Archiving codegen/sqs.rs
   Archiving docgen.sh
   Archiving LICENSE
   Archiving README.md
   Archiving RELEASING.md
   Archiving s3-sample-creds
   Archiving src/bin/main.rs
   Archiving src/credentials.rs
   Archiving src/error.rs
   Archiving src/lib.rs
   Archiving src/params.rs
   Archiving src/regions.rs
   Archiving src/request.rs
   Archiving src/s3.rs
   Archiving src/sample-credentials
   Archiving src/signature.rs
   Archiving src/sqs.rs
   Archiving src/xmlutil.rs
   Archiving tests/sample-data/default_profile_credentials
   Archiving tests/sample-data/list_queues_with_queue.xml
   Archiving tests/sample-data/multiple_profile_credentials
   Archiving tests/sample-data/no_credentials
   Archiving tests/sample-data/s3_complete_multipart_upload.xml
   Archiving tests/sample-data/s3_get_buckets.xml
   Archiving tests/sample-data/s3_initiate_multipart_upload.xml
   Archiving tests/sample-data/s3_list_multipart_uploads.xml
   Archiving tests/sample-data/s3_list_multipart_uploads_no_multipart_uploads.xml
   Archiving tests/sample-data/s3_multipart_uploads_with_parts.xml
   Archiving tests/sample-data/s3_temp_redirect.xml
   Verifying rusoto v0.7.0 (file:///Users/matthewmayer/Documents/DualSpark/rust-aws)
    Updating registry `https://github.com/rust-lang/crates.io-index`

It sat at that last line for 15 minutes, pinning a single core of my machine, until I gave up and ctrl-c'd it. Rerunning the cargo command then finished instantly.

I'm also sometimes seeing this behavior with our Travis CI builds; Travis automatically cancels the build after 10 minutes without output.

I think the easiest way to reproduce this issue is to check out Rusoto (https://github.com/DualSpark/rusoto) at be07f36 and run this command:

cargo clean && cargo package
@alexcrichton
Member

This looks like resolution is going into a seemingly infinite loop... seems bad!

@matthewkmayer
Author

That could be it. Is there other information I can provide, such as more detailed logging? I'm not sure how to provide more info.

Another piece of information that may help: I don't recall this being an issue before we locked down to specific versions of dependencies in our Cargo.toml file.

I'll be updating to Rust 1.4.0 today; maybe cargo 0.5.0 has this fixed.

@matthewkmayer
Author

Dang, same issue.

$ rustc --version
rustc 1.4.0 (8ab8581f6 2015-10-27)

$ cargo --version
cargo 0.5.0-nightly (833b947 2015-09-13)

git checkout be07f36

cargo clean && cargo package
   Packaging rusoto v0.7.0 (file:///Users/matthewmayer/Documents/DualSpark/rust-aws)
   Verifying rusoto v0.7.0 (file:///Users/matthewmayer/Documents/DualSpark/rust-aws)
    Updating registry `https://github.com/rust-lang/crates.io-index`

@alexcrichton
Member

Nah it's ok, I can reproduce locally with the latest nightly, so I'll try to get around to investigating soon!

@mitchmindtree

Just thought I'd let you know we're getting this issue when trying to update and build glutin. Here is the error occurring within Travis, for example:

$ rustc --version
rustc 1.4.0 (8ab8581f6 2015-10-27)
$ cargo --version
cargo 0.5.0-nightly (833b947 2015-09-13)
$ cargo build --verbose
    Updating registry `https://github.com/rust-lang/crates.io-index`
No output has been received in the last 10m0s, this potentially indicates a stalled build or something wrong with the build itself.
The build has been terminated

That's the output of the current stable OS X build.
The nightly Linux build also produced the same error.

Strangely, however, the build passed without issues on both nightly OS X and stable Linux? 😕

This is the PR in which I first noticed the issue.

I'm also currently hitting this trying to build glutin locally and haven't yet worked out a way around it.

@jimmycuadra
Contributor

Workaround: change the dependencies to "x.y.z" (which is the same as "^x.y.z") instead of locking them to exact versions with "=x.y.z". This is probably a better practice for a library anyway.
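For example, in a Cargo.toml (the crate name and versions here are just placeholders):

```toml
[dependencies]
# Exact pin: the resolver must find precisely this version.
# some_crate = "=0.7.0"

# Caret requirement: "0.7.0" is shorthand for "^0.7.0" and accepts any
# semver-compatible 0.7.x release, which gives the resolver more room.
some_crate = "0.7.0"
```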

@matthewkmayer
Author

Is there anything I can do to help work on this? Will whatever fix #2064 yields also solve this?

Happy to do some investigation work if pointed in the right direction.

@alexcrichton
Member

@matthewkmayer yeah, I suspect that #2064 will likely yield a resolution for this; unfortunately, beyond that I'm not sure there's an easy way to tackle it.

One idea @wycats had was to bail out after more than N instances of recursion (where N is something like 20k), which would cause Cargo to generate an error. Unfortunately that still wouldn't be the greatest, because these graphs do have solutions that can be reached quickly via one traversal order, just not another.
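A rough sketch of that bail-out idea (hypothetical code, not Cargo's actual resolver; the names and the 20k limit are only illustrative):

```rust
// Hypothetical guard: give up with an error after N activation attempts
// instead of letting resolution spin indefinitely.
const MAX_ATTEMPTS: u64 = 20_000;

fn activate_next(attempts: &mut u64) -> Result<(), String> {
    *attempts += 1;
    if *attempts > MAX_ATTEMPTS {
        return Err("dependency resolution gave up after too much backtracking".into());
    }
    // ... try the next candidate version and recurse into its dependencies ...
    Ok(())
}

fn main() {
    let mut attempts = 0;
    // Drive the sketch past the limit to show the error path.
    let result = (0..=MAX_ATTEMPTS).try_for_each(|_| activate_next(&mut attempts));
    println!("{:?}", result); // Err("dependency resolution gave up ...")
}
```

As noted above, the drawback is that a graph rejected this way may still have a solution that a different traversal order would have found quickly.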

alexcrichton added a commit to alexcrichton/cargo that referenced this issue Mar 9, 2016
Currently when we're attempting to resolve a dependency graph we locally
optimize the order in which we visit candidates for a resolution (most
constrained first). Once a version is activated, however, it will add a whole
mess of new dependencies that need to be activated to the global list, currently
appended at the end.

This unfortunately can lead to pathological behavior. By always popping from the
back and appending to the back of pending dependencies, super constrained
dependencies in the front end up not getting visited for quite a while. This in
turn can cause Cargo to appear to hang for quite a while as it's so aggressively
backtracking.

This commit switches the list of dependencies-to-activate from a `Vec` to a
`BinaryHeap`. The heap is sorted by the number of candidates for each
dependency, with the least candidates first. This ends up massively cutting down
on resolution times in practice whenever `=` dependencies are encountered
because they are resolved almost immediately instead of way near the end if
they're at the wrong place in the graph.

This alteration in traversal order ended up messing up the existing cycle
detection, so that was just removed entirely from resolution and moved to its
own dedicated pass.

Closes rust-lang#2090
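A simplified sketch of the ordering described above (illustrative code with made-up data, not the actual Cargo resolver):

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Pending dependencies are kept in a heap keyed by how many candidate versions
// they have, so the most constrained dependency (fewest candidates, e.g. an `=`
// requirement with a single match) is activated first.
fn main() {
    // (dependency name, number of candidate versions) -- illustrative data only.
    let pending = vec![("libc", 40), ("openssl-sys", 12), ("rusoto", 1)];

    let mut heap: BinaryHeap<(Reverse<usize>, &str)> = pending
        .into_iter()
        .map(|(name, candidates)| (Reverse(candidates), name))
        .collect();

    while let Some((Reverse(candidates), name)) = heap.pop() {
        println!("activating {} ({} candidates)", name, candidates);
        // Activating a dependency would push its own dependencies onto the heap.
    }
}
```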
bors added a commit that referenced this issue Mar 12, 2016
Globally optimize traversal in resolve

@jaemk

jaemk commented Jun 18, 2017

I seem to be hitting this: cargo is hanging on "Updating registry" and pinning a core on stable (1.18) and the latest nightly, on both Linux and Mac.

@jaemk

jaemk commented Jun 18, 2017

Update: I got it to work again by changing the version of the newly added crate in my Cargo.toml to *. After it updated, I was able to set it back to a partial version (0.7).
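In Cargo.toml terms, that workaround looks roughly like this (placeholder crate name):

```toml
[dependencies]
# Step 1: temporarily allow any version so the registry update and resolution complete.
# some_crate = "*"

# Step 2: once it succeeds, set it back to a partial (caret) requirement.
some_crate = "0.7"
```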

@Masood-Lapeh

Hi. $ cargo search bidi hangs:
Updating registry `https://github.com/rust-lang/crates.io-index`
I'm using cargo 0.19.0.

@MTRNord

MTRNord commented Jul 13, 2017

It is back again :( I'm getting it on my server and on Travis CI builds :/

It happens on 0.19.0 and 0.21.0. I guess it could be a repo issue instead of a cargo one.

Log: https://travis-ci.org/Nordgedanken/IMAPServer-rs/jobs/253160867

@exowaucka

Also happening here inside a Docker container, although it might just be taking a very long time. It doesn't seem to take as long when running directly on the host system, though. (No VMs are involved, just Docker.)

@alexcrichton
Member

If you think you're hitting this issue, it'd be great to get assistance in debugging it. If Cargo's hogging CPU, you're probably running into #4066. If Cargo's not hogging CPU, it'd be great if you could attach a debugger; the backtrace of the main thread will very likely indicate that #4261 is the actual issue here.
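For example, on Linux one way to grab that backtrace (assuming gdb is installed and the hung cargo is the newest matching process) is:

```
$ gdb -p "$(pgrep -n cargo)" -batch -ex "thread 1" -ex "bt"
```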

@exowaucka

I'm not too keen on trying to attach a debugger inside a Docker container, but it's happening right now, and I don't see any significant CPU usage, so it's probably #4261. I'm on a janky workplace network, so it wouldn't surprise me if git operations sometimes take unreasonably long.

@PeterZhizhin

PeterZhizhin commented Nov 9, 2017

I tried to attach a debugger to the cargo process when it hung at "Updating registry".
Of course, I did not have debugging symbols in the binary, so I could not get a readable backtrace.

I tried to get cargo from GitHub and build it with debugging symbols. Unfortunately, my cargo from Homebrew hung at "Updating registry" while trying to build cargo with cargo build.

Inception.

In case this not-very-meaningful backtrace is helpful (I doubt it):

#0  0x00007fffe01e9eb6 in ?? ()
#1  0x0000000100352663 in wait_for ()
#2  0x0000000100352389 in curls_read ()
#3  0x0000000100387bc6 in read_cb ()
#4  0x00007fffd0dd8874 in ?? ()
#5  0x00007fff5fbf5720 in ?? ()
#6  0x0000000100373864 in use_git_alloc ()
#7  0x00007fffd0c0ba77 in ?? ()
#8  0x00007fff5fbf5800 in ?? ()
#9  0x00007fffd0c0f2b4 in ?? ()
#10 0x0000000000001bf0 in ?? ()
#11 0x00007fffe8f82f58 in ?? ()
#12 0x00007fff5fbf57c8 in ?? ()
#13 0x00007fff5fbf57c0 in ?? ()
#14 0x00007fff5fbf57c8 in ?? ()
#15 0x0000000100b32ec0 in ?? ()
#16 0x0000000100b32ec8 in ?? ()
#17 0x0000000102022644 in ?? ()
#18 0x00007fff5fbf5818 in ?? ()
#19 0x000000000000fccc in ?? ()
#20 0x0000000000000200 in ?? ()
#21 0xc7dc66f0a28a124b in ?? ()
#22 0xc0eecb0cc340fe3b in ?? ()
#23 0x00000001020222b8 in ?? ()
#24 0x0000000000000000 in ?? ()

@Aarowaim

Aarowaim commented Dec 3, 2017

gfx-rs is unstable, so I attempted to switch one of my projects to Piston-Graphics. I encountered this issue of cargo being stuck updating the registry today while trying to compile with these Cargo.toml dependencies from the Piston tutorials:

[dependencies]
piston = "0.35.0"
piston2d-graphics = "0.23.0"
pistoncore-glutin_window = "0.42.0"
piston2d-opengl_graphics = "0.49.0"

I suspect that one of the packages may be unavailable, but cargo continuously tries to access it.
