
[Refactor] Speedup nvm_list_aliases() / nvm ls #1517

Merged 1 commit into nvm-sh:master on Apr 23, 2018

Conversation

@PeterDaveHello (Collaborator) commented Apr 27, 2017

This uses the same trick we used to speed up the installation script.
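The patch itself isn't shown on this page, so here is a minimal sketch of the pattern being described (assumed from the description; the `list_items` function and its dummy items are hypothetical, not nvm code): run each slow, independent step in a backgrounded subshell, then `wait` for all of them.

```shell
# Hedged sketch of the backgrounding trick (assumed pattern; not nvm's
# actual code). Each slow, independent step runs in its own subshell,
# and `wait` blocks until all of them have finished.
list_items() {
  for item in a b c d; do
    (
      sleep 0.1              # stand-in for a slow per-item resolution
      printf '%s\n' "$item"
    ) &                      # background the subshell
  done
  wait                       # wait for every backgrounded job
}

list_items
```

Note that the subshells' completion order, and therefore the output order, is not guaranteed.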

Original time spent for 1000 runs of nvm_list_aliases and nvm ls:

$ time for a in {1..1000}; do nvm_list_aliases > /dev/null; done                                                                             

real    16m38.301s
user    1m24.416s
sys     1m21.192s

$ time for a in {1..1000}; do nvm ls > /dev/null; done                                                                                                              

real    20m55.143s
user    2m4.852s
sys     1m57.508s

New time spent for 1000 runs of nvm_list_aliases and nvm ls:

$ time for a in {1..1000}; do nvm_list_aliases > /dev/null; done

real    11m59.601s
user    2m0.272s
sys     2m23.560s

$ time for a in {1..1000}; do nvm ls > /dev/null; done

real    13m31.442s
user    1m55.320s
sys     2m11.132s

@ljharb (Member) left a comment

Unfortunately this causes the results to be listed in nondeterministic order. Parallelizing is only really safe if output is captured and then joined together later; the nvm_print functions output to stdout.
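The "capture and join" idea can be illustrated like this (a hypothetical helper, not nvm code): each parallel job writes to its own temp file, and the files are concatenated in submission order afterwards, so stdout order stays deterministic even though completion order does not.

```shell
# Illustration of "capture output, then join later": parallel jobs each
# write to a private temp file; the files are then concatenated in a
# fixed order, so the final stdout order is deterministic.
resolve_all() {
  tmpdir="$(mktemp -d)"
  i=0
  for item in "$@"; do
    i=$((i + 1))
    (
      sleep 0.1                       # stand-in for slow resolution
      printf '%s resolved\n' "$item"
    ) > "$tmpdir/$i" &                # capture each job's output
  done
  wait                                # let every job finish
  j=0
  while [ "$j" -lt "$i" ]; do
    j=$((j + 1))
    cat "$tmpdir/$j"                  # join in submission order
  done
  rm -rf "$tmpdir"
}

resolve_all default node stable
```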

@ljharb (Member) commented Apr 28, 2017

Specifically:

master:

$ time nvm alias
airbnb -> 4.6.2 (-> v4.6.2)
default -> node (-> v7.9.0)
loopback -> loopback (-> ∞)
node -> stable (-> v7.9.0) (default)
stable -> 7.9 (-> v7.9.0) (default)
unstable -> 0.11 (-> v0.11.16) (default)
iojs -> iojs-v3.3 (-> iojs-v3.3.1) (default)
lts/* -> lts/boron (-> v6.10.2)
lts/argon -> v4.8.2
lts/boron -> v6.10.2

real	0m2.225s
user	0m0.967s
sys	0m1.387s

PR:

$ time nvm alias
loopback -> loopback (-> ∞)
airbnb -> 4.6.2 (-> v4.6.2)
default -> node (-> v7.9.0)
stable -> 7.9 (-> v7.9.0) (default)
unstable -> 0.11 (-> v0.11.16) (default)
node -> stable (-> v7.9.0) (default)
iojs -> iojs-v3.3 (-> iojs-v3.3.1) (default)
lts/argon -> v4.8.2
lts/boron -> v6.10.2
lts/* -> lts/boron (-> v6.10.2)

real	0m1.363s
user	0m1.061s
sys	0m1.608s

@PeterDaveHello (Collaborator, Author)
The order will be nondeterministic but predictable. Since the related commands take a really long time to finish their output, I wonder how we can make this acceptable in order to save some time? Thanks.

@ljharb (Member) commented Apr 28, 2017

It's definitely one of the slower parts of nvm and I'd love to speed it up.

However, I'm not sure how to do so without rearchitecting the entire process - currently it says "for each alias, resolve it, colorize it, output it" - I think it'd have to be "grab all aliases → resolve all aliases → colorize all aliases → output all aliases", but I don't think POSIX has enough control structures to be able to do that cleanly.
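That staged shape could be sketched roughly like this (purely illustrative; the stages and their outputs are dummies, and colorizing is omitted): each stage consumes the whole list produced by the previous one before anything reaches the terminal ordering logic.

```shell
# Hypothetical staged pipeline: grab all aliases, then resolve them all,
# then output. The resolution step here is a dummy; real code would look
# up each alias's target version.
grab_aliases() {
  printf '%s\n' default node stable      # grab all aliases at once
}

resolve_aliases() {
  while IFS= read -r alias; do
    printf '%s -> v1.0.0\n' "$alias"     # dummy resolution of one alias
  done
}

grab_aliases | resolve_aliases           # stages chained in order
```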

@PeterDaveHello (Collaborator, Author)
So I wonder if this is the best we can currently do, given that it's predictable and also saves about 0.9 seconds on your machine?

@ljharb (Member) commented Apr 28, 2017

When you say "predictable", I'm not sure how it could be since there's no guarantee that the items (within a category) will resolve in the same order every time.

@PeterDaveHello (Collaborator, Author)
@ljharb it's now ordered by how long each command takes. I'm currently testing it using:

for a in {1..100}; do nvm ls &> $a; done

to see whether the results are the same; I think they should be.

Another question: if the minor ordering changes (as you can see, the higher-level order was unchanged: preinstalled versions first, then aliases, then LTS), which part would be affected?

@PeterDaveHello (Collaborator, Author) commented Apr 28, 2017

Oops, interesting results ...

61%

stable -> 7.9 (-> v7.9.0) (default)
unstable -> 0.1 (-> v0.1.2) (default)

39%

unstable -> 0.1 (-> v0.1.2) (default)
stable -> 7.9 (-> v7.9.0) (default)

Other parts are the same.

It's a bit crude, but I'm now testing with 1000 rounds to see how the rates differ.

@PeterDaveHello (Collaborator, Author)
62% vs 38%, almost the same 👀

@ljharb (Member) commented Apr 28, 2017

Any nondeterminism is problematic.

@PeterDaveHello (Collaborator, Author)
So if the unstable / stable ordering problem is fixed, will this be acceptable?

@ljharb (Member) commented Apr 28, 2017

Yes, if the PR output matches the master output deterministically, this seems great.

It would be ideal, but not required, to beef up the test coverage on this though, if possible :-)

@PeterDaveHello (Collaborator, Author) commented Apr 18, 2018

Now using sort to fix the out-of-order output. All tests pass. I've also updated the benchmark in the first comment.
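The sort-based fix can be illustrated like this (a minimal sketch, not the actual patch): the parallel jobs' combined output is piped through `sort`, so the final order is deterministic no matter which subshell finishes first.

```shell
# Minimal sketch of the sort-based fix: backgrounded jobs emit lines in
# whatever order they finish, and a trailing `sort` makes the final
# output deterministic.
list_sorted() {
  {
    for item in stable unstable default; do
      (
        sleep 0.1            # stand-in for slow work
        printf '%s\n' "$item"
      ) &
    done
    wait                     # all jobs done before the pipe closes
  } | sort
}

list_sorted
```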

@PeterDaveHello PeterDaveHello changed the title [Refactor] Speedup nvm_list_aliases() [Refactor] Speedup nvm_list_aliases() / nvm ls Apr 18, 2018
@ljharb ljharb added the performance This relates to anything regarding the speed of using nvm. label Apr 23, 2018
@ljharb (Member) left a comment

I'm still concerned this will cause some ordering issues in the future, but for now this seems good :-)

@ljharb ljharb merged commit d3a9272 into nvm-sh:master Apr 23, 2018
@PeterDaveHello PeterDaveHello deleted the speedup-nvm_list_aliases branch April 24, 2018 05:47
msaladna added a commit to msaladna/nvm that referenced this pull request Mar 16, 2024
…s.current bean counter resulting in 64+ concurrent tids (tasks, w/ threads) thus making it susceptible to hit pids.max. Once reached, subshell behavior is unexpected yielding "retry: Resource temporarily unavailable" errors until cgroup capacity is restored.