Testing methodology does not take IO behavior into account #11

Open
AlexAegis opened this issue May 24, 2023 · 0 comments

@AlexAegis

My relevant setup is a Ryzen 7950X3D and a Sabrent Rocket NVMe SSD.

Initially, my results showed Turbo being almost twice as fast as NX.
[screenshot: initial benchmark results]

The benchmark itself showed me something very interesting: Turbo consistently ran the cached output at around ~380ms with very little variance, while NX alternated between ~880ms and ~420ms runs: ~880ms, ~420ms, ~880ms, ~420ms, and so on. That seemed odd.

But since these tests run back-to-back and hit the filesystem, I figured it was something IO related, so I modified the tests a tiny bit and added a sleep between runs to let everything settle. I had to increase this sleep all the way to 5 seconds before NX gave me results as consistent as Turbo's. With that change, NX finished in around ~430ms.
[screenshot: benchmark results with a 5-second sleep between runs]
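For reference, this is roughly the kind of modification I mean, as a minimal TypeScript sketch rather than the benchmark's actual harness; the commands, run count, and timing details here are illustrative:

```ts
// bench-sleep.ts – illustrative sketch, not the repo's real benchmark harness.
// Runs each tool's cached build several times with a settle period in between,
// so back-to-back runs don't interfere with each other via the filesystem.
import { execSync } from "node:child_process";

const SETTLE_MS = 5_000; // the sleep I had to raise all the way to 5 seconds
const RUNS = 10;

const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

async function bench(label: string, command: string): Promise<void> {
  const timings: number[] = [];
  for (let i = 0; i < RUNS; i++) {
    const start = performance.now();
    execSync(command, { stdio: "ignore" }); // fully cached build
    timings.push(performance.now() - start);
    await sleep(SETTLE_MS); // let IO settle before the next run
  }
  console.log(label, timings.map((t) => `${t.toFixed(0)}ms`).join(", "));
}

async function main() {
  // Commands are placeholders for whatever the benchmark actually invokes.
  await bench("turbo", "pnpm turbo run build");
  await bench("nx", "pnpm nx run-many --target=build");
}

main();
```

The point is just the `sleep(SETTLE_MS)` between iterations; everything else mirrors an ordinary timing loop.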

Anyway, I hope no one picks a tool based on minute performance differences; the whole experience around it is much more important. For example, I don't like that in the NX example every package has to have an nx entry in its package.json containing workspace-relative paths instead of package-relative paths; it's just one more thing to update when you want to move or rename a package.
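To illustrate the nx entry I mean, here is an approximate sketch of the per-package configuration; the package name and paths are made up, not copied from the benchmark repo:

```jsonc
// Illustrative only – approximate shape of a per-package NX entry.
{
  "name": "pkg-0001",
  "nx": {
    "targets": {
      "build": {
        // Workspace-relative path: has to be edited if the package moves.
        "outputs": ["packages/pkg-0001/dist"]
      }
    }
  }
}
```

Because `outputs` is anchored at the workspace root, moving or renaming `packages/pkg-0001` also means editing this entry, whereas a package-relative path would simply move with the package.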
