
Add MVP of mocha-like benchmark tests #1167

Closed · wants to merge 1 commit

Conversation

@IvanGoncharov (Member) commented Dec 18, 2017

@mohawk2 I really like your idea from #1163 and I think we need to cover more functions (validate, buildSchema, parse, etc.) with benchmarks like that.

So I created a very small mocha-like wrapper around benchmark.js. Also, since babel-node isn't suitable for performance measurement, the benchmarks run against compiled pure JS. As a bonus, you can specify a git revision (HEAD by default) to benchmark your local copy against.

Also, you can benchmark multiple git revisions against each other, e.g.:
[screenshot: terminal output comparing benchmark results across several revisions]
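
(For illustration only, a minimal sketch of what preparing a git revision for benchmarking could involve; `prepareRevision` is a hypothetical helper, not the PR's actual code, and it leans on the repo's existing `npm run build` output.)

```js
// Hypothetical sketch, not the PR's implementation: export a given git
// revision into a temp dir, build it there, and return the build directory
// so its compiled JS can be require()'d and benchmarked.
const { execSync } = require('child_process');
const { mkdtempSync } = require('fs');
const { tmpdir } = require('os');
const path = require('path');

function prepareRevision(revision) {
  const dir = mkdtempSync(path.join(tmpdir(), 'graphql-bench-'));
  // `git archive` exports the revision's tree without touching the working
  // copy; execSync runs through a shell, so the pipe is fine.
  execSync(`git archive "${revision}" | tar -xC "${dir}"`);
  // Build compiled JS rather than running babel-node, so measurements
  // reflect what users actually install.
  execSync('npm install && npm run build', { cwd: dir });
  return path.join(dir, 'dist'); // assumes the build outputs to ./dist
}
```

With one such directory per revision, the same suite can be run against each build and the numbers compared.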

@mohawk2 (Contributor) commented Dec 18, 2017

While it would make me sad to have this PR merged and not mine, @leebyron's steer that it should be a complex schema is more important. The GitHub schema I put in mine is large, so introspecting it is, in my view, a more meaningful test than kitchen-sink.graphql.

@IvanGoncharov (Member, Author) commented:

@mohawk2 Sorry, I didn't explain my intention for this PR. Last week I also tried to refactor part of this library (the visit function, in my case) in #1145, and having a benchmark for it would be very useful.

So this PR is more about creating an engine for benchmarks than about adding particular benchmarks. As an example, I just added benchmarks for the print and parse operations.

I deliberately didn't add tests for execute because I fully agree that you should be the one to figure out how to test it properly. I just think such tests should live in **/__test__/**-benchmark.js files, e.g. src/execute/__test__/execute-benchmark.js. Plus I added some bells and whistles so you can write benchmarks in mocha style:

```js
// `suite` and `measure` are globals provided by the benchmark runner.
import { readFileSync } from 'fs';
import { join } from 'path';
import { getIntrospectionQuery } from 'graphql'; // import path assumed

suite('Parse string to AST', graphql => {
  const kitchenSink = readFileSync(join(__dirname, './kitchen-sink.graphql'), {
    encoding: 'utf8',
  });
  measure('Kitchen Sink', () => graphql.parse(kitchenSink));

  const introspectionQuery = getIntrospectionQuery();
  measure('Introspection Query', () => graphql.parse(introspectionQuery));
});
```
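
(A rough sketch of how little glue such a wrapper needs: only the `suite`/`measure` names come from the snippet above; everything else, including handing `require('graphql')` to the callback, is an assumption about the runner, not the PR's actual implementation.)

```js
// Illustrative wrapper only: registers mocha-like globals backed by
// benchmark.js. A real runner would pass each revision's build into `fn`.
const Benchmark = require('benchmark');

let currentSuite = null;

global.measure = function measure(name, fn) {
  currentSuite.add(name, fn); // defer to benchmark.js for timing
};

global.suite = function suite(name, fn) {
  currentSuite = new Benchmark.Suite(name);
  fn(require('graphql')); // stand-in for the revision under test
  currentSuite
    .on('cycle', event => console.log(String(event.target)))
    .run();
};
```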

If this PR is merged, I will be able to add a benchmark for the visit function without conflicting with your execute benchmark, because they will live in different files.

@mohawk2 (Contributor) commented Dec 18, 2017

That makes lots of sense! (At least to me)

One observation I have: if we put benchmarks (which are very intensive and a bit time-consuming, by design) in the tests, then they'll get run all the time. Instead they ought to live outside of __test__ and probably outside src. Maybe resources/benchmark after all? I'm pretty neutral about the mocha thing, but if you think it's a good idea, then great.

You may or may not have seen that my benchmark thing now also has a profile mode, since I thought that would be generally useful and I'm trying to figure out why my other PR is 20% slower :-)

@mohawk2 (Contributor) commented Dec 22, 2017

@IvanGoncharov A further thought: I really like the idea of the modularised tests, though I'm still convinced they mustn't live within the normal tests' location. What I'm also very keen on is being able to run each test module just once or twice under the profiler, so there aren't dozens of runs drowning us in data. If you can steer me on how to do this, or you want to take it on, I'm happy with either!
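
(One way that could look, sketched under assumptions: a tiny runner that stubs `measure` to collect callbacks and then invokes each just once or twice, launched under `node --prof`. The runner file name and registry shape are hypothetical.)

```js
// Hypothetical profile runner: instead of timing via benchmark.js, run each
// measured function a fixed small number of times so the profile stays small.
// Usage: node --prof profile-runner.js src/execute/__test__/execute-benchmark.js
// then:  node --prof-process isolate-*.log
const path = require('path');

const registered = [];
global.suite = (name, fn) => fn(require('graphql')); // stand-in library
global.measure = (name, fn) => registered.push({ name, fn });

require(path.resolve(process.argv[2])); // loading the module fills `registered`

const RUNS = 2; // just one or two runs, by design
for (const { name, fn } of registered) {
  console.log(`profiling ${name} (${RUNS} runs)`);
  for (let i = 0; i < RUNS; i += 1) {
    fn();
  }
}
```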

@IvanGoncharov force-pushed the benchmarkTests branch 4 times, most recently from 9329a92 to 027e29b on January 12, 2018.
@IvanGoncharov mentioned this pull request on Jan 13, 2018.
@leebyron (Contributor) commented:

Sorry for not addressing this PR much sooner. I really like this, but have some high-level feedback.

  • While I agree that we should be testing the compiled code we expect people to encounter via an npm install, I'd prefer we don't create a new way to produce it. Ideally we can recycle the npm run build command we already use, which builds an npm-ready output in ./dist (and could be copied to a local dir).

  • I agree with @mohawk2 that placing these within __test__ directories is potentially confusing. How about putting the benchmarks in __bench__ directories? Then they could still be colocated with relevant code and we could still use simple find commands to get them all.

  • The mocha-style benchmark files are really nice, but I'd prefer to see imported functions instead of global functions. Also, perhaps we should stick to describe instead of suite? Though I like measure as the companion to it. (A sketch of that style follows this list.)

  • I wonder if it is possible to avoid the custom distRequire in favor of standard import/require statements in the file? I'm sure that would require some additional work though, so just a thought.
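
(Purely to visualize the third point: what a benchmark file might look like with imported functions instead of globals. The helper module path `./benchmark` and the import of `parse` from the package root are assumptions.)

```js
// Sketch only: describe/measure imported from a shared helper rather than
// injected as globals.
import { readFileSync } from 'fs';
import { join } from 'path';
import { describe, measure } from './benchmark'; // hypothetical helper module
import { parse } from 'graphql'; // or a relative path within the repo

describe('Parse string to AST', () => {
  const kitchenSink = readFileSync(join(__dirname, 'kitchen-sink.graphql'), 'utf8');
  measure('Kitchen Sink', () => parse(kitchenSink));
});
```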

@IvanGoncharov (Member, Author) commented:

@leebyron Thanks for the feedback.
The first three should be pretty easy to implement.
For the fourth one, I need to do some research.

leebyron added a commit that referenced this pull request Feb 16, 2018
This adds a benchmark task inspired by #1167 and #1163. Invoke with `yarn benchmark`.

Allows filtering down which benchmark to run, ala jest, and supplying which revisions to run against.
@leebyron (Contributor) commented Feb 16, 2018

I have a version I built to look at eliminating distRequire; I started with your code. I just put it up at #1251 and I'd love your feedback and thoughts on it.

I realised that staying within __tests__ is helpful, though differentiating via *-benchmark.js can be enough. I added one benchmark for parsing text, and having access to the same utility functions as the tests is quite important.

@leebyron mentioned this pull request on Feb 16, 2018.
leebyron added further commits that referenced this pull request on Feb 17 and Feb 20, 2018, with the same commit message as above.
@IvanGoncharov deleted the benchmarkTests branch on June 20, 2020.