
ci: add jest test file to benchmarks #4792

Closed · wants to merge 2 commits into main from don/chore/jest-benchmark

Conversation

@DonIsaac (Contributor) commented Aug 9, 2024

This will help bench Jest linter rules. See conversation in #4787 for details.


graphite-app bot commented Aug 9, 2024

Your org has enabled the Graphite merge queue for merging into main

Add the label “merge” to the PR and Graphite will automatically add it to the merge queue when it’s ready to merge. Or use the label “hotfix” to add to the merge queue as a hot fix.

You must have a Graphite account and log in to Graphite in order to use the merge queue. Sign up using this link.


codspeed-hq bot commented Aug 9, 2024

CodSpeed Performance Report

Merging #4792 will not alter performance

Comparing don/chore/jest-benchmark (683f5f4) with main (4dd29db)

Summary

✅ 29 untouched benchmarks

🆕 4 new benchmarks

Benchmarks breakdown

| Benchmark | main | don/chore/jest-benchmark | Change |
| --- | --- | --- | --- |
| 🆕 lexer[coverageReport.test.ts] | N/A | 32.6 µs | N/A |
| 🆕 parser[coverageReport.test.ts] | N/A | 162 µs | N/A |
| 🆕 semantic[coverageReport.test.ts] | N/A | 202.1 µs | N/A |
| 🆕 transformer[coverageReport.test.ts] | N/A | 340.8 µs | N/A |

@Boshen (Member) commented Aug 10, 2024


```rust
// If the `FIXTURE` env var is set, only run the specified benchmark. This is used for sharding in CI.
let test_files = if let Ok(fixture_index) = env::var("FIXTURE") {
    let fixture_index = fixture_index.parse::<usize>().unwrap();
    TestFiles::complicated_one(fixture_index)
} else {
    TestFiles::complicated()
};
```
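The selection pattern above can be sketched in isolation. `TestFiles` and its constructors belong to oxc's benchmark harness, so this minimal stand-in uses a hypothetical fixture list and passes the env lookup in as a parameter (in the real code it would be `env::var("FIXTURE").ok()`):

```rust
// Hypothetical fixture list standing in for oxc's `TestFiles`.
// Only `coverageReport.test.ts` is taken from the PR; the rest are illustrative.
fn all_fixtures() -> Vec<&'static str> {
    vec!["app.jsx", "antd.js", "checker.ts", "coverageReport.test.ts"]
}

// Same env-driven sharding logic as the snippet above, with the env
// lookup injected so it can be exercised without mutating process state.
fn select_fixtures(fixture_env: Option<&str>) -> Vec<&'static str> {
    match fixture_env {
        // A shard index selects exactly one fixture.
        Some(index) => {
            let index: usize = index.parse().expect("FIXTURE must be a number");
            vec![all_fixtures()[index]]
        }
        // No `FIXTURE` set: bench every fixture.
        None => all_fixtures(),
    }
}

fn main() {
    assert_eq!(select_fixtures(Some("3")), vec!["coverageReport.test.ts"]);
    assert_eq!(select_fixtures(None).len(), 4);
    println!("ok");
}
```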

And adding more test files will hurt CI time 😢

```yaml
fixture:
  - 0
  - 1
```
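The `fixture` list presumably feeds a CI matrix, with each shard passing its index through the `FIXTURE` env var. A hedged sketch of how that could look in GitHub Actions (job names and the bench command are illustrative, not oxc's actual workflow):

```yaml
# Illustrative sketch, not oxc's real workflow file.
jobs:
  benchmark:
    strategy:
      matrix:
        fixture: [0, 1]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Each shard benches only the fixture selected by its index.
      - run: cargo bench
        env:
          FIXTURE: ${{ matrix.fixture }}
```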

@rzvxa (Contributor) commented Aug 10, 2024

> And adding more test files will hurt CI time 😢

This isn't directly related to this PR, but I want to share something that's been on my mind.

I believe we can use an on-demand CI pass for heavy tasks. Instead of running some CI jobs on every push, we could trigger them via the Actions button or a command issued through @oxc-bot.

We can keep benchmarking general things on push as before, but provide a command that lets maintainers and the PR's author run e2e benchmarks, oxlint-ecosystem, monitor-oxc, or other expensive tasks on demand. We usually only need to run these once per review, and sometimes we wouldn't need to run them at all on trivial changes, which saves a lot of trees (as suggested by @overlookmotel).

I've mentioned this before in oxc-project/backlog#86.
I haven't worked with GitHub's bot API, but it shouldn't be too hard to implement. We could also run these jobs on our own CI runners (a small cluster of Docker containers). Such an environment would eliminate much of the unpredictable variance caused by GitHub-hosted agents, since every benchmark would run in the same environment (round-robining between runners as new requests queue up). We might even be able to wall-clock-time some of our benchmarks instead of relying only on system-independent measurements.
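For the comment-triggered pass, stock GitHub Actions already supports reacting to PR comments. A sketch under the assumption of a `/bench` slash command (oxc-bot's actual interface would differ):

```yaml
# Hypothetical on-demand benchmark trigger, not an existing oxc workflow.
name: on-demand-bench
on:
  issue_comment:
    types: [created]
jobs:
  bench:
    # Only react to "/bench" comments made on pull requests.
    if: ${{ github.event.issue.pull_request && github.event.comment.body == '/bench' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo bench
```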

I also noticed @overlookmotel is working toward filtering CI tasks so they only run on appropriate changes. That can help as well.

I wish Graphite's CI optimization had a better way to strategize tasks and where they should run.

@Boshen (Member) commented Aug 12, 2024

Closing in favor of selective benchmark setup.

@Boshen Boshen closed this Aug 12, 2024
@Boshen Boshen deleted the don/chore/jest-benchmark branch August 12, 2024 02:03