Benchmarks completely redone #2352

Merged: 1 commit merged into yewstack:master on Jan 10, 2022

Conversation

@voidpumpkin (Member) commented on Jan 9, 2022:

Description

A lot has been done, so here is a compiled list to get you up to speed:

  • Reworked the benchmark action completely; it now
    • pulls from js-framework-benchmark master
    • uses new crates that are essentially copies of the js-framework-benchmark implementations: tools/benchmark-struct and tools/benchmark-hooks
    • displays results using github-action-benchmark (see pictures)
    • stores every commit's results in the gh-pages branch, which we can serve using GitHub Pages
  • Introduced a new crate, tools/process-benchmark-results, to transform the data into the format github-action-benchmark expects (see the sketch below)
  • Used caching all around, so benchmarks now take ~6 minutes
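
To make the transformation step concrete, here is a minimal sketch of what a crate like tools/process-benchmark-results could do. This is not the actual implementation from this PR: the input shape (RawResult), the file names, and the averaging step are assumptions for illustration. The only firm constraint is the output side, since github-action-benchmark's custom tools (e.g. customSmallerIsBetter) read a JSON array of entries with name, unit, and value fields.

```rust
// Hypothetical sketch (not the PR's actual code): turn raw js-framework-benchmark
// timings into the JSON array accepted by github-action-benchmark's custom tools.
// Requires the `serde` crate (with the derive feature) and `serde_json`.
use serde::{Deserialize, Serialize};

// Assumed input shape; the real benchmark output format may differ.
#[derive(Deserialize)]
struct RawResult {
    benchmark: String, // e.g. "01_run1k"
    framework: String, // e.g. "yew-struct" or "yew-hooks"
    values: Vec<f64>,  // individual timings in milliseconds
}

// Output entry understood by github-action-benchmark's customSmallerIsBetter tool.
#[derive(Serialize)]
struct Entry {
    name: String,
    unit: String,
    value: f64,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // File names are placeholders for illustration.
    let raw: Vec<RawResult> =
        serde_json::from_str(&std::fs::read_to_string("results.json")?)?;

    let entries: Vec<Entry> = raw
        .into_iter()
        .map(|r| {
            // Collapse the individual timings into a single mean value per benchmark.
            let mean = r.values.iter().sum::<f64>() / r.values.len().max(1) as f64;
            Entry {
                name: format!("{} [{}]", r.benchmark, r.framework),
                unit: "ms".to_string(),
                value: mean,
            }
        })
        .collect();

    std::fs::write("processed.json", serde_json::to_string_pretty(&entries)?)?;
    Ok(())
}
```

In the workflow, the processed JSON is then handed to github-action-benchmark, which appends it to the history kept on the gh-pages branch and renders the charts shown in the previews below.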

Preview:

  • gh-pages page (screenshot)
  • Comment on the commit with the results (screenshot)
  • Alert comment when the 150% threshold is reached (screenshot)

Fixes #1453
Fixes #858
Touches #5; I would just need to add docs about these benchmarks somehow.

Checklist

  • I have run cargo make pr-flow
  • I have reviewed my own code
  • I have added tests

@voidpumpkin added the meta (Repository chores and tasks) label on Jan 9, 2022.
@voidpumpkin (Member, Author) commented:

I assume there is no comment on my commit at the moment because the latest commit on master has not been benchmarked yet, so my commit has nothing to be compared against.

@ranile (Member) commented on Jan 9, 2022:

Where does it store the previous results needed to build the new site?

@voidpumpkin (Member, Author) replied:

> Where does it store the previous results needed to build the new site?

In the gh-pages branch.

@Madoshakalaka (Contributor) commented:

I wonder whether the accumulated graph makes sense in the long term, since we can't control the hardware in GitHub Actions.

A review comment on the diff (Contributor), on the crate's note:

> Copied from https://github.com/krausest/js-framework-benchmark
> It should ideally stay/be updated to always match what is on https://github.com/krausest/js-framework-benchmark
> Except the fixes required to tun using unreleased yew version.

Suggested change:
- Except the fixes required to tun using unreleased yew version.
+ Except the fixes required to run using unreleased yew version.

@ranile (Member) left a review:

I have nothing against this. If any issues pop up, those can be resolved later.

I do share the runner hardware concerns mentioned in #2352.

A (potentially crazy) idea I had was to save the benchmark results in Firestore. Not sure how applicable that is, though.

@ranile merged commit 4be9308 into yewstack:master on Jan 10, 2022.
Labels: meta (Repository chores and tasks)
Projects: none yet

Successfully merging this pull request may close these issues:
  • Make it easier to run benchmarks
  • Track binary sizes for each Yew release

3 participants