
docs: remove extra erroneous backtick #1482

Merged 1 commit on Nov 9, 2024
2 changes in docs/index.md: 1 addition & 1 deletion
@@ -4948,7 +4948,7 @@ When you spin up a process yourself you should generally have it pipe its output

Go's built-in `testing` package provides support for running `Benchmark`s. Earlier versions of Ginkgo subject-node variants that were able to mimic Go's `Benchmark` tests. As of Ginkgo 2.0 these nodes are no longer available. Instead, Ginkgo users can benchmark their code using Gomega's substantially more flexible `gmeasure` package. If you're interested, check out the `gmeasure` [docs](https://onsi.github.io/gomega/#gmeasure-benchmarking-code). Here we'll just provide a quick example to show how `gmeasure` integrates into Ginkgo's reporting infrastructure.

-`gmeasure` is structured around the metaphor of Experiments. With `gmeasure` you create ``Experiments` that can record multiple named `Measurements`. Each named `Measurement` can record multiple values (either `float64` or `duration`). `Experiments` can then produce reports to show the statistical distribution of their `Measurements` and different `Measurements`, potentially from different `Experiments` can be ranked and compared. `Experiments` can also be cached using an `ExperimentCache` - this can be helpful to avoid rerunning expensive experiments _and_ to save off "gold-master" experiments to compare against to identify potential regressions in performance - orchestrating all that is left to the user.
+`gmeasure` is structured around the metaphor of Experiments. With `gmeasure` you create `Experiments` that can record multiple named `Measurements`. Each named `Measurement` can record multiple values (either `float64` or `duration`). `Experiments` can then produce reports to show the statistical distribution of their `Measurements` and different `Measurements`, potentially from different `Experiments` can be ranked and compared. `Experiments` can also be cached using an `ExperimentCache` - this can be helpful to avoid rerunning expensive experiments _and_ to save off "gold-master" experiments to compare against to identify potential regressions in performance - orchestrating all that is left to the user.

Here's an example where we profile how long it takes to repaginate books:
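
The example itself is collapsed in this diff view. For context, here is a minimal sketch of how a `gmeasure` experiment typically plugs into a Ginkgo spec; the `Book` type, `loadFixtureBook` helper, and `Repaginate` method are hypothetical stand-ins, not the actual example from the docs:

```go
package books_test

import (
	"time"

	. "github.com/onsi/ginkgo/v2"
	"github.com/onsi/gomega/gmeasure"
)

// Book, loadFixtureBook, and Repaginate are hypothetical stand-ins for the
// code under test; they are not part of the Ginkgo or Gomega APIs.
type Book struct{ pages int }

func loadFixtureBook() *Book { return &Book{pages: 300} }

func (b *Book) Repaginate() { time.Sleep(10 * time.Millisecond) } // simulate work

var _ = Describe("Repaginating books", func() {
	It("repaginates books efficiently", func() {
		// Create an Experiment and attach it to the spec report so Ginkgo's
		// reporting infrastructure renders its statistics.
		experiment := gmeasure.NewExperiment("Repaginating Books")
		AddReportEntry(experiment.Name, experiment)

		// Sample the operation repeatedly, recording each run's duration under
		// the named Measurement "repagination".
		experiment.Sample(func(idx int) {
			book := loadFixtureBook()
			experiment.MeasureDuration("repagination", func() {
				book.Repaginate()
			})
		}, gmeasure.SamplingConfig{N: 20, Duration: time.Minute})
	})
})
```

When the spec runs, the attached report entry lets Ginkgo render the experiment's duration statistics (min, median, mean, max, standard deviation) alongside the spec output.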
