Benchmarks are run when just executing tests, leading to very slooooow tests #25293
ping @huonw, as we talked a bit about this on IRC.
The benchmark should only be actually run once, not multiple times as it is when actually benchmarking.
And in my case, I do some potentially expensive setup in each benchmark. I'm careful to do that outside the `iter` closure, but the setup itself still executes when the benchmark runs as a test. I don't understand the original decision behind the change - none of my benchmarks have any assertions, so running them with the tests seems not useful. What am I missing?
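For illustration, the shape being described looks roughly like this — a sketch, not the actual code, with a hypothetical `expensive_setup` helper standing in for the costly work:

```rust
#![feature(test)]
extern crate test;

use test::Bencher;

fn expensive_setup() -> Vec<u64> {
    // Hypothetical stand-in for costly setup work.
    (0u64..1_000_000).collect()
}

#[bench]
fn my_bench(b: &mut Bencher) {
    // The setup runs outside the timed closure, so it isn't measured --
    // but it still executes (once) when the benchmark runs as a test.
    let input = expensive_setup();

    // Only this closure is timed and iterated under --bench.
    b.iter(|| test::black_box(input.len()));
}
```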
NB. even the function marked `#[bench]` is only run once in that case.
They can still trigger assertions if they're incorrect, e.g. indexing out of bounds. The motivation was #15842.
Maybe move benchmarks into Cargo's `benches` directory? I assume those won't run during `cargo test`.
Wouldn't that impose the public/private visibility rules? I'd be sad if I couldn't get fine-grained benchmarks...
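For context, files under `benches/` are compiled as separate crates, which is exactly where the visibility concern comes from: a benchmark there can only reach the library's public API. A sketch (the crate and function names are hypothetical):

```rust
// benches/throughput.rs -- built by Cargo as its own crate,
// so only `pub` items of the library are visible here.
#![feature(test)]
extern crate test;
extern crate mylib; // hypothetical library crate

#[bench]
fn bench_public_api(b: &mut test::Bencher) {
    // Private helpers inside `mylib` cannot be benchmarked from here.
    b.iter(|| mylib::do_work()); // `do_work` stands in for a public function
}
```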
For what it's worth, I have a workaround as I group all my benchmarks into a single `mod`. I also don't know that this is a high-priority issue since benchmarking isn't a stable feature. However, I have personal interest in helping fix it. :-) One idea I have would be to redirect the concept of the input size into the `Bencher` itself, as a suggested length:

```rust
#[bench]
fn slow(b: &mut test::Bencher) {
    let s: String = iter::repeat("a").take(b.suggested_len).collect();
    b.iter(|| 1 + 1);
    b.bytes = s.len() as u64;
}
```

This could be set to a small value in test runs, and would provide the side feature of allowing the benchmarking to automatically scale out different test sizes (1, 10, 100, 1000, etc.). Another would be to have a flag denoting if benching is actually happening:

```rust
#[bench]
fn slow(b: &mut test::Bencher) {
    let size = if b.is_benching { 5 * 1024 * 1024 } else { 1 };
    let s: String = iter::repeat("a").take(size).collect();
    b.iter(|| 1 + 1);
    b.bytes = s.len() as u64;
}
```

Both of these feel hacky though...
FWIW, I just encountered an instance where a benchmark was triggering integer overflow in my library that was only picked up by `cargo test`. That said, I think I'd be happy to switch this to opt-in (although I would be opting in essentially always). Alternatively, we could provide an opt-out flag.
This was not something that I was consciously aware of, although it makes sense with what I have read about how the benchmarking tool works. I guess it's a good thing that the benching functionality is unstable, as there do seem to be some awkward corner cases 😸
It's hard to argue with results, I suppose
I know you don't mean it this way, but there's a potential slippery slope here. I like using focused tools for specific jobs. There are tools like quickcheck or afl.rs that help stretch the input space of our code to find issues we didn't think about unit testing specifically. If it makes sense to use benchmarking tools for testing, does it also make sense to use testing tools for benchmarking?
I don't know if it's worth changing anything yet, especially based on a single person complaining. I probably wouldn't even be talking about this if I could generate 5{KB,MB,GB} of data in a "quick" time.
Benchmarks still run even when invoked with `cargo test`.
@shepmaster Do you think it'd be worth adding a dedicated attribute to keep benchmarks out of plain test runs?
It would certainly provide a more reusable knob to turn than my current "nest all benches in a module" approach, but I'd worry that there's still the surprise inherent with "Oh, my benchmarks run during tests". If the attribute were something like an explicit opt-out, at least the behavior would be visible at the definition site.
We could certainly deprecate the existing behavior in favor of something explicit like that.
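One way to approximate that opt-out with mechanisms that already exist — a sketch, assuming a Cargo feature named `bench` — is to mark benchmarks `ignore` unless the feature is enabled:

```rust
#![feature(test)]
extern crate test;

use test::Bencher;

// Plain `cargo test` sees this as an ignored test and skips it;
// `cargo bench --features bench` runs it normally.
#[bench]
#[cfg_attr(not(feature = "bench"), ignore)]
fn slow(b: &mut Bencher) {
    b.iter(|| 1 + 1);
}
```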
To temporarily resolve this issue, I have done the following: in my `Cargo.toml`, I define an opt-in feature. Then, I gate my benchmarks module behind that feature. When I just run `cargo test`, the benchmarks are skipped entirely.
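A minimal sketch of that kind of setup, assuming a feature named `bench`. In `Cargo.toml`:

```toml
[features]
# Off by default, so plain `cargo test` never compiles the benchmarks.
bench = []
```

and in the benchmarks module:

```rust
// Compile the benchmarks only when the `bench` feature is enabled
// (assumes `#![feature(test)]` and `extern crate test` at the crate root).
#[cfg(feature = "bench")]
mod benches {
    use test::Bencher;

    #[bench]
    fn slow(b: &mut Bencher) {
        b.iter(|| 1 + 1);
    }
}
```

Benchmarks are then run with `cargo bench --features bench`.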
Is there something like `#[cfg(bench)]`, analogous to `#[cfg(test)]`?
In my benchmarks, I generate some non-trivial sized blobs of data. Recently, my regular test runs have been very slow, and I believe it's because the benchmarks are running even when not passing `--bench`. Here's `bench.rs`:
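(A reconstruction of the kind of benchmark described — the blob size and names here are assumptions, not the original code:)

```rust
#![feature(test)]
extern crate test;

use std::iter;
use test::Bencher;

#[bench]
fn blob(b: &mut Bencher) {
    // Building the blob is the slow part; it runs even under plain
    // `cargo test`, because #[bench] functions are executed once as tests.
    let blob: String = iter::repeat("a").take(5 * 1024 * 1024).collect();
    b.iter(|| test::black_box(blob.len()));
    b.bytes = blob.len() as u64;
}
```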
And running it, the test phase takes far longer than it should, because those blobs are generated even though nothing is being benchmarked.
Even running with `--test` still runs these very slow tests.