This module contains benchmarks written using JMH from OpenJDK. Writing correct micro-benchmarks in Java (or another JVM language) is difficult and there are many non-obvious pitfalls (many due to compiler optimizations). JMH is a framework for running and analyzing benchmarks (micro or macro) written in Java (or another JVM language).
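One such pitfall, dead-code elimination, can be sketched in plain Java. This is only an illustrative sketch (the class and method names are hypothetical, and it does not use JMH itself): if a benchmark loop discards its result, the JIT compiler may remove the computation entirely, which is why JMH benchmark methods return a value or sink it into a Blackhole.

```java
// Illustrative sketch of the dead-code-elimination pitfall that JMH's
// return-value / Blackhole conventions guard against. Not JMH code.
public class DeadCodePitfall {

    static double work() {
        return Math.log(42.0);
    }

    // Wrong: the result is discarded, so the JIT may eliminate work()
    // entirely and a timing loop like this measures an empty body.
    static void naiveLoop(int iterations) {
        for (int i = 0; i < iterations; i++) {
            work();
        }
    }

    // Better: consume the result so the computation cannot be removed.
    // JMH does this for you when a @Benchmark method returns its result
    // or passes it to a Blackhole.
    static double consumingLoop(int iterations) {
        double sink = 0;
        for (int i = 0; i < iterations; i++) {
            sink += work();
        }
        return sink;
    }

    public static void main(String[] args) {
        naiveLoop(1_000);
        System.out.println(consumingLoop(1_000));
    }
}
```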
If you want to set specific JMH flags or only run certain benchmarks, passing arguments via Gradle tasks is cumbersome. The provided jmh.sh script simplifies this.
The default behavior is to run all benchmarks:
./jmh-benchmarks/jmh.sh
Pass a pattern or name after the command to select the benchmarks:
./jmh-benchmarks/jmh.sh LRUCacheBenchmark
Check which benchmarks match the provided pattern:
./jmh-benchmarks/jmh.sh -l LRUCacheBenchmark
Run a specific benchmark and override the number of forks, iterations, and warm-up iterations (each set to 2 here):
./jmh-benchmarks/jmh.sh -f 2 -i 2 -wi 2 LRUCacheBenchmark
Run a specific benchmark with the async and GC profilers on Linux, with flame graph output:
./jmh-benchmarks/jmh.sh -prof gc -prof async:libPath=/path/to/libasyncProfiler.so\;output=flamegraph LRUCacheBenchmark
The following sections cover async profiler and GC profilers in more detail.
It's good practice to check profiler output for microbenchmarks in order to verify that they represent the expected application behavior and measure what you expect to measure. Some example pitfalls include the use of expensive mocks or accidental inclusion of test setup code in the benchmarked code. JMH includes async-profiler integration that makes this easy:
./jmh-benchmarks/jmh.sh -prof async:libPath=/path/to/libasyncProfiler.so
With flame graph output (the semicolon is escaped to ensure it is not treated as a command separator):
./jmh-benchmarks/jmh.sh -prof async:libPath=/path/to/libasyncProfiler.so\;output=flamegraph
Simultaneous cpu, allocation and lock profiling with async profiler 2.0 and jfr output (the semicolon is escaped to ensure it is not treated as a command separator):
./jmh-benchmarks/jmh.sh -prof async:libPath=/path/to/libasyncProfiler.so\;output=jfr\;alloc\;lock LRUCacheBenchmark
A number of arguments can be passed to configure async profiler; run the following for a description:
./jmh-benchmarks/jmh.sh -prof async:help
It's good practice to run your benchmark with -prof gc to measure its allocation rate:
./jmh-benchmarks/jmh.sh -prof gc
Of particular importance are the norm alloc rates, which measure allocations per operation rather than allocations per second; the per-second rate can increase simply because you have made your code faster.
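The arithmetic behind this can be sketched with hypothetical numbers (48 B/op and the throughput figures below are made up for illustration): if per-operation allocation stays constant, making the code twice as fast doubles the bytes-per-second rate even though nothing got worse.

```java
// Worked example (hypothetical numbers) of why the per-op "norm" rate is
// the metric to watch: bytes/op stays constant while bytes/sec scales
// with throughput.
public class AllocRateNorm {

    // bytes/second = bytes/operation * operations/second
    static long bytesPerSecond(long bytesPerOp, long opsPerSecond) {
        return bytesPerOp * opsPerSecond;
    }

    public static void main(String[] args) {
        long bytesPerOp = 48;        // hypothetical per-op allocation (norm)
        long slowOps = 1_000_000;    // ops/sec before an optimization
        long fastOps = 2_000_000;    // ops/sec after making the code faster
        // Per-op allocation is unchanged, but the per-second rate doubles.
        System.out.println(bytesPerSecond(bytesPerOp, slowOps)); // 48000000
        System.out.println(bytesPerSecond(bytesPerOp, fastOps)); // 96000000
    }
}
```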
The JMH benchmarks can be run outside of Gradle, as you would with any executable jar file:
java -jar <kafka-repo-dir>/jmh-benchmarks/build/libs/kafka-jmh-benchmarks-*.jar -f2 LRUCacheBenchmark