This directory contains the microbenchmark suite of Elasticsearch. It relies on JMH.
We do not want to microbenchmark everything but the kitchen sink and should typically rely on our macrobenchmarks with Rally. Microbenchmarks are intended to spot performance regressions in performance-critical components. The microbenchmark suite is also handy for ad-hoc microbenchmarks but please remove them again before merging your PR.
Just run `gradlew -p benchmarks run` from the project root directory. It will build all microbenchmarks, execute them and print the result.
Running via an IDE is not supported as the results are meaningless because we have no control over the JVM running the benchmarks.
If you want to run a specific benchmark class like, say, `MemoryStatsBenchmark`, you can use `--args`:

```
gradlew -p benchmarks run --args ' MemoryStatsBenchmark'
```

Everything inside the `'`s gets sent on the command line to JMH. The leading space inside the `'`s is important. Without it, parameters are sometimes sent to Gradle instead.
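Because everything after the leading space is passed straight through, standard JMH command-line options can be appended as well. A sketch (the flags below are regular JMH options: forks, warmup iterations, measurement iterations, and a profiler; adjust them to your needs):

```shell
# Run MemoryStatsBenchmark with 3 forks, 5 warmup and 10 measurement
# iterations, and attach the GC profiler. Everything inside the quotes
# goes to JMH, not to Gradle.
gradlew -p benchmarks run --args ' MemoryStatsBenchmark -f 3 -wi 5 -i 10 -prof gc'
```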
Before adding a new microbenchmark, make yourself familiar with the JMH API. You can check our existing microbenchmarks and also the JMH samples.
In contrast to tests, the actual name of the benchmark class is not relevant to JMH. However, stick to the naming convention and end the class name of a benchmark with `Benchmark`. To have JMH execute a benchmark, annotate the respective methods with `@Benchmark`.
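As a sketch of what such a class can look like (a hypothetical `ArraySumBenchmark`, not part of the suite; it assumes JMH is on the classpath, which the benchmarks project provides):

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

// Hypothetical example: the class name ends with "Benchmark", the method to
// measure carries @Benchmark, and @Param varies the problem input size.
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class ArraySumBenchmark {

    @Param({ "100", "10000" })
    int size;

    long[] values;

    @Setup
    public void setUp() {
        values = new long[size];
        for (int i = 0; i < size; i++) {
            values[i] = i;
        }
    }

    @Benchmark
    public long sum() {
        long total = 0;
        for (long v : values) {
            total += v;
        }
        // Returning the result hands it to JMH's blackhole, which keeps the
        // JVM from dead-code-eliminating the loop.
        return total;
    }
}
```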
To get realistic results, you should exercise care when running benchmarks. Here are a few tips:
* Check the `Error` column in the benchmark results to see the run-to-run variance.
* If possible, pin the benchmark process to specific CPUs with `taskset`.
* If possible, use `cpufreq-set` and the `performance` CPU governor.
* Vary the problem input size with `@Param`.
* Add `-prof gc` to check whether the garbage collector runs during a microbenchmark and skews your results. If so, try to force a GC between runs (`-gc true`) but watch out for the caveats.
* Add `-prof perf` or `-prof perfasm` (both only available on Linux) to see hotspots.
* Do not blindly believe the numbers your microbenchmark produces; verify them, e.g. by measuring with `-prof perfasm`.
* Do not look only at the `Score` column and ignore `Error`. Instead, take countermeasures to keep `Error` low / variance explainable.
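On Linux, the environment-related tips above can be combined roughly like this (a sketch: the CPU numbers are machine-specific, and `cpufreq-set` comes from the `cpufrequtils` package, which may not be installed by default):

```shell
# Switch CPUs 0-3 to the "performance" governor to avoid frequency scaling.
for cpu in 0 1 2 3; do
    sudo cpufreq-set -c "$cpu" -g performance
done

# Pin the benchmark run to those CPUs and attach the GC profiler.
taskset -c 0-3 gradlew -p benchmarks run --args ' MemoryStatsBenchmark -prof gc'
```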