This directory contains the microbenchmark suite of Elasticsearch. It relies on [JMH](http://openjdk.java.net/projects/code-tools/jmh/).

## Purpose

We do not want to microbenchmark everything but the kitchen sink and should typically rely on our
[macrobenchmarks](https://elasticsearch-benchmarks.elastic.co/app/kibana#/dashboard/Nightly-Benchmark-Overview) with
[Rally](http://github.com/elastic/rally). Microbenchmarks are intended to spot performance regressions in performance-critical components.
The microbenchmark suite is also handy for ad-hoc microbenchmarks, but please remove them again before merging your PR.

## Getting Started

Just run `gradlew -p benchmarks run` from the project root directory. It will build all microbenchmarks, execute them and print the result.

## Running Microbenchmarks

Running via an IDE is not supported as the results are meaningless because we have no control over the JVM running the benchmarks.

If you want to run a specific benchmark class like, say, `MemoryStatsBenchmark`, you can use `--args`:

```
gradlew -p benchmarks run --args ' MemoryStatsBenchmark'
```

Everything inside the `'`s gets sent on the command line to JMH. The leading space inside the `'`s is important. Without it, parameters are sometimes sent to Gradle.

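Other JMH options can be appended inside the quotes in the same way. As a purely illustrative example of how options are forwarded, the following would run that benchmark with the JMH GC profiler (`-prof gc`, mentioned in the tips below) enabled:

```
gradlew -p benchmarks run --args ' MemoryStatsBenchmark -prof gc'
```
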
## Adding Microbenchmarks

Before adding a new microbenchmark, make yourself familiar with the JMH API. You can check our existing microbenchmarks and also the
[JMH samples](http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/).

In contrast to tests, the actual name of the benchmark class is not relevant to JMH. However, stick to the naming convention and
end the class name of a benchmark with `Benchmark`. To have JMH execute a benchmark, annotate the respective methods with `@Benchmark`.
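
As a minimal sketch (the package, class and workload below are made up for illustration; this is not an existing benchmark), a benchmark class could look like this:

```
package org.elasticsearch.benchmark.sample; // hypothetical package, for illustration only

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

@Fork(1)
@Warmup(iterations = 5)
@Measurement(iterations = 5)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class StringConcatBenchmark { // class name ends with "Benchmark" per convention
    private String left = "Hello";
    private String right = "World";

    @Benchmark // JMH executes every method annotated with @Benchmark
    public String concat() {
        return left + right;
    }
}
```

Returning the computed value from the `@Benchmark` method keeps the JIT from treating the work as dead code; alternatively, results can be sunk into a JMH `Blackhole`.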

## Tips and Best Practices

To get realistic results, you should exercise care when running benchmarks. Here are a few tips:

### Do

* Ensure that the system executing your microbenchmarks has as little load as possible. Shut down every process that can cause unnecessary
  runtime jitter. Watch the `Error` column in the benchmark results to see the run-to-run variance.
* Run enough warmup iterations to get the benchmark into a stable state. If you are unsure, don't change the defaults.
* Avoid CPU migrations by pinning your benchmarks to specific CPU cores. On Linux you can use `taskset`.
* Fix the CPU frequency to keep Turbo Boost from kicking in and skewing your results. On Linux you can use `cpufreq-set` and the
  `performance` CPU governor.
* Vary the problem input size with `@Param` (see the sketch after this list).
* Use the integrated profilers in JMH to dig deeper if benchmark results do not match your hypotheses:
    * Run the generated uberjar directly and use `-prof gc` to check whether the garbage collector runs during a microbenchmark and skews
      your results. If so, try to force a GC between runs (`-gc true`) but watch out for the caveats.
    * Use `-prof perf` or `-prof perfasm` (both only available on Linux) to see hotspots.
* Have your benchmarks peer-reviewed.
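
The `@Param` sketch referenced above might look like the following; the class name, parameter values and workload are made up purely for illustration:

```
import java.util.Arrays;
import java.util.Random;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@Fork(1)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@State(Scope.Benchmark)
public class ArraySortBenchmark { // hypothetical example, not an existing Elasticsearch benchmark
    // JMH runs a separate set of iterations per value, so results show how performance scales with input size.
    @Param({ "100", "10000", "1000000" })
    int size;

    private int[] data;

    @Setup
    public void setUp() {
        data = new Random(42).ints(size).toArray(); // fixed seed for reproducible input
    }

    @Benchmark
    public int[] sort() {
        int[] copy = data.clone(); // clone so every invocation sorts unsorted input
        Arrays.sort(copy);
        return copy; // return the result so the JIT cannot eliminate the work
    }
}
```

Each `@Param` value shows up as a separate line in the JMH results, which makes it easy to see how the benchmark scales.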

### Don't

* Blindly believe the numbers that your microbenchmark produces but verify them by measuring, e.g. with `-prof perfasm`.
* Run more threads than your number of CPU cores (in case you run multi-threaded microbenchmarks).
* Look only at the `Score` column and ignore `Error`. Instead take countermeasures to keep `Error` low / variance explainable.