I would suggest simply building a set of `Runner` options via the `OptionsBuilder` and calling `run()` on it from within a JUnit test.
While some authors recommend against this on the grounds of not running the benchmark in a "clean" environment, I think the effects are very marginal and probably irrelevant when comparing against a reference run in the same environment.
See here for the most trivial example of setting up a `Runner` manually.
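A minimal sketch of such a test might look like this (the benchmark class name, iteration counts, and JUnit 4 style are assumptions on my part):

```java
import org.junit.Test;
import org.openjdk.jmh.results.RunResult;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

import java.util.Collection;

public class MyBenchmarkTest {

    @Test
    public void runBenchmarks() throws Exception {
        // "com.example.MyBenchmark" is a hypothetical class
        // containing @Benchmark methods; include() takes a regex
        Options opts = new OptionsBuilder()
                .include("com.example.MyBenchmark")
                .warmupIterations(5)
                .measurementIterations(5)
                .forks(1)
                .build();

        Collection<RunResult> results = new Runner(opts).run();
        // assertions against the results go here
    }
}
```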
`Runner.run()` (or, in the case of a single benchmark, `Runner.runSingle()`) will then return a `Collection<RunResult>` or just a `RunResult` that assertions can be made against.
To do so, you can use the `Statistics` (see docs here) that you can extract from the `RunResult` via `RunResult.getPrimaryResult().getStatistics()` and assert against the numeric values it exposes, such as the mean or percentiles.
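For instance, a sketch asserting on the mean score (the benchmark name and threshold are made up; a real threshold would come from a baseline you trust):

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.openjdk.jmh.results.RunResult;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;
import org.openjdk.jmh.util.Statistics;

public class MeanScoreTest {

    @Test
    public void meanStaysBelowThreshold() throws RunnerException {
        Options opts = new OptionsBuilder()
                .include("com.example.MyBenchmark.measureSomething") // hypothetical benchmark
                .forks(1)
                .build();

        // runSingle() requires the options to match exactly one benchmark
        RunResult result = new Runner(opts).runSingle();
        Statistics stats = result.getPrimaryResult().getStatistics();

        // 1000.0 is a made-up threshold in the benchmark's score unit
        assertTrue("mean score " + stats.getMean() + " exceeds threshold",
                stats.getMean() < 1000.0);
    }
}
```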
Alternatively, you can use the `isDifferent()` method, which lets you compare two benchmark runs within a confidence interval (this might be useful for automatically catching outliers in both directions).
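A sketch of such a comparison, assuming you have a `RunResult` from a stored baseline run and one from the run under test (the helper name and the 0.99 confidence level are just examples):

```java
import static org.junit.Assert.assertFalse;

import org.openjdk.jmh.results.RunResult;
import org.openjdk.jmh.util.Statistics;

public class ReferenceComparison {

    // referenceResult: a baseline run; currentResult: the run under test
    static void assertNotDifferent(RunResult referenceResult, RunResult currentResult) {
        Statistics reference = referenceResult.getPrimaryResult().getStatistics();
        Statistics current = currentResult.getPrimaryResult().getStatistics();

        // isDifferent() applies a statistical test at the given confidence
        // level, so it flags deviations in both directions
        assertFalse("run differs from the reference at 99% confidence",
                current.isDifferent(reference, 0.99));
    }
}
```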