Mercurial > repos > public > sbplib_julia
changeset 1184:c06d8eb8b0f0 tooling/benchmarks
Add docstrings, add a comparing version of run_benchmark, and allow main to call any version of run_benchmark
| author | Jonatan Werpers <jonatan@werpers.com> |
|---|---|
| date | Fri, 27 Jan 2023 11:58:02 +0100 |
| parents | aefe4b551901 |
| children | 6fc0adcd5b97 |
| files | benchmark/run_and_view.jl |
| diffstat | 1 files changed, 40 insertions(+), 2 deletions(-) |
```diff
--- a/benchmark/run_and_view.jl	Fri Jan 27 11:56:38 2023 +0100
+++ b/benchmark/run_and_view.jl	Fri Jan 27 11:58:02 2023 +0100
@@ -9,14 +9,25 @@
 const results_dir = mkpath(joinpath(sbplib_root, "benchmark/results"))
 const template_path = joinpath(sbplib_root, "benchmark/result.tmpl")
 
-function main()
-    r = run_benchmark()
+"""
+    main(args...; kwargs...)
+
+Calls `run_benchmark(args...; kwargs...)` and writes the results as an HTML file in `benchmark/results`.
+See [`run_benchmark`](@ref) for possible arguments.
+"""
+function main(args...; kwargs...)
+    r = run_benchmark(args...; kwargs...)
     file_path = write_result_html(r)
     open_in_default_browser(file_path)
 end
 
 # TBD: What parts are PkgBenchmark contributing? Can it be stripped out? Can we replace the html output part?
+"""
+    run_benchmark()
+
+Runs the benchmark suite for the current working directory and returns a `PkgBenchmark.BenchmarkResults`.
+"""
 function run_benchmark()
     r = PkgBenchmark.benchmarkpkg(Sbplib)
@@ -25,6 +36,13 @@
     return add_rev_info(r, rev)
 end
 
+"""
+    run_benchmark(rev)
+
+Updates the repository to the given revision and runs the benchmark suite. When done, updates the repository to the original state.
+
+Returns a `PkgBenchmark.BenchmarkResults`.
+"""
 function run_benchmark(rev)
     rev_before = hg_rev()
     hg_update(rev)
@@ -34,6 +52,26 @@
     return run_benchmark()
 end
 
+"""
+    run_benchmark(target, baseline, f=minimum; judgekwargs=Dict())
+
+Runs the benchmark suite at revisions `target` and `baseline` and compares them using `PkgBenchmark.judge`.
+`f` is the function used to compare. `judgekwargs` are keyword arguments passed to `judge`.
+
+Returns a `PkgBenchmark.BenchmarkJudgement`.
+"""
+function run_benchmark(target, baseline, f=minimum; judgekwargs=Dict())
+    t = run_benchmark(target)
+    b = run_benchmark(baseline)
+
+    judged = PkgBenchmark.judge(t,b,f; judgekwargs...)
+
+    return BenchmarkJudgement(t,b,judged)
+end
+
+# TBD: How to compare against current working directory? Possible to create a temporary commit?
+
+
 function add_rev_info(benchmarkresult, rev)
     return PkgBenchmark.BenchmarkResults(
         benchmarkresult.name,
```
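A usage sketch of the three `run_benchmark` methods introduced by this changeset. This assumes the `Sbplib` environment is active, that `benchmark/run_and_view.jl` defines the helpers shown in the diff (`hg_rev`, `hg_update`, `write_result_html`, etc.), and that the revision strings are placeholders, not real changesets; it is illustrative, not part of the committed code:

```julia
using Statistics  # for `median` as an alternative comparison function

# Load the script from the repository root (path is an assumption).
include("benchmark/run_and_view.jl")

# Zero-argument form: benchmark the current working directory
# and open the HTML report in the default browser.
main()

# One-argument form: benchmark a specific revision; the repository
# is updated to that revision and restored afterwards.
r = run_benchmark("aefe4b551901")

# Comparing form: benchmark `target` and `baseline` and judge them,
# here with `median` instead of the default `minimum`, forwarding a
# keyword argument to `PkgBenchmark.judge`.
j = run_benchmark("tip", "aefe4b551901", median;
                  judgekwargs=Dict(:time_tolerance => 0.1))

# `main` forwards its arguments, so the same comparison can also be
# rendered straight to HTML:
main("tip", "aefe4b551901")
```

Since `main(args...; kwargs...)` simply forwards to `run_benchmark`, any of the three call shapes above can be routed through `main` to get the HTML report instead of the raw result object.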