sbplib_julia: comparison of benchmark/run_and_view.jl @ 1184:c06d8eb8b0f0 (branch: tooling/benchmarks)
Add docstrings, a comparing version of run_benchmark, and allow main to call any version of run_benchmark
author   | Jonatan Werpers <jonatan@werpers.com>
date     | Fri, 27 Jan 2023 11:58:02 +0100
parents  | aefe4b551901
children | 6fc0adcd5b97
--- 1183:aefe4b551901
+++ 1184:c06d8eb8b0f0
 
 const sbplib_root = splitpath(pathof(Sbplib))[1:end-2] |> joinpath
 const results_dir = mkpath(joinpath(sbplib_root, "benchmark/results"))
 const template_path = joinpath(sbplib_root, "benchmark/result.tmpl")
 
-function main()
-    r = run_benchmark()
+"""
+    main(args...; kwargs...)
+
+Calls `run_benchmark(args...; kwargs...)` and writes the results as an HTML file in `benchmark/results`.
+See [`run_benchmark`](@ref) for possible arguments.
+"""
+function main(args...; kwargs...)
+    r = run_benchmark(args...; kwargs...)
     file_path = write_result_html(r)
     open_in_default_browser(file_path)
 end
 
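With `main` now forwarding its arguments to `run_benchmark`, a single entry point covers every benchmarking mode. A minimal usage sketch, assuming the script is loaded into a session with its dependencies available; the revision specifiers are ordinary Mercurial ones, not values taken from this changeset:

```julia
julia> include("benchmark/run_and_view.jl")

julia> main()                    # benchmark the current working directory

julia> main("tip")               # benchmark a single revision

julia> main("tip", "default")    # compare two revisions and open the judgement
```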
 # TBD: What parts are PkgBenchmark contributing? Can it be stripped out? Can we replace the html output part?
 
+"""
+    run_benchmark()
+
+Runs the benchmark suite for the current working directory and returns a `PkgBenchmark.BenchmarkResults`.
+"""
 function run_benchmark()
     r = PkgBenchmark.benchmarkpkg(Sbplib)
 
     rev = hg_id()
 
     return add_rev_info(r, rev)
 end
 
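Since the zero-argument `run_benchmark` only wraps `PkgBenchmark.benchmarkpkg` and tags the result with the current Mercurial id, the returned `BenchmarkResults` can also be exported directly, without going through the repository's HTML template. A sketch using PkgBenchmark's Markdown export; the output filename is arbitrary:

```julia
julia> r = run_benchmark();

julia> PkgBenchmark.export_markdown(joinpath(results_dir, "result.md"), r)
```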
+"""
+    run_benchmark(rev)
+
+Updates the repository to the given revision and runs the benchmark suite. When done, updates the repository to the original state.
+
+Returns a `PkgBenchmark.BenchmarkResults`.
+"""
 function run_benchmark(rev)
     rev_before = hg_rev()
     hg_update(rev)
     r = run_benchmark()
     hg_update(rev_before)
 
     return run_benchmark()
 end
+
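As written, this method discards `r` and calls `run_benchmark()` again after `hg_update(rev_before)`, so the value it returns is measured at the original revision rather than at `rev`, and an error during the run would leave the repository checked out at `rev`. A sketch of a variant that returns the first result and always restores the working copy; the name is hypothetical, chosen only to avoid clashing with the method above:

```julia
# Sketch only, not the repository's code: benchmark `rev`, restore the original
# revision even if benchmarking throws, and return the result measured at `rev`.
function run_benchmark_at(rev)
    rev_before = hg_rev()
    hg_update(rev)
    try
        return run_benchmark()     # measured while `rev` is checked out
    finally
        hg_update(rev_before)      # runs before the `return` completes
    end
end
```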
+"""
+    run_benchmark(target, baseline, f=minimum; judgekwargs=Dict())
+
+Runs the benchmark at revisions `target` and `baseline` and compares them using `PkgBenchmark.judge`.
+`f` is the function used to compare. `judgekwargs` are keyword arguments passed to `judge`.
+
+Returns a `PkgBenchmark.BenchmarkJudgement`.
+"""
+function run_benchmark(target, baseline, f=minimum; judgekwargs=Dict())
+    t = run_benchmark(target)
+    b = run_benchmark(baseline)
+
+    judged = PkgBenchmark.judge(t,b,f; judgekwargs...)
+
+    return BenchmarkJudgement(t,b,judged)
+end
+
+# TBD: How to compare against current working directory? Possible to create a temporary commit?
+
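A usage sketch for the comparing method. `tip` and `default` are ordinary Mercurial revision specifiers, `median` (from `Statistics`) is shown as an alternative to the default `minimum` estimator, and `write_result_html` is the same helper `main` uses:

```julia
julia> using Statistics: median

julia> j = run_benchmark("tip", "default", median);

julia> open_in_default_browser(write_result_html(j))
```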
 
 function add_rev_info(benchmarkresult, rev)
     return PkgBenchmark.BenchmarkResults(
         benchmarkresult.name,
         rev,