[benchmark] Legacy factor

This adds an optional `legacyFactor` to `BenchmarkInfo`, which allows for linear modification of constants that unnecessarily inflate the base workload of benchmarks, while maintaining the continuity of long-term benchmark tracking.

For example, if a benchmark uses `for _ in N*10_000` in its run function, we could lower this to `for _ in N*1_000` and add `legacyFactor: 10` to its `BenchmarkInfo`.
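A minimal sketch of adopting this, assuming the `BenchmarkInfo` initializer exposes the new optional `legacyFactor:` parameter; the benchmark name, tags, and loop body are purely illustrative:

```swift
// Hypothetical benchmark; assumes the Swift benchmark suite's TestsUtils module.
import TestsUtils

@inline(never)
public func run_HypotheticalLoop(_ N: Int) {
  var checksum = 0
  // The bound used to be N * 10_000; it is lowered tenfold and the
  // difference is recorded via legacyFactor below.
  for i in 1...(N * 1_000) {
    checksum = checksum &+ i
  }
  CheckResults(checksum != 0)
}

public let HypotheticalLoop = BenchmarkInfo(
  name: "HypotheticalLoop",
  runFunction: run_HypotheticalLoop,
  tags: [.validation],
  legacyFactor: 10)  // the old workload was 10x larger
```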

Note that this doesn’t affect the real measurements gathered from the `--verbose` output. `BenchmarkDoctor` has been slightly adjusted to work with these real samples, so `Benchmark_Driver check` will not flag these benchmarks for the slow run time reported in the summary, as long as their real runtimes fall within the recommended range.
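To make the distinction concrete, here is an illustrative sketch (names and structure are assumptions, not the actual driver or `BenchmarkDoctor` code) of the split described above: raw sample times stay unscaled for `--verbose` consumers, while the summarized runtime is multiplied back up by the legacy factor to preserve continuity:

```swift
// Illustrative model only; not the real DriverUtils or BenchmarkDoctor API.
struct MeasuredBenchmark {
  let rawSampleTimesMicros: [Double]  // what --verbose exposes: real, unscaled samples
  let legacyFactor: Int?

  // Summary value scaled back up so long-term tracking stays continuous.
  var reportedMinMicros: Double {
    let factor = Double(legacyFactor ?? 1)
    return (rawSampleTimesMicros.min() ?? 0) * factor
  }
}

let b = MeasuredBenchmark(rawSampleTimesMicros: [300, 305, 298], legacyFactor: 10)
// A doctor-style check should judge the real samples (min 298 μs) rather than
// the scaled summary (2980 μs), avoiding false "slow runtime" warnings.
print(b.rawSampleTimesMicros.min()!, b.reportedMinMicros)
```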
Author: Pavol Vaskovic
Date:   2018-11-01 06:24:27 +01:00
Parent: 84c93c9e13
Commit: a7f832fb57

4 changed files with 24 additions and 14 deletions


@@ -423,7 +423,7 @@ class TestLoggingReportFormatter(unittest.TestCase):
 def _PTR(min=700, mem_pages=1000, setup=None):
     """Create PerformanceTestResult Stub."""
-    return Stub(min=min, mem_pages=mem_pages, setup=setup)
+    return Stub(samples=Stub(min=min), mem_pages=mem_pages, setup=setup)
 def _run(test, num_samples=None, num_iters=None, verbose=None,
@@ -483,7 +483,8 @@ class TestBenchmarkDoctor(unittest.TestCase):
         """
         driver = BenchmarkDriverMock(tests=['B1'], responses=([
             # calibration run, returns a stand-in for PerformanceTestResult
-            (_run('B1', num_samples=3, num_iters=1), _PTR(min=300))] +
+            (_run('B1', num_samples=3, num_iters=1,
+                  verbose=True), _PTR(min=300))] +
             # 5x i1 series, with 300 μs runtime its possible to take 4098
             # samples/s, but it should be capped at 2k
             ([(_run('B1', num_samples=2048, num_iters=1,