Commit Graph

43 Commits

Author SHA1 Message Date
Erik Eckstein
a46cda8c51 benchmarks: fix run_smoke_bench to support new benchmark executable naming scheme
Find the right benchmark executable with a glob pattern.
Also, add an option "-arch" to select between executables for different architectures.
2020-07-07 11:01:49 +02:00
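A minimal sketch of what such a lookup could look like; the `find_benchmark_executable` helper, the `Benchmark_O-*` naming pattern, and the argument wiring are illustrative assumptions, not the actual run_smoke_bench code:

```python
import argparse
import glob
import os


def find_benchmark_executable(bin_dir, arch=None):
    """Locate the benchmark executable via a glob pattern (hypothetical helper)."""
    pattern = "Benchmark_O-*{}*".format(arch) if arch else "Benchmark_O-*"
    candidates = sorted(glob.glob(os.path.join(bin_dir, pattern)))
    if not candidates:
        raise SystemExit("no benchmark executable matches {!r}".format(pattern))
    return candidates[0]


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("bin_dir", help="directory containing the built benchmarks")
    parser.add_argument("-arch", help="select the executable for this architecture")
    args = parser.parse_args()
    print(find_benchmark_executable(args.bin_dir, args.arch))
```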
Sergej Jaskiewicz
cce9e81f0b Support Python 3 in the benchmark suite 2020-02-28 01:45:35 +03:00
Ross Bayer
b1961745e0 [Python: black] Reformatted the benchmark Python sources using utils/python_format.py. 2020-02-08 15:32:44 -08:00
Alex Hoppen
932525d762 [gardening] Fix several python-lint warnings 2019-10-29 10:40:20 -07:00
Alex Hoppen
776e2c0030 Revert "Migrate building SwiftSyntax to swift_build_support" 2019-10-29 09:55:32 -07:00
Alex Hoppen
46501b881f [gardening] Fix several python-lint warnings 2019-10-25 15:58:07 -07:00
Pavol Vaskovic
84e7d4dfb8 [benchmark] Adjust Driver’s console output format
…to handle longer benchmark names, assuming a maximum length of 40 characters.
2019-02-19 23:28:51 +01:00
Gwynne Raskind
faf8a5edb6 Fix indentation for python_lint 2019-01-17 02:00:40 -06:00
Gwynne Raskind
09b4159cb2 Global replace of "assertEquals" with "assertEqual" in compliance with deprecation of assertEquals name in Python 2.7 2019-01-16 04:06:38 -06:00
Pavol Vaskovic
2c271493d5 [benchmark] Limit of Accuracy in Setup Overhead
Clarified the limit of accuracy in setup overhead detection.
2019-01-09 18:01:06 +01:00
Pavol Vaskovic
2096151ee9 [benchmark] BenchmarkDoctor: Lower runtime limit
Warn about runtimes under 20 μs and flag 0 μs runtimes as errors.
2019-01-08 19:16:40 +01:00
Pavol Vaskovic
8a8a3ad6df [benchmark] Limit setup overhead detection (>20)
For really small runtimes (under 20 μs) this method of setup overhead detection doesn't work: even a 1 μs change in a 20 μs runtime is 5%. Just return no overhead.
2019-01-08 19:15:29 +01:00
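A rough sketch of the guard described above; the function name, the inputs, and the exact form of the check are assumptions for illustration, not the BenchmarkDoctor implementation:

```python
def setup_overhead(runtime_with_setup_us, runtime_without_setup_us):
    """Estimate setup overhead as a fraction of the measured runtime.

    For runtimes under 20 us the signal drowns in measurement noise
    (a 1 us change is already 5%), so report no overhead at all.
    """
    if runtime_with_setup_us < 20:
        return 0.0
    delta = runtime_with_setup_us - runtime_without_setup_us
    return max(0.0, float(delta) / runtime_with_setup_us)


assert setup_overhead(15, 14) == 0.0   # too small to measure reliably
assert setup_overhead(100, 90) == 0.1  # 10% overhead, worth reporting
```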
Pavol Vaskovic
4a716445df [benchmark] BenchmarkDriver run in batch mode
Finished support for running all active tests in one batch. Returns a dictionary of PerformanceTestResults.

Known test names are passed to the harness in a compressed form as test numbers.
2019-01-07 20:59:39 +01:00
Pavol Vaskovic
df3389259b [benchmark] BenchmarkDriver: store test_numbers 2019-01-07 20:57:47 +01:00
Pavol Vaskovic
3023ab5545 [benchmark] BenchmarkDriver sample_time support
Added support for Benchmark_X's `--sample-time` parameter.
2019-01-07 20:57:42 +01:00
Pavol Vaskovic
46f94d7709 [benchmark] BenchmarkDriver check --markdown
Added `--markdown` flag for the `check` command to output the `BenchmarkDoctor`’s report in the Markdown format (as used by swift-ci on GitHub).
2018-12-21 01:22:38 +01:00
Andrew Trick
5154886491 Merge pull request #20334 from palimondo/within-cells-interlinked
[benchmark] Naming Convention
2018-12-13 08:20:23 -08:00
Pavol Vaskovic
9d6f7ad160 [benchmark] Driver & Doctor: Lower the sample cap
Lowered the default sample cap from 2k to 200. (This doesn't affect a manually specified `--num-samples` argument in the driver.)

Swift benchmarks have a fairly constant performance profile over time. It's more beneficial to get multiple independent measurements quickly than to gather more samples from the same run.
2018-12-07 15:06:43 +01:00
Pavol Vaskovic
92cf40dcd3 [benchmark] MarkdownReportHandler
`logging.Handler` that creates a nicely formatted report from `BenchmarkDoctor`'s `check` as a Markdown table for display on GitHub.
2018-11-27 22:55:02 +01:00
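A minimal sketch of a handler in that spirit; the class name, the table columns, and the sample message are assumptions, not the actual MarkdownReportHandler:

```python
import logging
import sys


class MarkdownTableHandler(logging.Handler):
    """Render log records as rows of a Markdown table (illustrative only)."""

    def __init__(self, stream=sys.stdout):
        logging.Handler.__init__(self)
        self.stream = stream
        self.stream.write("| Severity | Message |\n|----------|---------|\n")

    def emit(self, record):
        self.stream.write("| %s | %s |\n" % (record.levelname, record.getMessage()))


log = logging.getLogger("BenchmarkDoctor.report")
log.setLevel(logging.INFO)
log.addHandler(MarkdownTableHandler())
log.warning("'Ackermann' execution took at least 21 μs.")
```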
Pavol Vaskovic
9a04207735 [benchmark] Doctor: emit mem_page details info
Promoting a previously DEBUG-level message to INFO.
2018-11-27 22:49:07 +01:00
Pavol Vaskovic
b4f901bae4 [benchmark] Naming Convention
A new benchmark naming convention for better readability, and an improved naming system that accounts for performance coverage growth going forward.
2018-11-05 22:44:50 +01:00
Pavol Vaskovic
a7f832fb57 [benchmark] Legacy factor
This adds an optional `legacyFactor` to the `BenchmarkInfo`, which allows for linear modification of constants that unnecessarily inflate the base workload of benchmarks, while maintaining the continuity of long-term benchmark tracking.

For example, if a benchmark uses `for _ in N*10_000` in its run function, we could lower this to `for _ in N*1_000` and add a `legacyFactor: 10` to its `BenchmarkInfo`.

Note that this doesn't affect the real measurements gathered from the `--verbose` output. The `BenchmarkDoctor` has been slightly adjusted to work with these real samples, so `Benchmark_Driver check` will not flag these benchmarks for the slow runtime reported in the summary if their real runtimes fall into the recommended range.
2018-11-01 06:24:27 +01:00
Pavol Vaskovic
a24d0ff7a5 [benchmark] BenchmarkDoctor checks setup time
Add a check against unreasonably long setup times for benchmarks that do their initialization work in the `setUpFunction`. Given that typical benchmark measurements last about 1 second, it's reasonable to expect the setup to take at most 20% extra on top of that: 200 ms.

The `DictionaryKeysContains*` benchmarks are an instance of this mistake. The setup of `DictionaryKeysContainsNative` takes 3 seconds on my machine to prepare a dictionary for the run function, whose typical runtime is 90 μs. The setup of the Cocoa version takes 8 seconds! It is trivial to rewrite these with much smaller dictionaries that demonstrate the point of these benchmarks perfectly well, without having to wait ages for their setup.
2018-10-15 09:06:38 +02:00
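A sketch of how such a rule might be expressed; the function name, the input structure, and the message wording are assumptions, not the BenchmarkDoctor code:

```python
def check_setup_time(setup_times_us):
    """Warn when a benchmark's setup exceeds ~200 ms (illustrative rule).

    `setup_times_us` maps a benchmark name to its setup time in microseconds.
    """
    warnings = []
    for name, setup_us in sorted(setup_times_us.items()):
        if setup_us > 200000:  # 200 ms, roughly 20% of a 1 s measurement
            warnings.append("'%s' setup took at least %d μs." % (name, setup_us))
    return warnings


print(check_setup_time({"DictionaryKeysContainsNative": 3000000}))
```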
Pavol Vaskovic
638f4f8e5e [benchmark] Recommended runtime should be < 1ms
* Lowered the threshold for a healthy benchmark runtime to be under 1000 μs.
* Offer a suitable divisor that is a power of 10, in addition to the one that's a power of 2.
* Expanded the motivation in the docstring.
2018-10-13 22:09:25 +02:00
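A sketch of the divisor suggestion; the helper name and the exact rounding are assumptions, but it shows the intent of offering both a power-of-2 and a power-of-10 option:

```python
def suggested_divisors(runtime_us, limit_us=1000):
    """Return (power-of-2, power-of-10) divisors that bring the runtime
    under the recommended limit (illustrative helper)."""
    factor = float(runtime_us) / limit_us
    power_of_2, power_of_10 = 1, 1
    while power_of_2 < factor:
        power_of_2 *= 2
    while power_of_10 < factor:
        power_of_10 *= 10
    return power_of_2, power_of_10


# A 56,000 μs benchmark could divide its workload by 64 or by 100.
print(suggested_divisors(56000))
```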
Pavol Vaskovic
d9a89ffea2 [benchmark] Use header in CSV log
Since the meaning of some columns has changed but their overall number remained the same, let's include the header in the CSV log to make it clear that we are now reporting MIN, Q1, MEDIAN, Q3, MAX, MAX_RSS, instead of the old MIN, MAX, MEAN, SD, MEDIAN, MAX_RSS format.
2018-10-12 10:03:33 +02:00
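For example, the driver might emit something along these lines; the exact column titles and the sample row are assumptions here, only the statistics named in the commit are taken from it:

```python
import csv
import sys

# Write the header row first so the meaning of the columns is explicit in the log.
writer = csv.writer(sys.stdout)
writer.writerow(["#", "TEST", "SAMPLES", "MIN(μs)", "Q1(μs)", "MEDIAN(μs)",
                 "Q3(μs)", "MAX(μs)", "MAX_RSS(B)"])
writer.writerow([1, "Ackermann", 2, 699, 699, 700, 700, 701, 13049856])
```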
Pavol Vaskovic
a04edd1d47 [benchmark] Quantiles in Benchmark_Driver
Switching the measurement technique from gathering `i` independent samples characterized by their mean values to a finer-grained characterization of these measurements using quantiles.

The distribution of benchmark measurements is non-normal, with outliers that significantly inflate the mean and standard deviation due to the presence of an uncontrolled variable: the system load. Therefore MEAN and SD were the wrong statistics to properly characterize the benchmark measurements.

Benchmark_Driver now gathers more individual measurements from Benchmark_O. It is executed with `--num-iters=1`, because we don't want to average the runtimes; we want raw data. This collects a variable number of measurements in about 1 second. Using `--quantile=20`, we get up to 20 measured values that properly characterize the empirical distribution of the benchmark from each independent run. The measurements from `i` independent executions are combined to form the final empirical distribution, which is reported as a five-number summary (MIN, Q1, MEDIAN, Q3, MAX).
2018-10-11 18:56:27 +02:00
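A compact sketch of pooling raw samples into the reported summary; the quantile indexing here is a simple nearest-rank approximation and may differ from the driver's actual interpolation:

```python
def five_number_summary(samples):
    """Reduce raw samples from several independent runs to
    MIN, Q1, MEDIAN, Q3, MAX (illustrative sketch)."""
    s = sorted(samples)

    def quantile(q):
        return s[min(len(s) - 1, int(q * (len(s) - 1) + 0.5))]

    return s[0], quantile(0.25), quantile(0.5), quantile(0.75), s[-1]


# Samples pooled from several independent `--num-iters=1` runs of Benchmark_O.
print(five_number_summary([121, 119, 127, 118, 140, 122, 119, 301, 120, 118]))
```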
Pavol Vaskovic
0438c45e2d [benchmark] B_D iterations => independent-samples
Renamed Benchmark_Driver's `iterations` argument to `independent-samples` to clarify its true meaning and disambiguate it from the concept of `num-iters` used in Benchmark_O. The short form of the argument (`-i`) remains unchanged.
2018-10-11 18:56:27 +02:00
Pavol Vaskovic
9bd599a914 [benchmark] Doctor explicitly measures memory
Small fix following the last refactoring of MAX_RSS: the `--memory` option is required to measure memory in `--verbose` mode. Added an integration test for the `check` command of Benchmark_Driver that depended on it.
2018-09-14 23:40:43 +02:00
Ben Langmuir
423e145b0c Revert "[benchmark] Report Quantiles from Benchmark_O and a TON of Gardening" 2018-09-14 13:24:01 -07:00
Pavol Vaskovic
84bf15836d [benchmark] Doctor explicitly measures memory
Small fix following the last refactoring of MAX_RSS: the `--memory` option is required to measure memory in `--verbose` mode. Added an integration test for the `check` command of Benchmark_Driver that depended on it.
2018-09-06 18:21:50 +02:00
Pavol Vaskovic
49e8e692fb [benchmark] Strangle run and run_benchmarks
Moved all `run` command related functionality to `BenchmarkDriver`.
2018-08-23 18:01:46 +02:00
Pavol Vaskovic
7ae5d7754c [benchmark] Report totals as a sentence
Clean up after removing the bogus aggregate statistics from the last line of the log. It makes more sense to report the total number of executed benchmarks as a sentence than to try to fit it into the format of the preceding table.

Added a test assertion that `run_benchmarks` returns a CSV-formatted log, as it is used to write the log to a file in `log_results`.
2018-08-23 18:01:46 +02:00
Pavol Vaskovic
ef1461ca46 [benchmark] Strangle log_results
Moved `log_results` to BenchmarkDriver.
2018-08-23 18:01:46 +02:00
Pavol Vaskovic
a10b6070dd [benchmark] Refactor log_results
Added tests for `log_results` and the *space-justified-columns* format emitted to stdout while logging to a file.
2018-08-23 18:01:46 +02:00
Pavol Vaskovic
1d3fa87fdd [benchmark] Strangle log_results -> log_file
Moved the `log_file` path construction to the `BenchmarkDriver`.
Retired `get_*_git_*` functions.
2018-08-23 18:01:41 +02:00
Pavol Vaskovic
f38e6df914 [benchmark] Doctor verifies constant memory use
This needs to be finished with a function approximating the normal range based on the memory used.
2018-08-20 16:52:07 +02:00
Pavol Vaskovic
06061976da [benchmark] BenchmarkDoctor checks setup overhead
Detect setup overhead in a benchmark and report it if it exceeds 5%.
2018-08-17 08:50:04 +02:00
Pavol Vaskovic
7725c0096e [benchmark] Measure and analyze benchmark runtimes
`BenchmarkDoctor` measures benchmark execution (using `BenchmarkDriver`) and verifies that runtimes stay under 2500 microseconds.
2018-08-17 08:40:39 +02:00
Pavol Vaskovic
ab16999e20 [benchmark] Created BenchmarkDoctor (naming)
`BenchmarkDoctor` analyzes performance tests and reports their conformance to a set of desired criteria. The first two rules verify the naming convention.

`BenchmarkDoctor` is invoked from `Benchmark_Driver` with the `check` argument.
2018-08-17 08:40:39 +02:00
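A sketch of what a naming rule check might look like; the regex, the sample names, and the 40-character limit (echoing the Driver console-output commit above) are assumptions, not the actual BenchmarkDoctor rules:

```python
import re

UPPER_CAMEL_CASE = re.compile(r"^[A-Z][a-zA-Z0-9]*$")  # simplified convention


def naming_issues(name, max_length=40):
    """Report naming-convention violations for a benchmark name (illustrative)."""
    issues = []
    if not UPPER_CAMEL_CASE.match(name):
        issues.append("'%s' name doesn't conform to UpperCamelCase." % name)
    if len(name) > max_length:
        issues.append("'%s' name is longer than %d characters." % (name, max_length))
    return issues


print(naming_issues("ackermann"))  # lowercase first letter
print(naming_issues("A" * 45))     # too long to display nicely
```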
Pavol Vaskovic
076415f969 [benchmark] Strangler run_benchmarks
Replaced the guts of the `run_benchmarks` function with the implementation from `BenchmarkDriver`. There was only a single client that called it with `verbose=True`, so this parameter could be safely removed.

The `instrument_test` function is replaced by running `Benchmark_O` with the `--memory` option, which implements the MAX_RSS measurement while also excluding the overhead of the benchmarking infrastructure. The incorrect computation of standard deviation was simply dropped for measurements of more than one independent sample. The bogus aggregated `Totals` statistics were removed; now only the total number of executed benchmarks is reported.
2018-08-17 08:40:39 +02:00
Pavol Vaskovic
a84db83062 [benchmark] BenchmarkDriver can run tests
The `run` method on `BenchmarkDriver` invokes the test harness with the specified number of iterations and samples. It supports measuring memory use, and in verbose mode it also collects individual samples and monitors the system load by counting the number of voluntary and involuntary context switches.

Output is parsed using `LogParser` from `compare_perf_tests.py`. This makes that file a required dependency for the driver, so it is also copied to the bin directory during the build.
2018-08-17 08:39:50 +02:00
Pavol Vaskovic
ce39b12929 [benchmark] Strangler: BenchmarkDriver get_tests
See https://www.martinfowler.com/bliki/StranglerApplication.html for more info on the used pattern for refactoring legacy applications.

Introduced the `BenchmarkDriver` class as the beginning of a strangler application that will gradually replace the old functions. Used it instead of the `get_tests()` function in Benchmark_Driver.

The interaction with Benchmark_O is simulated through mocking. The `SubprocessMock` class records the invocations of command-line processes and responds with canned replies in the format of Benchmark_O output.

Removed 3 redundant lit tests that are now covered by the unit test `test_gets_list_of_all_benchmarks_when_benchmarks_args_exist`. This saves 3 seconds of test execution time. Keeping only a single integration test that verifies the plumbing is connected correctly.
2018-08-17 00:32:04 +02:00
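A minimal sketch of such a mock; the class shape and the canned `--list` output format are assumptions, not the actual test fixture:

```python
class SubprocessMock(object):
    """Record command invocations and answer with canned Benchmark_O-style output."""

    def __init__(self, responses=None):
        self.calls = []
        self.responses = responses or {}

    def check_output(self, args):
        self.calls.append(args)
        return self.responses.get(tuple(args), "")


mock = SubprocessMock({
    ("/benchmarks/Benchmark_O", "--list"):
        "#,Test,[Tags]\n1,Ackermann,[algorithm]\n",
})
print(mock.check_output(["/benchmarks/Benchmark_O", "--list"]))
print(mock.calls)
```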
Pavol Vaskovic
69d5d5e732 [benchmark] Adding tests for BenchmarkDriver
The imports are a bit sketchy because Benchmark_Driver doesn't have a `.py` extension, so they had to be hacked manually. :-/

Extracted `parse_args` from `main` and added test coverage for argument parsing.
2018-08-17 00:32:04 +02:00
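A sketch of the kind of hack meant here; the path and the `parse_args(["run"])` call are assumptions, and on Python 3 `importlib` would replace the long-deprecated `imp` module:

```python
import imp  # Python 2-era API; importlib.machinery.SourceFileLoader on Python 3
import os

# Benchmark_Driver has no .py extension, so it cannot be imported normally;
# load it from its file path instead and use the resulting module in tests.
driver_path = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                           "Benchmark_Driver")
Benchmark_Driver = imp.load_source("Benchmark_Driver", driver_path)

args = Benchmark_Driver.parse_args(["run"])  # entry point extracted from main
```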