The Swift benchmarking harness now has two distinct output formats:
* Default: Formatted text intended for human consumption.
  Right now, this is just the minimum value, but we can augment that.
* `--json`: Each output line is a JSON-encoded object that contains raw data.
  This information is intended for use by Python scripts that aggregate
  or compare multiple independent tests.
Previously, we tried to use the same output for both purposes. This required
the Python scripts to do more complex parsing of textual layouts, and also meant
that the Python scripts had only summary data to work with instead of full raw
sample information. This in turn made it almost impossible to derive meaningful
comparisons between runs or to aggregate multiple runs.
Typical output in the new JSON format looks like this:
```
{"number":89, "name":"PerfTest", "samples":[1.23, 2.35], "max_rss":16384}
{"number":91, "name":"OtherTest", "samples":[14.8, 19.7]}
```
This format is easy to parse in Python. Just iterate over
lines and decode each one separately. Also note that the
optional fields (`"max_rss"` above) are trivial to handle:
```
import json

for line in lines:
    j = json.loads(line)
    # Default to 0 if not present
    max_rss = j.get("max_rss", 0)
```
Note the `"samples"` array includes the runtime for each individual run.
Because optional fields are so much easier to handle in this form, I reworked
the Python logic to translate old formats into this JSON format for more
uniformity. Hopefully, we can simplify the code in a year or so by stripping
out the old log formats entirely, along with some of the redundant statistical
calculations. In particular, the Python logic still makes an effort to preserve
mean, median, max, min, stdev, and other statistical data whenever the full set
of samples is not present. Once we've gotten to a point where we're always
keeping full samples, we can compute any such information on the fly as needed,
eliminating the need to record it.
This is a pretty big rearchitecture of the core benchmarking logic. In order to
try to keep things a bit more manageable, I have not taken this opportunity to
replace any of the actual statistics used in the higher level code or to change
how the actual samples are measured. (But I expect this rearchitecture will make
such changes simpler.) In particular, this should not actually change any
benchmark results.
For the future, please keep this general principle in mind: statistical
summaries (averages, medians, etc.) should, as a rule, be computed for immediate
output and rarely if ever stored or used as input for other processing. Instead,
aim to store and transfer raw data from which statistics can be recomputed as
necessary.
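As a minimal sketch of that principle (illustrative only, not code from the
harness), summary statistics can be recomputed on demand from the raw
`"samples"` arrays in the JSON log:
```
import json
import statistics

def summarize(json_lines):
    # Recompute summaries from raw samples when they are needed for output,
    # instead of storing precomputed mean/median/stdev in the log itself.
    for line in json_lines:
        j = json.loads(line)
        samples = j["samples"]
        yield {
            "name": j["name"],
            "min": min(samples),
            "median": statistics.median(samples),
            "mean": statistics.mean(samples),
            "stdev": statistics.stdev(samples) if len(samples) > 1 else 0.0,
        }
```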
The script defaulted to a mode that no one uses, without checking
whether the input was compatible with that mode.
This is the script used for run-to-run comparison of benchmark
results. The in-tree benchmarks happened to work with the script only
because of a fragile string comparison buried deep within the
script. Other out-of-tree benchmark scripts that generate results were
silently broken when using this script for comparison.
The `__future__` we relied on is now the present: the specific features we
imported are all included [since Python 3.0](https://docs.python.org/3/library/__future__.html):
* absolute_import
* print_function
* unicode_literals
* division
These import statements are no-ops and are no longer necessary.
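For reference, this is the kind of statement being removed (the exact grouping
in our scripts may differ):
```
# A no-op under Python 3: all four features are already the default behavior.
from __future__ import absolute_import, division, print_function, unicode_literals
```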
Improve inline headers in `single_table` mode to also print labels for the numeric columns.
Sections in the `single_table` are visually distinguished by a separator row preceding the inline headers.
Separated header label styles for git and markdown modes, using UPPERCASE and **Bold** formatting respectively.
Inlined section template definitions.
Now, run_smoke_bench runs the benchmarks, compares performance and code size, and reports the results on stdout and as a markdown file.
No need to run bench_code_size.py and compare_perf_tests.py separately.
This has two benefits:
- It's much easier to run it locally
- It's now more transparent what happens in '@swiftci benchmark', because all the logic is in run_smoke_bench rather than in a script on the CI bot that is not visible here.
I also removed the branch arguments from ReportFormatter in compare_perf_tests.py. They were not used anyway.
For a smooth rollout in CI, I created a new script rather than changing the existing one. Once everything is set up in CI, I'll delete the old run_smoke_test.py and bench_code_size.py.
Use the box-plot-inspired technique for filtering out outlier measurements. Values that are higher than the top inner fence (TIF = Q3 + IQR * 1.5) are excluded from the sample.
When `num_samples` is less than `quantile + 1`, some of the measurements are repeated in the report summary. Parsed samples should strive to be a true reflection of the measured distribution, so we correct this by discarding the repeated artifacts from quantile estimation.
This avoids introducing a bias from this oversampling into the empirical distribution obtained from merging independent samples.
See also:
https://en.wikipedia.org/wiki/Oversampling_and_undersampling_in_data_analysis
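A rough sketch of what that correction could look like when reconstructing
samples from a quantile summary (a hypothetical helper; the actual logic in
compare_perf_tests.py may differ):
```
def samples_from_quantiles(quantiles, num_samples):
    # When fewer samples were measured than quantiles reported, the summary
    # repeats some values. Dropping consecutive repeats keeps the
    # reconstructed sample from being artificially oversampled.
    if not quantiles or num_samples >= len(quantiles):
        return list(quantiles)
    deduped = [quantiles[0]]
    for q in quantiles[1:]:
        if q != deduped[-1]:
            deduped.append(q)
    return deduped
```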
Turns out that both the old code in `DriverUtils` that computed median, as well as newer quartiles in `PerformanceTestSamples` had off-by-1 error.
It truly is the 3rd of the 2 hard things in computer science!
Since the results comparisons are now used to also compare code sizes in addition to runtimes, it makes sense to rename the column label to the more neutral term “ratio” instead of the old “speedup”.
The test number column in the space-justified column format emitted by the Benchmark_Driver to stdout while logging to a file is right-aligned, so the parser must handle leading whitespace.
Replaced the guts of the `run_benchmarks` function with the implementation from `BenchmarkDriver`. There was only a single client that called it with `verbose=True`, so this parameter could be safely removed.
The `instrument_test` function is replaced by running `Benchmark_0` with the `--memory` option, which implements the MAX_RSS measurement while also excluding the overhead from the benchmarking infrastructure. The incorrect computation of standard deviation for measurements with more than one independent sample was simply dropped. The bogus aggregated `Totals` statistics were removed; we now report only the total number of executed benchmarks.
Introduce an algorithm for excluding outliers, using the Interquartile Range rule, after collecting all samples.
The `exclude_outliers` method uses the 1st and 3rd quartiles to compute the interquartile range (IQR), then uses the inner fences at Q1 - 1.5*IQR and Q3 + 1.5*IQR to remove samples that fall outside them.
Based on experiments collecting hundreds and thousands of samples (`num_samples`) per test with a low iteration count (`num_iters`) and ~1s runtime, this rule is very effective at improving the quality of the sample population. It removes short environmental fluctuations that were previously averaged into the overall result (via the adaptively determined `num_iters` chosen to run for ~1s), inflating the reported result with these measurement errors. This technique can be used for some benchmarks to get more stable results faster than before.
This outlier filtering is employed when parsing `--verbose` test results.
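A minimal sketch of the rule (not the exact `exclude_outliers` implementation); the `top_only` switch corresponds to filtering with only the top inner fence, as described earlier:
```
def exclude_outliers(samples, top_only=False):
    # Sketch of the Interquartile Range rule: drop values outside the inner
    # fences Q1 - 1.5*IQR and Q3 + 1.5*IQR (or only above the top fence).
    s = sorted(samples)

    def quantile(q):
        # Linear interpolation between the closest ranks.
        idx = (len(s) - 1) * q
        lo = int(idx)
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + (s[hi] - s[lo]) * (idx - lo)

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    lo_fence = float("-inf") if top_only else q1 - 1.5 * iqr
    hi_fence = q3 + 1.5 * iqr
    return [x for x in s if lo_fence <= x <= hi_fence]
```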
* Moved the functionality to compute median, standard deviation and related statistics from `PerformanceTestResult` into `PerformanceTestSamples`.
* Fixed wrong unit in comments
Measure more of the environment during tests.
In addition to measuring maximum resident set size, also extract the number of voluntary and involuntary context switches from the verbose mode output.
LogParser doesn’t use `csv.reader` anymore.
Parsing is handled by a Finite State Machine. Each line is matched against a set of (mutually exclusive) regular expressions that represent known states. When a match is found, the corresponding parsing action is taken.
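A minimal sketch of this style of parser (the actual states and regular expressions in LogParser differ; the patterns and field names below are illustrative assumptions):
```
import re

# Each known line shape gets a regular expression; the first match
# determines which parsing action runs for that line.
HEADER = re.compile(r"^#,TEST,SAMPLES")                  # column header row
RESULT = re.compile(                                     # test result row
    r"^(?P<num>\d+),(?P<name>[^,]+),(?P<samples>\d+),(?P<min>\d+)")
MAX_RSS = re.compile(r"MAX_RSS (?P<rss>\d+)")            # hypothetical verbose-mode line

def parse_log(lines):
    results = []
    for line in lines:
        if HEADER.match(line):
            continue                                     # nothing to record
        m = RESULT.match(line)
        if m:
            results.append({"number": int(m["num"]),
                            "name": m["name"],
                            "num_samples": int(m["samples"]),
                            "min": int(m["min"])})
            continue
        m = MAX_RSS.search(line)
        if m and results:
            results[-1]["max_rss"] = int(m["rss"])       # attach to last result
    return results
```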
Moved result formatting methods from `PerformanceTestResult` and `ResultComparison` to `ReportFormatter`, in order to free PTR to take on more computational responsibilities in the future.