Now, run_smoke_bench runs the benchmarks, compares performance and code size, and reports the results both on stdout and as a markdown file.
No need to run bench_code_size.py and compare_perf_tests.py separately.
This has two benefits:
- It's much easier to run it locally
- It's more transparent what happens in '@swiftci benchmark', because all the logic now lives in run_smoke_bench rather than in a script on the CI bot that isn't visible.
I also removed the branch arguments from ReportFormatter in compare_perf_tests.py; they were not used anyway.
For a smooth rollout in CI, I created a new script rather than changing the existing one. Once everything is set up in CI, I'll delete the old run_smoke_test.py and bench_code_size.py.
This adds an optional `legacyFactor` to the `BenchmarkInfo`, which allows for linear modification of constants that unnecessarily inflate the base workload of benchmarks, while maintaining the continuity of long-term benchmark tracking.
For example, if a benchmark uses `for _ in N*10_000` in its run function, we could lower this to `for _ in N*1_000` and add a `legacyFactor: 10` to its `BenchmarkInfo`.
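Roughly, such a change might look like the following sketch (the benchmark name and workload are made up for illustration, assuming the suite's usual `TestsUtils` helpers like `blackHole`; only the `legacyFactor` parameter is the point):

```swift
import TestsUtils

// Hypothetical benchmark, for illustration only.
public let MyBench = BenchmarkInfo(
  name: "MyBench",
  runFunction: run_MyBench,
  tags: [.validation],
  legacyFactor: 10)  // old results stay comparable after multiplying by 10

@inline(never)
public func run_MyBench(_ N: Int) {
  var checksum = 0
  for i in 1...N*1_000 {  // lowered from N*10_000
    checksum &+= i
  }
  blackHole(checksum)
}
```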
Note that this doesn’t affect the real measurements gathered from the `--verbose` output. The `BenchmarkDoctor` has been slightly adjusted to work with these real samples, so `Benchmark_Driver check` will not flag these benchmarks for the slow runtime reported in the summary, as long as their real runtimes fall within the recommended range.
Since this benchmark has been significantly modified and needs to be renamed, we can also lower the workload by a factor of 10, to keep up with best practices.
The old benchmark that uses `NSDecimalNumber` as the tested class is renamed to `MapReduceNSDecimalNumber`, and the renamed `MapReduceClass2` now measures a Swift class `Box` that wraps an `Int`. The Short versions were modified analogously.
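A minimal sketch of the shape the new workload might take (illustrative only, not the exact code in the suite):

```swift
// A trivial Swift class wrapping an Int, used as the element type
// for the map/reduce workload instead of NSDecimalNumber.
class Box {
  var value: Int
  init(_ v: Int) { value = v }
}

func mapReduceBoxes(_ boxes: [Box]) -> Int {
  // Map over class instances, then reduce, mirroring the structure
  // of the original NSDecimalNumber-based benchmark.
  return boxes.map { Box($0.value &+ 1) }
              .reduce(0) { $0 &+ $1.value }
}
```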
This does a few things:
1. We were not updated for libProc's addition. I bumped the swiftpm version
number to get the systemLibrary functionality (thanks Ankit).
2. I split up a bunch of lines to help the typechecker out a little bit.
The DictionaryKeysContains benchmark used an unreasonably large dictionary to demonstrate the failure mode of O(n) instead of O(1) performance in the case of a Cocoa-backed Dictionary. The setup of the 1M-element dictionary took 8 seconds on my old machine!
The old pathological behavior can be equally well demonstrated with a much smaller dictionary. (Validated by modifying `public func _customContainsEquatableElement` in `Dictionary.swift` to `return _variant.index(forKey: element) != nil`.)
The reported performance with correct O(1) behavior is unchanged.
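A sketch of the idea with illustrative sizes and names (the exact constants and setup in the suite may differ):

```swift
import Foundation

// A few thousand keys are enough to show the O(n) vs. O(1) difference
// between a Cocoa-backed and a native Dictionary, without an
// 8-second setup. Sizes and names here are illustrative.
let size = 10_000
let nativeDictionary = Dictionary(uniqueKeysWithValues:
  (0..<size).map { (String($0), $0) })
let cocoaDictionary = NSDictionary(dictionary: nativeDictionary) as! [String: Int]

@inline(never)
func runKeysContains(_ dictionary: [String: Int], _ N: Int) {
  for _ in 1...N {
    precondition(dictionary.keys.contains(String(size / 2)))
  }
}
```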
MapReduceClass had setup overhead of 868 μs (7%).
Setup overhead of MapReduceClassShort was practically lost in the measurement noise from its artificially high base load, but it was there.
Extracting the decimal array initialization into the `setUpFunction` also takes out the cost of releasing the [NSDecimalNumber], which turns out to be about half of the measured runtime in the case of the MapReduceClass benchmark. This significantly changes the reported runtimes (to about half), therefore the modified benchmarks get a new name with the suffix `2`.
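The extraction pattern, sketched with hypothetical names (the real benchmark uses its own constants and sizes; `blackHole` is assumed from the suite's `TestsUtils`):

```swift
import Foundation
import TestsUtils

// Illustrative setUpFunction extraction: the [NSDecimalNumber] is
// created (and eventually released) outside of the measured run function.
var decimals: [NSDecimalNumber]! = nil

public func setUpDecimals() {
  decimals = (0..<1_000).map { NSDecimalNumber(decimal: Decimal($0)) }
}

@inline(never)
public func runMapReduceDecimals(_ N: Int) {
  for _ in 1...N {
    let sum = decimals.map { $0.intValue &+ 1 }.reduce(0, &+)
    blackHole(sum)
  }
}
```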
Sequence benchmarks that test operations on Arrays have a setup overhead of 14 μs. (Up from 4 μs a year ago!) That’s just the creation of an [Int] with 2k elements from a range… This array is now extracted into a constant.
This commit also removes the .unstable tag from some CountableRange benchmarks, restoring them to the pre-commit set of the Swift Benchmark Suite.
SubstringComparable had setup overhead of 58 μs (26%).
This was a tricky modification: extracting the `substrings` and `comparison` constants out of the run function surprisingly resulted in decreased performance. For some reason this configuration causes a significant increase in retain/release traffic. Aliasing the constants in the run function somehow works around this deoptimization.
Also, the initial split of the string into 8 substrings takes 44 ms!!! (I’m suspecting some kind of one-time ICU initialization?)
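The workaround, sketched with made-up data (not the benchmark’s actual strings or comparison logic):

```swift
// Global constants, initialized outside the measured run function.
let substrings: [Substring] = "a,b,c,d,e,f,g,h".split(separator: ",")
let comparison: [Substring] = "a,b,c,d,e,f,g,h".split(separator: ",")

@inline(never)
public func runSubstringComparison(_ N: Int) {
  // Local aliases of the global constants; using the globals directly
  // caused the extra retain/release traffic mentioned above.
  let ss = substrings
  let cmp = comparison
  for _ in 1...N {
    precondition(ss == cmp)
  }
}
```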
SortSortedStrings had setup overhead of 914 μs (30%).
Renamed the [String] constants to be shorter and more descriptive. Extracted the lazy initialization of all these constants into a `setUpFunction`, for cleaner measurements.
RandomShuffleLCG2 had setup overhead of 902 μs (17%), even though it already used a setUpFunction. It turns out that copying a 100k-element array is measurably costly.
The only way I could think of to eliminate this overhead from the measurement is to let the numbersLCG array linger around (800 kB), because shuffling the IUO (implicitly unwrapped optional) version had different performance.
IterateData has setup overhead of 480 μs (10%).
There remained a strange setup overhead after extracting the data into the setUpFunction, because of an off-by-one error in the main loop. It should be either `for _ in 1...10*N` or `for _ in 0..<10*N`. It’s an error to use `0...m*N`, because this results in `m*N + 1` iterations that will be divided by N in the reported measurement. The extra iteration then manifests as a mysterious setup overhead!
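Sketched out (the loop body and data are illustrative, assuming the suite’s `blackHole` helper from `TestsUtils`):

```swift
import TestsUtils

// Illustrative data; in the real benchmark it is prepared in the setUpFunction.
let data = [UInt8](repeating: 0, count: 16 * 1024)

@inline(never)
public func run_IterateData(_ N: Int) {
  // Wrong: `for _ in 0...10*N` runs 10*N + 1 iterations; after the
  // reported time is divided by N, the extra iteration shows up as
  // phantom "setup overhead".
  // Correct: exactly 10*N iterations.
  for _ in 1...10*N {
    blackHole(data.reduce(0, &+))
  }
}
```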
Dictionary had setup overhead of 136 μs (6%).
DictionaryOfObjects had setup overhead of 616 μs (7%).
Also fixed the variable naming convention (lowerCamelCase).
DataCount had setup overhead of 18 μs (20%).
DataSubscript had setup overhead of 18 μs (2%).
A setUpFunction wasn’t necessary here, because the initialization is short (18 μs for `sampleData(.medium)`) and will only inflate the initial measurement.
The runtimes of other benchmarks hide the sampleData initialization in their artificially high runtimes (most use an internal multiplier of 10 000 iterations), but they were changed to use the same constant data, since it was already available. The overhead will already be extracted if we go for more precise measurements with lower multipliers in the future.
ArrayAppendStrings had setup overhead of 10 ms (42%). ArrayAppendLazyMap had setup overhead of 24 μs (1%).
ArrayAppendOptionals and ArrayAppendArrayOfInt also had a barely visible, small overhead of ~18 μs that was mostly hidden in measurement noise, but I’ve extracted the setup from all places that had 10 000-element array initializations, in preparation for more precise measurement in the future.
Add a check against unreasonably long setup times for benchmarks that do their initialization work in the `setUpFunction`. Given that typical benchmark measurements last about 1 second, it’s reasonable to expect the setup to take at most 20% extra on top of that: 200 ms.
The `DictionaryKeysContains*` benchmarks are an instance of this mistake. The setup of `DictionaryKeysContainsNative` takes 3 seconds on my machine, to prepare a dictionary for a run function whose typical runtime is 90 μs. The setup of the Cocoa version takes 8 seconds!!! It is trivial to rewrite these with much smaller dictionaries that demonstrate the point of these benchmarks perfectly well, without the need to wait for ages to set them up.
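The rule itself is simple; sketched here in Swift for illustration (the actual check lives in the Python `BenchmarkDoctor`, and these names are made up):

```swift
// Setup should add at most 20% on top of a typical ~1 second of
// measurement per benchmark, i.e. 200 ms.
let expectedMeasurementTime = 1.0                           // seconds
let maxSetupRatio = 0.2
let maxSetupTime = expectedMeasurementTime * maxSetupRatio  // 0.2 s

func checkSetupTime(benchmark: String, setupTime: Double) {
  if setupTime > maxSetupTime {
    print("warning: '\(benchmark)' setup took \(setupTime) s " +
          "(limit is \(maxSetupTime) s)")
  }
}
```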
* Lowered the threshold for healthy benchmark runtime to be under 1000 μs.
* Offer a suitable divisor that is a power of 10, in addition to the one that is a power of 2 (see the sketch after this list).
* Expanded the motivation in the docstring.
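A sketch of that suggestion logic (the real check lives in the Python `BenchmarkDoctor`; names and quantities here are illustrative):

```swift
// Find the smallest power-of-2 and power-of-10 divisors that bring
// a benchmark's runtime under the 1000 μs threshold.
let runtimeThreshold = 1_000.0  // μs

func suggestedDivisors(runtime: Double) -> (powerOf2: Int, powerOf10: Int) {
  var d2 = 1
  while runtime / Double(d2) >= runtimeThreshold { d2 *= 2 }
  var d10 = 1
  while runtime / Double(d10) >= runtimeThreshold { d10 *= 10 }
  return (d2, d10)
}

// For example, a 2800 μs benchmark could divide its workload
// by 4 (power of 2) or by 10 (power of 10).
```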
Since the meaning of some columns has changed, but their overall number remained the same, let’s include a header in the CSV log to make it clear that we now report MIN, Q1, MEDIAN, Q3, MAX, MAX_RSS instead of the old MIN, MAX, MEAN, SD, MEDIAN, MAX_RSS format.
Use the box-plot-inspired technique for filtering out outlier measurements. Values higher than the top inner fence (TIF = Q3 + 1.5 * IQR) are excluded from the sample.
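The filter, sketched in Swift (the actual implementation is in the Python driver; the quartile estimation here is a simple index-based approximation):

```swift
// Drop samples above the top inner fence: TIF = Q3 + 1.5 * IQR.
func rejectOutliers(_ samples: [Double]) -> [Double] {
  guard !samples.isEmpty else { return samples }
  let sorted = samples.sorted()
  let q1 = sorted[sorted.count / 4]
  let q3 = sorted[(sorted.count * 3) / 4]
  let topInnerFence = q3 + 1.5 * (q3 - q1)
  return sorted.filter { $0 <= topInnerFence }
}
```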
When num_samples is less than quantile + 1, some of the measurements are repeated in the report summary. Parsed samples should strive to be a true reflection of the measured distribution, so we’ll correct this by discarding the repeated artifacts from quantile estimation.
This avoids introducing a bias from this oversampling into the empirical distribution obtained from merging independent samples.
See also:
https://en.wikipedia.org/wiki/Oversampling_and_undersampling_in_data_analysis
Switching the measurement technique from gathering `i` independent samples characterized by their mean values to a finer-grained characterization of these measurements using quantiles.
The distribution of benchmark measurements is non-normal, with outliers that significantly inflate the mean and standard deviation due to the presence of the uncontrolled variable of system load. Therefore MEAN and SD were the wrong statistics to properly characterize the benchmark measurements.
Benchmark_Driver now gathers more individual measurements from Benchmark_O. It is executed with `--num-iters=1`, because we don’t want to average the runtimes; we want raw data. This collects a variable number of measurements gathered in about 1 second. Using `--quantile=20` we get up to 20 measured values that properly characterize the empirical distribution of the benchmark from each independent run. The measurements from `i` independent executions are combined to form the final empirical distribution, which is reported as a five-number summary (MIN, Q1, MEDIAN, Q3, MAX).
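The reporting step, sketched in Swift (the driver’s actual code is Python; the quantile estimation here uses a simple nearest-rank method, which may differ from the driver’s):

```swift
// Merge samples from independent runs and report the five-number summary.
func fiveNumberSummary(of independentRuns: [[Double]])
  -> (min: Double, q1: Double, median: Double, q3: Double, max: Double) {
  let merged = independentRuns.flatMap { $0 }.sorted()
  precondition(!merged.isEmpty, "need at least one sample")
  func quantile(_ q: Double) -> Double {
    let index = Int((Double(merged.count - 1) * q).rounded())
    return merged[index]
  }
  return (merged.first!, quantile(0.25), quantile(0.5),
          quantile(0.75), merged.last!)
}
```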
Renamed Benchmark_Driver’s `iterations` argument to `independent-samples` to clarify its true meaning and disambiguate it from the concept of `num-iters` used in Benchmark_O. The short form of the argument, `-i`, remains unchanged.
When measuring with a specified number of iterations (generally, `--num-iters=1` makes sense), automatically determine the number of samples to take, so that the overall measurement duration comes close to `sample-time`.
This is the same technique used to scale `num-iters` before, but for `num-samples`.
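A sketch of that scaling, with illustrative names (not the exact code in the driver):

```swift
// Time one sample first, then take as many samples as fit into the
// requested sample-time, mirroring how num-iters used to be scaled.
func scaledNumSamples(sampleTime: Double, oneSampleTime: Double) -> Int {
  guard oneSampleTime > 0 else { return 1 }
  return max(1, Int(sampleTime / oneSampleTime))
}

// For example, with a sample-time of 1.0 s and a first sample taking
// 0.012 s, the driver would gather about 83 samples.
```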