Small fix following the last refactoring of MAX_RSS: the `--memory` option is required to measure memory in `--verbose` mode. Added an integration test for the `check` command of `Benchmark_Driver` that depended on it.
The spaghetti if-else code was untangled into a nested function that computes `iterationsPerSampleTime` and a single constant `numIters` expression that takes care of the overflow capping as well as the choice between a fixed and a computed `numIters` value.
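The actual harness code is Swift; here is a rough Python sketch of the untangled logic (names mirror the ones above, and the cap value is an assumption):

````
def compute_num_iters(fixed_num_iters, time_per_iter_ns,
                      sample_time_ns=1000000000, cap=2**32 - 1):
    """Sketch: choose between a fixed and a computed iteration count,
    capped to avoid overflow (cap value is assumed)."""
    def iterations_per_sample_time():
        # Scale iterations so a single sample runs ~sample_time_ns.
        return max(1, sample_time_ns // max(1, time_per_iter_ns))

    num_iters = fixed_num_iters or iterations_per_sample_time()
    return min(num_iters, cap)  # overflow capping
````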
The `numIters` is now computed and logged only once per benchmark measurement instead of on every sample.
The sampling loop is now just a single line. Hurrah!
Modified test to verify that the `LogParser` maintains `num-iters` derived from the `Measuring with scale` message across samples.
Turns out that both the old code in `DriverUtils` that computed the median and the newer quartiles in `PerformanceTestSamples` had an off-by-1 error.
It truly is the 3rd of the 2 hard things in computer science!
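For illustration, this is the kind of index arithmetic where such an off-by-one hides (a generic sketch, not the exact fixed code):

````
def median(samples):
    """Median of samples; the even-length case is where the
    off-by-one error typically creeps in."""
    s = sorted(samples)
    n = len(s)
    mid = n // 2
    # Buggy variants use s[mid] unconditionally, shifting the median
    # by half an index for even-sized sample lists.
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2.0
````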
Clean-up after removing bogus aggregate statistics from the last line of the log. It makes more sense to report the total number of executed benchmarks as a sentence than to try fitting it into the format of the preceding table.
Added test assertion that `run_benchmarks` returns a CSV-formatted log, as it is used to write the log to a file in `log_results`.
The test number column in the space-justified column format emitted by `Benchmark_Driver` to stdout (while logging to a file) is right-aligned, so the parser must handle leading whitespace.
`BenchmarkDoctor` analyzes performance tests and reports their conformance to a set of desired criteria. The first two rules verify the naming convention.
`BenchmarkDoctor` is invoked from `Benchmark_Driver` with the `check` argument.
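As a hedged sketch of what such a naming rule could look like (the regex and the length limit are assumptions, not the actual `BenchmarkDoctor` code):

````
import re

CAMEL_CASE = re.compile(r'^[A-Z][a-zA-Z0-9]*$')  # assumed convention

def check_naming(name, max_length=40):  # max_length is an assumption
    """Return naming-convention violations for a single benchmark."""
    problems = []
    if not CAMEL_CASE.match(name):
        problems.append("'{0}' name is not UpperCamelCase".format(name))
    if len(name) > max_length:
        problems.append("'{0}' name is over {1} characters long"
                        .format(name, max_length))
    return problems
````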
Replaced the guts of the `run_benchmarks` function with the implementation from `BenchmarkDriver`. There was only a single client, which called it with `verbose=True`, so this parameter could be safely removed.
Function `instrument_test` is replaced by running `Benchmark_O` with the `--memory` option, which implements the MAX_RSS measurement while also excluding the overhead of the benchmarking infrastructure. The incorrect computation of standard deviation for measurements of more than one independent sample was simply dropped. Bogus aggregated `Totals` statistics were removed; only the total number of executed benchmarks is now reported.
The `run` method on `BenchmarkDriver` invokes the test harness with the specified number of iterations and samples. It supports measuring memory use, and in verbose mode it also collects individual samples and monitors the system load by counting the number of voluntary and involuntary context switches.
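A sketch of how such an invocation could be assembled (flag spellings follow the options mentioned in these notes; the exact composition is illustrative):

````
import subprocess

def run(test, num_samples=None, num_iters=None,
        verbose=False, measure_memory=False, harness='Benchmark_O'):
    """Sketch: compose and execute one harness invocation."""
    cmd = [harness, test]
    if num_samples:
        cmd.append('--num-samples={0}'.format(num_samples))
    if num_iters:
        cmd.append('--num-iters={0}'.format(num_iters))
    if verbose:
        cmd.append('--verbose')
    if measure_memory:
        cmd.append('--memory')
    return subprocess.check_output(cmd, stderr=subprocess.STDOUT)
````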
Output is parsed using `LogParser` from `compare_perf_tests.py`. This makes that file a required dependency for the driver; therefore it is also copied to the bin directory during the build.
Introduced an algorithm for excluding outliers, applied after collecting all samples, using the Interquartile Range rule.
The `exclude_outliers` method uses the 1st and 3rd quartiles to compute the Interquartile Range (IQR), then uses inner fences at Q1 - 1.5*IQR and Q3 + 1.5*IQR to remove samples outside these fences.
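A minimal sketch of the rule (the quartile computation is simplified here; `PerformanceTestSamples` computes its quartiles more carefully):

````
def exclude_outliers(samples):
    """Drop samples outside the inner fences
    [Q1 - 1.5*IQR, Q3 + 1.5*IQR]."""
    s = sorted(samples)
    def quartile(q):
        # Simplified nearest-rank quartile, for illustration only.
        return s[int(q * (len(s) - 1))]
    q1, q3 = quartile(0.25), quartile(0.75)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in s if lo <= x <= hi]
````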
Based on experiments collecting hundreds and thousands of samples (`num_samples`) per test with a low iteration count (`num_iters`) and ~1 s runtime, this rule is very effective at improving the quality of the sample population. It removes short environmental fluctuations that were previously averaged into the overall result (by the adaptively determined `num_iters` targeting ~1 s runs), which inflated the reported result with these measurement errors. For some benchmarks, this technique can deliver more stable results faster than before.
This outlier filtering is employed when parsing `--verbose` test results.
* Moved the functionality to compute median, standard deviation and related statistics from `PerformanceTestResult` into `PerformanceTestSamples`.
* Fixed wrong unit in comments
Measure more of the environment during tests
In addition to measuring maximum resident set size, also extract number of voluntary and involuntary context switches from the verbose mode.
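These counters correspond to what `getrusage(2)` exposes as `ru_nvcsw` and `ru_nivcsw`. As a Python illustration of reading the same numbers for child processes (not the driver's actual parsing code):

````
import resource
import subprocess

def context_switches_of(cmd):
    """Run cmd and return (voluntary, involuntary) context switch
    deltas accumulated by child processes."""
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    subprocess.check_call(cmd)
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    return (after.ru_nvcsw - before.ru_nvcsw,
            after.ru_nivcsw - before.ru_nivcsw)
````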
LogParser doesn’t use `csv.reader` anymore.
Parsing is handled by a Finite State Machine. Each line is matched against a set of (mutually exclusive) regular expressions that represent known states. When a match is found, the corresponding parsing action is taken.
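A condensed sketch of the dispatch loop (state names and patterns here are illustrative, not the actual `LogParser` grammar):

````
import re

class ParserSketch(object):
    """FSM sketch: each known line shape triggers a parsing action."""
    def __init__(self):
        self.tests = []
        self.state_actions = [  # mutually exclusive patterns
            (re.compile(r'^Running (\w+)'), self._begin_test),
            (re.compile(r'^\s+Sample \d+,(\d+)'), self._add_sample),
        ]

    def parse(self, lines):
        for line in lines:
            for pattern, action in self.state_actions:
                match = pattern.match(line)
                if match:
                    action(match)
                    break
        return self.tests

    def _begin_test(self, match):
        self.tests.append((match.group(1), []))

    def _add_sample(self, match):
        self.tests[-1][1].append(int(match.group(1)))
````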
See https://www.martinfowler.com/bliki/StranglerApplication.html for more info on the used pattern for refactoring legacy applications.
Introduced class `BenchmarkDriver` as the beginning of a strangler application that will gradually replace the old functions. Used it instead of the `get_tests()` function in `Benchmark_Driver`.
The interaction with `Benchmark_O` is simulated through mocking. The `SubprocessMock` class records the invocations of command-line processes and responds with canned replies in the format of `Benchmark_O` output.
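In outline, such a test double looks something like this (a sketch; the real `SubprocessMock` has more conveniences):

````
class SubprocessMockSketch(object):
    """Records command-line invocations and replies with canned
    Benchmark_O-style output."""
    def __init__(self):
        self.calls = []
        self.responses = {}

    def expect(self, args, response):
        self.responses[tuple(args)] = response

    def __call__(self, args, **kwargs):
        # Injected in place of subprocess.check_output in tests.
        self.calls.append(list(args))
        return self.responses.get(tuple(args), '')
````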
Removed 3 redundant lit tests that are now covered by the unit test `test_gets_list_of_all_benchmarks_when_benchmarks_args_exist`. This saves 3 seconds of test execution. Keeping only a single integration test that verifies the plumbing is connected correctly.
The imports are a bit sketchy because `Benchmark_Driver` doesn’t have a `.py` extension, so they had to be hacked in manually. :-/
Extracted `parse_args` from `main` and added test coverage for argument parsing.
Moved the `captured_output` function to its own file.
Adding homegrown unit testing helper classes `Stub` and `Mock`.
The issue is that the unittest.mock was added in Python 3.3 and we need to run on Python 2.7. `Stub` and `Mock` were organically developed as minimal implementations to support the common testing patterns used on the original branch, but since I’m rewriting the commit history to provide an easily digestible narrative, it makes sense to introduce them here in one step as a custom unit testing library.
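In spirit they boil down to something like the following (a simplified sketch, not the exact API):

````
class Stub(object):
    """Minimal stand-in object preloaded with canned attributes."""
    def __init__(self, **canned_attributes):
        self.__dict__.update(canned_attributes)


class Mock(object):
    """Minimal call recorder: register expected calls with canned
    responses, then verify all expected calls were made."""
    def __init__(self):
        self.expected, self.calls, self.responses = [], [], {}

    def expect(self, call, response=None):
        call = tuple(call)
        self.expected.append(call)
        self.responses[call] = response

    def record_and_respond(self, call):
        call = tuple(call)
        self.calls.append(call)
        return self.responses.get(call)

    def assert_called_all_expected(self):
        missing = [c for c in self.expected if c not in self.calls]
        assert not missing, 'Missing expected calls: {0}'.format(missing)
````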
Moved result formatting methods from `PerformanceTestResult` and `ResultComparison` to `ReportFormatter`, in order to free `PerformanceTestResult` to take on more computational responsibilities in the future.
Reintroduced feature lost during `BenchmarkInfo` modernization: All registered benchmarks are ordered alphabetically and assigned an index. This number can be used as a shortcut to invoke the test instead of its full name. (Adding and removing tests from the suite will naturally reassign the indices, but they are stable for a given build.)
The `--list` parameter now prints the test *number*, *name* and *tags* separated by a delimiter.
The `--list` output format is modified from:
````
Enabled Tests,Tags
AngryPhonebook,[String, api, validation]
...
````
to this:
````
\#,Test,[Tags]
2,AngryPhonebook,[String, api, validation]
…
````
(There isn’t really a backslash before the #; git was eating the whole line without it.)
Note: Test number 1 is Ackermann, which is marked as “skip”, so it’s not listed with the default `skip-tags` value.
Fixes the issue where running tests via `Benchmark_Driver` always reported each test as number 1. Each test is run independently, therefore every invocation was “first”. Restoring the test numbers returns this to the original behavior: the number reported in the first column when executing tests is the test’s ordinal number in the Swift Benchmark Suite.
Fixed a failure in `get_tests`, which depended on the removed `Benchmark_O --run-all` option for listing all tests (not just the pre-commit set).
Fix: Restored the ability to run tests by ordinal number from `Benchmark_Driver` after the support for this was removed from `Benchmark_O`.
Added tests that verify the output format of `Benchmark_O --list` and the support for the `--skip-tags=` option, which effectively replaced the old `--run-all` option. Other tools, like `Benchmark_Driver`, depend on it.
Added integration tests for the dependency between `Benchmark_Driver` and `Benchmark_O`.
Running the pre-commit test set isn’t tested explicitly here. It would take too long, and it is run fairly frequently by the CI bots, so if that breaks, we’ll know soon enough.
To use this, one needs to first build an installable root for Swift (i.e. like
the smoke testbot does). Then use the tool `./benchmark/scripts/build_linux.py`
with the appropriate locations of the build directory and the installable
snapshot, and it will build the benchmarks. (There are more arguments; just use `--help`.)
rdar://40541972
Disable the random hash seed while benchmarking. By its nature, it makes the number of hash collisions fluctuate between runs, adding unnecessary noise to benchmark results.
I expect we'll be able to re-enable random seeding here once we have made hash collisions cheaper -- they are currently always resolved by calling the Key's Equatable implementation, which can be expensive.
This benchmark script is similar to the guard malloc/runtime runner, but it only
runs the tests. The intention is that one can use this to quickly verify, in a
multithreaded way, that all benchmarks run successfully. In contrast, the
normal driver runs only single-threaded, since it is meant to test
performance, so it is not able to take advantage of all the cores on a system.
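The core idea in a few lines of Python (the binary path and flags are assumptions):

````
import multiprocessing
import os
import subprocess

def smoke_test(name):
    """Run one benchmark for a single iteration; True means it succeeded."""
    with open(os.devnull, 'w') as devnull:
        rc = subprocess.call(['./Benchmark_O', name, '--num-iters=1'],
                             stdout=devnull, stderr=devnull)
    return name, rc == 0

def smoke_test_all(names):
    # Correctness checking is embarrassingly parallel, unlike timing runs.
    pool = multiprocessing.Pool()  # defaults to one worker per core
    return dict(pool.map(smoke_test, names))
````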
I wrote this quickly to verify some benchmark tests still worked. No point in
not sharing with everyone else.
On the bots, we have a timeout-without-output of 60 minutes for the entire test.
This should ensure that we are able to kill misbehaving tests and give a good
error instead of just getting a Jenkins timeout error.
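Conceptually, a no-output watchdog can be sketched like this (illustrative only; the real mechanism lives in the CI configuration):

````
import subprocess
import sys
import threading

def run_with_output_timeout(cmd, timeout=60 * 60):
    """Kill the child process if it stays silent for `timeout` seconds."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    watchdog = [threading.Timer(timeout, proc.kill)]
    watchdog[0].start()
    for line in iter(proc.stdout.readline, b''):
        watchdog[0].cancel()  # output seen: reset the deadline
        watchdog[0] = threading.Timer(timeout, proc.kill)
        watchdog[0].start()
        sys.stdout.write(line.decode('utf-8', 'replace'))
    watchdog[0].cancel()
    return proc.wait()
````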
For those confused, this is for the guard malloc/leaks test.
rdar://36874229
Support specifying a baseline branch to compare the current results
against. Previously, the master branch was hardcoded.
Fixes: rdar://problem/32751587