Disable the random hash seed while benchmarking. Random seeding makes the number of hash collisions fluctuate between runs, adding unnecessary noise to benchmark results.
I expect we'll be able to re-enable random seeding here once we have made hash collisions cheaper -- they are currently always resolved by calling the Key's Equatable implementation, which can be expensive.
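As a sketch of what that looks like in practice (assuming the Swift standard library's SWIFT_DETERMINISTIC_HASHING environment variable; the binary path, test name, and iteration count are hypothetical):

```python
import os
import subprocess

# Sketch: run a benchmark binary with random hash seeding disabled, so the
# number of hash collisions (and thus Equatable calls) is the same every run.
env = dict(os.environ)
env["SWIFT_DETERMINISTIC_HASHING"] = "1"
subprocess.check_call(["./Benchmark_O", "DictionaryLiteral", "--num-iters=10"], env=env)
```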
This benchmark script is similar to the guard malloc/runtime runner, but it only
runs the tests. The intention is that one can use it to quickly verify, in a
multithreaded way, that all benchmarks run successfully. In contrast, the normal
driver runs single-threaded, since it is meant to measure performance, and so
cannot take advantage of all cores on a system.
I wrote this quickly to verify that some benchmark tests still worked. No point
in not sharing it with everyone else.
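A minimal sketch of such a runner, assuming the benchmark binary supports --list and --num-iters (the path, output parsing, and worker count are assumptions):

```python
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

BINARY = "./Benchmark_O"  # hypothetical path to the compiled benchmark binary

def list_tests():
    # Assumes --list prints a header line followed by one "number name ..." row per test.
    out = subprocess.check_output([BINARY, "--list"], universal_newlines=True)
    return [line.split()[1] for line in out.splitlines()[1:] if line.strip()]

def run_once(name):
    # A single iteration is enough to check that the benchmark executes at all.
    proc = subprocess.run([BINARY, name, "--num-iters=1"],
                          stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return name, proc.returncode

with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    failures = [name for name, code in pool.map(run_once, list_tests()) if code != 0]

if failures:
    print("FAILED: " + ", ".join(failures))
else:
    print("all benchmarks ran successfully")
```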
On the bots, we have a 60-minute no-output timeout for the entire test. This
should ensure that we are able to kill misbehaving tests and give a good error
instead of just getting a Jenkins timeout error.
For those confused, this is for the guard malloc/leaks test.
rdar://36874229
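One way the runner can fail cleanly ahead of the bot's limit is a per-test timeout of its own; a sketch (the limit and the exact command are assumptions):

```python
import subprocess

def run_with_timeout(cmd, limit_sec=60 * 60):
    # Kill a misbehaving test after the limit and print a readable error,
    # rather than waiting for the Jenkins no-output timeout to kill the job.
    try:
        subprocess.run(cmd, check=True, timeout=limit_sec)
        return True
    except subprocess.TimeoutExpired:
        print("error: '%s' produced no result within %d seconds and was killed"
              % (" ".join(cmd), limit_sec))
    except subprocess.CalledProcessError as err:
        print("error: '%s' exited with status %d" % (" ".join(cmd), err.returncode))
    return False
```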
Support specifying a baseline branch to compare the current results
against. Previously, the master branch was hardcoded.
Fixes: rdar://problem/32751587
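A sketch of the option; the flag name --baseline-branch is hypothetical, and only the previous 'master' default comes from the old behavior:

```python
import argparse

parser = argparse.ArgumentParser(description="compare current benchmark results to a baseline")
# Hypothetical flag name; the baseline used to be hardcoded to 'master'.
parser.add_argument("--baseline-branch", default="master",
                    help="branch whose stored results the current run is compared against")
args = parser.parse_args()
print("comparing against baseline branch: " + args.baseline_branch)
```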
Add support for running benchmarks by referring to them by their ordinal number in `Benchmark_Driver`, as is already supported by `Benchmark_O` (`Onone`, `Ounchecked`).
Updated documentation to reflect this.
SR-4780 Can not run performance tests that are not in precommit suite
Modified the driver to honor command-line arguments when listing enabled tests. Fixed the interaction between filters (positional arguments) and the --run-all option.
With the --run-all option, Benchmark_Driver lists all available benchmarks, even when benchmark names or filters are specified.
Coverage at 99% according to coverage.py
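A sketch of how digit-only positional arguments might be resolved to benchmark names by ordinal; the helper and numbering scheme are illustrative, not the driver's actual code:

```python
def resolve_benchmarks(positional_args, all_benchmarks):
    """Map ordinal numbers to names; pass through anything that isn't a number.

    `all_benchmarks` is assumed to be ordered as printed by the binary's
    --list output, so ordinal N refers to all_benchmarks[N - 1].
    """
    selected = []
    for arg in positional_args:
        if arg.isdigit():
            selected.append(all_benchmarks[int(arg) - 1])  # run by ordinal number
        else:
            selected.append(arg)  # run by name, as before
    return selected

# resolve_benchmarks(["3", "AngryPhonebook"], names) -> [names[2], "AngryPhonebook"]
```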
* `compare_perf_tests.py` now always outputs the same format to stdout as is written to the `--output` file
* Added integration test for the main() function
* Added tests for console output (and suppressed it leaking during testing)
* Fixed file name in test’s file header
compare_perf_tests.py is now covered with unit tests, and its public methods are documented in the implementation.
Minor refactoring to better conform to Python conventions:
* classes declared in new style
* private methods prefixed with a single underscore
* map replaced with list comprehensions where that was clearer
Unit tests are executed as part of validation-test.
.gitignore was modified to ignore .coverage and htmlcov artifacts generated by the coverage.py package
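For reference, the coverage figure can be reproduced with coverage.py's Python API along these lines (the test file name and source module are assumptions about the repo layout):

```python
import unittest
import coverage

cov = coverage.Coverage(source=["compare_perf_tests"])
cov.start()
suite = unittest.defaultTestLoader.discover(".", pattern="test_compare_perf_tests.py")
unittest.TextTestRunner(verbosity=1).run(suite)
cov.stop()
cov.report()       # per-file coverage on the console (99% was reported for this script)
cov.html_report()  # writes htmlcov/, one of the artifacts now ignored via .gitignore
```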
* Refactor compare_perf_tests.py
* Fix SR-4601 Report Added and Removed Benchmarks in Performance Comparison
Improved HTML styling.
* Added back support for reading concatenated Benchmark_Driver output
PerformanceTestResults can be merged, computing a new MIN and MAX and a running MEAN and SD (see the sketch below).
* Handle output from Benchmark_O again
Treat MAX_RSS as an optional column
Column names must not contain spaces for tools that auto-format the table.
The extra "(%)" was completely redundant since every value in the column
reads as a percentage.
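A sketch of how two sets of summary statistics can be merged (the class and field names are illustrative, not the script's actual API):

```python
import math

class Stats(object):
    """Summary statistics for one benchmark over `count` samples."""
    def __init__(self, count, min_v, max_v, mean, sd):
        self.count, self.min, self.max = count, min_v, max_v
        self.mean, self.sd = mean, sd

    def merge(self, other):
        """Combine two sample sets, updating MIN, MAX and the running MEAN and SD."""
        n1, n2 = self.count, other.count
        n = n1 + n2
        delta = other.mean - self.mean
        mean = self.mean + delta * n2 / n
        # Combine variances through the sums of squared deviations from the mean.
        m2 = ((self.sd ** 2) * (n1 - 1) + (other.sd ** 2) * (n2 - 1)
              + delta ** 2 * n1 * n2 / n)
        sd = math.sqrt(m2 / (n - 1)) if n > 1 else 0.0
        return Stats(n, min(self.min, other.min), max(self.max, other.max), mean, sd)
```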
Both regressions and improvements are sorted by the delta, which in the case
of improvements produces reversed order, due to the negative delta values.
This change orders improvements 'naturally': most-improved first.
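A small sketch of the difference, with illustrative names and deltas:

```python
from collections import namedtuple

Result = namedtuple("Result", "name delta")  # delta in percent; negative means improvement

improvements = [Result("A", -5.0), Result("B", -40.0), Result("C", -12.0)]
# A sort shared with regressions (largest delta first) would list A, C, B,
# i.e. least-improved first, because improvement deltas are negative.
# Sorting by the magnitude of the change restores the 'natural' order.
improvements.sort(key=lambda r: abs(r.delta), reverse=True)
print([r.name for r in improvements])  # ['B', 'C', 'A'] -- most-improved first
```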