The `run` method on `BenchmarkDriver` invokes the test harness with the specified number of iterations and samples. It supports measuring memory use, and in verbose mode it also collects the individual samples and monitors system load by counting the number of voluntary and involuntary context switches.
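In outline, `run` works roughly like the following sketch. This is a simplified illustration rather than the driver's actual implementation: the constructor argument and method signature are hypothetical, and the flags shown (`--num-samples`, `--num-iters`, `--verbose`, `--memory`) follow the Swift benchmark harness's command-line conventions.

```python
import subprocess

class BenchmarkDriver(object):
    def __init__(self, test_harness):
        self.test_harness = test_harness  # path to the benchmark binary

    def run(self, test, num_samples=1, num_iters=1,
            verbose=False, measure_memory=False):
        """Invoke the harness for one test and return its raw output."""
        cmd = [self.test_harness, test,
               "--num-samples={0}".format(num_samples),
               "--num-iters={0}".format(num_iters)]
        if verbose:
            cmd.append("--verbose")  # log individual samples as they come in
        if measure_memory:
            cmd.append("--memory")   # report peak memory use
        return subprocess.check_output(cmd, universal_newlines=True)
```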
The output is parsed using `LogParser` from `compare_perf_tests.py`. This makes that file a required dependency of the driver, so it is also copied to the `bin` directory during the build.
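A minimal sketch of that dependency, assuming `LogParser`'s `results_from_string` entry point and result objects that expose summary statistics such as `min` and `max`:

```python
from compare_perf_tests import LogParser

# `harness_output` is the raw text captured from the benchmark binary.
results = LogParser.results_from_string(harness_output)
for name, result in results.items():
    # Each value summarizes the collected samples for one test.
    print(name, result.min, result.max)
```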
This benchmark script is similar to the guard malloc/runtime runner, but it only
runs the tests. The intention is that it can be used to quickly verify, in a
multithreaded fashion, that all benchmarks run successfully (see the sketch
below). In contrast, the normal driver runs single-threaded, because it is
meant to measure performance and therefore cannot take advantage of all the
cores on a system.
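A minimal sketch of such a parallel smoke run, using a process pool with one worker per core. The binary path and flags here are placeholders, not the script's actual interface:

```python
import multiprocessing
import os
import subprocess

BENCHMARK_BINARY = "./Benchmark_O"  # hypothetical path to the test harness

def run_test(name):
    """Run one benchmark for a single iteration; return (name, exit code)."""
    with open(os.devnull, "w") as devnull:
        status = subprocess.call(
            [BENCHMARK_BINARY, name, "--num-iters=1", "--num-samples=1"],
            stdout=devnull, stderr=devnull)
    return (name, status)

def smoke_run(test_names):
    """Run every benchmark in parallel; return the names that failed."""
    pool = multiprocessing.Pool()  # one worker per core by default
    results = pool.map(run_test, test_names)
    pool.close()
    pool.join()
    return [name for name, status in results if status != 0]
```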
I wrote this quickly to verify that some benchmark tests still worked. There's
no point in not sharing it with everyone else.