Previously, we maintained a whitelist of allocations that were assigned
into globals during a benchmark run. Now we instead perform two measurement
runs, with the second executing one additional iteration, and subtract the
allocation counts.
This lets us get rid of the whitelist.
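A sketch of the idea with hypothetical numbers (the actual check lives in the
benchmark harness, not in this snippet): allocations assigned into globals
happen once regardless of iteration count, so they cancel out in the
difference, while per-iteration allocations scale with the extra iterations.

```python
# Hypothetical allocation counts reported for the same benchmark.
allocations_n_iters = 1205          # run with N iterations
allocations_n_plus_k_iters = 1235   # run with N + k iterations
k = 3                               # extra iterations in the second run

# One-time allocations (e.g. lazily initialized globals) appear in both
# counts and cancel out; only the per-iteration cost remains.
per_iteration = (allocations_n_plus_k_iters - allocations_n_iters) / k
print(per_iteration)                # -> 10.0 allocations per extra iteration
```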
Make sure all Python code in the repo specifies which exceptions to
catch instead of using a bare `except:`.
Going forward, regressions can be caught using `flake8-blind-except`.
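As an illustration of the change (the function below is hypothetical, not
taken from the repo), a bare `except:` also swallows `KeyboardInterrupt` and
`SystemExit`, while naming the exception keeps unrelated failures visible:

```python
import subprocess


def run_benchmark(command):
    """Run a benchmark command, returning its output or None on failure."""
    try:
        return subprocess.check_output(command)
    # Bad (flagged by flake8-blind-except):
    #     except:
    #         return None
    # Good: catch only the failure we actually expect from check_output.
    except subprocess.CalledProcessError:
        return None
```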
* E101: indentation contains mixed spaces and tabs
* E111: indentation is not a multiple of four
* E128: continuation line under-indented for visual indent
* E302: expected 2 blank lines, found 1
* W191: indentation contains tabs
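For illustration, a hypothetical snippet (not from the repo) that trips two of
the checks above, followed by a form that passes:

```python
def first():
    return 1
def second():       # E302: expected 2 blank lines, found 0
   return 2         # E111: indentation is not a multiple of four


# Cleaned up: two blank lines between top-level definitions and
# indentation in multiples of four spaces.
def third():
    return 3
```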
Benchmark_Driver used string comparison to calculate the minimum and maximum
of durations in benchmark results, sometimes leading to wildly inaccurate reports.
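A small sketch of the failure mode (the values are illustrative, not taken
from Benchmark_Driver): when durations are kept as strings, `min` and `max`
compare lexicographically, so a three-digit value sorts before a two-digit
one.

```python
durations = ["118", "97", "102"]        # microseconds, as strings

# Lexicographic comparison: "102" < "118" < "97"
print(min(durations), max(durations))   # -> 102 97

# Numeric comparison after converting to int gives the intended result.
values = [int(d) for d in durations]
print(min(values), max(values))         # -> 97 118
```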
The repo contains roughly 80 Python scripts. "snake_case" naming is used for
local variables in all of those scripts. This is the form recommended by
PEP 8, the Python Software Foundation's style guide, and is typically
associated with idiomatic Python code.
However, prior to this commit, nine of the 80 scripts contained at least one
instance of "camelCase" naming.
This commit improves consistency in the Python code base by making sure that
these nine remaining files follow the variable naming convention used for
Python code in the project.
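As a purely illustrative example (these identifiers are hypothetical and do
not come from any of the nine files), the kind of rename involved:

```python
# Before: camelCase locals
#     resultsByName = {}
#     for benchmarkName in ["Ackermann", "DictTest"]:
#         resultsByName[benchmarkName] = len(benchmarkName)

# After: snake_case locals, matching PEP 8 and the rest of the repo's scripts.
results_by_name = {}
for benchmark_name in ["Ackermann", "DictTest"]:
    results_by_name[benchmark_name] = len(benchmark_name)
print(results_by_name)
```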
References:
* PEP 8: https://www.python.org/dev/peps/pep-0008/
* pep8-naming: https://pypi.python.org/pypi/pep8-naming
When comparing scores from multiple samples or multiple runs, print the
`(?)` marker if the score ranges for the configurations being compared
overlap.
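A minimal sketch of the overlap check, assuming each configuration's scores
are summarized by their min and max (the helper names and report format here
are hypothetical, not the actual compare script's API):

```python
def ranges_overlap(old_min, old_max, new_min, new_max):
    """Return True if the [min, max] score ranges of two runs overlap."""
    return old_min <= new_max and new_min <= old_max


def format_delta(old_scores, new_scores):
    """Format the relative change, appending '(?)' when ranges overlap."""
    marker = " (?)" if ranges_overlap(
        min(old_scores), max(old_scores),
        min(new_scores), max(new_scores)) else ""
    change = (min(new_scores) - min(old_scores)) / min(old_scores)
    return "{:+.1%}{}".format(change, marker)


print(format_delta([97, 102, 118], [99, 104, 110]))   # -> '+2.1% (?)'
```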