The `__future__` features we relied on are now the default; all of them have been
included [since Python 3.0](https://docs.python.org/3/library/__future__.html):
* absolute_import
* print_function
* unicode_literals
* division
These import statements are no-ops and are no longer necessary.
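As a quick illustration (a sketch, not part of the original change), these imports are accepted by Python 3 but change nothing:

```python
# In Python 3 these imports are harmless no-ops:
from __future__ import absolute_import, print_function, unicode_literals, division

# True division is already the default, with or without the import.
result = 3 / 2
```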
Timers are just too noisy to give reliable results. Previously, the
timers were not considered due to a bug in process-stats-dir that would
(almost) always output 0. This bug has been fixed in 92073c671e.
During the Python 2 to Python 3 conversion, the difference in default encodings
became apparent. Handle this by opening the file with an explicit encoding.
This prevents falling back to the `C` locale, which uses ASCII rather than
UTF-8 and fails on non-ASCII input.
* More Python3 lint fixes
Some of the issues addressed include:
* Don't use `l` as a variable name (confusable with `1` or `I`)
* `print` statement does not exist in Py3, use `print` function instead
* Implicit tuple parameter unpacking in function arguments is no longer
  supported (removed by PEP 3113); use an explicit splat `*` at the call site instead
* `xrange` does not exist in Py3, use `range` instead
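Illustrative before/after for the mechanical items above (a sketch, not the actual diff):

```python
# Py2: for i in xrange(3): print i
# Py3:
for i in range(3):
    print(i)                 # print is a function, not a statement

# Py2: map(lambda (a, b): a + b, pairs)  -- implicit tuple parameters
# Py3 (PEP 3113 removed them): unpack explicitly instead
pairs = [(1, 2), (3, 4)]
sums = [a + b for a, b in pairs]
```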
* Better name per review feedback
`zip` returns an iterator in Python 3 rather than the zipped list.
This caused the Nelder-Mead simplex to fail, as it never actually
performed the optimization of the data. Explicitly convert the data to a
list so that it can be consumed more than once.
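The failure mode can be seen directly: Python 3's `zip` is a one-shot iterator, so a second pass over it sees nothing. Materializing with `list` (a sketch of the kind of fix applied) restores list semantics:

```python
xs, ys = [1, 2, 3], [4, 5, 6]

lazy = zip(xs, ys)                 # one-shot iterator in Python 3
materialized = list(zip(xs, ys))   # explicit conversion up front

first_pass = list(lazy)
second_pass = list(lazy)           # iterator is already exhausted: empty
```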
- Outlaw duplicate input files, fix driver, fix tests, and add test.
- Reflect that no buffer is present without a (possibly pseudo) named file.
- Reflect the fact that every input has a (possibly pseudo) name.
- Break up CompilerInstance::setup.
Don't bail on dups.
This allows measuring invocations that fail with a specific exit code:
0 (the old behaviour) for success, but also, for instance, 1 for examples
that fail to typecheck.
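A hedged sketch of the behaviour (the name and interface here are hypothetical, not the tool's actual flags): treat a configurable exit code, rather than only 0, as success.

```python
import subprocess
import sys

def run_expecting(cmd, expected_exit=0):
    # An invocation "succeeds" if it exits with the expected code, so
    # examples that intentionally fail (e.g. exit 1 on a typecheck
    # error) can still be measured.
    return subprocess.run(cmd).returncode == expected_exit
```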
This, somewhat questionably, fits both the polynomial model and the
exponential model, then chooses the one with the better R^2. Whatever
its statistical validity, this works reasonably well in practice.
Slow-growing cases sometimes get classified as 1.0^n or 1.1^n, but
these are either spurious or not relevant, so thresholding similar to
that used for the polynomial fit is applied.
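The model selection can be sketched as follows (pure-Python least squares; the real implementation and its thresholds differ, and `classify_growth` is a hypothetical name). Fit y = a·x^k in log-log space and y = a·r^x in semi-log space, then keep whichever has the higher R²:

```python
import math

def _linfit(xs, ys):
    # Ordinary least squares for y = b + m*x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - m * mx, m

def _r2(ys, preds):
    mean = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - mean) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

def classify_growth(xs, ys):
    # Polynomial model y = a*x^k: linear in log-log space.
    b_p, k = _linfit([math.log(x) for x in xs], [math.log(y) for y in ys])
    poly_pred = [math.exp(b_p) * x ** k for x in xs]
    # Exponential model y = a*r^x: linear in semi-log space.
    b_e, lr = _linfit([float(x) for x in xs], [math.log(y) for y in ys])
    exp_pred = [math.exp(b_e) * math.exp(lr) ** x for x in xs]
    # Keep the model with the better R^2.
    if _r2(ys, poly_pred) >= _r2(ys, exp_pred):
        return ("polynomial", k)
    return ("exponential", math.exp(lr))
```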