Commit Graph

24 Commits

Author SHA1 Message Date
Daniel Duan
3dfc40898c [NFC] Remove Python 2 imports from __future__ (#42086)
The `__future__` features we relied on are now the past; the specific imports listed
below are all included [since Python 3.0](https://docs.python.org/3/library/__future__.html):

* absolute_import
* print_function
* unicode_literals
* division

These import statements are no-ops and are no longer necessary.
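A minimal sketch of the kind of change this describes (the file name is illustrative); under Python 3 the behavior is identical with or without the import:

```python
# some_benchmark_script.py (illustrative)

# Removed: redundant under Python 3, where these features are always enabled.
# from __future__ import absolute_import, division, print_function, unicode_literals

# Division, print(), and unicode literals already behave the Python 3 way.
print("3 / 2 =", 3 / 2)  # 1.5, true division without the __future__ import
```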
2022-04-13 14:01:30 -07:00
Sergej Jaskiewicz
cce9e81f0b Support Python 3 in the benchmark suite 2020-02-28 01:45:35 +03:00
Ross Bayer
b1961745e0 [Python: black] Reformatted the benchmark Python sources using utils/python_format.py. 2020-02-08 15:32:44 -08:00
Michael Gottesman
2840a7609d When gathering counters, check for instability and FAIL otherwise.
The way we already gather numbers for this test is that we perform two runs of
`Benchmark_O $TEST` with num-samples=2, iters={2,3}. Under the assumption that
the only difference in counter numbers is caused by that extra iteration,
subtracting the iters=2 counts from the iters=3 counts gives us the number of
counts in that extra iteration.

In certain cases, I have found that a small subset of the benchmarks are
producing weird output and I haven't had the time to look into why. That being
said, I do know what these weird results look like, so in this commit we do some
extra validation work to see if we need to fail a test due to instability.

The specific validation is that:

1. We perform another run with num-samples=2, iter=5 and subtract the iter=3
counts from that. Under the assumption that overall work should increase
linearly with iteration count in our benchmarks, we check that the counts are
actually 2x.

2. We check whether either `result[iter=3] - result[iter=2]` or `result[iter=5] -
result[iter=3]` is negative. None of the counters we gather should ever decrease
with iteration count. A sketch of this check follows the list.
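A hedged sketch of that validation, assuming a helper `run_counters(iters)` (not the driver's real API) that returns a dict of counter values for one num-samples=2 run at the given iteration count:

```python
def check_stability(run_counters, tolerance=0.05):
    # Gather counters for iter=2, 3, and 5 runs of the benchmark.
    c2, c3, c5 = run_counters(2), run_counters(3), run_counters(5)
    for name in c2:
        delta_23 = c3[name] - c2[name]  # counts attributed to one extra iteration
        delta_35 = c5[name] - c3[name]  # counts attributed to two extra iterations
        # None of the counters should ever decrease with iteration count.
        if delta_23 < 0 or delta_35 < 0:
            return "FAIL"
        # Work should grow linearly, so the second delta should be roughly 2x the first.
        if delta_23 and abs(delta_35 / delta_23 - 2.0) > tolerance:
            return "FAIL"
    return "PASS"
```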
2020-01-15 14:41:21 -08:00
Michael Gottesman
461f17e5b7 Change -csv flag to be --emit-csv. 2020-01-15 14:41:21 -08:00
Michael Gottesman
676411f0b0 Have dtrace aggregate rr opts and start tracking {retain,release}_n.
Otherwise, one can get results that seem to imply more rr traffic when in
reality one simply was not tracking {retain,release}_n calls that, as a result
of better optimization, become just simple retain, release calls.
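A hypothetical illustration of why the aggregation matters; the counter totals below are made up, and the probe names are assumptions:

```python
# Made-up counter totals from two hypothetical dtrace runs.
before = {"swift_retain": 100, "swift_release": 100,
          "swift_retain_n": 12, "swift_release_n": 12}
# Same workload after an optimization that turns the _n variants into
# simple retains/releases.
after = {"swift_retain": 112, "swift_release": 112,
         "swift_retain_n": 0, "swift_release_n": 0}

def rr_traffic(counts):
    # Aggregate every retain/release variant into one rr-traffic number.
    return sum(counts.values())

# Tracking only plain retain/release would report 200 -> 224, a spurious
# regression; the aggregated numbers stay at 224 in both runs.
print(rr_traffic(before), rr_traffic(after))
```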
2020-01-15 14:39:55 -08:00
Michael Gottesman
6fff30c122 [benchmark-dtrace] Enabling multiprocessing option to speed up gathering data. 2020-01-08 16:06:56 -08:00
Michael Gottesman
c7c2e6e17b [benchmark-dtrace] Fix the number of samples taken alongside the number of iters.
Otherwise, the output is not stable.
2020-01-08 16:06:56 -08:00
Michael Gottesman
d48cdd9cad [benchmark-dtrace] Set SWIFT_DETERMINISTIC_HASHING=1 before calling subjobs.
This prevents a bunch of instability in the retain, release numbers. I am still
getting some of it, but this helps a lot.
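A minimal sketch of the idea, assuming a Python driver that launches each subjob via subprocess (the binary path and flags below are illustrative):

```python
import os
import subprocess

# Copy the current environment and pin deterministic hashing before launching
# the benchmark subjob, so hash-order effects do not perturb the
# retain/release counts.
env = dict(os.environ)
env["SWIFT_DETERMINISTIC_HASHING"] = "1"
subprocess.check_call(["./Benchmark_O", "SomeBenchmark", "--num-samples=2"], env=env)
```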
2020-01-08 16:06:56 -08:00
Erik Eckstein
45a2ae48ce benchmarks: replace the Ounchecked build with an Osize build
We don't measure Ounchecked anymore. On the other hand, we want to benchmark the Osize build.
2017-10-06 14:09:43 -07:00
Hugh Bellamy
8671854674 Properly python lint remaining files 2017-03-23 14:06:46 +07:00
practicalswift
6d1ae2a39c [gardening] 2016 → 2017 2017-01-06 16:41:22 +01:00
practicalswift
797b80765f [gardening] Use the correct base URL (https://swift.org) in references to the Swift website
Remove all references to the old non-TLS enabled base URL (http://swift.org)
2016-11-20 17:36:03 +01:00
practicalswift
1edb62dc38 [Python] Make flake8 linting pass without errors/warning (w/ default rules) 2016-03-13 20:19:51 +01:00
practicalswift
d5326bfdc4 [Python] Replace global linting excludes with local line-level excludes ("noqa")
Replace the project global linting rule excludes (as defined in .pep8) with
fine-grained "# noqa" annotations.

By using "# noqa" annotations, the excludes are made on a per-line basis instead
of globally.

These annotations are used where we make deliberate deviations from the standard
linting rules.
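For example, a deliberate deviation can be silenced on the offending line only (the constant below is illustrative):

```python
# Silence the linter for this one line (a deliberately long literal) rather
# than excluding the rule project-wide in .pep8.
LONG_MESSAGE = "a deliberately long literal that would otherwise trigger the 80-column rule in flake8"  # noqa
```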

To lint the Python code in the project:

  $ flake8

To install flake8:

  $ pip install flake8

See https://flake8.readthedocs.org/en/latest/ for details.

To enable checking of the PEP-8 naming conventions, install the optional
extension pep8-naming:

  $ pip install pep8-naming

To enable checking of blind "except:" statements, install the optional
extension flake8-blind-except:

  $ pip install flake8-blind-except

To enable checking of import statement order, install the optional
extension flake8-import-order:

  $ pip install flake8-import-order
2016-03-10 16:22:48 +01:00
practicalswift
0796eaad1f [Python] Fix 80-column violations 2016-03-09 23:52:11 +01:00
practicalswift
30b66ea036 Merge pull request #1584 from practicalswift/python-3-compatible-print
[Python] Use Py3k compatible print operator: print "foo" → print("foo")
2016-03-09 08:00:49 +01:00
Brian Gesiak
c9000af795 Merge pull request #1526 from practicalswift/fix-pep8-violations-ii
[Python] Fix five classes of PEP-8 violations (E101/E111/E128/E302/W191)
2016-03-08 23:55:46 -05:00
practicalswift
0fd0c48648 [Python] Use Py3k compatible print operator: print "foo" → print("foo") 2016-03-08 23:10:52 +01:00
practicalswift
265835fdfc [Python] Use consistent import ordering for Python code
Ordering used:
1.) standard library imports
2.) third party imports
3.) local package imports

Each group is individually alphabetized.
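A small sketch of that ordering (the specific modules are illustrative):

```python
# 1.) Standard library imports, alphabetized.
import os
import subprocess

# 2.) Third party imports, alphabetized (illustrative).
import six

# 3.) Local package imports, alphabetized (illustrative).
import perf_test_driver
```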
2016-03-07 23:25:16 +01:00
practicalswift
183da818df [Python] Fix five classes of PEP-8 violations (E101/E111/E128/E302/W191)
* E101: indentation contains mixed spaces and tabs
* E111: indentation is not a multiple of four
* E128: continuation line under-indented for visual indent
* E302: expected 2 blank lines, found 1
* W191: indentation contains tabs
2016-03-07 22:36:23 +01:00
practicalswift
87bcd45c9e [Python] Fix recently introduced PEP 8 regressions.
After this commit:

$ flake8
$

(No PEP 8 warnings or errors left in the repo.)
2016-02-10 22:23:49 +01:00
Luke Larson
b606297f83 [benchmark] Correct perf_test_driver class name 2016-02-10 12:02:39 -08:00
Michael Gottesman
1c2f40e246 [perf-test] Add in the Benchmark_DTrace driver.
This and the associated *.d file can be used to determine dynamic
retain/release counts over the perf test suite.
2016-02-08 14:35:47 -08:00