The `getFalse` utility function was not excluded from cross-module optimization, which led the optimizer to completely eliminate the main loop body.
Also, the passed `N` was shadowed by a local `n`.
Function bodies of `blackHole`, `identity`, etc. must not be visible in the benchmark modules.
Enabling CMO by default broke this; since then, those functions need to be explicitly excluded from cross-module optimization.
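As a hedged illustration of why the bodies must stay opaque (the `getFalse`/`blackHole` names are the suite's helpers mentioned above; the loop itself is a made-up example), consider a loop that survives only as long as `getFalse` cannot be folded to a constant:

````
// Sketch: if the optimizer can see across the module boundary that
// getFalse() always returns false, the `if` and with it the whole loop
// body can be folded away, and the benchmark measures nothing.
@inline(never)
public func run_LoopExample(_ N: Int) {
  var count = 0
  for _ in 1...N {
    if getFalse() { fatalError("never taken") }
    count &+= 1
  }
  blackHole(count)  // keeps `count` observable so the loop is not dead code
}
````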
* Add differentiation benchmarks.
* Make install name of _Differentiation be @rpath/libswift_Differentiation.dylib.
Co-authored-by: Marc Rasi <marcrasi@google.com>
This currently copies the array each time it swaps elements, which makes it 1500x slower than it should be to sort the array. The benchmark now runs in 15ms but should be around 10us when fully optimized.
This algorithm is an interesting optimization problem involving array optimizations, uniqueness, bounds checks, and exclusivity. But the general first-order problem is how to modify a CoW data structure that's stored in a class property. As it stands, the property getter retains the class property around the modify access that checks uniqueness.
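A minimal sketch of that first-order pattern, with hypothetical names (not the actual benchmark's code): a CoW array stored in a class property, mutated in place through the property.

````
final class Dataset {
  // CoW value stored in a class property; mutations go through the
  // property's accessors.
  var values: [Int] = Array((0..<1_000).reversed())
}

func sortInPlace(_ d: Dataset) {
  let n = d.values.count
  for i in 0..<n {
    for j in 0..<(n - i - 1) where d.values[j] > d.values[j + 1] {
      // If an extra retain of the array is still live around this
      // mutation, the uniqueness check inside the array fails and the
      // whole buffer is copied for a single swap.
      d.values.swapAt(j, j + 1)
    }
  }
}
````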
These should be audited since some might not actually need to be
@inlinable, but for now:
- Anything public and @inline(__always) is now also @inlinable
- Anything @usableFromInline and @inline(__always) is now @inlinable
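For illustration only (hypothetical declarations, not the actual audited functions), the two cases above look like this:

````
// 1. public + @inline(__always) now also carries @inlinable
@inlinable
@inline(__always)
public func doubled(_ x: Int) -> Int { return x &* 2 }

// 2. formerly `@usableFromInline @inline(__always)`; now @inlinable
//    (which implies usable-from-inline) on the internal declaration
@inlinable
@inline(__always)
internal func halved(_ x: Int) -> Int { return x / 2 }
````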
This adds an optional `legacyFactor` to the `BenchmarkInfo`, which allows for linear modification of constants that unnecessarily inflate the base workload of benchmarks, while maintaining the continuity of long-term benchmark tracking.
For example, if a benchmark uses `for _ in 1...N*10_000` in its run function, we could lower this to `for _ in 1...N*1_000` and add a `legacyFactor: 10` to its `BenchmarkInfo`.
Note that this doesn’t affect the real measurements gathered from the `--verbose` output. The `BenchmarkDoctor` has been slightly adjusted to work with these real samples, so `Benchmark_Driver check` will not flag these benchmarks for the slow runtime reported in the summary if their real runtimes fall into the recommended range.
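A hedged sketch of what such a change looks like (hypothetical benchmark; the `BenchmarkInfo` field names besides `legacyFactor` are assumed here):

````
// The loop constant was lowered 10x; legacyFactor: 10 scales the reported
// results so they stay comparable with the benchmark's history.
public let LegacyExample = BenchmarkInfo(
  name: "LegacyExample",
  runFunction: run_LegacyExample,
  tags: [.validation],
  legacyFactor: 10)

@inline(never)
public func run_LegacyExample(_ N: Int) {
  var sum = 0
  for i in 1...N*1_000 {   // was 1...N*10_000 before the change
    sum = sum &+ i
  }
  blackHole(sum)
}
````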
When listing benchmarks with the `--list` parameter, present the tags in the format that is actually accepted by the `--tags` and `--skip-tags` parameters.
Changes the `--list` output from
````
Enabled Tests,Tags
AngryPhonebook,[TestsUtils.BenchmarkCategory.validation, TestsUtils.BenchmarkCategory.api, TestsUtils.BenchmarkCategory.String]
...
````
into
````
Enabled Tests,Tags
AngryPhonebook,[String, api, validation]
…
````
Today, one cannot completely disable a benchmark for a particular platform
without changing the source of main.swift. We would like to be able to disable
benchmarks locally in a benchmark's file without needing to modify the rest of
the infrastructure. The closest one can get to such behavior is to conditionally
compile out the file locally, but the test still runs.
This commit adds support for not running a benchmark on specific platforms.
This, in combination with conditional compilation of benchmark bodies, lets us
avoid commenting out modules in main.swift or conditionally compiling the test
info.
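A sketch of how this might look from a benchmark author's side; the `unsupportedPlatforms` field name and its values are assumptions here, while the `#if` part is plain Swift conditional compilation:

````
// Assumed BenchmarkInfo field: keep the benchmark registered, but never
// run it on the listed platforms.
public let DarwinOnlyExample = BenchmarkInfo(
  name: "DarwinOnlyExample",
  runFunction: run_DarwinOnlyExample,
  tags: [.validation],
  unsupportedPlatforms: [.linux])   // assumed API, see note above

public func run_DarwinOnlyExample(_ N: Int) {
#if canImport(Darwin)
  // Darwin-only workload, conditionally compiled out elsewhere.
  var x = 0
  for i in 1...N { x &+= i }
  blackHole(x)
#endif
}
````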
rdar://40541972
Add substring-view-oriented array append benchmarks. Put a
getString/getSubstring call into the innermost loop to prevent the compiler
from constant folding the work down to triviality.
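A rough sketch of the shape of such a benchmark, assuming `getString`/`getSubstring` are opaque identity wrappers over `String`/`Substring` (everything else here is hypothetical):

````
@inline(never)
public func run_SubstringAppend(_ N: Int) {
  var result: [Substring] = []
  for _ in 1...N {
    result.removeAll(keepingCapacity: true)
    for _ in 1...1_000 {
      // Re-derive the source string and substring inside the innermost
      // loop so the appended value cannot be constant folded.
      let base = getString("a moderately long source string")
      result.append(getSubstring(base.dropFirst(2)))
    }
  }
  blackHole(result.count)
}
````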
The key thing here is that by providing one of these closures, a benchmark can
move the initialization/deinitialization of its internal data structures
outside of the timed interval.
The intention is that, as we annotate tests with BenchmarkInfo, this will
provide the framework for moving initialization work out of the measured
benchmark bodies.
It will also allow more complex benchmarks to be written, such as ones that
perform bulk reads from a pipe (my interest in this).
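A minimal sketch (hypothetical benchmark and data; the `setUpFunction`/`tearDownFunction` spellings are assumed here) of using these closures to keep setup and teardown out of the measured interval:

````
// Shared state initialized before timing starts and released afterwards.
var workload: [Int] = []

public let SetUpExample = BenchmarkInfo(
  name: "SetUpExample",
  runFunction: run_SetUpExample,
  tags: [.validation],
  setUpFunction: { workload = Array(0..<100_000).shuffled() },
  tearDownFunction: { workload = [] })

@inline(never)
public func run_SetUpExample(_ N: Int) {
  for _ in 1...N {
    blackHole(workload.reduce(0, &+))   // only this loop is timed
  }
}
````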
*NOTE* We always prefer a registered benchmark if we have one.
I am going to use BenchmarkInfo to solve the "create data for the benchmark while
we are already timing" problem. I am going to add a field to BenchmarkInfo that,
if it is non-nil, is called before we start measuring time. This closure can be
used to initialize any global data structures, etc.
But to do this, I need to be able to combine the registered and legacy,
non-registered benchmarks.
In order to minimize the impact of results checking on test performance, this removes the @autoclosure for the error message.
Added a new version of `CheckResults` that takes only `resultsMatch: Bool`; the rest of the parameters are defaulted `StaticString`s for the method and file name, plus the line number. The old method was deprecated, but left in place as a tool for debugging failing checks. All tests were moved to use the new method.
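A sketch of the intended usage after this change (hypothetical benchmark; only the boolean comparison is passed, with method/file/line defaulted inside `CheckResults`):

````
@inline(never)
public func run_ChecksExample(_ N: Int) {
  let count = N * 1_000
  var total = 0
  for i in 1...count {
    total &+= i
  }
  // New form: just the Bool result of the comparison; no @autoclosure'd
  // message is constructed on the hot path.
  CheckResults(total == count * (count + 1) / 2)
}
````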