* Benchmark {Float,Double,Float80}.description
This just tests how fast we can format the standard
floating-point types. It also includes a test that
uses the result, to ensure that future optimized
_creation_ of the string doesn't incidentally pessimize
_use_ of the results.
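  Not the suite's actual code, but a minimal sketch of the two variants
  described above; `blackHole` is a hypothetical stand-in for whatever
  opaque sink the harness uses to keep results alive:

  ```swift
  @inline(never)
  func blackHole<T>(_ value: T) {
      // Stand-in for the harness's opaque sink; @inline(never) keeps the
      // optimizer from discarding the formatted string.
      _ = value
  }

  // Formatting only: how fast can Double.description produce a string?
  func runDoubleDescription(_ n: Int) {
      var d = 1.1
      for _ in 0..<n {
          blackHole(d.description)
          d += 0.0001
      }
  }

  // Formatting plus use: consume the string (here, its character count)
  // so a faster *creation* path can't hide a pessimized *use* of the result.
  func runDoubleDescriptionUsed(_ n: Int) {
      var total = 0
      var d = 1.1
      for _ in 0..<n {
          total += d.description.count
          d += 0.0001
      }
      blackHole(total)
  }
  ```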
* Add benchmarks for values close to 1 + other review suggestions
Dictionary and Set currently exhibit O(n^2) behavior for certain operations involving copying elements in bulk. Add benchmarks to verify an upcoming fix and to catch regressions later.
https://bugs.swift.org/browse/SR-3268
The first is copied from the contribution in
https://github.com/apple/swift/pull/13930 (with a minor bug fix
applied). The second is an adaptation that tries to avoid creating
copies by operating on indices directly.
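  The real benchmarks live in that PR; purely as a rough illustration of
  the two shapes (the filtering criterion and names here are made up),
  a copy-based variant versus an index-based one might look like:

  ```swift
  // Copy-based variant: builds an intermediate Array before forming the
  // result Set, mirroring the "creates copies" style.
  func evenElementsByCopying(_ input: Set<Int>) -> Set<Int> {
      return Set(input.filter { $0 % 2 == 0 })
  }

  // Index-based variant: walks the Set's own indices and inserts matching
  // elements directly, avoiding the intermediate copy.
  func evenElementsByIndices(_ input: Set<Int>) -> Set<Int> {
      var result = Set<Int>(minimumCapacity: input.count)
      var i = input.startIndex
      while i != input.endIndex {
          let element = input[i]
          if element % 2 == 0 {
              result.insert(element)
          }
          i = input.index(after: i)
      }
      return result
  }
  ```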
The UnsafePointer implementation contains the following note:
> Note: The following family of operator overloads are redundant with
> Strideable. However, optimizer improvements are needed before they can
> be removed without affecting performance.
... but it looks like there is no benchmark to support this claim.
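  As a starting point, a benchmark along these lines could compare the
  dedicated pointer `+` overload against the generic Strideable entry
  point over the same buffer. This is a hypothetical sketch, not an
  existing benchmark in the suite:

  ```swift
  // Sums a buffer by advancing the pointer with the dedicated `+` overload.
  @inline(never)
  func sumViaPointerOperator(_ base: UnsafePointer<Int>, count: Int) -> Int {
      var total = 0
      for i in 0..<count {
          total += (base + i).pointee
      }
      return total
  }

  // Same traversal through Strideable's advanced(by:), the generic path
  // the note says the dedicated overloads are redundant with.
  @inline(never)
  func sumViaStrideable(_ base: UnsafePointer<Int>, count: Int) -> Int {
      var total = 0
      for i in 0..<count {
          total += base.advanced(by: i).pointee
      }
      return total
  }

  let values = Array(1...1_000)
  let resultsMatch = values.withUnsafeBufferPointer { buf in
      sumViaPointerOperator(buf.baseAddress!, count: buf.count)
          == sumViaStrideable(buf.baseAddress!, count: buf.count)
  }
  ```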
The idea is that we should be able to decide which benchmarks to run
solely based on tags.
`--tag` lets you list all the tags a benchmark must have in order to
be run; `--skip-tags` lets you skip benchmarks that have any of the
listed tags.
By default, the skip-tags list contains .unstable and .String, which
results in the same subset of benchmarks as before.
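A minimal sketch of that selection rule, using illustrative names (not
the driver's real types or flag parsing):

```swift
// Illustrative tag and benchmark-description types, not the driver's own.
enum BenchTag: Hashable {
    case validation, api, abstraction, unstable, String
}

struct BenchDescription {
    let name: String
    let tags: Set<BenchTag>
}

// A benchmark is selected when it carries every required tag (--tag) and
// none of the skip tags (--skip-tags). The default skip list reproduces
// the old behavior of excluding unstable and String benchmarks.
func shouldRun(_ benchmark: BenchDescription,
               requiredTags: Set<BenchTag>,
               skipTags: Set<BenchTag> = [.unstable, .String]) -> Bool {
    return requiredTags.isSubset(of: benchmark.tags)
        && benchmark.tags.isDisjoint(with: skipTags)
}
```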
*NOTE* We always prefer a registered benchmark if we have one.
I am going to use BenchmarkInfo to solve the "create data for the benchmark
while we are already timing" problem. I am going to add a field to
BenchmarkInfo that, if non-nil, is called before we start measuring time.
This closure can be used to initialize any global data structures, etc.
But to do this, I need to be able to combine the registered and the legacy,
non-registered benchmarks.
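The field name below is only a guess at what that closure might be
called; this is a sketch of the idea, not the final BenchmarkInfo API:

```swift
struct BenchmarkInfo {
    let name: String
    let runFunction: (Int) -> Void
    // Hypothetical field: if non-nil, the harness calls it once before
    // it starts timing, so setup work stays out of the measured region.
    let setUpFunction: (() -> Void)?
}

// Global data the benchmark operates on, built outside the timed loop.
var inputNumbers: [Int] = []

let SumArray = BenchmarkInfo(
    name: "SumArray",
    runFunction: { n in
        var total = 0
        for _ in 0..<n {
            total += inputNumbers.reduce(0, +)
        }
        _ = total
    },
    setUpFunction: {
        // Runs before measurement starts; initializes the global data.
        inputNumbers = Array(0..<10_000)
    }
)
```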
Only use the .abstraction tag for benchmarks that are primarily stressing the
optimization of a specific kind of abstraction. Then I can use these as CPU
benchmarks.
Previously this tag was applied to any benchmark that happened to involve
classes or protocols. When a compiler engineer changes something that might
affect codegen in these cases, they should simply benchmark the entire stable
suite. There's no value in having a separate set of 97 "abstraction" benchmarks
that are primarily testing something else, like a stdlib API.