* Fix: flush stdout before crashing, to enable testing.
* Added tests that verify error reporting when command line argument parsing fails.
Measure and report system environment indicators during benchmark execution:
* Memory usage, as the maximum resident set size (MAX_RSS) in bytes
Proxy indicators of system load level:
* Number of Involuntary Context Switches (ICS)
* Number of Voluntary Context Switches (VCS)
MAX_RSS delta is always reported in the last column of the log report.
The `--verbose` mode additionally reports the full values measured before and after the benchmark execution, as well as their deltas, for MAX_RSS, ICS and VCS.
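A minimal sketch of how these indicators can be read, assuming the standard `getrusage(2)` interface; the type and variable names here are illustrative, not the actual driver code:

````swift
#if canImport(Darwin)
import Darwin
#else
import Glibc
#endif

/// Snapshot of the environment indicators read via getrusage(2).
struct EnvironmentSnapshot {
    let maxRSS: Int          // maximum resident set size
    let involuntaryCS: Int   // involuntary context switches (ICS)
    let voluntaryCS: Int     // voluntary context switches (VCS)

    init() {
        var usage = rusage()
        getrusage(RUSAGE_SELF, &usage)
        // Note: ru_maxrss is reported in bytes on Darwin but in kilobytes
        // on Linux; a real implementation would normalize the units.
        maxRSS = Int(usage.ru_maxrss)
        involuntaryCS = Int(usage.ru_nivcsw)
        voluntaryCS = Int(usage.ru_nvcsw)
    }
}

// Take a snapshot before and after the benchmark and report the deltas.
let before = EnvironmentSnapshot()
// ... run the benchmark ...
let after = EnvironmentSnapshot()
let maxRSSDelta = after.maxRSS - before.maxRSS
let icsDelta = after.involuntaryCS - before.involuntaryCS
let vcsDelta = after.voluntaryCS - before.voluntaryCS
print("MAX_RSS delta: \(maxRSSDelta), ICS delta: \(icsDelta), VCS delta: \(vcsDelta)")
````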
The indices (test numbers) are strings on both ends of the IO: in user input as well as when printed to the console. We can spare a few conversions and just store them directly as `String`.
Moved the formatting of `BenchmarkResults` into the `runBenchmarks` function, which already contained the logging of the header and the special case of an unsupported benchmark.
Report only the total number of executed tests.
Aggregating MIN, MAX and MEAN values for all executed benchmarks together (with microsecond precision!) has no statistical relevance.
Reintroduced feature lost during `BenchmarkInfo` modernization: All registered benchmarks are ordered alphabetically and assigned an index. This number can be used as a shortcut to invoke the test instead of its full name. (Adding and removing tests from the suite will naturally reassign the indices, but they are stable for a given build.)
The `--list` parameter now prints the test *number*, *name* and *tags* separated by a delimiter.
The `--list` output format is modified from:
````
Enabled Tests,Tags
AngryPhonebook,[String, api, validation]
...
````
to this:
````
#,Test,[Tags]
2,AngryPhonebook,[String, api, validation]
…
````
Note: Test number 1 is Ackermann, which is marked as “skip”, so it is not listed with the default `--skip-tags` value.
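A minimal sketch of the alphabetical ordering and index assignment described above, using a simplified stand-in for the real `BenchmarkInfo` and a hypothetical `registeredBenchmarks` registry:

````swift
/// Simplified stand-in for the real BenchmarkInfo.
struct BenchmarkInfo {
    let name: String
    let tags: [String]
}

// Hypothetical registry; the real suite collects these from the
// individual benchmark modules.
let registeredBenchmarks: [BenchmarkInfo] = [
    BenchmarkInfo(name: "Ackermann", tags: ["skip"]),
    BenchmarkInfo(name: "AngryPhonebook", tags: ["String", "api", "validation"]),
]

// Sort alphabetically and assign 1-based indices. The indices are stable
// for a given build, but shift when benchmarks are added or removed.
let indexedBenchmarks = registeredBenchmarks
    .sorted { $0.name < $1.name }
    .enumerated()
    .map { (index: $0.offset + 1, info: $0.element) }

// `--list` output: number, name and tags separated by a delimiter.
print("#,Test,[Tags]")
for (index, info) in indexedBenchmarks {
    print("\(index),\(info.name),[\(info.tags.joined(separator: ", "))]")
}
````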
Fixes the issue where running tests via `Benchmark_Driver` always reported each test as number 1: each test is run independently, therefore every invocation was “first”. Restoring the test numbers returns this to the original behavior: the number reported in the first column when executing a test is its ordinal number in the Swift Benchmark Suite.
When listing benchmarks with the `--list` parameter, present the tags in a format that is actually accepted by the `--tags` and `--skip-tags` parameters.
Changes the `--list` output from
````
Enabled Tests,Tags
AngryPhonebook,[TestsUtils.BenchmarkCategory.validation, TestsUtils.BenchmarkCategory.api, TestsUtils.BenchmarkCategory.String]
...
````
into
````
Enabled Tests,Tags
AngryPhonebook,[String, api, validation]
…
````
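One way to produce the short form accepted by `--tags` and `--skip-tags` is to describe each tag individually instead of relying on the collection's default rendering; a sketch, assuming `BenchmarkCategory` is a plain enum (the cases shown are a simplified subset):

````swift
// Simplified subset of the BenchmarkCategory tags.
enum BenchmarkCategory {
    case String, api, validation
}

let tags: Set<BenchmarkCategory> = [.String, .api, .validation]

// Printing the collection directly falls back to the elements' debug
// descriptions, which are fully qualified (in the suite, e.g.
// TestsUtils.BenchmarkCategory.validation). Describing each case
// individually and sorting yields the short form that the tag
// parameters accept.
let formatted = "[" +
    tags.map { String(describing: $0) }.sorted().joined(separator: ", ") +
    "]"
print(formatted)  // [String, api, validation]
````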
This means that we can now edit benchmarks in Xcode! Keep in mind:
1. This is not an official build. It is just so we can use Xcode to edit files
and get IDE features.
2. I had to do a little hackery to keep the build the way it is today where all
single-source files are their own modules.
3. As long as we do not change the directory structure, everything should just
update and work since I added a little code that dynamically adds the tests.
Also, to do this I had to rename multi-source/PrimsSplit/main.swift =>
Prims_main.swift. That is because main.swift is special: it is treated as the
file containing the program's top-level code and entry point, and I hit linker
errors because of it. Simply renaming main.swift => Prims_main.swift makes
everything build. I am going to file a separate bug for that.
Today, one cannot completely disable a benchmark depending on the platform
without changing the source of main.swift. We would like to be able to disable
benchmarks locally, in a benchmark's own file, without needing to modify the
rest of the infrastructure. The closest one can get to such behavior is to
conditionally compile out the file locally, but the test will still be run.
This commit adds support for not running a benchmark on specific platforms.
This, in combination with conditional compilation of benchmark bodies, allows
us to avoid commenting out modules in main.swift or conditionally compiling
the test info.
rdar://40541972
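A sketch of what this can look like inside a single benchmark file. The conditional compilation is standard Swift; the `unsupportedPlatforms` parameter and the `.linux` value are assumptions about how `BenchmarkInfo` exposes the new capability, not the confirmed API:

````swift
import TestsUtils

// Compile the benchmark body only where it is supported; elsewhere a stub
// keeps the module building without touching main.swift.
#if !os(Linux)
public func run_MyDarwinOnlyBenchmark(_ N: Int) {
    for _ in 1...N {
        // ... actual benchmark body ...
    }
}
#else
public func run_MyDarwinOnlyBenchmark(_ N: Int) {}
#endif

// Assumed shape of the new capability: the benchmark declares the platforms
// it should not run on, and the driver skips it there. The parameter name
// `unsupportedPlatforms` and the `.linux` value are assumptions.
public let MyDarwinOnlyBenchmark = BenchmarkInfo(
    name: "MyDarwinOnlyBenchmark",
    runFunction: run_MyDarwinOnlyBenchmark,
    tags: [.validation],
    unsupportedPlatforms: [.linux])
````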
* Use the `__has_include` and `GRND_RANDOM` macros
* Use `getentropy` instead of `getrandom`
* Use `std::min` from the <algorithm> header
* Move `#if` out of the `_stdlib_random` function
* Use `getrandom` with "/dev/urandom" fallback
* Use `#pragma comment` to import "Bcrypt.lib"
* <https://docs.microsoft.com/en-us/cpp/preprocessor/comment-c-cpp>
* <https://clang.llvm.org/docs/UsersManual.html#microsoft-extensions>
* Use "/dev/urandom" instead of `SecRandomCopyBytes`
* Use `swift::StaticMutex` for shared "/dev/urandom"
* Add `getrandom_available`; use `O_CLOEXEC` flag
Add platform impl docs
Update copyrights
Fix docs
Add _stdlib_random test
Update _stdlib_random test
Add missing &
Notice about _stdlib_random
Fix docs
Guard on upperBound = 0
Test full range of 8 bit integers
Remove some gyb
Clean up integerRangeTest
Remove FixedWidthInteger constraint
Use arc4random universally
Fix randomElement
Constrain shuffle to RandomAccessCollection
Warning instead of error
Move Apple's implementation
Fix failing test on 32 bit systems
* [stdlib] Revise documentation for new random APIs
* [stdlib] Fix constraints on random integer generation
* [test] Isolate failing Random test
* [benchmark] Add benchmarks for new random APIs
Fix Float80 test
Value type generators
random -> randomElement
Fix some docs
One more doc fix
Doc fixes & bool fix
Use computed over explicit
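For reference, a small tour of the resulting API surface in Swift 4.2; the `SplitMix64` generator below is a hypothetical example used only to illustrate value-type generators, not a stdlib type:

````swift
// Convenience entry points backed by the default system generator.
let die = Int.random(in: 1...6)
let fraction = Double.random(in: 0..<1)
let coin = Bool.random()

// Collection APIs: random -> randomElement, plus shuffle()/shuffled().
var numbers = [10, 20, 30, 40]
let picked = numbers.randomElement()   // Optional; nil for an empty collection
numbers.shuffle()                      // in place; constrained to RandomAccessCollection
let reordered = numbers.shuffled()     // returns a new array

// Custom generators are value types conforming to RandomNumberGenerator.
// SplitMix64 is a hypothetical example, not a stdlib type.
struct SplitMix64: RandomNumberGenerator {
    private var state: UInt64
    init(seed: UInt64) { state = seed }

    mutating func next() -> UInt64 {
        state &+= 0x9E37_79B9_7F4A_7C15
        var z = state
        z = (z ^ (z >> 30)) &* 0xBF58_476D_1CE4_E5B9
        z = (z ^ (z >> 27)) &* 0x94D0_49BB_1331_11EB
        return z ^ (z >> 31)
    }
}

var generator = SplitMix64(seed: 42)
let seeded = Int.random(in: 1...100, using: &generator)
print(die, fraction, coin, picked ?? -1, reordered, seeded)
````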
Add substring-view-oriented array append benchmarks. Put a
getString/getSubstring call into the innermost loop to keep constant folding
from reducing the benchmark to a triviality.
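A sketch of the benchmark shape being described, with `getString`/`getSubstring` as the opaque helpers that keep the optimizer from folding the work away; the benchmark name here is illustrative:

````swift
// Opaque helpers (@inline(never)) keep the optimizer from folding the
// substring work away.
@inline(never)
func getString(_ s: String) -> String { return s }

@inline(never)
func getSubstring(_ s: Substring) -> Substring { return s }

// Hypothetical benchmark name, shown only to illustrate the shape.
public func run_ArrayAppendSubstring(_ N: Int) {
    for _ in 1...N {
        var views: [Substring] = []
        for _ in 1...1_000 {
            // Re-derive the string and substring in the innermost loop so
            // the append cannot be constant-folded into a triviality.
            let s = getString("a moderately long input string")
            views.append(getSubstring(s.dropFirst(2)))
        }
    }
}
````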
* Benchmark {Float,Double,Float80}.description
This just tests how fast we can format the standard
floating-point types. It also includes a test that
uses the result, to ensure that future optimized
_creation_ of the string doesn't incidentally pessimize
_use_ of the results.
* Add benchmarks for values close to 1 + other review suggestions
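A sketch of the flavor that consumes the formatted strings, so that a faster `description` cannot pessimize later use of the result; the names and the `consume` sink are illustrative, not the suite's actual helpers:

````swift
// Hypothetical sink; the real suite has its own helpers for consuming
// benchmark results without the optimizer removing the work.
@inline(never)
func consume(_ s: String) -> Int { return s.utf8.count }

public func run_DoubleDescription(_ N: Int) {
    var total = 0
    for _ in 1...N {
        for i in 1...1_000 {
            let value = 1.0 + Double(i) * 1e-6    // values close to 1
            let text = value.description          // the formatting under test
            total &+= consume(text)               // actually use the result
        }
    }
    _ = total
}
````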
Dictionary and Set currently exhibit O(n^2) behavior for certain operations involving copying elements in bulk. Add benchmarks to verify an upcoming fix and to catch regressions later.
https://bugs.swift.org/browse/SR-3268
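A sketch of the kind of bulk-copy work such benchmarks can exercise; which operations the added benchmarks actually cover is an assumption here, and the names are illustrative:

````swift
// Source containers built once, outside the measured loop.
let size = 10_000
let sourceDict = Dictionary(uniqueKeysWithValues: (0..<size).map { ($0, $0) })
let sourceSet = Set(0..<size)

public func run_DictionaryBulkCopy(_ N: Int) {
    for _ in 1...N {
        // Copying every element into a fresh dictionary should stay O(n).
        var copy = [Int: Int](minimumCapacity: size)
        for (key, value) in sourceDict {
            copy[key] = value
        }
        precondition(copy.count == size)
    }
}

public func run_SetBulkCopy(_ N: Int) {
    for _ in 1...N {
        var copy = Set<Int>(minimumCapacity: size)
        for element in sourceSet {
            copy.insert(element)
        }
        precondition(copy.count == size)
    }
}
````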