On Darwin (and most other OSes), subnormal support is disabled for armv7, which causes these tests to fail. Longer term, instead of gating this on architecture, we may want a way to query whether subnormals are supported for a given BinaryFloatingPoint type, and then we could key off of that instead.
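Such a query could be approximated with a run-time probe along these lines (a sketch, not an existing stdlib API):
```swift
// A minimal run-time probe: subnormals are supported iff halving the
// smallest normal value doesn't flush to zero. Not an existing stdlib API.
func supportsSubnormals<T: BinaryFloatingPoint>(_ type: T.Type) -> Bool {
    let candidate = T.leastNormalMagnitude / 2  // subnormal, unless flushed
    return candidate != 0 && candidate.isSubnormal
}

// e.g. skip subnormal-formatting tests when the hardware flushes to zero:
if !supportsSubnormals(Float.self) { /* skip */ }
```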
StdlibUnittest uses gyb to avoid duplicating many source-context arguments. However, this means that any test that wishes to add new expect helpers also has to be gybbed. Given that this structure hasn't changed in years, and that we should eventually have real language support for this, de-gyb it.
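For reference, the de-gybbed pattern spells out the caller's source context as default arguments, roughly like this (simplified; the real StdlibUnittest helpers thread richer location state through):
```swift
// Simplified sketch of an expect helper forwarding source context.
func expectEqual<T: Equatable>(
    _ expected: T, _ actual: T,
    file: String = #file, line: UInt = #line
) {
    if expected != actual {
        print("check failed at \(file):\(line): expected \(expected), got \(actual)")
    }
}
```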
Due to SR-7124, some float literals are handled inconsistently across architectures. That's a problem for a test that is trying to verify that certain exact floating-point values are formatted in specific ways.
Fortunately, the run-time float parsing does not have this
problem, so I've changed this test to make sparing use of
run-time parsing (`Float("1.234")`) instead of compile-time
parsing (`1.234 as Float`).
This is admittedly slower, but the test isn't particularly performance-critical, and I don't want to use a hex float literal here, since I feel that would make the test case harder to understand.
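The resulting test shape is roughly (a sketch, using `precondition` in place of the real StdlibUnittest helpers):
```swift
// Run-time parsing sidesteps SR-7124, so every architecture sees the
// same Float value before formatting.
let value = Float("1.234")!  // run-time parse, not a compile-time literal
precondition(value.description == "1.234")
```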
NaNs are mangled in various ways on various platforms. Some of the fun possibilities:
* sign bits can be cleared
* payloads can get zeroed
* signaling NaNs can get forced to quiet
* merely passing a value into a function can trigger these behaviors
* trivial changes to the generated assembly can change this
This makes it very hard to validate the printing of NaN
details, since we can't be certain what value the formatter
actually saw.
So now I only do exact checks of the NaN formatter on x86_64, which has well-understood NaN behavior. On all other platforms, I just check that the debugDescription output matches the expected pattern:
```
-?s?nan(\(0x[0-9a-f]+\))?
```
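In test code, that relaxed check might look like this (a sketch; the payload and matching mechanism are illustrative):
```swift
import Foundation  // for regular-expression matching on String

// Construct a quiet NaN with an arbitrary payload and check only the
// *shape* of its debugDescription, not the exact payload bits.
let output = Float(nan: 0x123, signaling: false).debugDescription
let pattern = "^-?s?nan(\\(0x[0-9a-f]+\\))?$"
precondition(output.range(of: pattern, options: .regularExpression) != nil)
```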
Merge SR-3131 fix:
For each floating-point type, there is a range of integers that can be exactly represented in that type. Adjust the formatting logic so that we use decimal format for integers within this range and exponential format for numbers outside of it.
For example, Double has a 53-bit significand, so it can exactly represent every integer in `-(2^53)...(2^53)`. With this change, we now use decimal format for these integers and exponential format for values outside of this range. This is a relatively small change from the previous logic -- we've basically just moved the cutoff from 10^15 to 2^53 (about 9 * 10^15).
The decision to use exponential format for small numbers is unchanged.
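To make the boundary concrete, here's roughly what I'd expect for Double under the new rule (a sketch; the outputs are assumed):
```swift
let limit = Double(1 << 53)        // 2^53 = 9007199254740992
print((limit - 1).description)     // within range: plain decimal digits
print((limit * 1024).description)  // far outside: exponential format
```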
This was an oversight from PR #15474. Most of the Float80 tests were correctly compiled only on `!Windows && (x86_64 || i386)`, but I failed to mark some Float80 test data the same way.
Fixes: Radar 39246292
i386 does not support signaling NaNs *across calls* (SR-1515). However, after inlining we do get a signaling representation, so only run the test in Onone mode.
rdar://39104087
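A sketch of the failure mode (illustrative; `identity` just forces a real call boundary):
```swift
// On i386, returning a Float in the x87 register stack can quiet a
// signaling NaN, so the caller observes a different bit pattern.
@inline(never) func identity(_ x: Float) -> Float { x }

let snan = Float(nan: 1, signaling: true)
print(snan.isSignalingNaN)            // true
print(identity(snan).isSignalingNaN)  // may be false on i386
```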
* SR-106: New floating-point `description` implementation
This replaces the current implementation of `description` and `debugDescription` for the standard floating-point types with a new formatting routine based on a variation of Florian Loitsch's Grisu2 algorithm, with changes suggested by Andrysco, Jhala, and Lerner's 2016 paper describing Errol3.
Unlike the earlier code based on `sprintf` with a fixed number of
digits, this version always chooses the optimal number of digits. As
such, we can now use the exact same output for both `description` and
`debugDescription` (except of course that `debugDescription` provides
full detail for NaNs).
The implementation has been extensively commented; people familiar with
Grisu-style algorithms should find the code easy to understand.
This implementation is:
* Fast. It uses only fixed-width integer arithmetic and has constant
memory and time requirements.
* Simple. It is only a little more complex than Loitsch's original implementation of Grisu2. The digit decomposition logic for double is less than 300 lines of standard C (half of which consists of common arithmetic support routines).
* Always Accurate. Converting the decimal form back to binary (using an
accurate algorithm such as Clinger's) will always yield exactly the
original binary value. For the IEEE 754 formats, the round-trip will
produce exactly the same bit pattern in memory. This is an essential
requirement for JSON serialization, debugging, and logging.
* Always Short. This always selects an accurate result with the minimum number of decimal digits, so `1.0 / 10.0` always prints as `0.1`.
* Always Close. Among all accurate, short results, this always chooses the result that is closest to the exact floating-point value. (In case of an exact tie, it rounds the last digit to even.)
This resolves SR-106 and related issues that have complained
about the floating-point `description` properties being inexact.
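A quick spot-check of the shortness and round-trip guarantees (a sketch, not the actual test suite):
```swift
// "0.1" is the shortest decimal string that parses back to exactly the
// Double produced by 1.0 / 10.0, so that's what description emits.
let x = 1.0 / 10.0
print(x.description)                      // "0.1"
precondition(Double(x.description) == x)  // round-trips to the same bits
```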
* Remove duplicate infinity handling
* Use defined(__SIZEOF_INT128__) to detect uint128_t support
* Separate `extracting` the integer part from `clearing` the integer part
The previous code was unnecessarily obfuscated by the attempt to combine
these two operations.
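Conceptually, in Swift pseudocode (the real code is C; the names and the 60-bit split are illustrative):
```swift
// With a 64-bit fixed-point value holding the integer part in the top
// bits, "extract" and "clear" are now two separate, obvious operations.
let fractionBits: UInt64 = 60
func integerPart(_ t: UInt64) -> UInt64 { t >> fractionBits }               // extract
func fractionPart(_ t: UInt64) -> UInt64 { t & ((1 << fractionBits) - 1) }  // clear
```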
* Use `UINT32_MAX` to mask off 32 bits of a larger integer
* Correct the expected NaN results for 32-bit i386
* Make the C++ exceptions here consistent
Adding a C source file somehow exposed an issue in an unrelated C++ file.
Thanks to Joe Groff for the fix.
* Rename SwiftDtoa to ".cpp"
Having a C file in stdlib/public/runtime causes strange
build failures on Linux in unrelated C++ files.
As a workaround, rename SwiftDtoa.c to SwiftDtoa.cpp to see whether that avoids the problem.
* Revert "Make the C++ exceptions here consistent"
This reverts commit 6cd5c20566.