This collects a number of changes I've been testing over the
last month.
* Bug fix: The single-precision float formatter did not always
round the last digit to even in cases where two possible
outputs were otherwise equally good.
* Algorithm simplification: The condition for determining
whether to widen or narrow the interval was more complex than
necessary. I now simply widen the interval for all even
significands.
* Code simplification: The single-precision float formatter now uses fewer
64-bit features. This eliminated some 32-bit vs. 64-bit conditionals in
exchange for a minor loss of performance (~2%).
* Minor performance tweaks: Steve Canon pointed out a few places
where I could avoid some extraneous arithmetic.
I've also rewritten a lot of comments to try to make the exposition
clearer.
The earlier testing regime focused on testing from first
principles. For example, I verified accuracy by feeding the
result back into the C library `strtof`, `strtod`, etc. and
checking round-trip exactness. Unfortunately, this approach
requires many checks for each value, limiting test performance.
It's also difficult to validate last-digit rounding.
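For reference, the round-trip check looks roughly like the sketch
below. It is written in Swift for brevity, using the standard
`Double(String)` initializer as the accurate decimal-to-binary parser
in place of the C library's `strtof`/`strtod`:

```swift
// Minimal sketch of the round-trip accuracy check: format the value,
// parse the text back, and compare bit patterns for exact equality.
func roundTripsExactly(_ value: Double) -> Bool {
    let text = value.description              // formatted decimal output
    guard let parsed = Double(text) else { return false }
    return parsed.bitPattern == value.bitPattern
}

// Spot-check random bit patterns, skipping NaN and infinity.
for _ in 0..<1_000 {
    let value = Double(bitPattern: .random(in: .min ... .max))
    guard value.isFinite else { continue }
    assert(roundTripsExactly(value))
}
```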
For this round of updates, I've instead compared the digit
decompositions to other popular algorithms:
* David M. Gay's gdtoa library is a robust and well-tested
implementation based on Dragon4. It supports all formats, but
is slow. (netlib.org/fp)
* Grisu3 supports Float and Double. It is fast but incomplete,
failing on about 1% of all inputs.
(github.com/google/double-conversion)
* Errol4 is fast and complete but only supports Double. The
repository includes an implementation of the enumeration
algorithm described in the Errol paper.
(github.com/marcandrysco/errol)
The exact tests varied by format:
* Float: SwiftDtoa now generates the exact same digits as gdtoa
for every single-precision Float.
* Double: Testing against Grisu3 (with fallback to Errol4 when
Grisu3 failed) greatly improved test performance. This
allowed me to test 100 trillion (10^14) randomly-selected
doubles in a reasonable amount of time (the comparison is
sketched after this list). I also checked all values
generated by the Errol enumeration algorithm.
* Float80: I compared the Float80 output to the gdtoa library
because neither Grisu3 nor Errol4 yet supports 80-bit extended
precision. All values generated by the Errol enumeration
algorithm have been checked, as well as several billion
randomly-selected values.
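The randomized comparison mentioned above looks roughly like the
following sketch. `referenceDigits` is a hypothetical stand-in for a
binding to the reference formatter (Grisu3 with an Errol4 fallback),
which in reality is C code outside this tree; it is stubbed here only
so the sketch compiles:

```swift
// Hypothetical stand-in for the reference formatter (Grisu3 falling
// back to Errol4). The real comparison calls into C; this stub just
// echoes Swift's own output so the sketch is self-contained.
func referenceDigits(_ value: Double) -> String {
    return value.description
}

// Compare the formatted output for randomly selected bit patterns,
// skipping NaN and infinity.
for _ in 0..<1_000_000 {
    let value = Double(bitPattern: .random(in: .min ... .max))
    guard value.isFinite else { continue }
    precondition(value.description == referenceDigits(value),
                 "digit mismatch for bit pattern \(value.bitPattern)")
}
```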
Merge SR-3131 fix:
For each floating-point type, there is a range of integers that
can be exactly represented in that type. Adjust the formatting
logic so that we use decimal format for integers within this
range and exponential format for numbers outside of it.
For example, Double has a 53-bit significand, so it can exactly
represent every integer from `-(2^53)...(2^53)`. With this
change, we now use decimal format for these integers and
exponential format for values outside of this range. This is
a relatively small change from the previous logic -- we've
basically just moved the cutoff from 10^15 to 2^53 (about 10^16).
The logic for choosing exponential format for small numbers is
unchanged.
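As a rough illustration of the new cutoff for Double (the digit
strings in the comments follow from the rule above; exact output may
vary between Swift versions):

```swift
// 2^53 is the top of the exactly-representable integer range for
// Double, so it still prints in plain decimal form.
let inRange = Double(1 << 53)
print(inRange)                 // expected: 9007199254740992.0

// 2^54 lies outside that range, so it switches to exponential form.
let outOfRange = Double(1 << 54)
print(outOfRange)              // expected: 1.8014398509481984e+16
```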
* SR-106: New floating-point `description` implementation
This replaces the current implementation of `description` and
`debugDescription` for the standard floating-point types with a new
formatting routine based on a variation of Florian Loitsch's Grisu2
algorithm, incorporating changes suggested by Andrysco, Jhala, and
Lerner's 2016 paper describing Errol3.
Unlike the earlier code based on `sprintf` with a fixed number of
digits, this version always chooses the optimal number of digits. As
such, we can now use the exact same output for both `description` and
`debugDescription` (except of course that `debugDescription` provides
full detail for NaNs).
The implementation has been extensively commented; people familiar with
Grisu-style algorithms should find the code easy to understand.
This implementation is:
* Fast. It uses only fixed-width integer arithmetic and has constant
memory and time requirements.
* Simple. It is only a little more complex than Loitsch's original
implementation of Grisu2. The digit decomposition logic for double is
less than 300 lines of standard C (half of which are common arithmetic
support routines).
* Always Accurate. Converting the decimal form back to binary (using an
accurate algorithm such as Clinger's) will always yield exactly the
original binary value. For the IEEE 754 formats, the round-trip will
produce exactly the same bit pattern in memory. This is an essential
requirement for JSON serialization, debugging, and logging.
* Always Short. This always selects an accurate result with the minimum
number of decimal digits. (So that `1.0 / 10.0` will always print
`0.1`; see the sketch after this list.)
* Always Close. Among all accurate, short results, this always chooses
the result that is closest to the exact floating-point value. (In case
of an exact tie, it rounds the last digit to even.)
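A small Swift-level sketch of the "short" and "accurate" guarantees;
the `0.1` case is the one called out above, and the bit-pattern check
mirrors the round-trip requirement:

```swift
// Shortest form: description and debugDescription now agree and both
// print the minimal digit string, so 1.0 / 10.0 comes out as "0.1".
let tenth = 1.0 / 10.0
print(tenth.description)       // "0.1"
print(tenth.debugDescription)  // "0.1"

// Accuracy: parsing the printed text recovers the identical bits.
assert(Double(tenth.description)!.bitPattern == tenth.bitPattern)
```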
This resolves SR-106 and related issues complaining that the
floating-point `description` properties are inexact.
* Remove duplicate infinity handling
* Use defined(__SIZEOF_INT128__) to detect uint128_t support
* Separate `extracting` the integer part from `clearing` the integer part
The previous code was unnecessarily obfuscated by the attempt to combine
these two operations.
* Use `UINT32_MAX` to mask off 32 bits of a larger integer
* Correct the expected NaN results for 32-bit i386
* Make the C++ exceptions here consistent
Adding a C source file somehow exposed an issue in an unrelated C++ file.
Thanks to Joe Groff for the fix.
* Rename SwiftDtoa to ".cpp"
Having a C file in stdlib/public/runtime causes strange
build failures on Linux in unrelated C++ files.
As a workaround, rename SwiftDtoa.c to .cpp to see
if that avoids the problems.
* Revert "Make the C++ exceptions here consistent"
This reverts commit 6cd5c20566.