These were mostly bugs with code of the following form:
```
if uint64Value < (... literal expression ...)
```
Swift's comparison operators allow their left- and right-hand sides to be
integers of different widths. This in turn means that the literal expression
above typically gets type-checked by default as a plain `Int` or `UInt` expression.
In a number of cases, this led to truncation on platforms where `Int` is
not 64 bits.
In particular, this seems to fix tests on wasm32.
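For example (a hypothetical sketch, not the exact stdlib code; the names and
constants here are illustrative only), a shift expression on the right-hand
side can silently over-shift when `Int` is 32 bits:
```swift
let mantissa: UInt64 = 0x12_3456_789A   // about 7.8e10, well below 2^40

// Buggy form: the heterogeneous `<` lets the right-hand side type-check
// as `Int`, so on a 32-bit platform `1 << 40` over-shifts and yields 0.
if mantissa < 1 << 40 {
    // never taken on 32-bit platforms, even though mantissa < 2^40
}

// Fixed form: pin the literal expression to `UInt64` so it has the same
// width on every platform.
if mantissa < (1 as UInt64) << 40 {
    // taken on all platforms
}
```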
After further thought, I realize that it makes more sense
to keep the public API in FloatingPointParsing.swift.gyb
(where we can consolidate the repetitive doc comments)
and have this as a purely internal set of service functions.
This reverts commit 7084ab8e6e.
This reimplements the underlying support for `Float16(_:StringSlice)`,
`Float32(_:StringSlice)`, and `Float64(_:StringSlice)` in pure Swift,
using the same core algorithm currently used by Apple's libc. Those
`StringSlice` initializers are in turn used by `Float16(_:String)`,
`Float32(_:String)`, and `Float64(_:String)`.
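For illustration, the `String`-based entry points behave like this (expected
optional results shown in comments; the `Float16` line is guarded
conservatively since `Float16` is not available on every platform):
```swift
let a = Float32("1.25")        // Optional(1.25)
let b = Float64("0x1.8p3")     // hexadecimal floats are accepted: Optional(12.0)
let c = Float64("not a float") // nil

#if arch(arm64)  // Float16 is unavailable on some platforms (e.g. x86_64 macOS)
let d = Float16("65504")       // Optional(65504.0), the largest finite Float16
#endif
```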
**Supports Embedded**: This fully supports Embedded Swift and
insulates us from variations in libc implementations.
**Corrects bugs in Float16 parsing**: The previous version of
`Float16` parsing called libc `strtof()` to parse to a 32-bit float,
then rounded to `Float16`. (This was necessary because float16
parsing functions are not widely supported in C implementations.)
This double-rounding systematically corrupted NaN payloads and
resulted in 1 ULP errors for certain decimal and hexadecimal inputs.
The new version parses `Float16` directly, avoiding these errors.
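As a hypothetical illustration of the double-rounding hazard (this input is
chosen for demonstration and is not taken from the test suite):
1.00048828125 is exactly halfway between the adjacent `Float16` values 1.0
and 1.0009765625, so a decimal just above that midpoint must round up, but
rounding through `Float32` first lands exactly on the midpoint, and
ties-to-even then rounds down.
```swift
#if arch(arm64)  // Float16 is not available on every platform
let text = "1.0004882813"   // just above the Float16 midpoint 1.00048828125

// Old two-step path: parse as Float32, then convert. The Float32 result
// is exactly the midpoint, and the tie then rounds *down* to 1.0.
let doubleRounded = Float16(Float32(text)!)   // 1.0

// Correct single rounding goes *up* to the next Float16 value.
let direct = Float16(text)!                   // 1.0009765625 with the direct parser
#endif
```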
**Modest performance improvement**: The old version had to copy
the Swift string to construct a C string. For inputs longer than
15 characters, this typically required a heap allocation, which added
up to 20% to the runtime. The new version parses directly from a Swift
string, avoiding this copy and heap allocation.
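For context, here is a rough sketch of the kind of libc-bridging path being
replaced (hypothetical and simplified; `parseViaLibc` is not the actual stdlib
helper). Bridging to a NUL-terminated C string is where the copy, and
potentially the heap allocation, happens:
```swift
#if canImport(Darwin)
import Darwin
#elseif canImport(Glibc)
import Glibc
#endif

// Hypothetical sketch of the old approach: bridge the String to a
// NUL-terminated C buffer (heap-allocating once the text exceeds the
// 15-byte small-string limit), then hand it to libc.
func parseViaLibc(_ text: String) -> Double? {
    text.withCString { cString -> Double? in
        var end: UnsafeMutablePointer<CChar>? = nil
        let value = strtod(cString, &end)
        // Reject if nothing was consumed or trailing characters remain.
        guard let end, UnsafePointer(end) != cString, end.pointee == 0 else {
            return nil
        }
        return value
    }
}
```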