The default precision used when converting floating-point numbers to strings leads to confusing results. Take two Float32 values, 1.00000000 and 1.00000012: they are obviously not equal, yet when we log them, the same value is displayed for both. Displaying 9 decimal digits is much more helpful: [1.00000000 != 1.00000012] shows that the two values are in fact different.
(example taken from: http://www.boost.org/doc/libs/1_59_0/libs/test/doc/html/boost_test/test_output/log_floating_points.html)
I'm by no means a floating-point expert, but having investigated this issue I found numerous sources agreeing that the "magic" numbers 9 and 17, for 32-bit and 64-bit values respectively, are the correct precisions. 9 and 17 are the numbers of significant decimal digits needed to guarantee that any value survives a round trip through its decimal representation; printing more digits adds no information. For example, as far as their floating-point representations are concerned, 0.100000000000000005 and 0.1000000000000000 are the same number.
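For illustration, here is a minimal C++ sketch of the effect; std::numeric_limits<T>::max_digits10 is the standard library's name for exactly these round-trip digit counts (9 for float, 17 for double):

    #include <iomanip>
    #include <iostream>
    #include <limits>

    int main() {
      float a = 1.00000000f;
      float b = 1.00000012f;  // one ulp away from 1.0f

      // Default stream precision is 6 significant digits:
      // both values print as "1".
      std::cout << a << " != " << b << "\n";

      // max_digits10 is 9 for float and 17 for double -- enough
      // digits that every distinct value prints distinctly.
      std::cout << std::setprecision(std::numeric_limits<float>::max_digits10)
                << a << " != " << b << "\n";  // 1 != 1.00000012
      return 0;
    }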
The C++ ABI for static locals is a bit heavy compared to dispatch_once; switching to dispatch_once saves more than 1KB of runtime code size. The dispatch_once/call_once path is also more likely to be hot, since it is shared with Swift and ObjC code.
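A minimal sketch of the contrast, assuming an Apple platform where <dispatch/dispatch.h> and the blocks runtime are available; computeExpensiveValue is a hypothetical stand-in for real initialization work, not a runtime function:

    #include <dispatch/dispatch.h>

    static int computeExpensiveValue() { return 42; }  // hypothetical init work

    // A function-local static compiles to an inline guard check plus
    // calls to __cxa_guard_acquire/__cxa_guard_release on the slow path.
    int &cachedWithStaticLocal() {
      static int value = computeExpensiveValue();
      return value;
    }

    // The dispatch_once version: one zero-initialized predicate word,
    // with a fast path shared by all the Swift and ObjC code that
    // also goes through dispatch_once.
    int &cachedWithDispatchOnce() {
      static dispatch_once_t predicate;
      static int value;
      dispatch_once(&predicate, ^{ value = computeExpensiveValue(); });
      return value;
    }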
Alas, llvm::get_execution_seed() from llvm/ADT/Hashing.h still inflicts one static local initialization on us that we can't override (without forking Hashing.h, anyway).
Set up a separate libSwiftStubs.a archive for C++ stub functionality that's needed by the standard library but not part of the core runtime interface. Seed it with Stubs.cpp and LibcShims.cpp, which consist entirely of stubs; a few other stubs are still strewn across the runtime code base.
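For illustration, the kind of function that belongs in such an archive is a thin, C-callable wrapper over a libc facility; the name below is a hypothetical example, not the actual shim API:

    #include <string.h>

    // A LibcShims-style stub: a stable entry point the standard library
    // can call without depending on platform headers directly.
    extern "C" size_t _stdlib_strlen_example(const char *s) {
      return strlen(s);
    }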