The non-generic nominal type nodes do not actually need to use LLVM's
FoldingSetNode; on my workstation, the release build of the standard
library completes about 1/3 of a second faster after switching to
LLVM's DenseMap. This is perhaps not surprising: Decl-to-Type mappings
are only needed during the early stages of compilation, but the
intrusive FoldingSetNode data decreases CPU cache efficiency during
every stage. As a bonus, the resulting code is simpler.
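A minimal sketch of the idea, with stand-in Decl and Type classes (the
real compiler uses its own AST classes and caching points): the
Decl-to-Type mapping moves out of the type nodes and into a
non-intrusive side table.

```cpp
#include "llvm/ADT/DenseMap.h"

struct NominalTypeDecl {};   // stand-in for the AST declaration class
struct NominalType {};       // stand-in for the AST type node

// Rather than deriving NominalType from llvm::FoldingSetNode, which embeds
// an intrusive word in every node and hurts cache locality whenever the
// nodes are touched, keep a plain Decl -> Type side table that only the
// early stages consult.
struct NonGenericNominalTypeCache {
  llvm::DenseMap<const NominalTypeDecl *, NominalType *> Types;

  NominalType *lookup(const NominalTypeDecl *decl) const {
    return Types.lookup(decl);   // nullptr if the type was not created yet
  }

  void insert(const NominalTypeDecl *decl, NominalType *type) {
    Types[decl] = type;
  }
};
```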
Where possible, pass around a ClassDecl or a CanType instead of a
SILType that might wrap a metatype; the unwrapping logic was
repeated in several places.
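A rough sketch of the unwrapping pattern being consolidated, written
against the SIL/AST accessors from memory (the exact names may differ);
passing around the ClassDecl or CanType directly lets callers skip this
step entirely.

```cpp
// Hypothetical helper; names are approximate, not copied from the sources.
static ClassDecl *getWrappedClass(SILType ty) {
  CanType type = ty.getASTType();
  if (auto metaType = dyn_cast<MetatypeType>(type))
    type = metaType.getInstanceType();          // unwrap a wrapped metatype
  return type->getClassOrBoundGenericClass();   // null if not a class type
}
```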
Also add a FIXME for a bug I found by inspection.
This simplifies some boilerplate, in particular some SIL verifier
logic, and fixes a couple of bugs related to always-loadable reference
storage types.
LLDB would like to substitute the original archetype names from the
source code when demangling symbols, instead of the confusing generic
'A', 'B', ...
<rdar://problem/48259889>
When we inspect a binary, verify that it really is a PE/COFF binary
before trying to interpret it as one. This prepares the code for
extracting the file-inspection logic and will permit cross-platform
builds to introspect foreign binaries.
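For illustration, a self-contained sketch of the kind of check this
implies (not the actual code, which would go through LLVM's object
readers): confirm the DOS "MZ" stub and the "PE\0\0" signature before
treating a buffer as PE/COFF. The sketch assumes a little-endian host.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Returns true if `data` plausibly starts a PE/COFF image.
static bool looksLikePECOFF(const uint8_t *data, size_t size) {
  // DOS header: "MZ" magic, with the PE header offset stored at 0x3C.
  if (size < 0x40 || data[0] != 'M' || data[1] != 'Z')
    return false;

  uint32_t peOffset;
  std::memcpy(&peOffset, data + 0x3C, sizeof(peOffset));
  if (peOffset > size - 4)
    return false;

  // The COFF file header is preceded by the 4-byte signature "PE\0\0".
  return std::memcmp(data + peOffset, "PE\0\0", 4) == 0;
}
```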
When loading a module that supports multiple targets, the module loader
now looks first for a file named after the normalized target triple,
and falls back to the architecture name only if that file is not found.
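A rough sketch of that lookup order using LLVM's path and triple
helpers; the directory layout and file names here are illustrative, not
the loader's actual code.

```cpp
#include "llvm/ADT/SmallString.h"
#include "llvm/ADT/Triple.h"
#include "llvm/ADT/Twine.h"
#include "llvm/Support/FileSystem.h"
#include "llvm/Support/Path.h"
#include <string>

// Prefer <normalized triple>.swiftmodule; fall back to <arch>.swiftmodule.
static std::string findModuleFile(llvm::StringRef moduleDir,
                                  llvm::StringRef targetTriple,
                                  llvm::StringRef archName) {
  llvm::SmallString<128> byTriple(moduleDir);
  llvm::sys::path::append(
      byTriple, llvm::Triple::normalize(targetTriple) + ".swiftmodule");
  if (llvm::sys::fs::exists(byTriple))
    return byTriple.str().str();

  llvm::SmallString<128> byArch(moduleDir);
  llvm::sys::path::append(byArch, llvm::Twine(archName) + ".swiftmodule");
  return byArch.str().str();
}
```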
In this diff we add the first pieces needed to make reflection work on
Windows; in particular, we now properly extract the metadata from the
corresponding sections. With this diff, box_descriptors.sil and
capture_descriptors.sil start passing on Windows after some minor
tweaks to the lit testing infrastructure.
When compiling for a 32-bit machine, uintptr_t from ReflectionInfo is
only sized to hold a 32-bit pointer, so a 64-bit pointer might not fit.
This commit removes the solution in
0f20c486e0 and instead adds a runtime check that
the calculated offset fits into the target machine's uintptr_t. That
check can only fail on 32-bit machines reading 64-bit images, which
should not be that common (and even then only for images whose offsets
are bigger than what a 32-bit value can hold).
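A short sketch of the check, with an illustrative name: an offset
computed from the (possibly 64-bit) image must fit the local uintptr_t
before it is used as one.

```cpp
#include <cstdint>
#include <limits>

// True if an offset read from the inspected image can be represented in
// the pointer-sized integer of the machine doing the reading.
static bool offsetFitsLocalPointer(uint64_t imageOffset) {
  return imageOffset <= std::numeric_limits<uintptr_t>::max();
}
```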
I am going to use this to refactor a bunch of the goop in the cast
optimizer. At a high level, we are really just performing a giant
switch over the casts to grab different state, and then passing that
state into the bridge cast optimizer.
To make such code more compact and easier to understand, this commit
adds a type-erased dynamic cast instruction type called
"SILDynamicCastInst". In subsequent commits, I wire up each of the
individual cast instructions to it, one at a time.
As an additional advantage, it will let us take advantage of covered
switches whenever we introduce new casts in the future.
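To illustrate the shape of the abstraction (with made-up names and cast
kinds, not SILDynamicCastInst's real API): wrap the concrete cast
instruction behind a single type plus a kind enum, and answer queries
with covered switches so that a new cast kind forces every query to be
updated.

```cpp
#include "llvm/Support/ErrorHandling.h"

class SILInstruction;   // opaque here; stands in for the real SIL class

enum class DynamicCastKind {
  UnconditionalCheckedCast,
  CheckedCastBranch,
  CheckedCastAddrBranch,
};

// Hypothetical type-erased wrapper over the individual cast instructions.
class DynamicCastWrapper {
  SILInstruction *inst;
  DynamicCastKind kind;

public:
  DynamicCastWrapper(SILInstruction *inst, DynamicCastKind kind)
      : inst(inst), kind(kind) {}

  SILInstruction *getInstruction() const { return inst; }
  DynamicCastKind getKind() const { return kind; }

  // Covered switch: adding a new DynamicCastKind makes the compiler flag
  // every query like this one that still needs a case for it.
  bool operatesOnAddresses() const {
    switch (kind) {
    case DynamicCastKind::UnconditionalCheckedCast:
    case DynamicCastKind::CheckedCastBranch:
      return false;
    case DynamicCastKind::CheckedCastAddrBranch:
      return true;
    }
    llvm_unreachable("covered switch");
  }
};
```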
When emitting metadata for a Swift-defined @objc protocol that provides
a custom Objective-C name (e.g., via @objc(renamed)), mangle the
protocol using its Objective-C name so it can be found at runtime.
Only do this for metadata, because doing it anywhere else would cause
an ABI break. Fixes rdar://problem/47877748.
While trying to match function types, detect and fix any missing
arguments by introducing type variables; such arguments get type
information from the corresponding parameters, which helps produce
solutions that are much easier to diagnose.
The host tools may be built with the host compiler, and cl objects to
an extern "C" function returning a C++ type, which is exactly what the
keypath functions do. However, these declarations are needed only in
the runtime, which is always built with clang. Preprocess away the
declarations when building the compiler. This allows us to build with
cl once more.
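The pattern, sketched with an illustrative type and function name
rather than the real keypath declarations (the exact guard macro in the
sources may also differ): the extern "C" declaration that returns a C++
type is only exposed to clang, the only compiler that ever builds the
runtime.

```cpp
// Stand-in for a C++ class type of the sort cl objects to when it is
// returned from an extern "C" function.
struct ObjectRef {
  void *object;
  ~ObjectRef();
};

#if defined(__clang__)
// Runtime-only declaration: the compiler never calls this, so it can be
// preprocessed away for cl and left visible to clang.
extern "C" ObjectRef example_getKeyPathObject(const void *pattern);
#endif
```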