The witness-table parameters got added to all witnesses as part of the
resilience work, but the hardcoded witness table in the runtime's
dynamic-casting infrastructure didn't get updated. Nothing seems to be
relying on these right now, so we cannot actually *test* the change, but I've
verified that the types line up.
This lets us eliminate the _getObjectiveCType() value witness, which
was working around the lack of proper type witness metadata in witness
tables. Boilerplate -= 1.
clang and gcc provide a preprocessor macro called `__USER_LABEL_PREFIX__`,
which expands to the user label prefix for the target that the translation
unit is being built for. Rather than trying to reconstruct that logic in
place via various checks, fall back to the compiler to provide this
information. Although this limits the set of usable compilers (MSVC does
not provide this preprocessor macro), the only supported compiler at the
moment is clang, and it has provided this definition for some time now.
This addresses the FIXME associated with the user label prefix only being
applied in specific cases.
NFC.
It's to be used by code produced by the ReleaseDevirtualizer.
As the function is only used for non-escaping objects, the deallocating bit is set non-atomically.
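A rough sketch of what setting the bit non-atomically amounts to; the object
layout, flag value, and function name below are illustrative assumptions,
not the runtime's actual representation:

    #include <cstdint>

    struct HeapObject {                       // simplified stand-in
      uintptr_t refCountAndFlags;
    };

    constexpr uintptr_t RC_DEALLOCATING_FLAG = 0x1;  // illustrative bit

    // The ReleaseDevirtualizer only emits this for objects proven not to
    // escape, so no other thread can observe the object and a plain
    // read-modify-write is sufficient; no atomic RMW or fence is needed.
    extern "C" void swift_setDeallocating_sketch(HeapObject *object) {
      object->refCountAndFlags |= RC_DEALLOCATING_FLAG;
    }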
Be more conservative about masking ISAs. This reduces tight coupling with the objc runtime. This commit adds the required calls to IRGen and the runtime, and a test case to make sure the IRGen output is correct.
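A sketch of the intended shape, where generated code goes through a single
exported mask rather than hardcoding objc isa details; the names below are
illustrative assumptions:

    #include <cstdint>

    struct ClassMetadata;                     // opaque here

    // A single exported mask keeps the objc runtime's non-pointer-isa
    // layout out of generated code; IRGen and the runtime load the mask
    // and apply it instead of hardcoding the bit pattern.
    extern "C" uintptr_t swift_isaMask;

    const ClassMetadata *getObjectClass_sketch(const void *object) {
      uintptr_t isaBits = *reinterpret_cast<const uintptr_t *>(object);
      return reinterpret_cast<const ClassMetadata *>(isaBits & swift_isaMask);
    }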
Teach swift_deallocPartialClassInstance how to deal with classes that
have pure Objective-C classes in their hierarchy. In such cases, we
need to make sure a few things happen (see the sketch after this list):
1) We deallocate via objc_release rather than
swift_deallocClassInstance.
2) We only attempt to find and execute ivar destroyers for
Swift-defined classes in the hierarchy.
3) When we hit the most-derived pure Objective-C class, make sure that we
only execute the dealloc of that class and not any of the subclasses
(which would end up trying to destroy ivars again).
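A rough sketch of that flow; the metadata fields and helper names below are
illustrative stand-ins, not the runtime's real declarations:

    #include <cstddef>

    // Minimal stand-ins for the structures involved.
    struct HeapObject;
    typedef void IvarDestroyer(HeapObject *object);

    struct ClassMetadata {
      const ClassMetadata *Superclass;
      bool IsSwiftClass;          // stand-in for the "is type metadata" check
      IvarDestroyer *Destroyer;   // null if the class has no ivar destroyer
    };

    struct HeapObject {
      const ClassMetadata *Isa;
    };

    extern "C" void objc_release(HeapObject *object);
    extern "C" void swift_deallocClassInstance(HeapObject *object,
                                               size_t size, size_t alignMask);

    void deallocPartialClassInstance_sketch(HeapObject *object,
                                            const ClassMetadata *failingClass,
                                            size_t size, size_t alignMask) {
      const ClassMetadata *cls = object->Isa;
      while (cls && cls != failingClass) {
        if (!cls->IsSwiftClass) {
          // (1)/(3): most-derived pure Objective-C ancestor. Repoint the
          // isa at it and release through the objc runtime so that only
          // this class's -dealloc chain runs; a subclass dealloc would try
          // to destroy the Swift ivars a second time.
          object->Isa = cls;
          objc_release(object);
          return;
        }
        // (2): Swift-defined class: run its ivar destroyer, if it has one.
        if (cls->Destroyer)
          cls->Destroyer(object);
        cls = cls->Superclass;
      }
      // No pure Objective-C ancestors were hit: free via the Swift runtime.
      swift_deallocClassInstance(object, size, alignMask);
    }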
Fixes rdar://problem/25023544.
This reverts commit 2262bd579a.
This information isn't necessary for field descriptor lookup,
after all. It's only the fields that need to have generic information,
which is already in the field descriptor.
Previously, the mangling didn't include generics, but these are
needed to key off of the new field descriptor metadata, as well
as to construct type references for the nominal type.
- Add RuntimeTarget template. This will allow for converting between
  metadata structures for native host and remote target architectures.
- Create InProcess and External templates for stored pointers. Add a few
  more types to abstract pointer access in the runtime structures, but
  keep native in-process pointer access the same as with a plain old
  pointer type.
There is now a notion of a "stored pointer", which is just the raw value
of the pointer, and the actual pointer type, which is used for loads.
Decoupling these allows us to fork the behavior when looking at metadata
in an external process, but keep things the same for the in-process
case.
There are two basic "runtime targets" that you can use to work with
metadata:
- InProcess: Defines the pointer to be trivially a T* and stored as a
  uintptr_t. A Metadata * is exactly as it was before, but defined via
  AbstractMetadata<InProcess>.
- External: A template that requires a target to specify its pointer size.
- ExternalPointer: An opaque pointer in another address space that can't
  (and shouldn't) be indirected with operator* or operator->. The memory
  reader will fetch the data explicitly.
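A minimal sketch of the shape of these templates; the member names
(PointerSize, Kind, Parent) are illustrative assumptions:

    #include <cstddef>
    #include <cstdint>

    // In-process target: a pointer is an ordinary T*, stored as uintptr_t.
    struct InProcess {
      static constexpr size_t PointerSize = sizeof(uintptr_t);
      using StoredPointer = uintptr_t;
      template <typename T>
      using Pointer = T *;
    };

    // External target: parameterized over the stored pointer representation
    // of the remote target. Pointers are opaque values with no operator*
    // or operator->; a memory reader fetches the pointed-to data explicitly.
    template <typename StoredPointer_t>
    struct External {
      static constexpr size_t PointerSize = sizeof(StoredPointer_t);
      using StoredPointer = StoredPointer_t;
      template <typename T>
      using Pointer = StoredPointer_t;    // stands in for an ExternalPointer
    };

    // A metadata layout written once against a generic runtime target.
    template <typename Runtime>
    struct AbstractMetadata {
      typename Runtime::StoredPointer Kind;
      typename Runtime::template Pointer<const AbstractMetadata> Parent;
    };

    // The in-process Metadata is exactly what it was before.
    using Metadata = AbstractMetadata<InProcess>;
    // A 64-bit remote target's view of the same structure.
    using RemoteMetadata = AbstractMetadata<External<uint64_t>>;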
MetadataLookup.cpp and ProtocolConformance.cpp had the same code for
inspecting dynamic libraries. The common code now lives in one file, and
the other file uses it. On Linux/Cygwin the shared code passes an argument
through to the inspection callback; this does not apply to OS X.
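A minimal sketch of the Linux/Cygwin side, with the section-scanning details
elided and the struct/function names as illustrative assumptions;
dl_iterate_phdr forwards the extra argument to the callback for each loaded
image, which the OS X _dyld image callbacks don't support:

    #include <cstdint>
    #include <link.h>

    // Context passed straight through dl_iterate_phdr to the callback.
    struct InspectArgs {
      const char *sectionName;
      void (*consume)(const void *start, uintptr_t size);
    };

    static int inspectImage(struct dl_phdr_info *info, size_t size,
                            void *data) {
      auto *args = static_cast<InspectArgs *>(data);
      // ... find args->sectionName in this image and call args->consume ...
      (void)args; (void)info; (void)size;
      return 0;   // keep iterating over the remaining images
    }

    void inspectDynamicLibraries(InspectArgs *args) {
      dl_iterate_phdr(inspectImage, args);
    }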
"minimal" is defined as the set of requirements that would be
passed to a function with the type's generic signature that
takes the thick metadata of the parent type as its only argument.
This makes sure that runtime functions use proper calling conventions, get the required visibility, etc.
We annotate the runtime functions that are invoked most frequently from Swift code.
- Almost all variants of retain/release functions are annotated to use the new calling convention.
- Some popular non-reference counting functions like swift_getGenericMetadata or swift_dynamicCast are annotated as well.
The set of runtime functions annotated to use the new calling convention should exactly match the definitions in RuntimeFunctions.def!
Generate global symbols which are function pointers to the actual implementations of runtime entry points.
This is done only for entry points using the new calling convention or for those entry points which explicitly require it.
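Conceptually, each such entry point gets a global holding the address of its
implementation; a sketch of the indirection pattern (names and signatures
here are illustrative, the real ones come from RuntimeFunctions.def):

    struct HeapObject;
    extern "C" HeapObject *swift_retain(HeapObject *object);

    // Global symbol pointing at the actual entry point implementation;
    // callers that want the indirection load this pointer and call
    // through it instead of calling the implementation symbol directly.
    extern "C" HeapObject *(*const _swift_retain)(HeapObject *) = swift_retain;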
and MetadataCache and fix a re-entrancy bug in metadata
instantiation.
The re-entrancy bug is that we were holding the instantiation
lock of a metadata cache while instantiating metadata. Doing
so prevents us from creating a different instantiation if
it's needed by the outer instantiation. This is already
possible, but it's much more likely in a patch I'm working on
to only store the minimal metadata for generic parameters
in generic types.
The same bug could also show up as a deadlock between threads,
so a recursive lock would not be a good fix. Instead, we add
a condition variable to the metadata cache. When fetching
metadata, we look for a node in the concurrent map, eagerly
creating an empty one if none currently exists. If lookup
finds an empty node, we wait on the condition variable for
the node to become populated. If lookup succeeds in creating
an empty node, we instantiate the metadata, grab the lock,
populate the node, and notify the condition variable.
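A simplified sketch of that lookup path, using std::map and
std::condition_variable as stand-ins for the real ConcurrentMap and the
runtime's synchronization primitives, with placeholder key/value types:

    #include <condition_variable>
    #include <map>
    #include <mutex>

    struct Cache {
      struct Node { const void *Value = nullptr; };

      std::mutex Lock;
      std::condition_variable Filled;
      std::map<int, Node> Nodes;   // keyed by a totally ordered key

      const void *getOrInstantiate(int key,
                                   const void *(*instantiate)(int)) {
        Node *node;
        bool createdEmptyNode;
        {
          // Eagerly claim a node; 'createdEmptyNode' says whether we made it.
          std::lock_guard<std::mutex> guard(Lock);
          auto result = Nodes.try_emplace(key);
          node = &result.first->second;
          createdEmptyNode = result.second;
        }

        if (!createdEmptyNode) {
          // Another thread (or an outer instantiation) owns this entry;
          // wait for it to be populated instead of instantiating again.
          std::unique_lock<std::mutex> guard(Lock);
          Filled.wait(guard, [&] { return node->Value != nullptr; });
          return node->Value;
        }

        // Instantiate *without* holding the lock, so a re-entrant request
        // for different metadata can make progress.
        const void *value = instantiate(key);

        {
          std::lock_guard<std::mutex> guard(Lock);
          node->Value = value;
        }
        Filled.notify_all();
        return value;
      }
    };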
Safely creating an empty node without any metadata present
requires us to move the key data into the map entry. That,
plus a few other invariant shifts, makes it sensible to
give the user of ConcurrentMap more control over the
allocation of map nodes and the layout of keys. That, in
turn, allows us to change the contract so that keys can be
more complex than just a hash code. Instead of incrementing
hash codes and re-performing the lookup, we just insist
that lookup keys be totally ordered.
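An illustrative example of what a totally ordered key can look like (not
the actual MetadataCache key):

    #include <cstddef>
    #include <cstdint>

    // The map asks the key for a total order instead of probing with
    // (possibly colliding) hash codes.
    struct MetadataCacheKey_sketch {
      size_t NumArguments;
      const void *const *Arguments;

      // Returns <0, 0, or >0, memcmp-style; used to place and find nodes.
      int compare(const MetadataCacheKey_sketch &other) const {
        if (NumArguments != other.NumArguments)
          return NumArguments < other.NumArguments ? -1 : 1;
        for (size_t i = 0; i != NumArguments; ++i) {
          uintptr_t a = reinterpret_cast<uintptr_t>(Arguments[i]);
          uintptr_t b = reinterpret_cast<uintptr_t>(other.Arguments[i]);
          if (a != b)
            return a < b ? -1 : 1;
        }
        return 0;
      }
    };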
For now, I've kept the uniform use of hash codes as a
component of the key for MetadataCaches. However, hash
codes aren't really profitable for small keys, and we should
probably use direct comparisons instead.
We should also switch the safer metadata caches (i.e. the
ones that don't involve calling an arbitrary instantiation
function, like MetatypeMetadataCache) over to directly use
ConcurrentMap.
LLDB's requirement that we maintain a linked list of metadata
cache instantiations with a known layout means we can't yet
remove the CacheEntry's redundant copy of the generic
arguments.
`add_library` doesn't build per-target variants of the library, but
`add_swift_library` does. Also changed the library to install as part of
the stdlib, since it's needed even if you aren't building the compiler.