Adds a rough sketch of what will be a test harness, currently only supported
on OS X:
- Launch a child process: an executable written in Swift
- Receive the child process's Mach port
- Receive reflection section addresses and the address of a heap instance
of interest
- Perform field type lookup on the instance remotely (TODO)
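To make the first two steps concrete, here is a minimal launch-and-attach
sketch (the child path is hypothetical, and the real harness may receive
the task port through a handshake with the child rather than task_for_pid,
which requires special privileges on OS X):

    #include <cstdio>
    #include <spawn.h>
    #include <mach/mach.h>

    extern char **environ;

    int main() {
      // Hypothetical path to the Swift child executable under test.
      char *argv[] = {(char *)"./reflection-child", nullptr};
      pid_t pid;
      if (posix_spawn(&pid, argv[0], nullptr, nullptr, argv, environ) != 0) {
        perror("posix_spawn");
        return 1;
      }
      // Obtain the child's Mach task port so its memory can be read later.
      mach_port_t childTask;
      if (task_for_pid(mach_task_self(), pid, &childTask) != KERN_SUCCESS) {
        fprintf(stderr, "task_for_pid failed\n");
        return 1;
      }
      printf("attached to child %d\n", pid);
      return 0;
    }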
- Add a RuntimeTarget template. This will allow for converting between
metadata structures for the native host and remote target architectures.
- Create InProcess and External templates for stored pointers. Add a few
more types to abstract pointer access in the runtime structures, but keep
native in-process pointer access the same as with a plain old pointer
type.
There is now a notion of a "stored pointer", which is just the raw value
of the pointer, and the actual pointer type, which is used for loads.
Decoupling these allows us to fork the behavior when looking at metadata
in an external process, but keep things the same for the in-process
case.
There are two basic "runtime targets" that you can use to work with
metadata:
- InProcess: Defines the pointer to be trivially a T*, stored as a
uintptr_t. A Metadata * is exactly as it was before, but defined via
AbstractMetadata<InProcess>.
- External: A template that requires a target to specify its pointer size.
Its ExternalPointer is an opaque pointer in another address space that
can't (and shouldn't) be dereferenced with operator* or operator->; the
memory reader will fetch the data explicitly.
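A rough C++ sketch of the shape this takes (names beyond InProcess,
External, and ExternalPointer are illustrative, not the actual runtime
declarations):

    #include <cstdint>

    // In-process target: pointers are plain T*, stored as uintptr_t.
    struct InProcess {
      using StoredPointer = uintptr_t;
      template <typename T> using Pointer = T *;  // ordinary dereference
    };

    // Opaque pointer into another address space; deliberately has no
    // operator* or operator->. The memory reader fetches it explicitly.
    template <typename T> struct ExternalPointer {
      uint64_t Address;
    };

    // External target: parameterized over the remote stored-pointer type.
    template <typename StoredPointerTy> struct External {
      using StoredPointer = StoredPointerTy;
      template <typename T> using Pointer = ExternalPointer<T>;
    };

    // Runtime structures are written once against a runtime target;
    // only the stored-pointer representation forks per target.
    template <typename Runtime> struct AbstractMetadata {
      typename Runtime::StoredPointer Kind;  // raw stored value
    };

    // The in-process Metadata is exactly what it was before:
    using Metadata = AbstractMetadata<InProcess>;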
"minimal" is defined as the set of requirements that would be
passed to a function with the type's generic signature that
takes the thick metadata of the parent type as its only argument.
In principle, runtime entries could have hidden visibility, because they
are never called directly from the code produced by IRGen. But some of the
runtime entries are invoked directly from Foundation, so they need to
remain visible.
We annotate the runtime functions that are invoked most often from Swift code:
- Many variants of the retain/release functions are annotated to use the new
calling convention. But the variants of retain/release that may result in
calls to objc_retain or objc_release are not migrated to the new calling
convention, because doing so causes significant performance degradation when
objects of Obj-C-derived classes are used.
- Some popular non-reference counting functions like swift_getGenericMetadata or swift_dynamicCast are annotated as well.
The list of these functions is pretty much the same as the set of functions
defined in InstrumentsSupport.h. These are basically the functions that can
be intercepted by various tools, profilers, etc.
This new x-macro should be used to define a runtime function that has an internal implementation
inside the runtime library and a global symbol referring to this internal implementation.
An example of such a runtime function is "swift_retain", which has a global symbol "_swift_retain"
referring to its internal implementation "_swift_retain_".
This makes sure that runtime functions use proper calling conventions, get the required visibility, etc.
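A sketch of the pattern (the macro spelling and toy bodies here are
illustrative; the real entries live inside the runtime library):

    // For each entry listed in the .def file, the x-macro emits an
    // internal implementation plus a global symbol referring to it.
    // Tools and profilers can intercept an entry by overwriting the
    // function pointer.
    #define RUNTIME_ENTRY_STUB(Name)                                   \
      extern "C" void *_##Name##_(void *object) { return object; }     \
      extern "C" void *(*_##Name)(void *) = _##Name##_;

    RUNTIME_ENTRY_STUB(swift_retain)
    // expands to _swift_retain_ (implementation) and _swift_retain
    // (global function pointer referring to it)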
We annotate the runtime functions that are invoked most often from Swift code.
- Almost all variants of retain/release functions are annotated to use the new calling convention.
- Some popular non-reference counting functions like swift_getGenericMetadata or swift_dynamicCast are annotated as well.
The set of runtime functions annotated to use the new calling convention should exactly match the definitions in RuntimeFunctions.def!
Define a number of macros that will be used for:
- proper auto-generation of LLVM IR level declarations of runtime functions
using RuntimeFunctions.def
- generation of wrappers for runtime functions
- setting proper calling conventions, visibility, and other attributes of
runtime functions inside the runtime library.
The generated wrapper simply invokes the corresponding entry point by means
of an indirect call via a global symbol, which is a function pointer
referring to the implementation of the runtime function.
Using such wrappers allows for invocations of runtime functions from dynamic
libraries without the usual indirections via dynamic linker stubs.
If the calling convention and the current target require a wrapper, it will be
generated. Each wrapper gets a hidden linkage and is marked as ODR, so that
a linker can merge all wrappers with the same name.
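Hand-written for illustration, a generated wrapper amounts to roughly the
following (the real wrappers are emitted as LLVM IR):

    // The global function pointer exported by the runtime.
    extern "C" void *(*_swift_retain)(void *);

    // The wrapper calls through the pointer rather than through a
    // dynamic linker stub. Hidden visibility plus linkonce_odr linkage
    // (at the IR level) lets the linker merge identical copies.
    extern "C" __attribute__((visibility("hidden")))
    void *swift_retain_wrapper(void *object) {
      return _swift_retain(object);
    }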
This functionality could be reused by e.g. LLVMPasses, which currently
create LLVM IR declarations of runtime entry points on their own.
To make the function reusable, slightly change its API:
- use llvm::Module instead of IRGenModule.
- use llvm::ArrayRef instead of std::initializer_list, which allows the
clients of this API to dynamically form the lists of return types and
arguments.
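For illustration only, the change amounts to a signature shift along these
lines (the function name and exact parameter list are assumptions, not the
actual API):

    #include "llvm/ADT/ArrayRef.h"
    #include "llvm/ADT/StringRef.h"
    #include "llvm/IR/Constant.h"
    #include "llvm/IR/Module.h"

    // Before: tied to IRGenModule, with fixed initializer lists.
    // After: any client holding an llvm::Module can build the type
    // lists dynamically.
    llvm::Constant *getRuntimeFn(llvm::Module &M, llvm::StringRef name,
                                 llvm::ArrayRef<llvm::Type *> retTypes,
                                 llvm::ArrayRef<llvm::Type *> argTypes);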
Rework ConcurrentMap and MetadataCache and fix a re-entrancy bug in
metadata instantiation.
The re-entrancy bug is that we were holding the instantiation
lock of a metadata cache while instantiating metadata. Doing
so prevents us from creating a different instantiation if
it's needed by the outer instantiation. This is already
possible, but it's much more likely in a patch I'm working on
to only store the minimal metadata for generic parameters
in generic types.
The same bug could also show up as a deadlock between threads,
so a recursive lock would not be a good fix. Instead, we add
a condition variable to the metadata cache. When fetching
metadata, we look for a node in the concurrent map, eagerly
creating an empty one if none currently exists. If lookup
finds an empty node, we wait on the condition variable for
the node to become populated. If lookup succeeds in creating
an empty node, we instantiate the metadata, grab the lock,
populate the node, and notify the condition variable.
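In outline, the fetch path looks like this sketch, which substitutes a
plain mutex and condition variable and a hypothetical createdIt flag for
the concurrent map's insertion result:

    #include <condition_variable>
    #include <mutex>

    struct Entry {
      const void *Value = nullptr;  // null until populated
    };

    std::mutex Lock;
    std::condition_variable EntryPopulated;

    const void *fetchMetadata(Entry &node, bool createdIt,
                              const void *(*instantiate)()) {
      if (!createdIt) {
        // Someone else created the empty node; wait for them to fill it.
        std::unique_lock<std::mutex> guard(Lock);
        EntryPopulated.wait(guard, [&] { return node.Value != nullptr; });
        return node.Value;
      }
      // We created the empty node: instantiate *without* holding the
      // lock, so a recursive instantiation can re-enter the cache.
      const void *metadata = instantiate();
      {
        std::lock_guard<std::mutex> guard(Lock);
        node.Value = metadata;
      }
      EntryPopulated.notify_all();
      return metadata;
    }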
Safely creating an empty node without any metadata present
requires us to move the key data into the map entry. That,
plus a few other invariant shifts, makes it sensible to
give the user of ConcurrentMap more control over the
allocation of map nodes and the layout of keys. That, in
turn, allows us to change the contract so that keys can be
more complex than just a hash code. Instead of incrementing
hash codes and re-performing the lookup, we just insist
that lookup keys be totally ordered.
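Concretely, a lookup key then needs only a three-way comparison; an
illustrative key shape:

    #include <cstddef>
    #include <cstring>

    // Keys are totally ordered: on a hash collision we compare the key
    // data directly instead of perturbing the hash and retrying.
    struct CacheKey {
      size_t Hash;
      const void *const *Args;
      unsigned NumArgs;

      int compareWith(const CacheKey &other) const {
        if (Hash != other.Hash) return Hash < other.Hash ? -1 : 1;
        if (NumArgs != other.NumArgs)
          return NumArgs < other.NumArgs ? -1 : 1;
        return std::memcmp(Args, other.Args, NumArgs * sizeof(void *));
      }
    };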
For now, I've kept the uniform use of hash codes as a
component of the key for MetadataCaches. However, hash
codes aren't really profitable for small keys, and we should
probably use direct comparisons instead.
We should also switch the safer metadata caches (i.e. the
ones that don't involve calling an arbitrary instantiation
function, like MetatypeMetadataCache) over to directly use
ConcurrentMap.
LLDB's requirement that we maintain a linked list of metadata
cache instantiations with a known layout means we can't yet
remove the CacheEntry's redundant copy of the generic
arguments.
This change cuts the number of malloc() calls in the metadata caches in half.
The current metadata cache data structure uses a linked list for each entry in
the tree to handle collisions. This means that we need at least two memory
allocations for each entry: one for the tree node and one for the linked list
node.
This commit changes the map used by the metadata caches from an open hash map
(which embeds a linked list at each entry) into a closed map that uses a
different hash value for each entry. With this change we no longer accept
collisions; it is now the responsibility of the user to prevent them. The new
get/trySet API makes this responsibility explicit. The new design also goes
well with the existing approach, where hashing is done externally and, to save
memory, we don't save the full key, just the hash and the value.
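A sketch of how the get/trySet contract reads at a call site (the method
names follow the text; the exact signatures are assumptions):

    #include <cstdint>

    // Hypothetical shape of the closed map's API: one node per unique
    // hash, so a hash match is treated as a key match.
    struct MetadataCacheMap {
      const void *get(uint64_t hash);                    // null if absent
      const void *trySet(uint64_t hash, const void *v);  // returns winner
    };

    const void *getOrInstantiate(MetadataCacheMap &cache, uint64_t hash,
                                 const void *(*instantiate)()) {
      if (const void *existing = cache.get(hash))
        return existing;
      const void *fresh = instantiate();
      // If another thread inserted first, trySet hands back its value.
      return cache.trySet(hash, fresh);
    }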
This change cuts the number of allocated objects per entry in half. Instead
of allocating two 32-byte objects (one for the tree node and one for the
linked list) we allocate a single entry that contains the hash and the value.
Unfortunately, values that are made of two 64-bit pointers (like protocol
conformance entries) are now too big for the 32-byte tree entry and are rounded
up to 48 bytes. In practice this is not a big deal because malloc has 48-byte pool
entries.
...and explicitly mark symbols we export, either for use by executables or
for runtime-stdlib interaction. Until the stdlib supports resilience, we have
to allow programs to link against these SPI symbols.
- Nearly done: TypeRefs and the mangled name decoder.
- Add the swift-reflection-test tool.
The field reflection pipeline is roughly:
- Decode type references
- Substitute generic parameters
- Calculate sizes and offsets
There is currently only one action in the tool, which will test the
*Decode* part of the pipeline: `dump-reflection-section`. This reads
the *swift3_reflect section from an object file and dumps the decoded
type references for all of the stored properties and enum cases in the
file.
- TODO: Write tests with various type arrangements to exercise the
decoder - there are likely still some holes in it, since the AST mangler
is quite rich in the kinds it produces.
TODO: The next test mode, `dump-field-types`, will do the following:
1. Launch a Swift executable with a canned stopping point
2. Get the address of a heap object instance of interest
3. Dump the fully substituted typerefs of all of the stored properties
or enum case payloads.
That test mode will be more involved since it will attach to another
process and need to read from its address space but will test the
entire out-of-process reflection pipeline in a controlled environment.
We could take this test a step further, with an option or a new test mode
that prints the entire heap reference graph rooted at the object of
interest, in order to test the ability to detect reference cycles, for
example.
This is the first patch in a series that will allow new protocol
requirements to be added resiliently, with the runtime filling in
default implementations in witness tables.
First, this adds a new flag to the protocol descriptor indicating
that the protocol is resilient. In this case, there are two
additional fields, MinimumWitnessTableSizeInWords and
DefaultWitnessTableSizeInWords, followed by tail-allocated
default witnesses.
The swift_getGenericWitnessTable() entry point now fills in the
default witnesses from the protocol if the given witness table
template is smaller than the expected witness table size.
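A sketch of that instantiation logic (field names follow the text above,
but the layout details are an illustrative approximation, not the exact
runtime definitions):

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    struct ProtocolDescriptor {
      uint32_t Flags;  // includes the "resilient" bit
      uint16_t MinimumWitnessTableSizeInWords;
      uint16_t DefaultWitnessTableSizeInWords;

      // Default witnesses are tail-allocated after the descriptor.
      void *const *getDefaultWitnesses() const {
        return reinterpret_cast<void *const *>(this + 1);
      }
    };

    // Assumes templateWords >= MinimumWitnessTableSizeInWords.
    void fillWitnessTable(const ProtocolDescriptor *proto,
                          void *const *witnessTemplate,
                          size_t templateWords, void **table) {
      size_t expectedWords = proto->MinimumWitnessTableSizeInWords +
                             proto->DefaultWitnessTableSizeInWords;
      // Copy the witnesses the template provides...
      std::memcpy(table, witnessTemplate, templateWords * sizeof(void *));
      // ...and fill the remainder from the default witnesses.
      for (size_t i = templateWords; i < expectedWords; ++i)
        table[i] = proto->getDefaultWitnesses()[
            i - proto->MinimumWitnessTableSizeInWords];
    }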
This also changes the layout of instantiated witness tables to move
the address point to the end of private data. Previously the private
data came after the requirements, but this meant that adding new
requirements would require sliding the private data at runtime and
accessing it indirectly. It is much simpler to access it from
negative offsets instead.
I updated IRGen to emit the new metadata, but currently all protocols
are flagged as not resilient, and default witnesses are not emitted;
this will come in a subsequent patch once some more plumbing is
in place.
To avoid generating GOT entries for references to protocols defined
in the current module, I had to add some hacks to the existing hack
for this. I'll hopefully clean this up in a principled manner later.
- Implement emission of type references for nominal type field
reflection, using a small custom encoder that produces packed
structs, not strings. This will let us embed 7-bit encoded
32-bit relative offsets directly in the structure (not yet
hooked in; see the encoder sketch after this list).
- Use the AST Mangler for encoding type references
Archetypes and internal references were complicating this before, so we
can take the opportunity to reuse this machinery and avoid unique code
and new ABI.
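For the 7-bit encoding of 32-bit relative offsets, the encoder is
presumably a little-endian base-128 varint along these lines (a generic
sketch, not the tool's exact code; signed offsets would additionally need
zigzag or sign-extension handling):

    #include <cstdint>
    #include <vector>

    // Emit a 32-bit value 7 bits at a time, low bits first; the high bit
    // of each byte marks "more bytes follow". Small offsets take one
    // byte instead of four.
    void encodeVarint(uint32_t value, std::vector<uint8_t> &out) {
      do {
        uint8_t byte = value & 0x7F;
        value >>= 7;
        if (value != 0)
          byte |= 0x80;  // continuation bit
        out.push_back(byte);
      } while (value != 0);
    }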
Next up: Tests for reading the reflection sections and converting the
demangle tree into a tree of type references.
TODO: For concrete types, serialize the types for associated types of
their conformances to bootstrap the typeref substitution process.
rdar://problem/15617914
This comes with a fix for a null pointer dereference in _typeByName()
that would trigger with foreign classes that do not have a
NominalTypeDescriptor.
Also, I decided to back out part of the change for now, where the
NominalTypeDescriptor references an accessor function instead of a
pattern, since this broke LLDB, which reaches into the pattern to
get the generic cache.
Soon we will split off the generic cache from the pattern, and at
that time we can change the NominalTypeDescriptor to point at the
cache. But for now, let's avoid needless churn in LLDB by keeping
that part of the setup unchanged.
Change conformance records to reference NominalTypeDescriptors instead of
metadata patterns for resilient or generic types.
For a resilient type, we don't know if the metadata is constant or not,
so we can't directly reference either constant metadata or the metadata
template.
Also, whereas previously NominalTypeDescriptors would point to the
metadata pattern, they now point to the metadata accessor function.
This allows the recently-added logic for instantiating concrete types
by name to continue working.
In turn, swift_initClassMetadata_UniversalStrategy() would reach into
the NominalTypeDescriptor to get the pattern out, so that its bump
allocator could be used to allocate ivar tables. Since the pattern is
no longer available this way, we have to pass it in as a parameter.
In the future, we will split off the read-write metadata cache entry
from the pattern; then swift_initClassMetadata_UniversalStrategy() can
just take a pointer to that, since it doesn't actually need anything
else from the pattern.
Since Clang doesn't guarantee alignment for function pointers, I had
to kill the cute trick that packed the NominalTypeKind into the low
bits of the relative pointer to the pattern; instead the kind is now
stored out of line. We could fix this by packing it with some other
field, or keep it this way in case we add new flags later.
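For reference, the killed trick depended on alignment spare bits, roughly
like this illustrative sketch:

    #include <cassert>
    #include <cstdint>

    // Packing a small enum into a pointer's low bits only works when the
    // pointee's alignment guarantees those bits are zero; Clang makes no
    // such guarantee for function pointers.
    enum class NominalTypeKind : uintptr_t { Class = 0, Struct = 1, Enum = 2 };

    uintptr_t packKindIntoPointer(void *pattern, NominalTypeKind kind) {
      uintptr_t bits = reinterpret_cast<uintptr_t>(pattern);
      assert((bits & 0x3) == 0 && "need 4-byte alignment for 2 spare bits");
      return bits | static_cast<uintptr_t>(kind);
    }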
Now that generic metadata is instantiated by calling accessor functions,
this change removes the last remaining place that metadata patterns were
referenced from outside the module they were defined in. Now, the layout
of the metadata pattern and the behavior of swift_getGenericMetadata()
is purely an implementation detail of generic metadata accessors.
This patch allows two previously-XFAIL'd tests to pass.
Instead of directly emitting calls to swift_getGenericMetadata*() and
referencing metadata templates, call a metadata accessor function
corresponding to the UnboundGenericType of the NominalTypeDecl.
The body of this accessor forwards arguments to a runtime metadata
instantiation function, together with the template.
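Conceptually, an emitted accessor is equivalent to the following C++
rendering (names and signatures are simplified for illustration):

    // swift_getGenericMetadata is a real runtime entry point, but its
    // signature is simplified here.
    extern "C" const void *swift_getGenericMetadata(void *pattern,
                                                    const void *arguments);

    // Private to the defining module; never referenced from outside.
    static void *TheGenericMetadataPattern;

    // Accessor corresponding to the UnboundGenericType; other modules
    // call this instead of touching the pattern directly.
    extern "C" const void *type_metadata_accessor(const void *genericArgs) {
      return swift_getGenericMetadata(TheGenericMetadataPattern,
                                      genericArgs);
    }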
Also, move some code around, so that metadata accesses which are
only done as part of the body of a metadata accessor function are
handled separately in emitTypeMetadataAccessFunction().
Apart from protocol conformances, this means metadata templates are
no longer referenced from outside the module where they were defined.