This is the first patch in a series that will allow new protocol
requirements to be added resiliently, with the runtime filling in
default implementations in witness tables.
First, this adds a new flag to the protocol descriptor indicating
that the protocol is resilient. In this case, there are two
additional fields, MinimumWitnessTableSizeInWords and
DefaultWitnessTableSizeInWords, followed by tail-allocated
default witnesses.
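Sketched in C++ (field names follow the text above, but the exact layout and flag encoding here are illustrative, not the real ABI):

    #include <cstdint>

    // Illustrative sketch of a resilient protocol descriptor. When the
    // IsResilient flag is set, the two size fields are present and the
    // default witnesses are tail-allocated after them.
    struct ProtocolDescriptor {
      uint32_t Flags;                           // includes an IsResilient bit
      uint16_t MinimumWitnessTableSizeInWords;  // only if IsResilient
      uint16_t DefaultWitnessTableSizeInWords;  // only if IsResilient
      // followed in memory by:
      //   void *DefaultWitnesses[DefaultWitnessTableSizeInWords];
    };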
The swift_getGenericWitnessTable() entry point now fills in the
default witnesses from the protocol if the given witness table
template is smaller than the expected witness table size.
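The padding logic is roughly this (a sketch with hypothetical parameter names; the real entry point works from the conformance's template and the protocol descriptor):

    #include <cstddef>
    #include <cstring>

    // Sketch: copy the witnesses the conformance was compiled with, then
    // fill the remaining slots from the protocol's default witnesses.
    // providedSize is always at least minimumSize.
    void fillWitnessTable(void **table,
                          void *const *templateWitnesses, size_t providedSize,
                          void *const *defaultWitnesses, size_t minimumSize,
                          size_t expectedSize) {
      std::memcpy(table, templateWitnesses, providedSize * sizeof(void *));
      for (size_t i = providedSize; i < expectedSize; ++i)
        table[i] = defaultWitnesses[i - minimumSize];
    }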
This also changes the layout of instantiated witness tables to move
the address point to the end of private data. Previously the private
data came after the requirements, but this meant that adding new
requirements would require sliding the private data at runtime and
accessing it indirectly. It is much simpler to access it from
negative offsets instead.
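Conceptually (a sketch, not the exact layout):

    // The address point of an instantiated witness table now sits after
    // the private data, so private storage is reached through negative
    // indices and never has to slide when requirements are appended:
    //
    //   [ private data ... ][ witness 0 ][ witness 1 ] ...
    //                       ^ address point
    void **getPrivateDataSlot(void **witnessTable, int slot) {
      return witnessTable - 1 - slot;  // fixed negative offset
    }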
I updated IRGen to emit the new metadata, but currently all protocols
are flagged as not resilient, and default witnesses are not emitted;
this will come in a subsequent patch once some more plumbing is
in place.
To avoid generating GOT entries for references to protocols defined
in the current module, I had to add some hacks to the existing hack
for this. I'll hopefully clean this up in a principled manner later.
Instead of directly emitting calls to swift_getGenericMetadata*() and
referencing metadata templates, call a metadata accessor function
corresponding to the UnboundGenericType of the NominalTypeDecl.
The body of this accessor forwards arguments to a runtime metadata
instantiation function, together with the template.
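Rendered as C++ for illustration (the accessor is really emitted as LLVM IR, and the pattern symbol and helper names here are hypothetical):

    // Hypothetical accessor for a generic type 'Pair<T, U>'.
    extern "C" void *swift_getGenericMetadata(void *pattern, const void *args);
    extern char PairMetadataPattern;  // hypothetical template symbol

    void *getPairMetadata(void *tMetadata, void *uMetadata) {
      void *args[] = {tMetadata, uMetadata};
      // Callers only see this accessor; the template stays private.
      return swift_getGenericMetadata(&PairMetadataPattern, args);
    }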
Also, move some code around, so that metadata accesses which are
only done as part of the body of a metadata accessor function are
handled separately in emitTypeMetadataAccessFunction().
Apart from protocol conformances, this means metadata templates are
no longer referenced from outside the module where they were defined.
Recent changes added support for resiliently-sized enums, and
enums resilient to changes in implementation strategy.
This patch adds resilient case numbering, fixing the problem
where adding new payload cases would break existing code by
changing the numbering of no-payload cases.
The problem is that internally, enum cases are numbered with payload
cases coming first, followed by no-payload cases. While each list
is itself in declaration order, with new additions coming at the
end, we need to partition it to give us a fast runtime test for
"is this a payload or no-payload case index."
The resilient numbering strategy used here is that the getEnumTag
and destructiveInjectEnumTag value witness functions now take a
tag index in the range [-ElementsWithPayload..ElementsWithNoPayload-1].
Payload elements are numbered in *reverse* declaration order, so
adding new payload cases yields decreasing tag indices, and adding
new no-payload cases yields increasing tag indices, allowing use
sites to be resilient.
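As a worked example (hypothetical enum; the exact adjustment lives in IRGen), consider payload cases A and B declared before no-payload cases c and d:

    // One mapping consistent with the scheme above:
    //   payload cases, reverse declaration order:  A -> -1, B -> -2
    //   no-payload cases, declaration order:       c ->  0, d ->  1
    // A later payload case gets -3 and a later no-payload case gets 2,
    // so existing tag indices never change.
    int resilientTagFromFragileIndex(unsigned fragileIndex,
                                     unsigned numPayloadCases) {
      if (fragileIndex < numPayloadCases)          // a payload case
        return -int(fragileIndex) - 1;
      return int(fragileIndex - numPayloadCases);  // a no-payload case
    }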
This adds the adjustment between 'fragile' and 'resilient' tag
indices in a somewhat unsatisfying manner, because the calculation
could be pushed down further into EnumImplStrategy, simplifying
both the IRGen code and the generated IR. I'll clean this up later.
In the meantime, clean up some other stuff in GenEnum.cpp, mostly
abstracting code that walks cases.
Use them to generate value witnesses when the type has dynamic packing.
Regularize the interface for calling value witnesses.
Not a huge difference yet, although we do re-use local type data
a little more effectively now.
Since generating meaningful IR value names is somewhat expensive,
allow it to be efficiently controlled in IRGen. By default, enable
meaningful value names only when generating .ll output.
I considered giving protocol witness tables the name T:Protocol
instead of T.Protocol, but decided that I didn't want to update that
many test cases.
Most notably, the source caches did not respect dominance. The
simplest solution was just to drop them in favor of the ordinary
caching system; this is unfortunate because it requires walking
over the path twice instead of exploiting the trie, but it's much
easier to make this work, especially in combination with the other
caching mechanisms at play.
This will be tested by later commits that enable lazy-loading of
local type data in various contexts.
This eliminates some minor overheads, but mostly it eliminates
a lot of conceptual complexity, because the overhead previously
appeared outside of its proper context.
The main idea here is that we really, really want to be
able to recover the protocol requirement of a conformance
reference even if it's abstract due to the conforming type
being abstract (e.g. an archetype). I've made the conversion
from ProtocolConformance* explicit to discourage casual
contamination of the Ref with a null value.
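A simplified sketch of the resulting shape (the real class is richer):

    // Sketch: a conformance reference that always knows its protocol,
    // even when the conformance is abstract.
    class ProtocolDecl {};

    class ProtocolConformance {
      ProtocolDecl *Proto;
    public:
      explicit ProtocolConformance(ProtocolDecl *proto) : Proto(proto) {}
      ProtocolDecl *getProtocol() const { return Proto; }
    };

    class ProtocolConformanceRef {
      ProtocolDecl *Proto;            // always non-null
      ProtocolConformance *Concrete;  // null when abstract
    public:
      // Explicit: a ProtocolConformance* (possibly null) can't silently
      // become a Ref through an implicit conversion.
      explicit ProtocolConformanceRef(ProtocolConformance *concrete)
          : Proto(concrete->getProtocol()), Concrete(concrete) {}
      explicit ProtocolConformanceRef(ProtocolDecl *proto)
          : Proto(proto), Concrete(nullptr) {}

      bool isConcrete() const { return Concrete != nullptr; }
      ProtocolDecl *getRequirement() const { return Proto; }
    };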
As part of this change, always make conformance arrays in
Substitutions fully parallel to the requirements, as opposed
to occasionally being empty when the conformances are abstract.
As another part of this, I've tried to proactively fix
prospective bugs with partially-concrete conformances, which I
believe can happen with concretely-bound archetypes.
In addition to just giving us stronger invariants, this is
progress towards the removal of the archetype from Substitution.
Instead of categorically forbidding caching within a conditional
scope, permit it but remember how to remove the cache entries.
This means that ad-hoc emission code can now get exactly the
right caching behavior if it uses this properly. In keeping
with that, adjust a bunch of code to properly nest scopes
according to the conditional paths they enter.
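A sketch of the mechanism (hypothetical names):

    #include <string>
    #include <unordered_map>
    #include <vector>

    // Sketch: cache entries added inside a conditional scope are
    // remembered and removed when the scope ends, instead of being
    // forbidden outright.
    struct LocalTypeDataScope {
      std::unordered_map<std::string, void *> &Cache;
      std::vector<std::string> AddedKeys;

      explicit LocalTypeDataScope(std::unordered_map<std::string, void *> &cache)
          : Cache(cache) {}

      void cacheValue(const std::string &key, void *value) {
        if (Cache.emplace(key, value).second)
          AddedKeys.push_back(key);  // remember what to undo
      }

      // On scope exit, entries that might not dominate the fallthrough
      // are erased again.
      ~LocalTypeDataScope() {
        for (auto &key : AddedKeys)
          Cache.erase(key);
      }
    };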
There are several interesting new features here.
The first is that, when emitting a SILFunction, we're now able to
cache type data according to the full dominance structure of the
original function. For example, if we ask for type metadata, and
we've already computed it in a dominating position, we're now able
to re-use that value; previously, we were limited to only doing this
if the value was from the entry block or the LLVM basic block
matched exactly. Since this tracks the SIL dominance relationship,
things in IRGen which add their own control flow must be careful
to suppress caching within blocks that may not dominate the
fallthrough; this mechanism is currently very crude, but could be
made to allow a limited amount of caching within the
conditionally-executed blocks.
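In sketch form (the dominance query is provided by a dominator tree analysis, as in llvm::DominatorTree):

    #include <vector>

    struct BasicBlock;
    struct DominatorTree {
      // Provided by the dominator tree analysis.
      bool dominates(const BasicBlock *a, const BasicBlock *b) const;
    };

    struct CachedValue {
      const BasicBlock *DefiningBlock;
      void *Value;
    };

    // Sketch: reuse a cached value only when its defining block dominates
    // the current insertion point; otherwise the caller recomputes.
    void *lookUp(const std::vector<CachedValue> &entries,
                 const DominatorTree &DT, const BasicBlock *currentBlock) {
      for (const auto &entry : entries)
        if (DT.dominates(entry.DefiningBlock, currentBlock))
          return entry.Value;
      return nullptr;  // fall back to emitting the computation
    }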
This query is done using a proper dominator tree analysis, even at -O0.
I do not expect that we will frequently need to actually build the
tree, and I expect that the code-size benefits of doing a real
analysis will be significant, especially as we move towards making
more metadata lazily computed.
The second feature is that this adds support for "abstract"
cache entries, which indicate that we know how to derive the metadata
but haven't actually done so. This code isn't yet tested, but
it's going to be the basis of making a lot of things much lazier.
This implements access to the type metadata of associated types
in protocol witness tables.
We use the global access functions when the result isn't
dependent, and a simple accessor when the result can be cheaply
recovered from the conforming metadata. Otherwise, we add a
cache slot to a private section of the witness table, forcing
an instantiation per conformance. Like generic type metadata,
concrete instantiations of generic conformances are memoized.
There's a fair amount of code in this patch that can't be
dynamically tested at the moment because of the widespread
reliance on recursive expansion of archetypes / dependent
types. That's something we're now theoretically in a position
to change, and as we do so, we'll test more of this code.
This speculatively re-applies 7576a91009,
i.e. reverts commit 11ab3d537f.
We have not been able to duplicate the build failure in
independent testing; it might have been spurious or unrelated.
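A rough sketch of the cache-slot path described above (hypothetical names; the private section follows the negative-offset layout described earlier):

    #include <atomic>

    struct Metadata;
    Metadata *instantiateAssociatedTypeMetadata(void **witnessTable);

    // Sketch: memoize the associated type's metadata in a cache slot in
    // the witness table's private section, one per conformance instance.
    Metadata *getAssociatedTypeMetadata(void **witnessTable, int cacheSlot) {
      // Private data lives at negative offsets from the address point.
      auto &slot = *reinterpret_cast<std::atomic<Metadata *> *>(
          &witnessTable[-1 - cacheSlot]);
      if (Metadata *cached = slot.load(std::memory_order_acquire))
        return cached;  // already instantiated for this conformance
      Metadata *result = instantiateAssociatedTypeMetadata(witnessTable);
      slot.store(result, std::memory_order_release);  // memoize
      return result;
    }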
This reverts commit 6528ec2887, i.e.
it reapplies b1e3120a28, with a fix
to unbreak release builds.
Unlike the other report* functions, this runtime entry point doesn't just report the error; it also does the crashing.
Reapplying independent of unrelated reverted patches.
This reverts commit b1e3120a28.
Reverting because this patch uses WitnessTableBuilder::PI in NDEBUG code.
That field only exists when NDEBUG is not defined, but now NextCacheIndex, a
field that exists regardless, is being updated based on information from PI.
This problem means that Release builds do not work.
This value witness function takes an address of an enum value where the
payload has already been initialized, together with a case index, and
forms the enum value.
The formal behavior can be thought of as satisfying an identity
together with the two existing enum value witnesses. For any enum
value, the following sequence leaves the value unchanged:
tag = getEnumTag(value)
destructiveProjectEnumData(value)
destructiveInjectEnumTag(value, tag)
This is the last missing piece for the inject_enum_addr SIL instruction
to handle resilient enums, allowing the implementation of an enum to be
decoupled from its uses. Also, it should be useful for dynamically
constructing enum cases with write reflection, once we get around to
doing such a thing.
The body of the value witness is emitted by a new emitStoreTag() method
on EnumImplStrategy. This is similar to the existing storeTag(), except
the case index is a value instead of a constant.
This is implemented as follows for the different enum strategies:
1) For enums consisting of a single case, this is trivial.
2) For enums where all cases are empty, the witness stores the case index into the
payload area.
3) For enums with a single payload case, emits a call to a runtime
function. Note that for non-generic single payload enums, this could
be open-coded more efficiently, but the function still has the
correct behavior since it supports extra inhabitants and so on.
A follow-up patch will make this more efficient.
4) For multi-payload enums, there are two cases:
a) If one of the payloads is generic or resilient, the enum is
dynamically-sized, and a call to a runtime function is emitted.
b) If the entire enum is fixed-size, the value witness checks if
the case is empty or not.
If the case has a payload, the case index is swizzled into
spare bits of the payload, if any, with remaining bits going
into the extra tag area.
If the case is empty, the case index is swizzled into the
spare bits of the payload, the remaining bits of the payload,
and the extra tag area.
The implementations of emitStoreTag() duplicate existing logic in the
enum strategies, in particular case 4)b) is rather complicated.
Code cleanups are welcome here!
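For the simplest strategies, the emitted witness is morally equivalent to something like this (a sketch; the real code is emitted as LLVM IR by emitStoreTag()):

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Sketch of strategy 2): with no payloads anywhere, the case index
    // itself is the whole representation. (Little-endian assumed, and
    // tagByteCount at most sizeof(uint32_t), for brevity.)
    void storeTagAllEmptyCases(void *enumAddr, uint32_t caseIndex,
                               size_t tagByteCount) {
      std::memcpy(enumAddr, &caseIndex, tagByteCount);
    }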
Modeling nonescaping captures as @inout parameters is wrong, because
captures are allowed to share state, unlike 'inout' parameters, which
are allowed to assume to some degree that there are no aliases during
the parameter's scope.
To model this, introduce a new @inout_aliasable parameter convention
to indicate an indirect parameter that can be written to not only by
the current function, but by well-typed, well-synchronized aliasing
accesses too.
(This is unrelated to our discussions of adding a
"type-unsafe-aliasable" annotation to pointer_to_address to allow for
safe pointer punning.)
This is a bit of a hodge-podge of related changes that I decided
weren't quite worth teasing apart:
First, rename the weak{Retain,Release} entrypoints to
unowned{Retain,Release} to better reflect their actual use
from generated code.
Second, standardize the names of the rest of the entrypoints around
unowned{operation}.
Third, standardize IRGen's internal naming scheme and API for
reference-counting so that (1) there are generic functions for
emitting operations using a given reference-counting style and
(2) all operations explicitly call out the kind and style of
reference counting.
Finally, implement a number of new entrypoints for unknown unowned
reference-counting. These entrypoints use a completely different
and incompatible scheme for working with ObjC references. The
primary difference is that the new scheme abandons the flawed idea
(which I take responsibility for) that we can simulate an unowned
reference count for ObjC references, and instead moves towards an
address-only scheme when the reference might store an ObjC reference.
(The current implementation is still trivially takable, but that is
not something we should be relying on.) These will be tested in a
follow-up commit. For now, we still rely on the bad assumption of
reference-countability.
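To illustrate the third point, the internal shape is roughly this (a sketch; entrypoint names abbreviated):

    #include <cstdio>

    enum class ReferenceCounting { Native, ObjC, Unknown };

    // Sketch: one generic emitter per operation, parameterized by the
    // reference-counting style, so every call site names both the kind
    // of operation and the style of counting.
    void emitUnownedRetain(void *value, ReferenceCounting style) {
      switch (style) {
      case ReferenceCounting::Native:
        std::printf("emit call to swift_unownedRetain\n");
        break;
      case ReferenceCounting::Unknown:
        std::printf("emit call to the unknown-unowned entrypoint\n");
        break;
      case ReferenceCounting::ObjC:
        std::printf("ObjC unowned references use the address-only scheme\n");
        break;
      }
    }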
John and I discussed this and agreed that we only need two cases here,
not four. In the future this may be merged with ResilienceExpansion,
and become a struct with additional availability information, but
we're definitely sure we don't need four levels here.
Currently, a source for polymorphic parameter fulfillments is removed
from the source list if the size of the map of fulfillments does not
increase during its consideration. This isn't quite correct because it's
possible for a new fulfillment that uses the source to replace an
existing one without creating a new entry in the map. Have the code be
more explicit about the condition under which a source may be safely
discarded.
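In sketch form, the fix is to track whether any fulfillment still refers to the source, rather than inferring that from whether the map grew:

    #include <map>
    #include <string>

    struct Fulfillment {
      unsigned SourceIndex;  // which polymorphic source it derives from
    };

    // Sketch: a source is only discardable if no fulfillment in the map
    // refers to it. Checking that the map *grew* misses the case where a
    // new fulfillment replaced an existing entry.
    bool canDiscardSource(const std::map<std::string, Fulfillment> &fulfillments,
                          unsigned sourceIndex) {
      for (const auto &entry : fulfillments)
        if (entry.second.SourceIndex == sourceIndex)
          return false;
      return true;
    }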
<rdar://problem/23121786> Crash emitting IR SIL function for Carthage
Swift SVN r32763
Split this code across three files:
- GenProto.cpp for protocols and protocol conformances
- GenExistential.cpp for existential type layout and operations
- GenArchetype.cpp for archetype type layout and operations
Swift SVN r32493
This allows the use of an arbitrary-length path.
Eventually, this will also allow us to be much lazier in IRGen
about actually materializing such information in a function;
but for now, leave the existing code-generation patterns intact.
Swift SVN r32454
By using relative references, either directly to symbols internal to the current TU, or to the GOT entry for external symbols, we avoid unnecessary runtime relocations, and we save space on 64-bit platforms, since a single image is still <2GB in size. For the 64-bit standard library, this trades 26KB of fake-const data in __DATA,__swift1_proto for 13KB of true-const data in __TEXT,__swift2_proto. Implements rdar://problem/22334380.
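The core trick, in sketch form (the runtime's real helpers are templated):

    #include <cstdint>

    // Sketch of a 32-bit relative reference: the field stores the offset
    // from its own address, so the record needs no load-time relocation
    // and can live in read-only memory. For external symbols, the low
    // bit can mark the target as a GOT entry to load through instead.
    struct RelativePointer {
      int32_t Offset;

      const void *get() const {
        return reinterpret_cast<const char *>(this) + Offset;
      }
    };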
Swift SVN r31555
Full type metadata isn't necessary to calculate the runtime layout of
a dependent struct or enum; we only need the non-function data from
the value witness table (size, alignment, extra inhabitant count, and
POD/bitwise-takable/etc. flags).
This can be generated more efficiently than the type metadata for many
types: if we know a specific instantiation is fixed-layout, we can
regenerate the layout information, or if we know the type has the same
layout as another well-known type, we can get the layout from a common
value witness table.
This breaks a deadlock in most (but not all) cases where a value type
is recursive using classes or fixed-layout indirected structs like
UnsafePointer. rdar://problem/19898165
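In sketch form, the layout-only record the text describes:

    #include <cstddef>
    #include <cstdint>

    // Sketch: the non-function portion of a value witness table is all
    // that's needed to lay out an aggregate containing the type.
    struct TypeLayout {
      size_t Size;
      size_t Stride;
      uint32_t Flags;                 // alignment, POD, bitwise-takable, ...
      uint32_t ExtraInhabitantCount;
    };

    // Every class reference has pointer layout, so a recursive field can
    // take its layout from a common, shared record instead of forcing
    // the class's full metadata, which is what breaks the deadlock.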
This time, factor out the ObjC-dependent parts of the tests so they only run with ObjC interop.
Swift SVN r30266