The pattern of using a breadth-first traversal over a
control-flow graph shows up in a few places within the
SIL optimizer, so this data structure unifies those
manually-implemented traversals into a common utility.
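A minimal sketch of such a worklist, with hypothetical names (not the actual utility in SwiftCompilerSources); `BasicBlock` here is a stand-in that only needs to be Hashable and expose its successors:
```swift
final class BasicBlock: Hashable {
  let name: String
  var successors: [BasicBlock] = []
  init(name: String) { self.name = name }
  static func == (lhs: BasicBlock, rhs: BasicBlock) -> Bool { lhs === rhs }
  func hash(into hasher: inout Hasher) { hasher.combine(ObjectIdentifier(self)) }
}

struct BasicBlockWorklist {
  private var blocks: [BasicBlock] = []
  private var pushed: Set<BasicBlock> = []
  private var readIndex = 0

  // Enqueue a block unless it was already enqueued at some point.
  mutating func pushIfNotVisited(_ block: BasicBlock) {
    if pushed.insert(block).inserted {
      blocks.append(block)
    }
  }

  // Dequeue in FIFO order, which yields a breadth-first visit order.
  mutating func pop() -> BasicBlock? {
    guard readIndex < blocks.count else { return nil }
    defer { readIndex += 1 }
    return blocks[readIndex]
  }
}

// Usage: visit all blocks reachable from the entry block, breadth-first.
func visitReachableBlocks(from entry: BasicBlock, _ visit: (BasicBlock) -> Void) {
  var worklist = BasicBlockWorklist()
  worklist.pushIfNotVisited(entry)
  while let block = worklist.pop() {
    visit(block)
    for successor in block.successors {
      worklist.pushIfNotVisited(successor)
    }
  }
}
```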
The `hop_to_executor` instruction is a synchronization point: arbitrary other code may run at that point,
which can potentially release objects.
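A hedged Swift-level illustration (hypothetical types, not the code from the bug):
```swift
final class Buffer {
  var bytes: [UInt8] = []
}

actor Logger {
  func log(_ message: String) { print(message) }
}

// The `await` hops to `logger`'s executor, which shows up as a
// `hop_to_executor` in SIL. Arbitrary code can run on that executor during
// the suspension and may release objects, so a pass cannot assume that
// reference counts or object lifetimes are unchanged across the hop.
func process(_ buffer: Buffer, on logger: Logger) async {
  await logger.log("processing \(buffer.bytes.count) bytes")
}
```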
Fixes a miscompile
rdar://110924258
Look through `upcast` and `init_existential_ref` instructions and replace the operand of this cast instruction with the original value.
For example:
```
%2 = upcast %1 : $Derived to $Base
%3 = init_existential_ref %2 : $Base : $Base, $AnyObject
checked_cast_br %3 : $AnyObject to Derived, bb1, bb2
```
This makes it more likely that the cast can be constant folded because the source operand's type is more accurate.
In the example above, the cast reduces to
```
checked_cast_br %1 : $Derived to Derived, bb1, bb2
```
which can be trivially folded because the cast always succeeds.
Found while looking at `_SwiftDeferredNSDictionary.bridgeValues()`
The address checker records uses in its livenessUses map. Previously,
that map mapped from an instruction to a range of fields of the type.
But an instruction can use multiple discontiguous fields of a single
value. Such instructions are now recorded properly by changing the map
to store a bit vector for each instruction.
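As a rough sketch of the shape of the change (hypothetical, simplified stand-ins for the checker's types):
```swift
struct Instruction: Hashable { let id: Int }

// Before: one contiguous range per instruction; cannot describe a use of
// fields f1 and f3 of a three-field type without also including f2.
var livenessUsesAsRanges: [Instruction: Range<Int>] = [:]

// After: one bit per field, so discontiguous uses are representable.
var livenessUses: [Instruction: [Bool]] = [:]
livenessUses[Instruction(id: 1)] = [true, false, true]   // uses f1 and f3 only
```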
rdar://110676577
FieldSensitivePrunedLiveness is used as a vectorization of
PrunedLiveness. An instance of FSPL with N elements needs to be able to
represent the same states as N instances of PL.
Previously, it failed to do that in two significant ways:
(1) It attempted to save space by representing the elements that are live
as a range. This failed to account for instructions which use
non-contiguous fields of an aggregate, for example:
```
apply(
  @owned (struct_element_addr %s, #S.f1),
  @owned (struct_element_addr %s, #S.f3)
)
```
(2) It used a single bit to represent whether the instruction was
consuming. This failed to account for instructions which consumed
some fields and borrowed others, for example:
```
apply(
  @owned (struct_element_addr %s, #S.f1),
  @guaranteed (struct_element_addr %s, #S.f2)
)
```
The fix for (1) is to use a bit vector to represent which elements
are used by the instruction. The fix for (2) is to use a second bit
vector to represent which elements are _consumed_ by the instruction.
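A rough sketch of the new representation (hypothetical names, not the actual FieldSensitivePrunedLiveness API): every interesting user carries two bit vectors over the N elements.
```swift
struct Instruction: Hashable { let id: Int }

struct InterestingUser {
  var usedElements: [Bool]       // fix (1): which elements this user touches
  var consumedElements: [Bool]   // fix (2): the subset it actually consumes
}

var interestingUsers: [Instruction: InterestingUser] = [:]

// The second apply above, for a struct S with fields f1, f2, f3:
// it uses f1 and f2, but consumes only f1 (f2 is borrowed).
interestingUsers[Instruction(id: 2)] = InterestingUser(
  usedElements:     [true, true, false],
  consumedElements: [true, false, false])
```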
Adapted the move-checker to use the new representation.
rdar://110909290
And replace them with an explicit `metatype` instruction in the entry block.
This allows such metatype instructions to be deleted if they are dead.
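For illustration, a hedged Swift-level sketch (not the SIL transformation itself):
```swift
// In a specialization of this function for a concrete T, say T == Int, the
// `type` parameter is always `Int.self`. The specialized function therefore
// doesn't need the argument: the value can be recreated with a `metatype`
// instruction in the entry block, and deleted entirely if it turns out to be
// dead.
func describe<T>(_ type: T.Type, _ value: T) -> String {
  return "\(type): \(value)"
}
```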
This was already done for performance-annotated functions. But now do this for all functions.
It is essential that performance-annotated functions are specialized in the same way as other functions.
Otherwise the same specialization could have different performance characteristics in different modules,
and it's up to the linker to select one of those ODR functions when linking.
Also, dropping metatype arguments is good for performance and code size in general.
This change also contains a few bug fixes for dropping metatype arguments.
rdar://110509780
This change fixes the handling of variadic-tuple results. There are three parts to this.
First, fix the emission of indirect result parameters to do a
proper abstraction-pattern-aware traversal of tuple patterns.
There was a FIXME here and everything.
Second, fix the computation of substituted abstraction
patterns to properly handle vanishing tuples. The previous code
was recursively destructuring tuples, but only when it saw a
tuple as the substituted type, which of course breaks on vanishing
tuples.
Finally, fix the emission of returns into vanishing tuple
patterns by allowing the code to not produce a TupleInitialization
when the tuple pattern vanishes. We should always get a singleton
element initializer in this case.
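For reference, a small Swift example of the vanishing-tuple case being fixed here:
```swift
// When the pack `each T` substitutes to a single element, the result type
// `(repeat each T)` collapses to that element's type instead of remaining a
// one-element tuple.
func makeTuple<each T>(_ value: repeat each T) -> (repeat each T) {
  return (repeat each value)
}

let single: Int = makeTuple(42)               // the tuple vanishes to `Int`
let pair: (Int, String) = makeTuple(1, "a")   // a real tuple with two elements
```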
Fixes rdar://109843932, plus a closely-related test case for
vanishing tuples that I added myself.
Data structures must be layout compatible when built with and without asserts.
Fixes a compiler crash when the C++ sources are built without asserts, because the SwiftCompilerSources are built with asserts.
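A hypothetical illustration of the hazard (`ENABLE_ASSERTIONS` is an assumed flag name, for illustration only):
```swift
// A stored property that exists only in assert-enabled builds changes the
// size and field offsets of the type, so code compiled without asserts
// disagrees with code compiled with them about the layout.
struct NodeState {
  var kind: UInt8
#if ENABLE_ASSERTIONS
  var verificationID: Int   // assert-only field => different layout
#endif
  var isDeleted: Bool
}
```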
rdar://110363377
* add `UnownedRetainInst` and `UnownedReleaseInst`
* add `var value` to `RetainValueInst` and `ReleaseValueInst`
* make the protocol `UnaryInstruction` be an `Instruction`
* add `var Type.isValueTypeWithDeinit`
* add `var Type.isUnownedStorageType`
* add `var OperandArray.values`
Properties that are marked as initialized are printed as `[assign=<index>]`,
where `<index>` points to the property's position in the
`getInitializedProperties()` list.
Some properties from the `initializes(...)` list could already be initialized,
which means that Raw SIL lowering has to emit a `destroy_addr` for them
before calling the init accessor.
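For context, a hedged sketch of the surface-level feature: an init accessor declares which stored properties it initializes (the `initializes` list mentioned above). The spelling below is the `@storageRestrictions` form from SE-0400, which may differ from the draft spelling in use when this change was made.
```swift
struct Angle {
  var degrees: Double

  var radians: Double {
    @storageRestrictions(initializes: degrees)
    init(initialValue) {
      degrees = initialValue * 180 / .pi
    }
    get { degrees * .pi / 180 }
    set { degrees = newValue * 180 / .pi }
  }

  init(radians: Double) {
    self.radians = radians   // runs the init accessor, which initializes `degrees`
  }
}
```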
This instruction is similar to AssignByWrapperInst, but instead of having
a destination operand, the initialization is fully factored into the init
function operand. Like AssignByWrapper, AssignOrInit has partial application
operands for both the initializer and the setter, and DI lowers the
instruction to a call to one or the other, based on whether the assignment is
an initialization or a setter call.
Just the $*T -> $*@moveOnly T variant for addresses. Unlike the object version,
this acts like a cast rather than something that provides semantics from the
frontend to the optimizer.
The reason I am using a different instruction for addresses and objects here
is that the object checker doesn't have to deal with things like initialization.
drop_deinit only exists in ownership SIL. Remove IRGen support.
The result of a drop_deinit can only ever be destroyed or destructured.
A destructure of a struct-with-deinit requires a drop_deinit operand.
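For context, a hedged Swift-level sketch of what `drop_deinit` models: suppressing the deinitializer of a noncopyable value in a consuming method (spelled `discard self` in current Swift; the surface spelling has changed over time).
```swift
struct FileDescriptor: ~Copyable {
  let fd: Int32

  consuming func close() {
    // ... actually close `fd` here ...
    discard self   // suppress the deinit; lowered to a `drop_deinit` in SIL
  }

  deinit {
    // ... close `fd` if the value is destroyed without calling close() ...
  }
}
```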