NOTE: debug_value [moved] appearing in the source code implies a _move was
used, so this will not affect current stable Swift code.
This is just a first version that I am using to commit/bring up tests for
IRGen, in preparation for a full dataflow version of this patch.
Big picture: there is a bunch of work done at the LLVM level in the coroutine
splitter to handle communicating live variables across the various coroutine
funclets. That logic is all driven by llvm.dbg.declare, and we would need to
update it in the coroutine splitter to handle llvm.dbg.addr. Rather than do
this, after some conversation, AdrianP and I realized that we could get the
same effect as a llvm.dbg.declare by just redeclaring the current live set of
debug_value instructions after each possible coroutine funclet start. To do
this in full generality we need a full dataflow analysis, but just to bring
this up we initially perform a dominance propagation algorithm of the
following sort:
1. We walk the CFG along successors. By doing this we guarantee that we visit
blocks after their dominators.
2. When we visit a block, we walk the block from start->end. During this walk:
a. We grab a new block state from the centralized block->blockState map. This
state is a [SILDebugVariable : DebugValueInst].
b. If we see a debug_value, we map blockState[debug_value.getDbgVar()] =
debug_value. This ensures that when we get to the bottom of the block, we
have pairs of SILDebugVariable + last debug_value on it.
c. If we see a coroutine funclet boundary, we clone the current tracked
set of our block state and then walk up the dom tree, re-emitting in each
block any debug_value whose SILDebugVariable we have not already
emitted. This is maintained by using a visited set of SILDebugVariable for
each funclet boundary.
The end result is that at the beginning of each funclet we effectively
re-declare the debug info for each live address.
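To make the shape of this walk concrete, here is a minimal, self-contained
Swift sketch (the types DebugVariable, Inst, and Block are hypothetical
stand-ins, not the real SIL classes): it records the last debug_value per
variable in each block and, at each funclet boundary, re-emits the
debug_values found by walking up the dominator chain for variables not
already seen in the current block.
```
struct DebugVariable: Hashable { var name: String }

enum Inst {
    case debugValue(DebugVariable)   // stand-in for a SIL debug_value
    case funcletBoundary             // stand-in for a coroutine funclet start
    case other
}

final class Block {
    var insts: [Inst]
    var idom: Block?                                // immediate dominator
    var lastDebugValue: [DebugVariable: Inst] = [:] // per-block state
    init(_ insts: [Inst], idom: Block? = nil) {
        self.insts = insts
        self.idom = idom
    }
}

// Blocks must be visited dominators-first (e.g. reverse post order).
func redeclareAtFuncletBoundaries(_ blocks: [Block]) {
    for block in blocks {
        var rewritten: [Inst] = []
        for inst in block.insts {
            rewritten.append(inst)
            switch inst {
            case .debugValue(let variable):
                // Track the last debug_value seen for each variable.
                block.lastDebugValue[variable] = inst
            case .funcletBoundary:
                // Clone the currently tracked set, then walk up the dom tree
                // collecting debug_values for variables not already covered.
                var seen = Set(block.lastDebugValue.keys)
                var redeclarations = Array(block.lastDebugValue.values)
                var dominator = block.idom
                while let dom = dominator {
                    for (variable, dv) in dom.lastDebugValue
                        where seen.insert(variable).inserted {
                        redeclarations.append(dv)
                    }
                    dominator = dom.idom
                }
                // Re-emit the live set right after the funclet boundary.
                rewritten.append(contentsOf: redeclarations)
            case .other:
                break
            }
        }
        block.insts = rewritten
    }
}
```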
This is insufficient of course for moves that are in conditional control flow,
e.g.:
```
let x = Klass()
if boolValue {
    await asyncCall()
    let _ = _move(x)
}
```
but this at least lets me begin to write tests for this in lldb using straight
line code and work out the rest of the issues in CodeGen using those tests.
This is just a quick fix to stop us from dropping live values such as m in the
following example:
```
public func addressOnlyVarTest<T : P>(_ x: T) {
    var k = x
    k.doSomething()
    let m = _move(k)
    m.doSomething()
    k = x
    k.doSomething()
}
```
Before this change, we would just drop m and one wouldn't even see it in the
debugger.
I am only doing this currently for cases where, when we merge, at least one
alloc_stack was moved. The reason is that to implement this correctly, I need
to use llvm.dbg.addr, and changing the debug info from llvm.dbg.declare to
llvm.dbg.addr requires statistics and needs to be done a little later in the
Swift development process. If one of these alloc_stack instructions has the
[moved] marker attached to it, we know the user /did/ use move, so they have
in a sense opted into having a move function affect their program; thus we are
only changing how new code appears in the debugger.
Reduces the number of `_ContiguousArrayStorage` metadata instantiations.
In order to support constant-time bridging we do need to set the correct
metadata when we bridge to Objective-C, so that the type check succeeds when
bridging back from Objective-C and we can reuse the storage instance rather
than bridging the elements.
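For context, a hedged Swift sketch of the round trip this is about (the
element identity check just shows the shape of the round trip; it is not a
proof that the storage instance was reused):
```
import Foundation

final class Klass {}

let native: [Klass] = [Klass(), Klass()]
// Constant-time bridge to Objective-C; the storage must carry the correct
// element-type metadata at this point.
let objcArray = native as NSArray
// Bridging back: when the storage's type check succeeds, the same storage
// instance can be reused rather than re-bridging each element.
let roundTripped = objcArray as! [Klass]
print(roundTripped[0] === native[0]) // true: same class instances either way
```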
To support dynamically setting the `_ContiguousArrayStorage` element
type I needed to add support for optimizing `alloc_ref_dynamic`
throughout the optimizer.
Possible future improvements:
* Use different metadata such that we can disambiguate native Swift
classes during destruction -- allowing native release rather than unknown
release usage.
* Optimize the newly added semantic function `getContiguousArrayStorageType`.
rdar://86171143
Add NonTypeDependentOperandToValue predicate for composability.
Add a getNonTypeDependentOperandValues(), which can be used as a functor.
The skipTypeDependentOperands parameter complicated the API.
The main effect of this will be that in IRGen we will use llvm.dbg.addr instead
of llvm.dbg.declare. We must do this since llvm.dbg.declare implies that the
given address is valid throughout the program.
This just adds the instructions/printing/parsing/serialization/deserialization.
rdar://85020571
The transformation propagates stack-initialized values. It checks
for any writes to the stack before doing so. OpenExistential instructions
now have a mutable property, so that also needs to be checked.
Add OpenExistentialAddrInst::isRead(SILInstruction).
Replace an `isa<OpenExistentialAddrInst>(user)` check with
`OpenExistentialAddrInst::isRead(user)`.
Fixes rdar://88664423 (SR-15791: Miscompile under -O with mutating protocol method)
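A hedged reconstruction of the kind of pattern involved (hypothetical names;
the actual reproducer is in the radar): a mutating protocol requirement called
through a stack-initialized existential, where the open_existential_addr is a
mutable access and therefore must block propagation of the initializing store.
```
protocol Counter {
    var value: Int { get }
    mutating func increment()
}

struct IntCounter: Counter {
    var value = 0
    mutating func increment() { value += 1 }
}

func test() -> Int {
    // Stack-initialized existential value.
    var c: Counter = IntCounter()
    // Mutates through an open_existential_addr with mutable access.
    c.increment()
    // Must observe the mutation; propagating the initial value to this read
    // is the shape of the miscompile being fixed.
    return c.value
}

print(test()) // expected: 1
```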
This optimizes keypath-closures, like
```
a.map { \.x }
```
It results in a significant performance improvement for such code patterns.
rdar://87968067
This is an instruction that I am going to use to drive some of the ownership
based dataflow optimizations that I am writing now. The instruction contains a
kind that lets one know what type of checking is required and avoids the need
to add a bunch of independent instructions for independent checkers. Each
checker is responsible for removing all of its own mark instructions. NOTE:
MarkMustCheckInst is only allowed in Raw SIL, since once we are in Canonical
SIL we want to ensure that all such checking has already occurred.
The new flag will be used to track whether a move_value corresponds to a
source-level lexical scope. Here, the flag is just added to the
instruction and represented in textual and serialized SIL.
Introduce a new instruction `dealloc_stack_ref` and remove the `stack` flag from `dealloc_ref`.
`dealloc_ref [stack]` was confusing, because all it does is mark the deallocation of the stack space for a stack-promoted object.
And a few other small related changes:
* remove libswiftPassInvocation from SILInstructionWorklist (because it's not needed)
* replace start/finishPassRun with start/finishFunction/InstructionPassRun
NFC
To give a bit more information, the way the move function is currently
implemented is:
1. SILGen emits a builtin "move" that is called within the function _move in the
stdlib.
2. Today, if the final inlined type is address only, Mandatory Inlining inlines
builtin "move" as mark_unresolved_move_addr. Otherwise, if the inlined type
is loadable, it performs a load [take] + move [diagnostic] + store [init].
3. In the diagnostic pipeline before any mem optimizations have run, we run the
move checker for addresses. This eliminates /all/ mark_unresolved_move_addr
as part of emitting diagnostics. In order to make this work, we perform a
small optimization before the checker runs that moves the
mark_unresolved_move_addr from being on temporary alloc_stacks to the true
base underlying address we are trying to move. This optimization is necessary
since _move is generic and oftentimes SILGen will emit a temporary that we do
not want.
4. Then after we have run the guaranteed mem optimizations, we run the object
based move checker emitting diagnostics.
This PR changes the scheme above to the following:
1. SILGen emits a builtin "move" that is called within the function _move in the
stdlib.
2. Mandatory Inlining inlines builtin "move" as mark_unresolved_move_addr.
3. In the diagnostic pipeline, before we have run any mem optimizations and
before we have run the actual move address checker, we massage the IR as
above, but in a separate pass where in addition we try to match this pattern:
```
%temporary = alloc_stack $LoadableType
store %1 to [init] %temporary : $*LoadableType
mark_unresolved_move_addr %temporary to %otherAddr : $*LoadableType
destroy_addr %temporary : $*LoadableType
```
and transform it to:
```
%temporary = alloc_stack $LoadableType
%2 = move_value [allows_diagnostics] %1 : $LoadableType
store %2 to [init] %temporary : $*LoadableType
destroy_addr %temporary : $*LoadableType
```
ensuring that the object move checker will handle this (a Swift-level sketch
of source that produces this pattern follows this list).
4. Then after we have run the guaranteed mem optimizations, we run the object
based move checker emitting diagnostics.
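As noted in step 3, here is a hedged Swift-level sketch of source that
produces the matched pattern for a loadable type. It reuses the underscored
_move entry point from the earlier examples; Klass and consumeOnce are
hypothetical names for illustration.
```
final class Klass {
    func doSomething() {}
}

func consumeOnce(_ k: Klass) {
    // `_move` of a loadable (class) value: SILGen routes it through a
    // temporary alloc_stack, and the rewrite above turns the store into a
    // move_value [allows_diagnostics] that the object checker understands.
    let m = _move(k)
    m.doSomething()
    // Any later use of `k` here would be diagnosed by the object-based move
    // checker in step 4.
}
```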
During copy propagation (for which -enable-copy-propagation must still
be passed), also try to shrink borrow scopes by hoisting end_borrows
using the newly added ShrinkBorrowScope utility.
Allow end_borrow instructions to be hoisted over instructions that are
not deinit barriers for the value which is borrowed. Deinit barriers
include uses of the value, loads of memory, loads of weak references
that may be zeroed during deinit, and "synchronization points".
rdar://79149830
This instruction is similar to a copy_addr except that it marks a move of an
address that has to be checked. In order to keep the memory lifetime verifier
happy, the semantics before the checker runs are that mark_unresolved_move_addr is
equivalent to copy_addr [init] (not copy_addr [take][init]).
The use of this instruction is that Mandatory Inlining converts builtin "move"
to a mark_unresolved_move_addr when inlining the function "_move" (the only
place said builtin is invoked).
This is then run through a special checker (added later in this PR) that
either proves that the mark_unresolved_move_addr can actually be a move, in
which case it converts it to copy_addr [take][init], or, if it cannot be a
move, emits an error and converts the instruction to a copy_addr [init]. After
this is done for all instructions, we loop back through again and emit an
error on any mark_unresolved_move_addr that was not processed earlier,
allowing us to know that we have completeness.
NOTE: The move kills checker for addresses is going to run after Mandatory
Inlining, but before predictable memory opts and friends.
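A hedged sketch of source the address checker is meant to reject, reusing the
P/doSomething shape from the earlier example (the exact diagnostic wording is
not claimed here): the moved-from value is used again, so its
mark_unresolved_move_addr cannot become a real move and an error is emitted.
```
protocol P {
    func doSomething()
}

func useAfterMove<T: P>(_ x: T) {
    // Address-only T: lowered via mark_unresolved_move_addr.
    let y = _move(x)
    y.doSomething()
    // Use after move: the checker emits an error here and falls back to
    // copy_addr [init] semantics for the unresolved move.
    x.doSomething()
}
```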
The reason why I am doing this is that currently checked_cast_br of an AnyObject
may return a different value due to boxing. As an example, this can occur when
performing a checked_cast_br of a __SwiftValue(AnyHashable(Klass())).
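A hedged Swift-level illustration of that boxing (whether the box is actually
created depends on the platform's bridging path, so the comments hedge
accordingly):
```
final class Klass: Hashable {
    static func == (lhs: Klass, rhs: Klass) -> Bool { lhs === rhs }
    func hash(into hasher: inout Hasher) { hasher.combine(ObjectIdentifier(self)) }
}

let k = Klass()
// Casting the AnyHashable struct to AnyObject may wrap it in a __SwiftValue box.
let object: AnyObject = AnyHashable(k) as AnyObject
// The conditional cast (a checked_cast_br in SIL) digs the Klass back out of
// the box, so the result need not be rc-identical to `object` itself.
if let inner = object as? Klass {
    print(inner === k)      // true: rc-identical to the original instance
    print(inner === object) // false whenever `object` is a box rather than `k`
}
```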
To model this in SIL, I added to OwnershipForwardingMixin a bit of information
that states if the instruction directly forwards ownership with a default value
of true. In checked_cast_br's constructor though I specialize this behavior if
the source type is AnyObject and thus mark the checked_cast_br as not directly
forwarding its operand. If an OwnershipForwardingMixin directly forwards
ownership, one can assume that it will always do so in a way that ensures
each forwarded value is rc-identical to whatever result it algebraically
forwards ownership to. So in the context of checked_cast_br, it
means that the source value is rc-identical to the argument of the success and
default blocks.
I added a verifier check to CheckedCastBr that makes sure that it can never
forward guaranteed ownership and have a source type of AnyObject.
In SemanticARCOpts, I modified the code that builds extended live ranges of
owned values (*) to check if an OwnershipForwardingMixin is directly
forwarding. If it is not directly forwarding, then we treat the use just as an
unknown consuming use. This appropriately prevents us from converting such an
owned value to guaranteed in such a case.
I also in SILGen needed to change checked_cast_br emission in SILGenBuilder to
perform an ensurePlusOne on the input operand (converting it to +1 with an
appropriate cleanup) if the source type is an AnyObject. I found this via the
verifier check catching this behavior from SILGen when compiling libswift. This
just shows how important IR verification of invariants can be.
(*) For those unaware, SemanticARCOpts contains a model of an owned value and
all forwarding uses of it called an OwnershipLiveRange.
rdar://85710101
Required for UnsafeRawPointer.withMemoryRebound(to:).
%out_token = rebind_memory %0 : $Builtin.RawPointer to %in_token
%0 must be of $Builtin.RawPointer type
%in_token represents a cached set of bound types from a prior memory state.
%out_token is an opaque $Builtin.Word representing the previously bound
types for this memory region.
This instruction's semantics are identical to ``bind_memory``, except
that the types to which memory will be bound, and the extent of the
memory region, are unknown at compile time. Instead, the bound types are
represented by a token that was produced by a prior memory binding
operation. ``%in_token`` must be the result of a prior bind_memory or
rebind_memory.
Required for UnsafeRawPointer.withMemoryRebound(to:)
%token = bind_memory %0 : $Builtin.RawPointer, %1 : $Builtin.Word to $T
%0 must be of $Builtin.RawPointer type
%1 must be of $Builtin.Word type
%token is an opaque $Builtin.Word representing the previously bound types
for this memory region.
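For orientation, a hedged sketch of the user-level rebinding pattern these
builtins underpin, written against the long-standing
UnsafeMutablePointer.withMemoryRebound(to:capacity:_:) API (the raw-pointer
variant this work enables follows the same bind/rebind shape):
```
func rebindDemo() -> Int32 {
    let count = 4
    let buffer = UnsafeMutablePointer<UInt32>.allocate(capacity: count)
    defer { buffer.deallocate() }
    buffer.initialize(repeating: 0xDEADBEEF, count: count)

    // Temporarily rebind the same memory to a layout-compatible type; on exit
    // the memory is bound back to UInt32 (conceptually: bind the region for
    // the body, then rebind using the saved token to restore the original
    // binding).
    return buffer.withMemoryRebound(to: Int32.self, capacity: count) { ints in
        ints[0]
    }
}

print(rebindDemo())
```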
NOTE: This pass is disabled when -enable-experimental-lexical-lifetimes is
enabled.
When that flag is disabled, this removes the lexical flag from begin_borrow and
alloc_stack. This ensures that we can begin using begin_borrow [lexical] and
friends to emit diagnostics without impacting performance. I am going to be
preparing a subsequent patch that causes us to emit lexical lifetimes by
default. Due to this pass, I am not expecting any issues around perf.
This is a signal to the move value kill analysis that this is a move that should
have diagnostics emitted for it. It is a temporary addition until we add
MoveOnly to the SIL type system.
Leaks checking is not thread safe and e.g. lldb creates multiple SILModules in multiple threads, which would result in false alarms.
Ideally we would make it thread safe, e.g. by putting the instruction counters in the SILModule, but this would be a big effort and it's not worth doing. Leaks checking in the frontend's and SILOpt's SILModule (not including SILModules created for module interface building) is a good enough test.
rdar://84688015
The key thing is that the move checker will not consider the explicit copy value
to be a copy_value that can be rewritten, ensuring that any uses of the result
of the explicit copy_value (consuming or otherwise) are not checked.
Similar to the _move operator I recently introduced, this is a transparent
function so we can perform one level of specialization and thus at least be
generic over all concrete types.
Simple convenience on top of findOwnershipReferenceRoot to handle
guaranteed values forwarded through struct/tuple extract.
Note: eventually this should be handled by a ReferenceRoot abstraction
that stores the component path.
Fix two bugs:
- FirstArgOwnershipForwardingSingleValueInst needs to forward its first operand.
- select_value needs to be a ForwardedBorrow for all cases and the default.