Although nonescaping closures are representationally trivial pointers to their
on-stack context, it is useful to model them as borrowing their captures, which
allows for checking correct use of move-only values across the closure, and
lets us model the lifetime dependence between a closure and its captures without
an ad-hoc web of `mark_dependence` instructions.
During ownership elimination, we eliminate the copy_value/destroy_value
instructions and end the partial_apply's lifetime with an explicit
dealloc_stack, as before, for compatibility with existing IRGen and
non-OSSA-aware passes.
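As a rough sketch (the function name, captured type, and exact printed types
here are assumed, not taken from the patch), the OSSA form and its lowering
look roughly like this:
```
// OSSA: the on-stack closure borrows its capture; its lifetime ends with a
// destroy_value, which also bounds the lifetime dependence on %addr.
%fn = function_ref @closureImpl : $@convention(thin) (@inout_aliasable Klass) -> ()
%pa = partial_apply [callee_guaranteed] [on_stack] %fn(%addr) : $@convention(thin) (@inout_aliasable Klass) -> ()
...
destroy_value %pa : $@noescape @callee_guaranteed () -> ()

// After ownership elimination, the lifetime is ended with an explicit
// dealloc_stack, as before:
dealloc_stack %pa : $@noescape @callee_guaranteed () -> ()
```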
Consider the following example:
```
class Klass {}
@_moveOnly struct Butt {
  var k = Klass()
}
func mixedUse(_: inout Butt, _: __owned Butt) {}
func foo() {
var y = Butt()
mixedUse(&y, y)
}
```
In this case, we want an exclusivity violation. Before this patch, we did a
by-value load [copy] of y and then performed the inout access. Since the
access scopes did not overlap, we would not get an exclusivity violation.
Additionally, since the checker assumes that exclusivity violations will be
caught in such a situation, we convert the load [copy] into a load [take],
causing a later memory lifetime violation, as seen in the following SIL:
```
sil hidden [ossa] @$s4test3fooyyF : $@convention(thin) () -> () {
bb0:
%0 = alloc_stack [lexical] $Butt, var, name "y" // users: %4, %5, %8, %12, %13
%1 = metatype $@thin Butt.Type // user: %3
// function_ref Butt.init()
%2 = function_ref @$s4test4ButtVACycfC : $@convention(method) (@thin Butt.Type) -> @owned Butt // user: %3
%3 = apply %2(%1) : $@convention(method) (@thin Butt.Type) -> @owned Butt // user: %4
store %3 to [init] %0 : $*Butt // id: %4
%5 = begin_access [modify] [static] %0 : $*Butt // users: %7, %6
%6 = load [take] %5 : $*Butt // user: %10 // <————————— This was a load [copy].
end_access %5 : $*Butt // id: %7
%8 = begin_access [modify] [static] %0 : $*Butt // users: %11, %10
// function_ref mixedUse2(_:_:)
%9 = function_ref @$s4test9mixedUse2yyAA4ButtVz_ADntF : $@convention(thin) (@inout Butt, @owned Butt) -> () // user: %10
%10 = apply %9(%8, %6) : $@convention(thin) (@inout Butt, @owned Butt) -> ()
end_access %8 : $*Butt // id: %11
destroy_addr %0 : $*Butt // id: %12
dealloc_stack %0 : $*Butt // id: %13
%14 = tuple () // user: %15
return %14 : $() // id: %15
} // end sil function '$s4test3fooyyF'
```
Now we instead create a [consume] access and get the nice exclusivity error we
are looking for.
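Roughly, as a sketch (the access kind and enforcement below are assumed; the
real output may differ), the consuming argument is now evaluated inside its own
formal access, which overlaps the [modify] access for the inout argument and so
trips the exclusivity checker:
```
%5 = begin_access [deinit] [static] %0 : $*Butt   // consuming access for the 'y' argument
%6 = begin_access [modify] [static] %0 : $*Butt   // inout access for '&y': overlapping, so exclusivity can diagnose
```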
NOTE: As part of this I needed to tweak the verifier so that [deinit] accesses
are now allowed to have any form of access enforcement before we reach Lowered
SIL. I left the original verifier error in place for Lowered SIL and also left
the original error in IRGen. The reason is that I need the deinit access to
represent, semantically, what consuming from a ref_element_addr, a global, or
an escaping mutable var looks like at the SIL level, so that the move checker
can error upon it. Since we will error upon such consumptions in Canonical SIL,
such code patterns will never actually reach Lowered/IRGen SIL, so this is safe
(and the verifier errors will help us if we make any mistakes). In the case of
a non-escaping var, though, we will be able to use the deinit access statically,
and the move checker will make sure that the value is not reused before it is
reinitialized.
rdar://101767439
This instruction can be inserted by Onone optimizations as a replacement for
deleted instructions, to ensure that it is still possible to single-step on its
location.
This allows dynamically indexing into tuples. IRGen support is not yet
implemented.
I think I'm going to need a type_refine_addr instruction in
order to handle substitutions into the operand type that
eliminate the outer layer of tuple-ness. Gonna handle that
in a follow-up commit.
Having added these, I'm not entirely sure we couldn't just use
alloc_stack and dealloc_stack. Well, if we find ourselves adding
a lot of redundancy with those instructions (e.g. around DI), we
can always go back and rip these out.
Add TermInst::forwardedOperand.
Add SILArgument::forwardedTerminatorResultOperand. This API will be
moved into a proper TerminatorResult abstraction.
Remove getSingleTerminatorOperand, which could be misused because it is not
necessarily forwarding ownership.
Remove the isTransformationTerminator API, which is not useful or well
defined.
Rewrite several instances of complex logic to handle block arguments
with the simple terminator result API. This defines away potential
bugs where we don't detect casts that perform implicit conversion.
Replace uses of the SILPhiArgument type and code that explicitly
handles block arguments. Control flow is irrelevant in these
situations. SILPhiArgument needs to be deleted ASAP. Instead, use
simple APIs like SILArgument::isTerminatorResult(). Eventually this
will be replaced by a TerminatorResult type.
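For illustration, here is a sketch (types and values assumed) of a terminator
result: the destination block argument of a switch_enum forwards the payload's
ownership from the terminator's operand and is not a phi.
```
switch_enum %0 : $Optional<Klass>, case #Optional.some!enumelt: bb1, case #Optional.none!enumelt: bb2

bb1(%payload : @owned $Klass):   // a terminator result, not a phi
  ...
```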
`getValue` -> `value`
`getValueOr` -> `value_or`
`hasValue` -> `has_value`
`map` -> `transform`
The old API will be deprecated in the rebranch.
To avoid merge conflicts, use the new API already in the main branch.
rdar://102362022
to SILBuilder::createUncheckedForwardingCast
It would be disastrous to confuse this utility with a bit cast. A bit
cast always produces an Unowned value which must immediately be copied
to be used. This utility always forwards ownership. It cannot be used
to truncate values.
Also, be careful not to convert "reinterpret cast"
(e.g. Builtin.reinterpretCast) into a "value cast" since ownership
will be incorrect and the reinterpreted types might not have
equivalent layout.
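For contrast, a sketch (types and values assumed) of what the two kinds of cast
look like in OSSA SIL, using unchecked_ref_cast as an example of a forwarding
value cast:
```
// Forwarding cast: the result takes on the operand's ownership.
%1 = unchecked_ref_cast %a : $Klass to $AnyObject
// Bit cast: the result is @unowned and must be copied before use.
%2 = unchecked_bitwise_cast %b : $Klass to $Builtin.NativeObject
%3 = copy_value %2 : $Builtin.NativeObject
```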
This lets us write optimizer unit tests and selectively debug the optimizer in
general. We'll be able to trace analyses and control optimization selectively
for certain values.
Adding a trace flag to debug_value is the easiest way to start using
it experimentally and develop the rest of the infrastructure. If this
takes off, then we can consider a new `trace_value`
instruction. For now, reusing debug_value is the least intrusive way to
start writing liveness unit tests.
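A minimal sketch of the textual form (the flag spelling, value, and type here
are assumed):
```
debug_value [trace] %0 : $Klass
```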
This is a dedicated instruction for incrementing a profiler counter, which
lowers to the `llvm.instrprof.increment` intrinsic. It replaces the builtin
instruction that was previously used and ensures that its arguments are
statically known, so that SIL optimization passes do not invalidate the
instruction. This fixes some code coverage cases in `-O`.
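A sketch of what the textual form might look like (the exact operand spelling
below is assumed, not taken from the patch):
```
increment_profiler_counter 0, "$s4main3fooyyF", num_counters 1, hash 0
```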
rdar://39146527
This is exactly like copy_addr except that it is not viewed from the verifier's
perspective as an "invalid" copy of a move-only value. It is intended to be used
in two contexts:
1. When the move checker emits a diagnostic because it could not eliminate a
copy, we still need to produce valid SIL without copy_addr on move-only types,
since we will reach Canonical SIL eventually even if we don't actually codegen
the SIL. The pass can just convert said copy_addr to explicit_copy_addr and
everyone is happy.
2. To implement the explicit copy function for address-only types.
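A minimal sketch, assuming the textual form mirrors copy_addr (the operands and
type are made up):
```
// Copy a move-only value between addresses without tripping the move-only
// verifier, e.g. after the checker has already emitted a diagnostic.
explicit_copy_addr %src to [initialization] %dst : $*MoveOnlyType
```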
`int_trap` doesn't provide a failure message, which makes crash reports hard to understand.
This mostly affects optimized casts that fail.
rdar://97681511
Andy already created the new API some time ago but didn't go through and update
the old occurrences. I did that in this PR and then deprecated the old API. The
tree is clean, so I could just remove it, but I decided to be nicer to
downstream people by deprecating it first.
Specifically this means that rather than always being owned, we now have owned
and guaranteed versions of copyable_to_moveonlywrapper. Similar to
moveonlywrapper_to_copyable, one chooses which variant one gets by using
specific SILBuilder APIs:
create{Owned,Guaranteed}CopyableToMoveOnlyWrapperValueInst. It is still
forwarding, and the rest of the forwarding APIs work as expected, except that
the forwarding ownership is fixed (an assertion will result if one attempts to
change it).
NOTE: It is assumed that trivial operands are always passed to the owned
variant.
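As a sketch (values and types assumed), the owned variant consumes its operand,
while the guaranteed variant wraps a borrowed copyable value without consuming
it:
```
%1 = copyable_to_moveonlywrapper [owned] %0 : $Klass        // consumes %0, yields an owned '@moveOnly Klass'
%b = begin_borrow %2 : $Klass
%3 = copyable_to_moveonlywrapper [guaranteed] %b : $Klass   // guaranteed result, valid within the borrow scope
end_borrow %b : $Klass
```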
The main fixes are:
1. MoveOnlyWrapperToCopyableValue needed to be marked as a
FirstArgOwnershipForwardingSingleValueInst instead of just as being an
Ownership mixin. I discovered that in certain cases I was treating it that
way (in the isa check for FirstArgOwnershipForwardingSingleValueInst), but we
were inconsistent. Now we are consistent.
2. MoveOnlyWrapperToCopyableValue is always specified as being initialized as
   owned or guaranteed. What is key to understand, though, is that the
   owned/guaranteed property here is more a semantic property describing whether
   the lifetime of the move-only value is ending or whether we are allowing it to
   escape as a moveonlywrapper-unwrapped guaranteed parameter to a function. The
   main implication of this is that we cannot just use the actual ownership kind
   to determine which kind of moveonlywrapper_to_copyable we are using, because
   after ownership lowering the resulting ownership kind will be none, leaving
   the instruction in an invalid state. Hence the need to represent this as a
   separate bit in the instruction. It may make sense to rename the forms of
   this instruction to `[lifetime end]` and `[guaranteed function arg]` so that
   this is semantically clear, but I am going to make that change at another time.
These instructions have the following attributes:
1. copyable_to_moveonlywrapper takes in a 'T' and maps it to a '@moveOnly
T'. This is used semantically when initializing a new moveOnly binding from a
copyable value. It semantically destroys its input @owned value and returns a
brand new, independent @owned @moveOnly value. It is also used to convert a
trivial copyable value of type 'Trivial' into an owned non-trivial value of
type '@moveOnly Trivial'. If one thinks of '@moveOnly' as a monad, this is how
one injects a copyable value into the move-only space.
2. moveonlywrapper_to_copyable takes in a '@moveOnly T' and produces a new 'T'
value. This is a 'forwarding' instruction; at parse time, one may only choose it
to be [owned] or [guaranteed] (see the sketch after this list).
* moveonlywrapper_to_copyable [owned] is used to signal the end of the lifetime
of the '@moveOnly' wrapper. SILGen inserts these whenever a move-only value has
its ownership passed to a situation where a copyable value is needed. Since it
is consuming, we know that the no-implicit-copy checker will ensure that, if we
need a copy for it, the program will emit a diagnostic.
* moveonlywrapper_to_copyable [guaranteed] is used to pass a '@moveOnly T'
value as a copyable guaranteed parameter of type 'T' to a function. When
performing no-implicit-copy checking this is always fine, since no-implicit-copy
is a local pattern; it would be an error when performing no-escape checking.
Importantly, this instruction is also where, in the case of an @moveOnly trivial
type, we convert from the non-trivial representation back to the trivial
representation.
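A minimal textual sketch of both conversions (the values, types, and exact
printed syntax here are assumed):
```
// Inject a copyable value into the move-only space...
%mo = copyable_to_moveonlywrapper [owned] %v : $Klass
// ...and later end the wrapper's lifetime, recovering a copyable 'T'.
%c = moveonlywrapper_to_copyable [owned] %mo : $@moveOnly Klass
```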
Some important notes:
1. In a forthcoming commit, I am going to rebase the no-implicit-copy checker
on top of these instructions. By using '@moveOnly' in the type system, we can
ensure that optimizations later in the SIL pipeline can easily ignore this
code.
2. Be aware that, because SILGen only emits '@moveOnly T' along immediate
accesses to the variable and always converts to a copyable representation when
calling other code, we can simply eliminate all moveonly-ness from the IR using
a lowering pass (which I am going to upstream). In the evil scheme we are
accomplishing here, we perform lowering of trivial values right after ownership
lowering and before diagnostics to simplify the pipeline.
On another note, I also fixed a few things in SILParsing around getASTType() vs
getRawASTType().
* split the PassUtils.swift file into PassContext.swift and Passes.swift
* rework `Builder` bridging allowing more insertion point variations, e.g. inserting at the end of a block.
* add Builder.create functions for more instructions
* add `PassContext.splitBlock`
* move SIL modification functions from PassContext to extensions of the relevant types (e.g. instructions).
* rename `Location.bridgedLocation` -> `Location.bridged`
Merge the AddressLowering pass from its old development branch and update
it so we can begin incrementally enabling it under a flag.
This has been reimplemented for simplicity. There's no point in
looking at the old code.
Reduces the number of _ContiguousArrayStorage metadata instantiations.
In order to support constant-time bridging, we do need to set the correct
metadata when we bridge to Objective-C. This is so that the type check succeeds
when bridging back from Objective-C, letting us reuse the storage instance
rather than bridging the elements.
To support dynamically setting the `_ContiguousArrayStorage` element type, I
needed to add support for optimizing `alloc_ref_dynamic` throughout the
optimizer.
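For reference, a sketch of the instruction involved (the class name and operand
are assumed):
```
// Allocate an instance whose exact class is only known via a dynamic metatype.
%obj = alloc_ref_dynamic %meta : $@thick SomeClass.Type, $SomeClass
```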
Possible future improvements:
* Use different metadata such that we can disambiguate native Swift classes
during destruction -- allowing native release rather than unknown release
usage.
* Optimize the newly added semantic function
getContiguousArrayStorageType
rdar://86171143
The main effect of this will be that in IRGen we will use llvm.dbg.addr instead
of llvm.dbg.declare. We must do this since llvm.dbg.declare implies that the
given address is valid throughout the program.
This just adds the instructions/printing/parsing/serialization/deserialization.
rdar://85020571
This is an instruction that I am going to use to drive some of the
ownership-based dataflow optimizations that I am writing now. The instruction
contains a kind that allows one to know what type of checking is required and
avoids the need to add a bunch of independent instructions for independent
checkers. Each checker is responsible for removing all of its own mark
instructions. NOTE: MarkMustCheckInst is only allowed in Raw SIL, since once we
are in Canonical SIL we want to ensure that all such checking has already
occurred.
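A sketch of the textual form, with one assumed kind attribute and made-up
operands:
```
// Raw SIL only: flag %0 for a specific checker, which must remove this marker
// again before Canonical SIL.
%1 = mark_must_check [no_implicit_copy] %0 : $Klass
```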
The new flag will be used to track whether a move_value corresponds to a
source-level lexical scope. Here, the flag is just added to the
instruction and represented in textual and serialized SIL.
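With the flag, the textual form looks like this (operand and type are assumed):
```
// The moved value's lifetime corresponds to a source-level lexical scope.
%1 = move_value [lexical] %0 : $Klass
```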
Introduce a new instruction `dealloc_stack_ref` and remove the `stack` flag from `dealloc_ref`.
`dealloc_ref [stack]` was confusing, because all it does is mark the deallocation of the stack space for a stack-promoted object.
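A minimal sketch of the new pairing (the class type is assumed):
```
%o = alloc_ref [stack] $Klass
...
dealloc_stack_ref %o : $Klass   // marks the end of the object's stack allocation
```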