We need to correctly maintain dependencies from an open-existential instruction to a `keypath` instruction that uses the opened type.
Fixes a SILVerifier crash.
rdar://105517521
Although nonescaping closures are representationally trivial pointers to their
on-stack context, it is useful to model them as borrowing their captures, which
allows for checking correct use of move-only values across the closure, and
lets us model the lifetime dependence between a closure and its captures without
an ad-hoc web of `mark_dependence` instructions.
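A rough Swift-level sketch of what this lets the checker verify (the names here are hypothetical; the `@_moveOnly`/`__owned` spellings follow the examples later in these notes):
```
@_moveOnly struct FileDescriptor {
    var fd: CInt = 0
}
func consumeFD(_ x: __owned FileDescriptor) {}
func withScope(_ body: () -> Void) { body() }

func borrowedCapture() {
    let x = FileDescriptor()
    // The nonescaping closure is modeled as borrowing `x`, so the checker can
    // verify `x` is not consumed while the closure may still read it, and the
    // closure's dependence on `x` needs no ad-hoc mark_dependence web.
    withScope {
        _ = x.fd
    }
    consumeFD(x)  // ok: the borrow has ended by this point
}
```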
During ownership elimination, we eliminate copy/destroy_value instructions and
end the partial_apply's lifetime with an explicit dealloc_stack as before,
for compatibility with existing IRGen and non-OSSA-aware passes.
The instance type of a metatype instruction is not necessarily a legal lowered SIL type.
Lower the type before converting it to a SILType.
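As a hedged illustration (a hypothetical trigger, not the original reproducer from the radar), the instance type of a metatype can be a formal function type, which only becomes a legal SIL type after lowering:
```
// The instance type of the metatype below is the formal type (Int) -> Int; it
// must be lowered (e.g. to a @callee_guaranteed function type) before being
// wrapped in a SILType.
func functionMetatype() -> ((Int) -> Int).Type {
    return ((Int) -> Int).self
}
```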
rdar://105502403
Consider the following example:
```
class Klass {}
@_moveOnly struct Butt {
    var k = Klass()
}
func mixedUse(_: inout Butt, _: __owned Butt) {}
func foo() {
    var y = Butt()
    mixedUse(&y, y)
}
```
In this case, we want to have an exclusivity violation. Before this patch, we
did a by-value load [copy] of y and then performed the inout access. Since the
access scopes did not overlap, we would not get an exclusivity violation.
Additionally, since the checker assumes that exclusivity violations will be
caught in such a situation, we converted the load [copy] to a load [take],
causing a later memory lifetime violation, as seen in the following SIL:
```
sil hidden [ossa] @$s4test3fooyyF : $@convention(thin) () -> () {
bb0:
%0 = alloc_stack [lexical] $Butt, var, name "y" // users: %4, %5, %8, %12, %13
%1 = metatype $@thin Butt.Type // user: %3
// function_ref Butt.init()
%2 = function_ref @$s4test4ButtVACycfC : $@convention(method) (@thin Butt.Type) -> @owned Butt // user: %3
%3 = apply %2(%1) : $@convention(method) (@thin Butt.Type) -> @owned Butt // user: %4
store %3 to [init] %0 : $*Butt // id: %4
%5 = begin_access [modify] [static] %0 : $*Butt // users: %7, %6
%6 = load [take] %5 : $*Butt // user: %10 // <————————— This was a load [copy].
end_access %5 : $*Butt // id: %7
%8 = begin_access [modify] [static] %0 : $*Butt // users: %11, %10
// function_ref mixedUse2(_:_:)
%9 = function_ref @$s4test9mixedUse2yyAA4ButtVz_ADntF : $@convention(thin) (@inout Butt, @owned Butt) -> () // user: %10
%10 = apply %9(%8, %6) : $@convention(thin) (@inout Butt, @owned Butt) -> ()
end_access %8 : $*Butt // id: %11
destroy_addr %0 : $*Butt // id: %12
dealloc_stack %0 : $*Butt // id: %13
%14 = tuple () // user: %15
return %14 : $() // id: %15
} // end sil function '$s4test3fooyyF'
```
Now, instead we create a [consume] access and get the nice exclusivity error we
are looking for.
NOTE: As part of this I needed to tweak the verifier so that [deinit] accesses
are now allowed to have any form of access enforcement before we are in
Lowered SIL. I left the original verifier error in place for Lowered SIL and
additionally left the original error in IRGen. The reason I am doing this is
that I need the deinit access to represent semantically what consuming from a
ref_element_addr, global, or escaping mutable var looks like at the SIL level,
so that the move checker can error upon it. Since we will error upon such
consumptions in Canonical SIL, such code patterns will never actually hit
Lowered/IRGen SIL, so it is safe to do so (and the verifier/errors will help us
if we make any mistakes). In the case of a non-escaping var, though, we will be
able to use deinit statically and the move checker will make sure that it is not
reused before it is reinitialized.
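A rough Swift-level sketch of the non-escaping var case (hypothetical names; the exact diagnostics are assumptions):
```
@_moveOnly struct FileDescriptor {
    var fd: CInt = 0
}
func consumeFD(_ x: __owned FileDescriptor) {}

func nonEscapingVar() {
    var x = FileDescriptor()  // a non-escaping local var
    consumeFD(x)              // deinit-ing the memory is allowed here because...
    x = FileDescriptor()      // ...the checker proves x is reinitialized before
    consumeFD(x)              // it is used again
}
```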
rdar://101767439
This is used to model global_addr/ref_element_addr/escaping closure captures
where we do not want to allow the user to consume the memory (leaving the memory
in a potentially uninitialized state), but we do want to allow for the user to
assign over the memory all at once.
Consider a situation like the following:
```
@_moveOnly
struct FileDescriptor {
    var state: CInt = ...
}
final class Klass {
    var descriptor = ...
}
func consumeDescriptor(_ x: __owned FileDescriptor) {}
// Eventually both of these will point at the same class.
var globalKlass = ...
var globalKlass2 = ...
func memoryUnsafe() {
    consumeDescriptor(globalKlass.descriptor)
    consumeDescriptor(globalKlass2.descriptor)
}
func callMemoryUnsafe() {
    globalKlass2 = globalKlass
    memoryUnsafe()
}
```
Notice how in memoryUnsafe above, the compiler locally has no way of knowing
that globalKlass and globalKlass2 actually point at the same class and thus
that we are attempting to consume the same descriptor twice. This is even
allowed by exclusivity. If descriptor were not move-only, this would be safe
since we would just copy the value when we consume it. But because we are using
move-only, we must take from the memory.
This fits the name of the check better. The reason I am doing this renaming is
that I am going to add a nonconsumable-but-assignable check for
global_addr/ref_element_addr/captures with var semantics.
This reflects better the true meaning of this check which is that a value marked
with this check cannot be consumed on its boundary at all (when performing
let/var checking) and cannot be assigned over when performing var checking.
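A rough Swift-level sketch of the difference between the two checks (all names hypothetical; the exact diagnostics are assumptions):
```
@_moveOnly struct Descriptor {
    var state: CInt = 0
}
func consumeIt(_ x: __owned Descriptor) {}

final class Holder {
    var d = Descriptor()
}
var globalHolder = Holder()

func checksSketch(_ borrowed: Descriptor) {
    // Renamed check: a borrowed value can neither be consumed on its boundary
    // nor assigned over.
    // consumeIt(borrowed)           // would be rejected

    // New check ("nonconsumable but assignable"): a class-stored field reached
    // through ref_element_addr may be reassigned wholesale, but consuming out
    // of it would leave it uninitialized, so that is rejected.
    globalHolder.d = Descriptor()    // ok: assign over the memory all at once
    // consumeIt(globalHolder.d)     // would be rejected
}
```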
I discovered while improving the debug output of the move-only address checker
that I needed a lightweight thing like DebugVarCarryingInst that could only
vend a VarDecl (unlike DebugVarCarryingInst, which can also vend a
SILDebugVariable). As an example, this lets one write a high-level API that
uses the standard API to loop over a bunch of instructions that all vend a
VarDecl and construct a stringified path component list.
rdar://105293841
I had to fix a bunch of bugs in this, which isn't very surprising.
I changed remapSubstitutionMap to preserve the non-canonical signature
of the substitutions because otherwise it messes up printing
open_pack_element pretty badly --- we end up printing a sugared shape
class but a desugared generic signature. I'd rather not eagerly
canonicalize everything there because the sugar is quite nice.
Still, I don't feel great about this approach, and this is the
second time I've found myself doing something a little gross in order
to preserve sugar for printing this instruction.
Canonicalizing the replacement types is important for test stability,
and I think it's good downstream.
The most interesting part of this is that I implemented a rule which
handles tuple types becoming scalar as part of the substitution of
tuple_pack_element_addr. We talked about having this rule in the
formal type system, and I thought we were going to do it, but it
looks like we haven't actually implemented that yet. I added it to
SIL substitution because (1) I anticipate we'll be doing this
eventually in the formal type system, and that will have consequences
for SIL, and (2) we don't actually have a way to parse these singleton
tuple types, and I didn't want expanding singleton packs into tuples
to become this weird untestable corner case. I did have to poke a
hole in this rule to preserve types that were singleton tuples before
substitution, since apparently AutoDiff makes a lot of those.
I think adding a type_refine_addr that statically asserts a type
match is the right way to go in the long term for rewriting
singleton tuple_pack_element_addr, but I'm a little sick of adding
SIL instructions, so we just rewrite to unchecked_addr_cast for now.
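A rough Swift-level sketch of the tuple-becomes-scalar situation (a hypothetical example; whether this exact code reaches tuple_pack_element_addr is an assumption about SILGen):
```
func passThrough<each T>(_ value: repeat each T) -> (repeat each T) {
    // Building the result tuple element-by-element from the pack is the kind
    // of operation that lowers to dynamic tuple element addressing in SIL.
    return (repeat each value)
}

// With a single pack element, the substituted formal type of (repeat each T)
// may be treated as the scalar Int rather than a one-element tuple, which is
// exactly the case the substitution rule above has to handle.
let scalar: Int = passThrough(42)
```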
Encapsulate all the complexity of reborrows and guaranteed phis in three
ownership liveness interfaces:
LinearLiveness, InteriorLiveness, and ExtendedLiveness.
This instruction can be inserted by Onone optimizations as a replacement for deleted instructions to
ensure that it's possible to single step on its location.
This allows dynamically indexing into tuples. IRGen support is not yet
implemented.
I think I'm going to need a type_refine_addr instruction in
order to handle substitutions into the operand type that
eliminate the outer layer of tuple-ness. I'll handle that
in a follow-up commit.
This will let the non-field sensitive version use a more performant
implementation internally. This is important since PrunedLiveBlocks is used in
the hot path when working with Ownership SSA, while the field sensitive version
is only used for certain diagnostics.
NOTE: I did not refactor PrunedLiveness to use the faster implementation... this
is just a quick pass over the code to prepare for that change.
This API is the inverse of visitEnclosingDefs when called on a phi.
This replaces the visitAdjacentReborrowsOfPhi algorithm with a small
loop that simply checks all the phis in the current block.
This should all be fairly efficient once SILArgument has a "reborrow"
flag.
Specifically, previously if we emitted an error we just dumped all of the
consuming uses. Now, for each consuming use that needs a copy, we instead
search for a specific boundary use (consuming or non-consuming) that is
reachable from it and emit a specialized error for that use. Thus for the
two-consume case we emit the normal consumed-twice error, and for
non-consuming uses we now emit the "use after consume" error.
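A rough Swift-level sketch of the two diagnostic flavors (hypothetical names; the exact diagnostic wording is an assumption):
```
@_moveOnly struct FileDescriptor {
    var fd: CInt = 0
}
func consumeFD(_ x: __owned FileDescriptor) {}
func borrowFD(_ x: FileDescriptor) {}

func consumedTwice() {
    let x = FileDescriptor()
    consumeFD(x)
    consumeFD(x)   // second consuming use: the "consumed twice" error
}

func useAfterConsume() {
    let x = FileDescriptor()
    consumeFD(x)
    borrowFD(x)    // non-consuming boundary use: the "use after consume" error
}
```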
For those who are unaware, CanonicalizeOSSALifetime::canonicalizeValueLifetime()
is really a high-level driver routine for the functionality of
CanonicalizeOSSALifetime that computes liveness and then rewrites copies using
boundary information. This change splits the implementation of
canonicalizeValueLifetime into two parts: a first part called computeLiveness
and a second part called rewriteLifetimes. Internally, canonicalizeValueLifetime
still just calls these two methods.
The reason I am doing this is that it lets the move-only object checker use
the raw liveness information computed before the rewriting mucks with the
analysis information. The checker uses this information to compute the raw
liveness boundary of a value and, from that, determine the list of consuming
uses not on the boundary, consuming uses on the boundary, and non-consuming
uses on the boundary. Later parts of the checker then use this to emit our
errors.
Some additional benefits of doing this are:
1. I was able to eliminate callbacks in the rewriting stage of
CanonicalizeOSSALifetime which previously gave the checker this information.
2. Previously the move checker did not have access to the non-consuming boundary
uses, so while we would still fail appropriately, we sadly could not emit a note
showing the non-consuming use. I am going to wire this up in a subsequent commit.
The other change to the implementation of the move checker that this caused is
that I needed to add an extra diagnostic check for instructions that consume the
value twice or consume the value and use the value. The reason this must be
done is that liveness does not distinguish between different operands of the
same instruction, meaning such an error would otherwise be lost.
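A rough Swift-level sketch of the case this extra check covers (hypothetical names):
```
@_moveOnly struct Butt {}
func takeTwo(_: __owned Butt, _: __owned Butt) {}

func doubleConsumeInOneInstruction() {
    let b = Butt()
    // The single call below consumes `b` through two different operands.
    // Liveness cannot tell the operands apart, so a dedicated diagnostic
    // check is needed to report this error.
    takeTwo(b, b)
}
```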
Having added these, I'm not entirely sure we couldn't just use
alloc_stack and dealloc_stack. Well, if we find ourselves adding
a lot of redundancy with those instructions (e.g. around DI), we
can always go back and rip these out.
- SILPackType carries whether the elements are stored directly
in the pack, which we're not currently using in the lowering,
but it's probably something we'll want in the final ABI.
Having this also makes it clear that we're doing the right
thing with substitution and element lowering. I also toyed
with making this a scalar type, which made it necessary in
various places, although eventually I pulled back to the
design where we always use packs as addresses.
- Pack boundaries are a core ABI concept, so the lowering has
to wrap parameter pack expansions up as packs. There are huge
unimplemented holes here where the abstraction pattern will
need to tell us how many elements to gather into the pack,
but a naive approach is good enough to get things off the
ground.
- Pack conventions are related to the existing parameter and
result conventions, but they're different on enough grounds
that they deserve to be separated.