Add a separate 'verifyOwnership()' entry point so it's possible
to check OSSA lifetimes at various points.
Move SILGenCleanup into a SILGen pass pipeline.
After SILGen, verify incomplete OSSA.
After SILGenCleanup, verify ownership.
This is needed to correctly maintain dependencies from an open-existential
instruction to a `keypath` instruction that uses the opened type.
Fixes a SILVerifier crash.
rdar://105517521
Consider the following example:
```
class Klass {}
@_moveOnly struct Butt {
  var k = Klass()
}
func mixedUse(_: inout Butt, _: __owned Butt) {}
func foo() {
  var y = Butt()
  mixedUse(&y, y)
}
```
In this case, we want an exclusivity violation. Before this patch, we did a
by-value load [copy] of y and then performed the inout access. Since the
access scopes did not overlap, we would not get an exclusivity violation.
Additionally, since the checker assumes that exclusivity violations will be
caught in such a situation, we converted the load [copy] to a load [take],
causing a later memory lifetime violation, as seen in the following SIL:
```
sil hidden [ossa] @$s4test3fooyyF : $@convention(thin) () -> () {
bb0:
%0 = alloc_stack [lexical] $Butt, var, name "y" // users: %4, %5, %8, %12, %13
%1 = metatype $@thin Butt.Type // user: %3
// function_ref Butt.init()
%2 = function_ref @$s4test4ButtVACycfC : $@convention(method) (@thin Butt.Type) -> @owned Butt // user: %3
%3 = apply %2(%1) : $@convention(method) (@thin Butt.Type) -> @owned Butt // user: %4
store %3 to [init] %0 : $*Butt // id: %4
%5 = begin_access [modify] [static] %0 : $*Butt // users: %7, %6
%6 = load [take] %5 : $*Butt // user: %10 // <————————— This was a load [copy].
end_access %5 : $*Butt // id: %7
%8 = begin_access [modify] [static] %0 : $*Butt // users: %11, %10
// function_ref mixedUse2(_:_:)
%9 = function_ref @$s4test9mixedUse2yyAA4ButtVz_ADntF : $@convention(thin) (@inout Butt, @owned Butt) -> () // user: %10
%10 = apply %9(%8, %6) : $@convention(thin) (@inout Butt, @owned Butt) -> ()
end_access %8 : $*Butt // id: %11
destroy_addr %0 : $*Butt // id: %12
dealloc_stack %0 : $*Butt // id: %13
%14 = tuple () // user: %15
return %14 : $() // id: %15
} // end sil function '$s4test3fooyyF'
```
Now, instead we create a [consume] access and get the nice exclusivity error we
are looking for.
NOTE: As part of this I needed to tweak the verifier so that [deinit] accesses
are now allowed to have any form of access enforcement before we are in
Lowered SIL. I left the original verifier error in place for Lowered SIL and
additionally left the original error in IRGen. The reason I am doing this is
that I need the deinit access to represent semantically what consuming from a
ref_element_addr, global, or escaping mutable var looks like at the SIL level,
so that the move checker can error upon it. Since we will error upon such
consumptions in Canonical SIL, such code patterns will never actually reach
Lowered/IRGen SIL, so it is safe to do so (and the verifier/errors will help us
if we make any mistakes). In the case of a non-escaping var, though, we will be
able to use deinit statically, and the move checker will make sure that it is
not reused before it is reinitialized.
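As a hedged illustration of that last point (this snippet is not part of the
patch, and `useOwned`/`nonEscapingExample` are made-up names), a non-escaping
local var of a move-only type can be consumed under static checking, and the
move checker requires it to be reinitialized before any further use:
```
class Klass {}
@_moveOnly struct Butt {
  var k = Klass()
}
func useOwned(_: __owned Butt) {}  // hypothetical consuming helper
func nonEscapingExample() {
  var b = Butt()  // non-escaping local var
  useOwned(b)     // statically checked consuming use
  b = Butt()      // must be reinitialized here...
  useOwned(b)     // ...before `b` can be used again
}
```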
rdar://101767439
This allows dynamically indexing into tuples. IRGen support is not yet
implemented.
I think I'm going to need a type_refine_addr instruction in
order to handle substitutions into the operand type that
eliminate the outer layer of tuple-ness. I'll handle that
in a follow-up commit.
Having added these, I'm not entirely sure we couldn't just use
alloc_stack and dealloc_stack. Well, if we find ourselves adding
a lot of redundancy with those instructions (e.g. around DI), we
can always go back and rip these out.
- SILPackType carries whether the elements are stored directly
in the pack, which we're not currently using in the lowering,
but it's probably something we'll want in the final ABI.
Having this also makes it clear that we're doing the right
thing with substitution and element lowering. I also toyed
with making this a scalar type, which made it necessary in
various places, although eventually I pulled back to the
design where we always use packs as addresses.
- Pack boundaries are a core ABI concept, so the lowering has
to wrap parameter pack expansions up as packs (a small surface-level
sketch follows this list). There are huge unimplemented holes here
where the abstraction pattern will need to tell us how many elements
to gather into the pack, but a naive approach is good enough to get
things off the ground.
- Pack conventions are related to the existing parameter and
result conventions, but they're different on enough grounds
that they deserve to be separated.
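As a rough surface-level sketch of what gets wrapped into a pack (this uses
the eventual variadic-generics surface syntax as an assumption; none of it is
implemented by this change), the expanded arguments below are gathered into a
single pack at the call boundary, and the tuple result is built back out of
that pack:
```
// Hypothetical example; `tuplify` is not part of the change described here.
func tuplify<each T>(_ values: repeat each T) -> (repeat each T) {
  return (repeat each values)
}

let t = tuplify(1, "two", 3.0)  // arguments lowered as a pack of (Int, String, Double)
```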
I've also fixed this so that it should work on instructions that
define multiple values. Someday we'll change all the open_existential
instructions to produce different values for the type dependency and
the value result; today is not that day, though.
Previously, logging of the actually problematic instruction was guarded
by LLVM_DEBUG. Meanwhile the verifier's require method prints an
instruction (usually one different from that at which the non-contiguous
scope was encountered).
Here, instead, the problematic instruction and the instruction which
defined the previous scope are always printed to llvm::errs (i.e.
whenever verification is actually run).
Additionally, debug-info logging is forcibly enabled upon failure so that
the logs clearly show what the previous scope was, what the current
scope is, and what instructions defined them.
We can't verify that store borrows aren't nested until we can reliably
compute liveness.
This can be fixed in two ways, both of which we plan to do ASAP:
(1) With complete lifetimes, this no longer needs to perform transitive
liveness at all.
(2) findInnerTransitiveGuaranteedUses, which ends up being called on the
load_borrow to compute liveness, can be taught to transitively process
InteriorPointer uses instead of returning PointerEscape. We need to make
sure all uses of the utility can handle this first.
Return the AddressUseKind.
Fixes a bug in extendStoreBorrow where it was looking at an
uninitialized liveness result whenever a pointer escape was present.
First restore the basic PrunedLiveness abstraction to its original
intention. Move code that pollutes the basic abstraction, and is
fundamentally wrong from the perspective of liveness, outside of it.
Most clients need to reason about live ranges, including the def
points, not just liveness based on use points. Add a PrunedLiveRange
layer of types that understand where the live range is
defined. Knowing where the live range is defined (the kill set) helps
reliably check that arbitrary points are within the boundary (a toy
sketch of this follows below). This way, the client doesn't need to
manage this on its own. We can also support holes in the live range
for non-SSA liveness. This makes it safe and correct for the way
liveness is now being used. This layer safely handles:
- multiple defs
- instructions that are both uses and defs
- dead values
- unreachable code
- self-loops
So it's no longer the client's responsibility to check these things!
Add SSAPrunedLiveness and MultiDefPrunedLiveness to safely handle each
situation.
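To make the boundary point concrete, here is a toy sketch (purely
illustrative Swift; it is not the compiler's C++ API) of why answering
"is this point inside the live range?" needs the def, not just the use
points:
```
// Toy model: instructions in a single block, identified by position.
// Use points alone say where liveness ends; only the def says where the
// range begins, which is what a boundary query needs.
struct ToyLiveRange {
  let defPosition: Int
  let usePositions: [Int]

  func contains(_ point: Int) -> Bool {
    guard let lastUse = usePositions.max() else {
      return false  // dead value: no uses, empty live range
    }
    // Inside the range means strictly after the def and no later than
    // the last use (the pruned boundary).
    return point > defPosition && point <= lastUse
  }
}

let range = ToyLiveRange(defPosition: 0, usePositions: [3, 7])
assert(range.contains(5))   // between the def and the last use
assert(!range.contains(9))  // past the boundary
```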
Split code that I can't figure out into
DiagnosticPrunedLiveness. Hopefully it will be deleted soon.