Although nonescaping closures are representationally trivial pointers to their
on-stack context, it is useful to model them as borrowing their captures, which
allows for checking correct use of move-only values across the closure, and
lets us model the lifetime dependence between a closure and its captures without
an ad-hoc web of `mark_dependence` instructions.
During ownership elimination, we eliminate copy_value/destroy_value instructions and
end the partial_apply's lifetime with an explicit dealloc_stack as before,
for compatibility with existing IRGen and non-OSSA-aware passes.
Encapsulate all the complexity of reborrows and guaranteed phis in three
ownership liveness interfaces:
LinearLiveness, InteriorLiveness, and ExtendedLiveness.
Specifically, previously when we emitted an error we just dumped all of the
consuming uses. Now, for each consuming use that needs a copy, we instead search
for a specific boundary use (consuming or non-consuming) reachable from it and
emit a specialized error for that pair. Thus, for two consuming uses we emit the
normal consumed-twice error, and for non-consuming uses we now emit the
"use after consume" error.
For those who are unaware, CanonicalizeOSSALifetime::canonicalizeValueLifetime()
is really a high-level driver routine for the functionality of
CanonicalizeOSSALifetime: it computes liveness and then rewrites copies using
boundary information. This change splits the implementation of
canonicalizeValueLifetime into two parts: a first part called computeLiveness
and a second part called rewriteLifetimes. Internally, canonicalizeValueLifetime
still just calls these two methods.
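As a rough sketch (the method names come from this change, but the parameter
and return types here are assumed for illustration), the driver now looks
something like:
```
// Illustrative shape only; the real signatures may differ.
bool CanonicalizeOSSALifetime::canonicalizeValueLifetime(SILValue def) {
  // Phase 1: compute liveness and the boundary for `def` without rewriting
  // anything. The move-only checker can stop after this phase and inspect
  // the raw boundary.
  if (!computeLiveness(def))
    return false;
  // Phase 2: rewrite copies and destroys using the boundary computed above.
  rewriteLifetimes();
  return true;
}
```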
The reason I am doing this is that it lets the move-only object checker use
the raw liveness information computed before the rewriting mucks with the
analysis information. The checker uses this information to compute the raw
liveness boundary of a value and, from that boundary, determine the list of
consuming uses not on the boundary, consuming uses on the boundary, and
non-consuming uses on the boundary. Later parts of the checker then use these
lists to emit our errors.
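A sketch of that classification, assuming the boundary is represented by the
lastUsers of a PrunedLivenessBoundary and that isConsuming() stands in for
whatever consuming-use query the checker actually performs:
```
// Hypothetical classification helper; only the PrunedLivenessBoundary field
// name (lastUsers) is taken from the utility, everything else is illustrative.
static void classifyUses(ArrayRef<Operand *> uses,
                         const PrunedLivenessBoundary &boundary,
                         SmallVectorImpl<Operand *> &consumesNotOnBoundary,
                         SmallVectorImpl<Operand *> &consumesOnBoundary,
                         SmallVectorImpl<Operand *> &nonConsumesOnBoundary) {
  SmallPtrSet<SILInstruction *, 8> lastUsers(boundary.lastUsers.begin(),
                                             boundary.lastUsers.end());
  for (Operand *use : uses) {
    bool onBoundary = lastUsers.count(use->getUser());
    if (use->isConsuming())
      (onBoundary ? consumesOnBoundary : consumesNotOnBoundary).push_back(use);
    else if (onBoundary)
      nonConsumesOnBoundary.push_back(use);
  }
}
```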
Some additional benefits of doing this are:
1. I was able to eliminate the callbacks in the rewriting stage of
CanonicalizeOSSALifetime that previously gave the checker this information.
2. Previously the move checker did not have access to the non-consuming boundary
uses, causing us to always fail appropriately but sadly not emit a note showing
the non-consuming use. I am going to wire this up in a subsequent commit.
The other change to the implementation of the move checker that this caused is
that I needed to add an extra diagnostic check for instructions that consume the
value twice or that both consume and use the value. This must be done because
liveness does not distinguish between different operands of the same
instruction, so such an error would otherwise be lost.
For most uses, some access scopes must be "respected"--if an extended
value's original lifetime extends beyond an access scope, its
canonicalized lifetime must not end _within_ such a scope (although
ending before it is fine). Currently, to be conservative, the utility
applies this behavior to all access scopes.
For move-only values, however, lifetimes end at final consumes without
regard to access scopes.
Allow this behavior to be controlled by whether or not a
NonLocalAccessBlockAnalysis is provided to the utility in its
constructor.
rdar://104635319
When encountering, inside a borrow scope, a non-lexical move_value, or a
move_value [lexical] where the moved-from value is itself already lexical,
delete the move_value and regard its uses as uses of the moved-from
value.
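A minimal sketch of the rewrite, where isLexicalValue() is a hypothetical
stand-in for however the pass checks that the moved-from value is already
lexical, and the borrow-scope context is assumed to have been established by
the caller:
```
// Sketch only: assumes the caller has already determined that `mvi` sits
// inside a borrow scope.
static bool foldRedundantMoveValue(MoveValueInst *mvi) {
  SILValue original = mvi->getOperand();
  // A lexical move of a value that isn't already lexical still adds
  // information, so keep it.
  if (mvi->isLexical() && !isLexicalValue(original))
    return false;
  // Otherwise the move adds nothing: forward its uses to the moved-from
  // value and delete it.
  mvi->replaceAllUsesWith(original);
  mvi->eraseFromParent();
  return true;
}
```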
If a unit test is miswritten, in the sense that the test expects an
instance of one type but an instance of some other type is specified,
print that out.
Previously, the dealloc_stacks created for the alloc_stacks used to pass
@in_guaranteed arguments to on_stack closures were created after the
users of the closure. When SILGen created these alloc_stacks in the
same block as the users, this happened to work. Now that
AddressLowering creates such alloc_stacks elsewhere, this approach
results in invalid SIL.
Here, the dealloc_stacks are instead created at the end of each block in the
dominance frontier of the alloc_stack.
Add `deletableInstructions()` and `reverseDeletableInstructions()` to SILBasicBlock.
They allow deleting instructions while iterating over all instructions of the block.
This is a replacement for `InstructionDeleter::updatingRange()`.
It's a simpler implementation than the existing `UpdatingListIterator` and `UpdatingInstructionIteratorRegistry`, because it just needs to keep the prev/next pointers of "deleted" instructions instead of the iterator-registration machinery.
It's also safer, because it doesn't require deleting instructions via a specific instance of an InstructionDeleter (which can easily be missed).
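A C++-style usage sketch; the element type of the range and the safety of
calling eraseFromParent() directly (rather than going through an
InstructionDeleter) are assumed from the description above:
```
// Sketch only.
static void deleteDebugValues(SILBasicBlock *block) {
  for (SILInstruction *inst : block->deletableInstructions()) {
    if (isa<DebugValueInst>(inst)) {
      // The range keeps the prev/next pointers of "deleted" instructions,
      // so iteration continues safely past the erased instruction.
      inst->eraseFromParent();
    }
  }
}
```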
The current `UpdatingInstructionIteratorRegistry` referenced `this` in
the member initializer list. Per [class.cdtor] (11.9.5p1), this is UB: for
any class with a non-trivial constructor, referencing the base class
of the object before the constructor begins execution is not permitted.
We attempted to capture `this` in the lambda that was used to initialise
the member. This was being exploited by the MSVC compiler, resulting in
incorrect execution of the instruction deleter.
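A distilled, hypothetical example of the pattern (not the actual registry
code): `this` is captured by a lambda in the constructor's member initializer
list, before the constructor body has started executing.
```
#include <functional>

class Registry {
  std::function<void()> callback;

public:
  // The lambda is constructed in the member initializer list and captures
  // `this` while the object is still under construction.
  Registry() : callback([this] { notify(); }) {}

  void notify() {}
};

int main() {
  Registry r;
  r.callback();   // invoked only after construction has completed
}
```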
`getValue` -> `value`
`getValueOr` -> `value_or`
`hasValue` -> `has_value`
`map` -> `transform`
The old API will be deprecated in the rebranch.
To avoid merge conflicts, use the new API already in the main branch.
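A small sketch of the renames, assuming an `llvm::Optional<int>` value (the
old spellings keep compiling until the deprecation lands):
```
#include "llvm/ADT/Optional.h"

int widthOrZero(llvm::Optional<int> width) {
  if (!width.has_value())                           // was: hasValue()
    return 0;
  llvm::Optional<int> doubled =
      width.transform([](int w) { return w * 2; }); // was: map()
  return doubled.value_or(0);                       // was: getValueOr(0)
}
```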
rdar://102362022
Begin adding support for OSSA to checked-cast jump-threading based on
the new ownership utilities.
TODO:
- Finish migrating to the new utilities in OwnershipOptUtils.
- Ensure full unit test coverage.
Made bare @instruction and @block more useful. Rather than referring to
the first instruction and block in the current function, they now refer
to the instruction after the test_specification instruction (which must
always exist) and the block containing the test_specification
instruction.
Pass a BasicCalleeAnalysis instance to isDeinitBarrier. This enables
LexicalDestroyHoisting to hoist destroys over applies of functions which
are not deinit barriers.
Pass a BasicCalleeAnalysis instance to isDeinitBarrier. This will allow
ShrinkBorrowScope to hoist end_borrows over applies of functions which
are not deinit barriers.
Added a new C++-to-Swift callback for isDeinitBarrier and passed it
CalleeAnalysis so it can depend on function effects. For now, the argument is
ignored, and all callers just pass nullptr.
Promoted the mayAccessPointer component predicate of isDeinitBarrier, which
needs to remain in C++, to API. That predicate will also depend on function
effects. For that reason, it too is now passed a BasicCalleeAnalysis and is
moved into SILOptimizer.
Also added more conservative versions of isDeinitBarrier and maySynchronize
which never consider side effects.
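A call-site sketch of the two-argument form described above (the exact header
location and parameter type are assumed):
```
// Sketch: with a CalleeAnalysis available, applies of functions whose
// computed effects rule out deinit side effects stop being barriers; a null
// analysis falls back to the conservative answer.
static bool canHoistDestroyOver(SILInstruction *inst,
                                BasicCalleeAnalysis *calleeAnalysis) {
  return !isDeinitBarrier(inst, calleeAnalysis);
}
```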
The testing works by way of a new pass, "UnitTestRunner", and a new
instruction, test_specification. When a function contains
test_specification instructions, the pass invokes the UnitTest subclass
named in each test_specification instruction with the arguments specified
in that instruction.
For example, when running the unit-test-runner pass, having the
instructions
```
test_specification "my-neato-utility 19 @function[callee].block[2] @trace[2]"
test_specification "my-neato-utility 43 @block @trace"
```
would result in the test associated with "my-neato-utility" in
UnitTestRunner.cpp being invoked twice: once with (19, aBlock, aValue),
and once with (43, anotherBlock, someOtherValue). That UnitTest
subclass would need to call takeUInt, takeBlock, and takeTrace on
the Arguments struct it is invoked with. It would then pass those
arguments along to myNeatoUtility and dump out interesting results. The
results would then be FileChecked.
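A hypothetical UnitTest subclass matching the example (the Arguments accessors
are named above; the class shape, myNeatoUtility, and the registration details
are assumed):
```
// Hypothetical subclass for the "my-neato-utility" test above.
struct MyNeatoUtilityTest : UnitTest {
  MyNeatoUtilityTest(UnitTestRunner *pass) : UnitTest(pass) {}
  void invoke(Arguments &arguments) override {
    unsigned count = arguments.takeUInt();      // e.g. 19 or 43
    SILBasicBlock *block = arguments.takeBlock();
    SILValue value = arguments.takeTrace();     // @trace arguments yield values
    myNeatoUtility(count, block, value);        // the utility under test
    block->getParent()->dump();                 // output gets FileChecked
  }
};
```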
To improve the debugging experience of values whose lifetimes are
canonicalized without compromising the semantics expressed in the source
language, when canonicalizing OSSA lifetimes at Onone, lengthen
lifetimes as much as possible without incurring copies that would be
eliminated at O.
rdar://99618502
Rather than finding the boundary in a single combined step, separate
finding the original boundary from extending that boundary. This enables
inserting an optional step between the two, namely extending unconsumed
liveness to its original extent at Onone.
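Roughly, the shape is as follows; only computeBoundary is an existing
PrunedLiveness entry point, and the other step names and the maximizeLifetime
flag are hypothetical:
```
// Illustrative structure of the split.
void canonicalizeBoundary(const PrunedLiveness &liveness,
                          PrunedLivenessBoundary &boundary,
                          bool maximizeLifetime) {
  // 1. Find the original boundary from liveness alone.
  liveness.computeBoundary(boundary);
  // 2. Optional Onone-only step: extend unconsumed liveness back out to its
  //    original extent for a better debugging experience.
  if (maximizeLifetime)
    extendUnconsumedLiveness(boundary);
  // 3. Extend the boundary (e.g. out to preexisting destroys).
  extendBoundary(boundary);
}
```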
It is possible for phis to be marked live. With guaranteed phis, the phi
will be the last, non-consuming use. In this case, the phi's
merge block will have multiple predecessors whose terminators are on the
boundary. When inserting destroys, track whether a merge point has been
visited previously.
To facilitate this, restructure the boundary extension and destroy
insertion code.
Previously, the extender was building up a list of places at which to
insert destroys. In particular, it was using the "boundary edge"
collection for all blocks at whose beginning a destroy should be
created, and it would add merge blocks to that collection. Such blocks
are not boundary blocks.
Here, the extender instead produces a PrunedLivenessBoundary which doesn't
violate that invariant.
This required some changes to the destroy insertion code to find where
to insert destroys. It is now similar to
PrunedLivenessBoundary::visitInsertionPoints and could be used as a
template for a PrunedLivenessBoundary::visitBoundaryPoints through which
::visitInsertionPoints might be factored.
Previously, the destroys set (now a set vector) was never cleared.
The result was that users could get overly pessimistic behavior if they
had previously used the utility with a destroy that came after the
destroys relevant to its current run. Here, it is cleared when the
utility is initialized with a new def. Addresses a TODO in the
copy_propagation test.
Previously, CanonicalizeOSSALifetime had its own variation of
the code for computing the liveness boundary that PrunedLiveness has.
Here, it is switched over to using PrunedLiveness' version.
In order to do that without complicating the interface of PrunedLiveness
by adding a visitor, the extra bookkeeping that was being done for
destroy_values and debug_values is dropped. Instead, after getting an
original boundary from PrunedLiveness::computeBoundary, the boundary is
extended out to preexisting destroys which are not separated from the
original boundary by "interesting" instructions.
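A sketch of that extension, where isInterestingInstruction() is a hypothetical
stand-in for the pass's actual check and the walk over lastUsers is
illustrative:
```
// Sketch only; the real code also checks that the destroy actually ends the
// lifetime of the value being canonicalized.
static void extendBoundaryToPreexistingDestroys(
    PrunedLivenessBoundary &boundary) {
  for (SILInstruction *&lastUser : boundary.lastUsers) {
    SILBasicBlock *block = lastUser->getParent();
    for (auto next = std::next(lastUser->getIterator());
         next != block->end(); ++next) {
      if (isInterestingInstruction(&*next))
        break;                          // separated from the boundary; stop
      if (auto *destroy = dyn_cast<DestroyValueInst>(&*next)) {
        lastUser = destroy;             // adopt the preexisting destroy
        break;
      }
    }
  }
}
```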
First, restore the basic PrunedLiveness abstraction to its original
intention. Move code that pollutes the abstraction, and is fundamentally
wrong from the perspective of liveness, out of the basic abstraction.
Most clients need to reason about live ranges, including the def
points, not just liveness based on use points. Add a PrunedLiveRange
layer of types that understand where the live range is
defined. Knowing where the live range is defined (the kill set) helps
reliably check that arbitrary points are within the boundary. This
way, the client doesn't need to manage this on its own. We can also
support holes in the live range for non-SSA liveness. This makes it
safe and correct for the way liveness is now being used. This layer
safely handles:
- multiple defs
- instructions that are both uses and defs
- dead values
- unreachable code
- self-loops
So it's no longer the client's responsibility to check these things!
Add SSAPrunedLiveness and MultiDefPrunedLiveness to safely handle each
situation.
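A sketch of the SSA layer (MultiDefPrunedLiveness has the same interface but
accepts multiple initializeDef calls); constructor details and the exact
signatures are assumed for illustration:
```
// Sketch only: method names follow the PrunedLiveness family described
// above, but parameter shapes are illustrative.
static void computeSSABoundary(SILValue def,
                               PrunedLivenessBoundary &boundary) {
  SSAPrunedLiveness liveness;
  liveness.initializeDef(def);        // the single def is the kill point
  for (Operand *use : def->getUses())
    liveness.updateForUse(use->getUser(),
                          /*lifetimeEnding=*/use->isLifetimeEnding());
  liveness.computeBoundary(boundary);
  // MultiDefPrunedLiveness additionally handles instructions that are both
  // uses and defs, dead values, and the other cases listed above.
}
```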
Split code that I can't figure out into
DiagnosticPrunedLiveness. Hopefully it will be deleted soon.