Migrating to this classification was made easy by the recent rewrite
of the OSSA constraint model. It's also consistent with
instruction-level abstractions for working with different kinds of
OperandOwnership that are being designed.
This classification vastly simplifies OSSA passes that rewrite OSSA
live ranges, making it straightforward to reason about completeness
and correctness. It will allow a simple utility to canonicalize OSSA
live ranges on the fly.
This avoids the need for OSSA-based utilities and passes to hard-code
SIL opcodes. This will allow several of those unmaintainable pieces of
code to be replaced with a trivial OperandOwnership check.
It's extremely important for SIL maintainers to see a list of all SIL
opcodes associated with a simple OSSA classification and a set of
well-specified rules for each opcode class, without needing to guess
or reverse-engineer the meaning from the implementation. This
classification does that while eliminating a pile of unreadable
macros.
This classification system is the model that CopyPropagation was
initially designed to use. Now, rather than relying on a separate
pass, a simple, lightweight utility will canonicalize OSSA
live ranges.
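For illustration, a minimal sketch of what canonicalizing an owned live
range means (Klass and @borrowKlass are placeholder names; the copy's only
use is non-consuming, so it can be folded into the original value's
lifetime):

bb0(%0 : @owned $Klass):
  // Before: an extra copy that only feeds a non-consuming (guaranteed) use.
  %f = function_ref @borrowKlass : $@convention(thin) (@guaranteed Klass) -> ()
  %copy = copy_value %0 : $Klass
  %call = apply %f(%copy) : $@convention(thin) (@guaranteed Klass) -> ()
  destroy_value %copy : $Klass
  destroy_value %0 : $Klass

bb0(%0 : @owned $Klass):
  // After canonicalization: the use lives within %0's existing live range.
  %f = function_ref @borrowKlass : $@convention(thin) (@guaranteed Klass) -> ()
  %call = apply %f(%0) : $@convention(thin) (@guaranteed Klass) -> ()
  destroy_value %0 : $Klass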
The major problem with writing optimizations based on OperandOwnership
is that some operations don't follow structural OSSA requirements,
such as project_box and unchecked_ownership_conversion. Those are
classified as PointerEscape, which prevents the compiler from reasoning
about, or rewriting, the OSSA live range.
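For example, in the following sketch (the box contents are arbitrary), the
box operand of project_box is classified as PointerEscape, so utilities must
give up on %box's live range:

bb0:
  %box = alloc_box ${ var Int64 }
  // project_box's operand is a PointerEscape: %addr is an address derived
  // from %box that may escape, so %box's live range cannot be shrunk or
  // rewritten.
  %addr = project_box %box : ${ var Int64 }, 0
  destroy_value %box : ${ var Int64 }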
Functional Changes:
As a side effect, this corrects many operand constraints that should
in fact require trivial operand values.
It would be more abstractly correct if this got DI support so
that we destroy the member if the constructor terminates
abnormally, but we can get to that later.
The SIL type of the keypath instruction is lowered in the function's
type expansion context, while the various types involved in its patterns are
unlowered AST types.
When verifying that the types match up, apply the type expansion to both
sides.
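For example, in a sketch like the following (Foo is a hypothetical struct
with a stored property x: Int), the result type is a lowered SIL type while
the types spelled inside the pattern are formal types:

  %kp = keypath $WritableKeyPath<Foo, Int>, (root $Foo; stored_property #Foo.x : $Int)
  // $WritableKeyPath<Foo, Int> is lowered in the enclosing function's type
  // expansion context; $Foo and $Int inside the pattern are unlowered formal
  // types, so the verifier must expand both sides before comparing them.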
rdar://72134937
This commit is doing a few things:
1. It is centralizing all decisions about whether an operand's owner instruction
or a value's parent instruction is forwarding in each SILInstruction
itself. This will prevent this information from getting out of sync.
2. This allowed me to hide the low-level queries in OwnershipUtils.h that
determined if a SILNodeKind was "forwarding". I tried to minimize the amount of
churn in this PR and thus didn't remove the
is{Owned,Ownership,Guaranteed}Forwarding{Use,Value} checks. Instead I left them
alone but added asserts to make sure that if the old impl ever returns true,
the new impl does as well. In a subsequent commit, I am going to remove the old
impl in favor of isa queries.
3. In the process, I also discovered that some instructions were being
inconsistently marked as forwarding. The asserts in this PR caught these cases,
and I fixed the inconsistencies.
This originally trafficked in Instructions, and for some reason the name was
never changed.
I also changed the result type to bool and added the ability for the passed-in
closure to signal failure (and stop iteration) by returning false. This also
makes it possible to use visitLocalEndScopeUses in if statements, which can be
useful.
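For context, the end-of-scope uses such a visitor walks are, for example, the
end_borrows of a begin_borrow; with the bool result, the closure can return
false at the first use it cannot handle and the whole walk reports failure.
A minimal OSSA sketch (Klass is a placeholder class):

bb0(%0 : @guaranteed $Klass):
  %b = begin_borrow %0 : $Klass
  cond_br undef, bb1, bb2

bb1:
  end_borrow %b : $Klass    // end-of-scope use visited on this path
  br bb3

bb2:
  end_borrow %b : $Klass    // end-of-scope use visited on this path
  br bb3

bb3:
  %r = tuple ()
  return %r : $()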
This condition does not hold after each pass; it only holds after running certain cleanup passes.
This verifier condition makes it impossible to bisect the optimization passes.
I think what was happening here was that we were using one of the superclass
classof implementations and getting lucky: in the place where I was using this,
I was guaranteed to have single-value instructions, and that is what I wrote as
my first case.
I also added more robust checks tying the older isGuaranteed...* APIs to the
ForwardingOperand API. I also eliminated the notion of Branch being an owned
forwarding instruction. We only used this in one place in the compiler (when
finding owned value introducers), yet we treat a phi as an introducer, so we
would never hit a branch in our search since we would stop at the phi argument.
The bigger picture here is that this means that all "forwarding instructions"
either forward ownership for everything or for everything but owned/unowned.
And for those listening in, I did find one instruction that was from an
ownership forwarding subclass but was not marked as forwarding:
DifferentiableFunctionInst. With this change, such errors can no longer enter
the code base by mistake.
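As a concrete reference point for what "forwarding" means here (a sketch;
Klass is a placeholder class), unchecked_ref_cast simply forwards whatever
ownership its operand has:

bb0(%0 : @owned $Klass):
  %1 = unchecked_ref_cast %0 : $Klass to $AnyObject  // forwards %0's owned
  destroy_value %1 : $AnyObject                      // ownership to %1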
This assert does not make sense because consumingUses in some cases
only contains the destroying uses. Owned values may not be destroyed
because they may be converted to ValueOwnershipKind::None on all paths
reaching a return. Instead, this utility needs to find liveness first
considering all uses (or at least all uses that may be on a lifetime
boundary). We probably then won't need this assert, but I'm leaving
the FIXME as a placeholder for that work.
Fixes rdar://71240363.
A reborrow occurs when a Borrowing Operand ends the lifetime of a borrowed value
and propagates forward a new guaranteed value that continues the guaranteed
lifetime of the value.
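A minimal sketch (Klass is a placeholder class): the branch operand ends
%borrow's lifetime, and the phi argument is the reborrow that continues the
guaranteed lifetime:

bb0(%0 : @owned $Klass):
  %borrow = begin_borrow %0 : $Klass
  br bb1(%borrow : $Klass)             // the branch operand ends %borrow here

bb1(%reborrow : @guaranteed $Klass):   // ...and %reborrow continues the
  end_borrow %reborrow : $Klass        // guaranteed lifetime
  destroy_value %0 : $Klass
  %r = tuple ()
  return %r : $()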
That prevents the SILVerifier from printing the context, making it
hard to quickly produce test cases from logs.
Instead, just return the failure status to the SILVerifier so it can
do its diagnostic thing.
An assert or llvm_unreachable would also be fine in addition to the
normal SILVerifier diagnostics, but I don't think that's needed here.
Specifically, I made it so that, assuming our instruction is already inserted
into a block, we:
1. Return a constraint of {OwnershipKind::Any, UseLifetimeConstraint::NonLifetimeEnding}.
2. Return OwnershipKind::None for all values.
Notice that above I said we only do this if the instruction is already inserted
into a block. The reason is that if this is called before an instruction is
inserted into a block, we can't get access to the SILFunction that knows
whether or not we are in OSSA form. The only time this can happen is when these
APIs are used from within SILBuilder, since SILBuilder is the only place where
we allow it. SILBuilder already knows whether or not the function is in OSSA
and already does different things as appropriate (namely, in non-OSSA it does
not call getOwnershipKind()). So we know that if these APIs are called in such
a situation, it is only because we are already in OSSA. Given that, we just
assume we are in OSSA if we do not have a function.
To make sure that no mistakes are made as a result of that assumption, I added
a verifier check that, when ownership is disabled, all values return
OwnershipKind::None from getOwnershipKind().
The main upside is that we can write code that works for both OSSA and
non-OSSA, and code that handles non-None ownership, without needing to check
whether ownership is enabled.
This makes it easier to understand conceptually why a ValueOwnershipKind with
Any ownership is invalid and also allowed me to explicitly document the lattice
that relates ownership constraints/value ownership kinds.
I think this assert was just testing the wrong thing. Specifically, it is trying
to say that either the given value has a consuming use or it is post-dominated
by dead end blocks.
Instead of checking that directly by using DeadEndBlocks::isDeadEnd(), it was
using a different emptiness check that caused the assert to actually say that:
A value must have a lifetime-ending use *or* the dead-end blocks analysis must
have found at least one block in the function that is reachable from a
function-terminating terminator (e.g. return).
This is clearly not the intended condition and was causing the linear lifetime
error to hit this assert in certain cases.
<rdar://problem/70690127>
There are multiple reasons this is needed.
1. Most passes do not perform CFG transformations. However, we often
need to split critical edges and remember to invalidate all SIL
analyses at the end of virtually every pass. This is very inefficient
and highly bug prone.
2. Many SIL analysis algorithms need to reason about CFG
edges. Avoiding critical edges leads to far simpler and more efficient
designs when edges can be identified by blocks.
3. Handling block arguments on conditional branches creates complexity
at the lowest level of the SIL interface. This complexity is difficult
to abstract over and bleeds into any algorithm that needs to reason
about phi operands. It's far easier to work with phis if we can easily
recover the phi operand with only a reference to the predecessor
block.
4. Attempting to preserve critical edges in high- and mid-level IR
blocks optimizations that otherwise have no business optimizing
branches. Branch optimization should always be deferred to machine-level
IR where the most relevant heuristics are employed to remove
unconditional branches. If code didn't need to be placed on a critical
edge, then a branch optimization can easily remove that code from the
critical edge (see the sketch after this list).
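For reference, a sketch of what a critical edge looks like and how splitting
it gives edge-specific code its own block:

bb0:
  cond_br undef, bb1, bb2   // the bb0 -> bb2 edge is critical: bb0 has
                            // multiple successors and bb2 has multiple
                            // predecessors
bb1:
  br bb2

bb2:
  %r = tuple ()
  return %r : $()

After splitting the edge, code that belongs only to the bb0 -> bb2 edge can
be placed in the new block:

bb0:
  cond_br undef, bb1, bb3

bb3:
  br bb2                    // new block inserted on the former critical edge

bb1:
  br bb2

bb2:
  %r = tuple ()
  return %r : $()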
Previously, we always inferred the ownership of the switch_enum from its phi
operands. This forced us to model a failure to find a good
OperandOwnershipKindMap in OperandOwnership.cpp. We want to eliminate such
conditions so that failing to find a constraint can mean that an operand
accepts any value, rather than signaling a failure.
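For reference, the forwarding relationship in question (a sketch; Klass is a
placeholder class): switch_enum forwards its operand's ownership into the
case block's payload argument:

bb0(%0 : @owned $Optional<Klass>):
  switch_enum %0 : $Optional<Klass>, case #Optional.some!enumelt: bb1, case #Optional.none!enumelt: bb2

bb1(%payload : @owned $Klass):      // the payload argument takes over %0's
  destroy_value %payload : $Klass   // owned lifetime
  br bb3

bb2:
  br bb3

bb3:
  %r = tuple ()
  return %r : $()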
This makes it clearer that isConsumingUse() is not an owned-oriented API and
also returns true for instructions that end the lifetime of guaranteed values,
like end_borrow.
This introduces a new builtin, `getCurrentAsyncTask()`, that produces a
reference to the current task. This builtin can only be used within
`async` functions, and IR generation merely grabs the task argument
and packages it up.
The type of this function is `() -> Builtin.NativeObject`, because we
don't currently have a Swift-level representation of tasks, and can
probably handle everything through builtins or runtime calls.
This updates how we model reborrows' lifetimes for ownership verification.
Today we follow and combine a borrow's lifetime through phi args as well,
whereas owned values' lifetimes end at a phi arg. This discrepancy in modeling
lifetimes leads to the OwnershipVerifier raising errors incorrectly for
cases such as this, where the borrow and the base value do not dominate
the end_borrow:
bb0:
  cond_br undef, bb1, bb2

bb1:
  %copy0 = copy_value %0
  %borrow0 = begin_borrow %copy0
  br bb3(%borrow0, %copy0)

bb2:
  %copy1 = copy_value %1
  %borrow1 = begin_borrow %copy1
  br bb3(%borrow1, %copy1)

bb3(%borrow, %baseVal):
  end_borrow %borrow
  destroy_value %baseVal
This PR adds a new ReborrowVerifier. The ownership verifier collects a
borrow's lifetime-ending users and populates the ReborrowVerifier's worklist
with reborrows and the corresponding base values.
The ReborrowVerifier then verifies that the lifetime of each reborrow is
contained within the lifetime of its base value.
Provide a mechanism to gradually migrate unit tests away from allowing
critical edges via -allow-critical-edges=false.
This will be the default in OSSA very soon, and will hopefully become
the default eventually for all SIL stages.
Note that not all required optimization pass changes have been
committed yet. I have pending changes in:
- SimplifyCFG
- SILCloner subclasses
- EagerSpecializer
- ArraySpecialization
- LoopUtils
- LoopRotate
There are multiple reasons we need to disallow critical edges:
1. Most passes do not perform CFG transformations. However, we often
need to split critical edges and remember to invalidate all SIL
analyses at the end of virtually every pass. This is very inefficient
and highly bug prone.
2. Many SIL analysis algorithms need to reason about CFG
edges. Avoiding critical edges leads to far simpler and more efficient
designs when edges can be identified by blocks.
3. Handling block arguments on conditional branches creates complexity
at the lowest level of the SIL interface. This complexity is difficult
to abstract over and bleeds into any algorithm that needs to reason
about phi operands. It's far easier to work with phis if we can easily
recover the phi operand with only a reference to the predecessor
block.
4. Attempting to preserve critical edges in high- and mid-level IR
blocks optimizations that otherwise have no business optimizing
branches. Branch optimization should always be deferred to machine-level
IR where the most relevant heuristics are employed to remove
unconditional branches. If code didn't need to be placed on a critical
edge, then a branch optimization can easily remove that code from the
critical edge.