All checked casts are emitted as unconditional_checked_cast
instructions. More than just the casts which produce or consume an
opaque value must be rewritten as unconditional_checked_cast_addr
instructions: in particular, every cast for which
canIRGenUseScalarCheckedCastInstructions returns false must be
rewritten.
Note the instructions to rewrite in this way while visiting values, and
then rewrite them near the end of rewriting.
Instead of waiting until after rewriting everything else, rewrite them
as the terminator results they produce are encountered. This enables
forming projections in the correct locations.
During def rewriting, the def itself can be changed, for example to be a
"dummy" load. In such cases, uses of the new def need to be rewritten,
not uses of the original def.
When a block argument is a terminator result from a try_apply, use the
ApplyRewriter to convert the try_apply.
With this change, when the result is stored into an enum, the
init_enum_data_addr instruction is created before the try_apply, which
is necessary for it to be passed as an argument to the try_apply.
If a switch_enum instruction (1) exhaustively handles all cases, there
is no default case or block corresponding to it. If (2) it handles all
cases but one, the default case corresponds to the unique unhandled
case. Otherwise, (3) the default case corresponds to all the unhandled
cases.
The first two scenarios were already handled by address lowering.
Here, handling is added for case (3). It is similar to what is already
done for rewriting cases, except that no unchecked_take_enum_data_addr
must be created, and the argument is always address-only (it has the
same type as the operand of the switch_enum, which is only being
rewritten because it is address-only).
The filterDeadArgs function takes a list of dead argument indices,
ordered from least to greatest, and a list of original arguments, and
produces a list of arguments excluding those at the dead indices.
It does that by iterating from 0 to size(originalArguments) - 1, adding
the original argument at each index to the list of new arguments, so
long as the index is not that of a dead argument. To avoid doing
lookups into a set, this relies on the dead argument indices being in
ascending order. An iterator into the dead argument list is incremented
only when the current index is dead.
When that iterator is at the end, dereferencing it just gives the size
of the array of dead arguments. So in the case where the first argument
is dead but no other arguments are, and there _are_ other arguments, the
first argument would be skipped, and the second argument's index would
then be found equal to the dereferenced iterator (1).
Previously, there was no check that the iterator was not at the end.
The result was failing to add the second argument to the new list and
tripping an assertion failure.
Here, it is checked that the iterator is not at the end.
When rewriting uses, it is possible for new uses of a value to be
created, as when a debug_value instruction is created when a store
instruction is deleted. Ensure that all uses are rewritten by adding
all uses to the worklist of uses after rewriting each use.
When casting via unchecked_bitwise_cast, if the destination type is
loadable, don't mark the value it produces as rewritten; that value is
not one that AddressLowering is tracking. Instead, replace its
copy_value uses with load [copy] uses of the address that the rewritten
instruction produces.
Now that it can be called on partial_apply instructions, the name
insertAfterFullEvaluation no longer describes what the function does.
One could imagine a function which inserted after the applies of
(non-escaping) partial_applies.
Before iterating over an instruction's uses and deleting each, cache the
list of uses. Otherwise, the loop stops after the first use, because
deleting it clears its NextUse field.
Specify the operand ownership of the Builtin differently depending on
whether lowered addresses are used. Handle rewriting the value version
of the builtin as the address version of the builtin in AddressLowering.
When a `begin_borrow [lexical]` is lowered, the lifetime that it
describes can't be shortened (or eliminated) when lowering. In some
cases, though, there will not be an alloc_stack corresponding directly
to the value being borrowed.
In these cases, mark the whole aggregate lexical.
First, restore the basic PrunedLiveness abstraction to its original
intention. Move code out of the basic abstraction that pollutes it and
is fundamentally wrong from the perspective of the liveness
abstraction.
Most clients need to reason about live ranges, including the def
points, not just liveness based on use points. Add a PrunedLiveRange
layer of types that understand where the live range is
defined. Knowing where the live range is defined (the kill set) helps
reliably check that arbitrary points are within the boundary. This
way, the client doesn't need to manage this on its own. We can also
support holes in the live range for non-SSA liveness. This makes it
safe and correct for the way liveness is now being used. This layer
safely handles:
- multiple defs
- instructions that are both uses and defs
- dead values
- unreachable code
- self-loops
So it's no longer the client's responsibility to check these things!
Add SSAPrunedLiveness and MultiDefPrunedLiveness to safely handle each
situation.
Split code that I can't figure out into
DiagnosticPrunedLiveness. Hopefully it will be deleted soon.
Andy already created the new API some time ago but didn't go through and
update the old occurrences. I did that in this PR and then deprecated
the old API. The tree is clean, so I could just remove it, but I decided
to be nicer to downstream people by deprecating it first.