I included SIL tests here and, in a separate PR against lldb, lldb tests that
validate that the values are validated/invalidated appropriately as we step.
rdar://106767457
I also added an interpreter test that validates that ref_element_addr works as
expected (I fixed that in an earlier commit, but did not add an interpreter
test).
rdar://106724277
* [Executors][Distributed] custom executors for distributed actor
* harden ordering guarantees of synthesised fields
* the issue was that a non-default actor must implement the is-remote check differently
* NonDefaultDistributedActor to complete support and remote flag handling
* invoke nonDefaultDistributedActorInitialize when necessary in SILGen
* refactor inline assertion into method
* cleanup
* [Executors][Distributed] Update module version for NonDefaultDistributedActor
* Minor docs cleanup
* we solved those FIXMEs
* add mangling test for non-def-dist-actor
I also slightly changed the codegen around where we insert the mark_must_check.
Specifically, before we would emit the mark_must_check directly on the
ref_element_addr and then insert the access. This had the unfortunate effect
that we would hoist any destroy_addrs that were actually needed out of the access
scope. Rather than do that, I now insert the mark_must_check on the access
itself. This results in the destroy_addr being within the scope (like the
mark_must_check itself).
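Roughly, as a hedged sketch (SIL invented here for illustration, not taken from the patch; types hypothetical):
```
// Before: mark_must_check on the ref_element_addr, so a destroy_addr
// that was actually needed got hoisted out of the access scope.
%1 = ref_element_addr %0 : $Klass, #Klass.value
%2 = mark_must_check [assignable_but_not_consumable] %1 : $*MO
%3 = begin_access [modify] [dynamic] %2 : $*MO
// ...
end_access %3 : $*MO
destroy_addr %2 : $*MO // outside the access scope

// After: mark_must_check on the access itself, so the destroy_addr
// stays inside the scope.
%1 = ref_element_addr %0 : $Klass, #Klass.value
%2 = begin_access [modify] [dynamic] %1 : $*MO
%3 = mark_must_check [assignable_but_not_consumable] %2 : $*MO
// ...
destroy_addr %3 : $*MO // within the access scope
end_access %2 : $*MO
```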
rdar://105910066
We already did this for the situation without the begin_access. In truth, using
the terminator is a bit too wide, but it works for these sorts of arguments that
use assignable_but_not_consumable, so for expediency (and since we are just
walking blocks) I decided to do something quick.
rdar://106208343
The move checker was converting some kinds of copies
into their `explicit_copy_*` versions, despite the
type of the copy being a copyable type. This was
causing random crashes in some narrow circumstances.
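As a hedged sketch of the bad rewrite (SIL invented for illustration; the type is hypothetical):
```
// Before: an ordinary copy of a value of copyable type $Klass.
%1 = copy_value %0 : $Klass
// After the checker ran (incorrect): the explicit_copy_* forms are only
// meant to be produced for move-only-related rewrites.
%1 = explicit_copy_value %0 : $Klass
```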
resolves rdar://106669967
The subexpression of a MaterializePackExpr (which is always a tuple value
currently) is emitted while preparing to emit a pack expansion expr, and its
elements are projected from within the dynamic pack loop. This means that a
materialized pack is only evaluated once, rather than being evaluated on
every iteration over the pack elements.
I think this was just an incorrect fix. The actual issue is that the memory
lifetime verifier views debug_value as requiring a live address, but at the
same time we do not want to emit diagnostics for a debug_value; we only want to
do that for other kinds of uses. So my solution is to add debug_value to the
final liveness computation, but not use it earlier, when we use said liveness to
compute diagnostics.
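A minimal sketch of the tension (SIL invented for illustration, not taken from the patch):
```
%1 = mark_must_check [consumable_and_assignable] %0 : $*MO
%2 = load [take] %1 : $*MO                          // consuming use
debug_value %1 : $*MO, var, name "x", expr op_deref
// The debug_value must not produce a move checker diagnostic on its
// own, but the final liveness computation still has to account for it
// to keep the memory lifetime verifier happy.
```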
rdar://106442224
This just makes sure that we won't crash on it and will instead emit a proper
message. I think we should improve the message, but this at least gives a proper error.
rdar://106340382
This fixes an issue when doing move-checking on a read accessor,
where the field is only borrowed. After the MoveOnlyAddressChecker
ran on it, the checker would inject a destroy that didn't get "claimed":
```
%2 = ref_element_addr %0 : $ListOfFiles, #ListOfFiles.file
%3 = mark_must_check [no_consume_or_assign] %2 : $*File
%4 = begin_access [read] [dynamic] %3 : $*File
%5 = load_borrow %4 : $*File
yield %5 : $File, resume bb1, unwind bb2
bb1:
end_borrow %5 : $File
end_access %4 : $*File
destroy_addr %2 : $*File // BAD: injected by the checker, never claimed
%9 = tuple ()
return %9 : $()
```
The approach of this fix is to recognize that at the point we're
injecting destroys, we would have emitted diagnostics and stopped
already if there were any consuming uses that we need to clean up
after, since we're in `no_consume_or_assign` checking mode here
when just reading the field.
This is presently gated under -enable-ossa-complete-lifetimes.
This allows SILGen to skip OSSA cleanups, for example at dead-end
blocks.
Long term, we may remove OSSA cleanups from SILGen entirely (except
for lexical borrow scopes). This change lets us experiment with that
option.
Now that in OSSA `partial_apply [on_stack]`s are represented as owned
values rather than stack locations, it is possible for their destroys to
violate stack discipline. A direct lowering of the instructions to
non-OSSA would violate stack nesting.
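To make the hazard concrete, here is a hedged sketch (SIL invented for illustration; function names hypothetical):
```
// In OSSA both closures are owned values, so destroying them out of
// order is perfectly legal:
%c1 = partial_apply [callee_guaranteed] [on_stack] %f() : $@convention(thin) () -> ()
%c2 = partial_apply [callee_guaranteed] [on_stack] %g() : $@convention(thin) () -> ()
destroy_value %c1 : $@noescape @callee_guaranteed () -> () // destroyed first
destroy_value %c2 : $@noescape @callee_guaranteed () -> ()
// Lowered directly to non-OSSA, each destroy becomes a dealloc_stack,
// and deallocating %c1 before %c2 breaks stack nesting.
```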
Previously, when inlining, it was assumed that non-coroutine callees
maintained stack discipline. And, when inlining an OSSA function into a
non-OSSA function, OSSA instructions were lowered directly. The result
was that stack discipline could be violated.
Here, when inlining a function in OSSA form into a function lowered out
of OSSA form, stack nesting is fixed up.
Previously, there was an -Xllvm option to verify after all inlining to a
particular caller. That made it a chore to track down which apply's
inlining resulted in invalid code. Here, a new option is added that
verifies after each run of the inliner.
Some notes:
1. This ensures that if we capture them, we just capture the box by reference.
2. We are still using the old incorrect semantics for captures. I am doing this
so I can bring this up in separate, easy-to-understand patches, all of which
pass all of the moveonly tests.
3. Most of the test edits are due to small differences in error messages
between the object and address checker.
4. I had to add a little support to the move only address checker for a small
pattern that doesn't occur with vars but does occur for lets when we codegen
like this, specifically around enums. The pattern is that we perform a load_borrow,
then a copy_value, and then use the result of the copy_value. Rather than fight
the SILGen pattern, I introduced a small canonicalization into the address checker
that transforms that pattern into a load [copy] + begin_borrow, restoring the
codegen to a pattern the checker expects (see the sketch after this list).
5. I left noimplicitcopy alone for now. But we should come back around and fix
it in a similar way. I just did not have time to do so.
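Here is a hedged sketch of that canonicalization (SIL invented for illustration; the enum type is hypothetical):
```
// Before: what SILGen emits for a let of enum type.
%1 = load_borrow %0 : $*MOEnum
%2 = copy_value %1 : $MOEnum
end_borrow %1 : $MOEnum
// ... uses of %2 ...

// After the canonicalization: a pattern the checker already handles.
%1 = load [copy] %0 : $*MOEnum
%2 = begin_borrow %1 : $MOEnum
// ... uses of %1 ...
end_borrow %2 : $MOEnum
```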
This is the first slice of bringing up escaping closure support. The support is
based around introducing a new type of SILGen VarLoc: a VarLoc with a box and
without a value. Because the VarLoc only has a box, SILGen has to always
eagerly reproject the address out of the box. The reason I am doing this
is that it makes it easy for the move checker to distinguish between
different accesses to the box that we want to check separately. As such, every
time we open the box, we insert a mark_must_check
[assignable_but_not_consumable] on that projection. If allocbox_to_stack manages to
determine that the box can be stack allocated, we eliminate all of the
mark_must_check and place a new mark_must_check [consumable_and_assignable] on
the alloc_stack. The end result is that we get the old model that we had before
and also can support escaping closures.
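As a hedged sketch of the two shapes (SIL invented for illustration; names hypothetical):
```
// Escaping case: every opening of the box reprojects the address and
// gets its own mark_must_check.
%box = alloc_box ${ var MO }, var, name "x"
%p1 = project_box %box : ${ var MO }, 0
%m1 = mark_must_check [assignable_but_not_consumable] %p1 : $*MO
// ... a later access reprojects and marks again ...
%p2 = project_box %box : ${ var MO }, 0
%m2 = mark_must_check [assignable_but_not_consumable] %p2 : $*MO

// If allocbox_to_stack promotes the box, those marks are removed and a
// single mark is placed on the alloc_stack instead.
%s = alloc_stack $MO, var, name "x"
%m = mark_must_check [consumable_and_assignable] %s : $*MO
```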
Later parts of the pipeline do not know about the instruction, so we need to
lower it at this point. This is additionally safe since we will not be
performing move-only checking later in the pipeline.
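A minimal sketch of what lowering means here (assumed from the description, not taken from the patch): the marker is erased and its uses are rewritten to the underlying address:
```
// Before lowering:
%m = mark_must_check [assignable_but_not_consumable] %addr : $*MO
store %v to [assign] %m : $*MO

// After lowering:
store %v to [assign] %addr : $*MO
```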
Otherwise, sometimes when the object checker emits a diagnostic and cleans up
the IR, some of the cleaned-up copies are copies that should have been handled
by the address checker. The end result is that the address checker does not emit
diagnostics for that IR. I found this problem was exacerbated when writing code
for escaping closures.
This commit also cleans up the passes in preparation for moving some of the
transformations into the utils folder at a future time.
Specifically, our operand /could/ be a SILArgument. In that case, oldInst will
be nullptr in all of these cases, so make sure to only delete it if we actually
found a defining instruction.