I also added an interpreter test that validates that ref_element_addr works as
expected (I fixed that in an earlier commit, but did not add an interpreter
test).
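As a rough illustration of the kind of source such a test can exercise (the
type and field names are made up, and the move-only type is spelled with the
current ~Copyable syntax), the test stores into and reads back a move-only
field of a class, which is projected via ref_element_addr:
```
struct Point: ~Copyable {
    var x = 0
    var y = 0
}

final class Box {
    var point = Point()
}

// Accessing `point` through the class reference is projected with a
// ref_element_addr; the test checks the value read back at run time.
let b = Box()
b.point.x = 42
print(b.point.x) // expected: 42
```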
rdar://106724277
I also slightly changed the codegen around where we insert the mark_must_check.
Previously we would emit the mark_must_check directly on the ref_element_addr
and then insert the access. This had the unfortunate effect of hoisting any
destroy_addr that was actually needed out of the access scope. Rather than do
that, I now insert the mark_must_check on the access itself, which keeps the
destroy_addr within the access scope (like the mark_must_check itself).
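For context, a minimal Swift sketch of the kind of source pattern involved
(type and field names here are hypothetical): assigning into a class's
move-only stored property, which lowers to a ref_element_addr wrapped in a
formal access.
```
struct File: ~Copyable {}

final class Holder {
    var file = File()
}

// The store below lowers to a ref_element_addr of `file` wrapped in a
// [modify] access; with this change the mark_must_check is placed on the
// access, so any destroy_addr of the old value stays inside the access scope.
func replace(_ holder: Holder, with newFile: consuming File) {
    holder.file = newFile
}
```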
rdar://105910066
We already did this for the situation without the begin_access. In truth, using
the terminator is a bit too wide, but it works for these sorts of arguments
that use assignable_but_not_consumable, so for expediency (and since we are
just walking blocks) I decided to do something quick.
rdar://106208343
I think this was just an incorrect fix. The actual issue is that the memory
lifetime verifier views debug_value as requiring a live address, but at the
same time we do not want to emit diagnostics for a debug_value; we only want to
do that for other kinds of uses. So my solution is to include debug_value in
the final liveness computation, but not to use it earlier when we use that
liveness to compute diagnostics.
rdar://106442224
This just makes sure that we do not crash on it and instead emit a proper
message. I think we should improve the message, but this at least gives a
proper error.
rdar://106340382
This fixes an issue when doing move-checking on a read accessor,
where the field is only borrowed. After the MoveOnlyAddressChecker
ran on it, it'd inject a destroy that didn't get "claimed":
```
%2 = ref_element_addr %0 : $ListOfFiles, #ListOfFiles.file
%3 = mark_must_check [no_consume_or_assign] %2 : $*File
%4 = begin_access [read] [dynamic] %3 : $*File
%5 = load_borrow %4 : $*File
yield %5 : $File, resume bb1, unwind bb2
bb1:
end_borrow %5 : $File
end_access %4 : $*File
destroy_addr %2 : $*File // BAD
%9 = tuple ()
return %9 : $()
```
The approach of this fix is to recognize that, at the point where we inject
destroys, we would already have emitted diagnostics and stopped if there were
any consuming uses that we need to clean up after, since we are in
`no_consume_or_assign` checking mode here when just reading the field.
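For reference, here is a rough Swift sketch of the kind of source that produces
the SIL above (the stored-property layout and the underscored _read accessor
spelling are illustrative):
```
struct File: ~Copyable {
    var fd: Int32 = -1
}

final class ListOfFiles {
    private var _file = File()

    // The read accessor only borrows the stored move-only field, so the
    // checker runs on it in no_consume_or_assign mode.
    var file: File {
        _read { yield _file }
    }
}
```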
Some notes:
1. This ensures that if we capture them, we just capture the box by reference.
2. We are still using the old, incorrect semantics for captures. I am doing
this so I can bring this up in separate, easy-to-understand patches, all of
which pass all of the moveonly tests.
3. Most of the test edits are due to small differences in error messages
between the object and address checkers.
4. I had to add a little support to the move only address checker for a small
pattern that doesn't occur with vars but does occur for lets when we codegen
like this, specifically around enums. The pattern is that we perform a
load_borrow, then a copy_value, and then use the result of the copy_value.
Rather than fight the SILGen pattern, I introduced a small canonicalization
into the address checker that transforms that pattern into a load [copy] +
begin_borrow, restoring the codegen to a pattern the checker expects.
5. I left noimplicitcopy alone for now. But we should come back around and fix
it in a similar way. I just did not have time to do so.
This is the first slice of bringing up escaping closure support. The support is
based around introducing a new type of SILGen VarLoc: a VarLoc with a box and
without a value. Because the VarLoc only has a box, SILGen always has to
eagerly reproject the address out of the box. The reason I am doing this is
that it makes it easy for the move checker to distinguish between different
accesses to the box that we want to check separately. As such, every time we
open the box, we insert a mark_must_check [assignable_but_not_consumable] on
that projection. If allocbox_to_stack manages to determine that the box can be
stack allocated, we eliminate all of those mark_must_checks and place a new
mark_must_check [consumable_and_assignable] on the alloc_stack. The end result
is that we get the old model that we had before and can also support escaping
closures.
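As a rough illustration of the source pattern this targets (names are
hypothetical, and how much of this compiles cleanly depends on how far the
escaping closure bring-up has progressed), consider a move-only var captured by
an escaping closure:
```
struct File: ~Copyable {
    var fd: Int32 = -1
}

var callbacks: [() -> Void] = []

func register() {
    var file = File()
    // The escaping capture forces `file` into a box. Every access inside the
    // closure reprojects the box's address, and each projection is marked
    // assignable_but_not_consumable, so the closure may reassign the captured
    // value but not consume it.
    callbacks.append {
        file = File(fd: 3)
    }
}
```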
Otherwise, sometimes when the object checker emits a diagnostic and cleans up
the IR, some of the cleaned-up copies are copies that should have been handled
by the address checker. The end result is that the address checker does not
emit diagnostics for that IR. I found this problem was exacerbated when writing
code for escaping closures.
This commit also cleans up the passes in preparation for moving some of the
transformations into the utils folder at a future time.