Originally this was a method used to determine whether we had emitted a
diagnostic when running our gather visitor. It was pretty footgunny since one
could easily forget to set it. Instead of doing that, we now maintain a counter
in the diagnostic emitter that counts how many diagnostics we have emitted and
use that to determine whether the walk emitted any additional diagnostics.
With some of the changes that I have made, we began to emit a mark_must_check
[no_copy] on a copy_addr here. This change teaches the move address checker to
recognize that in this case, if we have a project_box, it is because we have
something captured by an escaping closure.
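Roughly, the shape in question looks like this (a hedged sketch; `MO`, `%src`,
and the check kind shown are assumptions, not taken from this change):
```
// Sketch: %box also escapes into a closure elsewhere; the checker now
// understands that an access through a project_box like this is an access to
// that escaping capture rather than to a plain local.
%box = alloc_box ${ var MO }, var, name "x"
%addr = project_box %box : ${ var MO }, 0
%mark = mark_must_check [assignable_but_not_consumable] %addr : $*MO
copy_addr %src to %mark : $*MO
```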
When we have a loadable type, SILGen always eagerly loads the value, so the
move checker handled those cases by just looking for loads and emitting the
error based on the load. For address-only types, the codegen looks slightly
different (we consume the value directly at its address rather than loading
it), so this change teaches the checker to handle that case as well.
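Roughly, the two consuming shapes look like this (a hedged sketch; `MO`,
`%mark`, and `%dest` are hypothetical):
```
// Loadable move-only type: SILGen eagerly loads, so the consume shows up as a
// load that the checker can key its diagnostic off of.
%v = load [take] %mark : $*MO

// Address-only move-only type: the value is consumed directly at its address,
// with no intervening load.
copy_addr [take] %mark to [init] %dest : $*MO
```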
rdar://105106470
`SSAPrunedLiveness::invalidate` is used as though it resets the state of the
instance it is called on. Clients then reuse the instance with the expectation
that it has been reset, but since it has not actually been reset, this results
in unexpected liveness results.
rdar://108627366
This makes it so that the move address checker is not dependent on starting the
traversal at a base object. I also included verifier checks that the API can
visit all address uses for:
1. project_box.
2. alloc_stack.
3. ref_element_addr.
4. ref_tail_addr.
5. global_addr_inst.
This is because this visitor is now part of the SIL API's definition of being
able to enumerate /all/ addresses derived from a specific chosen address value.
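For illustration, a minimal sketch (with a hypothetical `Pair` type) of what
"all derived addresses" means for one such root:
```
// Starting from the chosen root (%root), the visitor must reach every derived
// address, including access scopes and element projections.
%root = alloc_stack $Pair
%access = begin_access [modify] [static] %root : $*Pair
%field = struct_element_addr %access : $*Pair, #Pair.x
store %value to [trivial] %field : $*Int
end_access %access : $*Pair
dealloc_stack %root : $*Pair
```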
This is a refactoring NFCI change.
rdar://108510644
If one has a type with a deinit and partially invalidates a value of that
type, the checker will clean up the remaining parts of the value but not the
actual underlying value. This would then cause the deinit of the actual type
to be called.
rdar://101651138
Code can only locally interact with a mutable memory location within a formal
access, and is only responsible for maintaining its invariants during that
access, so the move-only address checker does not need to, and should not,
observe operations that occur outside of the access marked with the
`mark_must_check` instruction. Immutable memory locations have no explicit
formal accesses, but that is because every access must be read-only; so
although individual accesses are not delimited, they are all compatible as far
as move-only checking is concerned. We can therefore back out the SILGen
changes that re-projected a memory location from its origin on every access, a
change which breaks invariants assumed by other SIL passes.
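In other words (a rough sketch with made-up names), the checker's view is
limited to the marked formal access:
```
// Anything touching the same location before this access is not the
// checker's problem.
%a = begin_access [modify] [dynamic] %loc : $*MO
%m = mark_must_check [assignable_but_not_consumable] %a : $*MO
// ... only uses of %m within this formal access are checked ...
end_access %a : $*MO
// ... and anything after end_access is likewise not observed.
```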
This is used to teach the checker that the value being checked is supposed to
be uninitialized at the mark_must_check point, so that we don't put a
destroy_addr there.
The way this is implemented is that we always initially emit
assignable_but_not_consumable; then in DI, once we discover that the assign we
are guarding is actually an initialization, we convert the assignable kind to
its initable variant.
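A minimal sketch of that conversion (hypothetical values; the exact post-DI
lowering of the assign is an assumption):
```
// Initially emitted:
%m = mark_must_check [assignable_but_not_consumable] %addr : $*MO
assign %value to %m : $*MO

// After DI discovers the assign is really an initialization:
%m = mark_must_check [initable_but_not_consumable] %addr : $*MO
store %value to [init] %m : $*MO
```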
rdar://106525988
Use BasicBlockBitfield to record per-block liveness state. This has
been the intention since BasicBlockBitfield was first introduced.
Remove the per-field bitfield from PrunedLiveBlocks. This
(re)specializes the data structure for scalar liveness and drastically
simplifies the implementation.
This utility is fundamental to all ownership utilities. It will be on
the critical path in many areas of the compiler, including at
-Onone. It needs to be minimal and as easy as possible for compiler
engineers to understand, investigate, and debug.
This is in preparation for fixing bugs related to multi-def liveness
as used by the move checker.
I included SIL tests here and, in a separate PR against lldb, lldb tests that
validate that the values are validated/invalidated as appropriate as we step.
rdar://106767457
I also added an interpreter test that validates that ref_element_addr works as
expected (I fixed that in an earlier commit, but did not add an interpreter
test).
rdar://106724277
I also slightly changed the codegen around where we insert the
mark_must_check. Previously, we would emit the mark_must_check directly on the
ref_element_addr and then insert the access. This had the unfortunate effect
that any destroy_addr that was actually needed would be hoisted out of the
access scope. Instead, I now insert the mark_must_check on the access itself,
which results in the destroy_addr being within the access scope (like the
mark_must_check itself).
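Sketched out (hypothetical types and values), the placement change looks
roughly like this:
```
// Before: mark directly on the ref_element_addr; the destroy_addr ends up
// hoisted out of the access scope.
%f = ref_element_addr %self : $Klass, #Klass.value
%m = mark_must_check [assignable_but_not_consumable] %f : $*MO
%a = begin_access [modify] [dynamic] %m : $*MO
// ... uses ...
end_access %a : $*MO
destroy_addr %m : $*MO

// After: mark on the access itself; the destroy_addr stays inside the scope.
%f = ref_element_addr %self : $Klass, #Klass.value
%a = begin_access [modify] [dynamic] %f : $*MO
%m = mark_must_check [assignable_but_not_consumable] %a : $*MO
// ... uses ...
destroy_addr %m : $*MO
end_access %a : $*MO
```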
rdar://105910066
We already did this for the situation without the begin_access. In truth,
using the terminator is a bit too wide, but it works for these sorts of
arguments that use assignable_but_not_consumable, so for expediency (and since
we are just walking blocks) I decided to do something quick.
rdar://106208343
I think this was just an incorrect fix. The actual issue is that the memory
lifetime verifier views debug_value as requiring a live address, but at the
same time we do not want to emit diagnostics for a debug_value; we only want
to do that for other uses. So my solution is to add debug_value to the final
liveness computation, but not include it earlier when we use that liveness to
compute diagnostics.
rdar://106442224
This just makes sure that we won't crash on it and instead emit a proper
message. I think we should improve the message, but this at least gives a
proper error.
rdar://106340382
This fixes an issue when doing move-checking on a read accessor,
where the field is only borrowed. After the MoveOnlyAddressChecker
ran on it, it'd inject a destroy that didn't get "claimed":
```
%2 = ref_element_addr %0 : $ListOfFiles, #ListOfFiles.file
%3 = mark_must_check [no_consume_or_assign] %2 : $*File
%4 = begin_access [read] [dynamic] %3 : $*File
%5 = load_borrow %4 : $*File
yield %5 : $File, resume bb1, unwind bb2
bb1:
end_borrow %5 : $File
end_access %4 : $*File
destroy_addr %2 : $*File // BAD
%9 = tuple ()
return %9 : $()
```
The approach of this fix is to recognize that at the point we're injecting
destroys, we would have already emitted diagnostics and stopped if there were
any consuming uses that we need to clean up after, since we're in
`no_consume_or_assign` checking mode here when just reading the field.
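After the fix, the same accessor body should come out without the unclaimed
destroy, roughly (a sketch derived from the snippet above; value numbering
kept as-is):
```
%2 = ref_element_addr %0 : $ListOfFiles, #ListOfFiles.file
%3 = mark_must_check [no_consume_or_assign] %2 : $*File
%4 = begin_access [read] [dynamic] %3 : $*File
%5 = load_borrow %4 : $*File
yield %5 : $File, resume bb1, unwind bb2
bb1:
end_borrow %5 : $File
end_access %4 : $*File
%9 = tuple ()
return %9 : $()
```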
Some notes:
1. This ensures that if we capture them, we just capture the box by reference.
2. We are still using the old incorrect semantics for captures. I am doing
this so I can bring this up in separate, easy-to-understand patches, all of
which pass all of the moveonly tests.
3. Most of the test edits are due to small differences in error messages
between the object and address checkers.
4. I had to add a little support to the move-only address checker for a small
pattern that doesn't occur with vars but does occur for lets when we codegen
like this, specifically around enums. The pattern is that we perform a
load_borrow, then a copy_value, and then use the result of the copy_value.
Rather than fight the SILGen pattern, I introduced a small canonicalization
into the address checker which transforms that pattern into a load [copy] +
begin_borrow, restoring the codegen to a pattern the checker expects (see the
sketch after these notes).
5. I left noimplicitcopy alone for now. But we should come back around and fix
it in a similar way. I just did not have time to do so.
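For note 4, a rough sketch of that canonicalization (hypothetical values; the
exact rewrite is inferred from the description above):
```
// SILGen pattern for lets, seen around enums:
%b = load_borrow %addr : $*MO
%c = copy_value %b : $MO
end_borrow %b : $MO
// ... uses of %c ...

// Canonicalized into the shape the checker already understands:
%c = load [copy] %addr : $*MO
%b = begin_borrow %c : $MO
// ... borrowed uses go through %b ...
end_borrow %b : $MO
```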
This is the first slice of bringing up escaping closure support. The support
is based around introducing a new type of SILGen VarLoc: a VarLoc with a box
and without a value. Because the VarLoc only has a box, SILGen always has to
eagerly reproject the address out of the box. The reason I am doing this is
that it makes it easy for the move checker to distinguish between the
different accesses to the box that we want to check separately. As such, every
time we open the box, we insert a mark_must_check
[assignable_but_not_consumable] on that projection. If allocbox_to_stack
manages to determine that the box can be stack allocated, we eliminate all of
those mark_must_check and place a new mark_must_check
[consumable_and_assignable] on the alloc_stack. The end result is that we get
the old model that we had before and can also support escaping closures.
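A rough sketch of the resulting SIL (names and the box's contents are
hypothetical):
```
// Every reprojection of the box gets its own check marker.
%box = alloc_box ${ var MO }, var, name "x"
%p1 = project_box %box : ${ var MO }, 0
%m1 = mark_must_check [assignable_but_not_consumable] %p1 : $*MO
// ... a later access reprojects the box again ...
%p2 = project_box %box : ${ var MO }, 0
%m2 = mark_must_check [assignable_but_not_consumable] %p2 : $*MO

// If allocbox_to_stack promotes the box, those markers are removed and a
// single marker is placed on the stack allocation instead.
%stack = alloc_stack $MO, var, name "x"
%m = mark_must_check [consumable_and_assignable] %stack : $*MO
```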
Otherwise, sometimes when the object checker emits a diagnostic and cleans up
the IR, some of the cleaned-up copies are copies that should have been handled
by the address checker. The end result is that the address checker does not
emit diagnostics for that IR. I found this problem was exacerbated when
writing code for escaping closures.
This commit also cleans up the passes in preparation for moving some of the
transformations into the utils folder at a future time.