Specifically, this API has some hard edges where, instead of just returning an
invalid value to signal that we do not have self, we assert or return something
bogus. This commit fixes our usage of that API to be correct.
rdar://132545626
In this part of the code, we are attempting to merge all of the operands into
the same region and then assign all non-Sendable results of the function to
that same region. The problem that was occurring here was a thinko: the control
flow of the code did not cleanly separate the case where we have operands from
the case where we do not. Previously this did not matter, since we just used the
first result in such a case... but since we changed to assign to the first
operand element in some cases, it matters now. To fix this, I split the confused
logic into two easy-to-follow control paths: one where we have operands and one
where we do not. In the case where we have a first operand, we merge our
elements into its region. If we do not have any operands, then we just perform
one large fresh region assignment.
This was not exposed by non-coroutine code, since in SIL only coroutines have
multiple results today.
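As a rough illustration of the shape of the fix, here is a self-contained toy model (made-up names, with a dictionary standing in for the pass's real region data structures) showing the two separated paths:
```swift
// Toy model of the fixed control flow; not the pass's real API.
struct RegionMap {
    private(set) var regionOf: [String: Int] = [:]
    private var nextRegion = 0

    // Assign all values to one brand-new region.
    mutating func assignFreshRegion(to values: [String]) {
        for value in values { regionOf[value] = nextRegion }
        nextRegion += 1
    }

    // Merge values into the region of an existing representative value.
    mutating func merge(_ values: [String], intoRegionOf representative: String) {
        if regionOf[representative] == nil {
            assignFreshRegion(to: [representative])
        }
        let region = regionOf[representative]!
        for value in values { regionOf[value] = region }
    }

    // The two separated control paths for a full apply site.
    mutating func handleApply(nonSendableOperands operands: [String],
                              nonSendableResults results: [String]) {
        if let firstOperand = operands.first {
            // We have operands: merge the remaining operands and all
            // non-Sendable results into the first operand's region.
            merge(Array(operands.dropFirst()) + results, intoRegionOf: firstOperand)
        } else {
            // No operands: one large fresh region assignment for the results.
            assignFreshRegion(to: results)
        }
    }
}
```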
rdar://132767643
The changes to allow for partial consumption unintentionally also allowed for
`self` to be consumed as a whole during `deinit`, which we don't yet want to
allow because it could lead to accidental "resurrection" and/or accidental
infinite recursion if the consuming method lets `deinit` be implicitly run
again. This makes it an error again. The experimental feature
`ConsumeSelfInDeinit` will allow it for test coverage or experimentation
purposes. rdar://132761460
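A minimal sketch (with a hypothetical noncopyable type, not the test case from the commit) of the pattern that is rejected again without the feature:
```swift
// Without `ConsumeSelfInDeinit`, consuming `self` as a whole inside `deinit`
// is diagnosed as an error again.
func takeOwnership<T: ~Copyable>(_ value: consuming T) {}

struct Resource: ~Copyable {
    deinit {
        // error: consuming `self` here could implicitly run this `deinit`
        // again when `takeOwnership`'s parameter is destroyed.
        takeOwnership(self)
    }
}
```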
We have found certain cases where, due to the requestified typechecker, a type
is initially Sendable and then later becomes non-Sendable. This can be seen in
the attached test case: the first time one calls isNonSendableType on the test
value, one finds that it is Sendable; the second time, one finds that it is
non-Sendable. The result is that the pass gets into an inconsistent state.
This is a small patch that makes the pass more permissive in the face of such
an error: we no longer ignore Sendable results of instructions (that is, we
make sure to track a value for them), so we do not break invariants.
The better long-term fix is to add a cache for this query to the pass so that
we always use the first answer returned from the typechecker. If the
typechecker has such a bug, we may get bogus results, but we at least do not
break invariants.
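A rough sketch of that longer-term idea (hypothetical names; the pass's real query operates on SIL/AST types, here an ObjectIdentifier stands in for a canonical type key):
```swift
// Hypothetical cache: remember the first answer the typechecker gives for
// each type so a later, contradictory answer cannot flip the pass's view.
final class SendabilityCache {
    private var firstAnswer: [ObjectIdentifier: Bool] = [:]

    func isNonSendable(typeKey: ObjectIdentifier,
                       compute: () -> Bool) -> Bool {
        if let cached = firstAnswer[typeKey] {
            return cached
        }
        let answer = compute()
        firstAnswer[typeKey] = answer
        return answer
    }
}
```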
As an example of this type of behavior, in the test case in this patch we first
find the Sendable conformance of MySubClass; the second time, after doing some
more type checking while performing that query, the typechecker finds the
inherited non-Sendable conformance of MyParentClass, causing MySubClass to be
considered non-Sendable.
rdar://132347404
The main changes are:
*) Rewrite everything in Swift. So far, parts of the memory-behavior analysis were already implemented in Swift. Now everything is done in Swift and lives in `AliasAnalysis.swift`. This is a big code simplification.
*) Support many more instructions in the memory-behavior analysis - especially OSSA instructions, like `begin_borrow`, `end_borrow`, `store_borrow`, `load_borrow`. The computation of `end_borrow` effects is now much more precise. Also, `partial_apply` is now handled more precisely.
*) Simplify and reduce type-based alias analysis (TBAA). The complexity of the old TBAA comes from old days where the language and SIL didn't have strict aliasing and exclusivity rules (e.g. for inout arguments). Now TBAA is only needed for code using unsafe pointers. The new TBAA handles this - and not more. Note that TBAA for classes is already done in `AccessBase.isDistinct`.
*) Handle aliasing in `begin_access [modify]` scopes. We already supported truly immutable scopes like `begin_access [read]` or `ref_element_addr [immutable]`. For `begin_access [modify]` we know that there are no other reads or writes to the accessed address within the scope (see the source-level sketch after this list).
*) Don't cache memory-behavior results. It turned out that the hit-miss rate was pretty bad (~ 1:7). The overhead of the cache lookup took as long as recomputing the memory behavior.
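As a source-level illustration of the `begin_access [modify]` point (a sketch; the SIL itself is elided), an `inout` argument forms an exclusive modify access for the duration of the call:
```swift
var global = 0

func increment(_ x: inout Int) {
    x += 1
}

// Passing `&global` opens an exclusive modify access (a `begin_access [modify]`
// scope in SIL) that covers the call: no other read or write of `global` may
// occur within that scope, so the analysis can treat memory disjoint from
// `global` independently of this write.
increment(&global)
```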
When creating `destroy_value`s, create them with the `dead_end` flag if
all subsequent destroys (those from which the new destroy is notionally
being hoisted) have the flag.
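A toy sketch of the rule (made-up types, not the SIL builder API):
```swift
// The hoisted destroy gets the `dead_end` flag only if every destroy it is
// notionally hoisted from already has it.
struct Destroy {
    var isDeadEnd: Bool
}

func hoistedDestroy(replacing subsequentDestroys: [Destroy]) -> Destroy {
    Destroy(isDeadEnd: subsequentDestroys.allSatisfy(\.isDeadEnd))
}
```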
Now that `isWithinBoundary` and `areUsesWithinBoundary` are "the same"
(up to the fact that one takes an instruction and the other an array of
operands), there's no reason to use the latter when looking at a single
instruction.
When checking whether an instruction is contained in a liveness
boundary, a pointer to a DeadEndBlocks instance must always be passed.
When the pointer is null, it is only checked that the instruction occurs
within the direct live region. When the pointer is non-null, it is
checked whether the instruction occurs within the region obtained by
extending the live region up to the availability boundary within
dead-end regions that are adjacent to the non-lifetime-ending portion of
the liveness boundary.
Use the more precise areUsesWithinBoundary API (which takes dead-end
blocks into account). This requires first updating liveness with the
newly created destroys.