Unlike in regular Swift, the class_method instruction references the specialized version of a class method.
This must be handled in ReabstractionInfo: it needs to work without a concrete callee SIL function.
Also, the SILVerifier must handle the case where a class_method instruction references a specialized method.
Specifically:
1. I changed Partition::apply so that it has an emitLog flag. The reason is that we run apply in a few different situations: sometimes we want to emit logging, and other times we really don't. For instance, we want to emit logging when walking instructions and updating the entry partition. On the other hand, we do not want to emit logging if we apply a value to a partition while attempting to determine why an error needed to be emitted.
2. When we create an assign partition op and we see that our destination and source have the same representative, we do not create the actual assign. Previously we did not log this, so it looked like a logic error was stopping us from emitting a partition op when visiting such instructions. Now we emit a small logging message so there is no room for confusion.
3. Since I am adding another parameter to Partition::apply, I decided to refactor Partition::apply into a separate PartitionOpEvaluator data structure that contains the options we used to pass into Partition::apply. This prevents mistakes when configuring Partition::apply, since the fields provide descriptive names and common-sense default values (see the sketch after this list).
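To make the shape of that refactor concrete, here is a minimal, self-contained sketch. It is not the compiler's actual API: Partition and PartitionOp are reduced to empty stand-ins, emitLog is the only option shown, and the logging text is made up.
```
// Minimal sketch, not the real compiler API: the evaluator owns the options
// that used to be positional parameters of Partition::apply, with named
// fields and sensible defaults. Partition and PartitionOp are stand-ins.
#include <iostream>

struct Partition { /* region state elided */ };
struct PartitionOp { /* assign/transfer/merge payload elided */ };

struct PartitionOpEvaluator {
  Partition &p;

  // Named configuration instead of a growing list of bool parameters.
  bool emitLog = true;

  explicit PartitionOpEvaluator(Partition &p) : p(p) {}

  void apply(const PartitionOp &op) const {
    if (emitLog)
      std::cout << "Applying partition op\n";
    // ... evaluate the op against the partition ...
    (void)op;
  }
};

int main() {
  Partition entryPartition;

  // Dataflow walk over instructions: emit logging while updating the
  // entry partition.
  PartitionOpEvaluator eval(entryPartition);
  eval.apply(PartitionOp{});

  // Error diagnosis: re-apply ops silently while working out why an
  // error needed to be emitted.
  PartitionOpEvaluator silentEval(entryPartition);
  silentEval.emitLog = false;
  silentEval.apply(PartitionOp{});
}
```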
This was a piece of code that I added early on while adding better unit testing for Partition. In that patch I instead added a forward-declared class in Partition called PartitionTester that could implement getRegion; I just forgot to remove this.
We were performing a union on the intersection of lhs/rhs but were dropping the parts of lhs/rhs that were in the symmetric difference of the two sets. Without this fix, we would not diagnose cases like the following, where lhs/rhs had elements that were not in the intersection.
```
var closure: () -> () = {}
await transferToMain(closure)
if await booleanFlag {
  closure = {
    print(self.klass)
  }
} else {
  closure = {}
}
// At this point we would lose closure since they were different elements
await transferToMain(closure) // We wouldn't error on this!
```
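The set arithmetic behind the bug can be illustrated on its own, independent of the real Partition type. The following is only a toy sketch using std::set; the element numbers are made up.
```
// Toy illustration of the merge bug using std::set instead of the real
// Partition type: taking only the intersection of lhs and rhs drops every
// element in their symmetric difference, which is what lost track of
// `closure` in the example above.
#include <algorithm>
#include <iostream>
#include <iterator>
#include <set>

using Region = std::set<unsigned>;

// Buggy merge: only elements present on both sides survive.
Region mergeIntersectionOnly(const Region &lhs, const Region &rhs) {
  Region out;
  std::set_intersection(lhs.begin(), lhs.end(), rhs.begin(), rhs.end(),
                        std::inserter(out, out.end()));
  return out;
}

// Fixed merge: keep the full union of both sides.
Region mergeUnion(const Region &lhs, const Region &rhs) {
  Region out;
  std::set_union(lhs.begin(), lhs.end(), rhs.begin(), rhs.end(),
                 std::inserter(out, out.end()));
  return out;
}

int main() {
  Region lhs = {1, 2}; // e.g. elements tracked along the 'if' branch
  Region rhs = {1, 3}; // e.g. elements tracked along the 'else' branch
  std::cout << mergeIntersectionOnly(lhs, rhs).size() << "\n"; // 1: elements 2 and 3 dropped
  std::cout << mergeUnion(lhs, rhs).size() << "\n";            // 3: nothing lost
}
```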
rdar://117437059
Importantly, we determine at the error stage whether a specific value that is transferred is within the same region as a value that is actor derived. This means that we can determine, in a flow-insensitive manner, the values that are actor derived and then, by propagating those values into the various regions, determine the issue... using our region analysis to handle the flow sensitivity.
One important case to reason about here: since we are relying on region propagation for flow sensitivity, this solves the var case for us. A var, when declared, is never marked as actor derived. Var uses only become actor derived if the var was merged into a region that is actor derived. That means that re-assigning the var to a non-actor-derived value eliminates the actor-derived bit.
As part of this, I also discovered that I could get rid of the captured uniquely-identified array in favor of just passing in an actor-derived flag.
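A rough sketch of that rule follows, using a toy region map rather than the compiler's real data structures (the type and method names here are invented for illustration): the actor-derived bit lives on regions, merging ORs the bits together, and re-assigning a var simply moves it into the source's region.
```
// Toy model (not the real compiler data structures) of the actor-derived
// rule: a var is never actor derived at its declaration, it becomes actor
// derived only by being merged into an actor-derived region, and
// re-assigning it to a non-actor-derived value drops the bit.
#include <cassert>
#include <map>

struct Region {
  unsigned id;
  bool actorDerived = false;
};

struct ToyPartition {
  std::map<unsigned, unsigned> valueToRegion;
  std::map<unsigned, Region> regions;
  unsigned nextRegion = 0;

  // Declaring a value: a fresh region.
  void declare(unsigned value, bool actorDerived = false) {
    regions[nextRegion] = Region{nextRegion, actorDerived};
    valueToRegion[value] = nextRegion++;
  }

  // Merging two values' regions ORs the actor-derived bits together.
  // (Only remaps `b` itself; good enough for this sketch.)
  void merge(unsigned a, unsigned b) {
    Region &ra = regions[valueToRegion[a]];
    Region &rb = regions[valueToRegion[b]];
    ra.actorDerived |= rb.actorDerived;
    valueToRegion[b] = ra.id;
  }

  // Re-assignment: the destination now lives in the source's region.
  void assign(unsigned dest, unsigned src) {
    valueToRegion[dest] = valueToRegion[src];
  }

  bool isActorDerived(unsigned v) {
    return regions[valueToRegion[v]].actorDerived;
  }
};

int main() {
  ToyPartition p;
  p.declare(/*value=*/0, /*actorDerived=*/true); // e.g. self.klass on an actor
  p.declare(/*value=*/1);                        // var x: not actor derived when declared
  p.merge(0, 1);                                 // x = self.klass
  assert(p.isActorDerived(1));

  p.declare(/*value=*/2);                        // a fresh non-actor-derived value
  p.assign(/*dest=*/1, /*src=*/2);               // x = freshValue
  assert(!p.isActorDerived(1));                  // the actor-derived bit is gone
}
```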
rdar://115656589
One needs to pass in the explicit flag to enable this as well as -debug-flag=send-non-sendable. This makes it easier to debug the effect of applying specific partition ops.
Not every block in a region that begins at the non-lifetime-ending boundary of a value and ends at unreachable-terminated blocks has the value available. If the value is not available in the unreachable-terminated blocks of this boundary, it is incorrect to insert destroys of the value in them: doing so is an overconsume on some paths. Previously, however, destroys were simply being inserted at the unreachable.
Here, this is fixed by finding the boundary of availability within that region and inserting destroys before the terminators of the blocks on that boundary.
rdar://116255254
OSSALifetimeCompletion needs to insert destroys not at unreachable instructions that appear after the non-lifetime-ending boundary of a value but rather at the terminators of the availability boundary of the value within that region. Once it does so, it will no longer be sufficient to check whether the insertion point is an unreachable, because the insertion point may be some other terminator that appears on the availability boundary.
Prepare for that by recording the instructions that were found and checking whether the destroy insertion point is such an instruction before bailing, rather than specifically checking for `unreachable`.
Transfer is the terminology that we are using for something being transferred across an isolation boundary, not consume. This also eliminates confusion with 'consume', a term already being used around ownership.
When canonicalizing the lifetime of a lexical value, deinit barriers are
respected. This is done by walking backwards from lifetime ends and
adding encountered deinit barriers to liveness.
Only destroy lifetime ends were walked back from, under the assumption that lifetimes would be complete. Without complete OSSA lifetimes, however, it's also necessary to consider lifetimes that end with unreachables. Unfortunately, we can't simply walk back from those unreachables because there may be instructions which are secretly users of the value being canonicalized (e.g. destroys of `partial_apply`s to which a `begin_borrow` of the value was passed). Such uses don't appear in the use list because lifetime canonicalization expects complete lifetimes and only visits lifetime ends of `begin_borrow`s.
Here, instead, the instructions before the relevant unreachables are
added to liveness. In order to determine which unreachables are
relevant, it's necessary to have a liveness that includes the original
destroys. So a copy of liveness is created and those destroys are added
to it.
rdar://115468707
Previously, we were not recognizing that a ref_element_addr from an actor object is equivalent to the actor object and that we therefore shouldn't allow it to be consumed.
rdar://115132118
Really, we should just be using representative values here in general, since that serves the same purpose and makes it easier to trace values back. But in the short term this makes the output easier to reason about.
Specifically:
1. Converted llvm::errs() -> llvm::dbgs() when using LLVM_DEBUG.
2. Converted a dump method that wrote to errs into a print method taking a stream, with dump calling print(llvm::dbgs()).
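For reference, the resulting shape is roughly the standard LLVM pattern sketched below; the Partition class name, DEBUG_TYPE string, and messages are placeholders rather than the exact compiler code.
```
// Sketch of the LLVM-standard pattern after the conversion: debug output
// goes through LLVM_DEBUG(llvm::dbgs() ...), and dump() becomes a thin
// wrapper over print(raw_ostream &) invoked with llvm::dbgs().
#include "llvm/Support/Compiler.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/raw_ostream.h"

#define DEBUG_TYPE "send-non-sendable"

namespace {
struct Partition {
  void print(llvm::raw_ostream &os) const { os << "Partition state...\n"; }
  LLVM_DUMP_METHOD void dump() const { print(llvm::dbgs()); }
};
} // namespace

void example(const Partition &p) {
  // Only emitted in asserts builds when this DEBUG_TYPE is enabled.
  LLVM_DEBUG(llvm::dbgs() << "Visiting instruction\n");
  LLVM_DEBUG(p.print(llvm::dbgs()));
}
```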
- VTableSpecializer, a new pass that synthesizes a new vtable for each observed concrete type used
- Don't use full type metadata refs in embedded Swift
- Lazily emit specialized class metadata (LazySpecializedClassMetadata) in IRGen
- Don't emit regular class metadata for a class decl if it's generic (only emit the specialized metadata)
Without this fix, the new 'consuming' and 'borrowing' keywords cannot be used with trivial types, which means, for example, that they can't be used in macro expansions that work on various types.
Fixes patterns like:
```
public func test1(i: consuming Int) -> Int {
  takeClosure { [i = copy i] in i }
}
public func test2(i: borrowing Int) -> Int {
  takeClosure { [i = copy i] in i }
}
public func test3(i: consuming Int) -> Int {
  takeClosure { i }
}
// Sadly, test4 is still incorrectly diagnosed.
public func test4(i: borrowing Int) -> Int {
  takeClosure { i }
}
```
Fixes rdar://112795074 (Crash compiling function that has a macro annotation and uses `consuming`)