* [SILOpt] Allow pre-specializations for _Trivial of known size
rdar://119224542
This allows pre-specializations to be generated and applied for trivial types of a known size.
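As a rough illustration, this kind of pre-specialization would be requested via the underscored `@_specialize` attribute with a `_Trivial` layout constraint of a fixed bit width; the exact spelling below is an assumption rather than something taken from this change:
```swift
// A hedged sketch: ask for an exported (pre-)specialized entry point of a
// generic function, usable for any trivially-copyable type exactly 64 bits wide.
@_specialize(exported: true, where T: _Trivial(64))
public func copyValue<T>(_ t: T) -> T {
  return t
}
```
With this change, clients whose generic argument is a matching trivial 64-bit type could then bind against such a pre-specialized entry point instead of going through the unspecialized generic.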
I could have left them as friends since that was how the original code worked, but there isn't a reason to do that and break the encapsulation of Partition. I just added reasonable helpers that give PartitionOpEvaluator all of the functionality it needs.
Previously I avoided doing this since the only problem was that, in a case where we had two transfer instructions in an if-else block, we would emit an error for only one of them:
```swift
if boolValue {
  transfer(x)
} else {
  transfer(x) // Only emit error for this transfer!
}
useValue(x)
```
Now that we track at the transfer point whether any element in the transfer was captured in a closure, this becomes an actual semantic issue: if we only track the transfer instruction that isn't reachable from the closure capture, we will not become more pessimistic:
```swift
if boolValue {
  closure = { useInOut(&x) }
  transfer(x)
} else {
  transfer(x)
}
// Since we grab from the else block, sendableField is allowed to be accessed
// since we do not track that x was captured by reference in a closure.
x.sendableField
useValue(x)
```
To be truly safe, we need to emit both errors.
rdar://119048779
If the var is captured in a closure before it is transferred, it is not safe to
access the Sendable field since we may race on accessing the field with an
assignment to the field in another concurrency domain.
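A minimal sketch of the scenario; `NonSendableKlass`, `useInOut`, and `transferToMainActor` are illustrative stand-ins, not real APIs:
```swift
class NonSendableKlass {
  var sendableField: Int = 0
}
func useInOut(_ t: inout NonSendableKlass) {}
@MainActor func transferToMainActor(_ t: NonSendableKlass) {}

func example() async {
  var x = NonSendableKlass()
  let closure = { useInOut(&x) }   // x is captured by reference before the transfer
  await transferToMainActor(x)     // x's region (closure included) crosses the boundary
  _ = x.sendableField              // unsafe: may race with a write on the main actor
  closure()
}
```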
rdar://115124361
I am not sure if this would ever actually cause a bug... but we should be consistent. The issue here is that in the evaluator switch, when processing transfers, in certain cases where we emitted an early error due to an actor-derived value we were still performing the transfer, and in other cases we weren't. With this patch:
1. We now consistently do not actually transfer the value if it is actor
derived. It is always illegal to transfer an actor-derived value... so we know
that if we always just error on transferring it, we have a safe model.
2. The switch used breaks so that we could assert afterwards that we had
canonicalized the modified PartitionOp. Rather than doing that and allowing
someone to break out of a case without returning, I moved the assert into a
SWIFT_DEFER and added an assert in the continuation of the switch. This means
that if an author ever breaks out of the switch, they will hit the assert,
implying that the author must in all cases explicitly return within the case
statement, guaranteeing this mistake does not happen again.
Specifically:
1. If the value is transferred such that it becomes part of an actor region, the
value is permanently part of the actor region, as one would normally expect.
2. If the value is just used in an async let, or is used by a nonisolated async
function within the async let, then while the async let is alive the value
cannot be used. But once the async let has been awaited upon, we allow it to be
used again (see the sketch below).
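A hedged sketch of both behaviors, again with illustrative stand-ins (`NonSendableKlass`, `useValue`, `useAsync`, `transferToMainActor`):
```swift
class NonSendableKlass {}
func useValue(_ t: NonSendableKlass) {}
func useAsync(_ t: NonSendableKlass) async {}
@MainActor func transferToMainActor(_ t: NonSendableKlass) {}

func example() async {
  // Case 2: the value is only used inside an async let.
  let x = NonSendableKlass()
  async let work = useAsync(x)
  // useValue(x)        // error while the async let is still in flight
  _ = await work
  useValue(x)           // ok: the async let has been awaited upon

  // Case 1: transferring into an actor region is permanent.
  let z = NonSendableKlass()
  await transferToMainActor(z)
  // useValue(z)        // error: z is now part of the main actor's region
}
```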
rdar://117506395
This means we need to support specializing functions with indirect error results.
Also, when specializing (and the concrete error type is loadable), convert the indirect error to a direct error.
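Assuming the indirect error results in question come from typed throws with a generic error parameter (the usual way a SIL function ends up with an address-only error), a rough illustration of the situation:
```swift
enum SmallError: Error { case failed }   // concrete, loadable error type

// E is generic here, so at the SIL level the thrown error is returned
// indirectly (through memory).
func run<E: Error, T>(_ body: () throws(E) -> T) throws(E) -> T {
  try body()
}

// Specializing run for E == SmallError lets the specialized function return
// the error directly instead of through an indirect error slot.
func caller() throws(SmallError) -> Int {
  try run { () throws(SmallError) -> Int in 7 }
}
```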
rdar://118532113
[region-isolation] Since we now propagate the transferred instruction, use that to emit the error instead of attempting to infer the transfer instruction for a requires
This is another NFC refactor in preparation for changing how we emit
errors. Specifically, we need access to not only the instruction, but also the
specific operand that the transfer occurs at. This ensures that we can look up
the specific type information later when we emit an error rather than tracking
this information throughout the entire pass.
In cea0f00598, `InstructionDeleter` began
deleting `load [take]` instructions. Analogous to how it creates a
`destroy_value` when deleting an instruction which consumes a value, in
the case of deleting a `load [take]` the `InstructionDeleter` inserts a
compensating `destroy_addr`.
Previously, `DeadCodeElimination` did not observe the creation of any
instructions created by the `InstructionDeleter`. In the case of the
newly created `destroy_addr`, DCE didn't mark that the `destroy_addr`
was live and so deleted it. The result was a leak.
Here, this is fixed by passing an `InstModCallbacks` (with an
`onCreateNewInst` implementation) down into `erasePhiArgument`, which
eventually invokes the `InstructionDeleter`. When the
`InstructionDeleter` creates a new instruction, DCE marks it live.
getExprForPartitionOp(...) just returned the expression from the loc of op.currentInst:
    SILInstruction *sourceInstr = op.getSourceInst(/*assertNonNull=*/true);
    Expr *expr = sourceInstr->getLoc().getAsASTNode<Expr>();
Instead of mucking around with exprs, just use the SILLocation from the
SILInstruction.
I also changed how we unique transfer instructions to just use the transfer
instruction itself instead of the AST/Expr of the transfer instruction.
Was experimenting with making PartitionOps a noncopyable type and I discovered
these places where we copy PartitionOps when we could use a const reference. It
is good not to copy PartitionOps since they potentially contain a heap allocated
array.
Sadly, my change to make PartitionOps noncopyable will have to wait until a
forthcoming commit where I overhaul how we emit errors, since that older code
copies PartitionOps a lot and I would rather just delete that code and then fix
PartitionOps. But these are, on the surface, safe changes that make sense to get
in separately to make that next patch easier to review.
What this does is really split the one dataflow we are performing into two
dataflows we perform at the same time. The first dataflow is the region dataflow
that we already have, but with transferring never actually occurring. The second
dataflow is a simple gen/kill dataflow where we gen on a transfer instruction and
kill on AssignFresh. It tracks the regions in which a specific element has been
transferred and propagates that state until the element is given a new value. This of course
means that once the dataflow has converged, we have to emit an error not if the
value was transferred, but if any value in its region was transferred.
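A small sketch of what this buys us, with illustrative helpers:
```swift
class NonSendableKlass {}
func useValue(_ t: NonSendableKlass) {}
@MainActor func transferToMainActor(_ t: NonSendableKlass) {}

func example(_ flag: Bool) async {
  var x = NonSendableKlass()
  if flag {
    await transferToMainActor(x)   // gen: x's element is marked transferred on this path
  }
  // useValue(x)                   // error: a value in x's region was transferred
  x = NonSendableKlass()           // AssignFresh: kills the transferred state
  useValue(x)                      // ok again
}
```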
Unlike in regular Swift, the class_method instruction references the specialized version of a class method.
This must be handled in ReabstractionInfo: it needs to work without a concrete callee SIL function.
Also, the SILVerifier must handle the case where a class_method instruction references a specialized method.
Specifically:
1. I changed Partition::apply so that it has an emitLog flag. The reason I did
this is that we run apply in a few different situations: sometimes we want to
emit logging and other times we really don't. For instance, we want to emit
logging when walking instructions and updating the entry partition. On the other
hand, we do not want to emit logging if we apply a value to a partition while
attempting to determine why an error needed to be emitted.
2. When we create an assign partition op and see that our destination and
source have the same representative, we do not create the actual assign. Before,
we did not log this, so it looked like there was a logic error stopping us from
emitting a partition op when visiting such instructions. Now we emit a small
logging message so it isn't possible to be confused.
3. Since I am adding another parameter to Partition::apply, I decided to
refactor Partition::apply to be in a separate PartitionOpEvaluator data
structure that contains the options that we used to pass into Partition::apply.
This prevents any mistakes around configuring Partition::apply since the fields
provide nice names/common sense default values.
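A minimal Swift sketch of the general shape of that refactor; the real code is C++, and all names and fields here are illustrative only:
```swift
struct PartitionOp { var name: String }
struct Partition { /* region state elided */ }

struct PartitionOpEvaluator {
  var partition: Partition
  // Options become named fields with sensible defaults instead of extra
  // positional parameters on apply().
  var emitLog = true

  mutating func apply(_ op: PartitionOp) {
    if emitLog { print("applying \(op.name)") }
    // ... update partition according to op ...
  }
}
```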
This was a piece of code that I added early on while adding better unit testing
for Partition. Instead, in that patch, I added a forward-declared class in
Partition called PartitionTester that could implement getRegion. I just forgot
to remove this.
We were performing a union on the intersection of the lhs/rhs but were dropping
the parts of the lhs/rhs that were in the symmetric difference of the two sets.
Without this fix, we would not diagnose cases like the following, where we had
elements in the lhs/rhs that were not in the intersection:
```swift
var closure: () -> () = {}
await transferToMain(closure)
if await booleanFlag {
  closure = {
    print(self.klass)
  }
} else {
  closure = {}
}
// At this point we would lose closure since they were different elements
await transferToMain(closure) // We wouldn't error on this!
```
rdar://117437059
Importantly, we determine at the error stage if a specific value that is
transferred is within the same region as a value that is actor derived. This
means that we can determine, in a flow-insensitive manner, which values are
actor derived and then, by propagating those values into various regions,
determine the issue... using our region analysis to handle the flow sensitivity.
One important consequence of relying on region propagation for flow sensitivity
is that it solves the var case for us. A var when declared is never marked as
actor derived. A var use only becomes actor derived if the var was merged into a
region that contains an actor-derived value. That means that re-assigning the
var to a non-actor-derived value eliminates the actor-derived bit.
As part of this, I also discovered I could just get rid of the captured uniquely
identified array in favor of just passing in an actor derived flag.
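A hedged sketch of the var behavior described above, again with illustrative stand-ins:
```swift
class NonSendableKlass {}
@MainActor func transferToMainActor(_ t: NonSendableKlass) {}

actor MyActor {
  var field = NonSendableKlass()

  func test() async {
    var x = NonSendableKlass()        // a freshly declared var is not actor derived
    x = self.field                    // x's region now contains an actor-derived value
    // await transferToMainActor(x)   // error: x is actor derived at this point
    x = NonSendableKlass()            // re-assignment eliminates the actor-derived bit
    await transferToMainActor(x)      // ok
  }
}
```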
rdar://115656589
One needs to pass in the explicit flag to enable this as well as
-debug-flag=send-non-sendable. This makes it easier to debug the effect of
applying specific partition ops.
Not every block in the region that begins at the non-lifetime-ending
boundary of a value and ends at unreachable-terminated blocks has
the value available. If the value is not available in those
unreachable-terminated blocks, it is incorrect to insert destroys of the
value in them: it is an overconsume on some paths. Previously,
however, destroys were simply being inserted at the unreachable.
Here, this is fixed by finding the boundary of availability within that
region and inserting destroys before the terminators of the blocks on
that boundary.
rdar://116255254
OSSALifetimeCompletion needs to insert destroys not at unreachable instructions
that appear after the non-lifetime-ending boundary of a value but rather
at the terminators of the availability boundary of the value within that
region. Once it does so, it will no longer be sufficient to check
whether the insertion point is an unreachable, because the insertion point
may be some other terminator that appears on the availability boundary.
Prepare for that by recording the instructions that were found and
checking whether the destroy insertion point is such an instruction
before bailing, rather than specifically checking for `unreachable`.
Transfer is the terminology that we are using for something being transferred
across an isolation boundary, not consume. This also eliminates confusion with
consume, which is a term being used around ownership.