Usually `explicit_copy_addr` and `explicit_copy_value` don't survive until the first run of SILCombine anyway.
But if they do, their simplifications need to be registered; otherwise SILCombine will complain.
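For illustration, a minimal sketch of the kind of rewrite such a simplification might perform once the move-only diagnostics no longer need the explicit-copy marker (the exact conditions are an assumption here, not taken from this change):
```
%1 = explicit_copy_value %0 : $C
```
->
```
%1 = copy_value %0 : $C // sketch: the explicit marker is no longer needed
```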
This is needed in Embedded Swift because the `witness_method` convention requires passing the witness table to the callee.
However, the witness table is not necessarily available: a witness table is only generated if an existential value of the protocol is created.
This is a rare situation, because only witness thunks have the `witness_method` convention, and those thunks are created as "transparent" functions, which means they are always inlined (after de-virtualization of a witness method call).
However, inlining - even of transparent functions - can fail for various reasons.
This change adds a new EmbeddedWitnessCallSpecialization pass:
If a function with the `witness_method` convention is directly called, the function is specialized by changing the convention to `method`, and the call is replaced by a call to the specialized function:
```
%1 = function_ref @callee : $@convention(witness_method: P) (@guaranteed C) -> ()
%2 = apply %1(%0) : $@convention(witness_method: P) (@guaranteed C) -> ()
...
sil [ossa] @callee : $@convention(witness_method: P) (@guaranteed C) -> () {
...
}
```
->
```
%1 = function_ref @$e6calleeTfr9 : $@convention(method) (@guaranteed C) -> ()
%2 = apply %1(%0) : $@convention(method) (@guaranteed C) -> ()
...
// specialized callee
sil shared [ossa] @$e6calleeTfr9 : $@convention(method) (@guaranteed C) -> () {
...
}
```
Fixes a compiler crash
rdar://165184147
It eliminates dead access scopes if they don't conflict with other scopes.
Removes:
```
%2 = begin_access [modify] [dynamic] %1
... // no uses of %2
end_access %2
```
However, dead _conflicting_ access scopes are not removed.
If a conflicting scope becomes dead because an optimization removed a load, for example, it is still important to get an access violation at runtime.
Even a value propagated from a redundant load inside a conflicting scope is undefined.
```
%2 = begin_access [modify] [dynamic] %1
store %x to %2
%3 = begin_access [read] [dynamic] %1 // conflicting with %2!
%y = load %3
end_access %3
end_access %2
use(%y)
```
After redundant-load-elimination:
```
%2 = begin_access [modify] [dynamic] %1
store %x to %2
%3 = begin_access [read] [dynamic] %1 // now dead, but still conflicting with %2
end_access %3
end_access %2
use(%x) // propagated from the store, but undefined here!
```
In this case the scope `%3` is not removed because it's important to get an access violation error at runtime before the undefined value `%x` is used.
This pass considers potentially conflicting access scopes in called functions.
But it does not consider potentially conflicting accesses in callers (because it can't!).
However, optimizations like redundant load elimination can only do such transformations if the outer access scope is within the function, e.g.
```
bb0(%0 : $*T): // an inout from a conflicting scope in the caller
store %x to %0
%3 = begin_access [read] [dynamic] %1
%y = load %3 // cannot be propagated because it cannot be proved that %1 is the same address as %0
end_access %3
```
All those checks are only done for dynamic access scopes, because only those matter for runtime exclusivity checking.
Dead static scopes are removed unconditionally.
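For example, a dead static scope of the following shape can be dropped, since static scopes don't participate in runtime exclusivity checking:
```
%2 = begin_access [modify] [static] %1 // removed even if another scope conflicts with it
... // no uses of %2
end_access %2
```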
This is wrong for hoisted load instructions because we don't check for aliasing in the pre-header.
And for side-effect-free instructions it's not really necessary, because CSE can clean that up afterwards.
Fixes a miscompile
rdar://164034503
`shouldExpand` in `OptUtils.swift` incorrectly returned `true`
unconditionally when `useAggressiveReg2MemForCodeSize` was disabled. The
expansion might be invalid for address-only types and for structs with a
deinit, but those cases were not checked before. This could lead to invalid
`destructure_struct` instructions being emitted without `drop_deinit`.
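For illustration, a sketch of the kind of invalid SIL this could produce, assuming a hypothetical move-only struct `S` with a deinit and two stored fields:
```
// invalid: a struct with a deinit may not be destructured directly
(%1, %2) = destructure_struct %0 : $S
```
->
```
// the deinit has to be dropped first
%1 = drop_deinit %0 : $S
(%2, %3) = destructure_struct %1 : $S
```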
This showed up intermittently in the source-compatibility test suite project hummingbird.
The gist of the problem is that transformations may not rewrite the
type of an inlined instance of a variable without also creating a
deep copy of the inlined function under a different name (e.g., with
a specialization suffix). Otherwise the modified inlined variable will
cause an inconsistency when later compiler passes try to create the
abstract declaration of that inlined function, as there would be
conflicting declarations for that variable.
Since SILDebugScope isn't yet available in the SwiftCompilerSources,
this fix simply drops these variables, but it would be possible
to preserve them by using the same mechanism that SILCloner
uses to create a deep copy of the inlined function's scopes.
rdar://163167975
It hoists `destroy_value` instructions for non-lexical values.
```
%1 = some_ownedValue
...
last_use(%1)
... // other instructions
destroy_value %1
```
->
```
%1 = some_ownedValue
...
last_use(%1)
destroy_value %1 // <- moved after the last use
... // other instructions
```
In contrast to the non-mandatory optimization passes, this is the only pass that hoists destroys over deinit barriers.
This ensures consistent behavior between -Onone and optimized builds.
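For illustration, a sketch where a deinit barrier (here, a call assumed to be able to observe the deinit's side effects) sits between the last use and the destroy:
```
%1 = some_ownedValue
last_use(%1)
%2 = apply %f() // deinit barrier
destroy_value %1
```
->
```
%1 = some_ownedValue
last_use(%1)
destroy_value %1 // hoisted even over the deinit barrier
%2 = apply %f()
```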
Add a special case for the checked_cast_addr_br instruction. If it conformed to
SourceDestAddrInstruction, then the diagnostics would already have handled it
naturally, but the instruction's conditional semantics are strange enough that
such a conformance might confuse other passes.
rdar://159793739 (Using `as?` with non-escapable types emits faulty lifetime
diagnostics)
Only record an outgoingArgument access when the current allocation is an
outgoing argument and the function-exiting instruction is a `ReturnInst`.
This avoids invalid SIL where we attempt to extend end_access up to an `unwind` but
fail to actually materialize that end_access.
Don't always consider an inout_aliasable argument to have
escaped. AccessEnforcementSelection has already done that analysis and left
begin_access [dynamic] artifacts if the argument has escaped in any meaningful
way. Use that information instead.
This is essential for supporting autoclosures that call mutating methods on span-like
types, such as UTF8Span.UnicodeScalarIterator.skipForward(), e.g.
`while numSkipped < n && skipForward() != 0 { ... }`
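For illustration, a sketch of the kind of artifact this relies on (the closure and its captured argument are hypothetical):
```
// closure with an @inout_aliasable capture: AccessEnforcementSelection already
// decided that dynamic enforcement is needed here, i.e. the captured address
// may be reachable through an escaping alias
sil private @closure : $@convention(thin) (@inout_aliasable Int) -> () {
bb0(%0 : $*Int):
  %1 = begin_access [modify] [dynamic] %0
  ...
  end_access %1
}
```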
Define LocalAccessInfo._isFullyAssigned to mean that the access does not read
the incoming value. Then treat any assignment that isn't full as a potential read.
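For illustration, a sketch of the distinction (the struct `S`, its field `x`, and the value names are hypothetical):
```
// fully assigned: the whole accessed value is overwritten without reading it
%2 = begin_access [modify] [static] %1
store %newValue to %2
end_access %2

// not fully assigned: only S.x is overwritten, so the access is treated
// as a potential read of the incoming value
%3 = begin_access [modify] [static] %1
%4 = struct_element_addr %3 : $*S, #S.x
store %newX to %4
end_access %3
```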