(old name: CapturePropagation)
The pass is now rewritten in Swift, which makes the code smaller and simpler.
Compared to the old pass, it has two improvements:
* It can constant-propagate whole structs (and not only builtin literals). This is important for propagating "real" Swift constants, which have a struct type such as `Int`.
* It constant-propagates key paths even if there are other non-constant closure captures which are not propagated. This is something the old pass didn't do (see the sketch below).
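To illustrate both improvements, here is a minimal, hypothetical source sketch (the names `Point` and `makeAccessor` are made up): `limit` is a "real" Swift constant with struct type `Int`, and `path` is a constant key path; both can be propagated into a specialized closure even though `offset` remains a non-constant capture.
```swift
struct Point {
  var x: Int
  var y: Int
}

func makeAccessor(offset: Int) -> (Point) -> Int {
  let limit = 42        // constant with struct type `Int`
  let path = \Point.x   // constant key path
  // `limit` and `path` can be constant-propagated into a specialized copy
  // of the closure, while the non-constant `offset` stays a capture.
  return { p in min(p[keyPath: path] + offset, limit) }
}
```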
rdar://151185177
This pass replaces `alloc_box` with `alloc_stack` if the box does not escape.
The original implementation had some limitations: it could not handle local functions which are called multiple times or even recursively, e.g.
```
public func foo() -> Int {
  var i = 1
  func localFunction() { i += 1 }
  localFunction()
  localFunction()
  return i
}
```
The new implementation (done in Swift) fixes this problem with a new algorithm.
It's not only more powerful, but also simpler: the new pass has less than half as many lines of code as the old one.
The pass is invoked in the mandatory pipeline and later in the optimizer pipeline.
The new implementation provides a module-pass for the mandatory pipeline (whereas the "regular" pass is a function pass).
This is required because the mandatory pass needs to remove the originals of specialized closures, which cannot be done from a function pass.
In the old implementation this was done with a hack by adding a semantic attribute and deleting the function later in the pipeline.
I still kept the sources of the old pass to be able to bootstrap the compiler without a host compiler.
rdar://142756547
* re-implement the pass in Swift
* support `alloc_stack` live ranges which span multiple basic blocks
* support `load`-`store` pairs which copy from the `alloc_stack` (in addition to `copy_addr`)
Those improvements help to reduce temporary stack allocations, especially for InlineArrays (see the sketch below).
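As a rough, hypothetical illustration (assuming the `InlineArray` type from SE-0453; the names `sum` and `caller` are made up): passing the array by value can introduce a temporary `alloc_stack` copy in SIL whose live range may span multiple basic blocks, and which the improved pass can now eliminate.
```swift
func sum(_ a: InlineArray<4, Int>) -> Int {
  var s = 0
  for i in 0..<4 { s += a[i] }
  return s
}

func caller() -> Int {
  let values: InlineArray<4, Int> = [1, 2, 3, 4]
  // Passing `values` by value may go through a temporary stack copy
  // (a `copy_addr` or `load`/`store` pairs in SIL); the pass can forward
  // the copy and let `sum` read from the original storage.
  return sum(values)
}
```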
rdar://151606382
Besides cleaning up the source code, the motivation for the translation into Swift is to make it easier to improve the pass for some InlineArray-specific optimizations (though I'm not sure yet if we really need those).
Also, the new implementation doesn't contain the optimize-store-into-temp optimization anymore, because this is covered by redundant load elimination.
Fixes a false alarm in the case of recursive calls with different type parameters.
For example:
```
protocol P {
  associatedtype E: P
}

func noRecursionMismatchingTypeArgs1<T: P>(_ t: T.Type) {
  if T.self == Int.self {
    return
  }
  noRecursionMismatchingTypeArgs1(T.E.self)
}
```
It hoists `destroy_value` instructions without shrinking an object's lifetime.
This is done if it can be proved that another copy of a value (either in an SSA value or in memory) keeps the referenced object(s) alive until the original position of the `destroy_value`.
```
%1 = copy_value %0
...
last_use_of %0
// other instructions
destroy_value %0 // %1 is still alive here
```
->
```
%1 = copy_value %0
...
last_use_of %0
destroy_value %0
// other instructions
```
The benefit of this optimization is that it can enable copy propagation by moving destroys above deinit barriers and access scopes.
It removes a `copy_value` where the source is a guaranteed value, if possible:
```
%1 = copy_value %0 // %0 = a guaranteed value
// uses of %1
destroy_value %1 // borrow scope of %0 is still valid here
```
->
```
// uses of %0
```
This optimization is very similar to the LoadCopyToBorrow optimization.
Therefore I merged both optimizations into a single file and renamed it to "CopyToBorrowOptimization".
The optimization replaces a `load [copy]` with a `load_borrow` if possible.
```
%1 = load [copy] %0
// no writes to %0
destroy_value %1
```
->
```
%1 = load_borrow %0
// no writes to %0
end_borrow %1
```
The new implementation uses alias-analysis (instead of a simple def-use walk), which is much more powerful.
rdar://115315849
Changes in this CR add part of the Swift-based, Autodiff-specific
closure specialization optimization pass. The pass does not modify any
code, nor does it exist in any of the optimization pipelines yet. The
rationale for pushing this partially complete optimization pass upstream
is to keep up with the breaking changes in the underlying Swift-based
compiler infrastructure.
Add a new mandatory BooleanLiteralFolding pass which constant folds conditional branches with boolean literals as operands.
```
%1 = integer_literal -1
%2 = apply %bool_init(%1) // Bool.init(_builtinBooleanLiteral:)
%3 = struct_extract %2, #Bool._value
cond_br %3, bb1, bb2
```
->
```
...
br bb1
```
This pass is intended to run before DefiniteInitialization, at which point mandatory inlining and constant folding (which would perform this kind of folding) have not run yet.
This optimization is required to let DefiniteInitialization handle boolean literals correctly.
For example in infinite loops:
```
init() {
  while true { // DI needs to know that there is no loop exit from this while-statement
    if some_condition {
      member_field = init_value
      break
    }
  }
}
```
By default it lowers the builtin to an `alloc_vector` with a paired `dealloc_stack`.
If the builtin appears in the initializer of a global variable and the vector elements are initialized,
a statically initialized global is created where the initializer is a `vector` instruction.
In regular Swift this is a nice optimization. In Embedded Swift it's a requirement, because the compiler needs to be able to specialize generic deinits of non-copyable types.
The new de-virtualization utilities are called from two places:
* from the new DeinitDevirtualizer pass. It replaces the old MoveOnlyDeinitDevirtualization, which is very basic and does not fulfill the needs of Embedded Swift.
* from MandatoryPerformanceOptimizations for Embedded Swift
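As a hypothetical illustration of the Embedded Swift requirement mentioned above (the `Buffer` type and `use` function are made up): destroying a value of a generic non-copyable type must end up as a direct call to a specialized deinit, since Embedded Swift cannot fall back to a runtime lookup.
```swift
struct Buffer<T>: ~Copyable {
  var storage: UnsafeMutablePointer<T>

  init(count: Int) {
    storage = .allocate(capacity: count)
  }

  deinit {
    storage.deallocate()
  }
}

func use() {
  let buffer = Buffer<Int>(count: 16)
  // When `buffer`'s lifetime ends, the devirtualizer replaces the generic
  // deinit call with a direct call to the specialized deinit of Buffer<Int>.
  _ = consume buffer
}
```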
For chains of async functions where suspensions can be statically
proven to never be required, this pass removes all suspensions and
turns the functions into synchronous functions.
For example, this function does not actually require any suspensions,
once the correct executor is acquired upon initial entry:
```
func fib(_ n: Int) async -> Int {
  if n <= 1 { return n }
  return await fib(n-1) + fib(n-2)
}
```
So we can turn the above into this for better performance:
```
func fib(_ n: Int) async -> Int {
  return fib_sync(n)
}

func fib_sync(_ n: Int) -> Int {
  if n <= 1 { return n }
  return fib_sync(n-1) + fib_sync(n-2)
}
```
while rewriting callers of `fib` to use the `sync` entry-point
when we can prove that it will be invoked on a compatible executor.
This pass is currently experimental and under development. Thus, it
is disabled by default and you must use
`-enable-experimental-async-demotion` to try it.
It lowers let property accesses of classes.
Lowering consists of two tasks:
* In class initializers, insert `end_init_let_ref` instructions at places where all let-fields are initialized.
This strictly separates the life-range of the class into a region where let fields are still written during
initialization and a region where let fields are truly immutable.
* Add the `[immutable]` flag to all `ref_element_addr` instructions (for let-fields) which are in the "immutable"
region. This includes the region after an inserted `end_init_let_ref` in a class initializer, but also all
let-field accesses in other functions than the initializer and the destructor.
This pass should run after DefiniteInitialization but before RawSILInstLowering (because it relies on `mark_uninitialized` still present in the class initializer).
Note that it's not mandatory to run this pass. If it doesn't run, SIL is still correct.
Simplified example (after lowering):
```
bb0(%0 : @owned C): // = self of the class initializer
  %1 = mark_uninitialized %0
  %2 = ref_element_addr %1, #C.l // a let-field
  store %init_value to %2
  %3 = end_init_let_ref %1 // inserted by lowering
  %4 = ref_element_addr [immutable] %3, #C.l // set to immutable by lowering
  %5 = load %4
```
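A hypothetical Swift source shape that could produce SIL like the above (the class and member names are made up):
```swift
final class C {
  let l: Int

  init(value: Int) {
    l = value    // last let-field initialization; end_init_let_ref is
                 // inserted after this point
    print(l)     // this access is already in the "immutable" region
  }
}
```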
The new implementation has several benefits compared to the old C++ implementation:
* It is significantly simpler. It optimizes each load separately instead of all at once with bit-field based dataflow.
* It uses alias analysis more accurately, which enables more loads to be optimized.
* It avoids inserting additional copies in OSSA
The algorithm is a data flow analysis which starts at the original load and searches for preceding stores or loads by following the control flow in the backward direction.
The preceding stores and loads provide the "available values" with which the original load can be replaced.
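For illustration, a hypothetical source-level example (names made up): the second read of `c.value` can be replaced with the available value `x`, because alias analysis can show that nothing writes to `c.value` in between.
```swift
final class Counter {
  var value = 0
}

func update(_ c: Counter, with x: Int) -> Int {
  c.value = x      // the store provides the "available value"
  return c.value   // redundant load: can be replaced with `x`
}
```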
The old C++ pass missed a few cases.
Also:
* The new pass is significantly simpler: it doesn't perform dataflow for _all_ memory locations at once using bitfields, but handles each store separately. (In both implementations a complexity limit is in place to avoid quadratic behavior.)
* The new pass works with OSSA
It sets the `[bare]` attribute for `alloc_ref` and `global_value` instructions if their header (reference count and metatype) is not used throughout the lifetime of the object.
This allows running the NamedReturnValueOptimization only late in the pipeline.
The optimization shouldn't be done before serialization, because it might prevent predictable memory optimizations in the caller after inlining.
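For context, a hypothetical example of what the NamedReturnValueOptimization applies to (the names `Matrix` and `identity` are made up): the named local `result` can be constructed directly in the caller-provided return slot, so no copy is needed on return.
```swift
struct Matrix {
  var elements: [Double]
}

func identity(size: Int) -> Matrix {
  var result = Matrix(elements: Array(repeating: 0, count: size * size))
  for i in 0..<size {
    result.elements[i * size + i] = 1
  }
  return result   // with NRVO, `result` is not copied on return
}
```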
It converts a lazily initialized global to a statically initialized global variable.
When this pass runs on a global initializer `[global_init_once_fn]` it tries to create a static initializer for the initialized global.
```
sil [global_init_once_fn] @globalinit {
  alloc_global @the_global
  %a = global_addr @the_global
  %i = some_const_initializer_insts
  store %i to %a
}
```
The pass creates a static initializer for the global:
```
sil_global @the_global = {
  %initval = some_const_initializer_insts
}
```
and removes the allocation and store instructions from the initializer function:
```
sil [global_init_once_fn] @globalinit {
  %a = global_addr @the_global
  %i = some_const_initializer_insts
}
```
The initializer then becomes a side-effect-free function, which lets the builtin simplification remove the `builtin "once"` which calls the initializer.
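A hypothetical source-level example (names made up) of a global that qualifies: its initial value consists only of constant expressions, so the global can be statically initialized and the `builtin "once"` call to its initializer disappears.
```swift
struct Configuration {
  var retryCount: Int
  var verbose: Bool
}

// The initializer only stores constants, so `defaultConfiguration` can be
// turned into a statically initialized global.
let defaultConfiguration = Configuration(retryCount: 3, verbose: false)
```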
If a `debug_step` has the same debug location as a preceding or succeeding instruction, it is removed.
It's only important that at least one instruction remains for a given debug location so that single-stepping on that location still works.
Computes the side effects of a function, which consist of argument effects and global effects.
This is similar to the ComputeEscapeEffects pass, just for side-effects.
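A hypothetical illustration (names made up) of the two kinds of effects the pass derives:
```swift
var eventLog: [String] = []

func increment(_ counter: inout Int) {
  counter += 1            // argument effect: writes the argument's memory
}

func recordEvent(_ name: String) {
  eventLog.append(name)   // global effect: writes a global variable
}
```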
Removes redundant Objective-C <-> Swift bridging calls.
Basically, if a value is bridged from Objective-C to Swift and then back to Objective-C again, the original Objective-C value is simply re-used.
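A hypothetical example (assuming an Apple platform with Foundation; the names are made up): `name` arrives as an `NSString`, is bridged to a Swift `String`, and is bridged back to `NSString` when handed to an Objective-C API. The round trip can be removed and the original `NSString` reused.
```swift
import Foundation

func forward(_ name: NSString, into dictionary: NSMutableDictionary) {
  let swiftName = name as String                             // Objective-C -> Swift
  dictionary.setObject(true, forKey: swiftName as NSString)  // Swift -> Objective-C
}
```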
Also in this commit: add an additional DCE pass before ownership elimination. It can clean up dead code which is left behind by the ObjCBridgingOptimization.
rdar://89987440
The ComputeEffects pass derives escape information for function arguments and adds those effects in the function.
This requires many changes to the check-lines in the tests, because the effects are printed in SIL.
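A hypothetical illustration (names made up): the pass would record that the argument of `sum` does not escape, while the argument of `store` escapes into a global.
```swift
var stash: [Int] = []

func sum(_ array: [Int]) -> Int {
  array.reduce(0, +)   // `array` does not escape
}

func store(_ array: [Int]) {
  stash = array        // `array` escapes into the global `stash`
}
```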
`run-unit-tests` is a "pseudo" pass which is invoked from sil-opt and runs all the unit tests implemented in Swift.
This is done from the `swift-unit-tests.sil` lit test.