Add a new mandatory BooleanLiteralFolding pass which constant folds conditional branches with boolean literals as operands.
```
%1 = integer_literal -1
%2 = apply %bool_init(%1) // Bool.init(_builtinBooleanLiteral:)
%3 = struct_extract %2, #Bool._value
cond_br %3, bb1, bb2
```
->
```
...
br bb1
```
This pass is intended to run before DefiniteInitialization, at a point where mandatory inlining and constant folding have not run yet (they would otherwise perform this kind of optimization).
This optimization is required so that DefiniteInitialization can handle boolean literals correctly.
For example, in infinite loops:
```
init() {
  while true { // DI needs to know that there is no loop exit from this while-statement
    if some_condition {
      member_field = init_value
      break
    }
  }
}
```
By default it lowers the builtin to an `alloc_vector` with a paired `dealloc_stack`.
If the builtin appears in the initializer of a global variable and the vector elements are initialized,
a statically initialized global is created where the initializer is a `vector` instruction.
In regular Swift this is a nice optimization. In Embedded Swift it's a requirement, because the compiler needs to be able to specialize generic deinits of non-copyable types.
The new de-virtualization utilities are called from two places:
* from the new DeinitDevirtualizer pass, which replaces the old MoveOnlyDeinitDevirtualization; that pass is very basic and does not fulfill the needs of Embedded Swift
* from MandatoryPerformanceOptimizations for Embedded Swift
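To illustrate, here is a minimal Swift-level sketch (the type and its members are made up for this example): de-virtualizing the destruction of a non-copyable value turns the generic destroy into a direct call to the known deinit, which can then be specialized or inlined.
```
struct FileDescriptor: ~Copyable {
  let fd: Int32

  deinit {
    // Release the underlying resource (illustrative only).
    print("closing \(fd)")
  }
}

func use() {
  let file = FileDescriptor(fd: 3)
  _ = file.fd
  // When `file` dies here, the de-virtualizer can replace the generic
  // destroy of the value with a direct call to FileDescriptor.deinit.
}
```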
For chains of async functions where suspensions can be statically
proven to never be required, this pass removes all suspensions and
turns the functions into synchronous functions.
For example, this function does not actually require any suspensions,
once the correct executor is acquired upon initial entry:
```
func fib(_ n: Int) async -> Int {
  if n <= 1 { return n }
  return await fib(n-1) + fib(n-2)
}
```
So we can turn the above into this for better performance:
```
func fib(_ n: Int) async -> Int {
  return fib_sync(n)
}

func fib_sync(_ n: Int) -> Int {
  if n <= 1 { return n }
  return fib_sync(n-1) + fib_sync(n-2)
}
```
while rewriting callers of `fib` to use the `sync` entry-point
when we can prove that it will be invoked on a compatible executor.
This pass is currently experimental and under development. Thus, it
is disabled by default and you must use
`-enable-experimental-async-demotion` to try it.
It lowers let property accesses of classes.
Lowering consists of two tasks:
* In class initializers, insert `end_init_let_ref` instructions at places where all let-fields are initialized.
This strictly separates the life-range of the class into a region where let fields are still written during
initialization and a region where let fields are truly immutable.
* Add the `[immutable]` flag to all `ref_element_addr` instructions (for let-fields) which are in the "immutable"
region. This includes the region after an inserted `end_init_let_ref` in a class initializer, but also all
let-field accesses in functions other than the initializer and the destructor.
This pass should run after DefiniteInitialization but before RawSILInstLowering (because it relies on `mark_uninitialized` still present in the class initializer).
Note that it's not mandatory to run this pass. If it doesn't run, SIL is still correct.
Simplified example (after lowering):
```
bb0(%0 : @owned C):  // = self of the class initializer
  %1 = mark_uninitialized %0
  %2 = ref_element_addr %1, #C.l               // a let-field
  store %init_value to %2
  %3 = end_init_let_ref %1                     // inserted by lowering
  %4 = ref_element_addr [immutable] %3, #C.l   // set to immutable by lowering
  %5 = load %4
```
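For reference, a Swift-level sketch of the kind of class the SIL above could come from (names are illustrative):
```
final class C {
  let l: Int   // the let-field from the SIL example
  var v: Int

  init(value: Int) {
    l = value  // after this store all let-fields are initialized,
               // so lowering inserts end_init_let_ref here
    v = value  // accesses to `l` from here on can be marked [immutable]
  }
}

func read(_ c: C) -> Int {
  // A let-field access outside the initializer and destructor:
  // its ref_element_addr also gets the [immutable] flag.
  return c.l
}
```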
- VTableSpecializer, a new pass that synthesizes a new vtable for each observed concrete type used
- Don't use full type metadata refs in embedded Swift
- Lazily emit specialized class metadata (LazySpecializedClassMetadata) in IRGen
- Don't emit regular class metadata for a class decl if it's generic (only emit the specialized metadata)
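As a hedged illustration (the type names are made up), each concrete use of a generic class in Embedded Swift gets its own specialized vtable and class metadata:
```
// In Embedded Swift there is no shared, unspecialized metadata for Box<T>;
// the VTableSpecializer synthesizes a vtable per observed concrete type.
class Box<T> {
  var value: T
  init(_ value: T) { self.value = value }
  func describe() -> T { value }
}

func makeBoxes() {
  _ = Box(42)    // specialized vtable + class metadata for Box<Int>
  _ = Box(true)  // specialized vtable + class metadata for Box<Bool>
}
```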
The new implementation has several benefits compared to the old C++ implementation:
* It is significantly simpler. It optimizes each load separately instead of all at once with bit-field based dataflow.
* It uses alias analysis more precisely, which enables more loads to be optimized
* It avoids inserting additional copies in OSSA
The algorithm is a dataflow analysis which starts at the original load and searches for preceding stores or loads by following the control flow in the backward direction.
The preceding stores and loads provide the "available values" with which the original load can be replaced.
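For illustration, a minimal Swift sketch of the source-level pattern (names are made up): the store of `10` is the available value for the load that follows, so the load can be replaced.
```
struct Point {
  var x = 0
  var y = 0
}

func redundantLoad(_ p: inout Point) -> Int {
  p.x = 10     // this store provides the "available value"
  p.y = 20     // alias analysis knows this does not overwrite p.x
  return p.x   // the load of p.x can be replaced with 10
}
```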
The old C++ pass didn't catch a few cases.
Also:
* The new pass is significantly simpler: it doesn't perform dataflow for _all_ memory locations at once using bitfields, but handles each store separately. (In both implementations a complexity limit is in place to avoid quadratic behavior.)
* The new pass works with OSSA
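Assuming this entry refers to store elimination, here is a minimal Swift-level sketch (names are made up) of a dead store that per-store dataflow can remove:
```
final class Cache {
  var hits = 0
  var misses = 0
}

func update(_ c: Cache) {
  c.hits = 1    // dead store: overwritten before any read of c.hits
  c.misses += 1 // a different field, so it does not keep the store above alive
  c.hits = 2
}
```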
Renamed UnitTest to FunctionTest.
FunctionTests are now instantiated once as global objects, together with their
names and the code they run; at that point they are stored by
name in a global registry.
Moved the types to the SIL library.
Together, these changes enable defining unit tests in the source file
containing the code to be tested.
It sets the `[bare]` attribute for `alloc_ref` and `global_value` instructions if their header (reference count and metatype) is not used throughout the lifetime of the object.
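A hedged sketch of one situation where this can apply (assuming the global below is turned into a statically initialized object, so its uses become `global_value` instructions; names are illustrative):
```
final class Limits {
  let maxRetries: Int
  let maxDelay: Int
  init(maxRetries: Int, maxDelay: Int) {
    self.maxRetries = maxRetries
    self.maxDelay = maxDelay
  }
}

let defaultLimits = Limits(maxRetries: 3, maxDelay: 250)

func retryBudget() -> Int {
  // The object is only read through its stored fields; if its reference
  // count and metatype are never used during its lifetime, the
  // global_value can be marked [bare].
  return defaultLimits.maxRetries * defaultLimits.maxDelay
}
```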
In the next commit, I am modifying the move only checker to ensure that we
always error when partially invalidating. While doing this I discovered that
these copies were getting in the way of emitting good diagnostics in that case.
Deallocate the dynamic allocas done for metadata/wtable packs. The
corresponding stackrestore calls are inserted on the dominance frontier and
then stack nesting is fixed up. This is achieved as follows:
Added a new IRGen pass PackMetadataMarkerInserter; it
- determines whether there are any instructions which might allocate on-stack
pack metadata
- if there aren't, makes no changes
- if there are, inserts alloc_pack_metadata just before instructions that could
allocate pack metadata on the stack and dealloc_pack_metadata on the
dominance frontier of those instructions
- fixes up stack nesting
During IRGen, the allocations done for metadata/wtable packs are
recorded, and IRGenSILFunction associates them with the instruction being
lowered, which must be the instruction directly after some alloc_pack_metadata
instruction. Then, when visiting the dealloc_pack_metadata instructions
corresponding to that alloc_pack_metadata, those packs are deallocated.
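For context, a Swift-level sketch of code that can require on-stack pack metadata at a call site (the function names are illustrative):
```
// Calling a variadic-generic function may require metadata packs for the
// pack expansion to be allocated on the stack at the call site; the marker
// pass brackets such instructions with alloc_pack_metadata and
// dealloc_pack_metadata so the stack can be restored afterwards.
func makeTuple<each T>(_ value: repeat each T) -> (repeat each T) {
  return (repeat each value)
}

func caller() {
  let t = makeTuple(1, "two", 3.0)
  print(t)
}
```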
This reverts commit 224674cad1.
Originally, I made this change since we were going to follow the AST in a strict
way in terms of what closures are considered escaping or not from a diagnostics
perspective. Upon further investigation I found that we actually do something
different for inout escaping semantics, and by treating the AST as the one point
of truth we were being inconsistent with the rest of the compiler. As an
example, the compiler does not consider the following code to be an invalid
escaping use of an inout, implying that the closure is not considered
escaping:
```
func f(_ x: inout Int) {
  let g = {
    _ = x
  }
}
```
In contrast, a var is always considered to be an escape:
```
func f(_ x: inout Int) {
  var g = {
    _ = x
  }
}

test2.swift:3:13: error: escaping closure captures 'inout' parameter 'x'
  var g = {
          ^
test2.swift:2:10: note: parameter 'x' is declared 'inout'
func f(_ x: inout Int) {
         ^
test2.swift:4:11: note: captured here
    _ = x
        ^
```
Of course, if we store the let into memory, we get the error one would expect:
```
var global: () -> () = {}

func f(_ x: inout Int) {
  let g = {
    _ = x
  }
  global = g
}

test2.swift:4:11: error: escaping closure captures 'inout' parameter 'x'
  let g = {
          ^
test2.swift:3:10: note: parameter 'x' is declared 'inout'
func f(_ x: inout Int) {
         ^
test2.swift:5:7: note: captured here
    _ = x
        ^
```
By reverting to the old behavior where AllocBoxToStack ran early, noncopyable
types now have the same sort of semantics: let-bound closures that capture a
noncopyable type and do not, on the face of it, escape are considered
non-escaping, while if the closure is ever stored into memory (e.g. into
a global or a local var) or otherwise escapes, we get the appropriate escaping
diagnostics. For example:
```
public struct E : ~Copyable {}

public func borrowVal(_ e: borrowing E) {}
public func consumeVal(_ e: consuming E) {}

func f1() {
  var e = E()

  // Mutable borrowing use of e. We can consume e as long as we reinit at end
  // of function. We don't here, so we get an error.
  let c1: () -> () = {
    borrowVal(e)
    consumeVal(e)
  }

  // Mutable borrowing use of e. We can consume e as long as we reinit at end
  // of function. We do that here, so no error.
  let c2: () -> () = {
    borrowVal(e)
    consumeVal(e)
    e = E()
  }
}
```
This allows running the NamedReturnValueOptimization only late in the pipeline.
The optimization shouldn't be done before serialization, because it might prevent predictable memory optimizations in the caller after inlining.
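For reference, a minimal Swift sketch of the pattern the optimization targets (names are made up): the named local `result` can be constructed directly in the caller-provided return slot instead of being copied on return.
```
struct Accumulator<Value> {
  var total: Value
  var count: Int
}

func accumulate<Value: AdditiveArithmetic>(_ values: [Value]) -> Accumulator<Value> {
  var result = Accumulator(total: Value.zero, count: 0)
  for v in values {
    result.total += v
    result.count += 1
  }
  return result   // candidate for NRVO: `result` can be built directly in
                  // the indirect return slot, avoiding a final copy
}
```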
It converts a lazily initialized global to a statically initialized global variable.
When this pass runs on a global initializer `[global_init_once_fn]` it tries to create a static initializer for the initialized global.
```
sil [global_init_once_fn] @globalinit {
  alloc_global @the_global
  %a = global_addr @the_global
  %i = some_const_initializer_insts
  store %i to %a
}
```
The pass creates a static initializer for the global:
```
sil_global @the_global = {
  %initval = some_const_initializer_insts
}
```
and removes the allocation and store instructions from the initializer function:
```
sil [global_init_once_fn] @globalinit {
  %a = global_addr @the_global
  %i = some_const_initializer_insts
}
```
The initializer then becomes a side-effect-free function, which lets the builtin simplification remove the `builtin "once"` that calls the initializer.
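At the source level, a global like the following (the names are illustrative) can be converted: its initializer only builds constants, so it can be statically initialized and the lazy once-guard removed.
```
struct Configuration {
  let retryCount: Int
  let timeout: Double
}

// The initializer consists only of constant-foldable instructions, so
// `defaultConfiguration` can become a statically initialized global.
let defaultConfiguration = Configuration(retryCount: 3, timeout: 2.5)
```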
Now that we handle inlined global initializers in LICM, CSE and the StringOptimization, we don't need to have a separate mid-level inliner pass, which treats global accessors specially.
Specifically, we already have the appropriate semantics for arguments captured
by escaping closures, but in certain cases AllocBoxToStack is able to prove
that the closure doesn't actually escape. This results in the capture being
converted into a non-escaping SIL form. This then causes the move checker to
emit the wrong kind of error.
The solution is to create an early AllocBoxToStack run that doesn't promote boxes
of move-only types from heap -> stack if they are captured by an escaping closure,
but does everything else normally. Then, once move checking is completed, we
run AllocBoxToStack an additional time to ensure that we keep the guarantee
that the heap -> stack promotion is performed in those cases.
rdar://108905586
Otherwise, sometimes when the object checker emits a diagnostic and cleans up
the IR, some of the cleaned up copies are copies that should have been handled
by the address checker. The end result is that the address checker does not emit
diagnostics for that IR. I found this problem was exacerbated when writing code
for escaping closures.
This commit also cleans up the passes in preparation for moving some of the
transformations into the utils folder at a future time.
The Swift Simplification pass can do more than the old MandatoryCombine pass: simplification of more instruction types and dead code elimination.
The result is better -Onone performance while still keeping debug info consistent.
Currently, the following code patterns are simplified:
* `struct` -> `struct_extract`
* `enum` -> `unchecked_enum_data`
* `partial_apply` -> `apply`
* `br` to a 1:1 related block
* `cond_br` with a constant condition
* `isConcrete` and `is_same_metadata` builtins
More simplifications can be added in the future.
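For illustration, a Swift-level sketch of two of these patterns (a constant condition feeding a `cond_br`, and a `partial_apply` that is immediately applied); the names are made up:
```
func run(_ base: Int) -> Int {
  var result = base
  let alwaysFast = true
  // `cond_br` with a constant condition: the branch can be folded and the
  // dead block removed.
  if alwaysFast {
    // A closure capturing `base` that is called right away: the
    // `partial_apply` -> `apply` simplification can turn this into a
    // direct call, so no closure context needs to be allocated.
    let addBase = { (x: Int) in x + base }
    result = addBase(21)
  }
  return result
}
```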
rdar://96708429
rdar://104562580
If a `debug_step` has the same debug location as a preceding or succeeding instruction, it is removed.
It's only important that at least one instruction remains for a given debug location so that single-stepping on that location still works.
This is a cleaner separation of concerns. The reason why I did not do this
originally is that I thought I would need to reuse this functionality in the
address checker, but this issue actually does not come up there since we project
the address and then load instead of load and then project.
This enables us to emit the appropriate error for consuming uses of fields and
also causes us to eliminate copies exposed by using fields of a move only type
in a non-consuming way.
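To illustrate the non-consuming side, a minimal Swift sketch (names are made up): reading a field of a move-only value through a borrow should neither require a copy nor trigger a consuming-use diagnostic.
```
struct Connection: ~Copyable {
  var name: String
  var isOpen: Bool
}

func describe(_ c: borrowing Connection) -> String {
  // Non-consuming use of fields: the checker should not need to copy `c`,
  // and no consuming-use error should be emitted.
  return c.isOpen ? "open \(c.name)" : "closed \(c.name)"
}
```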
rdar://103271138
* split the `PassContext` into multiple protocols and structs: `Context`, `MutatingContext`, `FunctionPassContext` and `SimplifyContext`
* change how instruction passes work: implement the `simplify` function in conformance to `SILCombineSimplifyable`
* add a mechanism to add a callback for inserted instructions