* Let the customBits and lastInitializedBitfieldID share a single uint64_t. This increases the number of available bits in SILNode and Operand from 8 to 20. It also simplifies the Operand class because PointerIntPairs are no longer needed to store the operand pointer fields.
* Make the "deleted" flag a separate bool field in SILNode instead of encoding it in the sign of lastInitializedBitfieldID. This is another simplification.
* Enable important invariant checks in release builds as well by using `require` instead of `assert`. Not catching such errors in release builds would be a disaster.
* Let the Swift optimization passes use all available bits instead of only a fixed number: 8 in SILNode and 16 in SILBasicBlock.
We often look at the SIL output of -sil-print-function and may want to debug a specific pass
after looking at that output.
-sil-break-before-pass-count=<pass_count> automatically breaks in the debugger before the pass
with the given run count is executed.
Example:
From the -sil-print-function dump:
"SIL function after #6680, stage MidLevel,Function, pass 38: RedundantLoadElimination"
-Xllvm -sil-break-before-pass-count=6680 will break in the debugger before this pass runs.
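For illustration, one way to use this is to run the compiler frontend under a debugger with the flag set (a sketch; the exact frontend invocation, optimization flags and input file are placeholders for your setup):
```
lldb -- swift-frontend -emit-sil -O \
  -Xllvm -sil-break-before-pass-count=6680 main.swift
```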
InstructionRange uses 3 sets. Some algorithms need two active ranges at the top level. Additionally, utilities often have a few levels of nesting that each require a block set.
I am doing this because region-based isolation hit the same issue that the move
checker did, so it makes sense to refactor the functionality into its own helper
pass that runs before both.
It is very conservative and only stubifies functions that the specialization
passes explicitly mark as safe to stub.
For years, optimizer engineers have been hitting a common bug caused by passes
assuming all SILValues have a parent function, only to be surprised by SILUndef.
Since SILUndef occurs relatively rarely, such bugs tend to surface late in
testing. This patch eliminates that problem by uniquing SILUndef at the
function level instead of the module level. With that change it makes sense for
SILUndef to have a parent function, so we can define an API to get its parent
function and eliminate this class of bug.
rdar://123484595
Over time I am going to use RegionAnalysis in a series of passes that all share
the same information, since I am worried about RegionAnalysis computation time.
That said, we want to make sure the memory RegionAnalysis uses is released once
this series of passes has completed. This commit creates a pass that explicitly
invalidates the region analysis and places it in the pass pipeline after the
series of passes. This ensures that even if we add an additional pass, there is
a strong "rattlesnake" signal to the new code author that the new code needs to
be placed before the region analysis invalidation, and it prevents mistakes such
as recomputing the region analysis in that later pass or the later pass
forgetting to invalidate the analysis.
Add a new mandatory BooleanLiteralFolding pass which constant folds conditional branches with boolean literals as operands.
```
%1 = integer_literal -1
%2 = apply %bool_init(%1) // Bool.init(_builtinBooleanLiteral:)
%3 = struct_extract %2, #Bool._value
cond_br %3, bb1, bb2
```
->
```
...
br bb1
```
This pass is intended to run before DefiniteInitialization, at a point where mandatory inlining and constant folding have not run yet (those passes would otherwise perform this kind of optimization).
This optimization is required so that DefiniteInitialization can handle boolean literals correctly.
For example, in infinite loops:
```
init() {
  while true { // DI needs to know that there is no loop exit from this while-statement
    if some_condition {
      member_field = init_value
      break
    }
  }
}
```
This is useful for bisecting passes in large projects:
1. create a config file from a full build log (an example of the resulting file is shown after this list), e.g. with
```
grep -e '-module-name' build.log | sed -e 's/.*-module-name \([^ ]*\) .*/\1:10000000/' | sort | uniq > config.txt
```
2. add the `-Xllvm -sil-pass-count-config-file config.txt` option to the project settings
3. bisect by modifying the counts in the config file
4. clean-rebuild after each bisecting step
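For illustration, the resulting config.txt might look like this (module names are made up; the number is the initial per-module pass-count limit produced by the sed command above):
```
MyApp:10000000
MyFramework:10000000
```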
By default it lowers the builtin to an `alloc_vector` with a paired `dealloc_stack`.
If the builtin appears in the initializer of a global variable and the vector elements are initialized,
a statically initialized global is created whose initializer is a `vector` instruction.
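As a rough sketch (simplified SIL in the style of the other examples; the element type and operand names are made up), the default lowering pairs the allocation with a stack deallocation:
```
%v = alloc_vector $Int, %count
// ... initialization and uses of the vector elements ...
dealloc_stack %v
```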
* [SILOpt] Allow pre-specializations for _Trivial of known size
rdar://119224542
This allows pre-specializations to be generated and applied for trivial types of a shared size.
In regular Swift this is a nice optimization. In embedded Swift it's a requirement, because the compiler needs to be able to specialize generic deinits of non-copyable types.
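A minimal sketch of what such a pre-specialization could look like at the source level, assuming the `_Trivial(N)` layout-constraint spelling accepted by `@_specialize` where clauses (the function itself is made up):
```
// Pre-specialize for any trivial type that is 64 bits in size, so all such
// types can share a single pre-generated specialized entry point.
@_specialize(exported: true, where T: _Trivial(64))
public func process<T>(_ value: T) -> T {
  return value
}
```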
The new de-virtualization utilities are called from two places:
* from the new DeinitDevirtualizer pass. It replaces the old MoveOnlyDeinitDevirtualization, which is very basic and does not fulfill the needs of embedded Swift.
* from MandatoryPerformanceOptimizations for embedded Swift
For chains of async functions where suspensions can be statically
proven to never be required, this pass removes all suspensions and
turns the functions into synchronous functions.
For example, this function does not actually require any suspensions,
once the correct executor is acquired upon initial entry:
```
func fib(_ n: Int) async -> Int {
  if n <= 1 { return n }
  return await fib(n-1) + fib(n-2)
}
```
So we can turn the above into this for better performance:
```
func fib(_ n: Int) async -> Int {
  return fib_sync(n)
}
func fib_sync(_ n: Int) -> Int {
  if n <= 1 { return n }
  return fib_sync(n-1) + fib_sync(n-2)
}
```
while rewriting callers of `fib` to use the `sync` entry-point
when we can prove that it will be invoked on a compatible executor.
This pass is currently experimental and under development. Thus, it
is disabled by default and you must use
`-enable-experimental-async-demotion` to try it.
It lowers let property accesses of classes.
Lowering consists of two tasks:
* In class initializers, insert `end_init_let_ref` instructions at places where all let-fields are initialized.
This strictly separates the life-range of the class into a region where let fields are still written during
initialization and a region where let fields are truly immutable.
* Add the `[immutable]` flag to all `ref_element_addr` instructions (for let-fields) which are in the "immutable"
region. This includes the region after an inserted `end_init_let_ref` in a class initializer, but also all
let-field accesses in functions other than the initializer and the destructor.
This pass should run after DefiniteInitialization but before RawSILInstLowering (because it relies on `mark_uninitialized` still being present in the class initializer).
Note that it's not mandatory to run this pass. If it doesn't run, SIL is still correct.
Simplified example (after lowering):
```
bb0(%0 : @owned C):              // = self of the class initializer
  %1 = mark_uninitialized %0
  %2 = ref_element_addr %1, #C.l // a let-field
  store %init_value to %2
  %3 = end_init_let_ref %1       // inserted by lowering
  %4 = ref_element_addr [immutable] %3, #C.l // set to immutable by lowering
  %5 = load %4
```
- VTableSpecializer, a new pass that synthesizes a new vtable for each observed concrete type used
- Don't use full type metadata refs in embedded Swift
- Lazily emit specialized class metadata (LazySpecializedClassMetadata) in IRGen
- Don't emit regular class metadata for a class decl if it's generic (only emit the specialized metadata)
The new implementation has several benefits compared to the old C++ implementation:
* It is significantly simpler. It optimizes each load separately instead of all at once with bit-field based dataflow.
* It uses alias analysis more precisely, which enables more loads to be optimized.
* It avoids inserting additional copies in OSSA.
The algorithm is a data flow analysis which starts at the original load and searches for preceding stores or loads by following the control flow backwards.
The preceding stores and loads provide the "available values" with which the original load can be replaced.
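For illustration, a minimal case (simplified SIL, names made up) where a preceding store provides the available value:
```
store %1 to %addr
// ... no interfering writes to %addr on this path (checked via alias analysis) ...
%2 = load %addr    // redundant: all uses of %2 can be replaced with %1
```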
The old C++ pass didn't catch a few cases.
Also:
* The new pass is significantly simpler: it doesn't perform dataflow for _all_ memory locations at once using bitfields, but handles each store separately. (In both implementations there is a complexity limit in place to avoid quadratic complexity)
* The new pass works with OSSA
Renamed UnitTest to FunctionTest.
FunctionTests are now instantiated once as global objects, with their names and
the code they are to run, and are registered by name in a global registry.
Moved the types to the SIL library.
Together, these changes enable defining unit tests in the source file
containing the code to be tested.
It sets the `[bare]` attribute for `alloc_ref` and `global_value` instructions if their header (reference count and metatype) is not used throughout the lifetime of the object.
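For example (simplified SIL; the class type is made up), such an allocation is then marked as:
```
%1 = alloc_ref [bare] $C
```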
In the next commit, I am modifying the move only checker to ensure that we
always emit an error when partially invalidating. While doing this, I discovered
that these copies were getting in the way of emitting good diagnostics in that case.