Add a new mandatory BooleanLiteralFolding pass which constant folds conditional branches with boolean literals as operands.
```
%1 = integer_literal -1
%2 = apply %bool_init(%1) // Bool.init(_builtinBooleanLiteral:)
%3 = struct_extract %2, #Bool._value
cond_br %3, bb1, bb2
```
->
```
...
br bb1
```
This pass is intended to run before DefiniteInitialization, at a point where mandatory inlining and constant folding have not run yet (those would otherwise perform this kind of optimization).
This optimization is required to let DefiniteInitialization handle boolean literals correctly.
For example in infinite loops:
```
init() {
  while true { // DI needs to know that there is no loop exit from this while-statement
    if some_condition {
      member_field = init_value
      break
    }
  }
}
```
By default it lowers the builtin to an `alloc_vector` with a paired `dealloc_stack`.
If the builtin appears in the initializer of a global variable and the vector elements are initialized,
a statically initialized global is created where the initializer is a `vector` instruction.
In regular Swift this is a nice optimization. In Embedded Swift it's a requirement, because the compiler needs to be able to specialize generic deinits of non-copyable types.
The new de-virtualization utilities are called from two places:
* from the new DeinitDevirtualizer pass. It replaces the old MoveOnlyDeinitDevirtualization, which is very basic and does not fulfill the needs of Embedded Swift.
* from MandatoryPerformanceOptimizations for Embedded Swift
ASTGen always builds with the host Swift compiler, without requiring
bootstrapping, and is enabled in more places. Move the regex literal
parsing logic there so it is enabled in more host environments, and
makes use of CMake's Swift support. Enable all of the regex literal
tests when ASTGen is built, to ensure everything is working.
Remove the "AST" and "Parse" Swift modules from SwiftCompilerSources,
because they are no longer needed.
Introduce two modes of bridging:
* inline mode: this is basically how it worked so far. Using full C++ interop which allows bridging functions to be inlined.
* pure mode: bridging functions are not inlined but compiled in a C++ file. This allows reducing the C++ interop requirements to a minimum. No std/llvm/swift headers are imported.
This change requires a major refactoring of the bridging sources. The implementations of bridging functions go into two separate files: SILBridgingImpl.h and OptimizerBridgingImpl.h.
Depending on the mode, those files are either included in the corresponding header files (inline mode) or in the C++ file (pure mode).
The mode can be selected with the BRIDGING_MODE cmake variable. By default it is set to the inline mode (= existing behavior). The pure mode is only selected in certain configurations to work around C++ interop issues:
* In debug builds, to work around a problem with LLDB's `po` command (rdar://115770255).
* On Windows, to work around a build problem.
For chains of async functions where suspensions can be statically
proven to never be required, this pass removes all suspensions and
turns the functions into synchronous functions.
For example, this function does not actually require any suspensions,
once the correct executor is acquired upon initial entry:
```
func fib(_ n: Int) async -> Int {
  if n <= 1 { return n }
  return await fib(n-1) + fib(n-2)
}
```
So we can turn the above into this for better performance:
```
func fib(_ n: Int) async -> Int {
  return fib_sync(n)
}

func fib_sync(_ n: Int) -> Int {
  if n <= 1 { return n }
  return fib_sync(n-1) + fib_sync(n-2)
}
```
while rewriting callers of `fib` to use the `sync` entry-point
when we can prove that it will be invoked on a compatible executor.
This pass is currently experimental and under development. Thus, it
is disabled by default and you must use
`-enable-experimental-async-demotion` to try it.
It lowers let property accesses of classes.
Lowering consists of two tasks:
* In class initializers, insert `end_init_let_ref` instructions at places where all let-fields are initialized.
This strictly separates the life-range of the class into a region where let fields are still written during
initialization and a region where let fields are truly immutable.
* Add the `[immutable]` flag to all `ref_element_addr` instructions (for let-fields) which are in the "immutable"
region. This includes the region after an inserted `end_init_let_ref` in a class initializer, but also all
let-field accesses in functions other than the initializer and the destructor.
This pass should run after DefiniteInitialization but before RawSILInstLowering (because it relies on `mark_uninitialized` still being present in the class initializer).
Note that it's not mandatory to run this pass. If it doesn't run, SIL is still correct.
Simplified example (after lowering):
```
bb0(%0 : @owned C):                           // = self of the class initializer
  %1 = mark_uninitialized %0
  %2 = ref_element_addr %1, #C.l              // a let-field
  store %init_value to %2
  %3 = end_init_let_ref %1                    // inserted by lowering
  %4 = ref_element_addr [immutable] %3, #C.l  // set to immutable by lowering
  %5 = load %4
```
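For orientation, a source-level class like the following (a hypothetical example, not taken from the change itself) would produce SIL of roughly this shape:
```swift
final class C {
  let l: Int   // a let-field

  init(value: Int) {
    l = value  // initialization region: the let-field is still being written
    // `end_init_let_ref` would be inserted here, once all let-fields are initialized
    print(l)   // immutable region: this access can use `ref_element_addr [immutable]`
  }
}
```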
The new implementation has several benefits compared to the old C++ implementation:
* It is significantly simpler. It optimizes each load separately instead of all at once with bit-field based dataflow.
* It uses alias analysis more accurately, which enables more loads to be optimized.
* It avoids inserting additional copies in OSSA
The algorithm is a data flow analysis which starts at the original load and searches for preceding stores or loads by following the control flow backwards.
The preceding stores and loads provide the "available values" with which the original load can be replaced.
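As a source-level illustration (a hypothetical example, not from the change itself), a load that is preceded by a store to the same location on all paths can be replaced with the stored value:
```swift
struct S {
  var x: Int
}

func storeThenLoad(_ s: inout S) -> Int {
  s.x = 27     // the preceding store provides the "available value"
  return s.x   // this load is redundant and can be replaced with 27
}
```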
The old C++ pass didn't catch a few cases.
Also:
* The new pass is significantly simpler: it doesn't perform dataflow for _all_ memory locations at once using bitfields, but handles each store separately. (In both implementations there is a complexity limit in place to avoid quadratic complexity)
* The new pass works with OSSA
It sets the `[bare]` attribute for `alloc_ref` and `global_value` instructions if their header (reference count and metatype) is not used throughout the lifetime of the object.
This allows running the NamedReturnValueOptimization only late in the pipeline.
The optimization shouldn't be done before serialization, because it might prevent predictable memory optimizations in the caller after inlining.
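For context, the named return value optimization constructs a named local result directly in the caller-provided return slot instead of copying it on return. A hedged source-level sketch (hypothetical types and names):
```swift
struct Matrix {
  var elements: [Double]
}

func makeIdentity() -> Matrix {
  var result = Matrix(elements: Array(repeating: 0, count: 16))
  for i in 0..<4 {
    result.elements[i * 4 + i] = 1
  }
  return result   // with NRVO, `result` is built in place; no copy on return
}
```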
It converts a lazily initialized global to a statically initialized global variable.
When this pass runs on a global initializer (a function with the `[global_init_once_fn]` attribute), it tries to create a static initializer for the initialized global.
```
sil [global_init_once_fn] @globalinit {
  alloc_global @the_global
  %a = global_addr @the_global
  %i = some_const_initializer_insts
  store %i to %a
}
```
The pass creates a static initializer for the global:
```
sil_global @the_global = {
  %initval = some_const_initializer_insts
}
```
and removes the allocation and store instructions from the initializer function:
```
sil [global_init_once_fn] @globalinit {
  %a = global_addr @the_global
  %i = some_const_initializer_insts
}
```
The initializer then becomes a side-effect-free function, which lets the builtin simplification remove the `builtin "once"` that calls the initializer.
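At the source level this applies, for example, to a global with a constant initializer (a hypothetical example):
```swift
public let theGlobal = 27   // can become a statically initialized global; the lazy
                            // `globalinit` function and its `builtin "once"` call go away
```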
If a `debug_step` instruction has the same debug location as a previous or succeeding instruction, it is removed.
It is only important that at least one instruction remains for a given debug location so that single-stepping on that location still works.
* split the `PassContext` into multiple protocols and structs: `Context`, `MutatingContext`, `FunctionPassContext` and `SimplifyContext`
* change how instruction passes work: implement the `simplify` function in conformance to `SILCombineSimplifyable` (see the sketch after this list)
* add a mechanism to add a callback for inserted instructions
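A minimal sketch of what an instruction simplification looks like under the new scheme; the exact protocol requirements and context APIs are assumptions for illustration:
```swift
// Hypothetical sketch: an instruction pass is now written as a conformance
// to `SILCombineSimplifyable` with a `simplify` function.
extension StrongRetainInst: SILCombineSimplifyable {
  func simplify(_ context: SimplifyContext) {
    // Inspect `self` and use `context` to rewrite or erase it.
    // The context also carries the callback mechanism for inserted instructions
    // mentioned above. `context.erase(instruction:)` is an assumed API name.
  }
}
```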
Replace the generic `List` with the (non-generic) `InstructionList` and `BasicBlockList`.
The `InstructionList` now differs a bit from the `BasicBlockList` because it supports deleting instructions while iterating over the list.
Also add a test pass which tests instruction modification while iterating.
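A hedged usage sketch of what deleting while iterating enables (the predicate and erase API names are assumptions for illustration):
```swift
// Iterating a block's instruction list while erasing some of its elements
// is now safe; the list keeps the iteration valid across deletions.
for inst in block.instructions {
  if shouldDelete(inst) {              // some pass-specific predicate (hypothetical)
    context.erase(instruction: inst)   // assumed API name for deleting an instruction
  }
}
```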
Added a new C++-to-Swift callback for isDeinitBarrier,
and pass it a CalleeAnalysis so it can depend on function effects. For
now, the argument is ignored and all callers just pass nullptr.
Promoted the mayAccessPointer component predicate of isDeinitBarrier,
which needs to remain in C++, to API. That predicate will also
depend on function effects. For that reason, it too is now passed a
BasicCalleeAnalysis and is moved into SILOptimizer.
Also, added more conservative versions of isDeinitBarrier and
maySynchronize which will never consider side-effects.
Computes the side effects of a function, which consist of argument effects and global effects.
This is similar to the ComputeEscapeEffects pass, just for side-effects.
Dead-end blocks are blocks from which there is no path to the function exit (`return`, `throw` or unwind).
These are blocks which end with an unreachable instruction and blocks from which all paths end in "unreachable" blocks.
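A small source-level illustration (hypothetical example): the `fatalError` branch below ends in an `unreachable` and is therefore a dead-end block, while the fall-through path reaches the function exit:
```swift
func check(_ value: Int) -> Int {
  if value < 0 {
    fatalError("negative value")   // this block ends in `unreachable`: a dead-end block
  }
  return value                     // this block reaches the `return`: not a dead-end block
}
```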
It decides which functions need stack protection.
It sets the `needStackProtection` flag on all functions which contain stack-allocated values for which a buffer overflow could occur.
Within safe Swift code there shouldn't be any buffer overflows.
But if the address of a stack variable is converted to an unsafe pointer, it is no longer under the compiler's control.
This means that if there is any `address_to_pointer` instruction for an `alloc_stack`, the function is marked for stack protection.
Another case is `index_addr` for non-tail allocated memory.
This pattern appears if pointer arithmetic is done with unsafe pointers in Swift code.
If the origin of an unsafe pointer can only be tracked to a function argument, the pass tries to find the root stack allocation for such an argument by doing an inter-procedural analysis.
If this is not possible, the fallback is to move the argument into a temporary `alloc_stack` and do the unsafe pointer operations on the temporary.
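A source-level sketch of the pattern that triggers the flag (hypothetical example): the address of a stack variable is converted to an unsafe pointer, so the compiler can no longer rule out an overflow of that stack slot:
```swift
func readViaUnsafePointer() -> Int {
  var local = 27
  return withUnsafeMutablePointer(to: &local) { pointer in
    // At the SIL level this involves an `address_to_pointer` of the
    // `alloc_stack` for `local`, so the function gets stack protection.
    pointer.pointee
  }
}
```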
rdar://93677524
Provides a list of instructions, which reference a function.
A function "use" is an instruction in another (or the same) function which references the function.
In most cases those are `function_ref` instructions, but can also be e.g. `keypath` instructions.
'FunctionUses' performs an analysis of all functions in the module and collects instructions which reference other functions.
This utility can be used to do inter-procedural caller-analysis.
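A hedged usage sketch from within a module pass (the method names are assumptions for illustration):
```swift
var functionUses = FunctionUses()
functionUses.collect(context: moduleContext)   // analyze all functions of the module

for function in moduleContext.functions {
  for use in functionUses.getUses(of: function) {
    // `use` is an instruction (e.g. a `function_ref` or `keypath`)
    // in some function that references `function`.
  }
}
```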
To add a module pass in `Passes.def` use the new `SWIFT_MODULE_PASS` macro.
On the swift side, create a `ModulePass`.
Its `run` function receives a `ModulePassContext`, which provides access to all functions of a module.
But it doesn't provide any APIs to modify functions.
In order to modify a function, a module pass must use `ModulePassContext.transform(function:)`.
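A minimal sketch of a module pass under this scheme; the exact initializer and closure signatures, as well as the macro arguments, are assumptions for illustration:
```swift
// In Passes.def (hypothetical entry):
//   SWIFT_MODULE_PASS(MyModulePass, "my-module-pass")

let myModulePass = ModulePass(name: "my-module-pass") {
  (moduleContext: ModulePassContext) in

  for function in moduleContext.functions {
    // Functions cannot be modified directly through the module context;
    // mutation has to go through `transform(function:)`.
    moduleContext.transform(function: function) { functionContext in
      // mutate `function` here using the function-level context
    }
  }
}
```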
Removes redundant Objective-C <-> Swift bridging calls.
Basically, if a value is bridged from Objective-C to Swift and then back to Objective-C again, just re-use the original Objective-C value.
Also in this commit: add an additional DCE pass before ownership elimination. It can clean up dead code which is left behind by the ObjCBridgingOptimization.
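A source-level illustration of such a round trip (hypothetical example):
```swift
import Foundation

func roundTrip(_ objcString: NSString) -> NSString {
  let swiftString = objcString as String   // Objective-C -> Swift bridging
  return swiftString as NSString           // Swift -> Objective-C bridging:
                                           // the original `objcString` can be reused
}
```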
rdar://89987440
To use _RegexParser from SwiftSyntax.
* Create 'libswiftCompilerModules_SwiftSyntax.a' which is a subset of
'libswiftCompilerModules.a'
* Link 'lib_InternalSwiftSyntaxParser' to
'libswiftCompilerModules_SwiftSyntax.a'
* Factor out the Swift runtime linking logic in CMake so that dynamic
libraries, in addition to executables, can link to the Swift runtime
* Link 'lib_InternalSwiftSyntaxParser' to swift runtime
This fixes:
* An issue where the diagnostic messages were leaked
* An issue where diagnostics were emitted at the wrong position inside the regex literal
To do this:
* Introduce 'Parse' SwiftCompiler module that is a bridging layer
between '_CompilerRegexParser' and C++ libParse
* Move libswiftParseRegexLiteral and libswiftLexRegexLiteral to 'Parse'
Also, this change makes CMake configure 'SwiftCompilerSources/Package.swift'
so it can actually be built with 'swift-build'.
rdar://92187284
The ComputeEffects pass derives escape information for function arguments and adds those effects to the function.
This requires a lot of changes to check-lines in the tests, because the effects are printed in SIL.
As the _MatchingEngine module no longer contains the matching engine, this patch renames this module to describe its role more accurately. Because this module primarily contains the AST and the regex parsing logic, I propose we rename it to "_RegexParser".
Also renames the ExperimentalRegex module in SwiftCompilerSources to _RegexParser for consistency. This would prevent errors if sources in _RegexParser used qualified lookup with the module name.