Optionally, a dependency on the initialization of the global can be specified with a dependency token: `depends_on <token>`.
This is usually a `builtin "once"` which calls the initializer for the global variable.
Layers:
- FunctionConvention: AST FunctionType: results, parameters
- ArgumentConventions: SIL function arguments
- ApplyOperandConventions: applied operands
The meaning of an integer index is determined by the collection
type. All mapping between the various indices (results,
parameters, SIL arguments, applied arguments) is confined to the
collection type that owns that mapping. Remove the concept of a
"caller argument index".
By default it lowers the builtin to an `alloc_vector` with a paired `dealloc_stack`.
If the builtin appears in the initializer of a global variable and the vector elements are initialized,
a statically initialized global is created where the initializer is a `vector` instruction.
In regular Swift this is a nice optimization. In Embedded Swift it is a requirement, because the compiler needs to be able to specialize generic deinits of non-copyable types.
The new de-virtualization utilities are called from two places:
* from the new DeinitDevirtualizer pass. It replaces the old MoveOnlyDeinitDevirtualization, which is very basic and does not fulfill the needs of Embedded Swift.
* from MandatoryPerformanceOptimizations for Embedded Swift
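For illustration, a minimal (hypothetical) example of the kind of code this affects: when the concrete non-copyable type is known, the implicit deinit call at the end of the value's lifetime can be devirtualized into a direct call of its deinit.
```
// Hypothetical example: a non-copyable type with a deinit.
struct FileDescriptor: ~Copyable {
    let fd: Int32
    deinit {
        // close(fd) would go here
    }
}

func use() {
    let file = FileDescriptor(fd: 0)
    _ = file.fd   // use the value; its deinit runs at end of scope and
                  // can be devirtualized to a direct call
}
```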
* add `NominalTypeDecl.isResilient`
* make the return type of `Type.getNominalFields` optional and return nil if the nominal type is resilient.
This forces users of this API to think about what to do when the nominal type is resilient.
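A sketch of how a caller might handle the new optional result. The `in function:` parameter and the surrounding types are assumptions based on the description above, not the exact API:
```
// Sketch only: exact signatures in the optimizer sources may differ.
func canOptimize(_ type: Type, in function: Function) -> Bool {
    // `getNominalFields` now returns nil for resilient nominal types,
    // forcing this explicit bail-out.
    guard let fields = type.getNominalFields(in: function) else {
        return false  // resilient type: field layout is unknown here
    }
    return !fields.isEmpty
}
```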
A transformation must not create a `struct` instruction with a non-copyable type, because this would imply that a (potential) deinit would be called for that struct.
Make filter APIs for `UseList` chainable by adding them to `Sequence where Element == Operand`.
For example, it makes it possible to write:
```
let singleUse = value.uses.ignoreDebugUses.ignoreUsers(ofType: EndAccessInst.self).singleUse
```
Also, add `UseList.getSingleUser(notOfType:)`
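A minimal sketch of how such chainable filters can be expressed. The filter predicates here are illustrative; the actual implementations in the optimizer sources may differ:
```
// Sketch only: Operand, Instruction, and DebugValueInst refer to the
// optimizer's Swift-side SIL types.
extension Sequence where Element == Operand {
    // Skip uses whose user is a debug_value instruction.
    var ignoreDebugUses: LazyFilterSequence<Self> {
        self.lazy.filter { !($0.instruction is DebugValueInst) }
    }

    // Skip uses whose user has the given instruction type.
    func ignoreUsers<I: Instruction>(ofType: I.Type) -> LazyFilterSequence<Self> {
        self.lazy.filter { !($0.instruction is I) }
    }
}
```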
When merging stores in a global initializer, it's possible that the merged store is inserted at the wrong location, causing a SIL verifier error.
This is hard to reproduce, but can happen.
The merged store must be inserted _after_ all other stores. Instead, it was inserted after the store of the last declared property. So if the properties are _not_ initialized in the order they are declared, the problem can show up.
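An illustrative (hypothetical) trigger: the stores in the initializer don't follow declaration order, so inserting the merged store after the store of `b` would place it before the store of `a`.
```
// Hypothetical reproducer: stores happen in reverse declaration order.
struct S {
    var a: Int
    var b: Int

    init() {
        b = 2   // declared last, but stored first
        a = 1   // the merged store must be inserted after this store
    }
}

var g = S()  // global whose initializer merges the element stores
```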
rdar://117189962
Introduce two modes of bridging:
* inline mode: this is basically how it worked so far: full C++ interop is used, which allows bridging functions to be inlined.
* pure mode: bridging functions are not inlined but compiled in a cpp file. This reduces the C++ interop requirements to a minimum. No std/llvm/swift headers are imported.
This change requires a major refactoring of the bridging sources. The implementations of bridging functions go into two separate files: SILBridgingImpl.h and OptimizerBridgingImpl.h.
Depending on the mode, those files are either included in the corresponding header files (inline mode) or in the C++ file (pure mode).
The mode can be selected with the `BRIDGING_MODE` CMake variable. By default it is set to the inline mode (= existing behavior). The pure mode is only selected in certain configurations to work around C++ interop issues:
* In debug builds, to work around a problem with LLDB's `po` command (rdar://115770255).
* On Windows, to work around a build problem.
All SILArgument types are "block arguments". There are three kinds:
1. Function arguments
2. Phis
3. Terminator results
In every situation where the source of the block argument matters, we
need to distinguish between these three. Accidentally failing to
handle one of the cases is a perpetual source of compiler
bugs. Attempting to handle both phis and terminator results uniformly
is *always* a bug, especially once OSSA has phi flags. Even when all
cases are handled correctly, the code that deals with data flow across
blocks is incomprehensible without giving each case a type. This
continues to be a massive waste of time literally every time I review
code that involves cross-block control flow.
Unfortunately, we don't have these C++ types yet (nothing big is
blocking that, it just wasn't done). That's manageable because we can
use wrapper types on the Swift side for now. Wrapper types don't
create any more complexity than protocols, but they do sacrifice some
usability in switch cases.
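A minimal sketch of the wrapper-type idea. `isFunctionArgument` and `incomingEdgesAreBranches` are hypothetical helpers, and the classification predicate is simplified; the actual wrappers in the optimizer sources are more elaborate:
```
// Sketch only: `Argument` stands for the Swift-side SILArgument type.
struct Phi {
    let value: Argument

    init?(_ value: Value) {
        // A phi is a block argument that is neither a function argument
        // nor a terminator result: all incoming edges are plain branches.
        guard let arg = value as? Argument,
              !arg.isFunctionArgument,       // hypothetical helper
              arg.incomingEdgesAreBranches   // hypothetical helper
        else { return nil }
        self.value = arg
    }
}
```
With such wrappers, code that handles cross-block data flow can classify a value as exactly one of FunctionArgument, Phi, or TerminatorResult instead of treating all block arguments uniformly.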
There is no reason for a BlockArgument type. First, a function
argument is a block argument just as much as any other. BlockArgument
provides no useful information beyond Argument. And it is nearly
always a mistake to care about whether a value is a function argument
and not care whether it is a phi or terminator result.
For chains of async functions where suspensions can be statically
proven to never be required, this pass removes all suspensions and
turns the functions into synchronous functions.
For example, this function does not actually require any suspensions,
once the correct executor is acquired upon initial entry:
```
func fib(_ n: Int) async -> Int {
  if n <= 1 { return n }
  return await fib(n-1) + fib(n-2)
}
```
So we can turn the above into this for better performance:
```
func fib(_ n: Int) async -> Int {
  return fib_sync(n)
}
func fib_sync(_ n: Int) -> Int {
  if n <= 1 { return n }
  return fib_sync(n-1) + fib_sync(n-2)
}
```
while rewriting callers of `fib` to use the `sync` entry-point
when we can prove that it will be invoked on a compatible executor.
This pass is currently experimental and under development. Thus, it
is disabled by default and you must use
`-enable-experimental-async-demotion` to try it.
It lowers let property accesses of classes.
Lowering consists of two tasks:
* In class initializers, insert `end_init_let_ref` instructions at places where all let-fields are initialized.
This strictly separates the life-range of the class into a region where let fields are still written during
initialization and a region where let fields are truly immutable.
* Add the `[immutable]` flag to all `ref_element_addr` instructions (for let-fields) which are in the "immutable"
region. This includes the region after an inserted `end_init_let_ref` in a class initializer, but also all
let-field accesses in functions other than the initializer and the destructor.
This pass should run after DefiniteInitialization but before RawSILInstLowering (because it relies on `mark_uninitialized` still being present in the class initializer).
Note that it's not mandatory to run this pass. If it doesn't run, SIL is still correct.
Simplified example (after lowering):
```
bb0(%0 : @owned $C):                          // = self of the class initializer
  %1 = mark_uninitialized %0
  %2 = ref_element_addr %1, #C.l              // a let-field
  store %init_value to %2
  %3 = end_init_let_ref %1                    // inserted by lowering
  %4 = ref_element_addr [immutable] %3, #C.l  // set to immutable by lowering
  %5 = load %4
```
This instruction marks the point where all let-fields of a class are initialized.
This is important to ensure the correctness of `ref_element_addr [immutable]` for let-fields,
because in the initializer of a class, its let-fields are not yet immutable.
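For illustration, a hypothetical initializer showing where `end_init_let_ref` would be inserted:
```
final class C {
    let l: Int
    var v: Int

    init() {
        l = 27   // all let-fields are initialized after this store;
                 // end_init_let_ref is inserted here
        v = 0    // stores to var-fields don't affect the insertion point
    }
}
```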
Codegen is the same, but `begin_dealloc_ref` consumes the operand and produces a new SSA value.
This cleanly splits the liferange into the region before the destructor of a class and the region within it.
Fixes rdar://113339972 DeadStoreElimination causes uninitialized closure context
Before this fix, the recently enabled function side effect implementation
would return no side effects for a partial apply that is not applied
in the same function. This resulted in DeadStoreElimination
incorrectly eliminating the initialization of the closure context.
The fix is to model the effects of capturing the arguments for the
closure context. The effects of running the closure body will be
considered later, at the point that the closure is
applied. Running the closure does, however, depend on the captured
values being valid. If the value being captured is addressable,
then we need to model the effect of reading from that memory. In
this case, the capture reads from a local stack slot:
```
%stack = alloc_stack $Klass
store %ref to %stack : $*Klass
%closure = partial_apply [callee_guaranteed] %f(%stack)
  : $@convention(thin) (@in_guaranteed Klass) -> ()
```
Later, when the closure is applied, we won't have any reference back
to the original stack slot. The application may not even happen in a caller.
Note that, even if the closure will be applied in the current
function, the side effects of the application are insufficient to
cover the side effects of the capture. For example, the closure
body itself may not read from an argument, but the context must
still be valid in case it is copied or if the capture itself was
not a bitwise-move.
As an optimization, we ignore the effect of captures for on-stack
partial applies. Such captures are always either a bitwise-move
or, more commonly, capture the source value by address. In these
cases, the side effects of applying the closure are sufficient to
cover the effects of the captures. And, if an on-stack closure is
not invoked in the current function (or passed to a callee) then
it will never be invoked, so the captures never have effects.
For example:
```
var p = Point(x: 10, y: 20)
let o = UnsafePointer(&p)
```
Also support outlined arrays with pointers to other globals. For example:
```
var g1 = 1
var g2 = 2
func f() -> [UnsafePointer<Int>] {
return [UnsafePointer(&g1), UnsafePointer(&g2)]
}
```
Sometimes structs are not stored in one piece, but as individual elements. Merge such individual stores into a single store of the whole struct.
This enables generating a statically initialized global.
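A source-level illustration (hypothetical) of what this enables:
```
struct Point { var x: Int; var y: Int }

// Without merging, the initializer of `g` stores `x` and `y` individually;
// after merging there is one store of the whole `Point` value, which can
// then become a statically initialized global.
var g = Point(x: 10, y: 20)
```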
The new implementation has several benefits compared to the old C++ implementation:
* It is significantly simpler. It optimizes each load separately instead of all at once with bit-field based dataflow.
* It uses alias analysis more accurately, which enables more loads to be optimized.
* It avoids inserting additional copies in OSSA.
The algorithm is a data flow analysis which starts at the original load and searches for preceding stores or loads by following the control flow in the backward direction.
The preceding stores and loads provide the "available values" with which the original load can be replaced.
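A source-level illustration (hypothetical) of an available value making a load redundant:
```
func f(_ p: UnsafeMutablePointer<Int>) -> Int {
    p.pointee = 27     // this store provides the available value
    return p.pointee   // redundant load: can be replaced by 27
}
```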
This makes it available in other optimizations as well.
Also, fix a few problems:
* Use destructure instructions when in OSSA
* Don't split the store if its nominal type has unreferenceable storage
* Rename it to `trySplit` because it's not guaranteed to work
Also, add the counterpart for load instructions: `LoadInst.trySplit()`
`ownership` is a bad name in `LoadInst` because it hides `Value.ownership`.
Therefore, rename it to `loadOwnership`.
Do the same for `ownership` in `StoreInst`, for consistency.
Before this change, if a global variable was required to be statically initialized (e.g. due to the `@_section` attribute), we didn't allow its type to be a struct; only a scalar type worked. This change improves on that by teaching the MandatoryPerformanceOptimizations pass to inline struct initializer calls into the initializers of globals, as long as they are simple enough that we can be sure we don't trigger recursive/infinite inlining.
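A minimal illustration (hypothetical code; the section name is arbitrary, and the attribute may require experimental feature flags) of a global that now works:
```
struct Point { var x: Int; var y: Int }

// `g` must be statically initialized because of @_section. Inlining
// Point.init(x:y:) into the global's initializer makes the value
// computable at compile time.
@_section("__DATA,__mysection")
var g = Point(x: 1, y: 2)
```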
The old C++ pass didn't catch a few cases.
Also:
* The new pass is significantly simpler: it doesn't perform dataflow for _all_ memory locations at once using bitfields, but handles each store separately. (In both implementations there is a complexity limit in place to avoid quadratic complexity)
* The new pass works with OSSA
For `alloc_ref [bare] [stack]` and `global_value [bare]` omit the object header initialization.
The `bare` flag means that the object header is not used.
This was already done for `global_value` with a peephole optimization inside IRGen; now IRGen relies on the SIL `bare` flag instead.
It sets the `[bare]` attribute for `alloc_ref` and `global_value` instructions if their header (reference count and metatype) is not used throughout the lifetime of the object.
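A source-level illustration (hypothetical) of an object that can be marked `[bare]`:
```
final class Node {
    var value = 0
}

func sum() -> Int {
    // `n` never escapes and neither its reference count nor its metatype
    // is used, so an optimized build may produce `alloc_ref [bare] [stack]`
    // and skip the header initialization.
    let n = Node()
    n.value = 42
    return n.value
}
```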