This consists of:
* removing redundant `address_to_pointer`-`pointer_to_address` pairs
* optimizing an `index_raw_pointer` with a manually computed stride to `index_addr`
* removing or increasing the alignment based on an `assumeAlignment` builtin
This is a big code cleanup but also has some functional differences for the `address_to_pointer`-`pointer_to_address` pair removal:
* It's not done if the resulting SIL would contain a (detectable) use-after-`dealloc_stack` memory lifetime failure.
* It's not done if `copy_value`s must be inserted or borrow-scopes must be extended to comply with ownership rules (this was the task of the OwnershipRAUWHelper).
Inserting copies is bad anyway.
Extending borrow-scopes would only be required if the original lifetime of the pointer extends beyond a borrow scope - which shouldn't happen in safe code. Therefore this is a very rare case which is not worth handling.
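For illustration, a minimal sketch of the pair removal (types omitted):
```
%1 = address_to_pointer %0
%2 = pointer_to_address %1
// uses of %2
```
->
```
// uses of %0
```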
`String(describing:)` does a bunch of dynamic casts
that can be pretty slow. Use interpolation instead,
which bypasses them.
For `swift-frontend`, this brings the time taken
for type-checking an empty file down from ~100ms
to ~70ms.
For `swift build`, this brings the time taken for
a null build down from ~600ms to ~450ms (the
larger delta is presumably due to the fact that
there's much more Swift code in `swift-package`).
Canonicalize a `fix_lifetime` of an address to a `load` + `fix_lifetime`:
```
%1 = alloc_stack $T
...
fix_lifetime %1
```
->
```
%1 = alloc_stack $T
...
%2 = load %1
fix_lifetime %2
```
This transformation is done for `alloc_stack` and `store_borrow` (which always has an `alloc_stack` operand).
The benefit of this transformation is that it enables other optimizations, like mem2reg.
This peephole optimization was already done in SILCombine, but it didn't handle `store_borrow`.
This was a good opportunity to turn it into an instruction simplification.
This is part of fixing regressions when enabling OSSA modules:
rdar://140229560
* Remove dead `load_borrow` instructions (replaces the old peephole optimization in SILCombine)
* If the `load_borrow` is followed by a `copy_value`, combine both into a `load [copy]`
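A sketch of the second simplification (types omitted):
```
%1 = load_borrow %0
%2 = copy_value %1
// uses of %2
end_borrow %1
```
->
```
%2 = load [copy] %0
// uses of %2
```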
It hoists `destroy_value` instructions without shrinking an object's lifetime.
This is done if it can be proved that another copy of a value (either in an SSA value or in memory) keeps the referenced object(s) alive until the original position of the `destroy_value`.
```
%1 = copy_value %0
...
last_use_of %0
// other instructions
destroy_value %0 // %1 is still alive here
```
->
```
%1 = copy_value %0
...
last_use_of %0
destroy_value %0
// other instructions
```
The benefit of this optimization is that it can enable copy-propagation by moving destroys above deinit barriers and access scopes.
It removes a `copy_value` where the source is a guaranteed value, if possible:
```
%1 = copy_value %0 // %0 = a guaranteed value
// uses of %1
destroy_value %1 // borrow scope of %0 is still valid here
```
->
```
// uses of %0
```
This optimization is very similar to the LoadCopyToBorrow optimization.
Therefore I merged both optimizations into a single file and renamed it to "CopyToBorrowOptimization".
Sometimes it can happen that a deinit function, which is imported from another module, has shared linkage.
In this case it is important to deserialize the function body; otherwise the resulting SIL would be illegal.
Unfortunately I don't have a test case for this.
The optimization replaces a `load [copy]` with a `load_borrow` if possible.
```
%1 = load [copy] %0
// no writes to %0
destroy_value %1
```
->
```
%1 = load_borrow %0
// no writes to %0
end_borrow %1
```
The new implementation uses alias-analysis (instead of a simple def-use walk), which is much more powerful.
rdar://115315849
In Embedded Swift, witness method lookup is done from specialized witness tables.
For this to work, the type of the `witness_method` instruction must be specialized as well.
Otherwise the method call would be done with wrong parameter conventions (indirect instead of direct).
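A rough sketch of the idea (SIL heavily abbreviated; `P`, `P.foo` and `MyStruct` are hypothetical):
```
// before type specialization: the self parameter is indirect
%1 = witness_method $T, #P.foo : $@convention(witness_method) (@in_guaranteed T) -> ()
```
->
```
// after specialization: the parameter uses a direct convention
%1 = witness_method $MyStruct, #P.foo : $@convention(witness_method) (MyStruct) -> ()
```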
As the optimizer uses more and more AST stuff, it's now time to create an "AST" module.
Initially it defines the following AST data structures:
* declarations: `Decl` + derived classes
* `Conformance`
* `SubstitutionMap`
* `Type` and `CanonicalType`
Some of those were already defined in the SIL module and are now moved to the AST module.
This change also cleans up a few things:
* proper definition of `NominalTypeDecl`-related APIs in `SIL.Type`
* rename `ProtocolConformance` to `Conformance`
* use `AST.Type`/`AST.CanonicalType` instead of `BridgedASTType` in SIL and the Optimizer
MandatoryPerformanceOptimizations already did most of the vtable specialization work.
So it makes sense to remove the VTableSpecializerPass completely and do everything in MandatoryPerformanceOptimizations.
* add missing APIs
* bridge the entries as values and not as pointers
* add lookup functions in `Context`
* make `WitnessTable.Entry.Kind` enum cases lowercase
Make SILLinkage available in SIL as `SIL.Linkage`.
Also, rename the misleading `Function` and `GlobalVariable` API `isAvailableExternally` to `isDefinedExternally`.
The main changes are:
*) Rewrite everything in Swift. So far, parts of memory-behavior analysis were already implemented in Swift. Now everything is done in Swift and lives in `AliasAnalysis.swift`. This is a big code simplification.
*) Support many more instructions in the memory-behavior analysis - especially OSSA instructions, like `begin_borrow`, `end_borrow`, `store_borrow`, `load_borrow`. The computation of `end_borrow` effects is now much more precise. Also, `partial_apply` is now handled more precisely.
*) Simplify and reduce type-based alias analysis (TBAA). The complexity of the old TBAA dates from the days when the language and SIL didn't have strict aliasing and exclusivity rules (e.g. for inout arguments). Now TBAA is only needed for code using unsafe pointers. The new TBAA handles exactly this - and nothing more. Note that TBAA for classes is already done in `AccessBase.isDistinct`.
*) Handle aliasing in `begin_access [modify]` scopes. We already supported truly immutable scopes like `begin_access [read]` or `ref_element_addr [immutable]`. For `begin_access [modify]` we know that there are no other reads or writes to the access-address within the scope (see the sketch after this list).
*) Don't cache memory-behavior results. It turned out that the hit-miss rate was pretty bad (~ 1:7). A cache lookup took about as long as recomputing the memory behavior.
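For illustration, a minimal sketch of the `begin_access [modify]` case (types omitted; assumes the store is a formal access subject to exclusivity checking):
```
%1 = begin_access [modify] %0
%2 = load %1
store %3 to %4   // exclusivity: cannot target %0's memory while the scope is open
%5 = load %1     // so this load is known not to be affected by the store
end_access %1
```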