* Reimplement most of the logic in Swift as an Instruction simplification and remove the old code from SILCombine
* Support more cases of existential archetype replacements:
For example:
```
%0 = alloc_stack $any P
%1 = init_existential_addr %0, $T
use %1
```
is transformed to
```
%0 = alloc_stack $T
use %0
```
Also, if the `alloc_stack` is already an opened existential and the concrete type is known,
replace it as well:
```
%0 = metatype $@thick T.Type
%1 = init_existential_metatype %0, $@thick any P.Type
%2 = open_existential_metatype %1 : $@thick any P.Type to $@thick (@opened("X", any P) Self).Type
...
%3 = alloc_stack $@opened("X", any P) Self
use %3
```
is transformed to
```
...
%3 = alloc_stack $T
use %3
```
If an apply uses an existential archetype (`@opened("...")`) and the concrete type is known, replace the existential archetype with the concrete type:
1. in the apply's substitution map
2. in the arguments, e.g. by inserting address casts
For example:
```
%5 = apply %1<@opened("...")>(%2) : <τ_0_0> (τ_0_0) -> ()
```
->
```
%4 = unchecked_addr_cast %2 to $*ConcreteType
%5 = apply %1<ConcreteType>(%4) : <τ_0_0> (τ_0_0) -> ()
```
Replace an `unconditional_checked_cast` to an existential metatype with an `init_existential_metatype` if the source is a conforming type.
Note that `init_existential_metatype` is better than `unconditional_checked_cast` because it does not need to do any runtime casting.
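A minimal sketch of the pattern, assuming a concrete type `T` that conforms to `P` (names hypothetical):
```
%1 = metatype $@thick T.Type
%2 = unconditional_checked_cast %1 to $@thick any P.Type
```
->
```
%1 = metatype $@thick T.Type
%2 = init_existential_metatype %1, $@thick any P.Type
```
Because `T` is statically known to conform to `P`, the cast can never fail, so no runtime check is needed.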
This consists of:
* removing redundant `address_to_pointer`-`pointer_to_address` pairs (see the sketch after this list)
* optimizing an `index_raw_pointer` of a manually computed stride to an `index_addr`
* removing or increasing the alignment based on an `assumeAlignment` builtin
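A minimal sketch of the first case, where an address round-trips through a raw pointer and back (types hypothetical):
```
%1 = address_to_pointer %0 to $Builtin.RawPointer
%2 = pointer_to_address %1 to [strict] $*T
use %2
```
->
```
use %0
```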
This is a big code cleanup but also has some functional differences for the `address_to_pointer`-`pointer_to_address` pair removal:
* It's not done if the resulting SIL would contain a (detectable) use-after-`dealloc_stack` memory lifetime failure (see the sketch below).
* It's not done if `copy_value`s must be inserted or borrow-scopes must be extended to comply with ownership rules (this was the task of the OwnershipRAUWHelper).
Inserting copies is bad anyway.
Extending borrow scopes would only be required if the original lifetime of the pointer extends beyond a borrow scope, which shouldn't happen in safe code. Therefore this is a very rare case which is not worth handling.
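A sketch of the first exception (a hypothetical example): if the `alloc_stack` is deallocated between the two instructions, removing the pair would create a use of `%1` after its `dealloc_stack`:
```
%1 = alloc_stack $T
%2 = address_to_pointer %1 to $Builtin.RawPointer
dealloc_stack %1
%3 = pointer_to_address %2 to [strict] $*T
use %3
```
Replacing `%3` with `%1` here would produce a detectable use-after-`dealloc_stack`, so the simplification bails out.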
`String(describing:)` does a bunch of dynamic casts
that can be pretty slow. Use interpolation instead,
which bypasses them.
For `swift-frontend`, this brings the time taken
for type-checking an empty file down from ~100ms
to ~70ms.
For `swift build`, this brings the time taken for
a null build down from ~600ms to ~450ms (the
larger delta is presumably due to the fact that
there's much more Swift code in `swift-package`).
Canonicalize a `fix_lifetime` of an address to a `load` plus a `fix_lifetime` of the loaded value:
```
%1 = alloc_stack $T
...
fix_lifetime %1
```
->
```
%1 = alloc_stack $T
...
%2 = load %1
fix_lifetime %2
```
This transformation is done for `alloc_stack` and `store_borrow` (which always has an `alloc_stack` operand).
The benefit of this transformation is that it enables other optimizations, like mem2reg.
This peephole optimization was already done in SILCombine, but it didn't handle `store_borrow`;
a good opportunity to turn it into an instruction simplification.
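For the `store_borrow` case, a rough sketch (assuming the stored value can be used directly, since it is already available as an SSA value and no `load` is needed):
```
%1 = alloc_stack $T
%2 = store_borrow %0 to %1
...
fix_lifetime %2
```
->
```
%1 = alloc_stack $T
%2 = store_borrow %0 to %1
...
fix_lifetime %0
```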
This is part of fixing regressions when enabling OSSA modules:
rdar://140229560
* Remove dead `load_borrow` instructions (replaces the old peephole optimization in SILCombine)
* If the `load_borrow` is followed by a `copy_value`, combine both into a `load [copy]` (sketched below)
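A minimal sketch of the second case:
```
%1 = load_borrow %0
%2 = copy_value %1
end_borrow %1
// uses of %2
```
->
```
%1 = load [copy] %0
// uses of %1
```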
It hoists `destroy_value` instructions without shrinking an object's lifetime.
This is done if it can be proved that another copy of a value (either in an SSA value or in memory) keeps the referenced object(s) alive until the original position of the `destroy_value`.
```
%1 = copy_value %0
...
last_use_of %0
// other instructions
destroy_value %0 // %1 is still alive here
```
->
```
%1 = copy_value %0
...
last_use_of %0
destroy_value %0
// other instructions
```
The benefit of this optimization is that it can enable copy-propagation by moving destroys above deinit barriers and access scopes.
It removes a `copy_value` where the source is a guaranteed value, if possible:
```
%1 = copy_value %0 // %0 = a guaranteed value
// uses of %1
destroy_value %1 // borrow scope of %0 is still valid here
```
->
```
// uses of %0
```
This optimization is very similar to the LoadCopyToBorrow optimization.
Therefore I merged both optimizations into a single file and renamed it to "CopyToBorrowOptimization".
Sometimes a deinit function that is imported from another module has shared linkage.
In this case it is important to deserialize the function body; otherwise the SIL would be illegal.
Unfortunately I don't have a test case for this.
The optimization replaces a `load [copy]` with a `load_borrow` if possible.
```
%1 = load [copy] %0
// no writes to %0
destroy_value %1
```
->
```
%1 = load_borrow %0
// no writes to %0
end_borrow %1
```
The new implementation uses alias-analysis (instead of a simple def-use walk), which is much more powerful.
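For example, alias analysis can prove that an interleaved write through an unrelated address does not affect the loaded memory, which a simple def-use walk on `%0` cannot (a hypothetical sketch; `%3` is assumed not to alias `%0`):
```
%1 = load [copy] %0
store %2 to [assign] %3 // alias analysis proves %3 does not alias %0
destroy_value %1
```
->
```
%1 = load_borrow %0
store %2 to [assign] %3
end_borrow %1
```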
rdar://115315849
In Embedded Swift, witness method lookup is done from specialized witness tables.
For this to work, the type of the `witness_method` instruction must be specialized as well.
Otherwise the method call would be done with the wrong parameter conventions (indirect instead of direct).
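A rough sketch of the difference (SIL syntax simplified; `T`, `P` and `foo` are hypothetical):
```
// unspecialized: Self is passed indirectly
%1 = witness_method $T, #P.foo : $@convention(witness_method: P) <τ_0_0 where τ_0_0 : P> (@in_guaranteed τ_0_0) -> ()
```
->
```
// specialized: the concrete Self is passed directly
%1 = witness_method $T, #P.foo : $@convention(witness_method: P) (@guaranteed T) -> ()
```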
As the optimizer uses more and more AST stuff, it's now time to create an "AST" module.
Initially it defines the following AST data structures:
* declarations: `Decl` + derived classes
* `Conformance`
* `SubstitutionMap`
* `Type` and `CanonicalType`
Some of those were already defined in the SIL module and are now moved to the AST module.
This change also cleans up a few things:
* proper definition of `NominalTypeDecl`-related APIs in `SIL.Type`
* rename `ProtocolConformance` to `Conformance`
* use `AST.Type`/`AST.CanonicalType` instead of `BridgedASTType` in SIL and the Optimizer
MandatoryPerformanceOptimizations already did most of the vtable specialization work.
So it makes sense to remove the VTableSpecializerPass completely and do everything in MandatoryPerformanceOptimizations.
* add missing APIs
* bridge the entries as values and not as pointers
* add lookup functions in `Context`
* make `WitnessTable.Entry.Kind` enum cases lowercase