Move the implementation of `SIL.Type.canBeClass` to the AST Type. The SIL Type just calls the AST implementation.
Also rename `SIL.Type.canonicalASTType` -> `SIL.Type.astType`.
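A minimal sketch of the resulting shape (hypothetical, not the actual compiler sources; the `Bool` result type is an assumption):
```
// Hypothetical sketch inside the SIL module: the SIL-level query
// forwards to the AST-level implementation via the renamed
// `astType` accessor. The Bool result type is an assumption.
extension Type {          // SIL.Type
  var canBeClass: Bool {
    astType.canBeClass    // the real implementation lives on the AST Type
  }
}
```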
* `users`: maps a sequence of operands to the sequence of their using instructions
* `users(ofType:)`: like `users`, but only yields users of a given instruction type
* `Value.users`: shorthand for `Value.uses.users` (see the usage sketch below)
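A hedged usage sketch of these APIs inside a hypothetical optimization pass (`value` stands for some SIL `Value`; `StoreInst` is just one possible instruction type):
```
// Visit every instruction that uses `value` as an operand.
for user in value.users {
  print("user: \(user)")
}

// Visit only the users that are StoreInsts, going through the
// operand sequence explicitly.
for store in value.uses.users(ofType: StoreInst.self) {
  print("store: \(store)")
}
```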
`String(describing:)` does a bunch of dynamic casts
that can be pretty slow. Use interpolation instead,
which bypasses them.
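A sketch of the difference (`PassName` is a made-up example type):
```
struct PassName: CustomStringConvertible {
  var description: String { "simplify-cfg" }
}
let pass = PassName()

// Slow: String(describing:) discovers conformances such as
// CustomStringConvertible via dynamic casts at run time.
let slow = String(describing: pass)

// Faster: overload resolution picks the CustomStringConvertible
// variant of appendInterpolation at compile time, bypassing the casts.
let fast = "\(pass)"
print(slow == fast)  // both produce "simplify-cfg"
```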
For `swift-frontend`, this brings the time taken
for type-checking an empty file down from ~100ms
to ~70ms.
For `swift build`, this brings the time taken for
a null build down from ~600ms to ~450ms (the
larger delta is presumably because there's much
more Swift code in `swift-package`).
Briefly, some versions of Span in the standard library violated trivial
lifetimes, and the compilers built at that time simply ignored
dependencies on trivial values. For now, disable trivial dependencies to allow
newer compilers to build against those older standard libraries. This check is
only relevant for ~6 months (until July 2025).
Handle all combinations of nested dependence scopes: access scopes, coroutines,
and borrow scopes.
This is required to enforce `~Escapable` `_read` accessors and `unsafeAddress` addressors.
Fixes rdar://140424699 (Invalid SIL is generated by some passes for certain
@lifetime annotations)
We can assume that memory is already initialized at the point of a 'yield'; a
yield use does not need to invalidate the single-initialization property for
temporary stack allocations.
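Schematically (a hedged SIL sketch, not taken from a real test):
```
%1 = alloc_stack $T
store %0 to %1                    // the single initialization
yield %1, resume bb1, unwind bb2  // %1 remains initialized; not a reinitialization
```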
Look through a call to the `ConvertPointerToPointerArgument` compiler intrinsic
as if it were just a copy of the pointer. At the source level, that's all it is:
treat this like a direct use of the argument `p` rather than the result of the
invisible pointer-conversion call:
```
func foo(_: UnsafeRawPointer)
func bar<T>(p: UnsafePointer<T>) {
  foo(p)
}
```
Canonicalize a `fix_lifetime` of an address to a `load` + `fix_lifetime`:
```
%1 = alloc_stack $T
...
fix_lifetime %1
```
->
```
%1 = alloc_stack $T
...
%2 = load %1
fix_lifetime %2
```
This transformation is done for `alloc_stack` and `store_borrow` (which always has an `alloc_stack` operand).
The benefit of this transformation is that it enables other optimizations, like mem2reg.
This peephole optimization was already done in SILCombine, but it didn't handle `store_borrow`.
That made it a good opportunity to turn it into an instruction simplification.
This is part of fixing regressions when enabling OSSA modules:
rdar://140229560
With OSSA it can more easily happen that the final release is not located immediately before the related `dealloc_stack_ref`.
Therefore, do a more precise check (using escape analysis) of whether any instruction between a release and a `dealloc_stack_ref` might (implicitly) release the allocated object.
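Schematically, the situation the check has to handle (a hedged sketch; `%fn` is an arbitrary callee):
```
strong_release %0       // the final release
apply %fn(%1)           // escape analysis must prove this cannot
                        // (implicitly) release the allocated object
dealloc_stack_ref %0
```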
This is part of fixing regressions when enabling OSSA modules:
rdar://140229560
This is needed after running the SSAUpdater for an existing OSSA value, because the updater can
insert unnecessary phis in the middle of the original liverange, breaking it up into smaller ones:
```
%1 = def_of_owned_value
%2 = begin_borrow %1
...
br bb2(%1)
bb2(%3 : @owned $T): // inserted by SSAUpdater
...
end_borrow %2 // use after end-of-lifetime!
destroy_value %3
```
Running this utility is not needed if the SSAUpdater is used to create a _new_ OSSA liverange.
* Remove dead `load_borrow` instructions (replaces the old peephole optimization in SILCombine)
* If the `load_borrow` is followed by a `copy_value`, combine both into a `load [copy]` (shown schematically below)
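The second case, schematically (illustrative SIL; value numbers are arbitrary):
```
%1 = load_borrow %0
%2 = copy_value %1
end_borrow %1
// uses of %2
```
->
```
%1 = load [copy] %0
// uses of %1
```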
Replace `destructure_tuple` with `tuple_extract` instructions and `destructure_struct` with `struct_extract` instructions.
This canonicalization helps other optimizations, e.g. to CSE `tuple_extract`/`struct_extract`.
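For example, schematically (illustrative SIL; value numbers are arbitrary):
```
(%1, %2) = destructure_tuple %0
```
->
```
%1 = tuple_extract %0, 0
%2 = tuple_extract %0, 1
```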
It hoists `destroy_value` instructions without shrinking an object's lifetime.
This is done if it can be proved that another copy of a value (either in an SSA value or in memory) keeps the referenced object(s) alive until the original position of the `destroy_value`.
```
%1 = copy_value %0
...
last_use_of %0
// other instructions
destroy_value %0 // %1 is still alive here
```
->
```
%1 = copy_value %0
...
last_use_of %0
destroy_value %0
// other instructions
```
The benefit of this optimization is that it can enable copy propagation by moving destroys above deinit barriers and access scopes.
Just don't store the begin instruction.
This led to problems if the "begin" was not actually an instruction but a block argument.
Using the first instruction of that block is not correct if the range ends immediately at that first instruction, e.g.:
```
bb0(%0 : @owned $C):
destroy_value %0
```
It removes a `copy_value` where the source is a guaranteed value, if possible:
```
%1 = copy_value %0 // %0 = a guaranteed value
// uses of %1
destroy_value %1 // borrow scope of %0 is still valid here
```
->
```
// uses of %0
```
This optimization is very similar to the LoadCopyToBorrow optimization.
Therefore I merged both optimizations into a single file and renamed it to "CopyToBorrowOptimization".
Sometimes a deinit function that is imported from another module has shared linkage.
In this case it is important to deserialize the function body; otherwise the SIL would be illegal.
Unfortunately I don't have a test case for this.