Type annotations for instruction operands are omitted, e.g.
```
%3 = struct $S(%1, %2)
```
Operand types are redundant anyway and were only used for sanity checking in the SIL parser.
But: operand types _are_ printed if the definition of the operand value has not been printed yet.
This happens:
* if the block with the definition appears after the block where the operand's instruction is located
* if a block or instruction is printed in isolation, e.g. in a debugger
The old behavior can be restored with `-Xllvm -sil-print-types`.
This option is added to many existing test files which check for operand types in their check-lines.
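For comparison, with `-Xllvm -sil-print-types` the same instruction is printed with explicit operand types (assuming, for illustration, that both operands are of type `Int`):
```
%3 = struct $S(%1 : $Int, %2 : $Int)
```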
Until now, argument effects were printed in square brackets before the function name, e.g.
```
sil [escapes !%0.**, !%1, %1.c*.v** => %0.v**] @foo : $@convention(thin) (@guaranteed T) -> @out S {
bb0(%0 : $*S, %1 : @guaranteed $T):
...
```
As more argument effects are added, this format becomes unreadable.
Therefore, print the effects after the opening curly brace, with a separate line for each argument. E.g.
```
sil [ossa] @foo : $@convention(thin) (@guaranteed T) -> @out S {
[%0: noescape **]
[%1: noescape, escape c*.v** => %0.v**]
bb0(%0 : $*S, %1 : @guaranteed $T):
...
```
The ComputeEffects pass derives escape information for function arguments and adds those effects to the function.
This requires many changes to check-lines in the tests, because the effects are printed in the SIL output.
Introduce a new instruction `dealloc_stack_ref` and remove the `stack` flag from `dealloc_ref`.
The `dealloc_ref [stack]` form was confusing, because all it does is mark the deallocation of the stack space for a stack-promoted object.
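For illustration (with a hypothetical stack-promoted class `C`), the old and the new spelling:
```
// old
dealloc_ref [stack] %1 : $C

// new
dealloc_stack_ref %1 : $C
```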
`ref_to_unowned` and similar instructions must be checked to decide whether their operand/result types are treated as "pointers" in escape analysis.
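For illustration, this is the kind of conversion in question (with a hypothetical class type `C`); both the reference operand and the unowned result have to be treated as pointers:
```
%1 = ref_to_unowned %0 : $C to $@sil_unowned C
```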
Fixes a miscompile.
https://bugs.swift.org/browse/SR-15527
rdar://85772200
This is a basic SSA-based liveness algorithm. It should just do what
it says. This change rescues the core logic from the nonsense that's
been hacked in. There are now two APIs:
(1) computeLifetimeBoundary (new) only does lifetime analysis. It is a
new API that always succeeds and provides the last use points so
clients can do the right thing based on each user. I need this for
OSSA utilities.
(2) computeFrontier (old) emulates the original API with a simplified
version of the original logic.
Next steps:
- Replace the old API with a separate utility that computes destroy
insertion points. Completely remove DeadEndBlocks from lifetime
analysis, because it simply has nothing to do with liveness.
- Gradually migrate clients to either the new lifetime API provided
here or the new destroy insertion API to be provided later.
Instead of the complicated logic for finding the end of an object's lifetime, we now use the existing ValueLifetimeAnalysis and StackNesting utilities.
This simplifies the implementation a lot and no longer needs dominator and post-dominator trees.
It's not an NFC because the way proper nesting of stack allocations is ensured is now different.
To correct the stack nesting, the deallocations are moved down (instead of moving the allocations up), as sketched below.
Also, it's now required that there is a final release for the allocation on all paths (which was not the case in some hand-written SIL tests).
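A sketch of the nesting correction (with hypothetical types): the deallocation of the stack-promoted object is moved down below the inner `dealloc_stack` instead of moving an allocation up:
```
// before: deallocations are not properly nested
%1 = alloc_ref [stack] $C
%2 = alloc_stack $T
...
dealloc_ref [stack] %1 : $C
dealloc_stack %2 : $*T

// after: the dealloc_ref is moved below the inner dealloc_stack
%1 = alloc_ref [stack] $C
%2 = alloc_stack $T
...
dealloc_stack %2 : $*T
dealloc_ref [stack] %1 : $C
```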
Computation of dead-end blocks and escape analysis are now only done on demand. This saves compile time in case a function has no alloc_refs at all.
This reduces the amount of SIL generated for array operations significantly.
The generated code should be mostly the same (modulo different inlining decisions).
Strict aliasing only applies to memory operations that use strict
addresses. The optimizer needs to be aware of this flag. Uses of raw
addresses should not have their address substituted with a strict
address.
Also add Builtin.LoadRaw which will be used by raw pointer loads.
This broke the test suite under optimizations with a SIL verifier error: "stack dealloc does
not match most recent stack alloc".
This reverts commit 7a2ca23bc2, reversing
changes made to 4c55e8d7a7.
Unreachable blocks prevented stack promotion in some cases.
Now we use our own post-dominator tree, which ignores unreachable blocks, instead of the standard post-dominator tree provided by PostDominanceAnalysis.
Unreachable blocks (or rather: unreachable sub-graphs) are of no interest because we don't have to insert the dealloc instructions in unreachable blocks anyway.
This is safe because the closure is not allowed to capture the array according
to the documentation of 'withUnsafeMutableBufferPointer', and the current
implementation makes sure that any such capture would observe an empty array by
swapping self with an empty array.
Users will get "almost guaranteed" stack promotion for small arrays by writing
something like:
```
func testStackAllocation(p: Proto) {
  var a = [p, p, p]
  a.withUnsafeMutableBufferPointer {
    let array = $0
    work(array)
  }
}
```
It is "almost guaranteed" because we need to statically be able to tell the size
required for the array (no unspecialized generics) and the total buffer size
must not exceed 1K.
Similarly to how we've always handled parameter types, we
now recursively expand tuples in result types and separately
determine a result convention for each result.
The most important code-generation change here is that
indirect results are now returned separately from each
other and from any direct results. It is generally far
better, when receiving an indirect result, to receive it
as an independent result; the caller is much more likely
to be able to directly receive the result in the address
they want to initialize, rather than having to receive it
in temporary memory and then copy parts of it into the
target.
The most important conceptual change here that clients and
producers of SIL must be aware of is the new distinction
between a SILFunctionType's *parameters* and its *argument
list*. The former is just the formal parameters, derived
purely from the parameter types of the original function;
indirect results are no longer in this list. The latter
includes the indirect result arguments; as always, all
the indirect results strictly precede the parameters.
Apply instructions and entry block arguments follow the
argument list, not the parameter list.
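As a sketch (hypothetical generic function whose formal result type is `(T, Int)`): the indirect result argument precedes the parameter in the entry block, even though the parameter list is just `(Int)`:
```
sil @pair : $@convention(thin) <T> (Int) -> (@out T, Int) {
bb0(%0 : $*T, %1 : $Int):
  ...
```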
A relatively minor change is that there can now be multiple
direct results, each with its own result convention.
This is a minor change because I've chosen to leave
return instructions as taking a single operand and
apply instructions as producing a single result; when
the type describes multiple results, they are implicitly
bound up in a tuple. It might make sense to split these
up and allow e.g. return instructions to take a list
of operands; however, it's not clear what to do on the
caller side, and this would be a major change that can
be separated out from this already over-large patch.
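For example (a sketch with assumed types): a function type with two direct results still returns a single value, a tuple of the results:
```
sil @twoResults : $@convention(thin) () -> (Int, Float) {
bb0:
  ...
  %3 = tuple (%1 : $Int, %2 : $Float)
  return %3 : $(Int, Float)
}
```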
Unsurprisingly, the most invasive changes here are in
SILGen; this requires substantial reworking of both call
emission and reabstraction. It also proved important
to switch several SILGen operations over to work with
RValue instead of ManagedValue, since otherwise they
would be forced to spuriously "implode" buffers.
Having a separate address and container value returned from alloc_stack is not really needed in SIL.
Even if they differ, we have both addresses available during IRGen, because a dealloc_stack is always dominated by the corresponding alloc_stack in the same function.
Although this commit is quite large, most changes are trivial. The largest non-trivial change is in IRGenSIL.
This commit is an NFC regarding the generated code. Even the generated SIL is the same (except the removed #0, #1 and @local_storage).
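A sketch of the difference (with a hypothetical type `T`):
```
// old: two results, referenced as %1#0 (container) and %1#1 (address)
%1 = alloc_stack $T
store %0 to %1#1 : $*T
dealloc_stack %1#0 : $*@local_storage T

// new: a single address result
%1 = alloc_stack $T
store %0 to %1 : $*T
dealloc_stack %1 : $*T
```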