Instead of only checking the return block, we could potentially check
its predecessors, its predecessors' predecessors, and so on.
Also put in a threshold to throttle this walk and make sure it stays cheap.
We are still only able to find a small number of epilogue retains.
The bail on MayDecrement is blocking many of the opportunities.
This should bring us closer to being able to handle Walsh.
This is part of rdar://24022375.
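As an illustrative sketch (block and value names are made up), the epilogue
retain we want to recognize often sits in a predecessor of the return block
rather than in the return block itself, which is what this predecessor walk
is meant to catch:

bb1:
  strong_retain %0 : $C      // epilogue retain, but not in the return block
  br bb3
bb2:
  strong_retain %0 : $C
  br bb3
bb3:                         // the return block
  return %0 : $C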
Similarly to how we've always handled parameter types, we
now recursively expand tuples in result types and separately
determine a result convention for each result.
The most important code-generation change here is that
indirect results are now returned separately from each
other and from any direct results. It is generally far
better, when receiving an indirect result, to receive it
as an independent result; the caller is much more likely
to be able to directly receive the result in the address
they want to initialize, rather than having to receive it
in temporary memory and then copy parts of it into the
target.
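As a rough sketch of the lowering change (the signatures below are purely
illustrative), a function whose formal result is a tuple of two address-only
values used to get a single tuple-typed indirect result and now gets one
indirect result per element:

// before: one indirect result of tuple type
sil @f : $@convention(thin) <T, U> (@in T, @in U) -> @out (T, U)

// after: each tuple element becomes an independent indirect result
sil @f : $@convention(thin) <T, U> (@in T, @in U) -> (@out T, @out U)

A caller that wants to initialize two separate locations can now pass both
addresses directly instead of allocating a temporary tuple and copying its
elements out.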
The most important conceptual change here that clients and
producers of SIL must be aware of is the new distinction
between a SILFunctionType's *parameters* and its *argument
list*. The former is just the formal parameters, derived
purely from the parameter types of the original function;
indirect results are no longer in this list. The latter
includes the indirect result arguments; as always, all
the indirect results strictly precede the parameters.
Apply instructions and entry block arguments follow the
argument list, not the parameter list.
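A hedged example of the distinction (value names and substitutions are made
up): for a type like $@convention(thin) <T, U> (@in T) -> @out U, the
parameter list is just (@in T), but the argument list used by an apply and by
the callee's entry block starts with the indirect result address:

%r = apply %f<A, B>(%outAddr, %inAddr) : $@convention(thin) <T, U> (@in T) -> @out U

and in the callee:

bb0(%0 : $*U, %1 : $*T):     // indirect result address first, then the parameter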
A relatively minor change is that there can now be multiple
direct results, each with its own result convention.
This is a minor change because I've chosen to leave
return instructions as taking a single operand and
apply instructions as producing a single result; when
the type describes multiple results, they are implicitly
bound up in a tuple. It might make sense to split these
up and allow e.g. return instructions to take a list
of operands; however, it's not clear what to do on the
caller side, and this would be a major change that can
be separated out from this already over-large patch.
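For instance (a sketch with made-up values), a function whose type lists the
direct results (Int, C) still has its return written with a single operand;
the two results are implicitly bundled as a tuple:

%t = tuple (%i : $Int, %c : $C)
return %t : $(Int, C)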
Unsurprisingly, the most invasive changes here are in
SILGen; this requires substantial reworking of both call
emission and reabstraction. It also proved important
to switch several SILGen operations over to work with
RValue instead of ManagedValue, since otherwise they
would be forced to spuriously "implode" buffers.
If a value is returned as @owned, we can move the epilogue retain
to the caller and convert the return value to @unowned. This gives
the ARC optimizer more freedom to optimize the retain away on the
caller's side.
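A hedged before/after sketch of the idea (function and type names are made up):

// before: the callee performs the epilogue retain and returns at +1
sil @getter : $@convention(thin) (@guaranteed C) -> @owned X {
bb0(%0 : $C):
  ...
  strong_retain %x : $X            // epilogue retain in the callee
  return %x : $X
}

// after: the result convention is weakened and the retain moves to the caller
%fn = function_ref @getter : $@convention(thin) (@guaranteed C) -> X
%r = apply %fn(%c) : $@convention(thin) (@guaranteed C) -> X
strong_retain %r : $X              // caller-side retain, now exposed to the ARC optimizer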
It appears that epilogue retains are harder to find than epilogue
releases. Most of the time they are not in the return block.
(1) Sometimes they are in predecessors of the return block.
(2) Sometimes they come from a call which returns an @owned value (see the
    sketch after this list). This should improve once we fix (1) and go bottom-up.
(3) We do not handle exploded retain_value.
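A sketch of case (2) (names are made up): the +1 comes from the callee's @owned
result convention, so there is no explicit strong_retain left in the epilogue
for the matcher to find:

%1 = apply %f() : $@convention(thin) () -> @owned C
return %1 : $C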
Currently, this catches a small number of opportunities.
We probably need to improve the epilogue retain matcher if we are to handle
more cases.
This is part of rdar://24022375.
We also need some refactoring in the pass, e.g. breaking functions into
smaller functions. I will do that in a subsequent commit.
This shaves off ~0.5 seconds from ARC when compiling the stdlib on my machine.
I wired up the cache to the delete notification trigger, so we are still
memory safe.
This is similar to, and yet different from, the epilogue release matcher,
particularly in how the retain is found and when to bail. Therefore this is
put into a different class than ConsumedArgToEpilogueReleaseMatcher.
This is currently an NFC other than some basic testing using the epilogue dumper.
Once we have all the epilogue releases, make sure they cover all the non-trivial
parts of the base. Otherwise, treat it as if we've found no releases for the base.
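An illustrative sketch (the type and its fields are made up): if the base is a
struct with two non-trivial fields a and b of class type C and we only find a
release for one of them, the releases do not cover the whole base and are dropped:

%a = struct_extract %0 : $TwoFields, #TwoFields.a
strong_release %a : $C
// no release found for the other non-trivial field #TwoFields.b,
// so %0 is treated as having no epilogue releases at all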
Currently, this is an NFC other than the epilogue dumper. I will wire it up with
the function signature optimization in the next commit.
This is part of rdar://22380547
So instead of only being able to match %1 with release %1 in (1), we
can also match %1 with (release %2 and release %3, i.e. an exploded
release_value) in (2).
(1)
foo(%1)
strong_release %1
(2)
foo(%1)
%2 = struct_extract %1, field_a
%3 = struct_extract %1, field_b
strong_release %2
strong_release %3
This will allow the function signature optimization to better move the release
instructions to the callers.
Currently, this is an NFC other than testing using the epilogue match dumper.
This is done by splitting the transformation into an analysis phase and a transformation phase (which does not use the dominator tree anymore).
The dominator tree is recalculated once after the whole function is processed.
This change eventually solves the compile time problem of rdar://problem/24410167.
Allow function passes to:
1. Add new functions, to be optimized before continuing with the current
function.
2. Restart the pipeline on the current function after the current pass
completes.
This makes it possible to fully optimize callees that are the result of
specialization prior to generating interprocedural information or making
inlining choices about these callees.
It also allows us to solve a phase-ordering issue we have with generic
specialization, devirtualization, and inlining, by rescheduling the
current function after changes happen in one of these passes as opposed
to running all of these as part of the inlining pass as happens today.
Currently this is NFC since we have no passes that use this
functionality.
This patch also implements, in the new projection, some of the functions used by
RLE and DSE that exist in the old projection but were still missing.
The new projection provides better memory usage; eventually we will phase out the
old projection code.
The new projection is now copyable, i.e. we have a proper constructor for it. This
helps make the code more readable.
We do see a slight increase in compilation time when compiling the stdlib with -O.
This is a result of the way we now get the types of a projection path, but I expect
it to go down (or away) with further improvements to how memory locations are
constructed and cached in later patches.
=== With the OLD Projection. ===
Total amount of memory allocated.
--------------------------------
Bytes Used Count Symbol Name
13032.01 MB 50.6% 2158819 swift::SILPassManager::runPassesOnFunction(llvm::ArrayRef<swift::SILFunctionTransform*>, swift::SILFunction*)
2879.70 MB 11.1% 3076018 (anonymous namespace)::ARCSequenceOpts::run()
2663.68 MB 10.3% 1375465 (anonymous namespace)::RedundantLoadElimination::run()
1534.35 MB 5.9% 5067928 (anonymous namespace)::SimplifyCFGPass::run()
1278.09 MB 4.9% 576714 (anonymous namespace)::SILCombine::run()
1052.68 MB 4.0% 935809 (anonymous namespace)::DeadStoreElimination::run()
771.75 MB 2.9% 1677391 (anonymous namespace)::SILCSE::run()
715.07 MB 2.7% 4198193 (anonymous namespace)::GenericSpecializer::run()
434.87 MB 1.6% 652701 (anonymous namespace)::SILSROA::run()
402.99 MB 1.5% 658563 (anonymous namespace)::SILCodeMotion::run()
341.13 MB 1.3% 962459 (anonymous namespace)::DCE::run()
279.48 MB 1.0% 415031 (anonymous namespace)::StackPromotion::run()
Compilation time breakdown.
--------------------------
Running Time Self (ms) Symbol Name
25716.0ms 35.8% 0.0 swift::runSILOptimizationPasses(swift::SILModule&)
25513.0ms 35.5% 0.0 swift::SILPassManager::runOneIteration()
20666.0ms 28.8% 24.0 swift::SILPassManager::runFunctionPasses(llvm::ArrayRef<swift::SILFunctionTransform*>)
19664.0ms 27.4% 77.0 swift::SILPassManager::runPassesOnFunction(llvm::ArrayRef<swift::SILFunctionTransform*>, swift::SILFunction*)
3272.0ms 4.5% 12.0 (anonymous namespace)::SimplifyCFGPass::run()
3266.0ms 4.5% 7.0 (anonymous namespace)::ARCSequenceOpts::run()
2608.0ms 3.6% 5.0 (anonymous namespace)::SILCombine::run()
2089.0ms 2.9% 104.0 (anonymous namespace)::SILCSE::run()
1929.0ms 2.7% 47.0 (anonymous namespace)::RedundantLoadElimination::run()
1280.0ms 1.7% 14.0 (anonymous namespace)::GenericSpecializer::run()
1010.0ms 1.4% 45.0 (anonymous namespace)::DeadStoreElimination::run()
966.0ms 1.3% 191.0 (anonymous namespace)::DCE::run()
496.0ms 0.6% 6.0 (anonymous namespace)::SILCodeMotion::run()
=== With the NEW Projection. ===
Total amount of memory allocated.
--------------------------------
Bytes Used Count Symbol Name
11876.64 MB 48.4% 22112349 swift::SILPassManager::runPassesOnFunction(llvm::ArrayRef<swift::SILFunctionTransform*>, swift::SILFunction*)
2887.22 MB 11.8% 3079485 (anonymous namespace)::ARCSequenceOpts::run()
1820.89 MB 7.4% 1877674 (anonymous namespace)::RedundantLoadElimination::run()
1533.16 MB 6.2% 5073310 (anonymous namespace)::SimplifyCFGPass::run()
1282.86 MB 5.2% 577024 (anonymous namespace)::SILCombine::run()
772.21 MB 3.1% 1679154 (anonymous namespace)::SILCSE::run()
721.69 MB 2.9% 936958 (anonymous namespace)::DeadStoreElimination::run()
715.08 MB 2.9% 4196263 (anonymous namespace)::GenericSpecializer::run()
Compilation time breakdown.
--------------------------
Running Time Self (ms) Symbol Name
25137.0ms 37.3% 0.0 swift::runSILOptimizationPasses(swift::SILModule&)
24939.0ms 37.0% 0.0 swift::SILPassManager::runOneIteration()
20226.0ms 30.0% 29.0 swift::SILPassManager::runFunctionPasses(llvm::ArrayRef<swift::SILFunctionTransform*>)
19241.0ms 28.5% 83.0 swift::SILPassManager::runPassesOnFunction(llvm::ArrayRef<swift::SILFunctionTransform*>, swift::SILFunction*)
3214.0ms 4.7% 10.0 (anonymous namespace)::SimplifyCFGPass::run()
3005.0ms 4.4% 14.0 (anonymous namespace)::ARCSequenceOpts::run()
2438.0ms 3.6% 7.0 (anonymous namespace)::SILCombine::run()
2217.0ms 3.2% 54.0 (anonymous namespace)::RedundantLoadElimination::run()
2212.0ms 3.2% 131.0 (anonymous namespace)::SILCSE::run()
1195.0ms 1.7% 11.0 (anonymous namespace)::GenericSpecializer::run()
1168.0ms 1.7% 39.0 (anonymous namespace)::DeadStoreElimination::run()
853.0ms 1.2% 150.0 (anonymous namespace)::DCE::run()
499.0ms 0.7% 7.0 (anonymous namespace)::SILCodeMotion::run()
SILValue.h/.cpp just defines the SIL base classes. Referring to specific instructions is a (small) kind of layering violation.
Also I want to keep SILValue small so that it is really just a type alias of ValueBase*.
NFC.
As there are no instructions left which produce multiple result values, this is an
NFC regarding the generated SIL and generated code.
Although this commit is large, most changes are straightforward adaptations to the
changes in the ValueBase and SILValue classes.
And use project_box to get to the address value.
SILGen now generates a project_box for each alloc_box.
And IRGen re-uses the address value from the alloc_box if the operand of project_box is an alloc_box.
This lets the generated code be the same as before.
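A rough sketch of the new pattern (exact textual type of the box aside):

%box = alloc_box $Int
%addr = project_box %box : $@box Int   // the address formerly obtained directly from the alloc_box
store %1 to %addr : $*Int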
Other than that most changes of this (quite large) commit are straightforward.
In all of the cases where this is being used, we already immediately perform an
unreachable if we find a TermKind::Invalid. So simplify the code and move it
into the conversion switch itself.
This reverts commit 81e7bdfe1b.
This is not true in non-loop canonicalized SIL. It is true in loop-canonicalized
SIL though. So I need to fix the test to avoid the assert.
Correct format:
```
//===--- Name of file - Description ----------------------------*- Lang -*-===//
```
Notes:
* Comment line should be exactly 80 chars.
* Padding: Pad with dashes after "Description" to reach 80 chars.
* "Name of file", "Description" and "Lang" are all optional.
* In case of missing "Lang": drop the "-*-" markers.
* In case of missing space: drop one, two or three dashes before "Name of file".
This allows the RCIdentityAnalysis to be tested independently of other
passes.
Also add some initial tests for RCIdentity. I am stepping through "strip by
strip" but I did not have time to finish the coverage.