Don't allow this optimization to kick in for "inout" args.
The optimization may expose local writes to any aliases of the argument.
I can't prove that this is memory safe.
Erik pointed out this case.
This requires a bit of code motion.
e.g.
1. %Tmp = alloc_stack
2. copy_addr %InArg to [initialization] %Tmp
3. %DataAddr = init_enum_data_addr %OutArg
4. copy_addr %Tmp#1 to [initialization] %DataAddr
becomes
1. %Tmp = alloc_stack
4. %DataAddr = init_enum_data_addr %OutArg
2. copy_addr %InArg to [initialization] %DataAddr
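To illustrate the aliasing concern behind the inout restriction above, here is
a rough C++ analogy (purely illustrative, not SIL and not the actual pass; all
names are made up): forwarding the destination into the temporary's place makes
every intermediate write observable through any alias of the destination.

  #include <cassert>

  struct Pair { int a = 0; int b = 0; };

  Pair G;  // storage that the caller can also observe through an alias

  // 'dst' plays the role of an @inout argument and, in this example, aliases G.
  void update(Pair &dst) {
    // Current (safe) form: build the value in a local temporary first, then
    // copy the finished value into the destination.
    Pair tmp;
    tmp.a = 1;
    assert(G.a == 0);  // the half-built state is not observable via the alias
    tmp.b = 2;
    dst = tmp;
    // If copy forwarding rewrote the writes to go through 'dst' directly, the
    // intermediate state {a = 1, b = 0} would become visible to anything that
    // aliases the destination (here, G).
  }

  int main() {
    update(G);
    assert(G.a == 1 && G.b == 2);
  }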
Fixes at least one regression resulting from '++' removal.
See rdar://23874709 [perf] -Onone Execution Time regression of up-to 19%
-Onone results
| benchmark              | bestbase | bestopt |  delta | %delta | speedup |
| StringWalk             |    33570 |   20967 | -12603 | -37.5% |   1.60x |
| OpenClose              |      446 |     376 |    -70 | -15.7% |   1.19x |
| SmallPT                |    98959 |   83964 | -14995 | -15.2% |   1.18x |
| StrToInt               |    17550 |   16377 |  -1173 |  -6.7% |   1.07x |
| BenchLangCallingCFunc  |      453 |     428 |    -25 |  -5.5% |   1.06x |
| CaptureProp            |    50758 |   48156 |  -2602 |  -5.1% |   1.05x |
| ProtocolDispatch       |     5276 |    5017 |   -259 |  -4.9% |   1.05x |
| Join                   |     1433 |    1372 |    -61 |  -4.3% |   1.04x |
AFAICT, this does not fix any existing bug, but eliminates unverified
assumptions about well-formed SIL, which could be broken by future
optimization.
Forward: The optimization will replace all in-scope uses of the
destination address with the source. With this change we will be sure
not to eliminate writes into a destination address unless the destination
is an AllocStackInst. This hasn't been a problem in practice because
the optimization requires an in-scope deinit of the destination
address, which can't happen on typical address projections.
Backward: The optimization will replace in-scope uses of the source
with the destination. With this change we will be sure not to write
into the destination location prior to the copy unless the destination
is an AllocStackInst. This hasn't been a problem in practice because
the optimization requires the copy to be an initialization of the
address, which can't happen on typical address projections.
This change prevents both optimizations unless there is an obvious guarantee
that any dependency on the destination address will manifest as a
SIL-level dependence on the address producer. For example,
init_enum_data_addr would not qualify because it simply projects an
address within a value that may have other dependencies.
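As a rough sketch of the rule being introduced (hypothetical names, not the
actual pass code): both directions gate the rewrite on the destination address
being produced by a dedicated stack allocation.

  #include <cassert>

  // Hypothetical classification of what produced the destination address.
  enum class AddressProducer {
    AllocStack,        // dedicated local storage; every use is a visible use
                       // of the producer itself
    InitEnumDataAddr,  // merely projects an address inside a larger value
                       // that may have other dependencies
    FunctionArgument   // caller-owned storage that may be aliased
  };

  // Both the forward and backward rewrites are allowed only when every
  // dependency on the destination address must show up as a SIL-level use of
  // the address producer, which we can only guarantee for alloc_stack.
  bool mayRewriteDestination(AddressProducer dest) {
    return dest == AddressProducer::AllocStack;
  }

  int main() {
    assert(mayRewriteDestination(AddressProducer::AllocStack));
    assert(!mayRewriteDestination(AddressProducer::InitEnumDataAddr));
    assert(!mayRewriteDestination(AddressProducer::FunctionArgument));
  }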
Add back a stand-alone devirtualizer pass, running prior to generic
specialization. As with the stand-alone generic specializer pass, this
may add functions to the pass manager's work list.
This is another step in unbundling these passes from the performance
inliner.
This exposed the first interesting bug found by using TermKind: in DCE we were
not properly handling switch_enum_addr and checked_cast_addr_br.
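For context, a simplified sketch of the idea (not the real TermKind definition;
names are illustrative): switching exhaustively over a terminator-kind enum with
no default case lets the compiler flag any unhandled kind, which is how the
missing switch_enum_addr / checked_cast_addr_br handling surfaced.

  #include <cstdio>

  // Simplified stand-in for the real TermKind enumeration.
  enum class TermKind {
    BranchInst,
    CondBranchInst,
    SwitchEnumInst,
    SwitchEnumAddrInst,
    CheckedCastBranchInst,
    CheckedCastAddrBranchInst,
  };

  // With no 'default' case, -Wswitch (or -Werror=switch) reports any
  // terminator kind a pass forgot to handle.
  const char *name(TermKind kind) {
    switch (kind) {
    case TermKind::BranchInst:                return "br";
    case TermKind::CondBranchInst:            return "cond_br";
    case TermKind::SwitchEnumInst:            return "switch_enum";
    case TermKind::SwitchEnumAddrInst:        return "switch_enum_addr";
    case TermKind::CheckedCastBranchInst:     return "checked_cast_br";
    case TermKind::CheckedCastAddrBranchInst: return "checked_cast_addr_br";
    }
    return "<unknown>";
  }

  int main() { std::puts(name(TermKind::SwitchEnumAddrInst)); }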
SR-335
rdar://23980060
After the refactoring, RLE runs in the following phases:
Phase 1. We use an iterative data flow to compute whether there is an
available value at a given point; we do not yet care what the value is.
Phase 2. We compute the actual forwardable value at a given point.
Phase 3. We set up the SILValues for the redundant load elimination.
Phase 4. We perform the redundant load elimination.
Previously, we were computing the availability bit as well as the actual
available value on every iteration of the data flow.
I do not see a compilation time improvement, but this helps us move to a
genset and killset later, as we only need to expand Phase 1 into a few
smaller phases that compute the genset and killset first and then iterate
the data flow until convergence.
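A minimal sketch of the genset/killset formulation that Phase 1 is moving
toward (a generic bit-vector data flow, not the actual RLE code; all names are
illustrative):

  #include <vector>

  // One bit per tracked memory location.
  using BitVector = std::vector<bool>;

  struct Block {
    std::vector<int> preds;  // indices of predecessor blocks
    BitVector gen;           // locations whose value becomes available here
    BitVector kill;          // locations whose value is invalidated here
    BitVector in, out;       // data-flow state, iterated to a fixed point
  };

  // Forward "is a value available here?" analysis:
  //   in[B]  = intersection of out[P] over all predecessors P
  //   out[B] = gen[B] | (in[B] & ~kill[B])
  // Only after this converges do the later phases compute *which* value is
  // available and materialize it (Phases 2-4 above).
  // gen/kill are expected to be sized to numLocations by the caller.
  void solveAvailability(std::vector<Block> &blocks, unsigned numLocations) {
    for (auto &b : blocks) {
      bool isEntry = b.preds.empty();
      b.in.assign(numLocations, false);
      b.out.assign(numLocations, !isEntry);  // optimistic start for non-entry
      if (isEntry)
        b.out = b.gen;
    }
    bool changed = true;
    while (changed) {
      changed = false;
      for (auto &b : blocks) {
        BitVector in(numLocations, !b.preds.empty());
        for (int p : b.preds)
          for (unsigned i = 0; i < numLocations; ++i)
            in[i] = in[i] && blocks[p].out[i];
        BitVector out(numLocations);
        for (unsigned i = 0; i < numLocations; ++i)
          out[i] = b.gen[i] || (in[i] && !b.kill[i]);
        if (in != b.in || out != b.out) {
          b.in = in;
          b.out = out;
          changed = true;
        }
      }
    }
  }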
I verified that we perform the same number of redundant load eliminations
on the stdlib as before the change.
Existing tests ensure correctness.
i.e. multiple different values from predecessors
Previously, RLE was placing the SILArguments and branch edge values itself.
This is probably not as reliable/robust as using the SSAUpdater.
RLE now uses a single SSAUpdater to create the SILArguments; this way,
previously created SILArguments can be reused.
One test is created specifically to check that we do not generate extraneous
SILArguments.
RLE is an iterative data flow, so functions with too many locations may take
a long time for the data flow to converge.
Once we move to a genset and killset for RLE, we should be able to relax this
condition a bit more.
I have observed no difference in the number of redundant loads eliminated on
the stdlib (currently we eliminate 3862 redundant loads).
1. Add some comments regarding how the pass builds and uses the genset and killset.
2. Merge some similar functions.
3. Rename DSEComputeKind to DSEKind.
4. Some other small comment changes.
Begin unbundling devirtualization, specialization, and inlining by
recreating the stand-alone generic specializer pass.
I've added a use of the pass to the pipeline, but this is almost
certainly not its final location. It's primarily there to ensure this
code gets exercised.
Since this runs prior to inlining, it changes the order in which some
functions are specialized, which results in differences in the output
order of one of the tests (one which similarly changed when
devirtualization, specialization, and inlining were bundled together).
This check should not trigger, at least with the current definition of what a convert_function may do.
But it is still a good idea to perform the check, to be on the safe side.
This just runs a transform range over the ArrayRef<SILSuccessor> returned by
getSuccessors(), so one does not always need to call Successor.getBB() when
iterating over successor blocks; instead, the transform range makes that call
for you.
I also updated some loops to use this new SILBasicBlock method to make sure
the code is exercised by tests that are already in tree. All these places
should be functionally the same, albeit a bit cleaner.
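A minimal, self-contained sketch of the idea (generic C++, not the actual
SILBasicBlock/SILSuccessor types): the range applies the projection lazily, so
the loop body sees the successor block pointer directly.

  #include <cstdio>
  #include <vector>

  struct Block;                       // stand-in for SILBasicBlock
  struct Successor { Block *dest; };  // stand-in for SILSuccessor
  struct Block { const char *name; std::vector<Successor> succs; };

  // A tiny lazily-transforming range: iterating yields f(element) instead of
  // the element itself, so callers never write the projection (the analogue
  // of Successor.getBB()) by hand.
  template <typename Iter, typename Fn> struct TransformRange {
    Iter first, last;
    Fn fn;
    struct iterator {
      Iter it;
      const Fn *fn;
      auto operator*() const { return (*fn)(*it); }
      iterator &operator++() { ++it; return *this; }
      bool operator!=(const iterator &o) const { return it != o.it; }
    };
    iterator begin() const { return {first, &fn}; }
    iterator end() const { return {last, &fn}; }
  };

  // The analogue of the new successor-block accessor.
  auto successorBlocks(const Block &b) {
    auto project = [](const Successor &s) { return s.dest; };
    return TransformRange<std::vector<Successor>::const_iterator,
                          decltype(project)>{b.succs.begin(), b.succs.end(),
                                             project};
  }

  int main() {
    Block exitBB{"exit", {}};
    Block entryBB{"entry", {{&exitBB}}};
    for (Block *succ : successorBlocks(entryBB))  // no manual getBB()/.dest
      std::printf("%s -> %s\n", entryBB.name, succ->name);
  }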
This reverts commit 82ff59c0b9.
Original commit message:
This allows us to compile the function:
func valueArray() -> Int {
  var a = [1, 2, 3]
  var r = a[0] + a[1] + a[2]
  return r
}
down to just a return of the value 6. This should eventually allow us to
remove the overhead of vararg calls.
rdar://19958821
Debug variable info may be attached to debug_value, debug_value_addr,
alloc_box, and alloc_stack instructions.
In order to write textual SIL -> SIL testcases that exercise the handling
of debug information by SIL passes, we need to make a couple of additions
to the textual SIL language. In memory, the debug information attached to
SIL instructions references information from the AST. If we want to create
debug info from parsing a textual .sil file, these bits need to be made
explicit.
Performance Notes: This is memory neutral for compilations from Swift
source code, because the variable name is still stored in the AST. For
compilations from textual source, the variable name is stored in tail-
allocated memory following the SIL instruction that introduces the
variable.
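A minimal sketch of the tail-allocation technique mentioned above (plain C++,
not the actual SILInstruction layout; names are illustrative): the name bytes
live immediately after the object, so no separate string allocation is needed.

  #include <cstdio>
  #include <cstring>
  #include <new>

  // Illustrative object that keeps a variable name in memory placed directly
  // after itself ("tail allocation").
  struct VarInfo {
    unsigned nameLength;

    const char *name() const {
      // The characters start right after this object.
      return reinterpret_cast<const char *>(this + 1);
    }

    static VarInfo *create(const char *name) {
      unsigned len = static_cast<unsigned>(std::strlen(name));
      // One allocation covers the object plus the trailing characters.
      void *mem = ::operator new(sizeof(VarInfo) + len + 1);
      auto *info = new (mem) VarInfo{len};
      std::memcpy(reinterpret_cast<char *>(info + 1), name, len + 1);
      return info;
    }

    static void destroy(VarInfo *info) {
      info->~VarInfo();
      ::operator delete(info);
    }
  };

  int main() {
    VarInfo *v = VarInfo::create("myVariable");
    std::printf("%s (%u chars)\n", v->name(), v->nameLength);
    VarInfo::destroy(v);
  }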
<rdar://problem/22707128>