If we used a shared value enumerator, consider the case where one of the AA cache or
the MB cache is cleared and we clear the value enumerator along with it.
This could give rise to collisions (false positives) in the not-yet-cleared cache!
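A minimal sketch of the hazard, using made-up names (Node, ValueEnumerator, aaCache,
mbCache) rather than the actual optimizer types:
final class Node {}

final class ValueEnumerator {
    private var nextID = 0
    private var ids: [ObjectIdentifier: Int] = [:]

    func id(of value: AnyObject) -> Int {
        if let existing = ids[ObjectIdentifier(value)] {
            return existing
        }
        let id = nextID
        ids[ObjectIdentifier(value)] = id
        nextID += 1
        return id
    }

    func clear() {
        ids.removeAll()
        nextID = 0
    }
}

let enumerator = ValueEnumerator()
var aaCache: [Int: Bool] = [:]   // alias-analysis answers keyed by value ID
var mbCache: [Int: Bool] = [:]   // memory-behavior answers keyed by the same IDs

let old = Node()
aaCache[enumerator.id(of: old)] = true   // `old` is assigned ID 0

// Clear only the MB cache, but reset the shared enumerator along with it.
mbCache.removeAll()
enumerator.clear()

let fresh = Node()
let freshID = enumerator.id(of: fresh)   // `fresh` is also assigned ID 0
// False positive: the stale AA-cache entry for `old` now answers for `fresh`.
print(aaCache[freshID] == true)          // prints "true"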
Add back a stand-alone devirtualizer pass, running prior to generic
specialization. As with the stand-alone generic specializer pass, this
may add functions to the pass manager's work list.
This is another step in unbundling these passes from the performance
inliner.
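For illustration only (this is not code from the change itself), the kind of call the
devirtualizer resolves is one where the dynamic type of the receiver is statically known:
class Base {
    func f() -> Int { return 1 }
}
final class Derived: Base {
    override func f() -> Int { return 2 }
}

func callsF() -> Int {
    let d: Base = Derived()
    // The dynamic type is statically known to be Derived, so the dynamic
    // dispatch can be replaced with a direct call to Derived.f, which then
    // becomes visible to the specializer and the inliner.
    return d.f()
}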
This exposed the first interesting bug found by using TermKind: in DCE we were
not properly handling switch_enum_addr and checked_cast_addr_br.
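As a rough illustration (a made-up example, not the original test case), a switch over
an address-only enum, e.g. one with a generic payload used in a generic function, is
lowered to switch_enum_addr rather than switch_enum:
enum Box<T> {
    case empty
    case full(T)
}

// In a generic context T is opaque, so Box<T> is address-only and the switch
// below lowers to switch_enum_addr, one of the terminators DCE was mishandling.
func isFull<T>(_ box: Box<T>) -> Bool {
    switch box {
    case .empty: return false
    case .full: return true
    }
}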
SR-335
rdar://23980060
Begin unbundling devirtualization, specialization, and inlining by
recreating the stand-alone generic specializer pass.
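As an illustration of what the pass does (the function below is made up, not from this
change), the specializer clones a generic function for the concrete types it is applied to:
func sum<T: Numeric>(_ values: [T]) -> T {
    var total: T = 0
    for v in values {
        total += v
    }
    return total
}

// This call drives a specialization of sum for T == Int; the call site can then
// be rewritten to invoke the specialized, non-generic copy directly.
func caller() -> Int {
    return sum([1, 2, 3])
}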
I've added a use of the pass to the pipeline, but this is almost
certainly not the final location where it will run. It's primarily there
to ensure this code gets exercised.
Since this runs prior to inlining, it changes the order in which some
functions are specialized, which changes the output order of one of the
tests (a test whose output similarly changed when devirtualization,
specialization, and inlining were bundled together).
Add interfaces and update the pass execution logic to allow function
passes to create new functions, or to ask for functions to be optimized
before continuing.
Doing so halts pass pipeline execution on the current function and
continues with the newly added functions, returning to the previous
function once they have been fully optimized.
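A rough sketch of the intended control flow, with made-up names (Function, PassManager,
addFunctionToWorklist) rather than the real pass manager API:
final class Function {
    let name: String
    init(name: String) { self.name = name }
}

final class PassManager {
    private var worklist: [Function] = []
    private var newFunctions: [Function] = []

    // A function pass calls this when it creates a function (e.g. a
    // specialization) that must be optimized before work continues.
    func addFunctionToWorklist(_ f: Function) {
        newFunctions.append(f)
    }

    func run(on functions: [Function]) {
        worklist = functions
        var i = 0
        while i < worklist.count {
            runFunctionPasses(on: worklist[i])
            if !newFunctions.isEmpty {
                // Halt work on the current function: splice the new
                // functions in ahead of it so they are optimized first,
                // then return to the current function afterwards.
                worklist.insert(contentsOf: newFunctions, at: i)
                newFunctions.removeAll()
                continue
            }
            i += 1
        }
    }

    private func runFunctionPasses(on f: Function) {
        print("optimizing \(f.name)")
    }
}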
This makes it easy to use -sil-verify-all to verify that both types of info are
created correctly and that analyses properly update them. I am going to
use this to harden testing of the loop canonicalizer.
Make it a std::vector that reserves enough space based on the number of
functions in the initial bottom-up ordering.
This is the first step in making it possible for function passes to
notify the pass manager of new functions to process.
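In pseudo-Swift (the real code uses a C++ std::vector of the module's SIL functions),
the idea is simply to size the array up front from the bottom-up order and let later
work append to it; the function name here is hypothetical:
func makeWorklist(from bottomUpOrder: [String]) -> [String] {
    var worklist: [String] = []
    // Reserve enough space for the initial bottom-up ordering; newly created
    // functions can then be appended as passes notify the pass manager.
    worklist.reserveCapacity(bottomUpOrder.count)
    worklist.append(contentsOf: bottomUpOrder)
    return worklist
}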
Make it a bit clearer that we're alternating between collecting (and
then running) function passes and running module passes. This removes
some duplication that was present.
Reapplies 9d4d3c8 with fixes for bisecting pass execution.
I need this for loop-arc, since I need to be able to analyze all "loop-exits"
when I only have the parent loop region. We are already computing this
information and throwing it away, so there should be no compile-time impact.
This enables array value propagation in array-literal loops like:
for e in [2,3,4] {
    r += e
}
allowing us to get rid of the array entirely.
rdar://19958821
SR-203
This reverts commit 82ff59c0b9.
Original commit message:
This allows us to compile the function:
func valueArray() -> Int {
    var a = [1,2,3]
    var r = a[0] + a[1] + a[2]
    return r
}
down to just a return of the value 6, and it should eventually allow us to
remove the overhead of vararg calls.
rdar://19958821