Instead of caching alias results globally for the module, make AliasAnalysis a FunctionAnalysisBase which caches the alias results per function.
Why?
* So far the result caches could only grow. They were reset when they reached a certain size. This was not ideal. Now, they are invalidated whenever the function changes.
* It was not possible to actually invalidate an alias analysis result. This is required, for example, in TempRValueOpt and TempLValueOpt (so far it was done manually with invalidateInstruction).
* Type-based alias analysis results were also cached for the whole module, although they actually depend on the function: type-based alias analysis depends on the function's resilience expansion. This was a potential bug.
I also added a new PassManager API to directly get a function-based analysis:
getAnalysis(SILFunction *f)
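As a rough sketch of how a function pass might use it (the pass boilerplate
is simplified, MyFunctionPass is a placeholder name, and the exact spelling
of the accessor is an assumption based on the description above):

// Simplified sketch, not the exact upstream code: the point is that the
// alias analysis result is now requested for a specific SILFunction and
// is invalidated whenever that function changes.
class MyFunctionPass : public SILFunctionTransform {
  void run() override {
    SILFunction *f = getFunction();

    // Previously: one module-wide AliasAnalysis whose caches covered
    // every function and were only reset wholesale once they grew too
    // large.
    //   AliasAnalysis *aa = PM->getAnalysis<AliasAnalysis>();

    // Now: the new PassManager entry point returns the analysis cached
    // for exactly this function.
    AliasAnalysis *aa = PM->getAnalysis<AliasAnalysis>(f);

    // ... issue alias / memory-behavior queries against `aa` while
    // optimizing `f`. If the pass later changes `f`, the cached results
    // are invalidated instead of going stale.
    (void)aa;
  }
};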
The second change of this commit is the removal of the instruction-index indirection for the cache keys. Now the cache keys directly work on instruction pointers instead of instruction indices. This reduces the number of hash table lookups for a cache lookup from 3 to 1.
This indirection was needed to avoid dangling instruction pointers in the cache keys. But this is not needed anymore, because of the new delayed instruction deletion mechanism.
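A simplified illustration of the two schemes (the types below are
placeholders for the sketch, not the real cache types):

// Hypothetical, simplified illustration of the cache-key change.
#include "llvm/ADT/DenseMap.h"
#include <utility>

struct SILInstruction {};  // stand-in for the compiler's instruction type
enum class AliasResult { NoAlias, MayAlias, MustAlias };  // placeholder result

// Old scheme: map each instruction to a stable index and key the cache on
// the index pair. A single query costs three hash lookups (two index
// lookups plus the cache lookup); the indirection existed only to avoid
// dangling instruction pointers in the keys.
struct OldCache {
  llvm::DenseMap<SILInstruction *, unsigned> instToIndex;
  llvm::DenseMap<std::pair<unsigned, unsigned>, AliasResult> results;
};

// New scheme: key the cache directly on the instruction pointers, so a
// query is a single lookup. Dangling keys are no longer a concern because
// instructions are now deleted lazily (the delayed instruction deletion
// mechanism), so the pointers stay valid until the results are invalidated.
struct NewCache {
  llvm::DenseMap<std::pair<SILInstruction *, SILInstruction *>, AliasResult>
      results;
};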
At some point, pass definitions were heavily macro-ized, and the
descriptive name of a pass ended up being specified in two places. This
is not only redundant but a source of confusion: you could waste a lot
of time grepping for the wrong string. I removed all the getName()
overrides, which, across roughly 90 passes, added up to a fairly
significant amount of code bloat.
Any pass that we want to be able to invoke by name from a tool
(sil-opt) or pipeline plan *should* have a unique type name, enum
value, command-line string, and name string. I removed a comment about
the various inliner passes that contradicted that.
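The pattern looks roughly like the following X-macro sketch (the entries
and macro shape are illustrative, not copied from the actual pass
definition file): every consumer, such as the pass enum, the command-line
tag, and the descriptive name, is generated from a single per-pass entry,
which is why the per-pass getName() overrides could go away.

// Illustrative X-macro sketch of a single source of truth for pass
// identity. Each pass is declared exactly once with its type/enum name,
// the command-line tag used by sil-opt and pipeline plans, and its
// human-readable name.
#define SIL_PASSES(X)                                               \
  X(SimplifyCFG, "simplify-cfg", "SIL CFG Simplification")          \
  X(SILCombine, "sil-combine", "SIL Instruction Combining")

enum class PassKind {
#define PASS_ENUM(ID, TAG, NAME) ID,
  SIL_PASSES(PASS_ENUM)
#undef PASS_ENUM
};

inline const char *getPassTag(PassKind kind) {
  switch (kind) {
#define PASS_TAG(ID, TAG, NAME) case PassKind::ID: return TAG;
  SIL_PASSES(PASS_TAG)
#undef PASS_TAG
  }
  return "unknown";
}

inline const char *getPassName(PassKind kind) {
  switch (kind) {
#define PASS_NAME(ID, TAG, NAME) case PassKind::ID: return NAME;
  SIL_PASSES(PASS_NAME)
#undef PASS_NAME
  }
  return "unknown";
}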
Side note: We should be consistent with the policy that a pass is
identified by its type. We have a couple passes, LICM and CSE, which
currently violate that convention.
I see some small performance improvements on a few benchmarks, but they
are likely just noise.
The compilation pipeline is currently very epilogue-release friendly, i.e.
we do not move the epilogue release of a function until very late in the
pipeline. Therefore, this global data flow is somewhat overkill. I am going
to change the pass pipeline next so that we can move epilogue releases
freely, and the data flow will become useful.
I do not see a compile time increase.
rdar://26446587
Move the top level driver of the pairing analysis into ARCSequenceOpts
and have ARCSequenceOpts use ARCMatchingSetBuilder directly.
This patch is the first in a series of patches that improve ARC compile
time performance by ensuring that ARC only visits the full CFG at most
one time.
Previously, when ARC was split into an analysis and a pass, the split in
the codebase occurred at the boundary between ARCSequenceOpts and
ARCPairingAnalysis. I used a callback to allow ARCSequenceOpts to inject
code into ARCPairingAnalysis.
Now that the analysis has been moved together with the pass, this
unnecessarily complicates the code. More importantly, it creates
obstacles to reducing compile time by visiting the CFG only once.
Specifically, we need to visit the full CFG once to gather interesting
instructions. Then, when performing the actual dataflow analysis, we only
visit the interesting instructions. This causes an interesting problem:
since retains/releases can have dependencies on each other, I need to be
able to update where the various "interesting instructions" are located
after ARC moves them. The "interesting instruction" information is stored
at the pairing analysis level, but the moving/removal of instructions is
injected in via the callback.
By moving the top level driver part of ARCPairingAnalysis into
ARCSequenceOpts, we simplify the code by eliminating the dependency
injection callback and also make it easier to manage the cached CFG
state in the face of the ARC optimizer moving/removing retains/releases.
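The shape of the change, sketched with invented names (this is not the
actual ARC code, just an illustration of moving from callback injection
to a pass-owned driver loop):

// Hypothetical sketch; class and member names are invented and do not
// match the real ARC sources.
#include <functional>
#include <vector>

struct MatchingSet {};  // stand-in for a matched retain/release set

// Before: the analysis owned the driver loop, and the pass injected its
// transformation through a callback. Keeping the pass's cached
// "interesting instruction" state consistent with what the callback
// moved or deleted was awkward.
struct PairingAnalysisBefore {
  std::vector<MatchingSet> computeMatchingSets() { return {}; }
  void runDriver(const std::function<void(MatchingSet &)> &applyTransform) {
    for (MatchingSet &set : computeMatchingSets())
      applyTransform(set);  // pass code injected into the analysis
  }
};

// After: the pass owns the driver loop and uses the matching-set builder
// directly, so it can update its cached interesting-instruction state
// right where it moves or removes retains/releases.
struct MatchingSetBuilderSketch {
  std::vector<MatchingSet> computeMatchingSets() { return {}; }
};

struct SequenceOptsAfter {
  MatchingSetBuilderSketch builder;
  void transform(MatchingSet &) {}          // move/remove retains/releases
  void updateCachedState(MatchingSet &) {}  // keep cached CFG state valid
  void run() {
    for (MatchingSet &set : builder.computeMatchingSets()) {
      transform(set);
      updateCachedState(set);
    }
  }
};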
Reorganize the optimizer sources into libraries.
It has been generally agreed that we need to do this reorg, and now
seems like the perfect time. Some major pass reorganization is in the
works.
This does not have to be the final word on the matter. The consensus
among those working on the code is that it's much better than what we
had and a better starting point for future bike shedding.
Note that the previous organization was designed to allow separate
analysis and optimization libraries. It turns out this is an
artificial distinction and not an important goal.