Support for @noescape SILFunctionTypes.
These are the underlying SIL changes necessary to implement the new
closure capture ABI.
Note: This includes a change to function name mangling that
primarily affects reabstraction thunks.
The new ABI will allow stack allocation of non-escaping closures as a
simple optimization.
The new ABI, and the stack allocation optimization, also require
closure context to be @guaranteed. That will be implemented as the
next step.
Many SIL passes pattern match partial_apply sequences. These all
needed to be fixed to handle the convert_function that SILGen now
emits. The conversion is now needed whenever a function declaration,
which has an escaping type, is passed to a @noescape argument.
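For illustration, a small Swift example of the pattern that now requires such a conversion (names here are hypothetical):
func printInt(_ x: Int) {
  print(x)
}

// 'body' is non-escaping by default; passing the function declaration
// 'printInt', whose lowered type is escaping, requires a function
// conversion, which SILGen now represents with convert_function.
func forEachValue(_ values: [Int], _ body: (Int) -> Void) {
  for value in values {
    body(value)
  }
}

forEachValue([1, 2, 3], printInt)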
In addition to supporting new SIL patterns, some optimizations like
inlining and SIL combine are now stronger, which could perturb some
benchmark results.
These underlying SIL changes should be merged now to avoid conflicting
with other work. Minor benchmark discrepancies can be investigated as part of
the stack-allocation work.
* Add a noescape attribute to SILFunctionType.
And set this attribute correctly when lowering formal function types to SILFunctionTypes based on @escaping.
This will allow stack allocation of closures, and unblock a related ABI change.
* Flip the polarity on @noescape on SILFunctionType and clarify that
we don't default it.
* Emit withoutActuallyEscaping using a convert_function instruction (a short usage sketch follows this list).
It might be better to use a specialized instruction here, but I'll leave that up to Andy.
Andy: And I'll leave that to Arnold who is implementing SIL support for guaranteed ownership of thick function types.
* Fix SILGen and SIL Parsing.
* Fix the LoadableByAddress pass.
* Fix ClosureSpecializer.
* Fix performance inliner constant propagation.
* Fix the PartialApplyCombiner.
* Adjust SILFunctionType for thunks.
* Add mangling for @noescape/@escaping.
* Fix test cases for @noescape attribute, mangling, convert_function, etc.
* Fix exclusivity test cases.
* Fix AccessEnforcement.
* Fix SILCombine of convert_function -> apply.
* Fix ObjC bridging thunks.
* Various MandatoryInlining fixes.
* Fix SILCombine optimizeApplyOfConvertFunction.
* Fix more test cases after merging (again).
* Fix ClosureSpecializer. Handle convert_function cloning.
Be conservative when combining convert_function. Most of our code doesn't know
how to deal with function type mismatches yet.
* Fix MandatoryInlining.
Be conservative with function conversion. The inliner does not yet know how to
cast arguments or convert between throwing forms.
* Fix PartialApplyCombiner.
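For the withoutActuallyEscaping change above, a minimal Swift usage sketch (hypothetical names):
// 'operation' is non-escaping; to pass it where an escaping closure is
// expected, we promise that it does not actually escape. SILGen now
// models the type change with a convert_function instruction.
func applyTwice(_ operation: () -> Void) {
  withoutActuallyEscaping(operation) { escapableOperation in
    let operations = [escapableOperation, escapableOperation]
    operations.forEach { $0() }
  }
}

applyTwice { print("hello") }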
Introduce a common superclass, SILNode.
This is in preparation for allowing instructions to have multiple
results. It is also a somewhat more elegant representation for
instructions that have zero results. Instructions that are known
to have exactly one result inherit from a class, SingleValueInstruction,
that subclasses both ValueBase and SILInstruction. Some care must be
taken when working with SILNode pointers and testing for equality;
please see the comment on SILNode for more information.
A number of SIL passes needed to be updated in order to handle this
new distinction between SIL values and SIL instructions.
Note that the SIL parser is now stricter about not trying to assign
a result value from an instruction (like 'return' or 'strong_retain')
that does not produce any.
This patch implements collection and dumping of statistics about SILModules, SILFunctions and memory consumption during the execution of SIL optimization pipelines.
The following statistics can be collected:
* For SILFunctions: the number of SIL basic blocks, the number of SIL instructions, the number of SIL instructions of a specific kind, duration of a pass
* For SILModules: the number of SIL basic blocks, the number of SIL instructions, the number of SIL instructions of a specific kind, the number of SILFunctions, the amount of memory used by the compiler, duration of a pass
By default, any collection of statistics is disabled to avoid affecting compile times.
One can enable the collection of statistics and dumping of these statistics for the whole SILModule and/or for SILFunctions.
To reduce the amount of produced data, one can set thresholds so that changes in the statistics are only reported if the delta between the old and the new values is at least X%. The deltas are computed using the following formula:
Delta = (NewValue - OldValue) / OldValue
For example, if a function's instruction count grows from 200 to 260, the delta is (260 - 200) / 200 = 30%, so the change is reported with a 25% threshold but not with a 40% threshold.
Thresholds provide a simple way to filter the collected statistics during compilation. But if there is a need for a more complex analysis of the collected data (e.g. aggregation by a pipeline stage or by the type of a transformation), it is often better to dump as much data as possible into a file using e.g. -sil-stats-dump-all -sil-stats-modules -sil-stats-functions, then use the helper scripts to store the collected data in a database and perform complex queries on it. Many kinds of analysis can then be formulated pretty easily as SQL queries.
This patch fixes a number of issues:
The analysis was using EpilogueARCContext as a temporary when computing. This is
a performance problem since EpilogueARCContext contains all of the memory used
in the analysis. So essentially, we were mallocing tons of memory every time we
missed the analysis cache. This patch changes the pass to instead have a single
EpilogueARCContext whose internal state is cleared between invocations. Since
the data structures (see below) used after this patch do not shrink memory after
being cleared, this should cause us to have far less memory churn.
The analysis was managing its block state data structure by allocating the
individual block state structs using a BumpPtrAllocator/DenseMap stored in
EpilogueARCContext. The individual state structures were allocated from the
BumpPtrAllocator and the DenseMap then mapped a specific SILBasicBlock to its
State data structure. Ignoring that we were mallocing this memory every time we
computed rather than reusing global state, this pessimizes performance on small
functions significantly. This is because the BumpPtrAllocator initially heap-allocates
a page by default and the DenseMap initially mallocs a 64-entry hash table.
Thus, for a one-block function, we would be allocating a large amount of
memory that is just unneeded.
Instead this patch changes the analysis to use a std::vector in combination with
PostOrderFunctionInfo to manage the per block state. The way this works is that
PostOrderFunctionInfo already contains a map from a SILBasicBlock to its post
order number. So, when we are allocating memory for each block, we visit the CFG
in post order. Thus we know that each block's state will be stored in the vector
at vector[post order number].
This has a number of nice effects:
1. By eliminating the need for the DenseMap, in large test cases, we are
significantly reducing the memory overhead (by 24 bytes per basic block, assuming
8-byte pointers).
2. We will use far less memory when applying this analysis to small functions.
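For illustration, a minimal Swift sketch of the vector-indexed-by-post-order-number scheme (hypothetical names; the actual implementation uses a C++ std::vector together with PostOrderFunctionInfo):
struct BlockState {
  var hasEpilogueRelease = false
}

// Blocks are laid out in post order, so the state for the block with
// post-order number n lives at blockStates[n]; no DenseMap is needed.
func makeBlockStates(blockCountInPostOrder: Int) -> [BlockState] {
  return [BlockState](repeating: BlockState(), count: blockCountInPostOrder)
}

var blockStates = makeBlockStates(blockCountInPostOrder: 3)
blockStates[0].hasEpilogueRelease = true // update the state of the block numbered 0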
rdar://33841629
Commonly when an analysis uses subanalyses, we eagerly create the sub function
info when constructing the main function info. This is not always necessary, and,
when the subanalyses do work in their constructors, it can be actively harmful if
the parent analysis is never invoked.
This utility class solves this problem by providing a very easy way to perform a
delayed call to the sub-analysis to get the sub-function info, combined with a
cache so that after the LazyFunctionInfo is used once, we do not re-query the
DenseMap in the sub-analysis unnecessarily.
An example of where this can happen is in EpilogueARCAnalysis in combination
with PostOrderFunctionInfo. PostOrderFunctionInfo eagerly creates a new post
order. So, if we were to eagerly create the PostOrderFunctionInfo (the
sub-functioninfo) when we created an EpilogueARCFunctionInfo, we would be
creating a post order even if we never actually invoke EpilogueARCFunctionInfo.
By using the maybeGet API on FunctionAnalysisBase instead of get, we stop
EpilogueARCAnalysis from building itself, only to then invalidate itself, when it
does not yet exist.
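A minimal sketch of the lazy-plus-cache idea, written here in Swift with hypothetical names (the real utility is a C++ class in the SIL analysis infrastructure):
// Defers the sub-analysis query until first use and caches the result,
// so the sub-analysis does no work unless it is actually needed.
struct LazySubInfo<Info> {
  private let compute: () -> Info
  private var cached: Info?

  init(_ compute: @escaping () -> Info) {
    self.compute = compute
  }

  mutating func get() -> Info {
    if let info = cached {
      return info
    }
    let info = compute()
    cached = info
    return info
  }
}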
rdar://33841629
Today, if one wants to invalidate state relative to one's own function analysis,
one has to use FunctionAnalysisBase::get() to get the analysis. The problem
here is that if the analysis does not exist yet, then one is actually creating
the analysis. This is an issue when one wants to perform an action on an
analysis only if the analysis has already been built. An example of such a
situation is when one is processing a delete notification. If one does not have
an analysis for a function, one should just do nothing.
I am going to use this to fix a delete notification problem in
EpilogueARCAnalysis.
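A sketch of the difference in behavior, again in Swift with hypothetical names (the real API lives on the C++ FunctionAnalysisBase class):
final class AnalysisCache<Info> {
  private var storage: [String: Info] = [:]
  private let build: (String) -> Info

  init(build: @escaping (String) -> Info) {
    self.build = build
  }

  // get builds the info on a cache miss...
  func get(_ functionName: String) -> Info {
    if let info = storage[functionName] {
      return info
    }
    let info = build(functionName)
    storage[functionName] = info
    return info
  }

  // ...while maybeGet returns nil instead, which is what a delete
  // notification handler wants: do nothing if nothing was built.
  func maybeGet(_ functionName: String) -> Info? {
    return storage[functionName]
  }
}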
rdar://33841629
Otherwise, every time we include Analysis.h, we will try to include those other
files even if we have already included Analysis.h. This can increase compile
time.
rdar://33841629
Make the static enforcement of accesses in noescape closures stored-property
sensitive. This will relax the existing enforcement so that the following is
not diagnosed:
struct MyStruct {
  var x = X()
  var y = Y()
  mutating func foo() {
    x.mutatesAndTakesClosure() {
      _ = y.read() // no-warning
    }
  }
}
To do this, update the access summary analysis to summarize accesses to
subpaths of a capture.
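By contrast, an access in the closure that overlaps the stored property being mutated would still be diagnosed. A self-contained, hypothetical Swift example analogous to the one above:
struct Counter {
  var value = 0
  mutating func mutateWith(_ body: () -> Void) {
    value += 1
    body()
  }
  func read() -> Int { return value }
}

struct Wrapper {
  var x = Counter()
  mutating func bar() {
    x.mutateWith {
      _ = x.read() // still diagnosed: overlapping accesses to 'x'
    }
  }
}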
rdar://problem/32987932
Make the static enforcement of accesses in noescape closures stored-property
sensitive. This will relax the existing enforcement so that the following is
not diagnosed:
struct MyStruct {
  var x = X()
  var y = Y()
  mutating func foo() {
    x.mutatesAndTakesClosure() {
      _ = y.read()
    }
  }
}
To do this, update the access summary analysis to be stored-property sensitive.
rdar://problem/32987932
Simple utility for traversing functions such that parent scopes are always
visited before noescape closures.
Note that recursion is disallowed. Noescape closures are not reentrant.
Record noescape closure scopes. This allows passes to process closures and their
parent scopes in a controlled order. AccessEnforcementSelection needs this
because it needs to process parent scopes before selecting enforcement within
noescape closures.
Eventually this could be used by the PassManager so that
AccessEnforcementSelection can go back to being a function transform.
IndexTrie is a more light-weight representation and it works well in this case.
This requires recovering the represented sequence from an IndexTrieNode, so
also add a getParent() method.
Add an interprocedural SIL analysis pass that summarizes the accesses that
closures make on their @inout_aliasable captures. This will be used to
statically enforce exclusivity for calls to functions that take noescape
closures.
The analysis summarizes the accesses on each argument independently and
uses the BottomUpIPAnalysis utility class to iterate to a fixed point when
there are cycles in the call graph.
For now, the analysis is not stored-property-sensitive -- that will come in a
later commit.
Replace `NameOfType foo = dyn_cast<NameOfType>(bar)` with DRY version `auto foo = dyn_cast<NameOfType>(bar)`.
The DRY auto version is by far the dominant form already used in the repo, so this PR merely brings the exceptional cases (redundant repetition form) in line with the dominant form (auto form).
See the [C++ Core Guidelines](https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#es11-use-auto-to-avoid-redundant-repetition-of-type-names) for a general discussion on why to use `auto` to avoid redundant repetition of type names.
In particular, support the following optimizations:
- owned-to-guaranteed
- dead argument elimination
Argument explosion is disabled for generics at the moment as it usually leads to slower code.
if the argument is an array literal.
For example:
arr += [1, 2, 3]
is replaced by:
arr.append(1)
arr.append(2)
arr.append(3)
This gives considerable speedups of up to 10x (for our micro-benchmarks which test this).
This is based on the work of @ben-ng, who implemented the first version of this optimization (thanks!).
array.append_element(newElement: Element)
array.append_contentsOf(contentsOf newElements: S)
And allow early inlining of them.
Those functions will be needed to optimize Array.append(contentsOf:).
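For example (illustrative only), this is the kind of call those semantics functions make recognizable to the optimizer:
var numbers = [1, 2]
// With a literal argument this can later be optimized into individual
// append_element calls, like the '+=' case above.
numbers.append(contentsOf: [3, 4, 5])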
There are now separate functions for function addition and deletion instead of InvalidationKind::Function.
Also, there is a new function for witness/vtable invalidations.
rdar://problem/29311657
There is already precedent for doing this via DominanceInfo. The reason I am
doing this is that I need the ownership verifier to be able to run on raw SIL
before no-return folding has run. This means I need to be able to ignore
unreachable code resulting from SILGen not inserting unreachables after
no-return functions.
The reason why SILGen does this is to preserve the source information of the
unreachable code so that we can emit nice errors about unreachable code.
rdar://29791263
Separate formal lowered types from SIL types.
The SIL type of an argument will depend on the SIL module's conventions.
The module conventions are determined by the SIL stage and LangOpts.
Almost NFC, but specialized manglings are incidentally broken as a result of
fixes to the way passes handle book-keeping of arguments. The mangler is fixed in
the subsequent commit.
Otherwise, NFC is intended, but deviations are quite possible due to rewriting the
logic in many places.
Not sure why but this was another "toxic utility method".
Most of the usages fell into one of three categories:
- The base value was always non-null, so we could just call
getCanonicalType() instead, making intent more explicit
- The result was being compared for equality, so we could
skip canonicalization and call isEqual() instead, removing
some boilerplate
- Utterly insane code that made no sense
There were only a couple of legitimate uses, and even there
open-coding the conditional null check made the code clearer.
Also while I'm at it, make the SIL open archetypes tracker
more typesafe by passing around ArchetypeType * instead of
Type and CanType.
Until now, the escape analysis would always pessimistically assume that any strong_release or release_value may result in a destructor call and that the object may escape through it. With this change, for local objects whose exact dynamic type is known, the escape analysis determines which destructors would be called and checks whether those objects may really escape in the destructors.
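A hypothetical Swift example of a case the analysis can now handle more precisely: the destructor below does not let the object escape, so a release of the local object no longer forces the analysis to assume it escapes.
final class Buffer {
  var values: [Int] = []

  deinit {
    // Only reads local state and never stores 'self' anywhere, so the
    // object cannot escape through this destructor.
    print(values.count)
  }
}

func sumValues() -> Int {
  let buffer = Buffer()       // exact dynamic type is known: Buffer
  buffer.values = [1, 2, 3]
  return buffer.values.reduce(0, +)
}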