This was mistakenly reverted in an attempt to fix buildbots.
Unfortunately it's now smashed into one commit.
---
Introduce @_specialize(<type list>) internal attribute.
This attribute can be attached to generic functions. The attribute's
arguments must be a list of concrete types to be substituted in the
function's generic signature. Any number of specializations may be
associated with a generic function.
This attribute provides a hint to the compiler. At -O, the compiler
will generate the specified specializations and emit calls to the
specialized code in the original generic function guarded by type
checks.
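For example, a freestanding generic function could carry several such hints at
once. This is only an illustrative sketch; the function below is hypothetical
and not part of this patch:

@_specialize(Int)
@_specialize(Double)
func areEqual<T: Equatable>(_ a: T, _ b: T) -> Bool {
  return a == b
}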
The current attribute is designed to be an internal tool for
performance experimentation. It does not affect the language or
API. This work may be extended in the future to add user-visible
attributes that do provide API guarantees and/or direct dispatch to
specialized code.
This attribute works on any generic function: a freestanding function
with generic type parameters, a nongeneric method declared in a
generic class, a generic method in a nongeneric class or a generic
method in a generic class. A function's generic signature is a
concatenation of the generic context and the function's own generic
type parameters.
e.g.
struct S<T> {
  var x: T
  @_specialize(Int, Float)
  mutating func exchangeSecond<U>(u: U, _ t: T) -> (U, T) {
    x = t
    return (u, x)
  }
}
// Substitutes: <T, U> with <Int, Float> producing:
// S<Int>::exchangeSecond<Float>(u: Float, t: Int) -> (Float, Int)
---
[SILOptimizer] Introduce an eager-specializer pass.
This pass finds generic functions with @_specialize attributes and
generates specialized code for the attribute's concrete types. It
inserts type checks and guarded dispatch at the beginning of the
generic function for each specialization. Since we don't currently
expose this attribute as API and don't specialize vtables and witness
tables yet, the only way to reach the specialized code is by calling
the generic function which performs the guarded dispatch.
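Written out as Swift source, the inserted guard conceptually looks like the
sketch below (illustrative only: the real transformation is performed on SIL,
and the specialized entry point's name is hypothetical).

// For a generic function hinted with @_specialize(Int), the generic
// entry conceptually becomes:
func maxValue<T: Comparable>(_ a: T, _ b: T) -> T {
  // Guard: if the dynamic types match the hint, dispatch to the
  // specialized body and cast the result back to T.
  if let ai = a as? Int, let bi = b as? Int {
    return maxValue_Int(ai, bi) as! T   // hypothetical specialized entry
  }
  // Otherwise fall through to the original generic body.
  return a < b ? b : a
}

// Stands in for the compiler-generated Int specialization.
func maxValue_Int(_ a: Int, _ b: Int) -> Int {
  return a < b ? b : a
}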
In the future, we can build on this work in several ways:
- cross module dispatch directly to specialized code
- dynamic dispatch directly to specialized code
- automated specialization based on less specific hints
- partial specialization
- and so on...
I reorganized and refactored the optimizer's generic utilities to
support direct function specialization as opposed to apply
specialization.
This splits the function signature module pass into two function passes.
Doing so lets us rewrite call sites to use the FSO-optimized function
before attempting inlining, while still performing a substantial amount of
optimization on the current function before attempting FSO on it.
It also moves us toward a model in which a module pass is NOT used unless
necessary.
I see neither regressions nor improvements on the performance test suite.
functionsignopts.sil and functionsignopt_sroa.sil are modified because the
mangler now takes into account information in the projection tree.
Temporarily reverting @_specialize because stdlib unit tests are
failing on an internal branch during deserialization.
This reverts commit e2c43cfe14, reversing
changes made to 9078011f93.
This change follows up on an idea from Michael (thanks!).
It enables debugging and profiling at the SIL level, which is useful for compiler debugging.
There is a new frontend option -gsil which lets the compiler write a SIL file and generate debug info for it.
For details see docs/DebuggingTheCompiler.rst and the comments in SILDebugInfoGenerator.cpp.
We really only need the analysis to tell whether a function has a caller
inside the module or not; we do not need to know the callsites.
Remove them for now to make the analysis more memory efficient.
Add a note to indicate it can be extended.
RecomputeFunctionList should really be a SmallVector instead of a
DenseSet. A DenseSet gives rise to a nondeterministic order when iterating over
all functions.
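As a small illustration of the determinism concern (a hypothetical Swift
sketch, not the C++ data structures involved): iterating a set visits elements
in an unspecified order, while an array preserves insertion order, so a
worklist kept in an array yields a stable recomputation order from run to run.

var worklistSet: Set<String> = []
var worklistArray: [String] = []
for name in ["f", "g", "h"] {
  worklistSet.insert(name)
  worklistArray.append(name)
}
print(Array(worklistSet))  // order is unspecified
print(worklistArray)       // always ["f", "g", "h"]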
Add an invalidateAnalysisForDeadFunction API. This API calls invalidateAnalysis
by default unless overridden by the analysis passes themselves. It passes the extra
information that this function is dead and is going to be removed from the module.
CallerAnalysis overrides this API and only invalidates caller/callee relations, but
does not push the function onto the recompute list.
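The default-unless-overridden behavior can be pictured with a small Swift
sketch (hypothetical names; the real interface is the C++ analysis hierarchy):

protocol Analysis {
  func invalidateAnalysis(for function: String)
  func invalidateAnalysisForDeadFunction(_ function: String)
}

extension Analysis {
  // Default: treat a dead function like any other invalidation.
  func invalidateAnalysisForDeadFunction(_ function: String) {
    invalidateAnalysis(for: function)
  }
}

// A CallerAnalysis-like client would provide its own
// invalidateAnalysisForDeadFunction to drop caller/callee edges without
// scheduling the dead function for recomputation.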
We also considered the possibility of keeping a computed list instead of a recompute
list, but that would introduce O(n^2) complexity: every time we try to complete
the computed list, we need to walk over all the functions that currently exist in the
module to make sure the computed list is complete.
I feel that eventually we can do a handleDeleteNotification for function deletion, and we
won't need the API added in this change.
Address the comments from 0acc0a8464
I still have not made up my mind how to handle deleted functions.
CallerAnalysis is not hooked up to anything yet.
The analysis can tell all the callsites which call a function in the module.
The analysis is computed and kept up to date lazily.
At its core, it keeps a list of functions that need to be recomputed for
the caller/callee relation to be precise; the analysis makes sure to recompute
them and clear the list before answering any query.
This is NFC right now. I am going to wire it up to function signature analysis
eventually.
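The lazy recompute-list pattern described above can be sketched in Swift as
follows (hypothetical names and simplified state; the real analysis is C++):

final class CallerInfoCache {
  private var recomputeList: [String] = []         // functions to refresh
  private var callers: [String: Set<String>] = [:] // callee -> callers

  // Mark a function as stale; the actual work is deferred to the next query.
  func invalidate(_ function: String) {
    recomputeList.append(function)
  }

  // Every query first drains the recompute list, then answers.
  func hasCaller(_ function: String) -> Bool {
    processRecomputeList()
    return !(callers[function] ?? []).isEmpty
  }

  private func processRecomputeList() {
    for function in recomputeList {
      // Re-scan 'function' and update the caller/callee edges here.
      _ = function
    }
    recomputeList.removeAll()
  }
}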
Split the computation of the optimized function signature into a separate analysis pass.
This pass is run on every function, and the optimized signature is returned through
getArgDescList and getResultDescList.
The next step is to split the cloning and callsite rewriting into their own function passes.
rdar://24730896
"
analysis pass.
This pass is run on every function and the optimized signature is return'ed through the
getArgDescList and getResultDescList.
Next step is to split to cloning and callsite rewriting into their own function passes.
rdar://24730896
We already compute this information, so this change just stores what
we were already computing.
One thing to note is that in code with canonicalized loops, we will always
have only one backedge. But we would like loop regions to be correct even for
non-canonicalized code, so we support having multiple backedges. Since the
common case is one backedge, we optimize for that case.
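One way to picture the single-backedge optimization (a hypothetical Swift
sketch, not the actual LoopRegion representation): keep the common single
backedge inline and only fall back to a list for the rare multi-backedge case.

struct BackedgeList {
  private var first: Int?      // subregion index of the first backedge
  private var rest: [Int] = [] // extra backedges; usually empty

  mutating func add(_ subregion: Int) {
    if first == nil { first = subregion } else { rest.append(subregion) }
  }

  var all: [Int] {
    guard let first = first else { return [] }
    return [first] + rest
  }
}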
This commit contains updated tests and also updates to the loop region graph
viewer so that it draws backedges as green arrows from the loop to its backedge
subregions. The test updates were done by examining each test case by hand.
This is safe because the closure is not allowed to capture the array according
to the documentation of 'withUnsafeMutableBufferPointer', and the current
implementation makes sure that any such capture would observe an empty array by
swapping self with an empty array.
Users will get "almost guaranteed" stack promotion for small arrays by writing
something like:
func testStackAllocation(p: Proto) {
  var a = [p, p, p]
  a.withUnsafeMutableBufferPointer {
    let array = $0
    work(array)
  }
}
It is "almost guaranteed" because we need to statically be able to tell the size
required for the array (no unspecialized generics) and the total buffer size
must not exceed 1K.
LSValue::reduce reduces a set of LSValues (mapped to a set of LSLocations) to
a single LSValue, which can then be used as the forwarding value for the location.
Previously, we expanded into intermediate nodes and leaf nodes and then went
bottom up, trying to create a single LSValue out of the given LSValues.
Instead, we now use recursion to go top down. This simplifies the code, and it
is fine because we do not expect to run into type trees that are too deep.
Existing test cases ensure correctness.
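A hypothetical Swift sketch of the top-down shape (illustrative only; the real
code works on LSValues and the projection type tree):

indirect enum FieldTree {
  case leaf(value: Int)
  case aggregate(fields: [FieldTree])
}

// Recurse from the root and combine the children's results directly,
// instead of expanding to leaves first and assembling bottom-up.
func reduceValues(_ tree: FieldTree) -> [Int] {
  switch tree {
  case .leaf(let value):
    return [value]
  case .aggregate(let fields):
    return fields.flatMap(reduceValues)
  }
}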
This enables function signature optimization to handle a case of self-recursion.
With this change we convert 11 @owned return values to "not owned" and
179 @owned parameters to @guaranteed.
rdar://24022375
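A hypothetical Swift example of the self-recursive shape this now handles
(illustrative only; the actual rewriting happens on the SIL conventions):

// The recursive call inside the function itself is the additional call
// site that must be rewritten when the parameter convention changes.
func sumAll(_ values: [Int]) -> Int {
  guard let first = values.first else { return 0 }
  return first + sumAll(Array(values.dropFirst()))
}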
Reinstates commit 0c2ca94ef7
With two bug fixes:
*) a use-after-free ASan crash
*) a wrong check in ValueLifetimeAnalysis::isWithinLifetime
And some refactoring.