This disables a number of passes when ownership is enabled. That will allow me
to keep transparent functions in OSSA and skip most of the performance
pipeline, so they are not touched by passes that have not been updated for
ownership. This is important so that at -Onone we can import transparent
functions and inline them into other OSSA functions (you can't inline from
OSSA into non-OSSA).
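For illustration, here is a minimal sketch of the usual guard such a skipped
pass relies on, written against swift's SILFunctionTransform interface; the
pass name is made up:

    // Sketch only: a function pass that has not been updated for ownership
    // bails out on functions still in OSSA form. "LegacyFooPass" is a
    // made-up name.
    class LegacyFooPass : public SILFunctionTransform {
      void run() override {
        SILFunction *fn = getFunction();
        // Transparent functions stay in OSSA through the pipeline, so a
        // pass that doesn't understand ownership must simply skip them.
        if (fn->hasOwnership())
          return;
        // ... existing non-OSSA logic ...
      }
    };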
To do so, this commit does a few different things:
1. I changed SILOptFunctionBuilder to notify the pass manager's logging
functionality and the analyses when new functions are added to the module.
NOTE: This deliberately does not put the new function on the pass manager
worklist, since we do not want to introduce a large amount of re-optimization
by mistake. Such a thing should be explicit.
2. I eliminated SILModuleTransform::notifyAddFunction. It just performed the
operations from item 1; now that SILOptFunctionBuilder performs them for us,
it is no longer needed.
3. I changed SILFunctionTransform::notifyAddFunction to just add the function
to the pass manager worklist. It no longer needs to notify the pass manager's
logging or the analyses that a new function was added to the module, since
SILOptFunctionBuilder now performs that operation. Given its reduced
functionality, I renamed it to addFunctionToPassManagerWorklist(...) (see the
sketch after this list). The name is a little long/verbose, but that is a
feature, since one should think before getting the pass manager to rerun
transforms on a function. Giving it a longer name also calls out the
operation visually, giving it more prominence when reading the code. NOTE: I
did the rename using Xcode's refactoring functionality!
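To make the division of labor concrete, here is a small self-contained C++
model of the scheme above. None of these types are the real swift classes
(only addFunctionToPassManagerWorklist is a name from this commit); the model
only shows the shape "builder notifies, transforms opt in":

    #include <functional>
    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    struct Function { std::string name; };

    struct PassManager {
      std::vector<Function *> worklist; // functions to (re)optimize
      std::vector<std::function<void(Function *)>> analysisObservers;

      // What the builder triggers for every new function: logging and
      // analysis notification, and nothing more.
      void notifyOfNewFunction(Function *fn) {
        std::cout << "*** New function added: " << fn->name << "\n";
        for (auto &notify : analysisObservers)
          notify(fn);
        // Deliberately NOT worklist.push_back(fn): re-optimization must be
        // an explicit request, never a side effect of creating a function.
      }

      // What a transform must call explicitly when it really wants the new
      // function re-optimized.
      void addFunctionToPassManagerWorklist(Function *fn) {
        worklist.push_back(fn);
      }
    };

    struct FunctionBuilder {
      PassManager &pm;
      std::vector<std::unique_ptr<Function>> owned;

      Function *createFunction(std::string name) {
        owned.push_back(std::make_unique<Function>(Function{std::move(name)}));
        Function *fn = owned.back().get();
        pm.notifyOfNewFunction(fn); // logging + analyses, automatically
        return fn;
      }
    };

    int main() {
      PassManager pm;
      FunctionBuilder builder{pm};
      Function *thunk = builder.createFunction("specialized_thunk");
      pm.addFunctionToPassManagerWorklist(thunk); // explicit opt-in
      std::cout << "worklist size: " << pm.worklist.size() << "\n";
    }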
rdar://42301529
I am going to add the code that performs the notifications in a bit. I tried
to pass down the builder instead of the pass manager, and I also tried not to
change the formatting.
rdar://42301529
Adds a combined API to output both a debug message and an optimization
remark. The previously added test partial_specialization_debug.sil ensures
that this is NFC for the debug output.
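The commit text doesn't show the API itself, so the following is only a
plausible sketch of such a combined helper (all names here are assumptions,
not the actual API): the message is built once and routed both to LLVM's
-debug-only stream and, when an emitter is present, to an optimization
remark.

    #include <functional>
    #include <string>
    #include "llvm/Support/Debug.h"
    #include "llvm/Support/raw_ostream.h"

    #define DEBUG_TYPE "generic-specialization" // illustrative debug tag

    // Hypothetical stand-in for the real remark emitter.
    struct RemarkEmitter {
      void emitRemark(const std::string &msg) { /* forward to diagnostics */ }
    };

    // Identical text goes to the debug stream (keeping
    // partial_specialization_debug.sil NFC) and, optionally, to a remark.
    inline void remarkAndDebug(RemarkEmitter *emitter,
                               const std::function<std::string()> &message) {
      LLVM_DEBUG(llvm::dbgs() << message() << "\n");
      if (emitter)
        emitter->emitRemark(message());
    }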
For now these are underscored attributes, i.e. compiler-internal attributes:
@_optimize(speed)
@_optimize(size)
@_optimize(none)
These attributes override the command-line-specified optimization mode for a
specific function. The @_optimize(none) attribute is equivalent to the
already existing @_semantics("optimize.sil.never") attribute.
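For illustration, the override can be thought of as resolving like this (a
self-contained sketch; the enum shape and names are assumptions, not the
exact compiler code):

    // Assumed enum shape: NotSet means "no @_optimize attribute present".
    enum class OptimizationMode { NotSet, NoOptimization, ForSpeed, ForSize };

    // A function-level @_optimize attribute, when present, wins over the
    // command-line mode (-Onone / -O / -Osize).
    OptimizationMode effectiveOptimizationMode(OptimizationMode functionAttr,
                                               OptimizationMode commandLine) {
      if (functionAttr != OptimizationMode::NotSet)
        return functionAttr;
      return commandLine;
    }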
At some point, pass definitions were heavily macro-ized, and passes'
descriptive names ended up defined in two places. This is not only redundant
but a source of confusion: you could waste a lot of time grepping for the
wrong string. I removed all the getName() overrides, which, at around 90
passes, was a fairly significant amount of code bloat.
Any pass that we want to be able to invoke by name from a tool (sil-opt) or a
pipeline plan *should* have a unique type name, enum value, command-line
string, and name string. I removed a comment about the various inliner passes
that contradicted that.
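For reference, this is the kind of single macro-ized definition meant here
(Passes.def style; the description string is illustrative), from which the
type name, enum value, command-line string, and name string all derive:

    // One entry, one string to grep for. Roughly expands to the pass's
    // PassKind enum value, the "generic-specializer" tag accepted by
    // sil-opt, and the descriptive name used by the pass manager's logging.
    PASS(GenericSpecializer, "generic-specializer",
         "Specialize calls to generic functions")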
Side note: we should be consistent with the policy that a pass is identified
by its type. We have a couple of passes, LICM and CSE, which currently
violate that convention.
There are now separate functions for function addition and deletion instead
of a single InvalidationKind::Function. Also, there is a new function for
witness table/vtable invalidations.
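A sketch of what the split surface looks like (method names and signatures
here are assumptions, not the exact API):

    class SILFunction;

    // Instead of one InvalidationKind::Function funnel, analyses get
    // dedicated callbacks.
    class SILAnalysis {
    public:
      virtual ~SILAnalysis() = default;
      // A function was added to the module.
      virtual void notifyAddFunction(SILFunction *fn) {}
      // A function is about to be removed from the module.
      virtual void notifyDeleteFunction(SILFunction *fn) {}
      // Witness tables or vtables changed.
      virtual void invalidateFunctionTables() {}
    };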
rdar://problem/29311657
This reinstates commit de9622654d
The problem of the infinite loop should be fixed by the previous fix in
FunctionSignatureOpts. In addition, this new commit implements a safety check
to avoid such cases, even if buggy optimizations try to keep pushing new
functions onto the work list.
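The commit does not spell the check out, but a cap of roughly this shape
would do it (threshold and names are illustrative, not the real code):

    #include <unordered_map>

    class SILFunction; // stand-in declaration for this sketch

    constexpr unsigned MaxFunctionRequeues = 512; // illustrative cutoff

    // Returns false once a function has been re-queued too often, so even a
    // buggy optimization that keeps pushing new work cannot loop forever.
    inline bool
    shouldRequeue(std::unordered_map<SILFunction *, unsigned> &count,
                  SILFunction *fn) {
      return ++count[fn] <= MaxFunctionRequeues;
    }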
We were waiting to delete old apply / try_apply instructions until after
fully specializing all of the apply / try_apply instructions in the function.
This is problematic when we have a recursive call and are specializing the
function that we're currently processing, since we end up cloning the
function with the old apply / try_apply still present.
Rather than doing this, clean up the old apply / try_apply immediately
after processing each one.
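In sketch form (the helpers here are assumptions, not the real code), the
loop now looks like this:

    // Each old apply / try_apply is rewritten and erased before we move on,
    // so if this very function is later cloned (the recursive case), no
    // stale call site gets copied along.
    for (FullApplySite oldSite : collectFullApplySites(fn)) {
      if (!rewriteWithSpecializedCallee(oldSite)) // assumed helper
        continue;
      // Delete immediately instead of batching deletions until the end.
      oldSite.getInstruction()->eraseFromParent();
    }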
Resolves SR-1114 / rdar://problem/25455308.
Do not specialize an apply/partial_apply that we've already added to the
set of dead instructions. Doing so can result in creating a new
instruction which we will leave around, and which will have a type
mismatch in its parameter list.
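The guard amounts to something like this (DeadApplies being the
to-be-deleted set from this code; exact names assumed):

    // Skip call sites already queued for deletion; specializing one would
    // create a replacement we never clean up, with a mismatched parameter
    // list.
    if (DeadApplies.count(applySite.getInstruction()))
      continue;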
Fixes rdar://problem/25447450.
We ended up adding the same instruction twice to a SmallVector of
instructions to be deleted. To avoid this, we'll track these
to-be-deleted instructions in a SmallSetVector instead.
We were also failing to add an instruction that we can delete to the set
of instructions to be deleted, so I fixed that as well.
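The container change in a nutshell (a minimal sketch using LLVM's ADT inside
the existing code):

    #include "llvm/ADT/SetVector.h"

    // SmallSetVector keeps the deterministic iteration order of the old
    // SmallVector, but insert() ignores duplicates, so the same instruction
    // can no longer be queued for deletion twice.
    llvm::SmallSetVector<SILInstruction *, 16> deadInstructions;
    deadInstructions.insert(inst); // first time: inserted, returns true
    deadInstructions.insert(inst); // duplicate: ignored, returns false
    for (SILInstruction *i : deadInstructions)
      i->eraseFromParent(); // each instruction deleted exactly once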
I've added a test case, but it's currently disabled because fixing this
turned up another issue in the same code which I still need to take a
look at.
Fixes rdar://problem/25369617.
We were giving special handling to ApplyInst when attempting to use
getMemoryBehavior(). This commit changes the special handling to work on all
full apply sites instead of just ApplyInst. Additionally, we now look through
partial applies and thin_to_thick_function conversions.
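Roughly, the generalized handling has this shape (getCalleeOrigin stands in
for the look-through; treat the details as assumptions):

    // Handle any full apply site (apply or try_apply) uniformly, and look
    // through partial_apply / thin_to_thick_function to the underlying
    // callee before deciding on a memory behavior.
    if (FullApplySite site = FullApplySite::isa(inst)) {
      SILValue callee = site.getCalleeOrigin(); // looks through conversions
      // ... derive the memory behavior from the callee's known effects ...
    }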
I also added a dumper called BasicInstructionPropertyDumper that just dumps the
results of SILInstruction::get{Memory,Releasing}Behavior() for all instructions
in order to verify this behavior.
With this re-abstraction, a specialized function has the same calling
convention as if it had been written with the specialized types in the first
place.
In general this results in fewer alloc_stacks and loads/stores. It can also
eliminate some re-abstraction thunks, e.g. if a generic closure is used in a
non-generic context. In some (hopefully rare) cases it may require adding
re-abstraction thunks.
If a function has multiple indirect results, only the first is converted to a
direct result. This is an open TODO.
Begin unbundling devirtualization, specialization, and inlining by
recreating the stand-alone generic specializer pass.
I've added a use of the pass to the pipeline, but this is almost certainly
not the final location where it will run. It's primarily there to ensure this
code gets exercised.
Since this runs prior to inlining, it changes the order in which some
functions are specialized, which results in differences in the output order
of one of the tests (one which similarly changed when devirtualization,
specialization, and inlining were bundled together).