- `Mangle::ASTMangler::mangleAutoDiffDerivativeFunction()` and `Mangle::ASTMangler::mangleAutoDiffLinearMap()` accept an original function declaration and return a mangled name for a derivative function or linear map. These are called during SILGen and TBDGen.
- `Mangle::DifferentiationMangler` handles differentiation function mangling in the differentiation transform. This is necessary because the original function must be demangled and then remangled as part of a differentiation function mangling tree, so that the mangled derivative generic signature carries the correct substitutions.
A mangled differentiation function name includes:
- The original function.
- The differentiation function kind.
- The parameter indices for differentiation.
- The result indices for differentiation.
- The derivative generic signature.
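To make the list above concrete, here is a toy sketch (not the real mangling grammar; the struct, field names, and output format are all hypothetical) showing the five components being combined into a single name:

```cpp
// Toy illustration only: the real mangler encodes these pieces using the Swift
// mangling grammar, not this ad-hoc format.
#include <iostream>
#include <string>
#include <vector>

struct ToyDerivativeMangling {
  std::string originalName;             // mangled original function
  std::string kind;                     // differentiation function kind, e.g. "vjp"
  std::vector<unsigned> paramIndices;   // parameter indices for differentiation
  std::vector<unsigned> resultIndices;  // result indices for differentiation
  std::string derivativeGenericSig;     // mangled derivative generic signature

  std::string str() const {
    std::string out = originalName + "_" + kind + "_p";
    for (unsigned i : paramIndices) out += std::to_string(i);
    out += "_r";
    for (unsigned i : resultIndices) out += std::to_string(i);
    if (!derivativeGenericSig.empty()) out += "_" + derivativeGenericSig;
    return out;
  }
};

int main() {
  ToyDerivativeMangling m{"foo", "vjp", {0, 1}, {0}, "sig"};
  std::cout << m.str() << "\n"; // foo_vjp_p01_r0_sig
}
```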
When a non-value instruction (e.g. a destroy_addr) was deleted, the corresponding cache entry in MemoryBehaviorCache was not invalidated.
If a new instruction was then allocated at the same memory location, the old, now-invalid cache entry was reused.
This bug triggered a SIL memory lifetime failure in TempRValueElimination.
https://bugs.swift.org/browse/SR-13985
rdar://problem/72614608
This change also fixes another problem (found by inspection): individual result values of MultipleValueInstructions were not invalidated correctly.
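A minimal sketch of the failure mode (a toy cache, not the real MemoryBehaviorCache): the cache is keyed on instruction pointers, so a deleted instruction's entry must be erased before the allocator can hand the same address to a new instruction:

```cpp
#include <cassert>
#include <unordered_map>

struct Inst { /* stand-in for a SIL instruction */ };
enum class MemBehavior { None, MayRead, MayWrite };

struct ToyBehaviorCache {
  std::unordered_map<const Inst *, MemBehavior> entries;

  void cache(const Inst *i, MemBehavior b) { entries[i] = b; }

  // Must run for *every* deleted instruction, including non-value instructions
  // like destroy_addr; otherwise a new instruction allocated at the same
  // address would hit the stale entry.
  void invalidate(const Inst *i) { entries.erase(i); }

  bool lookup(const Inst *i, MemBehavior &out) const {
    auto it = entries.find(i);
    if (it == entries.end()) return false;
    out = it->second;
    return true;
  }
};

int main() {
  ToyBehaviorCache cache;
  Inst *a = new Inst();
  cache.cache(a, MemBehavior::None);
  cache.invalidate(a); // without this, b below could see a's stale entry
  delete a;

  Inst *b = new Inst(); // may be allocated at a's old address
  MemBehavior mb;
  bool hit = cache.lookup(b, mb);
  assert(!hit);
  delete b;
}
```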
MandatoryCopyPropagation must be a separate pass in order to preserve
all debug_value instructions. CopyPropagation cannot preserve
debug_value because, as a rule, debug information cannot affect -O
behavior.
Canonicalizing an OSSA lifetime provably minimizes the number of retains and
releases within that lifetime's boundaries. This eliminates the
need for ad-hoc optimization of OSSA copies.
This initial implementation only canonicalizes owned values, but
canonicalizing guaranteed values is a simple extension.
This was originally part of the CopyPropagation prototype years
ago. Now OSSA is specified completely enough that it can be turned
into a simple utility instead.
CanonicalOSSALifetime uses PrunedLiveness to find the extended live
range and identify the consumes on the boundary. All other consumes
need their own copy. No other copies are needed.
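As a rough illustration of that rule, here is a single-block toy model (conceptual only; the real utility computes liveness over a CFG via PrunedLiveness): liveness extends to the last use, a consuming use on that boundary may take the original value, and every other consuming use needs its own copy:

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

struct Use { unsigned position; bool consuming; };

unsigned copiesNeeded(const std::vector<Use> &uses) {
  if (uses.empty()) return 0;
  unsigned lastPos = 0;
  for (const Use &u : uses) lastPos = std::max(lastPos, u.position);

  unsigned copies = 0;
  bool boundaryConsumeSeen = false;
  for (const Use &u : uses) {
    if (!u.consuming) continue;
    if (u.position == lastPos && !boundaryConsumeSeen)
      boundaryConsumeSeen = true; // the boundary consume takes the original
    else
      ++copies;                   // any other consume requires its own copy
  }
  return copies;
}

int main() {
  // Two interior consumes plus one boundary consume: two copies are needed.
  std::vector<Use> uses = {{1, true}, {2, false}, {3, true}, {4, true}};
  std::cout << copiesNeeded(uses) << "\n"; // 2
}
```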
By running this after other transformations that affect OSSA
lifetimes, we avoid the need for pattern-matching optimization in
SemanticARC to recover from suboptimal copy patterns, which is not
robust, maintainable, or efficient.
It's important that fundamental APIs don't lie to their users.
Make it clear that this API always returns true for deinitialization,
even if we could for example analyze the destructor and determine that
there aren't any actual writes!
This makes it clear that we are not talking about madeChange in a general
sense; instead we are just tracking whether /any/ of our callbacks were invoked. This
is still useful enough for our users and will prevent confusion.
This allows me to do a couple of things that improve quality, correctness, and ease of use:
1. I reimplemented InstModCallbacks' RAUW and RAUW-and-erase helpers on top of
setUseValue/deleteInst (a sketch follows this list). Beyond letting the caller specify fewer things, we
gain orthogonality, preventing bugs such as overriding RAUW-and-erase but not
erase, or having the erase used inside RAUW-and-erase behave differently from
the erase used by deleteInst.
2. A number of places using InstModCallbacks were also setting uses directly,
without InstModCallbacks being able to perform that operation (since it
only supported RAUW). This is an anti-pattern and could introduce subtle bugs
by leaving relevant state in the caller out of date.
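Here is a minimal sketch of point (1); the type and method names are hypothetical stand-ins, not the actual InstModCallbacks API. The point is that RAUW and RAUW-and-erase are built only from the setUseValue/deleteInst primitives, so customizing the primitives customizes every derived operation, and the erase used inside RAUW-and-erase is exactly the erase used everywhere else:

```cpp
#include <vector>

struct Value {};
struct Operand { Value *value = nullptr; };
struct Inst {
  Value result;
  std::vector<Operand *> uses; // operands of other instructions that use us
};

struct ToyInstModCallbacks {
  // The two primitives a client may customize.
  void setUseValue(Operand *use, Value *newValue) { use->value = newValue; }
  void deleteInst(Inst *inst) { delete inst; }

  // Derived operations expressed purely in terms of the primitives above.
  void replaceValueUsesWith(Inst *inst, Value *newValue) {
    for (Operand *use : inst->uses)
      setUseValue(use, newValue);
  }
  void replaceAllUsesAndErase(Inst *inst, Value *newValue) {
    replaceValueUsesWith(inst, newValue);
    deleteInst(inst); // the same erase used everywhere else
  }
};
```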
Before this PR, if a caller did not customize a specific callback
of InstModCallbacks, we would store a static default std::function into
InstModCallbacks. This meant we always paid for an indirect jump, which is
unfortunate since this code is often called in loops.
In this PR, I eliminate this problem as follows:
1. I made all of the actual callback std::function fields in InstModCallbacks private
and gave them a "Func" suffix (e.g. deleteInst -> deleteInstFunc).
2. I created public methods with the old callback names that actually invoke the
callbacks. As long as we are not escaping callbacks from
InstModCallbacks, this PR requires no source changes,
since a call to a std::function field simply becomes a call to a method.
3. I changed all of the places that were escaping InstModCallbacks' callbacks to take
an InstModCallbacks instead. We shouldn't be escaping them anyway.
4. I changed the default value of each callback in InstModCallbacks to
nullptr and changed the public helper methods to check whether the callback is
null. If the callback is non-null it is called; otherwise the helper falls
back to an inline default implementation of the operation.
Altogether, this means the cost of a plain InstModCallbacks is reduced, and
one only pays the indirect-call price as one customizes it further, which
scales better.
P.S. As a little extra, I added a madeChange field to
InstModCallbacks. Now that the helpers call the callbacks, I can
easily insert instrumentation like this, allowing users to pass in an
InstModCallbacks and see whether anything was RAUWed without needing to specify a
callback.
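A minimal sketch of the dispatch pattern described above, using hypothetical names rather than the real InstModCallbacks members: the std::function field is private with a "Func" suffix and defaults to null, the public method with the old name checks for null and falls back to an inline default (so an uncustomized object pays no indirect call), and invoking any helper flips the madeChange-style flag:

```cpp
#include <functional>
#include <utility>

struct Inst { /* stand-in for a SIL instruction */ };

class ToyCallbacks {
  // Private storage, null by default; name mirrors the "Func" suffix convention.
  std::function<void(Inst *)> deleteInstFunc;
  bool anyCallbackInvoked = false; // the madeChange-style instrumentation

public:
  ToyCallbacks() = default;
  explicit ToyCallbacks(std::function<void(Inst *)> f)
      : deleteInstFunc(std::move(f)) {}

  // Callers keep writing callbacks.deleteInst(i); only the implementation
  // changed from invoking a field to calling this method.
  void deleteInst(Inst *inst) {
    anyCallbackInvoked = true;
    if (deleteInstFunc) {
      deleteInstFunc(inst); // customized path: one indirect call
      return;
    }
    delete inst;            // inline default: no indirect call
  }

  bool hadCallbackInvocation() const { return anyCallbackInvoked; }
};
```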
It's against the principles of pass design to check the driver mode
within the pass. A pass always needs to do the same thing regardless
of where it runs in the pass pipeline. It also needs to be possible to
test passes in isolation.
This bare-bones utility will be the basis for
CanonicalizeOSSALifetime. It is maximally flexible and can be adopted
by any analysis that needs SSA-based liveness expressed in terms of
the live blocks. It's meant to be layered underneath various
higher-level analyses.
We could consider revamping ValueLifetimeAnalysis and layering it on
top of this. If PrunedLiveness is adopted widely enough, we can
combine it with a block numbering analysis so we can micro-optimize
the internal data structures.
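For orientation, here is a conceptual sketch of block-level pruned liveness (toy types; not the actual PrunedLiveness interface): blocks are recorded as live-within or live-out, uses are recorded with a lifetime-ending flag, and the boundary computation and CFG walk are left out:

```cpp
#include <unordered_map>

struct Block {};
struct Inst { Block *parent; };

enum class Liveness { Dead, LiveWithin, LiveOut };

struct ToyPrunedLiveness {
  std::unordered_map<const Block *, Liveness> liveBlocks;
  std::unordered_map<const Inst *, bool> users; // true = lifetime-ending use

  void updateForUse(const Inst *user, bool lifetimeEnding) {
    users[user] = lifetimeEnding;
    Liveness &state = liveBlocks[user->parent];
    if (state == Liveness::Dead)
      state = Liveness::LiveWithin;
    // A real implementation would walk predecessors here, marking blocks
    // LiveOut until reaching the definition.
  }

  Liveness getBlockLiveness(const Block *b) const {
    auto it = liveBlocks.find(b);
    return it == liveBlocks.end() ? Liveness::Dead : it->second;
  }
};
```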
This is a generic API that, when ownership is enabled, allows one to replace all
uses of a value with another value of differing ownership, transforming or
lifetime-extending as appropriate.
This API supports all pairings of ownership /except/ replacing a value of
OwnershipKind::None with a value whose ownership is not OwnershipKind::None. That is a more
complex optimization that we do not support today. As a result, the state struct
includes a helper routine that callers can use to check whether the two values
they want to process can be handled by the algorithm.
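A tiny conceptual sketch of that compatibility check (the type and method names are hypothetical, not the actual helper): everything is handled except replacing an OwnershipKind::None value with a value of some other ownership:

```cpp
enum class OwnershipKind { None, Owned, Guaranteed, Unowned };

struct ToyOwnershipRAUWHelper {
  OwnershipKind oldValueOwnership;
  OwnershipKind newValueOwnership;

  // Callers query this before asking the utility to rewrite uses.
  bool canFixUpOwnershipForRAUW() const {
    if (oldValueOwnership == OwnershipKind::None &&
        newValueOwnership != OwnershipKind::None)
      return false; // the one unsupported pairing
    return true;
  }
};
```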
My motivation is to use this to update InstSimplify and SILCombiner in a
less bug-prone way, rather than just turning things off.
Noting that this transformation inserts ownership instructions, I have made sure
to test this API in two ways:
1. With Mandatory Combiner alone (to make sure it works period).
2. With Mandatory Combiner + Semantic ARC Opts to make sure that we can
eliminate the extra ownership instructions it inserts.
As one can see from the tests, the optimizer today is able to handle all of
these transforms except one conditional case where I need to eliminate a dead
phi arg. I have a separate branch that handles that today, but it exposed unsafe
behavior in ClosureLifetimeFixup that I need to fix before I can land
it. I don't want that to block this PR, since the current low-level ARC
optimizer may be able to help here; this is a simple transform it performs
all of the time.
If the specialized function has re-abstracted (= converted from indirect to direct) resilient argument or return types, use an alternative mangling: "TB" instead of "Tg".
Resilient parameters/returns can be converted from indirect to direct if the specialization is created within the type's resilience domain, i.e. in its module (where the type is loadable).
In this case we need to generate a different mangled name for the specialized function to distinguish it from specializations in other modules, which cannot re-abstract this resilient type.
This fixes a miscompile resulting from ODR-linking specializations from different modules, which in fact have different function signatures.
https://bugs.swift.org/browse/SR-13900
rdar://71914016
I reimplemented the original addNewEdgeValueToBranch to just call the new
overload with a default InstModCallbacks, so behavior is unchanged, and now we can plug
callbacks into this utility!
When instructions are changed within a pass in a way that affects subsequent alias queries in the same pass run,
their alias analysis information must be invalidated.
Otherwise it can result in miscompiles and/or invalid SIL.
rdar://71924430
In derivatives of loops, no longer allocate boxes for indirect case payloads. Instead, use a custom pullback context in the runtime that contains a bump-pointer allocator.
When a function contains a differentiated loop, the closure context is a `Builtin.NativeObject` containing a `swift::AutoDiffLinearMapContext` and a tail-allocated top-level linear map struct (the linear map struct that was previously partial-applied directly into the pullback). In branching trace enums, the payloads of previously indirect cases are allocated by `swift::AutoDiffLinearMapContext::allocate` and stored as a `Builtin.RawPointer`.
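For intuition, here is a toy bump-pointer allocation context (conceptual only; not the actual swift::AutoDiffLinearMapContext, whose interface and slab strategy may differ): payloads are carved out of growing slabs and handed back as raw pointers, and everything is freed together when the context goes away:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <memory>
#include <vector>

class ToyLinearMapContext {
  std::vector<std::unique_ptr<char[]>> slabs;
  char *cursor = nullptr;
  char *end = nullptr;

  static char *alignPtr(char *p, std::size_t align) {
    auto v = reinterpret_cast<std::uintptr_t>(p);
    v = (v + align - 1) & ~(std::uintptr_t(align) - 1);
    return reinterpret_cast<char *>(v);
  }

public:
  void *allocate(std::size_t size, std::size_t align) {
    char *p = cursor ? alignPtr(cursor, align) : nullptr;
    if (!p || p + size > end) {
      // Start a new slab; all slabs are freed together with the context.
      std::size_t slabSize = std::max<std::size_t>(4096, size + align);
      slabs.push_back(std::make_unique<char[]>(slabSize));
      cursor = slabs.back().get();
      end = cursor + slabSize;
      p = alignPtr(cursor, align);
    }
    cursor = p + size;
    return p;
  }
};
```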
Otherwise, one is always forced to use ValueLifetimeAnalysis::Frontier, a
SmallVector<SILInstruction *, 4>. That inline size may not be appropriate for every
problem, so it makes sense to keep Frontier as a good rule of thumb, but use
FrontierImpl on the actual API boundary to loosen the constraint when the user
wishes to do so.
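A short sketch of the pattern, assuming LLVM's SmallVector/SmallVectorImpl are available (the computeFrontier function and Inst type here are hypothetical stand-ins): the API boundary takes a size-agnostic SmallVectorImpl reference, while a typedef with a default inline size remains the rule-of-thumb choice:

```cpp
#include "llvm/ADT/SmallVector.h"

struct Inst {};

using Frontier = llvm::SmallVector<Inst *, 4>;      // convenient default size
using FrontierImpl = llvm::SmallVectorImpl<Inst *>; // size-agnostic API boundary

// Hypothetical API: accepts any inline size the caller chose.
void computeFrontier(FrontierImpl &frontier) {
  frontier.push_back(nullptr); // placeholder for real results
}

void caller() {
  llvm::SmallVector<Inst *, 16> bigFrontier; // caller-specific inline size
  computeFrontier(bigFrontier);

  Frontier defaultFrontier;                  // or the rule-of-thumb default
  computeFrontier(defaultFrontier);
}
```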
Adds a new flag "-experimental-skip-all-function-bodies" that skips
typechecking and SIL generation for all function bodies (where
possible).
`didSet` accessors are still typechecked and have SIL generated, since their
bodies are checked for the `oldValue` parameter, but they are not serialized.
Parsing will generally be skipped as well, but this isn't necessarily
the case since other flags (e.g. "-verify-syntax-tree") may force delayed
parsing off.
* Redundant hop_to_executor elimination: if a hop_to_executor is dominated by another hop_to_executor with the same operand, it is eliminated:
    hop_to_executor %a
    ...                 // no suspension points
    hop_to_executor %a  // can be eliminated
* Dead hop_to_executor elimination: if a hop_to_executor is not followed by any code that must run on its actor's executor, it is eliminated:
    hop_to_executor %a
    ...                 // no instructions that must run on %a's executor
    return
rdar://problem/70304809
to check for improperly nested '@_semantics' functions.
Add a missing @_semantics("array.init") in ArraySlice found by the
diagnostic.
Distinguish between array.init and array.init.empty.
Categorize the types of semantic functions by how they affect the
inliner and pass pipeline, and centralize this logic in
PerformanceInlinerUtils. The ultimate goal is to prevent inlining of
"Fundamental" @_semantics calls and @_effects calls until the late
pipeline where we can safely discard semantics. However, that requires
significant pipeline changes.
In the meantime, this change prevents the situation from getting worse
and makes the intention clear. However, it has no significant effect
on the pass pipeline and inliner.
Add AccessedStorage::compute and computeInScope to mirror AccessPath.
Allow recovering the begin_access for Nested storage.
Add AccessedStorage::visitRoots().
Things that have come up recently but are somewhat blocked on this:
- Moving AccessMarkerElimination down in the pipeline
- SemanticARCOpts correctness and improvements
- AliasAnalysis improvements
- LICM performance regressions
- RLE/DSE improvements
Begin to formalize the model for valid memory access in SIL. Ignoring
ownership, every access is a def-use chain in three parts:
object root -> formal access base -> memory operation address
AccessPath abstracts over this path and standardizes the identity of a
memory access throughout the optimizer. This abstraction is the basis
for a new AccessPathVerification.
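A conceptual sketch of that decomposition (toy types; not the real AccessedStorage/AccessPath API): an access is identified by its base storage plus a list of subobject indices from that base down to the address the memory operation actually uses:

```cpp
#include <cstddef>
#include <vector>

struct Value {};

enum class StorageKind { Box, Stack, Global, Class, Argument, Unidentified };

struct ToyAccessedStorage {
  StorageKind kind;
  const Value *root; // object root, e.g. the class reference or box
};

struct ToyAccessPath {
  ToyAccessedStorage base;       // formal access base
  std::vector<unsigned> indices; // struct/tuple/tail indices to the subobject

  // Simplified containment check: this path covers 'other' if the bases match
  // and this path's indices are a prefix of the other's.
  bool contains(const ToyAccessPath &other) const {
    if (base.root != other.base.root || base.kind != other.base.kind)
      return false;
    if (indices.size() > other.indices.size())
      return false;
    for (std::size_t i = 0; i < indices.size(); ++i)
      if (indices[i] != other.indices[i])
        return false;
    return true;
  }
};
```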
With that verification, we now have all the properties needed for the
kind of analysis required for exclusivity enforcement, but now
generalized for any memory analysis. This is suitable for an extremely
lightweight analysis with no side data structures. We currently have a
massive amount of ad-hoc memory analysis throughout SIL, which is
incredibly unmaintainable, bug-prone, and not performance-robust. We
can begin taking advantage of this verifiably complete model to solve
that problem.
The properties this gives us are:
Access analysis must be complete over memory operations: every memory
operation needs a recognizable valid access. An access can be
unidentified only to the extent that it is rooted in some non-address
type and we can prove that it is at least *not* part of an access to a
nominal class or global property. Pointer provenance is also required
for future IRGen-level bitfield optimizations.
Access analysis must be complete over address users: for an identified
object root all memory accesses including subobjects must be
discoverable.
Access analysis must be symmetric: use-def and def-use analysis must
be consistent.
AccessPath is merely a wrapper around the existing accessed-storage
utilities and IndexTrieNode. Existing passes already use this approach
very successfully, but in an ad-hoc way. With a general utility we
can:
- update passes to use this approach to identify memory access,
reducing the space and time complexity of those algorithms.
- implement an inexpensive on-the-fly, debug mode address lifetime analysis
- implement a lightweight debug mode alias analysis
- ultimately improve the power, efficiency, and maintainability of
full alias analysis
- make our type-based alias analysis sensitive to the access path
...and avoid reallocation.
This is immediately necessary for LICM, in addition to its current
uses. I suspect this could be used by many passes that work with
addresses. RLE/DSE should absolutely migrate to it.