Added new C++-to-Swift callback for isDeinitBarrier.
Also pass it a CalleeAnalysis so that it can depend on function effects.
For now, the argument is ignored, and all callers just pass nullptr.
Promoted the mayAccessPointer component predicate of isDeinitBarrier,
which needs to remain in C++, to API. That predicate will also
depend on function effects. For that reason, it too is now passed a
BasicCalleeAnalysis and is moved into SILOptimizer.
Also added more conservative versions of isDeinitBarrier and
maySynchronize which never consider side effects.
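A rough sketch of the resulting shape; the exact signatures here are assumptions, not copied from the change:

bool isDeinitBarrier(SILInstruction *inst, BasicCalleeAnalysis *bca);
bool mayAccessPointer(SILInstruction *inst, BasicCalleeAnalysis *bca);
// Conservative variants that never consult side effects (names assumed):
bool mayBeDeinitBarrierNotConsideringSideEffects(SILInstruction *inst);
bool maySynchronizeNotConsideringSideEffects(SILInstruction *inst);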
In theory, the analysis invalidation notifications should ensure that every function has a CallerAnalysis::FunctionInfo in `funcInfos`.
But it's quite possible that we are missing some of those notifications.
We got some non-reproducible crash reports caused by missing function infos in CallerAnalysis.
With this change, the analysis accepts missing function infos and handles that case correctly.
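In sketch form (names hypothetical), the lookup now tolerates a missing entry instead of assuming it exists:

// Return nullptr instead of asserting; callers treat a missing info
// conservatively.
const FunctionInfo *getFunctionInfo(SILFunction *f) const {
  auto iter = funcInfos.find(f);
  return iter == funcInfos.end() ? nullptr : &iter->second;
}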
In the long term we should replace CallerAnalysis with FunctionUses, anyway.
rdar://99653954
We had two notions of canonical type: one is the structural property
of not containing sugared types; the other is the property of not
containing reducible type parameters with respect to a generic
signature.
Rename the second one to a 'reduced type'.
IterableBackwardReachability just requires an iterable list of gens.
ShrinkBorrowScope, LexicalDestroyHoisting, and SSADestroyHoisting all
need to be able to check quickly whether a given instruction is scope
ending. Use a SmallSetVector rather than a SmallVector for the gens in
all three.
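The trade-off in sketch form (names hypothetical): a SmallSetVector keeps the deterministic iteration order that the data flow needs while adding a constant-time membership test:

llvm::SmallSetVector<SILInstruction *, 4> gens;
gens.insert(gen);                       // deduplicates automatically
bool isScopeEnding = gens.count(inst);  // fast "is this a gen?" check
for (SILInstruction *g : gens)
  visit(g);                             // iteration order is preserved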
The new utility finds access scopes which are barriers by finding access
scopes which themselves contain barriers. This is necessary to (1)
allow hoisting through access scopes when possible (i.e. not simply
treating all end_access instructions as barriers) and (2) avoid hoisting
into access scopes that contain barriers, which would introduce
exclusivity violations.
The new optimistic, iterative backward reachability is optimized to do
as little work as possible. Only blocks for which not every successor
is a kill participate in the worklist at all. The blocks within that
discovered set are visited via a worklist that tracks blocks which have
been found to be unreachable at begin and whose unreachable-at-begin
state must be propagated into the unreachable-at-end state of their
predecessors.
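A sketch of that propagation step, with all names hypothetical and the meet over successors elided:

while (!worklist.empty()) {
  SILBasicBlock *block = worklist.pop_back_val();
  // block is unreachable at begin; propagate into its predecessors.
  for (SILBasicBlock *pred : block->getPredecessorBlocks()) {
    if (!markUnreachableAtEnd(pred))
      continue;  // nothing new to propagate
    if (!blockContainsKill(pred) && markUnreachableAtBegin(pred))
      worklist.push_back(pred);
  }
}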
rdar://92545900
* C++: add a function `getDestructors(SILType type, bool isExactType)`: if the type is a final class or `isExactType` is true, then return the one and only destructor of that class.
* Swift: add `getDestructor(ofExactType type: Type)` and `getIncompleteCallees`
* Swift: remove `getDestructor` from the PassContext. The API of the `calleeAnalysis` can be used instead.
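A hedged usage sketch of the C++ entry point; the return type is assumed to be an iterable list of callees:

// For a final class (or with isExactType == true), this visits exactly
// one function: the class's destructor.
for (SILFunction *destructor : calleeAnalysis->getDestructors(type, /*isExactType=*/ false))
  visit(destructor);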
Handle recursive non-escaping local functions.
Previously, it was thought that recursion would force a closure to be
escaping. This is not necessarily true.
Update AccessEnforcementSelection to conservatively handle closure cycles.
Fixes rdar://88726092 (Compiler hangs when building)
Previously, Reachability assumed that phis were not barriers. Here,
handling for barrier phis is added. To that end, a new delegate
callback `checkReachableBarrier(PhiValue)` is added. Before marking the
beginning of a block (or any of its predecessors) as reached, check
whether each argument that is a phi is a barrier. If any is, then
reachability is done.
Implemented the new method in SSADestroyHoisting by splitting apart the
classification of an instruction from the work to do in response to
visiting an instruction. Then, when visiting a PhiValue, just check
whether any of the predecessors' terminators are classified as barriers.
That way, seeing that they're classified as barriers doesn't result in
noting down that those terminators have been reached (which indeed they
will not have been if any of the terminators from which values flow
into the phi are barriers).
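The SSADestroyHoisting side of the callback, in sketch form with simplified, partly hypothetical names:

bool checkReachableBarrier(PhiValue phi) {
  // Classify each incoming terminator without recording it as reached.
  for (SILBasicBlock *pred : phi.phiBlock->getPredecessorBlocks()) {
    if (classifyInstruction(pred->getTerminator()) == Classification::Barrier)
      return true;  // the phi itself acts as a barrier
  }
  return false;
}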
Pessimistic, non-iterative data flow for analyzing backward reachability
from a set of last uses to their dominating def or nearest barrier.
Meet: ReachableEnd(block) = intersection(ReachableBegin(successor), for each successor of block)
Intended for frequently called utilities where minimizing the cost of
data flow is more important than analyzing reachability across
loops. Expected to visit very few blocks because barriers often occur
close to a last use.
Note: this does not require initializing bitsets for all blocks in the
function for each SSA value being analyzed.
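The meet in sketch form (names hypothetical):

// Pessimistic intersection: a block's end is reachable only if the
// begin of every successor is reachable.
bool reachableAtEnd = true;
for (SILBasicBlock *succ : block->getSuccessorBlocks())
  reachableAtEnd &= isReachableAtBegin(succ);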
Track in-use iterators and update them both when instructions are
deleted and when they are added.
Safe iteration in the presence of arbitrary changes now looks like
this:
// The range's iterators are updated as `modify` deletes or creates
// instructions, so iteration remains safe:
for (SILInstruction *inst : deleter.updatingRange(&bb)) {
  modify(inst);
}
It's not needed anymore with delayed instruction deletion.
It was used for two purposes:
1. For analyses, which cache instructions, to avoid dangling instruction pointers.
2. For passes, which maintain worklists of instructions, to remove a deleted instruction from the worklist. This is now done by checking SILInstruction::isDeleted() (sketched below).
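The worklist pattern in sketch form (process() is a stand-in for per-pass work):

while (!worklist.empty()) {
  SILInstruction *inst = worklist.pop_back_val();
  // Deletion is delayed, so the pointer is still valid to query.
  if (inst->isDeleted())
    continue;
  process(inst);
}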
Instead of caching alias results globally for the module, make AliasAnalysis a FunctionAnalysisBase which caches the alias results per function.
Why?
* So far the result caches could only grow. They were reset when they reached a certain size. This was not ideal. Now, they are invalidated whenever the function changes.
* It was not possible to actually invalidate an alias analysis result. This is required, for example in TempRValueOpt and TempLValueOpt (so far it was done manually with invalidateInstruction).
* Type based alias analysis results were also cached for the whole module, while it is actually dependent on the function, because it depends on the function's resilience expansion. This was a potential bug.
I also added a new PassManager API to directly get a function-based analysis:
getAnalysis(SILFunction *f)
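Usage in sketch form, assuming the usual templated accessor:

// Obtain the per-function alias analysis inside a function pass.
AliasAnalysis *AA = PM->getAnalysis<AliasAnalysis>(getFunction());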
The second change of this commit is the removal of the instruction-index indirection for the cache keys. Now the cache keys directly work on instruction pointers instead of instruction indices. This reduces the number of hash table lookups for a cache lookup from 3 to 1.
This indirection was needed to avoid dangling instruction pointers in the cache keys. But this is not needed anymore, because of the new delayed instruction deletion mechanism.
If the "regular" alias analysis thinks that an instruction may write to an address, check if the instruction is in an immutable scope of V.
That means that even if we don't know anything about the instruction (e.g. a call to an unknown function), we can be sure that it cannot write to the address.
An immutable scope is for example a read-only begin_access/end_access scope.
Another example is a borrow scope of an immutable copy-on-write buffer, for example:
%b = begin_borrow %array_buffer
%addr = ref_element_addr [immutable] %b : $BufferType, #BufferType.someField
// within the borrow scope, nothing may write to %addr
Deinitializer side effects severely handicap the connection-graph
based EscapeAnalysis. Because it's not flow-sensitive, this approach
falls apart whenever an object is released in the current function,
which makes it seem to have escaped everywhere (it generally doesn't
matter if it doesn't escape until the release point, but the analysis
can't discern that).
This can be slightly mitigated by realizing that releasing an object
can only cause things it points to to escape if the object itself has
references or pointers.
Fix: Don't create an escaping content node when releasing an object
that can't contain any references or pointers. The previous commit,
"Fix EscapeAnalysis::mayReleaseContent", would defeat release hoisting
in important cases without doing anything else. Adding this extra
precision to the connection graph avoids some regressions.
The previous implementation made extremely subtle and specific
assumptions about how the API is used, which don't apply
everywhere. It was trying very hard to avoid regressing performance
relative to an even older implementation that didn't even try to consider deinitializer side effects.
The aggressive logic was based on the idea that a release must have a
corresponding retain somewhere in the same function and we don't care
if the last release happens early if there are no more aliasing
uses. All the unit tests I wrote previously were based on release
hoisting, which happens to work given the way the API is used.
But this logic is incorrect for retain sinking. In that case sinking
past an "unrelated" release could cause the object to be freed
early. See test/SILOptimizer/arc_crash.swift.
With SemanticARC and other SIL improvements being made, I'm afraid
bugs like this will begin to surface.
To fix it, just remove the subtle logic to leave behind a simple and
sound EscapeAnalysis API. To do better, we will need to rewrite the
AliasAnalysis logic for release side effects, which is currently
a tangled web. In the meantime, SemanticARC can handle many cases without EscapeAnalysis.
Fixes rdar://74469299 (ARC miscompile:
EscapeAnalysis::mayReleaseContent; potential use-after-free)
While fixing this, add support for address-type queries too:
Fixes rdar://74360041 (Assertion failed:
(!releasedReference->getType().isAddress() && "an address is never a
reference"), function mayReleaseContent)
My goal was to reduce the size of SILLocation. It now consists only of a storage union (basically a pointer) and a bitfield containing the Kind, StorageKind and flags. By far, most locations are only single pointers to an AST node. For the few cases where more data needs to be stored, this data is allocated separately with the SILModule's bump pointer allocator.
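The layout in sketch form, with all member names hypothetical:

union Storage {
  void *astNode;       // the common case: a single AST node pointer
  void *extendedInfo;  // rare cases: allocated separately with the
                       // SILModule's bump pointer allocator
};
struct LocationSketch {
  Storage storage;
  unsigned kind : 4;         // Kind
  unsigned storageKind : 2;  // StorageKind
  unsigned flags : 2;        // flags
};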
While working on this, I couldn't resist doing a major refactoring to simplify the code:
* removed unused stuff
* The term "DebugLoc" was used for 3 completely different things:
- for `struct SILLocation::DebugLoc` -> renamed it to `FilePosition`
- for `hasDebugLoc()`/`getDebugSourceLoc()` -> renamed it to `hasASTNodeForDebugging()`/`getSourceLocForDebugging()`
- for `class SILDebugLocation` -> kept it as it is (though, `SILScopedLocation` would be a better name, IMO)
* made SILLocation more "functional", i.e. replaced some setters with corresponding constructors
* replaced the hand-written bitfield `KindData` with C bitfields
* updated and improved comments
This removes the ambiguity when casting from a SingleValueInstruction to SILNode, which makes the code simpler. E.g. the "isRepresentativeSILNode" logic is not needed anymore.
Also, it reduces the size of the most used instruction class - SingleValueInstruction - by one pointer.
Conceptually, SILInstruction is still a SILNode. But implementation-wise SILNode is not a base class of SILInstruction anymore.
Only the two sub-classes of SILInstruction - SingleValueInstruction and NonSingleValueInstruction - inherit from SILNode. SingleValueInstruction's SILNode is embedded into a ValueBase and its relative offset in the class is the same as in NonSingleValueInstruction (see SILNodeOffsetChecker).
This makes it possible to cast from a SILInstruction to a SILNode without knowing which SILInstruction sub-class it is.
Casting to SILNode cannot be done implicitly, but only with an LLVM `cast` or with SILInstruction::asSILNode(). But this is a rare case anyway.
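For example, assuming an instruction pointer `inst`:

SILNode *node = inst->asSILNode();    // explicit helper
SILNode *same = cast<SILNode>(inst);  // or an LLVM cast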
Importantly, this also lets us use the analysis framework to validate that we
properly invalidate DeadEndBlocks, preventing bugs.
I did not thread this all over the compiler. Instead, for now, I used it only in
SemanticARCOpts to add some coverage without threading it into too many
places.
I am trying to be more careful about this rather than letting someone make this
mistake in the future. I also added some comments and a DeadEndBlocks instance
to TempRValueElimination so that it gets full simplifications in OSSA.
Currently all of these places in the code base perform simplifyInstruction and
then a replaceAllSimplifiedUsesAndErase(...). This is a bad pattern since:
1. simplifyInstruction assumes its result will be passed to
replaceAllSimplifiedUsesAndErase. So by leaving these as separate steps, we
allow users to pair them incorrectly.
2. I am going to implement in a subsequent commit a utility that lifetime
extends interior pointer bases when replacing an address with an interior
pointer derived address. To do this efficiently, I want to reuse state I
compute during simplifyInstruction during the actual RAUW, meaning that if the
two operations are split, that is difficult without extending the API. So by
removing this pattern, I can make the transform and eliminate mistakes at the
same time (see the sketch below).
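A sketch of the fused pattern; the name of the combined entry point is an assumption:

// Before: two calls that had to be paired correctly.
//   SILValue result = simplifyInstruction(inst);
//   replaceAllSimplifiedUsesAndErase(inst, result, callbacks);
// After: one call that simplifies, replaces all uses, and erases.
simplifyAndReplaceAllSimplifiedUsesAndErase(inst, callbacks);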
Access scopes for enforcing exclusivity are currently the only
exception to our ability to canonicalize OSSA lifetime purely based on
the SSA value's known uses. This is because access scopes have
semantics relative to object deinitializers.
In general, deinitializers are asynchronous with respect to code that
is unrelated to the object's uses. Ignoring exclusivity, the optimizer
may always destroy objects as early as it wants, as long as the object
won't be used again. The optimizer may also extend the lifetime
(although in the future this lifetime extension should be limited by
"synchronization points").
The optimizer's freedom is however limited by exclusivity
enforcement. Optimization may never introduce new exclusivity
violations. Destroying an object within an access scope is an
exclusivity violation if the deinitializer accesses the same variable.
To handle this, OSSA canonicalization must detect access scopes that
overlap with the end of the pruned extended lifetime. Essentially:
%def
begin_access // access scope unrelated to def
use %def // pruned liveness ends here
end_access
destroy %def
Support for access scopes composes cleanly with the existing algorithm
without adding significant cost in the usual case. Overlapping access
scopes are unusual. A single CFG walk within the original extended
lifetime is normally sufficient. Only the blocks that are not already
LiveOut in the pruned liveness need to be visited. During this walk,
local overlapping accesses are detected by scanning for end_access
instructions after the last use point. Global overlapping accesses are
detected by checking NonLocalAccessBlockAnalysis. This avoids scanning
instructions in the common case. NonLocalAccessBlockAnalysis is a
trivial analysis that caches the rare occurrence of nonlocal access
scopes. The analysis itself is a single linear scan over the
instruction stream. This analysis can be preserved across most
transformations and I expect it to be used to speed up other
optimizations related to access markers.
When an overlapping access is detected, pruned liveness is simply
extended to include the end_access as a new use point. Extending the
lifetime is iterative, but with each iteration, blocks that are now
marked LiveOut no longer need to be visited. Furthermore, interleaved
access scopes are not expected to happen in practice.
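The extension step in sketch form (findOverlappingAccessEnd is hypothetical):

// Treat the end of an overlapping scope as an additional,
// non-lifetime-ending use and iterate; blocks that are now LiveOut
// are skipped on the next walk.
if (EndAccessInst *endAccess = findOverlappingAccessEnd(block, lastUse)) {
  liveness.updateForUse(endAccess, /*lifetimeEnding=*/ false);
  changed = true;
}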
This is a low-level API already being used in multiple places besides
InstSimplify (e.g., Utils/OwnershipOptUtils), so it makes sense to move it into
InstOptUtil.
When a non-value instruction (e.g. a destroy_addr) was deleted, the corresponding cache entry in MemoryBehaviorCache was not invalidated.
If a new instruction was allocated at the same memory location, the old - and invalid - cache entry was re-used.
This bug triggered a SIL memory lifetime failure in TempRValueElimination.
https://bugs.swift.org/browse/SR-13985
rdar://problem/72614608
This change also fixes another problem (which I found by inspection): Individual result values of MultipleValueInstructions were not invalidated correctly.
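In sketch form (cache names hypothetical), the corrected invalidation also erases entries keyed on each individual result:

void invalidateInstruction(SILInstruction *inst) {
  memoryBehaviorCache.invalidate(inst);
  for (SILValue result : inst->getResults())
    aliasCache.invalidate(result);  // covers individual results of
                                    // MultipleValueInstructions
}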
It's important that fundamental APIs don't lie to their users.
Make it clear that this API always returns true for deinitialization,
even if we could, for example, analyze the destructor and determine that
there aren't any actual writes!