NFC
Not only _is_ this function not used anymore, it also _should_ not be used.
Optimizations should not make decisions based on whether there are any dead-end blocks in a function.
Doing so could lead to situations where, e.g., an optimization can be done before inlining but not after inlining a (potentially small) function that has a dead-end block.
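For illustration (a hypothetical sketch, not from the change itself): a call to a function ending in `fatalError` introduces a dead-end (no-return) block once inlined, so a heuristic keyed on dead-end blocks would behave differently across that boundary.

func check(_ ok: Bool) {
    if !ok {
        fatalError("invariant violated")   // never returns: a dead-end block
    }
}

func caller(_ ok: Bool) {
    // Before inlining, `caller` contains no dead-end blocks; after `check`
    // is inlined, it does.
    check(ok)
}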
The main change is to detect infinite recursive calls under invariant conditions. For example:
func f() {
    if #available(macOS 10.4.4, *) {
        f()
    }
}
or invariant conditions due to forwarded arguments:
func f(_ x: Int) {
    if x > 0 {
        f(x)
    }
}
Also, improve the warning message. Instead of giving a warning at the function location
    warning: all paths through this function will call itself
give a warning at the call location:
    warning: function call causes an infinite recursion
Especially in the case of multiple recursive calls, this makes it easier to locate the problem.
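For example (an illustrative sketch, not taken from the patch's tests), with more than one recursive call site the per-call warning points directly at the call that recurses forever:

func g(_ x: Int) {
    if x > 0 {
        g(x - 1)   // argument changes; not necessarily flagged
    } else {
        g(x)       // warning: function call causes an infinite recursion
    }
}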
https://bugs.swift.org/browse/SR-11842
rdar://57460599
Additional handling of copy_value/destroy_value/load[copy]/
begin_borrow/end_borrow is needed to support OSSA.
TODO: Support handling of 2-d arrays in the pass for OSSA.
Currently, hoisting of loads and borrows is not supported in OSSA.
It's important that fundamental APIs don't lie to their users.
Make it clear that this API always returns true for deinitialization,
even if we could, for example, analyze the destructor and determine that
there aren't any actual writes!
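To illustrate the point (hypothetical example, not from the change):

final class Tracker {
    // A deinit that a sufficiently smart analysis could prove performs no
    // writes. The API still conservatively answers "true" for its
    // deinitialization.
    deinit {}
}

func makeAndDrop() {
    let t = Tracker()
    // The destruction of `t` at the end of its lifetime is modeled as a
    // potential write to memory, even though this particular deinit does nothing.
    _ = t
}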
The `differentiable_function_extract` instruction has an optional explicit
extractee type. This is currently used by TypeSubstCloner and the
LoadableByAddress transform to rewrite `differentiable_function_extract`
instructions while preserving `@differentiable` function type invariants.
There is an assertion that `differentiable_function_extract` instructions do
not have explicit extractee types outside of canonical/lowered SIL. However,
this does not handle the following SIL deserialization case: a function
containing a `differentiable_function_extract` instruction with an explicit type
is deserialized into a raw SIL module (which happens when optimizations are
enabled).
Removing the assertion unblocks this encountered use case.
A more robust longer-term solution may be to change SIL `@differentiable`
function types to explicitly store component original/JVP/VJP function types.
Also fix `differentiable_function_extract` extractee type serialization.
Resolves SR-14004.
Now that OperandOwnership determines the operand constraints, it
doesn't make sense to distinguish between Borrow and NestedBorrow at
this level. We want these uses to automatically convert between the
nested/non-nested state as the operand's ownership changes. The use
does not need to impose any constraint on the ownership of the
incoming value.
For algorithms that need to distinguish nested borrows, it's still
trivial to do so.
A NonUse operand does not use the value itself, so it ignores
ownership and does not require liveness. This is for operands that
represent dependence on a type but are not actually passed the value
of that type (e.g. they may refer to an open_existential). This could be
used for other dependence-only operands in the future.
A TrivialUse operand has undefined ownership semantics aside from
requiring liveness. Therefore it is only legal to pass the use a value
with ownership None (a trivial value). Contrast this with things like
InstantaneousUse or BitwiseEscape, which just don't care about
ownership (i.e. they have no ownership semantics).
All of the explicitly listed operations in this category require
trivially typed operands. So the meaning is obvious to anyone
adding SIL operations and updating OperandOwnership.cpp, without
needing to decipher the value ownership kinds.
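As a Swift-level intuition (illustrative; not part of the change): trivially typed values lower to SIL values with ownership None, whereas class references do not.

struct Pair {                    // trivially typed: lowers to OwnershipKind::None
    var x = 0
    var y = 0
}

final class Box {                // nontrivial: reference counted
    var value = 0
}

func demo() {
    let p = Pair()               // trivial value; no lifetime to enforce
    let b = Box()                // owned value; its uses must satisfy OSSA rules
    _ = (p, b)
}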
Clarify which uses are allowed to take Unowned values. Add enforcement
to ensure that Unowned values are not passed to other uses.
Operations that can take unowned values are:
- copy_value
- apply/return @unowned argument
- aggregates (struct, tuple, destructure, phi)
- forwarding operations that are arbitrary type casts
Unowned values are currently borrowed within ObjC deinitializers
materialized by the Swift compiler. This will be banned as soon as
SILGen is fixed.
Migrating to this classification was made easy by the recent rewrite
of the OSSA constraint model. It's also consistent with
instruction-level abstractions for working with different kinds of
OperandOwnership that are being designed.
This classification vastly simplifies OSSA passes that rewrite OSSA
live ranges, making it straightforward to reason about completeness
and correctness. It will allow a simple utility to canonicalize OSSA
live ranges on-the-fly.
This avoids the need for OSSA-based utilities and passes to hard-code
SIL opcodes. This will allow several of those unmaintainable pieces of
code to be replaced with a trivial OperandOwnership check.
It's extremely important for SIL maintainers to see a list of all SIL
opcodes associated with a simple OSSA classification and set of
well-specified rules for each opcode class, without needing to guess
or reverse-engineer the meaning from the implementation. This
classification does that while eliminating a pile of unreadable
macros.
This classification system is the model that CopyPropagation was
initially designed to use. Now, rather than relying on a separate
pass, a simple, lightweight utility will canonicalize OSSA
live ranges.
The major problem with writing optimizations based on OperandOwnership
is that some operations don't follow structural OSSA requirements,
such as project_box and unchecked_ownership_conversion. Those are
classified as PointerEscape, which prevents the compiler from reasoning
about, or rewriting, the OSSA live range.
Functional Changes:
As a side effect, this corrects many operand constraints that should
in fact require trivial operand values.
This is a generic API that, when ownership is enabled, allows one to replace all
uses of a value with a value of differing ownership by transforming/lifetime
extending as appropriate.
This API supports all pairings of ownership /except/ replacing a value that has
OwnershipKind::None with a value that does not have OwnershipKind::None. This is a more
complex optimization that we do not support today. As a result, we include on
our state struct a helper routine that callers can use to know whether the two values
that they want to process can be handled by the algorithm.
My motivation is to use this to update InstSimplify and SILCombiner in a
less bug-prone way, rather than just turning things off.
Since this transformation inserts ownership instructions, I have made sure
to test this API in two ways:
1. With Mandatory Combiner alone (to make sure it works period).
2. With Mandatory Combiner + Semantic ARC Opts to make sure that we can
eliminate the extra ownership instructions it inserts.
As one can see from the tests, the optimizer today is able to handle all of
these transforms except one conditional case where I need to eliminate a dead
phi argument. I have a separate branch that handles that today, but it exposed unsafe
behavior in ClosureLifetimeFixup that I need to fix before I can land
it. I don't want that to hold up this PR, since I think the current low-level ARC
optimizer may be able to help me here: this is a simple transform that it performs
all of the time.
Bridging an async Swift method back to an ObjC completion-handler-based API requires
that the ObjC thunk spawn a task on which to execute the Swift async API and pass
its results back on to the completion handler.
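A rough sketch of the shape of this bridging (names are illustrative; the generated Objective-C signature is shown approximately):

import Foundation

@objc final class Downloader: NSObject {
    // A Swift async method exported to Objective-C.
    @objc func fetchGreeting(for name: String) async -> String {
        "Hello, \(name)"
    }
}

// Objective-C clients see a completion-handler entry point, roughly:
//
//   - (void)fetchGreetingFor:(NSString *)name
//          completionHandler:(void (^)(NSString *))completionHandler;
//
// The generated thunk spawns a task, awaits fetchGreeting(for:), and passes
// the result on to completionHandler.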
If the specialized function has re-abstracted (= converted from indirect to direct) resilient argument or return types, use an alternative mangling: "TB" instead of "Tg".
Resilient parameters/returns can be converted from indirect to direct if the specialization is created within the type's resilience domain, i.e. in its module (where the type is loadable).
In this case we need to generate a different mangled name for the specialized function to distinguish it from specializations in other modules, which cannot re-abstract this resilient type.
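An illustrative sketch (module and type names are made up) of when the "TB" mangling applies:

// Module Lib, built with library evolution enabled:
public struct Payload {          // resilient to clients, loadable inside Lib
    public var a = 0
    public var b = 0
}

@inline(never)
func consume<T>(_ value: T) {
    print(value)
}

public func run(_ p: Payload) {
    // A specialization of consume<Payload> created here, inside Lib, may take
    // Payload directly and is mangled with "TB"; the same specialization
    // created in a client module must keep passing Payload indirectly ("Tg").
    consume(p)
}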
This fixes a miscompile resulting from ODR-linking specializations from different modules, which in fact have different function signatures.
https://bugs.swift.org/browse/SR-13900
rdar://71914016
Oftentimes when one is working with ownership, one has a value V and a set of
use points Uses at which one wants V's lifetime to end, but those Uses together
(while not reachable from each other) only partially post-dominate
V. JointPostDominanceSetComputer is a struct that implements a general solution
to that problem at the block level. The struct itself is just the state
that the computation uses, so that a pass can clear the state (allowing us to
avoid re-mallocing if any small data structures grew large).
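A rough Swift-level sketch of the situation (illustrative only; the utility itself operates on SIL basic blocks):

final class C {}

func use(_ c: C) {}

func f(_ cond: Bool) {
    let v = C()        // the value V
    if cond {
        use(v)         // a chosen use point for V on this path
    }
    // On the else path there is no use point at all, so the chosen use points
    // only partially post-dominate V's definition; the computer reports the
    // missing successor block(s) so the caller can complete the lifetime there.
}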
To get into the semantics, the API
JointPostDominanceSetComputer::findJointPostDominatingSet() takes in a
"dominating block" and a "dominated block set" and returns two things to the user:
1. A set of blocks that together with the "dominated block set"
jointly-postdominate the "dominating block".
2. A list of blocks in the "dominated block set" that were reachable from any of
the other "dominated blocks", including itself in the case of a block in aloop.
Conceptually we are performing a backwards walk up the CFG towards the
"dominating block" starting at each block in the "dominated block set". As we
go, we track successor blocks and report any successor blocks that we do not hit
during our traversal as result blocks; these are passed to the result callback.
Now, what does this actually mean?
1. All blocks in the "dominated block set" that are at the same loop nest level
as our dominating block will always be part of the final post-dominating block
set.
2. All "lifetime ending" blocks that are at a different loop nest level than our
dominating block are not going to be in our final result set. Let
LifetimeEndingBlock be such a block. Then note that our assumed condition
implies that there must be a sub-loop, SubLoop, at the same level of the
loop-nest as the dominating block that contains LifetimeEndingBlock. The
algorithm will yield to the caller the exiting blocks of that loop. It will also
flag the blocks that were found to be at a different use level, so the caller can
introduce a copy at that point if needed.
NOTE: Part of the reason why I am writing this rather than using the linear
lifetime checker (LLChecker) is that LLChecker is being used in too many places
because it is convenient. Its original true use was for emitting diagnostics, and
that shows through in the implementation. I don't want to add more
contortions to that code, so as I find new use cases where I could either
write something new or add contortions to the LLChecker, I am doing the former.