Otherwise, one is always forced to use ValueLifetimeAnalysis::Frontier, a
SmallVector<SILInstruction *, 4>. This size may not be appropriate for every
problem, so it makes sense to keep Frontier as a good rule of thumb, but to use
FrontierImpl on the actual API boundary to loosen the constraint if the user
wishes to do so.
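A minimal sketch of the intended shape, assuming FrontierImpl is the usual
SmallVectorImpl-style alias (the helper and sizes here are illustrative):
```
#include "llvm/ADT/SmallVector.h"

class SILInstruction; // opaque for this sketch; the real type lives in SIL headers

// Assumed aliases mirroring the description above.
using Frontier = llvm::SmallVector<SILInstruction *, 4>;      // good rule-of-thumb size
using FrontierImpl = llvm::SmallVectorImpl<SILInstruction *>; // size-agnostic base

// A utility on the API boundary takes FrontierImpl, so callers are not forced
// into the 4-element inline size.
static void collectFrontier(FrontierImpl &frontier) {
  // ... fill `frontier` with the computed frontier instructions ...
}

void example() {
  Frontier defaultSized;                                 // the common case
  llvm::SmallVector<SILInstruction *, 32> largeFrontier; // a caller expecting many entries
  collectFrontier(defaultSized);
  collectFrontier(largeFrontier); // both bind to FrontierImpl &
}
```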
Adds a new flag "-experimental-skip-all-function-bodies" that skips
typechecking and SIL generation for all function bodies (where
possible).
`didSet` functions are still typechecked and have SIL generated, since their
bodies must be checked for the `oldValue` parameter, but they are not
serialized.
Parsing will generally be skipped as well, but this isn't necessarily the
case, since other flags (e.g. "-verify-syntax-tree") may force delayed
parsing off.
* Redundant hop_to_executor elimination: if a hop_to_executor is dominated by another hop_to_executor with the same operand, it is eliminated:
hop_to_executor %a
... // no suspension points
hop_to_executor %a // can be eliminated
* Dead hop_to_executor elimination: if a hop_to_executor is not followed by any code that must run on its actor's executor, it is eliminated:
hop_to_executor %a
... // no instructions which require running on %a
return
rdar://problem/70304809
to check for improperly nested '@_semantics' functions.
Add a missing @_semantics("array.init") in ArraySlice found by the
diagnostic.
Distinguish between array.init and array.init.empty.
Categorize the types of semantic functions by how they affect the
inliner and pass pipeline, and centralize this logic in
PerformanceInlinerUtils. The ultimate goal is to prevent inlining of
"Fundamental" @_semantics calls and @_effects calls until the late
pipeline where we can safely discard semantics. However, that requires
significant pipeline changes.
In the meantime, this change prevents the situation from getting worse
and makes the intention clear. However, it has no significant effect
on the pass pipeline and inliner.
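As a hypothetical sketch of what such a categorization might look like (the
enum and function below are illustrative only, not the actual
PerformanceInlinerUtils API):
```
#include "llvm/ADT/StringRef.h"

// Hypothetical categories: how a semantics/effects annotation constrains inlining.
enum class SemanticCallKind {
  None,        // no annotation; inline normally
  Transient,   // semantics that may be discarded whenever convenient
  Fundamental, // @_semantics/@_effects the pass pipeline relies on; keep until late
};

// Illustrative classification only.
static SemanticCallKind classifySemantics(llvm::StringRef attr) {
  if (attr.empty())
    return SemanticCallKind::None;
  if (attr.startswith("array.")) // e.g. array.init, array.init.empty
    return SemanticCallKind::Fundamental;
  return SemanticCallKind::Transient;
}
```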
Add AccessedStorage::compute and computeInScope to mirror AccessPath.
Allow recovering the begin_access for Nested storage.
Adds AccessedStorage.visitRoots().
Things that have come up recently but are somewhat blocked on this:
- Moving AccessMarkerElimination down in the pipeline
- SemanticARCOpts correctness and improvements
- AliasAnalysis improvements
- LICM performance regressions
- RLE/DSE improvements
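For illustration, a sketch of how the new entry points might be used (the
header location and exact signatures are assumptions based on the description
above):
```
#include "swift/SIL/MemAccessUtils.h" // assumed home of AccessedStorage

using namespace swift;

// Identify the storage for the address operand of a memory operation.
// compute() walks all the way to the storage root; computeInScope() stops at
// an enclosing begin_access, so the Nested storage's begin_access can be
// recovered.
void identifyStorage(SILValue address) {
  AccessedStorage storage = AccessedStorage::compute(address);
  if (!storage)
    return; // no recognizable storage

  AccessedStorage inScope = AccessedStorage::computeInScope(address);
  if (inScope.getKind() == AccessedStorage::Nested) {
    // Per this change, the begin_access for Nested storage can be recovered here.
  }
}
```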
Begin to formalize the model for valid memory access in SIL. Ignoring
ownership, every access is a def-use chain in three parts:
object root -> formal access base -> memory operation address
AccessPath abstracts over this path and standardizes the identity of a
memory access throughout the optimizer. This abstraction is the basis
for a new AccessPathVerification.
With that verification, we now have all the properties we need for the
type of analysis required for exclusivity enforcement, but now
generalized for any memory analysis. This is suitable for an extremely
lightweight analysis with no side data structures. We currently have a
massive amount of ad-hoc memory analysis throughout SIL, which is
incredibly unmaintainable, bug-prone, and not performance-robust. We
can begin taking advantage of this verifiably complete model to solve
that problem.
The properties this gives us are:
Access analysis must be complete over memory operations: every memory
operation needs a recognizable valid access. An access can be
unidentified only to the extent that it is rooted in some non-address
type and we can prove that it is at least *not* part of an access to a
nominal class or global property. Pointer provenance is also required
for future IRGen-level bitfield optimizations.
Access analysis must be complete over address users: for an identified
object root all memory accesses including subobjects must be
discoverable.
Access analysis must be symmetric: use-def and def-use analysis must
be consistent.
AccessPath is merely a wrapper around the existing accessed-storage
utilities and IndexTrieNode. Existing passes already very successfully
use this approach, but in an ad-hoc way. With a general utility we
can:
- update passes to use this approach to identify memory access,
reducing the space and time complexity of those algorithms.
- implement an inexpensive on-the-fly, debug mode address lifetime analysis
- implement a lightweight debug mode alias analysis
- ultimately improve the power, efficiency, and maintainability of
full alias analysis
- make our type-based alias analysis sensitive to the access path
...and avoid reallocation.
This is immediately necessary for LICM, in addition to its current
uses. I suspect this could be used by many passes that work with
addresses. RLE/DSE should absolutely migrate to it.
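For illustration, a sketch of querying this standardized identity (the method
names compute, isValid, and contains are assumptions drawn from the
description, as is the header location):
```
#include "swift/SIL/MemAccessUtils.h" // assumed home of AccessPath

using namespace swift;

// Two memory operations are part of the same formal access if their paths
// share a base and one path is a prefix of the other:
//   object root -> formal access base -> memory operation address
bool isContainedAccess(SILValue parentAddr, SILValue childAddr) {
  AccessPath parent = AccessPath::compute(parentAddr);
  AccessPath child = AccessPath::compute(childAddr);
  if (!parent.isValid() || !child.isValid())
    return false; // unidentified access; callers must be conservative
  return parent.contains(child);
}
```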
This attribute allows defining a pre-specialized entry point for a
generic function in a library.
The following definition provides a pre-specialized entry point for
`genericFunc(_:)` for the parameter type `Int` that clients of the
library can call.
```
@_specialize(exported: true, where T == Int)
public func genericFunc<T>(_ t: T) { ... }
```
Pre-specializations of internal `@inlinable` functions are allowed.
```
@usableFromInline
internal struct GenericThing<T> {
  @_specialize(exported: true, where T == Int)
  @inlinable
  internal func genericMethod(_ t: T) {
  }
}
```
There is syntax to pre-specialize a method from a different module.
```
import ModuleDefiningGenericFunc
@_specialize(exported: true, target: genericFunc(_:), where T == Double)
func prespecialize_genericFunc<T>(_ t: T) { fatalError("don't call") }
```
Specially marked extensions allow for pre-specialization of internal
methods across module boundaries (respecting `@inlinable` and
`@usableFromInline`).
```
import ModuleDefiningGenericThing
public struct Something {}
@_specializeExtension
extension GenericThing {
  @_specialize(exported: true, target: genericMethod(_:), where T == Something)
  func prespecialize_genericMethod(_ t: T) { fatalError("don't call") }
}
```
rdar://64993425
1. Do a better alias analysis for "function-local" objects, like alloc_stack and inout parameters
2. Fully support try_apply and begin/end/abort_apply
So far we have relied entirely on escape analysis. But escape analysis has some shortcomings with SIL address types.
Therefore, handle two common cases, alloc_stack and inout parameters, with alias analysis.
This gives better results.
The biggest change here is a quick check of whether the address escapes via an address_to_pointer instruction.
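A minimal sketch of that quick check, with a made-up helper name (the real
logic lives in AliasAnalysis and also has to look through address projections):
```
#include "swift/SIL/SILInstruction.h"

using namespace swift;

// Conservatively answer whether a function-local address (alloc_stack result
// or inout parameter) may escape by being converted to a raw pointer. Only
// direct uses are inspected in this sketch.
static bool mayEscapeViaAddressToPointer(SILValue address) {
  for (Operand *use : address->getUses()) {
    if (isa<AddressToPointerInst>(use->getUser()))
      return true;
  }
  return false;
}
```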
* Fix another use-after-free in SILCombine
swift::endLifetimeAtFrontier also needs to use
swift::emitDestroyOperation and delete instructions via callbacks that
can correctly remove them from the worklist that SILCombine maintains
* Add test for use-after-free in SILCombine
SILCombine maintains a worklist of instructions, and deleting instructions is valid only via callbacks that also remove them from the worklist. It calls swift::tryDeleteDeadClosure, which in turn calls SILBuilder APIs like emitStrongRelease/emitReleaseValue/emitDestroyValue; these can delete instructions via SILInstruction::eraseFromParent, leaving behind a stale entry in SILCombine's worklist and causing a crash.
This PR adds swift::emitDestroyOperation, which correctly calls the appropriate InstModCallbacks on added/removed instructions. It is based on swift::releasePartialApplyCapturedArg, which was already handling creation of destroys with callbacks correctly.
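A sketch of the intended wiring (the exact InstModCallbacks constructor and
the emitDestroyOperation signature are assumptions here; only the requirement
that deletions flow through the callbacks comes from the text above):
```
#include "swift/SIL/SILBuilder.h"
#include "swift/SILOptimizer/Utils/InstOptUtils.h" // assumed home of InstModCallbacks/emitDestroyOperation
#include "llvm/ADT/SmallPtrSet.h"

using namespace swift;

// A pass that keeps a worklist must route deletions through InstModCallbacks
// so that no stale instruction pointer survives in the worklist.
void destroyValueKeepingWorklist(SILBuilder &builder, SILLocation loc,
                                 SILValue value,
                                 llvm::SmallPtrSetImpl<SILInstruction *> &worklist) {
  InstModCallbacks callbacks(
      /*deleteInst=*/[&](SILInstruction *inst) {
        worklist.erase(inst);    // drop the stale worklist entry first
        inst->eraseFromParent(); // then actually delete the instruction
      },
      /*createdNewInst=*/[&](SILInstruction *inst) { worklist.insert(inst); });

  // Emits strong_release / release_value / destroy_value as appropriate and
  // reports every added or removed instruction through `callbacks`.
  emitDestroyOperation(builder, loc, value, callbacks);
}
```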
`get_async_continuation[_addr]` begins a suspend operation by accessing the continuation value that can resume
the task, which can then be used in a callback or event handler before executing `await_async_continuation` to
suspend the task.
A key concept in late ARC optimization is "RC Identity". In short, a result of
an instruction is rc-identical to an operand of the instruction if one can
safely move a retain (release) from before the instruction on the result to one
after on the operand without changing the program semantics. This creates a
simple model where one can work on equivalence classes of rc-identical values
(generally using a dominating definition as the representative) and thus
optimize/pair retains and releases.
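In the optimizer, this notion is queried through RCIdentityAnalysis; a minimal
sketch, assuming the usual getRCIdentityRoot entry point on
RCIdentityFunctionInfo:
```
#include "swift/SILOptimizer/Analysis/RCIdentityAnalysis.h" // assumed header path

using namespace swift;

// Collapse a value to its rc-identity root so that retains/releases on
// rc-identical values can be treated as one equivalence class, with the
// (dominating) root as the representative.
SILValue getRCRepresentative(RCIdentityFunctionInfo *rcfi, SILValue value) {
  // A retain (release) on `value` may be paired/moved as if it were on this root.
  return rcfi->getRCIdentityRoot(value);
}
```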
When preparing for late ARC optimization, the optimizer will normalize aggregate
ARC operations (retain_value, release_value) into singular strong_retain,
strong_release operations on leaf types of the aggregate that are
non-trivial. As an example, a retain_value on a KlassPair would be canonicalized
into two strong_retains, one for the lhs and one for the rhs. When this is done,
the optimizer generally just creates new struct_extracts at the point where the
retain is. In such a case, we may find that the debug_value for the underlying
type is actually on a re-formed aggregate whose underlying parts we are
retaining:
```
bb0(%0 : $Builtin.NativeObject):
strong_retain %0
%1 = struct $Array(%0 : $Builtin.NativeObject, ...)
debug_value %1 : $Array, ...
```
By looking through RC identical uses, we can handle a large subset of these
cases without much effort: ones where there is a single owning pointer, like
Array. To handle more complex cases, we would have to calculate an inverse
access path needed to get back to our value and somehow deal with all of the
complexity therein (I am sure we can do it; I just haven't thought through all
of the details).
The only interesting behavior this results in is that when we emit
diagnostics, we just use the name of the debug_value found on the rc-identical
transitive use, without a projection path. This is because the source location
associated with that debug_value is for a separate value that is rc-identical
to the actual value that we visited during our opt-remark traversal up the
def-use graph. Consider the example below, noting the comments that show in
the SIL itself what I attempted to explain above.
```
struct KlassPair {
var lhs: Klass
var rhs: Klass
}
struct StateWithOwningPointer {
var state: TrivialState
var owningPtr: Klass
}
sil @theFunction : $@convention(thin) () -> () {
bb0:
%0 = apply %getKlassPair() : $@convention(thin) () -> @owned KlassPair
// This debug_value's name can be combined...
debug_value %0 : $KlassPair, name "myPair"
// ... with the access path from the struct_extract here...
%1 = struct_extract %0 : $KlassPair, #KlassPair.lhs
// ... to emit a nice diagnostic that 'myPair.lhs' is being retained.
strong_retain %1 : $Klass
// In contrast, in the case below we rely on looking through rc-identity uses
// to find the debug_value. In this case, the source info associated with the
// debug_value (%3) is no longer associated with the underlying access path we
// have been tracking upwards (%1 is in our access path list). Instead, we
// know that the debug_value is rc-identical to whatever value we were
// originally tracking up (%1), and thus the correct identifier to use is the
// direct name of the identifier alone (without an access path), since that
// source identifier must be some value in the source that by itself is
// rc-identical to whatever is being manipulated. If we were to emit the
// access path here for an rc-identical use we would get
// "myAdditionalState.owningPtr", which is misleading since
// ArrayWrapperWithMoreState does not have a field named 'owningPtr'; its
// subfield array does. That being said, rc-identity means a retain_value on
// the value that the debug_value is attached to is equivalent to one on the
// access path value we found by walking up the def-use graph from our
// strong_retain's operand.
%0a = apply %getStateWithOwningPointer() : $@convention(thin) () -> @owned StateWithOwningPointer
%1 = struct_extract %0a : $StateWithOwningPointer, #StateWithOwningPointer.owningPtr
strong_retain %1 : $Klass
%2 = struct $Array(%0 : $Builtin.NativeObject, ...)
%3 = struct $ArrayWrapperWithMoreState(%2 : $Array, %moreState : MoreState)
debug_value %3 : $ArrayWrapperWithMoreState, name "myAdditionalState"
}
```
* Remove NewInsts from ARCSequenceOpts
* Remove more instances of InsertPts
* Address comments from #33504
* Make bottom-up loop traversal simpler. Use better APIs
* Update LoopRegion printer with more info
Add differentiation support for non-active `try_apply` SIL instructions.
Notable pullback generation changes:
* Original basic blocks are now visited in a different order: starting from
  the original basic block, all its predecessors are visited in breadth-first
  search order. This ensures that all successors of any block are visited
  before the block itself (sketched below).
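A sketch of that traversal over SILBasicBlocks (the helper name is made up;
the surrounding pullback-generation machinery is omitted):
```
#include "swift/SIL/SILBasicBlock.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include <deque>

using namespace swift;

// Visit blocks starting from `start`, expanding predecessors breadth-first,
// and return the visitation order.
llvm::SmallVector<SILBasicBlock *, 8> predecessorBFSOrder(SILBasicBlock *start) {
  llvm::SmallVector<SILBasicBlock *, 8> order;
  llvm::SmallPtrSet<SILBasicBlock *, 8> visited;
  std::deque<SILBasicBlock *> worklist = {start};
  visited.insert(start);
  while (!worklist.empty()) {
    SILBasicBlock *block = worklist.front();
    worklist.pop_front();
    order.push_back(block);
    for (SILBasicBlock *pred : block->getPredecessorBlocks())
      if (visited.insert(pred).second)
        worklist.push_back(pred);
  }
  return order;
}
```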
Resolves TF-433.
LLVM, as of 77e0e9e17daf0865620abcd41f692ab0642367c4, now builds with
-Wsuggest-override. Let's clean up the swift sources rather than disable
the warning locally.
TLDR: This fixes an ownership verifier assert caused by not placing end_borrows
along paths where the enum provably holds a trivial case. It only happens if
all non-trivial cases in a switch_enum are "dead end blocks" where the program
will end and we leak objects.
The Problem
-----------
The actual bug here only occurs in cases where we have a switch_enum on an enum
with mixed trivial and non-trivial cases and all of the non-trivial payloaded
cases are "dead end blocks". As an example, let's look at a simple switch_enum
over an optional where the .some case is a dead end block and we leak the Klass
object into program termination:
```
%0 = load [copy] %mem : $*Optional<Klass>
switch_enum %0 : $Optional<Klass>, case #Optional.some: bbDeadEnd, case #Optional.none: bbContinue
bbDeadEnd(%0a : @owned $Klass): // %0 is leaked into program end!
unreachable
bbContinue:
... // program continue.
```
In this case, if we were only looking at final destroying uses, we would pass a
def without any uses to the ValueLifetimeAnalysis, so we would not compute a
frontier at all and thus would not insert any end_borrows, yielding:
```
%0 = load_borrow %mem : $*Optional<Klass>
switch_enum %0 : $Optional<Klass>, case #Optional.some: bbDeadEnd, case #Optional.none: bbContinue
bbDeadEnd(%0a : @guaranteed $Klass): // %0 is leaked into program end and
// doesn't need an end_borrow!
unreachable
bbContinue:
... // program continue... we need an end_borrow here though!
```
This then trips the ownership verifier, since switch_enum is a transforming
terminator that acts like a forwarding instruction, implying we need an
end_borrow on the base value along all non-dead-end paths through the program.
Importantly, this is not actually a leak of a value or unsafe behavior, since
the only time that we enter unsafe territory is along paths where the enum was
actually trivial. So the load_borrow actually just loads the trivial enum
value.
The Fix
-------
In order to work around this, I realized that the right solution is to also
include the forwarding consuming uses (in this case the switch_enum use) when
determining the lifetime; doing so solves the problem.
That being said, after I made that change, I noticed that I needed to remove my
previous manner of computing the insertion point to use for arguments when
finding the lifetime using ValueLifetimeAnalysis. Previously, since I was using
only the destroying uses, I knew that the destroy_value could not be the first
instruction in the block of my argument, since I handled that case individually
before using the ValueLifetimeAnalysis. That invariant is no longer true, as can
be seen in the case above if %0 came from a SILArgument itself instead of a
load [copy] and we were converting that argument to be a guaranteed argument.
To fix this, I taught ValueLifetimeAnalysis how to handle defs from
arguments. The key thing I noticed while reading the code is that the analysis
generally only cared about the def's parent block. Beyond that, the def being
an instruction was only needed to determine whether a user is earlier in the
same block than the def instruction. Those concerns do not apply to a
SILArgument, which dominates all instructions in its block, so in this patch we
just skip those conditional checks when we have a SILArgument. The rest of the
code that uses the parent block is the same for both SILArgument and
SILInstruction.
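A sketch of driving the analysis from an argument def (the constructor
overload and the computeFrontier parameters are assumptions based on the
description above):
```
#include "swift/SIL/BasicBlockUtils.h"              // DeadEndBlocks
#include "swift/SILOptimizer/Utils/ValueLifetime.h" // assumed home of ValueLifetimeAnalysis
#include "llvm/ADT/ArrayRef.h"

using namespace swift;

// Compute the lifetime frontier for a value defined by a block argument,
// assuming the constructor now accepts a SILArgument as the def. end_borrows
// (or destroys) would then be inserted on the returned frontier.
bool frontierForArgument(SILArgument *arg,
                         llvm::ArrayRef<SILInstruction *> users,
                         DeadEndBlocks &deadEndBlocks,
                         ValueLifetimeAnalysis::Frontier &frontier) {
  ValueLifetimeAnalysis lifetime(arg, users);
  return lifetime.computeFrontier(frontier, ValueLifetimeAnalysis::DontModifyCFG,
                                  &deadEndBlocks);
}
```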
rdar://65244617
Specifically:
1. I made methods and variables camelCase.
2. I expanded out variable names (e.g.: bb -> block, predBB -> predBlocks, U -> wrappedUse).
3. I changed typedef -> using.
4. I changed a few C-style for loops into for-each loops using llvm::enumerate.
NOTE: I left the parts needed for syncing to LLVM in the old style since LLVM
needs these to exist for CRTP to work correctly for the SILSSAUpdater.