This removes the ambiguity when casting from a SingleValueInstruction to SILNode, which makes the code simpler. For example, the "isRepresentativeSILNode" logic is not needed anymore.
It also reduces the size of the most used instruction class, SingleValueInstruction, by one pointer.
Conceptually, SILInstruction is still a SILNode, but implementation-wise SILNode is no longer a base class of SILInstruction.
Only the two sub-classes of SILInstruction, SingleValueInstruction and NonSingleValueInstruction, inherit from SILNode. SingleValueInstruction's SILNode is embedded in its ValueBase, and its relative offset in the class is the same as in NonSingleValueInstruction (see SILNodeOffsetChecker).
This makes it possible to cast from a SILInstruction to a SILNode without knowing which SILInstruction sub-class it is.
Casting to SILNode cannot be done implicitly, only with an LLVM `cast` or with SILInstruction::asSILNode(). But such casts are rarely needed anyway.
Remove two frontend flags that had no effect:
-enable-subst-sil-function-types-for-function-values
-enable-large-loadable-types
These defaulted to on, and there were no corresponding flags for
turning them off, so passing the flags did nothing.
For now, simply run the pass before SemanticARCOpts. Eventually it will
probably be invoked as a utility from within SemanticARCOpts so it can be
applied iteratively after other ARC-related transformations.
It's against the principles of pass design to check the driver mode
within the pass. A pass always needs to do the same thing regardless
of where it runs in the pass pipeline. It also needs to be possible to
test passes in isolation.
Adds a new flag "-experimental-skip-all-function-bodies" that skips
typechecking and SIL generation for all function bodies (where
possible).
`didSet` accessors are still typechecked and have SIL generated, because their
bodies must be checked for uses of the `oldValue` parameter, but they are not serialized.
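As a hedged illustration (not from the patch itself), the reason `didSet` bodies still need checking is that whether the body references `oldValue` affects how the accessor is emitted:
```
struct Temperature {
  var celsius: Double = 0 {
    didSet {
      // The typechecker must look into this body to detect the use of
      // `oldValue`, so the body cannot simply be skipped.
      print("changed from \(oldValue) to \(celsius)")
    }
  }
}
```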
Parsing will generally be skipped as well, but this isn't necessarily
the case since other flags (e.g. "-verify-syntax-tree") may force delayed
parsing off.
* Redundant hop_to_executor elimination: if a hop_to_executor is dominated by another hop_to_executor with the same operand, it is eliminated:
```
  hop_to_executor %a
  ...                // no suspension points
  hop_to_executor %a // can be eliminated
```
* Dead hop_to_executor elimination: if a hop_to_executor is not followed by any code which requires running on its actor's executor, it is eliminated:
```
  hop_to_executor %a
  ...                // no instructions which require running on %a
  return
```
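For context, hop_to_executor instructions are emitted for actor-isolated async code. A hedged Swift-level sketch of a situation that produces the redundant pattern above (illustrative; this source is an assumption, not from the commit):
```
actor Counter {
  var value = 0

  // An actor-isolated async function hops to the actor's executor on
  // entry and after each suspension point. Two consecutive hops to the
  // same executor with no suspension in between leave one redundant.
  func addTwice(_ n: Int) async -> Int {
    value += n
    value += n
    return value
  }
}
```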
rdar://problem/70304809
Add a diagnostic to check for improperly nested '@_semantics' functions.
Add a missing @_semantics("array.init") in ArraySlice found by the
diagnostic.
Distinguish between array.init and array.init.empty.
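As a hedged sketch of the distinction, using a made-up container type rather than the actual stdlib code:
```
struct MyArray<Element> {
  var storage: [Element] = []

  // "array.init.empty": constructs an empty array; the optimizer can
  // treat such calls specially, distinct from general initialization.
  @_semantics("array.init.empty")
  init() {}

  // "array.init": the general initialization entry point.
  @_semantics("array.init")
  init(repeating value: Element, count: Int) {
    storage = Array(repeating: value, count: count)
  }
}
```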
Categorize the types of semantic functions by how they affect the
inliner and pass pipeline, and centralize this logic in
PerformanceInlinerUtils. The ultimate goal is to prevent inlining of
"Fundamental" @_semantics calls and @_effects calls until the late
pipeline where we can safely discard semantics. However, that requires
significant pipeline changes.
In the meantime, this change prevents the situation from getting worse
and makes the intention clear. For now, though, it has no significant
effect on the pass pipeline and inliner.
This attribute allows defining a pre-specialized entry point of a
generic function in a library.
The following definition provides a pre-specialized entry point for
`genericFunc(_:)` for the parameter type `Int` that clients of the
library can call.
```
@_specialize(exported: true, where T == Int)
public func genericFunc<T>(_ t: T) { ... }
```
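Client code then calls `genericFunc(_:)` as usual; at `T == Int` the call can use the exported specialization instead of the unspecialized generic entry point. A hedged sketch:
```
// In a client of the library: this call at T == Int can be dispatched
// to the pre-specialized entry point rather than the generic code path.
genericFunc(42)
```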
Pre-specializations of internal `@inlinable` functions are allowed.
```
@usableFromInline
internal struct GenericThing<T> {
  @_specialize(exported: true, where T == Int)
  @inlinable
  internal func genericMethod(_ t: T) {
  }
}
```
There is also syntax to pre-specialize a function from a different module.
```
import ModuleDefiningGenericFunc
@_specialize(exported: true, target: genericFunc(_:), where T == Double)
func prespecialize_genericFunc<T>(_ t: T) { fatalError("don't call") }
```
Specially marked extensions allow for pre-specialization of internal
methods across module boundaries (respecting `@inlinable` and
`@usableFromInline`).
```
import ModuleDefiningGenericThing
public struct Something {}
@_specializeExtension
extension GenericThing {
  @_specialize(exported: true, target: genericMethod(_:), where T == Something)
  func prespecialize_genericMethod(_ t: T) { fatalError("don't call") }
}
```
rdar://64993425
This can compensate for the performance regression caused by the more conservative handling of function calls in TempRValueOpt (see the previous commit).
The pass runs after the inlining passes and can therefore optimize some cases that could not be handled before inlining.
Specifically, the option -sil-stop-optzns-before-lowering-ownership. This makes
it possible to write end-to-end tests of OSSA passes. Before, one had to
pattern match after ownership was lowered, losing the ability to do fine-grained
FileCheck pattern matching on OSSA itself.
Needed to make sure that global initializers are not optimized in mid-level SIL while other functions are still in high-level SIL.
Not running StringOptimization in high-level SIL was just a mistake in my earlier PR.
Optimizes String operations with constant operands.
Specifically:
* Replaces x.append(y) with x = y if x is empty.
* Removes x.append("").
* Replaces x.append(y) with x = x + y if x and y are constant strings.
* Replaces _typeName(T.self) with a constant string if T is statically known.
With this optimization it's possible to constant-fold string interpolations, like "the \(Int.self) type" -> "the Int type".
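For illustration, assumed source-level patterns matching the rewrites above (not taken from the commit):
```
func greeting() -> String {
  var s = ""
  s.append("Hello")    // append to an empty string becomes s = "Hello"
  s.append("")         // appending an empty literal is removed
  s.append(", world")  // constant operands fold to s = "Hello, world"
  return s
}

// With _typeName folded for the statically known type Int, this
// interpolation becomes the constant "the Int type".
let message = "the \(Int.self) type"
```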
This new pass runs on high-level SIL, where semantic calls are still in place.
rdar://problem/65642843
In order to test this, I implemented a small source-loc inference routine for
instructions without valid SILLocations. This is an optional knob that the
opt-remark writer can enable on a per-remark basis. The current behaviors are
just forward/backward scans within the same basic block. When scanning forward,
we use the first valid SourceLoc we find. When scanning backward, we instead
grab the SourceRange of the given instruction and, if it is valid, use the end
of that source range. This seems to give a good result for retains
(forward scan) and releases (backward scan).
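A hedged Swift sketch of the two scans, with invented types standing in for the actual SIL instruction APIs:
```
struct Inst {
  var sourceLoc: String?        // hypothetical; nil means no valid location
  var endOfSourceRange: String? // hypothetical end of the source range
}

// Forward scan: use the first valid location at or after the instruction.
func inferLocForward(_ insts: [Inst], from i: Int) -> String? {
  insts[i...].lazy.compactMap { $0.sourceLoc }.first
}

// Backward scan: use the end of the source range of the nearest preceding
// instruction that has a valid range.
func inferLocBackward(_ insts: [Inst], from i: Int) -> String? {
  insts[...i].reversed().lazy.compactMap { $0.endOfSourceRange }.first
}
```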
The specific reason I did this is that my test cases are retain/release
operations. Oftentimes these operations are moved around by code motion, and
optimizations (rightly, to prevent user confusion) give them auto-generated SIL
locations. Since that is the test case I am using, I needed this inference
engine to test it.
This just moves it past the SIL linker (which, since the stdlib doesn't link
against anything, will not change anything) and past TempRValueOpt, which is
already updated for OSSA.
I am going to move ownership lowering back first in the stdlib, so that we
can bring up the optimizer on ownership without needing to deal with
serialization issues (the stdlib doesn't deserialize SIL from any other
modules).
This patch just begins the mechanical process with a nice commit message. Should
be NFC.
* Include small non-generic functions for serialization.
* Serialize initializers of global variables, so that global `let` variables can be constant-propagated across modules (see the sketch below).
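A hedged illustration of the second point (names are assumptions):
```
// Library module: the initializer of this global `let` is serialized
// along with the module.
public let bufferSize = 4096
```
A client that reads `bufferSize` can then have the load constant-folded to 4096 instead of accessing the global at run time.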
rdar://problem/60696510
Optimizes copies from a temporary (an "l-value") to a destination.
```
  %temp = alloc_stack $Ty
  instructions_which_store_to %temp
  copy_addr [take] %temp to %destination
  dealloc_stack %temp
```
is optimized to
```
  destroy_addr %destination
  instructions_which_store_to %destination
```
The name TempLValueOpt refers to the TempRValueOpt pass, which performs a related transformation, just with the temporary on the "right" side.
TempLValueOpt is similar to CopyForwarding::backwardPropagateCopy, but more
restricted (e.g. the copy source must be an alloc_stack). Being more restricted
enables it to optimize other patterns which backwardPropagateCopy cannot handle.
This pass also performs a small peephole optimization which simplifies copy_addr/destroy_addr sequences:
```
  copy_addr %source to %destination
  destroy_addr %source
```
is replaced with
```
  copy_addr [take] %source to %destination
```