This instruction is similar to AssignByWrapperInst, but instead of having
a destination operand, the initialization is fully factored into the init
function operand. Like AssignByWrapper, AssignOrInit carries partially
applied operands for both the initializer and the setter, and DI lowers the
instruction to a call to one of them depending on whether the assignment is
an initialization or a setter call.
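In printed SIL this looks roughly as follows (a sketch; the operand names and
function types here are illustrative, not the authoritative syntax):
assign_or_init %value : $T, init %initFn : $@callee_guaranteed (@in T) -> (), set %setter : $@callee_guaranteed (@in T) -> ()
DI then rewrites the instruction into a call of either %initFn or %setter.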
This still does not have the complete behavior we want, since we are not yet
inserting the debug_value undef needed to invalidate the variable's location
after performing moves.
NOTE: In printed SIL, the flag is implicit when the type is noncopyable; this
is just to reduce bloat in the SIL. For a copyable type, though, the flag is
mandatory if this property is needed.
This is in preparation for wiring up debug info support for noncopyable
values. Originally the flag's name made sense, since it was set when we
performed consume-operator checking. Now it will be used for noncopyable
types as well. The new name, uses_moveable_value_debuginfo, describes what
the flag is actually supposed to do: tell IRGen that the value may be moved
and therefore needs moveable-value debug info emission.
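For example, on a copyable type the flag would be spelled out explicitly (a
sketch, assuming the printed spelling matches the flag name):
debug_value [moveable_value_debuginfo] %0 : $Klass, let, name "k"
For a noncopyable $Klass the flag would be implied and omitted by the
printer, per the note above.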
It's needed to correctly maintain dependencies from an open-existential instruction to a `keypath` instruction that uses the opened type.
Fixes a SILVerifier crash.
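Schematically (a sketch; the keypath pattern is elided and the opened
archetype name is shortened):
%o = open_existential_addr immutable_access %e : $*any P to $*@opened("...", any P) Self
%kp = keypath $KeyPath<@opened("...", any P) Self, Int>, ... // must stay dependent on %o
The fix keeps the `keypath` below the instruction that opens the type it
uses.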
rdar://105517521
This allows dynamically indexing into tuples. IRGen support is not yet
implemented.
I think I'm going to need a type_refine_addr instruction in
order to handle substitutions into the operand type that
eliminate the outer layer of tuple-ness. I'll handle that
in a follow-up commit.
Having added these, I'm not entirely sure we couldn't just use
alloc_stack and dealloc_stack. Well, if we find ourselves adding
a lot of redundancy with those instructions (e.g. around DI), we
can always go back and rip these out.
Should be NFC in impact, but some of the existing patterns can produce
redundant dependencies in probably-obscure cases, so it's not purely a
refactor.
Add TermInst::forwardedOperand.
Add SILArgument::forwardedTerminatorResultOperand. This API will be
moved into a proper TerminatorResult abstraction.
Remove getSingleTerminatorOperand, which could be misused because it is
not necessarily forwarding ownership.
Remove the isTransformationTerminator API, which is not useful or well
defined.
Rewrite several instances of complex logic to handle block arguments
with the simple terminator result API. This defines away potential
bugs where we don't detect casts that perform implicit conversion.
Replace uses of the SILPhiArgument type and of code that explicitly
handles block arguments. Control flow is irrelevant in these
situations. SILPhiArgument needs to be deleted ASAP. Instead, use
simple APIs like SILArgument::isTerminatorResult(). Eventually this
will be replaced by a TerminatorResult type.
`getValue` -> `value`
`getValueOr` -> `value_or`
`hasValue` -> `has_value`
`map` -> `transform`
The old API will be deprecated in the rebranch.
To avoid merge conflicts, use the new API already in the main branch.
rdar://102362022
This lets us write optimizer unit tests and selectively debug the
optimizer in general. We'll be able to trace analyses and selectively
control optimization for certain values.
Adding a trace flag to debug_value is the easiest way to start using
it experimentally and develop the rest of the infrastructure. If this
takes off, then we can consider a new `trace_value`
instruction. For now, reusing debug_value is the least intrusive way to
start writing liveness unit tests.
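For example, marking a value for tracing (a sketch, assuming the flag is
printed as [trace]):
debug_value [trace] %0 : $Klass
Unit tests and debug output can then key off the traced value.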
Currently, SROA just overwrites already-existing debug expressions on
variables. When SROA is run recursively on a data structure, this leads to
nonsensical expressions such as
type $*Outer, expr op_fragment:#Inner.x
instead of
type $*Outer, expr op_fragment:#Outer.inner op_fragment:#Inner.x
The (nonsensical) LLVM IR generated from this violates some assumptions in
LLVM (for example, if a struct has multiple members of the same type, you can
end up with multiple dbg.declare intrinsics claiming to describe the same
variable). As a quick fix, this patch detects this situation and drops the
debug info. A proper fix shouldn't be too difficult to implement, though.
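In a full debug_value the corrected form would look something like the
following sketch (here %x_addr, Outer, and Inner are hypothetical, with
Outer.inner : Inner and Inner.x : Int):
debug_value %x_addr : $*Int, var, name "o", type $*Outer, expr op_fragment:#Outer.inner op_fragment:#Inner.x
where %x_addr holds the SROA'd piece for o.inner.x.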
rdar://99874371
This is a dedicated instruction for incrementing a profiler counter, which
lowers to the `llvm.instrprof.increment` intrinsic. It replaces the builtin
instruction that was previously used and guarantees that its arguments are
statically known, so SIL optimization passes do not invalidate the
instruction; this fixes some code coverage cases in `-O`.
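For example (a sketch; the mangled function name and the constants are made
up):
increment_profiler_counter 0, "$s4main3fooyyF", num_counters 2, hash 0
This bumps counter 0 of the 2 counters for the function with the given PGO
name.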
rdar://39146527
This is exactly like copy_addr, except that the verifier does not view it as
an "invalid" copy of a move-only value. It is intended to be used in two
contexts (a short sketch follows the list):
1. When the move checker emits a diagnostic because it could not eliminate a
copy, we still need to produce valid SIL without copy_addr on move-only
types, since we will reach canonical SIL eventually even if we don't actually
codegen the SIL. The pass can simply convert such a copy_addr to
explicit_copy_addr, and everyone is happy.
2. To implement the explicit copy function for address-only types.
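A minimal sketch of context 1, assuming a hypothetical move-only type $MO:
explicit_copy_addr %src to [init] %dst : $*MO
This behaves exactly like the corresponding copy_addr but is not rejected by
the verifier's move-only checks.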
Andy created the new API some time ago but didn't go through and update
the old occurrences. I did that in this PR and then deprecated the old API.
The tree is clean, so I could just remove it, but I decided to be nicer to
downstream people by deprecating it first.
The use of the SWIFT_INLINE_BITFIELD macros in SILNode was a constant source of confusion and bugs.
With this refactoring I tried to simplify the definition of "shared fields" in the SILNode, SILValue, and SILInstruction classes:
* Move `kind`, `locationKindAndFlags` and the 32-bit fields out of the 64-bitfield into their own member variables. This avoids _a lot_ of manual bit position computations.
* Now we have two separate "shared fields": an 8-bit field (e.g. for boolean flags) and a 32-bit field (e.g. for indices, which can potentially get large). Both fields can be used independently. Also, they are not "bit fields" per se. Instructions can use the field e.g. as a `bool`, `uint32_t`, or - if multiple flags are to be stored - as a packed bit field.
* With these two separate fields, we don't have the need for defining bitfields both in a base class _and_ in a derived value/instruction class. We can get rid of the complex logic which handles such cases. Just keep a check to catch accidental overlaps of fields in base and derived classes.
* Still use preprocessor macros for the implementation, but much simpler ones than before.
* Add documentation.
As we already do with field indices for struct instructions.
This avoids quadratic behavior for enums with lots of cases.
Also, cache field and enum case indices in the SILModule.
Previously, the AbstractionPattern used for the value "returned" from ObjC
(i.e. via a completion handler) was mostly, but not quite always, "type".
The generated completion handler correctly reabstracted the block (this is
required in order to call _resumeUnsafeContinuation), e.g. from
@convention(block) to @substituted <T> () -> @out T for <()>. The
callee of the ObjC function, however, loaded the function from the block
as if it were not reabstracted (e.g. () -> ()).
On most platforms, that happened to work. On arm64e, that difference in
types caused a difference in pointer signing, resulting in a failure
at runtime.
rdar://85526879
rdar://85526916