This allows dynamic indexing into tuples. IRGen support is not yet
implemented.
I think I'm going to need a type_refine_addr instruction in
order to handle substitutions into the operand type that
eliminate the outer layer of tuple-ness. I'll handle that
in a follow-up commit.
Having added these, I'm not entirely sure we couldn't just use
alloc_stack and dealloc_stack. If we find ourselves adding
a lot of redundancy with those instructions (e.g. around DI), we
can always go back and rip these out.
This should be NFC in practice, but some of the existing patterns can
produce redundant dependencies in probably-obscure cases, so it's not
purely a refactor.
Add TermInst::forwardedOperand.
Add SILArgument::forwardedTerminatorResultOperand. This API will be
moved into a proper TerminatorResult abstraction.
Remove getSingleTerminatorOperand, which could be misused because it's
not necessarily forwarding ownership.
Remove the isTransformationTerminator API, which is not useful or well
defined.
Rewrite several instances of complex logic to handle block arguments
with the simple terminator result API. This defines away potential
bugs where we don't detect casts that perform implicit conversion.
Replace uses of the SILPhiArgument type and code that explicitly
handles block arguments. Control flow is irrelevant in these
situations. SILPhiArgument needs to be deleted ASAP. Instead, use
simple APIs like SILArgument::isTerminatorResult(). Eventually this
will be replaced by a TerminatorResult type.
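For illustration, a sketch of a terminator result (the types here are
illustrative): the payload argument of a switch_enum destination block
is a terminator result that forwards ownership from the operand, not a
phi:

bb0(%0 : @owned $Optional<AnyObject>):
  switch_enum %0 : $Optional<AnyObject>, case #Optional.some!enumelt: bb1, case #Optional.none!enumelt: bb2

bb1(%payload : @owned $AnyObject): // terminator result, forwarding ownership of %0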
`getValue` -> `value`
`getValueOr` -> `value_or`
`hasValue` -> `has_value`
`map` -> `transform`
The old API will be deprecated in the rebranch.
To avoid merge conflicts, use the new API already in the main branch.
rdar://102362022
This lets us write optimizer unit tests and selectively debug the
optimizer in general. We'll be able to trace analyses and control
optimization selectively for certain values.
Adding a trace flag to debug_value is the easiest way to start using
it experimentally and develop the rest of the infrastructure. If this
takes off, then we can consider a new `trace_value`
instruction. For now, reusing debug_value is the least intrusive way to
start writing liveness unit tests.
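For example, a traced value might look like this in SIL (a sketch; the
exact printed syntax may differ, and %0 and its type are placeholders):

debug_value [trace] %0 : $Builtin.Int64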
Currently, SROA just overwrites already-existing debug expressions on
variables. When SROA is run recursively on a data structure, this leads
to nonsensical expressions such as
type $*Outer, expr op_fragment:#Inner.x
instead of
type $*Outer, expr op_fragment:#Outer.inner op_fragment:#Inner.x
The (nonsensical) LLVM IR generated from this violates some assumptions
in LLVM (for example, if a struct has multiple members of the same type,
you can end up with multiple dbg.declare intrinsics claiming to describe
the same variable). As
a quick fix, this patch detects this situation and drops the debug info. A
proper fix shouldn't be too difficult to implement though.
rdar://99874371
This is a dedicated instruction for incrementing a profiler counter,
which lowers to the `llvm.instrprof.increment` intrinsic. It replaces
the builtin that was previously used and ensures that the instruction's
arguments are statically known, so that SIL optimization passes do not
invalidate the instruction. This fixes some code coverage cases in
`-O`.
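A sketch of the textual form (the counter index, PGO function name,
counter count, and hash below are all placeholders):

increment_profiler_counter 0, "$s4main3fooyyF", num_counters 2, hash 0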
rdar://39146527
This is exactly like copy_addr except that it is not viewed from the
verifier's perspective as an "invalid" copy of a move only value. It is
intended to be used in two contexts (a syntax sketch follows the list):
1. When the move checker emits a diagnostic because it could not
eliminate a copy, we still need to produce valid SIL without copy_addr
on move only types, since we will eventually reach canonical SIL even if
we don't actually codegen the SIL. The pass can just convert said
copy_addr to explicit_copy_addr and everyone is happy.
2. To implement the explicit copy function for address only types.
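A sketch of the syntax, mirroring copy_addr (%src, %dest, and the
move-only type $MO are hypothetical):

explicit_copy_addr %src to [initialization] %dest : $*MO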
Andy created the new API some time ago but didn't go through and update
the old occurrences. I did that in this PR and then deprecated the old
API. The tree is clean, so I could just remove it, but I decided to be
nicer to downstream people by deprecating it first.
The use of the SWIFT_INLINE_BITFIELD macros in SILNode was a constant source of confusion and bugs.
With this refactoring I tried to simplify the definition of "shared fields" in SILNode, SILValue and SILInstruction classes:
* Move `kind`, `locationKindAndFlags` and the 32-bit fields out of the 64-bit bitfield into their own member variables. This avoids _a lot_ of manual bit position computations.
* Now we have two separate "shared fields": an 8-bit field (e.g. for boolean flags) and a 32-bit field (e.g. for indices, which can potentially get large). Both fields can be used independently. Also, they are not "bit fields" per se. Instructions can use the field e.g. as a `bool`, `uint32_t`, or - if multiple flags are to be stored - as a packed bit field.
* With these two separate fields, we don't have the need for defining bitfields both in a base class _and_ in a derived value/instruction class. We can get rid of the complex logic which handles such cases. Just keep a check to catch accidental overlaps of fields in base and derived classes.
* Still use preprocessor macros for the implementation, but much simpler ones than before.
* Add documentation.
As we do with field indices for struct instructions.
This avoids quadratic behavior for enums with many cases.
Also: cache field and enum case indices in the SILModule.
Previously, the AbstractionPattern used for the value "returned" from
ObjC (i.e. via a completion handler) was mostly (but not quite always)
"type".
The generated completion handler correctly (because this is required in
order to call _resumeUnsafeContinuation) reabstracted the block (e.g.
from @convention(block) to @substituted <T> () -> @out T for <()>). The
callee of the ObjC function, however, loaded the function from the block
as if it were not reabstracted (e.g. () -> ()).
On most platforms, that happened to work. On arm64e, that difference in
types caused a difference in pointer signing, resulting in a failure
at runtime.
rdar://85526879
rdar://85526916
Reduces the number of _ContiguousArrayStorage metadata instantiations.
In order to support constant-time bridging, we need to set the correct
metadata when we bridge to Objective-C. This is so that the type check
succeeds when bridging back from Objective-C, allowing the storage
instance to be reused rather than bridging the elements.
To support dynamically setting the `_ContiguousArrayStorage` element
type, I needed to add support for optimizing `alloc_ref_dynamic`
throughout the optimizer.
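For reference, alloc_ref_dynamic allocates an instance of a class
determined by a runtime metatype value; a sketch with a hypothetical
class C:

%1 = alloc_ref_dynamic %0 : $@thick C.Type, $C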
Possible future improvements:
* Use different metadata such that we can disambiguate native Swift
classes during destruction -- allowing native release rather than
unknown release.
* Optimize the newly added semantic function
`getContiguousArrayStorageType`.
rdar://86171143
The main effect of this will be that in IRGen we will use llvm.dbg.addr instead
of llvm.dbg.declare. We must do this since llvm.dbg.declare implies that the
given address is valid throughout the program.
This just adds the instructions/printing/parsing/serialization/deserialization.
rdar://85020571
Swift string literals are only permitted to contain well-formed UTF-8, but C does not share this restriction, and ClangImporter wasn't checking for that before it created `StringLiteralExpr`s for imported macros; this could cause crashes when importing a header. This commit makes us drop these macros instead.
Although invalid UTF-8 always *did* cause a segfault in my testing, I'm not convinced that there isn't a way to cause a miscompile with a bug like this. If we somehow did generate code that fed ill-formed UTF-8 to the builtin literal init for Swift.String, the resulting string could cause undefined behavior at runtime. So I have additionally added a defensive assertion to StringLiteralInst that any UTF-8 string represented in SIL is well-formed. Hopefully that will catch any non-crashing compiler bugs like this one.
Fixes rdar://67840900.
Required for UnsafeRawPointer.withMemoryRebound(to:)
%token = bind_memory %0 : $Builtin.RawPointer, %1 : $Builtin.Word to $T
%0 must be of $Builtin.RawPointer type
%1 must be of $Builtin.Word type
%token is an opaque $Builtin.Word representing the previously bound types
for this memory region.
This change separates out the formation of the generic signature and
substitutions for a SIL substituted function type as a pre-pass
before doing the actual function type lowering. The only input we
really need to form this signature is the original abstraction pattern
that a type is being lowered against, and pre-computing it should make
the code less side-effecty and confusing. It also allows us to handle
generic nominal types in a more robust way; we transfer over all of
the nominal type requirements to the generalized generic signature,
then when recursively visiting the bindings, we same-type-constrain
the generic parameters used in those requirements to the newly-generalized
generic arguments. This ensures that the minimized signature preserves
any non-trivial requirements imposed by the nominal type, such as
conditional conformances on its type arguments, same-type constraints
among associated types, etc.
This approach does lead to less-than-optimal generalized generic
signatures getting generated, since nominal type generic arguments
almost always get either same-type-bound to other generic arguments or
fixed to concrete types. It would be useful to do a minimization
pass on the final generic signature to eliminate these unnecessary
generic arguments, but that can be done in a follow-up PR.
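As a hypothetical illustration, lowering a function over Set<Element>
against an opaque pattern might transfer Set's Hashable requirement into
the generalized signature, producing something along the lines of:

@substituted <τ_0_0 where τ_0_0 : Hashable> (@in Set<τ_0_0>) -> () for <Element>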
Fix two bugs:
- FirstArgOwnershipForwardingSingleValueInst needs to forward its first operand.
- select_value needs to be a ForwardedBorrow for all cases and the default.