The problem with `is_escaping_closure` was that it didn't consume its operand and therefore reference count checks were unreliable.
For example, copy-propagation could break it.
Since this instruction was always used together with an immediately following
`destroy_value` of the closure, it makes sense to combine both into a single
`destroy_not_escaped_closure`. The new instruction
1. checks the reference count and returns true if it is 1, and
2. consumes and destroys the operand.
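For context, one source-level pattern that relies on this runtime check (a
sketch of my own; the exact lowering is not spelled out in this change) is
`withoutActuallyEscaping`, which must verify that the temporarily escapable
closure did not actually escape:

```swift
func callTwice(_ body: () -> Void) {
    withoutActuallyEscaping(body) { escapable in
        // At the end of this call the compiler verifies that `escapable` did
        // not escape; if its reference count is still 1, the temporary
        // escaping closure can simply be destroyed.
        escapable()
        escapable()
    }
}
```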
This is used for synthetic uses like _ = x that do not act as a true use but
instead only suppress unused variable warnings. This patch just adds the
instruction.
Eventually, we can use it to move the unused variable warning from Sema to SIL,
slimming the type checker down a little bit. For now I am using it so that other
diagnostic passes have a SIL instruction (with a SIL location) on which to emit
diagnostics for code like `_ = x`. Today we do not emit anything at all for that
case, so a diagnostic SIL pass would not see any instruction on which it could
report a diagnostic. In the next patch of this series, I am going to add SILGen
support for emitting it.
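For reference, the source pattern in question (the example is mine, not from
the patch):

```swift
func demo() {
    let x = 42
    // Without the next line the compiler warns that `x` was never used.
    // Today SILGen emits nothing for this statement, so a diagnostic SIL pass
    // has no instruction (and no source location) to attach a diagnostic to;
    // the new instruction provides that marker.
    _ = x
}
```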
I am adding this instruction to artificially express that two non-Sendable
values should be part of the same region. It is meant to be used in cases where,
due to unsafe code using Sendable, we stop propagating a non-Sendable dependency
that nevertheless needs to be in the same region as a use of that Sendable
value. I included an example in ./docs/SIL.rst of where this comes up with @out
results of continuations.
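A simplified sketch of the continuation case (see ./docs/SIL.rst for the real
example; the code below is illustrative):

```swift
final class NonSendableResult {   // deliberately not Sendable
    var value = 0
}

func produce() async -> NonSendableResult {
    await withCheckedContinuation { continuation in
        // The continuation itself is Sendable, so region-based isolation
        // loses track of the non-Sendable value passed through it. The
        // resumed value still has to end up in the caller's region, which
        // the new instruction lets us state explicitly.
        continuation.resume(returning: NonSendableResult())
    }
}
```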
The specific problem here is that this causes MarkUnresolvedMoveAddr to fall
within SINGLE_VALUE_INST_RANGE, which defines the classify method used to
determine whether casting to a SingleValueInstruction is ok. That is of course
not ok in this case, since mark_unresolved_move_addr is actually a NonValueInst.
I noticed this because I made the same mistake with a new instruction I am
adding and hit a crash in the tests as a result of it being classified as a
single value instruction. After fixing that, I checked for other instances of
this issue and found that mark_unresolved_move_addr was similarly afflicted.
For now this will only be used for HopToMainActorIfNeeded thunks. I am creating
it now because, until now, there has been only one way to create thunks: emitting
the thunk in SILGen using SILGenThunk. That code is hard to test and there is a
lot of it. By using an instruction here we get a few benefits:
1. We decouple SILGen from needing to generate new kinds of thunks. This means
that SILGenThunk does not need to expand to handle more thunks.
2. All thunks implemented via ThunkInst will be easy to test in a decoupled way
with SIL tests.
3. Even though this only stabilizes the patient, we still have many thunks in
SILGen and various other parts of the compiler. Over time we can move them to
this model, hopefully allowing us to eventually delete SILGenThunk.
* Some requirement machine work
* Rename requirement to Value
* Rename more things to Value
* Fix integer checking for requirement
* Some docs and parser changes
* Minor fixes
It indicates that the value's lifetime continues to at least this point.
The boundary formed by all consuming uses together with these
instructions will encompass all uses of the value.
* Allow normal function results of @yield_once coroutines
* Address review comments
* Work around an LLVM coroutine codegen problem: it assumes that the unwind path never returns.
This is not true for Swift coroutines, where the unwind path should end with an error result.
This adds SIL-level support and LLVM codegen for normal results of a coroutine.
The main user of this will be autodiff, since the VJP of a coroutine must itself be a coroutine (in order to produce the yielded result) and must return a pullback closure as a normal result.
For now only direct results are supported, but this seems to be enough for autodiff purposes.
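For orientation, the most common source of a @yield_once coroutine is an
accessor; the underscored syntax below is an unofficial feature and is shown
purely for illustration:

```swift
struct Box {
    private var storage: [Double] = [0]

    var value: Double {
        _read { yield storage[0] }      // lowers to a @yield_once coroutine
        _modify { yield &storage[0] }   // likewise
    }
}
```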
* `alloc_vector`: allocates an uninitialized vector of elements on the stack or in a statically initialized global
* `vector`: creates an initialized vector in a statically initialized global
This commit just introduces the instruction. In a subsequent commit, I am going
to add support to SILGen to emit this. This ensures that when we assign into a
tuple var we initialize it with one instruction instead of doing it in pieces.
The problem with doing it in pieces is that, when emitting diagnostics, it
looks as if SILGen were semantically initializing the tuple in pieces, which
could be incorrectly flagged as an error.
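A rough sketch of the source pattern in question (my example, not from the
patch):

```swift
func update(_ a: String, _ b: String) -> (String, String) {
    var pair: (String, String)
    // The whole-tuple assignment below should be modeled as a single
    // initialization of `pair`, not as two independent element writes, so
    // diagnostics do not mistake it for piecewise initialization.
    pair = (a, b)
    return pair
}
```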
This instruction marks the point where all let-fields of a class are initialized.
This is important to ensure the correctness of ``ref_element_addr [immutable]`` for let-fields,
because in the initializer of a class its let-fields are not immutable yet.
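A minimal source example of the window in question (illustrative):

```swift
final class Point {
    let x: Int
    let y: Int

    init(x: Int, y: Int) {
        self.x = x   // the let-fields are still being written here,
        self.y = y   // so they cannot be treated as immutable yet
        // From this point on every let-field is fully initialized.
    }

    func sum() -> Int { x + y }   // reads here may use the immutable form
}
```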
Codegen is the same, but `begin_dealloc_ref` consumes the operand and produces a new SSA value.
This cleanly splits the live range into the regions before and within the destructor of a class.
I was originally hoping to reuse mark_must_check for multiple types of checkers.
In practice, that is not what happened, so giving it a name specific to
noncopyable types makes more sense and makes the code clearer.
Just a pure rename.
The new instruction is needed in opaque values mode to allow values to be
extracted from tuples containing packs, which appear, for example, as function
arguments.
The new instruction wraps a value in a `@sil_weak` box and produces an
owned value. It is only legal in opaque values mode and is transformed
by `AddressLowering` to `store_weak`.
The new instruction unwraps a `@sil_weak` box and produces an owned
value. It is only legal in opaque values mode and is transformed by
`AddressLowering` to `load_weak`.
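The corresponding source-level operations, roughly (illustrative; in lowered
SIL the write corresponds to `store_weak` and the read to `load_weak`):

```swift
final class Node {}

func demo(node: Node) {
    weak var weakRef = node      // wrap a value into a weak reference
    if let strong = weakRef {    // unwrap it back into a strong reference
        print(strong)
    }
}
```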
Since dead-store elimination is now more accurate, it's important that the `bind_memory` instructions are also defined to _read_ memory.
Otherwise they are not considered barriers for dead-store elimination.
Fixes a miscompile.
rdar://112150142
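A sketch of the kind of pattern affected (not the original reproducer):

```swift
let raw = UnsafeMutableRawPointer.allocate(
    byteCount: MemoryLayout<Int>.size,
    alignment: MemoryLayout<Int>.alignment)
raw.storeBytes(of: 42, as: Int.self)                     // must not be eliminated as a dead store
let typed = raw.bindMemory(to: Int.self, capacity: 1)    // emits a `bind_memory`
print(typed.pointee)                                     // observes the bytes written above
raw.deallocate()
```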
This is code that I am fairly familiar with but it still took a day of
investigation to figure out how it is supposed to be used now in the
presence of bridging.
This primarily involved ruling out the possibility that the mid-level
Swift APIs could at some point call into the lower-level C++ APIs.
The biggest problem was that AliasAnalysis::getMemoryBehaviorOfInst()
was declared as a public interface, and its name indicates that it
computes the memory behavior. But it is just a wrapper around a Swift
API and never actually calls into any of the C++ logic that is
responsible for computing memory behavior!
The `hop_to_executor` instruction is a synchronization point: any kind of other code might run at that point,
which can potentially release objects.
Fixes a miscompile.
rdar://110924258
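A sketch of why this matters for optimizations that reason about reference
counts (illustrative):

```swift
actor Worker {
    func work() {}
}

final class Resource {
    deinit { print("released") }
}

func demo(worker: Worker) async {
    let resource = Resource()
    // The `await` below contains a hop_to_executor. Arbitrary other code can
    // run before this task resumes, so the optimizer must not assume that
    // reference counts observed before the hop still hold after it.
    await worker.work()
    withExtendedLifetime(resource) {}
}
```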
This instruction is similar to AssignByWrapperInst, but instead of having
a destination operand, the initialization is fully factored into the init
function operand. Like AssignByWrapper, AssignOrInit has partial-application
operands for both the initializer and the setter, and DI will lower the
instruction to a call based on whether the assignment is an initialization or
a setter call.
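For context, a source-level construct with the same initialize-or-set duality
is an init accessor (SE-0400); the sketch below is mine, not from the patch:

```swift
struct Angle {
    var radians: Double

    var degrees: Double {
        @storageRestrictions(initializes: radians)
        init(initialValue) {
            radians = initialValue * Double.pi / 180
        }
        get { radians * 180 / Double.pi }
        set { radians = newValue * Double.pi / 180 }
    }

    init(degrees: Double) {
        // DI decides whether this assignment runs the init accessor
        // (initializing `radians`) or falls back to the setter.
        self.degrees = degrees
    }
}
```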
Just the $*T -> $*@moveOnly T variant for addresses. Unlike the object version
this acts like a cast rather than something that provides semantics from the
frontend to the optimizer.
The reason I am using different instructions for addresses and objects here
is that the object checker doesn't have to deal with things like initialization.
The new alloc_pack_metadata and dealloc_pack_metadata are inserted as
part of IRGen lowering. The former indicates that the next instruction
might result in on-stack pack metadata being emitted. The latter
indicates that this is the position at which metadata emitted on behalf
of its operand should be cleaned up.
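A sketch of the kind of code that can require on-stack pack metadata at a call
site (illustrative):

```swift
func makeTuple<each T>(_ values: repeat each T) -> (repeat each T) {
    return (repeat each values)
}

func demo() {
    // Calling a variadic-generic function can force IRGen to materialize
    // metadata for the concrete pack {Int, String, Double}; when that
    // metadata can live on the stack, the new instructions bracket the
    // allocation and its cleanup.
    let result = makeTuple(1, "two", 3.0)
    print(result)
}
```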
This instruction can be inserted by Onone optimizations as a replacement for deleted instructions to
ensure that it's possible to single step on its location.
This allows dynamically indexing into tuples. IRGen is not yet
implemented.
I think I'm going to need a type_refine_addr instruction in
order to handle substitutions into the operand type that
eliminate the outer layer of tuple-ness. I'll handle that
in a follow-up commit.
Having added these, I'm not entirely sure we couldn't just use
alloc_stack and dealloc_stack. Well, if we find ourselves adding
a lot of redundancy with those instructions (e.g. around DI), we
can always go back and rip these out.
This is a dedicated instruction for incrementing a profiler counter, which
lowers to the `llvm.instrprof.increment` intrinsic. It replaces the builtin
instruction that was previously used and ensures that its arguments are
statically known, so SIL optimization passes do not invalidate the
instruction. This fixes some code coverage cases at `-O`.
rdar://39146527