An instruction is a deinit barrier whenever one of three component
predicates is true for it. An apply is a deinit barrier whenever one of
those three predicates is true for any of the instructions in any of its
callees; that fact is cached in the side-effect analysis of every
function.
If side-effect analysis or callee analysis is unavailable, defining each
of those three component predicates on a
`FullApplySite`/`EndApplyInst`/`AbortApplyInst` would require
conservatively returning true: it isn't known whether any of the
instructions in any of the callees is a deinit barrier.
Refactored the two versions of the deinit-barrier predicate (namely
`Instruction.isDeinitBarrier(_:)` and
`swift::mayBeDeinitBarrierNotConsideringSideEffects`) to handle
`FullApplySite`/`EndApplyInst`/`AbortApplyInst`s specially first (to
look up the callees' side effects and to conservatively bail,
respectively). Asserted that the three component predicates are never
called with `FullApplySite`/`EndApplyInst`/`AbortApplyInst`s; callers
should instead use the `isDeinitBarrier` APIs.
An alternative would be to conservatively return true from the three
component predicates. That seems more likely to invite direct calls to
these member predicates, however, and at the moment there is no reason
for such calls to exist. If some caller besides the deinit-barrier
predicates ever needs them, side-effect analysis should be updated at
that point to cache these three properties separately.
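A minimal, self-contained sketch of the resulting shape (toy types; the component-predicate names are stand-ins approximating the actual ones, and the real code answers applies from cached callee side effects via the analyses):
```swift
// Toy model of the refactored predicate: applies are answered from cached
// callee side effects first; the component predicates trap if they are ever
// asked about an apply, mirroring the new assertions.
protocol InstModel {
  var maySynchronize: Bool { get }        // stand-in for component predicate 1
  var mayAccessPointer: Bool { get }      // stand-in for component predicate 2
  var mayLoadWeakOrUnowned: Bool { get }  // stand-in for component predicate 3
}

struct ApplyModel: InstModel {
  // nil models "side-effect/callee analysis unavailable": be conservative.
  var calleesContainDeinitBarrier: Bool?
  var maySynchronize: Bool { fatalError("use isDeinitBarrier for applies") }
  var mayAccessPointer: Bool { fatalError("use isDeinitBarrier for applies") }
  var mayLoadWeakOrUnowned: Bool { fatalError("use isDeinitBarrier for applies") }
}

func isDeinitBarrier(_ instruction: any InstModel) -> Bool {
  if let apply = instruction as? ApplyModel {
    return apply.calleesContainDeinitBarrier ?? true
  }
  return instruction.maySynchronize
      || instruction.mayAccessPointer
      || instruction.mayLoadWeakOrUnowned
}
```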
When an instruction `mayReadOrWriteMemory`, the `mayAccessPointer`
predicate relies on the `visitAccessedAddress` utility to determine
whether the instruction may access a pointer. That utility doesn't (and
can't, without changing the representation of the builtin) visit the
`RawPointer` operands used by the memmove/memcpy builtins.
Previously, this resulted in `mayAccessPointer` returning `false` for
such builtins. Here, this is fixed by making the predicate handle
`BuiltinInst`s separately. Instead of always returning `true` in the
face of a builtin, rely on the `BuiltinInfo` associated with the
`BuiltinInst` and use its `mayReadOrWriteMemory`.
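A minimal sketch of the shape of the fix (toy types, not the compiler's `Instruction`/`BuiltinInst` API):
```swift
// Toy model of the fixed predicate: builtins are classified by their own
// declared memory behavior, because address visitation cannot see the
// RawPointer operands of memcpy/memmove-style builtins.
enum InstKind {
  case builtin(mayReadOrWriteMemory: Bool)      // e.g. Builtin.memcpy / Builtin.memmove
  case other(visitedAddressMayBePointer: Bool)  // stand-in for the visitAccessedAddress result
}

func mayAccessPointer(_ inst: InstKind) -> Bool {
  switch inst {
  case .builtin(let mayReadOrWriteMemory):
    // The fix: for builtins, fall back to the builtin's declared effects
    // instead of visiting accessed addresses, which would miss RawPointers.
    return mayReadOrWriteMemory
  case .other(let visitedAddressMayBePointer):
    return visitedAddressMayBePointer
  }
}
```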
rdar://120656227
* `alloc_vector`: allocates an uninitialized vector of elements on the stack or in a statically initialized global
* `vector`: creates an initialized vector in a statically initialized global
Avoid heap-allocating an immortal FunctionTest with `new` because it
results in LSAN reporting a leak.
In fact, the "leaked" value wasn't leaked: a reference to it was stored
in a global map when the type's constructor ran. It was only a leak in
the sense that it was never freed, not that there was a dangling
allocation which couldn't be freed.
Work around this by storing the global instances themselves in a second
static structure. Store pointers to the instances into the global map
as before.
rdar://118134637
I also included changes to the rest of the SIL optimizer pipeline to ensure that
the part of the pipeline before we lower `tuple_addr_constructor` (which happens
right after we run TransferNonSendable) works as before.
The reason I am doing this is that it ensures diagnostic passes can
tell the difference between:
```
x = (a, b, c)
```
and
```
x.0 = a
x.1 = b
x.2 = c
```
This is important for things like TransferNonSendable, where assigning over the
entire tuple is treated differently from initializing it in pieces using
projections.
rdar://117880194
This commit just introduces the instruction. In a subsequent commit, I am going
to add support to SILGen for emitting it. This ensures that when we assign into a
tuple var, we initialize it with one instruction instead of doing it in pieces.
The problem with doing it in pieces is that, when emitting diagnostics, it looks
semantically as if SILGen were actually emitting code that initializes the tuple
in pieces, which could be an error.
When rewriting uses of a noncopyable value, the move-only checker failed to take into account
the scope of borrowing uses when establishing the final lifetimes of values. One way this
manifested: when a borrowed value gets reabstracted from a value to an in-memory representation
using a `store_borrow` instruction, the lifetime of the original borrow would be ended
immediately after the `store_borrow` begins rather than after the matching `end_borrow`. Fix
this by, first, changing `store_borrow` to be treated as a borrowing use of its source rather
than an interior-pointer use; this should be more accurate overall, since `store_borrow` borrows
the entire source value for a well-scoped duration balanced by `end_borrow` instructions. With
that done, change MoveOnlyBorrowToDestructureUtils so that when it sees a borrowing use, it ends
the borrow at the end(s) of the use's borrow scope instead of immediately after the beginning of
the use.
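A small sketch of the liveness change (stand-in types, not the checker's actual data structures):
```swift
// Toy model of the liveness change: when a use of the noncopyable value is
// itself a scoped borrow (e.g. a store_borrow balanced by end_borrows), the
// value's lifetime must extend to the scope-ending instructions, not just
// past the instruction that begins the borrow.
enum UseKind {
  case instantaneous(at: Int)                 // an ordinary use at one instruction
  case scopedBorrow(begin: Int, ends: [Int])  // e.g. store_borrow ... end_borrow(s)
}

func lifetimeEndpoints(for use: UseKind) -> [Int] {
  switch use {
  case .instantaneous(let at):
    return [at]
  case .scopedBorrow(_, let ends):
    // The fix: end the borrow of the original value at the ends of the use's
    // borrow scope rather than immediately after its beginning.
    return ends
  }
}

// Before the fix, a scoped borrow was effectively treated like
// .instantaneous(at: begin), ending the original borrow too early.
```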
In the C++ sources it is slightly more convenient to dump to stderr than
to print to stdout, but it is rather more unsightly to print to stderr
from the Swift sources. Switch to stdout. This also allows the dump
functions to be marked debug-only.
Introduce two modes of bridging:
* inline mode: this is basically how it worked so far. It uses full C++ interop, which allows bridging functions to be inlined.
* pure mode: bridging functions are not inlined but are compiled in a C++ file. This reduces the C++ interop requirements to a minimum: no std/llvm/swift headers are imported.
This change requires a major refactoring of the bridging sources. The implementations of bridging functions move into two separate files: SILBridgingImpl.h and OptimizerBridgingImpl.h.
Depending on the mode, those files are included either in the corresponding header files (inline mode) or in the C++ file (pure mode).
The mode can be selected with the BRIDGING_MODE CMake variable. By default it is set to inline mode (= existing behavior). Pure mode is only selected in certain configurations to work around C++ interop issues:
* In debug builds, to work around a problem with LLDB's `po` command (rdar://115770255).
* On Windows, to work around a build problem.
This instruction was given forwarding ownership in the original OSSA
implementation. That will obviously lead to memory leaks. Remove
ownership from this instruction and verify that it is never used for
non-trivial types.
At the cost of adding an unsafe-bitcast implementation detail,
simplified the code involved in registering a new FunctionTest when
adding one.
Also simplifies how Swift native `FunctionTest`s are registered with the
C++ registry.
Now, the to-be-executed thin closure for a native Swift `FunctionTest`
is stored within the `swift::test::FunctionTest` instance corresponding
to it. Because its type isn't representable in C++, it is stored as a
`void *`. When the FunctionTest is invoked, a thunk is called which
takes the actual test function and bridged versions of the arguments.
That thunk unwraps the arguments, casts the stored function back to the
appropriate type, and invokes it.
Thanks to Andrew Trick for the idea.
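A self-contained sketch of the unsafe-bitcast trick in plain Swift (hypothetical names and signatures; here a C-convention function stands in for the thin Swift closure, and a dictionary stands in for the `swift::test::FunctionTest`-based registry):
```swift
// Toy model: a pointer-sized, non-capturing function is bitcast to a raw
// pointer so it can live in a slot whose type C++ can represent, and a thunk
// bitcasts it back to the typed function before calling it.
typealias TestInvocation = @convention(c) (Int32) -> Void  // stand-in signature

var testRegistry: [String: UnsafeMutableRawPointer] = [:]  // models the global C++ map

func registerTest(_ name: String, _ invocation: TestInvocation) {
  // Erase the function's type for storage (the "void *" slot on the C++ side).
  testRegistry[name] = unsafeBitCast(invocation, to: UnsafeMutableRawPointer.self)
}

func runTest(_ name: String, bridgedArgument: Int32) {
  guard let stored = testRegistry[name] else { return }
  // The thunk: recover the typed function and invoke it with the unwrapped argument.
  let invocation = unsafeBitCast(stored, to: TestInvocation.self)
  invocation(bridgedArgument)
}

// Usage:
registerTest("print-argument") { value in print("running test with \(value)") }
runTest("print-argument", bridgedArgument: 42)
```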
Not every block in a region that begins at the non-lifetime-ending
boundary of a value and ends at unreachable-terminated blocks has the
value available. If the value is not available in those
unreachable-terminated blocks, it is incorrect to insert destroys of the
value in them: that is an overconsume on some paths. Previously,
however, destroys were simply being inserted at the unreachable.
Here, this is fixed by finding the boundary of availability within that
region and inserting destroys before the terminators of the blocks on
that boundary.
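A minimal sketch of the boundary computation (toy graph and field names, not the compiler's CFG or the actual utility; availability is assumed to have been computed already):
```swift
// Toy model: within the region, a block is on the availability boundary if the
// value is available in it but availability ends there, either because the
// block itself exits via unreachable or because some successor lacks
// availability. Destroys belong before the terminators of exactly those
// blocks, never inside blocks where the value is unavailable.
struct RegionBlock {
  var name: String
  var successors: [String]       // successors inside the region
  var endsInUnreachable: Bool
  var valueIsAvailable: Bool     // result of the availability dataflow
}

func availabilityBoundary(of region: [RegionBlock]) -> [String] {
  let available = Set(region.filter(\.valueIsAvailable).map(\.name))
  return region.filter { block in
    guard block.valueIsAvailable else { return false }
    return block.endsInUnreachable
        || block.successors.contains { !available.contains($0) }
  }
  .map(\.name)
}

// Example: the value is available in bb1 but not in bb2 (some path into bb2
// already consumed it). Destroying before bb1's terminator is correct;
// destroying at bb2's unreachable would overconsume on those paths.
let region = [
  RegionBlock(name: "bb1", successors: ["bb2"], endsInUnreachable: false, valueIsAvailable: true),
  RegionBlock(name: "bb2", successors: [], endsInUnreachable: true, valueIsAvailable: false),
]
print(availabilityBoundary(of: region))  // ["bb1"]
```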
rdar://116255254