- stop storing the parent task in the TaskGroup at the .swift level
- make sure that swift_taskGroup_isCancelled is implied by the parent
task being cancelled
- make the TaskGroup structs frozen
- make the withTaskGroup functions inlinable
- remove swift_taskGroup_create
- teach IRGen to allocate memory for the task group
- don't deallocate the task group in swift_taskGroup_destroy
To achieve the allocation change, introduce paired create/destroy builtins.
Furthermore, remove the _swiftRetain and _swiftRelease functions and
several calls to them. Replace them with uses of the appropriate builtins.
I should probably change the builtins to return retained, since they're
working with a managed type, but I'll do that in a separate commit.
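For context, here's a rough sketch of the user-facing pattern these changes serve (plain standard-library code, using today's `addTask` spelling):
```swift
// With the TaskGroup structs frozen and withTaskGroup inlinable, a call like
// this can specialize fully, and IRGen can allocate the group's storage itself.
func sumOfSquares(_ values: [Int]) async -> Int {
    await withTaskGroup(of: Int.self) { group in
        for v in values {
            group.addTask { v * v }  // child tasks observe cancellation via the parent
        }
        var total = 0
        for await partial in group { total += partial }
        return total
    }
}
```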
This isn't _terribly_ useful as-is, because the only constant mask you can currently reach from Swift is the zeroinitializer, but even that is quite useful for optimizing the repeating: initializer on SIMD. At some future point we should wire up generating constant masks for the .even, .odd, .high and .low properties (and also eventually make shufflevector take non-constant masks in LLVM). But this is enough to be useful, so let's get it in.
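As a rough illustration (ordinary stdlib SIMD code, nothing new here): the splat below is the case the zeroinitializer mask already covers, and the half-vector properties are the future constant-mask candidates mentioned above.
```swift
let splat = SIMD4<Float>(repeating: 2.0)        // broadcast: a zeroinitializer-mask shuffle
let wide = SIMD8<Float>(1, 2, 3, 4, 5, 6, 7, 8)
print(splat, wide.evenHalf, wide.highHalf)      // candidates for constant shuffle masks
```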
The comment in LowerHopToActor explains the design here.
We want SILGen to emit hops to actors, ignoring executors,
because it's easier to fully optimize in a world where deriving
an executor is a non-trivial operation. But we also want something
prior to IRGen to lower the executor derivation because there are
useful static optimizations we can do, such as doing the derivation
exactly once on a dominance path and strength-reducing the derivation
(e.g. exploiting static knowledge that an actor is a default actor).
There are probably phase-ordering problems with doing this so late,
but hopefully they're restricted to situations like actors that
share an executor. We'll want to optimize that eventually, but
in the meantime, this unblocks the executor work.
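A minimal sketch of the kind of source this applies to (standard concurrency code; the names are illustrative):
```swift
actor Logger {
    var lines: [String] = []
    func log(_ s: String) { lines.append(s) }
}

func run(_ logger: Logger) async {
    // SILGen emits hop_to_executor on the actor value itself; the pass later
    // lowers that to an executor derivation, done once on this dominance path.
    await logger.log("first")
    await logger.log("second")
}
```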
The immediate desire is to minimize the set of ABI dependencies
on the layout of an ExecutorRef. In addition to that, however,
I wanted to generally reduce the code size impact of an unsafe
continuation since it now requires accessing thread-local state,
and I wanted resumption to not have to create unnecessary type
metadata for the value type just to do the initialization.
Therefore, I've introduced a swift_continuation_init function
which handles the default initialization of a continuation
and returns a reference to the current task. I've also moved
the initialization of the normal continuation result into the
caller (out of the runtime), and I've moved the resumption-side
cmpxchg into the runtime (and prior to the task being enqueued).
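For reference, a minimal example of the construct whose codegen this changes (standard API; the function name is made up):
```swift
// The default continuation setup now goes through swift_continuation_init,
// and the caller, not the runtime, initializes the normal result slot.
func answer() async -> Int {
    await withUnsafeContinuation { (cont: UnsafeContinuation<Int, Never>) in
        cont.resume(returning: 42)  // the resumption-side cmpxchg lives in the runtime
    }
}
```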
Tasks shouldn't normally hog the actor context indefinitely after making a call that's bound to
that actor, since that prevents the actor from taking on other jobs that are waiting to run on
it. Set up SILGen so that it saves the current executor (using a new runtime
entry point) and hops back to it after every actor call, not only ones where the caller context
is also actor-bound.
The added executor hopping here also exposed a bug in the runtime implementation while processing
DefaultActor jobs, where if an actor job returned to the processing loop having already yielded
the thread back to a generic executor, we would still attempt to make the actor give up the thread
again, corrupting its state.
rdar://71905765
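A small sketch of the newly covered case (illustrative names):
```swift
actor Worker {
    func step() -> Int { 1 }
}

// The caller is not actor-bound, so previously no hop-back was emitted here.
func caller(_ w: Worker) async -> Int {
    let x = await w.step()  // hop to w's executor for the call...
    return x + 1            // ...then hop back, freeing w to run other jobs
}
```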
Most of the async runtime functions have been changed to not
expect the task and executor to be passed in. When knowing the
task and executor is necessary, there are runtime functions
available to recover them.
The biggest change I had to make to a runtime function signature
was to swift_task_switch, which has been altered to expect to be
passed the context and resumption function instead of requiring
the caller to park the task. This has the pleasant consequence
of allowing the implementation to very quickly turn around when
it recognizes that the current executor is satisfactory. It does
mean that on arm64e we have to sign the continuation function
pointer as an argument and then potentially resign it when
assigning into the task's resume slot.
rdar://70546948
Create a TargetDispatchClassMetadata for Swift metadata that also has a dispatch-compatible vtable. Dispatch leaves room for ObjC class metadata so the two regions don't overlap. (The vtable currently consists of a single dummy entry; this will be filled out later.)
Rearrange the Job and AsyncTask hierarchy so that AsyncTask inherits only from Job, which in turn inherits from HeapObject. This gives all Job instances a dispatch-compatible isa field. It also gives them a refcount word, which is wasted on instances that aren't AsyncTask instances. Maybe we can find some use for that space in the future.
rdar://75227953
In their previous form, the non-`_f` variants of these entry points were unused, and IRGen
lowered the `createAsyncTask` builtins to use the `_f` variants with a large amount of caller-side
codegen to manually unpack closure values. Amid all this, it also failed to make anyone responsible
for releasing the closure context after the task completed, causing every task creation to leak.
Redo the `swift_task_create_*` entry points to accept the two words of an async closure value
directly, and unpack the closure to get its invocation entry point and initial context size
inside the runtime. (Also get rid of the non-future `swift_task_create` variant, since it's unused
and it's subtly different in a lot of hairy ways from the future forms. Better to add it later
when it's needed than to have a broken unexercised version now.)
The underlying runtime functions really want to be able to consume the closure being used
to spawn the task, but the implementation was trying to hide this by introducing a retain
at IRGen time, which is not very ARC-optimizer-friendly. Correctly model the builtin operands
as consumed so that the ownership verifier allows them to be taken +1.
Empty types (such as structs without stored properties) have a
meaningless value for their address. To avoid crashes in the Thread
Sanitizer runtime, rather than passing this unspecified value as
the address of the inout access, skip emission of the runtime call.
The bug allowing unspecified behavior here has been present since we
first added TSan support for checking Swift access races -- but codegen
changes on arm64 have recently made crashes due to the bug much more
likely.
rdar://problem/47686212
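A minimal illustration of the affected pattern (hypothetical names):
```swift
struct Empty {}  // no stored properties, so its address is meaningless

func touch(_ value: inout Empty) {}

var e = Empty()
touch(&e)  // the inout access here no longer emits a TSan runtime call
```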
It would be more abstractly correct if this got DI support so
that we destroy the member if the constructor terminates
abnormally, but we can get to that later.
In derivatives of loops, no longer allocate boxes for indirect case payloads. Instead, use a custom pullback context in the runtime which contains a bump-pointer allocator.
When a function contains a differentiated loop, the closure context is a `Builtin.NativeObject`, which contains a `swift::AutoDiffLinearMapContext` and a tail-allocated top-level linear map struct (which represents the linear map struct that was previously directly partial-applied into the pullback). In branching trace enums, the payloads of previously indirect cases will be allocated by `swift::AutoDiffLinearMapContext::allocate` and stored as a `Builtin.RawPointer`.
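A small sketch of the kind of function affected, assuming a toolchain with differentiable programming available (`import _Differentiation`):
```swift
import _Differentiation

@differentiable(reverse)
func scaledSum(_ x: Float) -> Float {
    var sum: Float = 0
    for _ in 0..<4 {   // the loop's branching trace enum payloads are now
        sum += x * x   // bump-pointer allocated in the linear map context
    }
    return sum
}
```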
The `createAsyncTask` entry point expects the parent task to be
"guaranteed" and the function to be "owned". However, that's not easy
to model in the operand ownership map, so we consider all operands to
be "guaranteed" and balance out the reference count when lowering the builtin.
This is an egregious hack that will create a little extra reference
count traffic.
`Builtin.createAsyncTask` takes flags, an optional parent task, and an
async/throwing function to execute, and passes it along to the
`swift_task_create_f` entry point to create a new (potentially child)
task, returning the new task and its initial context.
Implement a new builtin, `cancelAsyncTask()`, to cancel the given
asynchronous task. It lowers to a call to the runtime
operation `swift_task_cancel()`.
Use this builtin to implement Task.Handle.cancel().
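At the surface level, this is what the builtin ultimately powers (shown with today's `Task` spelling rather than the `Task.Handle` of the time):
```swift
let handle = Task {
    try await Task.sleep(nanoseconds: 1_000_000_000)
}
handle.cancel()  // lowers, via the builtin, to swift_task_cancel()
```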
This introduces a new builtin, `getCurrentAsyncTask()`, that produces a
reference to the current task. This builtin can only be used within
`async` functions, and IR generation merely grabs the task argument
and packages it up.
The type of this function is `() -> Builtin.NativeObject`, because we
don't currently have a Swift-level representation of tasks, and can
probably handle everything through builtins or runtime calls.
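The eventual user-facing counterpart looks roughly like this (today's standard API; at the time of this commit, tasks only surfaced as `Builtin.NativeObject`):
```swift
func reportCancellation() async {
    withUnsafeCurrentTask { task in
        if let task = task, task.isCancelled {  // task is nil outside a task context
            print("task was cancelled")
        }
    }
}
```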
Here, the following is implemented:
- Construction of the SwiftContext struct with the fields needed for calling
functions.
- Allocating and deallocating these Swift contexts via runtime calls
before calling async functions and after returning from them.
- Storing arguments (including bindings and the self parameter but not
including protocol fields for witness methods) and returns (both
direct and indirect).
- Calling async functions.
Additional things that still need to be done:
- protocol extension methods
- protocol witness methods
- storing yields
- partial applies
I forgot about this part of the design when I was working on this. To ensure
that the whole design works as expected, I included a small end-to-end test
using an experimental design for SIMD that uses polymorphic builtins to
exercise this functionality.
NOTE: The experimental design is only intended to exercise the code functionally.
rdar://48248417
TLDR: This patch introduces a new kind of builtin, a "polymorphic builtin". One
calls it like any other builtin, e.g.:
```
Builtin.generic_add(x, y)
```
but it has a contract: it must be specialized to a concrete builtin by the time
we hit Lowered SIL. In this commit, I add support for the following generic
operations:
Type            | Op
----------------|-----------
FloatOrVector   | FAdd
FloatOrVector   | FDiv
FloatOrVector   | FMul
FloatOrVector   | FRem
FloatOrVector   | FSub
IntegerOrVector | AShr
IntegerOrVector | Add
IntegerOrVector | And
IntegerOrVector | ExactSDiv
IntegerOrVector | ExactUDiv
IntegerOrVector | LShr
IntegerOrVector | Mul
IntegerOrVector | Or
IntegerOrVector | SDiv
IntegerOrVector | SRem
IntegerOrVector | Shl
IntegerOrVector | Sub
IntegerOrVector | UDiv
IntegerOrVector | Xor
Integer         | URem
NOTE: I only implemented support for the builtins in SIL and in SILGen. I am
going to implement the optimizer parts of this in a separate series of commits.
DISCUSSION
----------
Today, LLVM IR has polymorphic instructions. Yet at the Swift and SIL
levels we represent these operations as builtins whose names have the
operand type splatted into them. For example, adding two things in
LLVM:
```
%2 = add i64 %0, %1
%2 = add <2 x i64> %0, %1
%2 = add <4 x i64> %0, %1
%2 = add <8 x i64> %0, %1
```
Each of these add operations is performed by the same polymorphic instruction. In
contrast, we splat out these builtins in Swift today, i.e.:
```
let x, y: Builtin.Int32
Builtin.add_Int32(x, y)
let v, w: Builtin.Vec4xInt32
Builtin.add_Vec4xInt32(v, w)
...
```
In SIL, we translate these verbatim, and IRGen just lowers them to the
appropriate polymorphic instruction. Beyond being verbose, this prevents these
builtins (which need static types) from being used in polymorphic contexts where
we can guarantee that a static type will eventually be provided.
In contrast, the polymorphic builtins introduced in this commit can be passed
any type, with the proviso that the expert user using this feature can guarantee
that before we reach Lowered SIL, the generic_add has been eliminated. This is
enforced by IRGen asserting if passed such a builtin and by the SILVerifier
checking that the underlying builtin is never called once the module is in
Lowered SIL.
In forthcoming commits, I am going to add two optimizations that give stdlib
authors the tools needed to use this builtin:
1. I am going to add an optimization to constant propagation that changes a
"generic_*" op to the concretely typed builtin for its argument's type, when
that type is valid for the builtin (i.e. integer or vector).
2. I am going to teach the SILCloner how to specialize these as it inlines. This
ensures that when we transparent inline, we specialize the builtin automatically
and can then form SSA at -Onone using predictable memory access operations.
The main implication of these polymorphic builtins is that if an author is
not able to specialize the builtin, they need to ensure that after constant
propagation, the generic builtin has been DCEed. The general rule is that the
-Onone optimizer will constant-fold branches with constant integer operands, so
if one guards the operation with a bool of some sort, one can be guaranteed
that the unspecialized builtin never reaches codegen. I am considering adding
some sort of diagnostic to ensure that the stdlib writer has a good experience
(e.g. gets an error instead of crashing the compiler).
Returns `true` if `T.Type` is known to refer to a concrete type. The
implementation allows the optimizer to specialize this at -O and
eliminate conditional code.
Includes `Swift._isConcrete<T>(T.Type) -> Bool` wrapper function.
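A hedged sketch of how the wrapper might be used (illustrative function; `_isConcrete` is unofficial, underscored API):
```swift
func describe<T>(_ value: T) -> String {
    if _isConcrete(T.self) {           // folds to `true` in specialized code at -O
        return "specialized for \(T.self)"
    }
    return "unspecialized generic"     // branch eliminated when T is concrete
}
```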
A function may be eliminated as dead code after initial builtin lowering
occurs. When this happens, the function is no longer guaranteed an entry in
the profile symbol table, so its coverage record should be dropped.
rdar://42564768
When we generate code that asks for complete metadata for a fully concrete specific type that
doesn't have trivial metadata access, like `(Int, String)` or `[String: [Any]]`,
generate a cache variable that points to a mangled name, and use a common accessor function
that turns that cache variable into a pointer to the instantiated metadata. This saves a bunch
of code size, and should have minimal runtime impact, since the demangling of any string only
has to happen once.
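Roughly, the kind of code that benefits (ordinary Swift; the cache variable and accessor are emitted behind the scenes):
```swift
func anyDictionaryType() -> Any.Type {
    // A fully concrete type with non-trivial metadata access: IRGen now
    // caches a mangled name and calls a common accessor to instantiate it.
    return [String: [Any]].self
}
```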
This mostly just works, though it exposed a couple of issues:
- Mangling a type ref including objc protocols didn't cause the objc protocol record to get
instantiated. Fixed as part of this patch.
- The runtime type demangler doesn't correctly handle retroactive conformances. If there are
multiple retroactive conformances in a process at runtime, then even though the mangled string
refers to a specific conformance, the runtime still just picks one without listening to the
mangler. This is left to fix later, rdar://problem/53828345.
There is some more follow-up work that we can do to further improve the gains:
- We could improve the runtime-provided entry points, adding versions that don't require size
to be cached, and which can handle arbitrary metadata requests. This would allow for mangled
names to also be used for incomplete metadata accesses and improve code size of some generic
type accessors. However, we'd only be able to take advantage of the new entry points in
OSes that ship a new runtime.
- We could choose to always symbolic reference all type references, which would generally reduce
the size of mangled strings, as well as make runtime demangling more efficient, since it wouldn't
need to hit the runtime caches. This would however require that we be able to handle symbolic
references across files in the MetadataReader in order to avoid regressing remote mirror
functionality.
To display a failure message in the debugger, create a function in the debug info whose name is the failure message.
The debug location of the trap/cond_fail is then wrapped into this function, and the function is marked as "inlined".
When the debugger stops at the trap instruction, it displays the inlined function, which reads as the failure message.
For example:
```
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0)
    frame #0: 0x0000000100000cbf a.out`testit3(_:) [inlined] Unexpectedly found nil while unwrapping an Optional value at test.swift:14:11 [opt]
   11
   12   @inline(never)
   13   func testit(_ a: Int?) -> Int {
-> 14       return a!
   15   }
   16
```
This change is currently not enabled by default, but can be enabled with the option "-Xllvm -enable-trap-debug-info".
Enabling this feature needs some changes in lldb. When the lldb part is done, this option can be removed and the feature enabled by default.
Add a builtin of type String -> Builtin.RawPointer that, given a string constructed from a
literal, returns the address of the string literal in the global string
table of the compiled binary as a pointer.