The immediate desire is to minimize the set of ABI dependencies
on the layout of an ExecutorRef. In addition to that, however,
I wanted to reduce the code-size impact of an unsafe continuation,
since it now requires accessing thread-local state, and I wanted
resumption not to have to create unnecessary type metadata for the
value type just to do the initialization.
Therefore, I've introduced a swift_continuation_init function
which handles the default initialization of a continuation
and returns a reference to the current task. I've also moved
the initialization of the normal continuation result into the
caller (out of the runtime), and I've moved the resumption-side
cmpxchg into the runtime (and prior to the task being enqueued).
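For orientation, here is a minimal Swift-level sketch (not part of this
change) of the unsafe-continuation pattern these entry points serve:

  // Awaiting sets up the continuation context (swift_continuation_init);
  // resuming performs the runtime-side cmpxchg before the task is enqueued.
  func readValue() async -> Int {
    await withUnsafeContinuation { continuation in
      // 42 stands in for a value produced by some callback-based API.
      continuation.resume(returning: 42)
    }
  }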
Fill out the metadata for Job to have a Dispatch-compatible vtable. When available, use the dispatch_enqueue_onto_queue_4Swift function to enqueue Jobs directly onto queues. Otherwise, keep using dispatch_async_f as we have been.
rdar://75227953
Take the existing CompatibilityOverride mechanism and generalize it so it can be used in both the runtime and Concurrency libraries. The mechanism is preprocessor-heavy, so this requires some tricks. Use the SWIFT_TARGET_LIBRARY_NAME define to distinguish the libraries, and use a different .def file and mach-o section name accordingly.
We want the global/main executor functions to be a little more flexible. Instead of using the override mechanism, we expose function pointers that can be set by the compatibility library, or by any other code that wants to use a custom implementation.
rdar://73726764
CFRunLoopRun returns once it finishes, though the kernel may clean us up
before we get there. We effectively have a race condition between the
kernel cleaning us up and returning from a never-returning function.
Small programs likely get cleaned up before reaching the ud2 instruction
emitted after the never-returning function in Swift, so they don't
notice, but programs of sufficient size do. At that size, the program
will crash after what the programmer expects to be the end of their
program.
Throwing functions pass the error result in `swiftself` to the resume
partial function.
Therefore, converting `() async -> ()` to `() async throws -> ()` is not ABI-compatible.
TODO: go through remaining failing IRGen async tests and replace the
illegal convert_functions.
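A Swift-level illustration (with hypothetical names) of a conversion that
can no longer be expressed as a convert_function and needs a thunk instead:

  // The source-level conversion is fine, but because throwing async
  // functions pass the error result to the resume partial function, the
  // two function types have different ABIs.
  func produce() async -> Int { 1 }

  let erased: () async throws -> Int = produce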
Most of the async runtime functions have been changed to not
expect the task and executor to be passed in. When the task or
executor is needed, runtime functions are available to recover them.
The biggest change I had to make to a runtime function signature
was to swift_task_switch, which has been altered to expect to be
passed the context and resumption function instead of requiring
the caller to park the task. This has the pleasant consequence
of allowing the implementation to very quickly turn around when
it recognizes that the current executor is satisfactory. It does
mean that on arm64e we have to sign the continuation function
pointer as an argument and then potentially re-sign it when
assigning it into the task's resume slot.
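As a hedged Swift-level sketch (illustrative names), this is the kind of
suspension point that lowers to swift_task_switch:

  // An await that may hop to another executor. The caller now passes the
  // context and resumption function to swift_task_switch; if the current
  // executor already satisfies the target, the runtime can resume
  // immediately instead of parking and re-enqueueing the task.
  @MainActor func updateUI() {}

  func refresh() async {
    await updateUI()   // possible hop to the main executor, then back
  }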
rdar://70546948
Previously, thick async functions were sometimes represented as a pair
of (AsyncFunctionPointer, nullptr) (e.g. when the thick function was
produced via a thin_to_thick_function) and sometimes as a pair of
(FunctionPointer, ThickContext) (when the thick function was produced by
a partial_apply), with the size stored in the slot of the ThickContext.
That optimized for the wrong case: partial applies of dynamic async
functions; in that case, there is no appropriate AsyncFunctionPointer to
form when lowering the partial_apply instruction. The far more common
case is to know exactly which function is being partially applied. In
that case, we can form the appropriate AsyncFunctionPointer.
Furthermore, the previous representation made calling a thick function
more complex: it was always necessary to check whether the context was
in fact null and then proceed along two different paths depending on the
result.
Here, that is corrected by creating a thunk in a mandatory IRGen SIL
pass when the function being partially applied is dynamic. That new
thunk is then partially applied in place of the original partial_apply
of the dynamic function.
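A hypothetical Swift-level sketch of the two partial-application cases
(names are illustrative, not taken from this change):

  // Statically known callee: we know exactly which function is being
  // partially applied, so an AsyncFunctionPointer can be formed directly.
  func add(_ x: Int, _ y: Int) async -> Int { x + y }
  let addOne: (Int) async -> Int = { await add(1, $0) }

  // Dynamic callee: no single AsyncFunctionPointer is known when lowering
  // the partial_apply, so the mandatory IRGen SIL pass introduces a thunk
  // and partially applies the thunk instead.
  protocol Fetcher {
    func fetch(_ id: Int) async -> String
  }

  func makeFetch(_ fetcher: Fetcher) -> (Int) async -> String {
    return fetcher.fetch
  }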
Create a TargetDispatchClassMetadata for Swift metadata that also has a dispatch-compatible vtable. Dispatch leaves room for ObjC class metadata so the two regions don't overlap. (The vtable currently consists of a single dummy entry; this will be filled out later.)
Rearrange the Job and AsyncTask hierarchy so that AsyncTask inherits only from Job, which in turn inherits from HeapObject. This gives all Job instances a dispatch-compatible isa field. It also gives them a refcount word, which is wasted on instances that aren't AsyncTask instances. Maybe we can find some use for that space in the future.
rdar://75227953
In their previous form, the non-`_f` variants of these entry points were unused, and IRGen
lowered the `createAsyncTask` builtins to use the `_f` variants with a large amount of caller-side
codegen to manually unpack closure values. Amid all this, it also failed to make anyone responsible
for releasing the closure context after the task completed, causing every task creation to leak.
Redo the `swift_task_create_*` entry points to accept the two words of an async closure value
directly, and unpack the closure to get its invocation entry point and initial context size
inside the runtime. (Also get rid of the non-future `swift_task_create` variant, since it's unused
and it's subtly different in a lot of hairy ways from the future forms. Better to add it later
when it's needed than to have a broken unexercised version now.)
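A hedged Swift-level sketch of the task creation these entry points back
(written against the present-day Task API for illustration):

  // The closure value's two words (invocation function and context) are now
  // handed to swift_task_create_* directly; the runtime unpacks them and
  // releases the context once the task finishes, so creation no longer leaks.
  func spawn() -> Task<Int, Never> {
    Task.detached { 6 * 7 }
  }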
I'm about to replace these with new variations that correctly handle spawning a task with
a closure as its initial entry point, without leaking or requiring large amounts of caller-side
code generation.
I have identified the following conceptual synchronization points at
which task data and computation can cross thread boundaries. We need to
model these in TSan to avoid false positives:
Awaiting an async task (`AsyncTask::waitFuture`), which has two cases:
1) The task has completed (`AsyncTask::completeFuture`). Everything
that happened during task execution "happened before" the point
where we access its result. We synchronize on the *awaited* task.
2) The task is still executing: the current execution is suspended and
the waiting task is put into the list of "waiters". Once the awaited
task completes, the waiters will be scheduled. In this case, we
synchronize on the *waiting* task.
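A minimal Swift sketch of this await point (expensiveComputation is a
stand-in name):

  // Depending on timing, the awaited child has either already completed
  // (case 1: synchronize on the awaited task) or is still running, so the
  // parent suspends and is resumed when it finishes (case 2: synchronize
  // on the waiting task).
  func expensiveComputation() async -> Int { 42 }

  func work() async -> Int {
    async let child = expensiveComputation()   // spawn the awaited task
    return await child                         // AsyncTask::waitFuture
  }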
Note: there is a similar relationship for task groups, which I still have
to investigate. I will follow up with an additional patch and tests.
Actor job execution (`swift::runJobInExecutorContext`):
Job scheduling (`swift::swift_task_enqueue`) precedes/happens before job
execution. Also all job executions (switching actors or suspend/resume)
are serially ordered.
Note: the happens-before edge for schedule->execute isn't strictly
needed in most cases since scheduling calls through to libdispatch's
`dispatch_async_f`, which we already intercept and model in TSan.
However, I am trying to model Swift Task semantics to increase the
chance that things continue to work if the "task backend" is
switched out.
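A hedged Swift sketch (illustrative names) of the ordering being modeled:

  // Two tasks enqueue jobs on the same actor. Each job's scheduling happens
  // before its execution, and jobs on one actor run serially, so the accesses
  // to `hits` are ordered and TSan should not report a race.
  actor Stats {
    var hits = 0
    func record() { hits += 1 }
  }

  func exercise(_ stats: Stats) async {
    let first = Task { await stats.record() }
    let second = Task { await stats.record() }
    _ = await (first.value, second.value)
  }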
rdar://74256733
First, just call an async -> T function instead of forcing the caller
to piece together which case we're in and perform its own copy. This
ensures that the task is actually kept alive properly.
Second, now that we no longer implicitly depend on the waiting tasks
being run synchronously, go ahead and schedule them to run on the
global executor.
This solves some problems which were blocking the work on TLS-ifying
the task/executor state.
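A hedged Swift-level sketch (hypothetical names) of the second point:
several tasks awaiting the same future, whose waiters are now enqueued on
the global executor when it completes rather than being run synchronously:

  // When `shared` completes, each waiting task is scheduled onto the global
  // executor instead of being run inline by the completing thread.
  func fanOut() async -> [Int] {
    let shared = Task { 21 * 2 }
    let first = Task { await shared.value }
    let second = Task { await shared.value }
    return await [first.value, second.value]
  }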
This is conditional on UseAsyncLowering and in the future should also be
conditional on `clangTargetInfo.isSwiftAsyncCCSupported()` once that
support is merged.
Update tests to work with either swiftcc or swifttailcc.
Previously, the error stored in the async context was of type
SwiftError *. In order to enable the context to be callee-released,
make it indirect and change its type to SwiftError **.
rdar://71378532