We were detaching the child by just modifying the list, but the cancellation path assumed that this would never be done without holding the task status lock.
This patch just fixes the current runtime; the back-deployment side is complicated.
Fixes rdar://88398824
Moved all the threading code to one place. Added explicit support for
Darwin, Linux, Pthreads, C11 threads and Win32 threads, including new
implementations of Once for Linux, Pthreads, C11 and Win32.
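For illustration, here is a minimal sketch of the Once pattern on two of the packages listed above; the helper names are ours, not the runtime's interface:
```
// Minimal sketch; the helper names are illustrative, not the runtime's API.
#include <mutex>       // std::once_flag / std::call_once
#include <pthread.h>   // pthread_once, for the Pthreads package

static void doInit() { /* one-time setup goes here */ }

// Pthreads package: pthread_once guarantees doInit runs exactly once.
static pthread_once_t pthreadsFlag = PTHREAD_ONCE_INIT;
void initOncePthreads() { pthread_once(&pthreadsFlag, doInit); }

// Portable C++ alternative: std::call_once over a std::once_flag.
static std::once_flag stdFlag;
void initOnceStd() { std::call_once(stdFlag, doInit); }
```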
rdar://90776105
SWIFT_STDLIB_SINGLE_THREADED_RUNTIME is too much of a blunt instrument here.
It covers both the Concurrency runtime and the rest of the runtime, but we'd
like to be able to have, for instance, a single-threaded Concurrency runtime
while the rest of the runtime remains thread safe.
So: rename it to SWIFT_STDLIB_SINGLE_THREADED_CONCURRENCY and make it just
control the Concurrency runtime, then add a SWIFT_STDLIB_THREADING_PACKAGE
setting at the CMake/build-script level, which defines
SWIFT_STDLIB_THREADING_xxx where xxx depends on the chosen threading package.
This is especially useful on systems that offer a choice of threading packages.
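For example (the per-package macro spellings below are an assumption based on the SWIFT_STDLIB_THREADING_xxx scheme described above), runtime sources can then select an implementation with the preprocessor:
```
// Illustrative only: the per-package macro names are assumed from the
// SWIFT_STDLIB_THREADING_xxx scheme described above.
#if defined(SWIFT_STDLIB_THREADING_DARWIN) || defined(SWIFT_STDLIB_THREADING_PTHREADS)
  #include <pthread.h>
#elif defined(SWIFT_STDLIB_THREADING_C11)
  #include <threads.h>
#elif defined(SWIFT_STDLIB_THREADING_WIN32)
  #include <windows.h>
#else
  #error "No threading package selected"
#endif
```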
rdar://90776105
Apply a blanket pass of including `<new>` for the placement new allocations
and qualifying the calls to the global placement operator new with `::`. This
should repair the Android ARMv7 builds.
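A minimal, self-contained illustration of the idiom (the type here is hypothetical; the runtime applies the same pattern to its own allocations):
```
#include <new>       // declares the global placement operator new
#include <cstddef>

struct Example { int value; explicit Example(int v) : value(v) {} };

int main() {
  alignas(Example) unsigned char buffer[sizeof(Example)];
  // The :: qualification resolves to the global placement operator new
  // from <new>, rather than any class- or namespace-scope overload.
  Example *e = ::new (static_cast<void *>(buffer)) Example(42);
  e->~Example();   // placement-new'd objects need an explicit destructor call
  return 0;
}
```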
A task can be in one of 4 states over its lifetime:
(a) suspended
(b) enqueued
(c) running
(d) completed
This change provides priority inversion avoidance support when a task gets
escalated while it is in state (a), (c), or (d).
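Purely as an illustration of the lifecycle (the names and encoding below are not the runtime's actual status representation):
```
// Illustrative only; the runtime packs this state into its task status word.
enum class TaskLifecycleState {
  Suspended,   // (a) escalation supported by this change
  Enqueued,    // (b) escalation not yet handled by this change
  Running,     // (c) escalation supported by this change
  Completed,   // (d) escalation supported by this change
};
```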
Radar-Id: rdar://problem/76127624
This change adds support for WASI in the stdlib tests. Some tests that expect a crash had to be disabled, since there is currently no way to observe such a crash from a WASI host.
This change has two parts to it:
1. Add a new interface (addStatusRecordWithChecks) for adding task
status records that also takes a function ref. The function ref is used
to evaluate whether the current state of the parent task has any changes
that need to be propagated to the newly created child task (see the
sketch after this list).
This is necessary to prevent the following race between task creation
and concurrent cancellation and escalation:
a. The parent task creates a child task. It does lazy relaxed loads on its own
state while doing so and propagates this state to the child.
b. The child task is created but has not yet been attached to the parent
task/task group.
c. The parent task gets cancelled by another thread.
d. The child task gets linked into the parent's task status records, but no
re-evaluation has happened to account for changes that might have happened to
the parent after (a).
2. Move the status record management functions from
Runtime/Concurrency.h to TaskPrivate.h. Remove any corresponding
overrides that are no longer needed. Remove the unused tryAddStatusRecord
method, whose functionality is now provided by addStatusRecordWithChecks.
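Here is a self-contained toy of the race-avoidance pattern from (1); Parent/Child and the plain mutex are simplifications of our own, not the runtime's actual addStatusRecordWithChecks implementation:
```
// Self-contained toy of the race-avoidance pattern; Parent/Child and the
// plain mutex are simplifications, not the runtime's actual types.
#include <atomic>
#include <mutex>
#include <vector>

struct Child { std::atomic<bool> cancelled{false}; };

struct Parent {
  std::mutex statusLock;               // stands in for the task status lock
  std::atomic<bool> cancelled{false};
  std::vector<Child *> children;       // stands in for the status record list

  void spawn(Child *child) {
    bool snapshot = cancelled.load(std::memory_order_relaxed);   // step (a)
    // ... the child is configured from `snapshot` here, unattached ... (b)
    std::lock_guard<std::mutex> guard(statusLock);
    children.push_back(child);                                   // step (d)
    // Re-check under the lock: a canceller running at step (c) either sees
    // the child in `children`, or its write to `cancelled` is visible here.
    if (cancelled.load(std::memory_order_relaxed) && !snapshot)
      child->cancelled.store(true, std::memory_order_relaxed);
  }

  void cancelAll() {                   // the concurrent canceller, step (c)
    std::lock_guard<std::mutex> guard(statusLock);
    cancelled.store(true, std::memory_order_relaxed);
    for (Child *c : children)
      c->cancelled.store(true, std::memory_order_relaxed);
  }
};
```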
Radar-Id: rdar://problem/86347801
handler of parent task that created the group
Change the comment in TaskGroup.swift to make clear that only the parent task
can call cancelAll on the group.
Add tests to verify that mutating the task group from child tasks will fail.
Radar-Id: rdar://problem/86346865
When a task is adding new children to a task group, we need to
synchronize with the task status record lock of the parent task that owns the
task group, to prevent races with concurrent cancellation and escalation.
Radar-Id: rdar://problem/86311782
A `compare_exchange_weak` can spuriously return false, regardless of
whether a concurrent access happened. This was causing a null-pointer
dereference in TaskGroupImpl::poll in a narrow circumstance.
The dereference failure only appears when using the `arm64`
slice of the runtime library, since Clang will use `ldxr/stxr` for
synchronization on such targets. The weak form can fail spuriously without
retrying, whereas the strong form retries internally and only fails when the
values genuinely differ.
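A small standalone example of the distinction (generic std::atomic code, not the runtime's):
```
// Demonstrates why compare_exchange_weak needs a retry loop: it may fail
// spuriously (notably on LL/SC targets such as arm64), whereas
// compare_exchange_strong only fails when the value really differs.
#include <atomic>
#include <cassert>

int main() {
  std::atomic<int> status{0};

  // One-shot update: use the strong form, since a spurious failure here
  // would wrongly look like a lost race (the bug pattern described above).
  int expected = 0;
  bool won = status.compare_exchange_strong(expected, 1);
  assert(won && status.load() == 1);

  // Loop update: the weak form is fine (and often cheaper) because a
  // spurious failure just causes another iteration.
  int old = status.load();
  while (!status.compare_exchange_weak(old, old + 1)) {
    // `old` was refreshed with the current value; retry.
  }
  assert(status.load() == 2);
  return 0;
}
```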
resolves rdar://84192672
The goal here is not to eventually implement a concurrent thread
pool ourselves. We're just making it easier for integrators who
have their own pool and don't want to use Dispatch to build the
Swift concurrency runtime. Just hook the right functions and
you should be fine.
The necessary functions to hook are:
- swift_task_enqueueGlobal
- swift_task_enqueueGlobalAfterDelay
The following functions *would* be necessary to hook:
- swift_task_enqueueMainExecutor
- swift_task_asyncMainDrainQueue (only if you have an async main?)
However, this configuration does not currently properly support
the main executor, and so `@MainActor` should be avoided for now.
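As a purely hypothetical sketch of the shape of such an integration (none of the names below are the runtime's real API; the actual signatures of the functions listed above live in the Concurrency runtime headers and should be taken from there):
```
// Hypothetical sketch only: RuntimeJob, runJob and the shim names are
// placeholders, not the runtime's real declarations.
#include <cstdint>
#include <functional>
#include <queue>

struct RuntimeJob { std::function<void()> work; };
static void runJob(RuntimeJob *job) { job->work(); }   // stand-in for the
                                                       // runtime's job-run entry

struct MyPool {                                        // the integrator's pool
  std::queue<std::function<void()>> queue;
  void submit(std::function<void()> fn) { queue.push(std::move(fn)); }
};
static MyPool pool;

// Shim standing in for swift_task_enqueueGlobal: hand the job to the pool.
void myEnqueueGlobal(RuntimeJob *job) {
  pool.submit([job] { runJob(job); });
}

// Shim standing in for swift_task_enqueueGlobalAfterDelay: same idea, but the
// pool's own timer facility would defer the submit by `nanoseconds` (elided).
void myEnqueueGlobalAfterDelay(std::uint64_t nanoseconds, RuntimeJob *job) {
  (void)nanoseconds;
  pool.submit([job] { runJob(job); });
}
```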
rdar://83513751
If we didn't do this (and we didn't), the tasks get released as we
perform the next() impl, and move the value from the ready task to the
waiting task. Then, the ready task gets destroyed.
But as the task group exits, it performs a cancelAll() and that
iterates over all records. Those records were not removed previously
(!!!), which meant we were pointing at now-deallocated tasks.
Previously this worked because we didn't deallocate the tasks, so they
leaked, but we didn't crash. With the memory leak fixed, this began to
crash since we'd attempt to cancel already destroyed tasks.
Solution:
- Remove task records whenever they complete a waiting task.
- This can ONLY be done by the "group-owning task" itself, because the
contract is that ONLY this task is allowed to modify the records.
- It MUST NOT be done by the completing tasks as they complete, as that
would race with the owning task modifying this linked list of child
tasks in the group record.
The proper handling of task group child tasks is that:
- if a completing child immediately fulfills a waiting task, we don't need to
retain it
- we just move the value to the waiting task and can destroy the child
- if we need to store the ready task until a waiting task arrives (i.e. until
some task hits `await group.next()`), then we need to retain the ready
task.
- as the waiting task arrives, we move the value from the ready task
to the waiting task, and swift_release the ready task -- it will now
be destroyed safely.
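A toy model of the retain/release discipline above; ToyTask/ToyGroup are our own simplifications, not the runtime's AsyncTask/TaskGroupImpl:
```
// Toy model of the retain/release discipline above; ToyTask/ToyGroup are
// simplifications, not the runtime's AsyncTask/TaskGroupImpl.
#include <optional>

struct ToyTask {
  int refCount = 1;
  int result = 0;
  void retain()  { ++refCount; }
  void release() { if (--refCount == 0) delete this; }
};

struct ToyGroup {
  ToyTask *readyTask = nullptr;          // completed child waiting for next()
  std::optional<int> *waiter = nullptr;  // a parked next() call, if any

  // Called when a child task completes with `value`.
  void offer(ToyTask *child, int value) {
    if (waiter) {                 // a waiter is already parked: move the value
      *waiter = value;            // across and we're done -- no retain needed.
      waiter = nullptr;
    } else {                      // no waiter yet: the group must keep the
      child->retain();            // ready child alive until next() consumes it.
      child->result = value;
      readyTask = child;
    }
  }

  // Called by the group-owning task when it hits `await group.next()`.
  std::optional<int> next(std::optional<int> &slot) {
    if (readyTask) {
      int value = readyTask->result;   // move the value to the waiting task,
      readyTask->release();            // then release the ready child so it
      readyTask = nullptr;             // can be destroyed safely.
      return value;
    }
    waiter = &slot;                    // otherwise park until offer() runs.
    return std::nullopt;
  }
};
```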
`SWIFT_STDLIB_SINGLE_THREADED_RUNTIME` mode has been broken for a long time.
This patch guards some includes and uses of libdispatch headers so that platforms
that don't support libdispatch can build the cooperative executor runtime.
It also adds missing implementations for cooperative mode.
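A hedged illustration of the guarding pattern (the macro and helper names here are ours; the runtime's actual condition may differ):
```
// Illustrative only; the runtime's actual guard condition and helper names
// may differ.
#if __has_include(<dispatch/dispatch.h>)
  #include <dispatch/dispatch.h>
  #define HAVE_DISPATCH 1
#else
  #define HAVE_DISPATCH 0
#endif

// Hypothetical helper showing the shape of a guarded code path.
void scheduleGlobally(void (*fn)(void *), void *ctx) {
#if HAVE_DISPATCH
  dispatch_async_f(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                   ctx, fn);
#else
  fn(ctx);   // cooperative mode: run the work inline instead
#endif
}
```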
Tracking this as a single bit is actually largely uninteresting
to the runtime. To handle priority escalation properly, we really
need to track this at a finer grain of detail: recording that the
task is running on a specific thread, enqueued on a specific actor,
or so on. But starting by tracking a single bit is important for
two reasons:
- First, it's more realistic about the performance overheads of
tasks: we're going to be doing this tracking eventually, and
the cost of that tracking will be dominated by the atomic
access, so doing that access now sets the baseline about right.
- Second, it ensures that we've actually got runtime involvement
in all the right places to do this tracking.
Apropos of the latter: there was no runtime involvement with
awaiting a continuation, which is a point at which the task
potentially transitions from running to suspended. We must do
the tracking as part of this transition, rather than recognizing
in the run-loops that a task is still active and treating it as
having suspended, because the latter point potentially races with
the resumption of the task. To handle this, I've introduced
a runtime function, swift_continuation_await, which performs the await
rather than inlining the atomic operation on the continuation.
As part of doing this work, I've also fixed a bug where we failed
to load-acquire in swift_task_escalate before walking the task
status records to invoke escalation actions.
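A generic illustration of why the acquire matters (this is not the runtime's code; it only shows the release/acquire pairing between publishing a record and walking the list):
```
// Generic release/acquire illustration; not the runtime's record types.
#include <atomic>

struct Record { Record *next; void (*escalate)(Record *, int); };

static std::atomic<Record *> records{nullptr};

void publishRecord(Record *r) {
  Record *head = records.load(std::memory_order_relaxed);
  do {
    r->next = head;
  } while (!records.compare_exchange_weak(head, r,
                                          std::memory_order_release,
                                          std::memory_order_relaxed));
}

void escalateAll(int newPriority) {
  // The acquire pairs with the release above, so the record's fields
  // (including `next` and `escalate`) are visible before we walk the list.
  for (Record *r = records.load(std::memory_order_acquire); r; r = r->next)
    r->escalate(r, newPriority);
}
```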
I've also fixed several places where the handling of task statuses
may have accidentally allowed the task to revert to uncancelled.
This is to work around a bug in llvm's codegen when emitting the
callee-pop stack adjustment on a regular return from a swiftasync
function (vs. a tail call).
Without the workaround we fail to emit the callee-pop stack adjustment
leading to a mis-aligned stack on return.
```
pop {r7, pc}
add sp, #16
```
Workaround for rdar://79726989
Changes the task, taskGroup, and asyncLet wait function call ABIs.
To reduce code size, pass the context parameters and resumption function
as arguments to the wait function.
This means that the suspend point does not need to store the parent context
and resumption function in the suspend point's context.
```
void swift_task_future_wait_throwing(
    OpaqueValue *result,
    SWIFT_ASYNC_CONTEXT AsyncContext *callerContext,
    AsyncTask *task,
    ThrowingTaskFutureWaitContinuationFunction *resume,
    AsyncContext *callContext);
```
The runtime passes the caller context to the resume entry point, saving
the load of the parent context in the resumption function.
This patch adds a `Metadata *` field to `GroupImpl`. The await entry
points no longer pass the metadata pointer, and there is a path through
the runtime where the task future is no longer available.