Each trace point is declared as a function in the new `Tracing.h` header. These functions are called from the appropriate places in the concurrency runtime.
On Darwin, an implementation of these functions is provided which uses the `os/signpost.h` API to emit signpost events/intervals.
When the signpost API is not available, no-op stub implementations are used instead. Support for other OSes can be added by implementing the trace functions for that OS.
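For illustration, here is a minimal sketch of the pattern. The function names and subsystem string are assumptions for the sketch, not the actual Tracing.h contents:

```cpp
// A minimal sketch: each trace point is a function that either emits a
// signpost or compiles down to a no-op stub.
#if __has_include(<os/signpost.h>)
#include <os/signpost.h>

static os_log_t getTaskLog() {
  static os_log_t log =
      os_log_create("com.example.concurrency", "Tasks");  // hypothetical
  return log;
}

inline void traceTaskCreated(const void *task) {
  os_log_t log = getTaskLog();
  os_signpost_id_t id = os_signpost_id_make_with_pointer(log, task);
  os_signpost_interval_begin(log, id, "Task", "task created");
}

inline void traceTaskCompleted(const void *task) {
  os_log_t log = getTaskLog();
  os_signpost_id_t id = os_signpost_id_make_with_pointer(log, task);
  os_signpost_interval_end(log, id, "Task");
}
#else
// No-op stubs for platforms without the signpost API; another OS can
// substitute its own tracer by implementing these same functions.
inline void traceTaskCreated(const void *) {}
inline void traceTaskCompleted(const void *) {}
#endif
```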
rdar://81858487
This change has two parts to it:
1. Add a new interface (addStatusRecordWithChecks) for adding task
status records that also takes a function ref. The function ref is
used to evaluate whether the current state of the parent task has any
changes that need to be propagated to the newly created child task.
This is necessary to prevent the following race between task creation
and concurrent cancellation or escalation (a sketch of the re-check
pattern follows this list):
a. The parent task creates a child task. While doing so, it performs
lazy relaxed loads on its own state and propagates that state to the
child.
b. The child task is created but has not yet been attached to the
parent task/task group.
c. The parent task gets cancelled by another thread.
d. The child task gets linked into the parent's task status records,
but no re-evaluation happens to account for changes that might have
happened to the parent after (a).
2. Move the status record management functions from
Runtime/Concurrency.h to TaskPrivate.h. Remove any corresponding
overrides that are no longer needed. Remove the unused
tryAddStatusRecord method, whose functionality is now provided by
addStatusRecordWithChecks.
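Here is a self-contained model of the re-check idea. The type names and the function-ref signature below are illustrative assumptions, not the actual runtime API:

```cpp
#include <atomic>
#include <cstdint>
#include <functional>

// Simplified model: a status record is linked with a CAS against the
// task's whole status word, so the caller always publishes against a
// status at least as new as the one it observed.
struct TaskStatusRecord {
  TaskStatusRecord *next = nullptr;
};

struct ActiveTaskStatus {
  uintptr_t flags = 0;                   // bit 0 = cancelled, in this model
  TaskStatusRecord *records = nullptr;   // head of the record list
};

struct Task {
  std::atomic<ActiveTaskStatus> status{ActiveTaskStatus{}};

  void addStatusRecordWithChecks(
      TaskStatusRecord *record,
      const std::function<void(const ActiveTaskStatus &)> &recheck) {
    ActiveTaskStatus old = status.load(std::memory_order_relaxed);
    ActiveTaskStatus desired;
    do {
      // Link the record against whatever status we currently observe.
      record->next = old.records;
      desired = old;
      desired.records = record;
    } while (!status.compare_exchange_weak(old, desired,
                                           std::memory_order_release,
                                           std::memory_order_relaxed));
    // The record is now published against the *current* status, so the
    // caller can propagate anything (cancellation, escalation) that
    // happened to the parent after step (a) above.
    recheck(desired);
  }
};
```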
Radar-Id: rdar://problem/86347801
Allow cancelAll to be called only from the handler of the parent task
that created the group.
Change the comment in TaskGroup.swift to enforce that only the parent
task can call cancelAll on the group.
Add tests to verify that mutating the task group from child tasks fails.
Radar-Id: rdar://problem/86346865
Since locking the task status is a presumed-uncommon case, the
trade-offs inherent in the allocation patterns of AtomicWaitQueue
are very appropriate here, even more than they are for metadata
completion.
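As a rough illustration of the trade-off, here is a deliberately simplified model of the pattern: the locking path allocates a wait-queue object, and a small global lock manages that queue's lifetime, so the common unlocked path is a single atomic load. All names here are illustrative; the real AtomicWaitQueue differs in detail:

```cpp
#include <atomic>
#include <mutex>

// Guards publication and reference-counting of wait queues; only ever
// taken on the (presumed-uncommon) locking path.
static std::mutex globalQueueLock;

struct WaitQueue {
  std::mutex wait;                     // contending threads block on this
  unsigned refCount = 1;               // owner's reference
};

struct LockableStatus {
  std::atomic<WaitQueue *> queue{nullptr};   // null while unlocked

  WaitQueue *lock() {
    while (true) {
      WaitQueue *existing;
      {
        std::lock_guard<std::mutex> g(globalQueueLock);
        existing = queue.load(std::memory_order_acquire);
        if (!existing) {
          // Uncontended: allocate a queue we already hold and publish it.
          auto *created = new WaitQueue();
          created->wait.lock();
          queue.store(created, std::memory_order_release);
          return created;
        }
        existing->refCount++;          // keep the queue alive while waiting
      }
      existing->wait.lock();           // blocks until the holder unlocks
      existing->wait.unlock();
      std::lock_guard<std::mutex> g(globalQueueLock);
      if (--existing->refCount == 0)
        delete existing;
      // The holder is gone; loop and try to become the owner ourselves.
    }
  }

  void unlock(WaitQueue *owned) {
    std::lock_guard<std::mutex> g(globalQueueLock);
    queue.store(nullptr, std::memory_order_release);
    owned->wait.unlock();              // wake any waiters
    if (--owned->refCount == 0)
      delete owned;
  }
};
```

The allocation happens only when the lock is actually taken, which is why the pattern suits a presumed-uncommon case like status locking.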
rdar://86100232
When a task is adding new children to a task group, we need to
synchronize with the task status record lock of the parent task that
owns the task group, to prevent races with concurrent cancellation and
escalation.
Radar-Id: rdar://problem/86311782
If we didn't do this (and we didn't), the tasks get released as we
perform the next() impl and move the value from the ready task to the
waiting task. Then the ready task gets destroyed.
But as the task group exits, it performs a cancelAll() and that
iterates over all records. Those records were not removed previously
(!!!), which meant we were pointing at now-deallocated tasks.
Previously this worked because we didn't deallocate the tasks, so they
leaked, but we didn't crash. With the memory leak fixed, this began to
crash, since we'd attempt to cancel already-destroyed tasks.
Solution:
- Remove child task records whenever they complete a waiting task
  (a sketch of the unlink step follows this list).
- This can ONLY be done by the "group owning task" itself, because of
  the contract that ONLY this task is allowed to modify the records.
  It MUST NOT be done by the completing tasks as they complete, as that
  would race with the owning task modifying this linked list of child
  tasks in the group record.
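A hypothetical sketch of the unlink step, with illustrative names rather than the actual runtime structures. Because only the owning task mutates this list, the unlink itself needs no lock:

```cpp
struct Task;  // opaque for this sketch

struct ChildRecord {
  Task *child;
  ChildRecord *next;
};

struct GroupRecord {
  ChildRecord *first = nullptr;

  // Must be called from the group-owning task only: detach the record of
  // a child whose value was just consumed by next(), before that child
  // is released, so a later cancelAll() never walks a freed task.
  void detachChild(Task *completed) {
    for (ChildRecord **link = &first; *link; link = &(*link)->next) {
      if ((*link)->child == completed) {
        ChildRecord *dead = *link;
        *link = dead->next;
        delete dead;
        return;
      }
    }
  }
};
```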
Tracking this as a single bit is actually largely uninteresting
to the runtime. To handle priority escalation properly, we really
need to track this at a finer grain of detail: recording that the
task is running on a specific thread, enqueued on a specific actor,
or so on. But starting by tracking a single bit is important for
two reasons:
- First, it's more realistic about the performance overheads of
tasks: we're going to be doing this tracking eventually, and
the cost of that tracking will be dominated by the atomic
access, so doing that access now sets the baseline about right.
- Second, it ensures that we've actually got runtime involvement
in all the right places to do this tracking. (A minimal sketch of
the single-bit tracking follows this list.)
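To make the shape concrete, here is a minimal sketch of the bit in an atomic status word. The names are illustrative, and the real status word packs more state alongside this bit:

```cpp
#include <atomic>
#include <cstdint>

constexpr uint32_t IsRunningBit = 0x1;

struct TaskStatusWord {
  std::atomic<uint32_t> bits{0};

  // Called when the task is resumed on some thread.
  void noteRunning() {
    bits.fetch_or(IsRunningBit, std::memory_order_acq_rel);
  }
  // Called when the task suspends; each transition is one atomic RMW,
  // which is where the tracking cost comes from.
  void noteSuspended() {
    bits.fetch_and(~IsRunningBit, std::memory_order_acq_rel);
  }
  bool isRunning() const {
    return bits.load(std::memory_order_acquire) & IsRunningBit;
  }
};
```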
Apropos of the latter: there was no runtime involvement with
awaiting a continuation, which is a point at which the task
potentially transitions from running to suspended. We must do
the tracking as part of this transition, rather than recognizing
in the run-loops that a task is still active and treating it as
having suspended, because the latter point potentially races with
the resumption of the task. To do this, I've had to introduce
a runtime function, swift_continuation_await, to do this awaiting
rather than inlining the atomic operation on the continuation.
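The core of the race is an atomic handshake between the awaiting side and the resuming side. Here is a simplified, self-contained sketch with illustrative names, not the actual swift_continuation_await implementation:

```cpp
#include <atomic>

// Whichever side arrives second is responsible for keeping the task
// running; neither side can decide that by inspecting state non-atomically.
enum class ContinuationStatus { Pending, Awaited, Resumed };

struct Continuation {
  std::atomic<ContinuationStatus> status{ContinuationStatus::Pending};
};

// Called by the task at the await point. Returns true if the resume
// already happened and the caller should keep running; false if the
// task is now logically suspended.
bool continuationAwait(Continuation &c) {
  auto expected = ContinuationStatus::Pending;
  if (c.status.compare_exchange_strong(expected,
                                       ContinuationStatus::Awaited,
                                       std::memory_order_acq_rel,
                                       std::memory_order_acquire))
    return false;  // we won the race; the task suspends here
  // expected is Resumed: the resumer got there first, so keep running.
  return true;
}

// Called by whoever resumes the continuation. Returns true if this side
// must schedule the (suspended) task.
bool continuationResume(Continuation &c) {
  auto prior = c.status.exchange(ContinuationStatus::Resumed,
                                 std::memory_order_acq_rel);
  return prior == ContinuationStatus::Awaited;
}
```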
As part of doing this work, I've also fixed a bug where we failed
to load-acquire in swift_task_escalate before walking the task
status records to invoke escalation actions.
I've also fixed several places where the handling of task statuses
may have accidentally allowed the task to revert to uncancelled.
Take the existing CompatibilityOverride mechanism and generalize it so it can be used in both the runtime and Concurrency libraries. The mechanism is preprocessor-heavy, so this requires some tricks. Use the SWIFT_TARGET_LIBRARY_NAME define to distinguish the libraries, and use a different .def file and mach-o section name accordingly.
We want the global/main executor functions to be a little more flexible. Instead of using the override mechanism, we expose function pointers that can be set by the compatibility library, or by any other code that wants to use a custom implementation.
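A sketch of the function-pointer hook style, with illustrative names (the hook variables the runtime actually exposes are named differently):

```cpp
#include <cstdio>

using EnqueueGlobalFn = void (*)(void *job);

static void defaultEnqueueGlobal(void *job) {
  std::printf("default global enqueue of job %p\n", job);
}

// A compatibility library, or any other embedder, may overwrite this
// pointer to substitute its own implementation.
EnqueueGlobalFn enqueueGlobalHook = defaultEnqueueGlobal;

void enqueueGlobal(void *job) {
  enqueueGlobalHook(job);  // all calls dispatch through the hook
}
```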
rdar://73726764
Most of the async runtime functions have been changed to not
expect the task and executor to be passed in. When knowing the
task and executor is necessary, there are runtime functions
available to recover them.
The biggest change I had to make to a runtime function signature
was to swift_task_switch, which has been altered to expect to be
passed the context and resumption function instead of requiring
the caller to park the task. This has the pleasant consequence
of allowing the implementation to very quickly turn around when
it recognizes that the current executor is satisfactory. It does
mean that on arm64e we have to sign the continuation function
pointer as an argument and then potentially resign it when
assigning into the task's resume slot.
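A simplified sketch of the resulting shape, using illustrative types rather than the real ABI. Because the callee receives the context and resumption function directly, the fast path can turn around immediately instead of parking:

```cpp
struct AsyncContext;
using TaskContinuationFunction = void(AsyncContext *);

struct ExecutorRef {
  void *identity;
};

static bool executorsMatch(ExecutorRef a, ExecutorRef b) {
  return a.identity == b.identity;
}

void taskSwitch(AsyncContext *resumeContext,
                TaskContinuationFunction *resumeFn,
                ExecutorRef current, ExecutorRef desired) {
  if (executorsMatch(current, desired)) {
    // Fast path: already on a satisfactory executor; resume immediately
    // without parking the task.
    resumeFn(resumeContext);
    return;
  }
  // Slow path (omitted): store resumeFn/resumeContext into the task's
  // resume slot and enqueue the task on `desired`. On arm64e the stored
  // function pointer would be re-signed at this point.
}
```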
rdar://70546948
os_unfair_lock is much smaller than pthread_mutex_t (4 bytes versus 64) and a bit faster.
However, it doesn't support condition variables. Most of our uses of Mutex don't use condition variables, but a few do. Introduce ConditionMutex and StaticConditionMutex, which allow condition variables and continue to use pthread_mutex_t.
On all other platforms, we continue to use the same backing mutex type for both Mutex and ConditionMutex.
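A minimal sketch of the split, assuming Darwin's <os/lock.h>; the class shapes are illustrative:

```cpp
#include <pthread.h>
#if __has_include(<os/lock.h>)
#include <os/lock.h>

class Mutex {
  os_unfair_lock lock = OS_UNFAIR_LOCK_INIT;  // 4 bytes
public:
  void acquire() { os_unfair_lock_lock(&lock); }
  void release() { os_unfair_lock_unlock(&lock); }
};
#else
class Mutex {
  pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
public:
  void acquire() { pthread_mutex_lock(&lock); }
  void release() { pthread_mutex_unlock(&lock); }
};
#endif

// ConditionMutex must stay a pthread mutex: os_unfair_lock has no
// condition-variable support.
class ConditionMutex {
  pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
  pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
public:
  void acquire() { pthread_mutex_lock(&lock); }
  void release() { pthread_mutex_unlock(&lock); }
  void wait() { pthread_cond_wait(&cond, &lock); }  // call with lock held
  void notifyAll() { pthread_cond_broadcast(&cond); }
};
```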
rdar://problem/45412121
Implement a new builtin, `cancelAsyncTask()`, to cancel the given
asynchronous task. This lowers to a call into the runtime operation
`swift_task_cancel()`.
Use this builtin to implement Task.Handle.cancel().
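For illustration, a simplified sketch of what such a cancel entry point does; the bit layout and names are assumptions, not the actual swift_task_cancel:

```cpp
#include <atomic>
#include <cstdint>

constexpr uint32_t CancelledBit = 0x1;

struct Task {
  std::atomic<uint32_t> statusFlags{0};

  void cancel() {
    // Set the cancelled bit exactly once; cancellation is one-way.
    uint32_t old = statusFlags.fetch_or(CancelledBit,
                                        std::memory_order_acq_rel);
    if (old & CancelledBit)
      return;  // already cancelled; records were already notified
    // First canceller: walk the status records and invoke their
    // cancellation actions, e.g. cancelling child tasks (omitted here).
  }
};
```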
There are things about this that I'm far from sold on. In
particular, I'm concerned that in order to implement escalation
correctly, we're going to have to add a status record for the
fact that the task is being executed, which means we're going
to have to potentially wait to acquire the status lock; overall,
that means making an extra runtime function call and doing some
atomics whenever we resume or suspend a task, which is an
uncomfortable amount of overhead.
The testing here is pretty grossly inadequate, but I wanted to
lay down the groundwork here.