If we didn't do this (and we didn't), the tasks get released as we
perform the next() impl, and move the value from the ready task to the
waiting task. Then, the ready task gets destroyed.
But when the task group exits, it performs a cancelAll() that
iterates over all records. Those records were not removed previously
(!!!), which meant we were pointing at now-deallocated tasks.
Previously this worked because we didn't deallocate the tasks, so they
leaked but we didn't crash. With the memory leak fixed, this began to
crash, since we'd attempt to cancel already-destroyed tasks.
Solution:
- Remove a child's task record whenever it completes a waiting task
(see the sketch below).
- This can ONLY be done by the "group owning task" itself, because
the contract is that ONLY this task is allowed to modify the records.
It MUST NOT be done by the completing tasks as they complete, as that
would race with the owning task modifying this linked list of child
tasks in the group record.
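For illustration, here is a minimal, self-contained sketch of that
ownership rule. The names (ChildRecord, removeCompleted, and so on) are
simplified stand-ins for the real group record machinery, not the
actual runtime code:

    #include <cassert>

    // Hypothetical, simplified stand-ins for the real runtime types.
    struct ChildTask { bool destroyed = false; };

    struct ChildRecord {
      ChildTask *task;
      ChildRecord *next;
    };

    struct TaskGroup {
      // Singly linked list of child records. By contract, only the
      // task that owns the group may mutate this list.
      ChildRecord *records = nullptr;

      void attach(ChildRecord *record) {
        record->next = records;
        records = record;
      }

      // Called on the owning task's next() path after it has moved the
      // value out of a completed (ready) child. Unlinking here means a
      // later cancelAll() can no longer reach the soon-destroyed task.
      void removeCompleted(ChildTask *task) {
        for (ChildRecord **link = &records; *link;
             link = &(*link)->next) {
          if ((*link)->task == task) {
            ChildRecord *dead = *link;
            *link = dead->next;
            delete dead;
            return;
          }
        }
      }

      // Runs when the group goes out of scope; must only see live
      // tasks.
      void cancelAll() {
        for (ChildRecord *r = records; r; r = r->next)
          assert(!r->task->destroyed && "cancelling a destroyed task");
      }
    };

    int main() {
      TaskGroup group;
      ChildTask child;
      group.attach(new ChildRecord{&child, nullptr});

      // The owning task consumes the child's result in next()...
      group.removeCompleted(&child);
      // ...and only then is the child destroyed.
      child.destroyed = true;

      group.cancelAll();   // safe: the record was already unlinked
    }
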
Tracking this as a single bit is actually largely uninteresting
to the runtime. To handle priority escalation properly, we really
need to track this at a finer grain of detail: recording that the
task is running on a specific thread, enqueued on a specific actor,
or so on. But starting by tracking a single bit is important for
two reasons:
- First, it's more realistic about the performance overheads of
tasks: we're going to be doing this tracking eventually, and
the cost of that tracking will be dominated by the atomic
access, so doing that access now sets the baseline about right.
- Second, it ensures that we've actually got runtime involvement
in all the right places to do this tracking.
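As a rough sketch of what the single-bit version amounts to (the names
here are illustrative, and the real status word carries far more state
than one bit):

    #include <atomic>
    #include <cstdint>

    // Illustrative task status word with a single "is running" bit,
    // updated with an atomic read-modify-write on every resume and
    // suspend.
    class TaskStatus {
      static constexpr uint32_t IsRunningBit = 0x1;
      std::atomic<uint32_t> bits{0};

    public:
      void noteRunning() {
        // The atomic RMW is the dominant cost of the tracking, which
        // is why even the single-bit version sets a realistic baseline.
        bits.fetch_or(IsRunningBit, std::memory_order_acq_rel);
      }
      void noteSuspended() {
        bits.fetch_and(~IsRunningBit, std::memory_order_acq_rel);
      }
      bool isRunning() const {
        return bits.load(std::memory_order_acquire) & IsRunningBit;
      }
    };

    int main() {
      TaskStatus status;
      status.noteRunning();    // the task starts executing somewhere
      status.noteSuspended();  // the task reaches a suspension point
      return status.isRunning() ? 1 : 0;
    }
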
Apropos of the latter: there was no runtime involvement with
awaiting a continuation, which is a point at which the task
potentially transitions from running to suspended. We must do
the tracking as part of this transition, rather than recognizing
in the run-loops that a task is still active and treating it as
having suspended, because the latter point potentially races with
the resumption of the task. To handle this, I've introduced a runtime
function, swift_continuation_await, to perform the await rather than
inlining the atomic operation on the continuation.
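A simplified picture of the race that the new runtime function has to
handle; the names and state machine below are hypothetical, not the
real continuation implementation:

    #include <atomic>

    enum class ContinuationStatus { Pending, Awaited, Resumed };

    struct Continuation {
      std::atomic<ContinuationStatus> status{
          ContinuationStatus::Pending};
    };

    // Returns true if the task must actually suspend; returns false if
    // the continuation was already resumed, in which case the task
    // simply keeps running.
    bool awaitContinuation(Continuation &cont) {
      auto expected = ContinuationStatus::Pending;
      if (cont.status.compare_exchange_strong(
              expected, ContinuationStatus::Awaited,
              std::memory_order_acq_rel)) {
        // We won the race against the resumer: record the task as
        // suspended here, as part of this same transition.
        return true;
      }
      // The resumer got there first; treating the task as suspended at
      // this point would race with its resumption.
      return false;
    }

    int main() {
      Continuation cont;
      // Simulate a resume that lands before the await runs.
      cont.status.store(ContinuationStatus::Resumed);
      return awaitContinuation(cont) ? 1 : 0;
    }
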
As part of doing this work, I've also fixed a bug where we failed
to load-acquire in swift_task_escalate before walking the task
status records to invoke escalation actions.
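A minimal sketch of that ordering requirement, with illustrative types
rather than the real status-record structures:

    #include <atomic>

    struct StatusRecord {
      StatusRecord *next;
      void (*escalate)(StatusRecord *record, int newPriority);
    };

    std::atomic<StatusRecord *> recordListHead{nullptr};

    void escalate(int newPriority) {
      // The acquire load pairs with the release store that published
      // the record, so the record's contents are visible before the
      // walk.
      for (StatusRecord *record =
               recordListHead.load(std::memory_order_acquire);
           record; record = record->next) {
        record->escalate(record, newPriority);
      }
    }

    int main() {
      StatusRecord record{nullptr, [](StatusRecord *, int) {
        // e.g. propagate the new priority to a child task
      }};
      recordListHead.store(&record, std::memory_order_release);
      escalate(2);
    }
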
I've also fixed several places where the handling of task statuses
may have accidentally allowed the task to revert to uncancelled.
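The pattern that avoids such a revert, sketched against an
illustrative status word rather than the real one: every update
recomputes the new status from the freshly loaded old status inside
the compare-exchange loop, so a concurrent cancellation can never be
clobbered by a stale value.

    #include <atomic>
    #include <cstdint>

    constexpr uint32_t IsCancelledBit = 0x1;
    constexpr uint32_t IsRunningBit   = 0x2;

    std::atomic<uint32_t> activeStatus{0};

    void setRunning(bool running) {
      uint32_t oldStatus = activeStatus.load(std::memory_order_relaxed);
      while (true) {
        // Derive newStatus from oldStatus on every iteration; starting
        // from a snapshot taken before the loop is how the cancelled
        // bit could silently be dropped.
        uint32_t newStatus = running ? (oldStatus | IsRunningBit)
                                     : (oldStatus & ~IsRunningBit);
        if (activeStatus.compare_exchange_weak(
                oldStatus, newStatus, std::memory_order_acq_rel))
          return;
      }
    }

    int main() {
      activeStatus.store(IsCancelledBit);  // the task gets cancelled
      setRunning(true);                    // must not clear that bit
      return (activeStatus.load() & IsCancelledBit) ? 0 : 1;
    }
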
Take the existing CompatibilityOverride mechanism and generalize it so it can be used in both the runtime and Concurrency libraries. The mechanism is preprocessor-heavy, so this requires some tricks. Use the SWIFT_TARGET_LIBRARY_NAME define to distinguish the libraries, and use a different .def file and Mach-O section name accordingly.
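A rough sketch of the preprocessor trick, using simplified macro and
section names rather than the exact ones in CompatibilityOverride.h:

    #include <cstdio>

    #define SWIFT_TARGET_LIBRARY_NAME swift_Concurrency

    // Token-pasting indirection so SWIFT_TARGET_LIBRARY_NAME is
    // expanded before it is pasted into the per-library macro name.
    #define OVERRIDE_SECTION_NAME_(library) \
      OVERRIDE_SECTION_NAME_##library
    #define OVERRIDE_SECTION_NAME(library) \
      OVERRIDE_SECTION_NAME_(library)

    // Each library gets its own section name (and, in the real code,
    // its own .def file listing the overridable entry points).
    #define OVERRIDE_SECTION_NAME_swiftRuntime      "__sw_hooks"
    #define OVERRIDE_SECTION_NAME_swift_Concurrency "__sw_async_hooks"

    int main() {
      // Expands to the Concurrency section name in this library.
      std::puts(OVERRIDE_SECTION_NAME(SWIFT_TARGET_LIBRARY_NAME));
    }
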
We want the global/main executor functions to be a little more flexible. Instead of using the override mechanism, we expose function pointers that can be set by the compatibility library, or by any other code that wants to use a custom implementation.
rdar://73726764
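A minimal sketch of the function-pointer shape for those executor
entry points; the hook name follows the runtime's naming pattern, but
the types and signatures here are simplified stand-ins:

    #include <cstdio>

    struct Job {};

    using EnqueueGlobalFn = void (*)(Job *job);

    // Null by default; the default implementation runs when no hook
    // has been installed.
    EnqueueGlobalFn swift_task_enqueueGlobal_hook = nullptr;

    static void defaultEnqueueGlobal(Job *job) {
      (void)job;
      std::puts("enqueue on the default global executor");
    }

    void swift_task_enqueueGlobal(Job *job) {
      if (swift_task_enqueueGlobal_hook)
        return swift_task_enqueueGlobal_hook(job);
      defaultEnqueueGlobal(job);
    }

    int main() {
      Job job;
      swift_task_enqueueGlobal(&job);               // default path
      swift_task_enqueueGlobal_hook = [](Job *j) {  // custom executor
        (void)j;
        std::puts("enqueue on a custom executor");
      };
      swift_task_enqueueGlobal(&job);
    }
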
Most of the async runtime functions have been changed so that they no
longer expect the task and executor to be passed in. Where the task or
executor is needed, there are runtime functions available to recover
them.
The biggest change I had to make to a runtime function signature
was to swift_task_switch, which has been altered to expect to be
passed the context and resumption function instead of requiring
the caller to park the task. This has the pleasant consequence
of allowing the implementation to very quickly turn around when
it recognizes that the current executor is satisfactory. It does
mean that on arm64e we have to sign the continuation function
pointer as an argument and then potentially resign it when
assigning into the task's resume slot.
rdar://70546948
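A simplified rendering of the new swift_task_switch shape; the types
and the enqueue path below are stand-ins, and the real declaration
also carries calling-convention attributes omitted here:

    #include <cstdio>

    struct AsyncContext {};
    using TaskContinuationFunction = void(AsyncContext *context);

    struct ExecutorRef {
      int identity;
      bool operator==(ExecutorRef other) const {
        return identity == other.identity;
      }
    };

    ExecutorRef getCurrentExecutor() { return ExecutorRef{0}; }

    // Stand-in for parking the task and enqueueing it on the executor;
    // it just runs the continuation inline for the sake of the example.
    void enqueueOn(ExecutorRef, AsyncContext *context,
                   TaskContinuationFunction *resume) {
      resume(context);
    }

    void swift_task_switch(AsyncContext *context,
                           TaskContinuationFunction *resume,
                           ExecutorRef newExecutor) {
      if (getCurrentExecutor() == newExecutor) {
        // Fast path: the current executor is already satisfactory, so
        // we can turn around immediately without parking the task.
        return resume(context);
      }
      // Slow path: store the resume function into the task and enqueue
      // it. (On arm64e, this is where the signed function-pointer
      // argument would be re-signed for the task's resume slot.)
      enqueueOn(newExecutor, context, resume);
    }

    int main() {
      AsyncContext context;
      swift_task_switch(
          &context, [](AsyncContext *) { std::puts("resumed"); },
          ExecutorRef{0});
    }
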
os_unfair_lock is much smaller than pthread_mutex_t (4 bytes versus 64) and a bit faster.
However, it doesn't support condition variables. Most of our uses of Mutex don't use condition variables, but a few do. Introduce ConditionMutex and StaticConditionMutex, which allow condition variables and continue to use pthread_mutex_t.
On all other platforms, we continue to use the same backing mutex type for both Mutex and ConditionMutex.
rdar://problem/45412121
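A minimal sketch of the ConditionMutex side of the split, assuming a
pthread-based backing; the members and method names are simplified
stand-ins for the real wrappers:

    #include <pthread.h>

    class ConditionMutex {
      pthread_mutex_t mutex;
      pthread_cond_t condition;

    public:
      ConditionMutex() {
        pthread_mutex_init(&mutex, nullptr);
        pthread_cond_init(&condition, nullptr);
      }
      ~ConditionMutex() {
        pthread_cond_destroy(&condition);
        pthread_mutex_destroy(&mutex);
      }

      void lock() { pthread_mutex_lock(&mutex); }
      void unlock() { pthread_mutex_unlock(&mutex); }

      // Atomically releases the mutex while waiting and reacquires it
      // before returning, which is exactly what os_unfair_lock cannot
      // provide on its own.
      void wait() { pthread_cond_wait(&condition, &mutex); }
      void notifyAll() { pthread_cond_broadcast(&condition); }
    };

    int main() {
      ConditionMutex m;
      m.lock();
      m.notifyAll();  // no waiters; this just exercises the API
      m.unlock();
    }
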
Implement a new builtin, `cancelAsyncTask()`, to cancel the given
asynchronous task. This lowers to a call to the runtime
operation `swift_task_cancel()`.
Use this builtin to implement Task.Handle.cancel().
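A simplified sketch of what the cancellation entry point amounts to;
the types here are illustrative stand-ins, and it omits the locking
the real runtime needs around the record list:

    #include <atomic>
    #include <cstdint>
    #include <cstdio>

    constexpr uint32_t IsCancelledBit = 0x1;

    struct CancellationRecord {
      CancellationRecord *next;
      void (*handler)();
    };

    struct AsyncTask {
      std::atomic<uint32_t> status{0};
      CancellationRecord *cancellationRecords = nullptr;
    };

    // Stand-in for the swift_task_cancel() call the builtin lowers to.
    void task_cancel(AsyncTask *task) {
      uint32_t old = task->status.fetch_or(IsCancelledBit,
                                           std::memory_order_acq_rel);
      if (old & IsCancelledBit)
        return;  // cancelling an already-cancelled task is a no-op
      for (auto *record = task->cancellationRecords; record;
           record = record->next)
        record->handler();
    }

    int main() {
      AsyncTask task;
      CancellationRecord record{
          nullptr, [] { std::puts("cancellation handler ran"); }};
      task.cancellationRecords = &record;
      task_cancel(&task);
    }
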
There are things about this that I'm far from sold on. In
particular, I'm concerned that in order to implement escalation
correctly, we're going to have to add a status record for the
fact that the task is being executed, which means we're going
to have to potentially wait to acquire the status lock; overall,
that means making an extra runtime function call and doing some
atomics whenever we resume or suspend a task, which is an
uncomfortable amount of overhead.
The testing here is pretty grossly inadequate, but I wanted to
lay down the groundwork.