_swift_taskGroup_cancelAllChildren relies on there being no concurrent modification when called from the owning task, but this is not guaranteed.
Rearrange things to always take the owning task's status record lock when walking the group's children. Split _swift_taskGroup_cancelAllChildren into two functions: one which assumes/requires the lock is already held, and one which acquires the lock. In the latter case we don't have the owning task on hand, but we can get it either from the current task or by looking at the parent of the child task we're working on.
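A minimal sketch of the resulting split, with stand-in types and a hypothetical withStatusRecordLock helper (the real runtime's names and signatures differ):

    #include <functional>
    #include <vector>

    // Stand-in types; the real AsyncTask/TaskGroup live in the runtime.
    struct AsyncTask { bool cancelled = false; };
    struct TaskGroup { std::vector<AsyncTask *> children; };

    void swift_task_cancel_sketch(AsyncTask *task) { task->cancelled = true; }

    // Hypothetical helper: runs 'body' with the owning task's status
    // record lock held.
    void withStatusRecordLock(AsyncTask *owner, std::function<void()> body);

    // Variant 1: assumes the owning task's status record lock is held.
    void cancelAllChildrenLocked(TaskGroup *group) {
      for (AsyncTask *child : group->children)
        swift_task_cancel_sketch(child);
    }

    // Variant 2: acquires the lock first. The owner must be recovered from
    // the current task or from the parent of one of the group's children.
    void cancelAllChildren(AsyncTask *owner, TaskGroup *group) {
      withStatusRecordLock(owner, [=] { cancelAllChildrenLocked(group); });
    }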
rdar://147172991
* [Concurrency] Adjust task escalation APIs to SE accepted shapes
* adjust test a little bit
* Fix closure lifetime in withTaskPriorityEscalationHandler
* avoid bringing the workaround func into the ABI by marking it @_alwaysEmitIntoClient (AEIC)
The way that we include COMPATIBILITY_OVERRIDE_INCLUDE_PATH freaks out the
syntax highlighting of editors like emacs. It causes the whole file to be highlighted as if it were part of the include string.
To work around this, this patch creates a separate file called
CompatibilityOverrideIncludePath.h that just includes
COMPATIBILITY_OVERRIDE_INCLUDE_PATH. So its syntax highlighting is borked, but
at least in the actual files that contain real code, the syntax highlighting is
restored.
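Sketched, the workaround is a one-line header plus an ordinary quoted include at each use site (file contents here are illustrative):

    /* CompatibilityOverrideIncludePath.h: the only file whose highlighting
       stays borked, since it alone performs the macro-based include. */
    #include COMPATIBILITY_OVERRIDE_INCLUDE_PATH

    /* In the files that contain real code, editors now see a plain
       include instead of the macro: */
    #include "CompatibilityOverrideIncludePath.h"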
When using a future adapter, the resume context may not be valid after the task starts running. Only peer through the adapter when we're starting to run.
rdar://126298035
Record the executor that a task is enqueued on. This way, we have the necessary bookkeeping to escalate the executor when an enqueued task is escalated.
Radar-Id: rdar://problem/101864092
Support removing a status record that is already registered with the task. Provide more versatile removeStatusRecord functions and update clients to use them.
Radar-Id: rdar://problem/101864092
Provide addStatusRecord variants for callers who have already done the load or who need the oldStatus information after adding the status record.
Change some of the memory barrier logic since we can take advantage of
load-through HW address dependency.
Radar-Id: rdar://problem/105634683
field of an ActiveTaskStatus can also be modified while the
TaskStatusRecord list is being modified. Make the StatusRecordLock
reentrant.
Radar-Id: rdar://problem/88093007
We were detaching the child by just modifying the list, but the cancellation path was assuming that this would not be done without holding the task status lock.
This patch just fixes the current runtime; the back-deployment side is complicated.
Fixes rdar://88398824
Load task status with an acquire when canceling a task, to synchronize with the store-release that comes when updating a task's status.
Add explicit TSan calls in cancellation, as well as withStatusRecordLock and addStatusRecord, to avoid TSan complaining about data races when canceling a task.
Add a test that checks for TSan-reported data races when canceling a task.
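The pattern, sketched with a bare status word (the runtime's real status type is richer; __tsan_acquire/__tsan_release are the standard TSan annotation entry points):

    #include <atomic>
    #include <cstdint>

    // TSan annotation hooks (weakly referenced in the real runtime).
    extern "C" void __tsan_acquire(void *addr);
    extern "C" void __tsan_release(void *addr);

    std::atomic<uintptr_t> statusWord;

    void cancelSketch() {
      // The acquire pairs with the store-release in the status-update
      // path, so everything published before that release is visible here.
      uintptr_t old = statusWord.load(std::memory_order_acquire);
      __tsan_acquire(&statusWord);  // tell TSan about the same pairing
      (void)old;
      // ... set the cancelled bit, walk records, cancel children ...
    }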
rdar://93892417
Moved all the threading code to one place. Added explicit support for
Darwin, Linux, Pthreads, C11 threads and Win32 threads, including new
implementations of Once for Linux, Pthreads, C11 and Win32.
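For flavor, a minimal double-checked Once in the spirit of those implementations (a portable sketch; the real ones build on platform primitives such as pthread_once, C11 call_once, or Win32 InitOnceExecuteOnce):

    #include <atomic>
    #include <mutex>

    class Once {
      std::atomic<bool> done{false};
      std::mutex lock;
    public:
      template <class Fn>
      void call(Fn fn) {
        // Fast path: a single acquire load once initialization is done.
        if (done.load(std::memory_order_acquire))
          return;
        std::lock_guard<std::mutex> guard(lock);
        if (!done.load(std::memory_order_relaxed)) {
          fn();
          done.store(true, std::memory_order_release);
        }
      }
    };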
rdar://90776105
A task can be in one of 4 states over its lifetime:
(a) suspended
(b) enqueued
(c) running
(d) completed
This change provides priority inversion avoidance support if a task gets
escalated while it is in state (a), (c), or (d).
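In code terms (a sketch; the runtime encodes this in ActiveTaskStatus flag bits rather than a plain enum):

    enum class TaskState {
      Suspended,  // (a) waiting on a continuation or on children
      Enqueued,   // (b) sitting in an executor's queue
      Running,    // (c) executing on some thread
      Completed,  // (d) finished
    };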
Radar-Id: rdar://problem/76127624
Allow a task's status to be modified concurrently while another thread has the task status record lock.
Today, if a thread is holding the StatusRecordLock, then no other
modification of the task status is possible - including a thread
starting to execute the task or stopping execution of the task.
However, the TaskStatusRecordLock is really about protecting the linked
list that is maintained in the ActiveTaskStatus. As such, other
operations which don't need to look at that linked list of status records
really shouldn't have to block on the StatusRecordLock.
This change allows for concurrent modification of the ActiveTaskStatus while
the TaskStatusRecordLock is held. In particular, a task can be cancelled,
escalated, and start and stop running, all while another thread is holding onto
the task's StatusRecordLock. In the event of cancellation and
escalation, the task's StatusRecordLock must be taken in order to propagate
cancellation and escalation to its child tasks, but it is not needed to cancel
or escalate the task itself.
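A sketch of the resulting split of responsibilities, using an illustrative status word (bit layout and names are made up):

    #include <atomic>
    #include <cstdint>

    constexpr uintptr_t IsCancelled = 0x1;  // illustrative bit layout

    std::atomic<uintptr_t> activeStatus;

    // Setting the cancelled bit no longer needs the StatusRecordLock,
    // even if another thread holds it to walk the record list.
    void cancelSelfSketch() {
      uintptr_t old = activeStatus.load(std::memory_order_relaxed);
      while (!activeStatus.compare_exchange_weak(
                 old, old | IsCancelled,
                 std::memory_order_release, std::memory_order_relaxed)) {
        // 'old' is reloaded on failure; retry against the fresh value.
      }
      // Propagating cancellation to child tasks, by contrast, walks the
      // record list and therefore must take the StatusRecordLock.
    }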
Radar-Id: rdar://problem/76127624
Each trace point is declared as a function in the new `Tracing.h` header. These functions are called from the appropriate places in the concurrency runtime.
On Darwin, an implementation of these functions is provided which uses the `os/signpost.h` API to emit signpost events/intervals.
When the signpost API is not available, no-op stub implementations are provided. Implementations for other OSes can be provided by providing implementations of the trace functions for that OS.
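The header's shape, sketched with a hypothetical trace point (the actual functions and their arguments live in Tracing.h):

    // Compiles to a real signpost on Darwin and to nothing elsewhere.
    #if __has_include(<os/signpost.h>)
    #include <os/signpost.h>
    inline void trace_task_create(const void *task) {
      (void)task;
      // The real implementation caches an os_log_t and emits a
      // signpost event here.
    }
    #else
    inline void trace_task_create(const void *task) { (void)task; }  // no-op
    #endif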
rdar://81858487
This change has two parts to it:
1. Add in a new interface (addStatusRecordWithChecks) for adding task
status records that also takes in a function ref. This function ref is
used to evaluate whether the current state of the parent task has any changes
that need to be propagated to the child task that has been created
(see the sketch after this list).
This is necessary to prevent the following race between task creation
and concurrent cancellation and escalation:
a. Parent task creates the child task. It does lazy relaxed loads on its own
state while doing so and propagates this state to the child.
b. Child task is created but has not been attached to the parent
task/task group.
c. Parent task gets cancelled by another thread.
d. Child task gets linked into the parent’s task status records but no
reevaluation has happened to account for changes that might have happened to
the parent after (a).
2. Move status record management functions from the
Runtime/Concurrency.h to TaskPrivate.h. Remove any corresponding
overrides that are no longer needed. Remove unused tryAddStatusRecord
method whose functionality is provided by addStatusRecordWithChecks.
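A sketch of the idea behind addStatusRecordWithChecks, with the status collapsed to a single illustrative word (the real code splices a record into the list head and takes an llvm::function_ref):

    #include <atomic>
    #include <cstdint>
    #include <functional>

    // 'shouldAddRecord' is re-evaluated against the freshly loaded status
    // on every CAS attempt, so a cancellation that lands between steps
    // (a) and (d) above is seen before the record is published.
    bool addStatusRecordWithChecks(
        std::atomic<uintptr_t> &status, uintptr_t recordBits,
        std::function<bool(uintptr_t oldStatus)> shouldAddRecord) {
      uintptr_t old = status.load(std::memory_order_relaxed);
      do {
        if (!shouldAddRecord(old))  // e.g. the parent is now cancelled
          return false;
      } while (!status.compare_exchange_weak(
                   old, old | recordBits,
                   std::memory_order_release, std::memory_order_relaxed));
      return true;
    }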
Radar-Id: rdar://problem/86347801
handler of the parent task that created the group
Change the comment in TaskGroup.swift to document that only the parent task
can call cancelAll on the group
Add tests to verify that mutating the task group from child tasks will fail
Radar-Id: rdar://problem/86346865
Since locking the task status is a presumed-uncommon case, the
trade-offs inherent in the allocation patterns of AtomicWaitQueue
are very appropriate here, even more than they are for metadata
completion.
rdar://86100232
When a task is adding new children to a task group, we need to
synchronize with the task status record lock of the parent task that owns the
task group, to prevent races with concurrent cancellation and escalation.
Radar-Id: rdar://problem/86311782
If we didn't do this (and we didn't), the tasks get released as we
perform the next() impl, and move the value from the ready task to the
waiting task. Then, the ready task gets destroyed.
But as the task group exits, it performs a cancelAll() and that
iterates over all records. Those records were not removed previously
(!!!) which meant we were pointing at now deallocated tasks.
Previously this worked because we didn't deallocate the tasks, so they
leaked, but we didn't crash. With the memory leak fixed, this began to
crash since we'd attempt to cancel already destroyed tasks.
Solution:
- Remove task records whenever they complete a waiting task.
- This can ONLY be done by the "group owning task" itself, because of
the contract that ONLY this task is allowed to modify records.
It MUST NOT be done by the completing tasks as they complete, as it
would race with the owning task modifying this linked list of child
tasks in the group record.
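Sketched with stand-in types, the fix confines the unlink to the owning task's next() path:

    #include <algorithm>
    #include <vector>

    struct AsyncTask { /* stand-in for the real task */ };
    struct TaskGroup { std::vector<AsyncTask *> children; };

    void swift_release_sketch(AsyncTask *task);

    // Runs on the group-owning task only; it alone may mutate the group's
    // record, so nothing races with this unlink.
    void completeNextSketch(TaskGroup *group, AsyncTask *readyChild) {
      auto &kids = group->children;
      kids.erase(std::find(kids.begin(), kids.end(), readyChild));
      // ... move the result out of readyChild into the waiting task ...
      swift_release_sketch(readyChild);  // safe: cancelAll() at group exit
                                         // can no longer see this child
    }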
Tracking whether a task is currently running as a single bit is actually largely uninteresting
to the runtime. To handle priority escalation properly, we really
need to track this at a finer grain of detail: recording that the
task is running on a specific thread, enqueued on a specific actor,
or so on. But starting by tracking a single bit is important for
two reasons:
- First, it's more realistic about the performance overheads of
tasks: we're going to be doing this tracking eventually, and
the cost of that tracking will be dominated by the atomic
access, so doing that access now sets the baseline about right.
- Second, it ensures that we've actually got runtime involvement
in all the right places to do this tracking.
Apropos of the latter: there was no runtime involvement with
awaiting a continuation, which is a point at which the task
potentially transitions from running to suspended. We must do
the tracking as part of this transition, rather than recognizing
in the run-loops that a task is still active and treating it as
having suspended, because the latter point potentially races with
the resumption of the task. To do this, I've had to introduce
a runtime function, swift_continuation_await, to do this awaiting
rather than inlining the atomic operation on the continuation.
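The shape of that transition, sketched (the runtime's ContinuationStatus really does have Pending/Awaited/Resumed states, though the surrounding code here is illustrative):

    #include <atomic>
    #include <cstddef>

    enum class ContinuationStatus : size_t { Pending, Awaited, Resumed };

    void awaitSketch(std::atomic<ContinuationStatus> &status) {
      auto expected = ContinuationStatus::Pending;
      if (status.compare_exchange_strong(
              expected, ContinuationStatus::Awaited,
              std::memory_order_release, std::memory_order_acquire)) {
        // We won the race: mark the task suspended and return to the
        // executor; the resumer will see Awaited and schedule the task.
      } else {
        // expected == Resumed: the resume already happened, so continue
        // running immediately without ever marking the task suspended.
      }
    }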
As part of doing this work, I've also fixed a bug where we failed
to load-acquire in swift_task_escalate before walking the task
status records to invoke escalation actions.
I've also fixed several places where the handling of task statuses
may have accidentally allowed the task to revert to uncancelled.