I have identified the following conceptual synchronization points at
which task data and computation can cross thread boundaries. We need to
model these in TSan to avoid false positives:
Awaiting an async task (`AsyncTask::waitFuture`), which has two cases:
1) The task has completed (`AsyncTask::completeFuture`). Everything
that happened during task execution "happens before" the point
where we access its result. We synchronize on the *awaited* task.
2) The task is still executing: the current execution is suspended and
the waiting task is put into the list of "waiters". Once the awaited
task completes, the waiters will be scheduled. In this case, we
synchronize on the *waiting* task.
Note: there is a similar relationship for task groups, which I still
have to investigate. I will follow up with an additional patch and tests.
Actor job execution (`swift::runJobInExecutorContext`):
Job scheduling (`swift::swift_task_enqueue`) happens before job
execution. Additionally, all job executions (actor switches and
suspend/resume) are serially ordered.
Note: the happens-before edge for schedule->execute isn't strictly
needed in most cases since scheduling calls through to libdispatch's
`dispatch_async_f`, which we already intercept and model in TSan.
However, I am trying to model Swift Task semantics to increase the
chance that things continue to work if the "task backend" is
switched out.
rdar://74256733
This is conditional on UseAsyncLowering and in the future should also be
conditional on `clangTargetInfo.isSwiftAsyncCCSupported()` once that
support is merged.
Update tests to work with either swiftcc or swifttailcc.
We expect to iterate on this quite a bit, both publicly
and internally, but this is a fine starting-point.
I've renamed runAsync to runAsyncAndBlock to underline
very clearly what it does and why it's not long for this
world. I've also had to give it a radically different
implementation in an effort to make it continue to work
given an actor implementation that is no longer just
running all work synchronously.
The major remaining bit of actor-scheduling work is to
make swift_task_enqueue actually do something sensible
based on the executor it's been given; currently it's
expecting a flag that IRGen simply doesn't know to set.