A task can be in one of 4 states over its lifetime:
(a) suspended
(b) enqueued
(c) running
(d) completed
This change adds priority-inversion-avoidance support when a task gets
escalated while it is in state (a), (c), or (d).
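As a rough sketch, with hypothetical names rather than the runtime's actual types, the escalation path has to branch on where the task currently is:

```cpp
// Hypothetical names, not the runtime's actual types.
enum class TaskState { Suspended, Enqueued, Running, Completed };

void escalate(TaskState state) {
  switch (state) {
  case TaskState::Suspended:
    // (a) Remember the raised priority so the task resumes escalated.
    break;
  case TaskState::Running:
    // (c) Escalate the thread currently running the task.
    break;
  case TaskState::Completed:
    // (d) Nothing left to do; the work already finished.
    break;
  case TaskState::Enqueued:
    // (b) Not covered by this change.
    break;
  }
}
```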
Radar-Id: rdar://problem/76127624
This change adds support for WASI in stdlib tests. Some tests that expect a crash to happen had to be disabled, since there's currently no way to observe such a crash from a WASI host.
* [Distributed] Adjust interface of `swift_distributed_execute_target`
Since this is a special function, `calleeContext` doesn't point to a
direct parent; instead, both the parent context (uninitialized) and the
resume function are passed as the last arguments, which means that
`callContext` has to act as an intermediate context in the call to the
accessor.
* [Distributed] Drop optionality from result buffer in `_executeDistributedTarget`
`RawPointer?` is lowered into two arguments since it's a struct; to keep
things simple, just allocate an empty pointer for a `Void` result.
* [Distributed] NFC: Update _remoteCall test-case to check multiple different result types
* [Distributed] Implement func metadata and executeDistributedTarget
don't expose new entrypoints
able to get all the way to calling _execute
* [Distributed] reimplement distributed get type info impls
* [Distributed] comment out distributed_actor_remoteCall for now
* [Distributed] disable test on linux for now
Each trace point is declared as a function in the new `Tracing.h` header. These functions are called from the appropriate places in the concurrency runtime.
On Darwin, an implementation of these functions is provided which uses the `os/signpost.h` API to emit signpost events/intervals.
When the signpost API is not available, no-op stub implementations are provided. Other OSes can be supported by implementing the trace functions for that OS.
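As a rough sketch of the pattern (the names here are simplified stand-ins, not the actual declarations in `Tracing.h`):

```cpp
namespace swift { namespace concurrency { namespace trace {

// Each trace point is a plain function; each platform supplies definitions.
void task_create(void *task, void *parent);
void job_run_begin(void *job, void *executor);
void job_run_end(void *job, void *executor);

#if !SWIFT_HAS_SIGNPOSTS // hypothetical feature guard
// No-op stubs for platforms without a tracing backend.
inline void task_create(void *, void *) {}
inline void job_run_begin(void *, void *) {}
inline void job_run_end(void *, void *) {}
#endif

}}} // namespace swift::concurrency::trace
```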
rdar://81858487
We do not have the `llvm/Config/config.h` header available in the forked
LLVMSupport library in our standard library packaging. This removes
that dependency by using the standard macro `_POSIX_THREADS` to detect
if we should use pthreads.
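A minimal sketch of the detection, assuming `<unistd.h>` exposes `_POSIX_THREADS` on conforming platforms:

```cpp
#if __has_include(<unistd.h>)
#include <unistd.h>   // defines _POSIX_THREADS on conforming platforms
#endif

#if defined(_POSIX_THREADS)
#include <pthread.h>
// Use the pthread-based mutex/thread implementation.
#else
// Fall back to a platform-specific threading implementation.
#endif
```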
This has been dead since before the first release. We need something
like it, but we should probably do it on a clean foundation rather than
building on what's already there.
I've left the barest foundation for "messages" that can be dropped in
the queue but aren't real jobs.
Instead of trying to return the result from the distributed thunk directly,
modify the accessor to store the result into a caller-provided buffer.
Doing so helps us avoid boxing the result into `Any`.
The implementation is as follows:
- Looks up the distributed accessor by the given target name
- Extracts the information required to set up the async context from the
  async function pointer stored in the accessor record
- Allocates the context and calls the accessor
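A rough sketch of that flow, with hypothetical stand-in names rather than the actual runtime entry points:

```cpp
#include <cstddef>

// All of these names are hypothetical stand-ins for illustration.
struct AsyncFnPtr { std::size_t ExpectedContextSize; /* ... */ };
struct AccessorRecord { AsyncFnPtr *accessorFunction; /* ... */ };

AccessorRecord *findAccessorRecord(const char *targetName);     // hypothetical
void *allocateAsyncContext(std::size_t size);                   // hypothetical
void callAccessor(AsyncFnPtr *fn, void *context, void *result); // hypothetical

void executeDistributedTarget(const char *targetName, void *resultBuffer) {
  // 1. Look up the distributed accessor by the given target name.
  AccessorRecord *record = findAccessorRecord(targetName);
  if (!record)
    return; // unknown target

  // 2. The record stores an async function pointer carrying the information
  //    needed to set up the async context.
  AsyncFnPtr *fn = record->accessorFunction;
  std::size_t contextSize = fn->ExpectedContextSize;

  // 3. Allocate the context and call the accessor, handing it the
  //    caller-provided result buffer so the result is never boxed as `Any`.
  void *context = allocateAsyncContext(contextSize);
  callAccessor(fn, context, resultBuffer);
}
```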
This fixes a latent instance of UB in the `DefaultActor` implementation
that has haunted the Windows target. The shared constructor for the type
contained an errant typo that happened to compile and introduced UB, yet
still worked for the non-Windows cases: there, `swift::atomic` is backed
by a `std::atomic` on most configurations, and the C delegate for the
Actor initializer happened to overlap with and initialize the memory
properly. The Windows case used an inline pointer-width value but
attempted to initialize it as a `std::atomic`. Relying on that overlap is
unsafe to assume, and we should use the type's own constructor, which
delegates appropriately.
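A simplified illustration of the bug class and the fix, not the actual `DefaultActor` code:

```cpp
#include <atomic>
#include <cstdint>
#include <cstring>
#include <new>

struct Status {
  std::atomic<std::uintptr_t> bits;
  explicit Status(std::uintptr_t initial) : bits(initial) {}
};

void initUnsafe(void *storage, std::uintptr_t initial) {
  // Unsafe: writes raw bytes and never constructs a Status object. Treating
  // the storage as a Status afterwards only "works" where std::atomic happens
  // to be layout-compatible with a plain uintptr_t.
  std::memcpy(storage, &initial, sizeof(initial));
}

void initSafe(void *storage, std::uintptr_t initial) {
  // Safe: run the type's own constructor, which delegates to std::atomic's
  // constructor and is well-defined on every configuration.
  new (storage) Status(initial);
}
```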
The goal here is not to eventually implement a concurrent thread
pool ourselves. We're just making it easier for integrators who
have their own pool and don't want to use Dispatch to build the
Swift concurrency runtime. Just hook the right functions and
you should be fine.
The necessary functions to hook are:
- swift_task_enqueueGlobal
- swift_task_enqueueGlobalAfterDelay
The following functions *would* be necessary to hook:
- swift_task_enqueueMainExecutor
- swift_task_asyncMainDrainQueue (only if you have an async main?)
However, this configuration does not currently properly support
the main executor, and so `@MainActor` should be avoided for now.
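As a hypothetical sketch of what an integrator's hooks might look like (the types and signatures below are placeholders; the real ones are defined by the concurrency runtime):

```cpp
struct Job;                                           // opaque runtime job
void myThreadPool_submit(Job *job);                   // hypothetical pool API
void myThreadPool_submitAfter(unsigned long long delayNs, Job *job);

extern "C" void swift_task_enqueueGlobal(Job *job) {
  // Hand the job to your own thread pool; one of its worker threads runs it.
  myThreadPool_submit(job);
}

extern "C" void swift_task_enqueueGlobalAfterDelay(unsigned long long delayNs,
                                                   Job *job) {
  // Schedule the job to be handed to the pool after the given delay.
  myThreadPool_submitAfter(delayNs, job);
}
```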
rdar://83513751
Darwin OSes support vouchers, which are key/value sets that can be adopted on a thread to influence its execution, or sent to another process. APIs like Dispatch propagate vouchers to worker threads when running async code. This change makes Swift Concurrency do the same.
The change consists of a few different parts:
1. A set of shims (in VoucherShims.h) which provides declarations for the necessary calls when they're not available from the SDK, and stub implementations for non-Darwin platforms.
2. One of Job's reserved fields is now used to store the voucher associated with a job.
3. Jobs grab the current thread's voucher when they're created.
4. A VoucherManager class manages adoption of vouchers when running a Job, and replacing vouchers in suspended tasks.
5. A VoucherManager instance is maintained in ExecutionTrackingInfo, and is updated as necessary throughout a Job/Task's lifecycle.
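A minimal sketch of the capture/adopt/restore pattern; the `voucher_copy`/`voucher_adopt` declarations and the `JobLike` type below are illustrative assumptions, not the runtime's actual code:

```cpp
extern "C" {
typedef struct voucher_s *voucher_t;
voucher_t voucher_copy(void);         // copy the current thread's voucher
voucher_t voucher_adopt(voucher_t v); // adopt v, return the previous voucher
}

struct JobLike {        // placeholder for a Job and its reserved field
  voucher_t Voucher;
};

void jobCreate(JobLike &job) {
  // (3) Capture the creating thread's voucher when the job is created.
  job.Voucher = voucher_copy();
}

void jobRun(JobLike &job, void (*runImpl)(JobLike &)) {
  // (4)/(5) Adopt the job's voucher for the duration of the run, then
  // restore whatever the thread was carrying before.
  voucher_t previous = voucher_adopt(job.Voucher);
  runImpl(job);
  voucher_adopt(previous);
}
```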
rdar://76080222
This macro takes the string and parameters directly, and is conditionally defined to either call fprintf or ignore its arguments. This makes the call sites a little more pleasant (no #if scattered about) and ensures every log includes the thread ID and a newline automatically.
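A minimal sketch of the idea; the macro name, enabling flag, and output format are assumptions, not the runtime's actual definition:

```cpp
#if SWIFT_TASK_DEBUG_LOG_ENABLED
#include <cstdio>
#include <functional>
#include <thread>

// ##__VA_ARGS__ (a Clang/GCC extension) allows calls with no varargs; every
// message automatically carries a thread id and a trailing newline.
#define SWIFT_TASK_DEBUG_LOG(fmt, ...)                                        \
  fprintf(stderr, "[%lu] " fmt "\n",                                          \
          (unsigned long)std::hash<std::thread::id>{}(                        \
              std::this_thread::get_id()),                                    \
          ##__VA_ARGS__)
#else
// When logging is disabled, the arguments are swallowed entirely.
#define SWIFT_TASK_DEBUG_LOG(fmt, ...) ((void)0)
#endif

// Usage: SWIFT_TASK_DEBUG_LOG("enqueue job %p", job);
```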
The actor runtime has some known issues with deadlock when an actor has
to give up its thread because it's running lower-priority work. To
avoid deadlocks here, disable all of the logic that tries to give up
higher-priority threads when only lower-priority work is available, or
to escalate work, effectively making the actor runtime ignore
priorities internally.
Fixes rdar://79378762.
Tracking this as a single bit is actually largely uninteresting
to the runtime. To handle priority escalation properly, we really
need to track this at a finer grain of detail: recording that the
task is running on a specific thread, enqueued on a specific actor,
or so on. But starting by tracking a single bit is important for
two reasons:
- First, it's more realistic about the performance overheads of
tasks: we're going to be doing this tracking eventually, and
the cost of that tracking will be dominated by the atomic
access, so doing that access now sets the baseline about right.
- Second, it ensures that we've actually got runtime involvement
in all the right places to do this tracking.
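As a hypothetical sketch of what tracking that single bit in an atomically accessed status word looks like (not the runtime's actual task-status representation):

```cpp
#include <atomic>
#include <cstdint>

class TaskStatusWord {
  static constexpr uint32_t IsRunningBit = 1u << 0;
  std::atomic<uint32_t> bits{0};

public:
  void setRunning() {
    // This atomic RMW is the access whose cost dominates the tracking.
    bits.fetch_or(IsRunningBit, std::memory_order_relaxed);
  }
  void clearRunning() {
    bits.fetch_and(~IsRunningBit, std::memory_order_relaxed);
  }
  bool isRunning() const {
    return bits.load(std::memory_order_relaxed) & IsRunningBit;
  }
};
```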
Apropos of the latter: there was no runtime involvement when awaiting a
continuation, which is a point at which the task potentially transitions
from running to suspended. We must do the tracking as part of this
transition, rather than recognizing in the run loops that a task is still
active and treating it as having suspended, because the latter point
potentially races with the resumption of the task. To do this, I've
introduced a runtime function, swift_continuation_await, which performs
the await rather than inlining the atomic operation on the continuation.
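A hypothetical sketch of the await-side transition; the states and helper functions are illustrative, not the actual continuation implementation:

```cpp
#include <atomic>

enum class ContinuationState { Pending, Awaited, Resumed };

struct ContinuationLike {
  std::atomic<ContinuationState> state{ContinuationState::Pending};
};

void markTaskSuspended(); // hypothetical: record running -> suspended
void markTaskRunning();   // hypothetical: record suspended -> running
void resumeTask();        // hypothetical: reschedule the task

void continuationAwait(ContinuationLike &cont) {
  // Do the suspension tracking as part of the transition itself, so it
  // cannot race with a concurrent resumer the way a later run-loop check can.
  markTaskSuspended();

  auto expected = ContinuationState::Pending;
  if (!cont.state.compare_exchange_strong(expected, ContinuationState::Awaited,
                                          std::memory_order_acq_rel)) {
    // The continuation was already resumed before we awaited it; the task
    // can continue immediately.
    markTaskRunning();
    resumeTask();
  }
  // Otherwise the resumer will observe Awaited and schedule the task itself.
}
```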
As part of doing this work, I've also fixed a bug where we failed
to load-acquire in swift_task_escalate before walking the task
status records to invoke escalation actions.
I've also fixed several places where the handling of task statuses
may have accidentally allowed the task to revert to uncancelled.