It just calls Task.runDetached.
It's more efficient to have a non-generic compiler intrinsic than to let the compiler call the generic Task.runDetached.
`_runAsyncHandler` doesn't have to be generic because the return value of the run function is defined to be Void.
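A minimal sketch of the distinction, with hypothetical names (neither signature below is the actual stdlib declaration):
```swift
// Hypothetical signatures, for illustration only.

// The generic entry point must carry <T> because its operation produces a value.
func runDetachedSketch<T>(operation: @escaping () async -> T) {
  // ...
}

// The handler behind the intrinsic can be non-generic: its operation is
// defined to return Void, so there is no result type to abstract over.
func _runAsyncHandlerSketch(operation: @escaping () async -> Void) {
  // ...
}
```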
Implement SIL generation for "async let" constructs, which involves:
1. Creating a child task future at the point of declaration of the "async let",
which runs the initializer in an async closure.
2. Entering a cleanup to destroy the child task.
3. Entering a cleanup to cancel the child task.
4. Waiting for the child task when any of the variables is referenced.
5. Decomposing the result of the child task to write the results into the
appropriate variables.
Implements rdar://71123479.
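At the source level, the construct being lowered looks roughly like the snippet below; the names are made up for illustration, and the numbered steps above all happen behind the scenes.
```swift
struct Vegetables {}

func chopVegetables() async -> Vegetables { Vegetables() }

func makeSalad() async -> Vegetables {
  // Steps 1-3: this declaration creates a child task (with cleanups to
  // cancel and destroy it) that runs the initializer concurrently.
  async let veggies = chopVegetables()

  // Steps 4-5: the first reference to `veggies` waits on the child task
  // and writes its result into the variable.
  return await veggies
}
```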
We somehow ended up with a set hidden in `Task` as well as a set at top level. SILGen currently
hooks into the top-level ones, so shed the `Task`-namespaced versions for now.
Switch the contract between the runtime operation `swift_task_future_wait`
and Task.Handle.get() over to an asynchronous call, so that the
compiler will set up the resumption frame for us. This allows us to
correctly wait on futures.
Update our "basic" future test to perform both normal returns and
throwing returns from a future, either having to wait on the queue or
coming by afterward.
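A hedged sketch of the kind of test this describes, written against today's Task API rather than the Task.Handle spelling used in this patch:
```swift
enum FutureTestError: Error { case boom }

func normalReturn() async -> Int { 42 }
func throwingReturn() async throws -> Int { throw FutureTestError.boom }

func exerciseFutures() async {
  // Normal return: awaiting the value suspends until the future completes.
  let ok = Task { await normalReturn() }
  print(await ok.value)

  // Throwing return: the stored error is rethrown to the waiter.
  let bad = Task { try await throwingReturn() }
  do {
    _ = try await bad.value
  } catch {
    print("caught \(error)")
  }
}
```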
Rather than immediately running the task synchronously within
runDetached, return the handle to the newly-created task. Add a method
Task.Handle.run() to execute the task. This is just a temporary hack
that should not persist in the API, but it lets us launch tasks on a
particular Dispatch queue:
```swift
extension DispatchQueue {
  func async<R>(execute: @escaping () async -> R) -> Task.Handle<R> {
    let handle = Task.runDetached(operation: execute)
    // Run the task
    _ = { self.async { handle.run() } }()
    return handle
  }
}
```
One can pass asynchronous work to DispatchQueue.async, which will
schedule that work on the dispatch queue and return a handle. Another
asynchronous task can then read the result.
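A usage sketch built on the extension above, assuming the Task.Handle.get() method mentioned earlier is async and can throw (that signature is an assumption here):
```swift
import Dispatch

func fetchAnswer() async throws -> Int {
  // Schedule asynchronous work on a dispatch queue and get a handle back.
  let handle = DispatchQueue.global().async { () async -> Int in
    42
  }
  // Another asynchronous task reads the result through the handle.
  return try await handle.get()
}
```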
Yay for rdar://71125519.
Introduce `FutureAsyncContext` to line up with the async context formed
by IR generation for the type `<T> () async throws -> T`. When allocating
a future task, set up the context with the address of the future's storage
for the successful result and null out the error result, so the caller
will directly fill in the result. This eliminates a bunch of extra
complexity and a copy.
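Roughly, the context layout being lined up with looks like this (a conceptual Swift rendering; the real type is a C++ struct in the runtime, and the field names are assumptions):
```swift
struct FutureAsyncContextSketch {
  /// Filled in by the task on a throwing return; left nil on a normal return.
  var errorResult: UnsafeMutableRawPointer?
  /// Points directly at the future's storage for the successful result, so a
  /// normal return writes the value in place with no extra copy.
  var indirectResult: UnsafeMutableRawPointer?
}
```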
Use a single atomic for the wait queue that combines the status with
the first task in the queue. Address race conditions in waiting and
completing the future.
Thanks to John for setting the direction here for me.
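A hedged sketch of the packing scheme this implies: the low bits of a single word carry the future's status and the remaining bits carry the first waiting task. The real implementation is in the C++ runtime; the exact bit layout and status values here are assumptions.
```swift
struct WaitQueueWordSketch {
  enum Status: UInt { case executing = 0, success = 1, error = 2 }

  /// Status lives in the low two bits; the first waiter's (aligned) address
  /// occupies the remaining bits.
  var bits: UInt

  init(status: Status, firstWaiterAddress: UInt) {
    // Task objects are at least 4-byte aligned, so the low two bits are free.
    bits = firstWaiterAddress | status.rawValue
  }

  var status: Status { Status(rawValue: bits & 0b11)! }
  var firstWaiterAddress: UInt { bits & ~UInt(0b11) }
}
```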
Extend AsyncTask and the concurrency runtime with basic support for
task futures. AsyncTasks with futures contain a future fragment with
information about the type produced by the future, and where the
future will put the result value or the thrown error in the initial
context.
We still don't have the ability to schedule the waiting tasks on an
executor when the future completes, so this isn't useful for anything
beyond testing, and we can only exercise limited code paths.
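Rough shape of the future fragment described above, rendered in Swift; the real fragment is a C++ type in the runtime and these member names are assumptions.
```swift
struct FutureFragmentSketch {
  /// Metadata for the type the future produces, used to lay out its storage.
  var resultType: Any.Type
  /// Combined status word and head of the queue of tasks waiting on this future.
  var waitQueue: UInt
  /// Where the initial context points the task for a thrown error...
  var errorStorage: UnsafeMutableRawPointer?
  /// ...and for the successful result value.
  var resultStorage: UnsafeMutableRawPointer?
}
```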
os_unfair_lock is much smaller than pthread_mutex_t (4 bytes versus 64) and a bit faster.
However, it doesn't support condition variables. Most of our uses of Mutex don't use condition variables, but a few do. Introduce ConditionMutex and StaticConditionMutex, which allow condition variables and continue to use pthread_mutex_t.
On all other platforms, we continue to use the same backing mutex type for both Mutex and ConditionMutex.
rdar://problem/45412121
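A hedged Swift illustration of the split (the real Mutex/ConditionMutex types are C++, so this only mirrors the idea):
```swift
import Darwin

/// Fast path on Darwin: a 4-byte lock, but no condition-variable support.
/// (Heap-allocated so the lock has a stable address.)
final class UnfairMutexSketch {
  private let lockPtr = UnsafeMutablePointer<os_unfair_lock>.allocate(capacity: 1)
  init() { lockPtr.initialize(to: os_unfair_lock()) }
  deinit { lockPtr.deallocate() }
  func lock() { os_unfair_lock_lock(lockPtr) }
  func unlock() { os_unfair_lock_unlock(lockPtr) }
}

/// Condition path: stays on pthread_mutex_t so pthread_cond_t can be used.
final class ConditionMutexSketch {
  private let mutexPtr = UnsafeMutablePointer<pthread_mutex_t>.allocate(capacity: 1)
  private let condPtr = UnsafeMutablePointer<pthread_cond_t>.allocate(capacity: 1)
  init() {
    pthread_mutex_init(mutexPtr, nil)
    pthread_cond_init(condPtr, nil)
  }
  deinit {
    pthread_cond_destroy(condPtr)
    pthread_mutex_destroy(mutexPtr)
    condPtr.deallocate()
    mutexPtr.deallocate()
  }
  func lock() { pthread_mutex_lock(mutexPtr) }
  func unlock() { pthread_mutex_unlock(mutexPtr) }
  func wait() { pthread_cond_wait(condPtr, mutexPtr) }
  func signal() { pthread_cond_signal(condPtr) }
}
```
The point is simply that anything needing wait()/signal() keeps the larger pthread-based lock, while every other use gets the smaller, faster one.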
Implement a new builtin, `cancelAsyncTask()`, to cancel the given
asynchronous task. This lowers down to a call into the runtime
operation `swift_task_cancel()`.
Use this builtin to implement Task.Handle.cancel().
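A small, runnable illustration of the user-facing effect, using today's Task API in place of Task.Handle; the comment about the lowering path reflects the description above.
```swift
func demoCancellation() async {
  let handle = Task { () async -> String in
    // Spin cooperatively until cancellation is observed.
    while !Task.isCancelled {
      await Task.yield()
    }
    return "done after cancellation"
  }
  // Cancelling the handle ultimately reaches swift_task_cancel() in the runtime.
  handle.cancel()
  print(await handle.value)
}
```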