This patch has two desirable effects for the price of one.
1. An uncaught error thrown from main will now explode.
2. It moves us off of runAsyncAndBlock.
The issue with runAsyncAndBlock is that it blocks the main thread
outright. UI and the main actor need to run on the main thread, or bad
things happen, so blocking the main thread results in a bad day for
them.
Instead, we're using CFRunLoopRun to run the Core Foundation run loop on
the main thread, or dispatch_main if CFRunLoopRun isn't available.
The issue with just using dispatch_main is that it doesn't actually
guarantee the tasks run on the main thread either; it only guarantees
that the main queue gets drained. We also don't want to require
everything that uses concurrency to include CoreFoundation, but
dispatch is already required, and it supplies dispatch_main.
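Roughly, the strategy looks like this (a sketch only; `parkMainThread`
is a made-up name, not the actual runtime entry point):

```swift
import Dispatch
#if canImport(CoreFoundation)
import CoreFoundation
#endif

// Sketch: keep the main thread servicing main-queue/main-actor work
// instead of blocking it outright.
func parkMainThread() -> Never {
#if canImport(CoreFoundation)
    // CFRunLoopRun services the main run loop; loop again in case
    // something stops the run loop while work is still pending.
    while true { CFRunLoopRun() }
#else
    // dispatchMain() (dispatch_main in C) drains the main queue and
    // never returns, but makes no broader main-thread guarantees.
    dispatchMain()
#endif
}
```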
To help catch runtime issues adopting `withUnsafeContinuation`, such as
callback-based APIs that erroneously invoke their callback multiple times
or never invoke it at all, provide a couple of classes that can take
ownership of a fresh `UnsafeContinuation` or `UnsafeThrowingContinuation`,
log attempts to resume the continuation more than once, and log when the
object is discarded without the continuation ever being resumed.
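A sketch of what such a guard class can look like (illustrative name
and logging, not the actual stdlib implementation):

```swift
// Illustrative only: takes ownership of a fresh UnsafeContinuation and
// logs misuse rather than silently corrupting task state.
final class ContinuationGuard<T> {
    private var continuation: UnsafeContinuation<T, Never>?

    init(owning continuation: UnsafeContinuation<T, Never>) {
        self.continuation = continuation
    }

    func resume(returning value: T) {
        guard let c = continuation else {
            print("warning: continuation resumed more than once")
            return
        }
        continuation = nil
        c.resume(returning: value)
    }

    deinit {
        if continuation != nil {
            print("warning: continuation discarded without being resumed")
        }
    }
}
```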
move comments to the wired-up continuations
remove duplicated continuations; keep the wired-up ones
before moving to C++ for queue impl
trying to make next() wait via channel_poll
submitting works; need to impl next()
We expect to iterate on this quite a bit, both publicly
and internally, but this is a fine starting point.
I've renamed runAsync to runAsyncAndBlock to underline
very clearly what it does and why it's not long for this
world. I've also had to give it a radically different
implementation in an effort to make it continue to work
given an actor implementation that is no longer just
running all work synchronously.
The major remaining bit of actor-scheduling work is to
make swift_task_enqueue actually do something sensible
based on the executor it's been given; currently it's
expecting a flag that IRGen simply doesn't know to set.
Switch the contract between the runtime operation `swift_future_task_wait`
and Task.Handle.get() over to an asynchronous call, so that the
compiler will set up the resumption frame for us. This allows us to
correctly wait on futures.
Update our "basic" future test to perform both normal returns and
throwing returns from a future, covering both waiters that have to
suspend on the wait queue and waiters that come by after the future
has already completed.
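In today's syntax, the scenarios that test covers look roughly like
this (the prototype's `Task.Handle.get()` corresponds to `.value`
here):

```swift
struct ExpectedError: Error {}

// Sketch of the scenarios the test exercises, in current Swift syntax.
func exerciseFutures() async {
    // Normal return from a future.
    let ok = Task { 42 }
    let v = await ok.value
    assert(v == 42)

    // Throwing return from a future: the error propagates out of
    // the await.
    let failing = Task { () throws -> Int in throw ExpectedError() }
    do {
        _ = try await failing.value
        assertionFailure("should have thrown")
    } catch { /* expected */ }
}
```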
Introduce `FutureAsyncContext` to line up with the async context formed
by IR generation for the type `<T> () async throws -> T`. When allocating
a future task, set up the context with the address of the future's storage
for the successful result and null out the error result, so the caller
will directly fill in the result. This eliminates a bunch of extra
complexity and a copy.
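A rough Swift-side picture of that layout (the real `FutureAsyncContext`
is a C++ class in the runtime; the field names here are assumptions for
illustration):

```swift
// Illustrative mirror only, not the runtime definition. The real type
// also carries the common AsyncContext fields (parent context, resume
// function pointer, flags).
struct FutureAsyncContextSketch {
    var errorResult: UnsafeMutableRawPointer?   // nulled at allocation;
                                                // the callee sets it on throw
    var indirectResult: UnsafeMutableRawPointer // aimed at the future's
                                                // result storage
}
```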
Use a single atomic for the wait queue that combines the status with
the first task in the queue. Address race conditions in waiting and
completing the future.
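Here's a toy of that combined-atomic scheme using the swift-atomics
package; the encoding below is an assumption for illustration, not the
runtime's actual one:

```swift
import Atomics

// Toy encoding: 0 = still executing with no waiters, 1 = completed;
// any other value is the address of the first Waiter in the queue.
let kExecuting: UInt = 0
let kCompleted: UInt = 1

final class Waiter {
    var next: UInt = kExecuting   // head word observed at enqueue time
    let wake: () -> Void
    init(wake: @escaping () -> Void) { self.wake = wake }
}

final class MiniFuture {
    private let word = ManagedAtomic<UInt>(kExecuting)

    /// Returns true if the caller must suspend (the waiter will be
    /// woken on completion), false if the future already completed.
    func wait(_ waiter: Waiter) -> Bool {
        let ref = Unmanaged.passRetained(waiter)
        let addr = UInt(bitPattern: ref.toOpaque())
        var current = word.load(ordering: .acquiring)
        while true {
            if current == kCompleted {
                ref.release()       // lost the race with complete()
                return false
            }
            waiter.next = current   // link in front of the old head
            let (done, original) = word.compareExchange(
                expected: current, desired: addr,
                ordering: .acquiringAndReleasing)
            if done { return true }
            current = original      // another waiter slipped in; retry
        }
    }

    func complete() {
        // Atomically flip to completed and take the whole waiter list.
        var head = word.exchange(kCompleted, ordering: .acquiringAndReleasing)
        while head != kExecuting && head != kCompleted {
            let w = Unmanaged<Waiter>
                .fromOpaque(UnsafeRawPointer(bitPattern: head)!)
                .takeRetainedValue()
            head = w.next
            w.wake()
        }
    }
}
```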
Thanks to John for setting the direction here for me.
Extend AsyncTask and the concurrency runtime with basic support for
task futures. AsyncTasks with futures contain a future fragment with
information about the type produced by the future, and where the
future will put the result value or the thrown error in the initial
context.
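Sketched in Swift for illustration (the real fragment is a C++ struct
in the runtime, and its result/error storage is tail-allocated after
the task; names here are made up):

```swift
// Illustrative shape only, not the runtime definition.
struct FutureFragmentSketch {
    var resultType: Any.Type              // type the future produces
    var waitQueueHead: UInt               // first waiting task, if any
    var storage: UnsafeMutableRawPointer  // where the result value or
                                          // thrown error will be written
}
```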
We still don't have the ability to schedule the waiting tasks on an
executor when the future completes, so this isn't useful for anything
but testing, and we can only test limited code paths.
There are things about this that I'm far from sold on. In
particular, I'm concerned that in order to implement escalation
correctly, we're going to have to add a status record for the
fact that the task is being executed, which means we're going
to have to potentially wait to acquire the status lock; overall,
that means making an extra runtime function call and doing some
atomics whenever we resume or suspend a task, which is an
uncomfortable amount of overhead.
The testing here is pretty grossly inadequate, but I wanted to lay
down the groundwork.