Add SWIFT_DEBUG_ENABLE_TASK_SLAB_ALLOCATOR, which is on by default. When turned off, async stack allocations call through to malloc/free. This allows memory debugging tools to be used on async stack allocations.
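As a rough sketch of the idea (the helper names and the exact env-var parsing here are hypothetical, not the runtime's actual entry points), the toggle amounts to routing each task allocation through malloc when the variable is disabled:

```
#include <cstddef>
#include <cstdlib>
#include <cstring>

// Hypothetical stand-ins for the slab allocator; illustrative only.
void *allocateFromTaskSlab(std::size_t size);
void deallocateFromTaskSlab(void *ptr);

static bool taskSlabAllocatorEnabled() {
  // On by default; treat "0" or "NO" as a request to fall back to malloc/free.
  const char *v = std::getenv("SWIFT_DEBUG_ENABLE_TASK_SLAB_ALLOCATOR");
  return !v || (std::strcmp(v, "0") != 0 && std::strcmp(v, "NO") != 0);
}

void *taskAlloc(std::size_t size) {
  if (!taskSlabAllocatorEnabled())
    return std::malloc(size);        // visible to malloc-based debugging tools
  return allocateFromTaskSlab(size);
}

void taskDealloc(void *ptr) {
  if (!taskSlabAllocatorEnabled())
    return std::free(ptr);
  deallocateFromTaskSlab(ptr);
}
```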
Remove some unused async let functions.
These were introduced in an early draft implementation of async let, but
never used by a released compiler, and they are not referenced as symbols
by any app binaries, so there's no reason to keep carrying them.
While I'm at it, dramatically improve the documentation of the remaining
async let API functions.
If the preloaded status is locked, then we need to reload it in order to distinguish between the current thread holding the lock and another thread holding the lock. Without this, if another thread holds the lock, then we won't set the is-locked bit. We'll still actually hold the lock, but other threads may perform operations locklessly if the bit is not set, which can cause a crash. By reloading status in that case, we ensure that the bit is always set correctly.
This manifested as crashes in task cancellation but could cause other task-related issues as well.
Also remove an assert of !isStatusRecordLocked() in AsyncTask::complete(). We allow other threads to access tasks and take the lock for things like cancellation, so the lock may legitimately be held at that point.
rdar://150327908
Because the size of `TaskAllocator` is not a round multiple of the machine
word size on 64-bit platforms, I think we end up with padding before the
`TaskLocal::Storage` that follows it, which makes the `PrivateStorage`
structure larger than the calculation in `ABI/Task.h`.
rdar://149067144
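A simplified illustration of the effect (the sizes and types below are made up for the example, not the real layout):

```
#include <cstddef>

struct TaskAllocatorLike {
  char bytes[36];                   // size not a multiple of 8
};
struct TaskLocalStorageLike {
  void *head;                       // needs pointer alignment
};
struct PrivateStorageLike {
  TaskAllocatorLike allocator;      // bytes 0..35
  TaskLocalStorageLike taskLocals;  // padded up to offset 40 on a typical LP64 ABI
};

// Summing the member sizes (36 + 8 == 44) underestimates the real size (48);
// that is the kind of mismatch the fixed-size calculation ran into.
static_assert(sizeof(PrivateStorageLike) >=
                  sizeof(TaskAllocatorLike) + sizeof(TaskLocalStorageLike),
              "padding can only make the aggregate larger");
```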
This avoids a potential race with the triggering done by task_cancel:
cancellation first sets the cancelled flag, and only THEN takes the lock and
iterates over the installed records. Because of that ordering we could have:
T1 flips the cancelled bit; T2 observes that and triggers the handler
"immediately" while installing the handler record; T1 then proceeds to lock
the records and trigger it again, causing a double trigger of the
cancellation handler.
resolves https://github.com/swiftlang/swift/issues/80161
resolves rdar://147493150
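As a toy model of the single-trigger invariant being restored (std::mutex and std::function stand in for the runtime's lock-protected status-record list; this is not the actual fix's code):

```
#include <functional>
#include <mutex>
#include <utility>
#include <vector>

struct TaskState {
  std::mutex recordLock;
  bool isCancelled = false;
  std::vector<std::function<void()>> handlers;   // "status records"
};

// Installing a handler decides *under the lock* whether cancellation has
// already run, so exactly one side ever sees (and fires) the handler.
void addCancellationHandler(TaskState &task, std::function<void()> handler) {
  bool fireNow = false;
  {
    std::lock_guard<std::mutex> guard(task.recordLock);
    if (task.isCancelled)
      fireNow = true;                 // cancel already ran; it won't see us
    else
      task.handlers.push_back(std::move(handler));
  }
  if (fireNow)
    handler();                        // fired exactly once, outside the lock
}

void cancel(TaskState &task) {
  std::vector<std::function<void()>> toRun;
  {
    std::lock_guard<std::mutex> guard(task.recordLock);
    task.isCancelled = true;          // flag flip and record grab are one step
    toRun.swap(task.handlers);
  }
  for (auto &h : toRun)
    h();                              // each installed handler fires exactly once
}
```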
Move to a recursive lock inline in the Task. This avoids the need to allocate a lock record and simplifies the code somewhat.
Change Task's OpaquePrivateStorage to compute its size at build time based on the sizes of its components, rather than having it be a fixed size. It appears that the fixed size was intended to be part of the ABI, but that didn't happen and we're free to change this size. We need to expand it slightly when using pthread_mutex as the recursive lock, as pthread_mutex is pretty big. Other recursive locks allow it to shrink slightly.
We don't have a recursive mutex in our Threading support code, so add a RecursiveMutex type.
rdar://113898653
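For the recursive lock itself, a minimal pthread-based sketch (the real Threading implementation is per-platform and more careful about error handling):

```
#include <pthread.h>

class RecursiveMutex {
  pthread_mutex_t mutex_;
public:
  RecursiveMutex() {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&mutex_, &attr);
    pthread_mutexattr_destroy(&attr);
  }
  ~RecursiveMutex() { pthread_mutex_destroy(&mutex_); }
  RecursiveMutex(const RecursiveMutex &) = delete;
  RecursiveMutex &operator=(const RecursiveMutex &) = delete;

  void lock() { pthread_mutex_lock(&mutex_); }     // safe to call re-entrantly
  void unlock() { pthread_mutex_unlock(&mutex_); } // must balance each lock()
};
```

On platforms where pthread_mutex is the backing implementation, embedding a lock like this inline in the task is what forces OpaquePrivateStorage to grow slightly.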
_swift_taskGroup_cancelAllChildren relies on there being no concurrent modification when called from the owning task, but this is not guaranteed.
Rearrange things to always take the owning task's status record lock when walking the group's children. Split _swift_taskGroup_cancelAllChildren into two functions: one that assumes/requires the lock is already held, and one that acquires it. The latter doesn't have the owning task on hand, but we can get it either from the current task or from the parent of the child task we're working on.
rdar://147172991
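The split follows a common pattern, sketched here with toy types (the real code takes the owning task's status record lock, not a std::mutex):

```
#include <mutex>
#include <vector>

struct ChildTaskLike { bool cancelled = false; };

struct TaskGroupLike {
  std::mutex ownerStatusRecordLock;   // stands in for the owning task's lock
  std::vector<ChildTaskLike *> children;

  // Variant that assumes/requires the caller already holds the lock.
  void cancelAllChildrenAssumeLocked() {
    for (auto *child : children)
      child->cancelled = true;        // safe: the list cannot change under us
  }

  // Variant that acquires the lock itself, then delegates.
  void cancelAllChildren() {
    std::lock_guard<std::mutex> guard(ownerStatusRecordLock);
    cancelAllChildrenAssumeLocked();
  }
};
```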
* [Concurrency] Adjust task escalation APIs to SE accepted shapes
* adjust test a little bit
* Fix closure lifetime in withTaskPriorityEscalationHandler
* avoid bringing the workaround function into the ABI by marking it @_alwaysEmitIntoClient (AEIC)
There cannot be undefined behavior at compile time, so these warnings can be
ignored when `offsetof` is used in a static assert:
```
warning: offset of on non-standard-layout type 'AsyncTask' [-Winvalid-offsetof]
```
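For example, the warning can be suppressed locally around a compile-time-only use of `offsetof` (stand-in type and member shown here; the real assert is against AsyncTask's layout):

```
#include <cstddef>

struct Base { virtual ~Base() = default; };   // makes the type non-standard-layout
struct TaskLike : Base { void *resumeContext; };

#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Winvalid-offsetof"
// Evaluated entirely at compile time, so there is no runtime UB to worry about.
static_assert(offsetof(TaskLike, resumeContext) % alignof(void *) == 0,
              "resumeContext must stay pointer-aligned");
#pragma clang diagnostic pop
```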
When using a future adapter, the resume context may not be valid after the task starts running. Only peer through the adapter when we're starting to run.
rdar://126298035
* Use the longer name ThreadSanitizer rather than TSan for the new files.
* Don't implement `tsan::consume` at all for now.
* Do the `tsan::release` for `ulock_unlock()` at the head of the function,
not at the tail.
* Add a comment to test/Sanitizers/tsan/once.swift to explain the test a
little more clearly.
rdar://110665213
Move the TSan functionality from Concurrency into Threading. Use it
in the Linux `ulock` implementation so that TSan knows about `ulock`
and will tolerate the newer `swift_once` implementation that uses it.
rdar://110665213
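A sketch of the shape of these annotations (names other than the `__tsan_*` interface and `tsan::acquire`/`tsan::release` are made up, and the unlock path is heavily simplified):

```
#if defined(__has_feature)
#  if __has_feature(thread_sanitizer)
#    include <sanitizer/tsan_interface.h>
#    define TSAN_SKETCH_ENABLED
#  endif
#endif

namespace tsan {
inline void acquire(void *addr) {
#ifdef TSAN_SKETCH_ENABLED
  __tsan_acquire(addr);   // a happens-before edge ends at this address
#else
  (void)addr;
#endif
}
inline void release(void *addr) {
#ifdef TSAN_SKETCH_ENABLED
  __tsan_release(addr);   // a happens-before edge starts at this address
#else
  (void)addr;
#endif
}
} // namespace tsan

// Simplified unlock path: the release annotation goes at the head of the
// function, before the store that actually publishes the unlock.
void ulock_unlock(unsigned *lock) {
  tsan::release(lock);
  __atomic_store_n(lock, 0, __ATOMIC_RELEASE);
  // a real implementation would also wake any waiters here
}
```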
Keep track of the executor a task is enqueued on. This way, we have the
necessary bookkeeping to escalate that executor when an enqueued task is
escalated.
Radar-Id: rdar://problem/101864092
Allow removing a status record that is already registered with the task.
Provide more versatile removeStatusRecord functions and update clients to
use them.
Radar-Id: rdar://problem/101864092
done the load or who need the oldStatus information after adding the
status record.
Change some of the memory barrier logic since we can take advantage of
load-through HW address dependency.
Radar-Id: rdar://problem/105634683
A field of an ActiveTaskStatus can also be modified while the
TaskStatusRecord list is being modified. Make the StatusRecordLock
reentrant.
Radar-Id: rdar://problem/88093007
Other threads/tasks which have a reference to the task might need to
access parts of it even after the task itself has completed.
Radar-Id: rdar://problem/88093007
We were detaching the child by just modifying the list, but the cancellation path assumed that would never be done without holding the task status lock.
This patch just fixes the current runtime; the back-deployment side is complicated.
Fixes rdar://88398824