Extend AsyncTask and the concurrency runtime with basic support for
task futures. AsyncTasks with futures contain a future fragment with
information about the type produced by the future, and where the
future will put the result value or the thrown error in the initial
context.
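As a rough sketch (not the actual runtime layout; every name below is a simplified stand-in), the fragment carries the result type plus storage for the value or the error:

    #include <atomic>

    // Hypothetical, simplified model of a task's future fragment. The real
    // runtime layout differs; this just shows what information it records.
    struct Metadata;   // stand-in for the runtime's type metadata
    struct SwiftError; // stand-in for a boxed thrown error

    struct FutureFragmentModel {
      enum class Status { Executing, Success, Error };
      std::atomic<Status> status{Status::Executing};
      const Metadata *resultType = nullptr; // type produced by the future
      SwiftError *error = nullptr;          // set if the task throws
      void *resultStorage = nullptr;        // set if the task returns a value
    };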
We still don't have the ability to schedule the waiting tasks on an
executor when the future completes, so this isn't useful for anything
other than testing, and even then we can only exercise limited code paths.
`Builtin.createAsyncTask` takes flags, an optional parent task, and an
async/throwing function to execute, and passes it along to the
`swift_task_create_f` entry point to create a new (potentially child)
task, returning the new task and its initial context.
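Conceptually, the lowering looks roughly like the following (signatures simplified and hypothetical, not the runtime's exact declarations):

    #include <cstddef>

    // Simplified model of the entry point Builtin.createAsyncTask forwards to.
    struct AsyncTask;
    struct AsyncContext;
    using TaskFunction = void(AsyncContext *); // the async/throwing function

    struct AsyncTaskAndContext {
      AsyncTask *task;       // the newly created (potentially child) task
      AsyncContext *context; // its initial async context
    };

    AsyncTaskAndContext swift_task_create_f(size_t flags, AsyncTask *parent,
                                            TaskFunction *function,
                                            size_t initialContextSize) {
      // The real implementation allocates the task and its initial context
      // together and links the parent; elided in this sketch.
      return {nullptr, nullptr};
    }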
os_unfair_lock is much smaller than pthread_mutex_t (4 bytes versus 64) and a bit faster, so switch Mutex to it where it's available.
However, os_unfair_lock doesn't support condition variables. Most of our uses of Mutex don't use condition variables, but a few do. Introduce ConditionMutex and StaticConditionMutex, which allow condition variables and continue to use pthread_mutex_t.
On all other platforms, we continue to use the same backing mutex type for both Mutex and ConditionMutex.
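A minimal sketch of the shape such a type takes (hypothetical, using the pthread APIs directly):

    #include <pthread.h>

    // Sketch of a mutex that supports condition variables by staying on
    // pthread_mutex_t; not the actual runtime implementation.
    class ConditionMutexSketch {
      pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
      pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    public:
      void lock() { pthread_mutex_lock(&mutex); }
      void unlock() { pthread_mutex_unlock(&mutex); }
      void wait() { pthread_cond_wait(&cond, &mutex); } // caller holds the lock
      void notifyAll() { pthread_cond_broadcast(&cond); }
    };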
rdar://problem/45412121
Implement a new builtin, `cancelAsyncTask()`, to cancel the given
asynchronous task. This lowers to a call to the runtime operation
`swift_task_cancel()`.
Use this builtin to implement Task.Handle.cancel().
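In outline (a modeled declaration; the real entry point carries the Swift calling convention and runtime annotations):

    // Sketch: Task.Handle.cancel() reduces to the builtin, which emits a
    // call to swift_task_cancel on the handle's underlying task.
    struct AsyncTask;
    extern "C" void swift_task_cancel(AsyncTask *task);

    void cancelHandle(AsyncTask *task) { swift_task_cancel(task); }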
* [Runtime] Switch MetadataCache to ConcurrentReadableHashMap.
Use StableAddressConcurrentReadableHashMap. MetadataCacheEntry's methods for awaiting a particular state assume a stable address: they repeatedly examine `this` in a loop while waiting on a condition variable, so we give entries a stable address to accommodate that. Some of these caches may be able to tolerate unstable addresses if this code is changed to perform the necessary table lookup each time through the loop instead. Some of them store metadata inline, and we assume metadata never moves, so those will have to stay this way.
* Have StableAddressConcurrentReadableHashMap remember the last found entry and check that before doing a more expensive lookup.
* Make a SmallMutex type that stores the mutex data out of line, and use it to get LockingConcurrentMapStorage to fit into the available space on 32-bit.
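For illustration, the out-of-line trick looks something like this sketch (hypothetical names; the real type must also make initialization thread-safe):

    #include <pthread.h>

    // Sketch of the idea behind SmallMutex: keep one pointer inline and
    // allocate the large pthread_mutex_t lazily on first use.
    class SmallMutexSketch {
      pthread_mutex_t *ptr = nullptr; // one word inline instead of 64 bytes
      pthread_mutex_t *get() {
        if (!ptr) { // NOTE: real code must make this race-free
          ptr = new pthread_mutex_t;
          pthread_mutex_init(ptr, nullptr);
        }
        return ptr;
      }
    public:
      void lock() { pthread_mutex_lock(get()); }
      void unlock() { pthread_mutex_unlock(get()); }
    };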
rdar://problem/70220660
Add a new entry point for getting generic metadata which adds the
canonical metadata records attached to the nominal type descriptor to
the metadata cache.
Change the implementation of the primary entry point,
swift_getGenericMetadata, to stop looking through canonical
prespecialized records.
Change the implementation of swift_getCanonicalSpecializedMetadata to
use the caching token attached to the nominal type descriptor, so that
canonical prespecialized metadata records are added to the metadata
cache only once, rather than relying on the cache variables to limit
the number of times the attempt is made.
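The effect of the caching token is one-time registration per descriptor, analogous to this simplified model (std::call_once standing in for the runtime's token; all names here are illustrative):

    #include <mutex>

    // Model: each nominal type descriptor carries a token ensuring its
    // canonical prespecialized records are added to the metadata cache
    // exactly once, no matter how many threads race to do it.
    struct TypeContextDescriptorModel {
      std::once_flag cachingToken; // stand-in for the runtime's token
    };

    void registerPrespecializations(TypeContextDescriptorModel &descriptor) {
      std::call_once(descriptor.cachingToken, [] {
        // add each canonical prespecialized metadata record to the cache
      });
    }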
There are things about this that I'm far from sold on. In
particular, I'm concerned that in order to implement escalation
correctly, we're going to have to add a status record for the
fact that the task is being executed, which means we're going
to have to potentially wait to acquire the status lock; overall,
that means making an extra runtime function call and doing some
atomics whenever we resume or suspend a task, which is an
uncomfortable amount of overhead.
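To make the concern concrete, here is an illustrative model of that cost: even the cheapest version means an atomic read-modify-write on the status word at every resume and suspend (names hypothetical):

    #include <atomic>
    #include <cstdint>

    // If "currently executing" must be reflected in the task's status,
    // every resume/suspend pays at least one atomic RMW, on top of the
    // runtime function call itself.
    struct TaskStatusModel {
      static constexpr uintptr_t IsRunning = 1;
      std::atomic<uintptr_t> status{0};

      void noteResumed() {
        status.fetch_or(IsRunning, std::memory_order_acquire);
      }
      void noteSuspended() {
        status.fetch_and(~IsRunning, std::memory_order_release);
      }
    };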
The testing here is grossly inadequate, but I wanted to lay down the
groundwork.
This gives us faster lookups and a small advantage in memory usage. Most of these maps need stable addresses for their entries, so we add a level of indirection to ConcurrentReadableHashMap for these cases to accommodate that. This costs some extra memory, but it's still a net win.
A new StableAddressConcurrentReadableHashMap type handles this indirection and adds a convenience getOrInsert to take advantage of it.
ConcurrentReadableHashMap is tweaked to avoid any global constructors or destructors when using it as a global variable.
ForeignWitnessTables does not need stable addresses and it now uses ConcurrentReadableHashMap directly.
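The indirection amounts to this simplified model (a linear scan standing in for the real hash lookup; names are illustrative):

    #include <memory>
    #include <vector>

    // Model of the stable-address indirection: the table stores pointers,
    // so growing or rehashing moves the pointers while the entries
    // themselves never move.
    template <class Entry>
    struct StableAddressTableModel {
      std::vector<std::unique_ptr<Entry>> elements; // entry addresses are stable

      template <class Key>
      Entry *getOrInsert(const Key &key) {
        for (auto &e : elements) // the real table hashes instead of scanning
          if (e->matches(key))
            return e.get();
        elements.push_back(std::make_unique<Entry>(key));
        return elements.back().get();
      }
    };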
rdar://problem/70056398
We're using a lot of space on the free lists. Each vector is three words, and we have two of them. Switch to a single linked list. We only need one list, as both kinds of pointers just get free()'d. A linked list optimizes for the common case where the list is empty. This takes us from six words to one.
Also make ReaderCount, ElementCount, and ElementCapacity uint32_ts. The size_ts were unnecessarily large and this saves some space on 64-bit systems.
While we're in there, add 0/NULL initialization to all elements. The current use in the runtime is unaffected (it's statically allocated) but the local variables used in the test were tripping over this.
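The single free list is the classic intrusive linked list; a sketch (assuming, as the runtime can, that each allocation is at least pointer-sized):

    #include <cstdlib>

    // A freed allocation's first word links to the next entry, so an empty
    // list costs exactly one null pointer.
    struct FreeListNode { FreeListNode *next; };

    struct FreeListModel {
      FreeListNode *head = nullptr; // one word, vs. two 3-word vectors

      void push(void *ptr) {
        auto *node = static_cast<FreeListNode *>(ptr);
        node->next = head;
        head = node;
      }
      void freeAll() { // called once no readers can see the old pointers
        while (head) {
          FreeListNode *next = head->next;
          free(head); // both kinds of pointers just get free()'d
          head = next;
        }
      }
    };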
* [Runtime] Add ConcurrentReadableHashMap and convert the protocol conformance cache to use it.
ConcurrentReadableHashMap is lock-free for readers, with writers using a lock to
ensure mutual exclusion amongst each other. The intent is to eventually replace
all uses of ConcurrentMap with ConcurrentReadableHashMap.
ConcurrentReadableHashMap provides relatively quick lookups by using a hash
table. Readers perform an atomic increment/decrement to inform writers that
there are active readers. The design attempts to minimize wasted memory by
storing the actual elements out-of-line and having the table store indices into
a separate array of elements.
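The reader side of that protocol reduces to a counter, roughly like this model (illustrative, not the actual implementation):

    #include <atomic>
    #include <cstdint>

    // Readers bump a shared count so a writer can tell when old element
    // arrays can safely be freed.
    struct ReaderCountModel {
      std::atomic<uint32_t> readerCount{0};

      template <class F>
      auto read(F body) {
        readerCount.fetch_add(1, std::memory_order_acquire);
        auto result = body(); // snapshot and search without taking a lock
        readerCount.fetch_sub(1, std::memory_order_release);
        return result;
      }
      bool hasActiveReaders() {
        return readerCount.load(std::memory_order_acquire) != 0;
      }
    };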
The protocol conformance cache now uses ConcurrentReadableHashMap, which
provides faster lookups and less memory use than the previous ConcurrentMap
implementation. The previous implementation cached
ProtocolConformanceDescriptors and extracted the WitnessTable after the cache
lookup. The new implementation directly caches the WitnessTable, removing an
extra (and potentially quite slow) step from the fast path.
The previous implementation used a generational scheme to detect when negative
cache entries became obsolete due to new dynamic libraries being loaded, and
update them in place. The new implementation just clears the entire cache when
libraries are loaded, greatly simplifying the code and saving the memory needed
to track the current generation in each negative cache entry. This means we need
to re-cache all requested conformances after loading a dynamic library, but
loading libraries at runtime is rare and slow anyway.
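The clear-on-load behavior amounts to this simplified model (the onImageAdded hook is hypothetical; the real runtime is notified by the dynamic loader):

    #include <mutex>
    #include <unordered_map>

    // Rather than tracking a generation in every negative entry, drop the
    // whole cache whenever a new dynamic library appears.
    struct ConformanceCacheModel {
      std::mutex lock;
      std::unordered_map<const void *, const void *> cache; // key -> witness table

      void onImageAdded() { // hypothetical hook, run when a library loads
        std::lock_guard<std::mutex> guard(lock);
        cache.clear(); // negative entries can no longer be trusted
      }
    };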
rdar://problem/67268325
Swift's isa mask includes the signature bits. objc_debug_isa_class_mask does not. Switch to objc_absolute_packed_isa_class_mask instead, which does.
While we're at it, get rid of the now-unnecessary guards for back-deployment.
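For illustration, the masking step looks like the following sketch (the declaration style is illustrative; the symbol is exported as an absolute symbol, so its address is the mask value):

    #include <cstdint>

    extern "C" const uint8_t objc_absolute_packed_isa_class_mask[];

    static const void *classFromIsa(uintptr_t isa) {
      uintptr_t mask = (uintptr_t)objc_absolute_packed_isa_class_mask;
      // Unlike objc_debug_isa_class_mask, this mask includes the signature
      // bits, matching Swift's own isa mask.
      return (const void *)(isa & mask);
    }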
rdar://problem/60148213
The new function swift_getCanonicalSpecializedMetadata takes a metadata
request, a prespecialized non-canonical metadata, and a cache as its
arguments. The idea of the function is either to bless the provided
prespecialized metadata as canonical if there is not currently a
canonical metadata record for the type it describes or else to return
the actual canonical metadata.
When the function is called, the metadata cache is checked for a
preexisting entry for this metadata. If none is found, the passed-in
prespecialized metadata is added to the cache. Otherwise, the metadata
record found in the cache is returned.
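In pseudocode-ish C++, that cache interaction is a simple get-or-insert (all names here are illustrative models, not the runtime's types):

    #include <mutex>
    #include <unordered_map>

    struct MetadataModel { const void *typeKey; };

    // The first caller's prespecialized record is blessed as canonical;
    // later callers get whatever record is already in the cache.
    struct CanonicalMetadataCacheModel {
      std::mutex lock;
      std::unordered_map<const void *, const MetadataModel *> canonical;

      const MetadataModel *getCanonical(const MetadataModel *prespecialized) {
        std::lock_guard<std::mutex> guard(lock);
        auto result =
            canonical.try_emplace(prespecialized->typeKey, prespecialized);
        return result.first->second; // an existing entry wins
      }
    };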
rdar://problem/56995359
Move the ObjC class name stability check logic to the Swift runtime, exposing it as a new SPI called _swift_isObjCTypeNameSerializable.
Update the reporting logic. The ObjC names of generic classes are considered stable now, but private classes and classes defined in function bodies or other anonymous contexts are unstable by design.
On the overlay side, rewrite the check’s implementation in Swift and considerably simplify it.
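A declaration sketch of the SPI as callers might see it (the parameter and return types here are assumptions, not the actual signature):

    #include <objc/runtime.h>

    extern "C" bool _swift_isObjCTypeNameSerializable(Class cls);

    static bool nameIsStable(Class cls) {
      // Generic classes now report a stable name; private classes and
      // classes in anonymous contexts do not.
      return _swift_isObjCTypeNameSerializable(cls);
    }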
rdar://57809977
The new function swift_compareProtocolConformanceDescriptors calls
through to preexisting code in MetadataCacheKey, which has been
extracted from MetadataCacheKey::compareWitnessTables into a new
public static function,
MetadataCacheKey::compareProtocolConformanceDescriptors.
The new function's availability is "future" for now.
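A declaration sketch (the descriptor type is opaque here, and the memcmp-style int return convention is an assumption):

    struct ProtocolConformanceDescriptor;

    extern "C" int swift_compareProtocolConformanceDescriptors(
        const ProtocolConformanceDescriptor *lhs,
        const ProtocolConformanceDescriptor *rhs);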
The new function `swift_compareTypeContextDescriptors` is equivalent to
a call through to swift::equalContexts. The implementation is the same
as that of swift::equalContexts with the following removals:
- Handling of context descriptors of kinds outside of the
ContextDescriptorKind::Type_First...ContextDescriptorKind::Type_Last range.
Because the arguments are both TypeContextDescriptors, the kinds are
known to fall within that range.
- Casting to TypeContextDescriptor. The arguments are already of that
type.
For now, the new function has "future" availability.
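A declaration sketch (the boolean result mirrors swift::equalContexts, but the exact signature here is an assumption):

    struct TypeContextDescriptor;

    extern "C" bool swift_compareTypeContextDescriptors(
        const TypeContextDescriptor *lhs, const TypeContextDescriptor *rhs);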