Commit Graph

1369 Commits

Doug Gregor
85d003ef9b [Concurrency] Implement basic runtime support for task futures.
Extend AsyncTask and the concurrency runtime with basic support for
task futures. AsyncTasks with futures contain a future fragment with
information about the type produced by the future, and where the
future will put the result value or the thrown error in the initial
context.

We still don't have the ability to schedule the waiting tasks on an
executor when the future completes, so this isn't useful for anything
but testing, and we can only test limited code paths.
2020-11-14 15:18:54 -08:00
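A minimal C++ sketch of what such a future fragment might look like; the names and layout here are illustrative assumptions, not the runtime's actual declarations. The fragment records the result type of the future and reserves space for the result value or the thrown error in the task's allocation.

```cpp
#include <cstddef>

// Hypothetical stand-ins for runtime types; names are illustrative only.
struct Metadata;     // type metadata describing the future's result type
struct SwiftError;   // box holding a thrown error

// Sketch of a per-task future fragment: which type the future produces,
// and where a completed value or thrown error will be stored.
struct FutureFragmentSketch {
  const Metadata *resultType;   // type produced by the future
  SwiftError *error = nullptr;  // filled in if the task throws
  // Result storage would follow the fragment within the task allocation;
  // a real implementation sizes and aligns it via the type's value witnesses.
  void *resultBuffer() { return this + 1; }
};
```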
Arnold Schwaighofer
193a3d5a8f Temporarily add runAsync to run a task synchronously in the current thread
Add a temporary runtime entry point to run a task synchronously.
2020-11-13 10:41:42 -08:00
Arnold Schwaighofer
ed5a759fcf Change swift_task_create_f to swift_task_create now that explosions are async function pointers 2020-11-13 10:41:42 -08:00
Mike Ash
768a085de8 Merge pull request #34598 from mikeash/os-unfair-lock-mutex
[Runtime] Use os_unfair_lock for Mutex and StaticMutex on Darwin.
2020-11-11 11:37:12 -05:00
Mike Ash
1b376cddf1 Merge pull request #34463 from mikeash/concurrentreadablehashmap-inline-indices
[Runtime] Have ConcurrentReadableHashMap store indices inline when the table is sufficiently small.
2020-11-10 14:59:16 -05:00
Mike Ash
e82d9e8c7b Move ConditionMutex to ConditionVariable::Mutex and move various other Mutex.h types to be nested. 2020-11-10 14:44:59 -05:00
Doug Gregor
4c2c2f32e9 [Concurrency] Implement a builtin createAsyncTask() to create a new task.
`Builtin.createAsyncTask` takes flags, an optional parent task, and an
async/throwing function to execute, and passes them along to the
`swift_task_create_f` entry point to create a new (potentially child)
task, returning the new task and its initial context.
2020-11-07 23:05:04 -08:00
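The following is a rough C++ sketch of the shape such a task-creation entry point might take, using illustrative names and a simplified flags type; it is not the runtime's actual signature.

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative stand-ins; the real runtime types differ.
struct AsyncTask;
struct AsyncContext;
using TaskFunctionSketch = void(AsyncTask *task, AsyncContext *context);

// The pair returned to the caller: the new task plus its initial context.
struct AsyncTaskAndContextSketch {
  AsyncTask *task;
  AsyncContext *initialContext;
};

// Sketch of a create entry point: flags describing the task, an optional
// parent (null for a non-child task), and the async function to execute.
AsyncTaskAndContextSketch
task_create_sketch(uint32_t flags, AsyncTask *parent,
                   TaskFunctionSketch *function, size_t initialContextSize);
```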
Mike Ash
dd6c235a2d [Runtime] Use os_unfair_lock for Mutex and StaticMutex on Darwin.
os_unfair_lock is much smaller than pthread_mutex_t (4 bytes versus 64) and a bit faster.

However, it doesn't support condition variables. Most of our uses of Mutex don't use condition variables, but a few do. Introduce ConditionMutex and StaticConditionMutex, which allow condition variables and continue to use pthread_mutex_t.

On all other platforms, we continue to use the same backing mutex type for both Mutex and ConditionMutex.

rdar://problem/45412121
2020-11-06 13:05:37 -05:00
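A sketch of the platform split described above, assuming Darwin's os_unfair_lock API and plain pthreads for the condition-variable case; the type names are made up for illustration.

```cpp
#if defined(__APPLE__)
#include <os/lock.h>

// Plain mutex sketch: no condition-variable support needed, so the
// 4-byte os_unfair_lock suffices.
struct PlainMutexSketch {
  os_unfair_lock lock = OS_UNFAIR_LOCK_INIT;
  void acquire() { os_unfair_lock_lock(&lock); }
  void release() { os_unfair_lock_unlock(&lock); }
};
#endif

#include <pthread.h>

// Condition mutex sketch: condition variables require a pthread mutex,
// so this variant stays on pthread_mutex_t.
struct ConditionMutexSketch {
  pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
  pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
  void acquire() { pthread_mutex_lock(&mutex); }
  void release() { pthread_mutex_unlock(&mutex); }
  void wait() { pthread_cond_wait(&cond, &mutex); }
  void notifyAll() { pthread_cond_broadcast(&cond); }
};
```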
Doug Gregor
c291eb596b [Concurrency] Add cancelAsyncTask() builtin.
Implement a new builtin, `cancelAsyncTask()`, to cancel the given
asynchronous task. This lowers down to a call into the runtime
operation `swift_task_cancel()`.

Use this builtin to implement Task.Handle.cancel().
2020-11-05 13:50:17 -08:00
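A small sketch of the lowering described above, with an illustrative handle type; the swift_task_cancel declaration here is reconstructed from the description rather than copied from the runtime headers.

```cpp
// Illustrative stand-in for the runtime's task type.
struct AsyncTask;

// Cancellation entry point the builtin lowers to (shape assumed from the
// commit description above).
extern "C" void swift_task_cancel(AsyncTask *task);

// Sketch of a task handle whose cancel() simply forwards to the runtime.
struct TaskHandleSketch {
  AsyncTask *task;
  void cancel() { swift_task_cancel(task); }
};
```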
nate-chandler
5c59babfeb Merge pull request #34490 from nate-chandler/generic-metadata-prespecialization-components/special-entry
[metadata prespecialization] Add canonical records to cache.
2020-11-04 19:35:40 -08:00
Mike Ash
12d114713f [Runtime] Switch MetadataCache to ConcurrentReadableHashMap. (#34307)
* [Runtime] Switch MetadataCache to ConcurrentReadableHashMap.

Use StableAddressConcurrentReadableHashMap. MetadataCacheEntry's methods for awaiting a particular state assume a stable address: they repeatedly examine `this` in a loop while waiting on a condition variable, so we give each entry a stable address to accommodate that. Some of these caches may be able to tolerate unstable addresses if this code is changed to perform the necessary table lookup each time through the loop instead. Some of them store metadata inline, and we assume metadata never moves, so they'll have to stay this way.

* Have StableAddressConcurrentReadableHashMap remember the last found entry and check that before doing a more expensive lookup.

* Make a SmallMutex type that stores the mutex data out of line, and use it to get LockingConcurrentMapStorage to fit into the available space on 32-bit.

rdar://problem/70220660
2020-11-04 15:10:50 -05:00
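A simplified sketch of the "remember the last found entry" fast path, assuming a generic entry type with a matches() predicate; the real map's lookup and synchronization are more involved.

```cpp
#include <atomic>

// Sketch only: check a single cached entry pointer before paying for the
// full hash-table lookup.
template <class Entry, class Key>
struct StableAddressMapSketch {
  std::atomic<Entry *> lastFound{nullptr};

  Entry *find(const Key &key) {
    // Fast path: the entry most recently handed out, if it matches this key.
    if (Entry *last = lastFound.load(std::memory_order_acquire))
      if (last->matches(key))
        return last;
    Entry *entry = slowFind(key);  // the ordinary lookup path (elided)
    if (entry)
      lastFound.store(entry, std::memory_order_release);
    return entry;
  }

  Entry *slowFind(const Key &key);
};
```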
Nate Chandler
cda11b90a1 [Runtime] Add prespecialized-caching getGenericMetadata variant.
Add a new entry point for getting generic metadata which adds the
canonical metadata records attached to the nominal type descriptor to
the metadata cache.

Change the implementation of the primary entry-point
swift_getGenericMetadata to stop looking through canonical
prespecialized records.

Change the implementation of swift_getCanonicalSpecializedMetadata to
use the caching token attached to the nominal type descriptor to add
canonical prespecialized metadata records to the metadata cache only
once rather than using the cache variables to limit the number of times
the attempt was made.
2020-11-01 13:35:03 -08:00
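A loose sketch of the "only once" behavior attributed to the caching token above, using std::call_once as a stand-in for whatever mechanism the descriptor's token actually uses.

```cpp
#include <mutex>

// Sketch: a once-flag standing in for the per-descriptor caching token.
// The canonical prespecialized records get added to the metadata cache at
// most one time, no matter how many threads race to do it.
struct CanonicalRecordInsertionSketch {
  std::once_flag inserted;

  template <class InsertFn>
  void ensureRecordsCached(InsertFn insert) {
    std::call_once(inserted, insert);  // runs insert exactly once
  }
};
```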
Alejandro Alonso
424802fb34 Revert SE-0283 (#34492)
Reverted due to build failures.
2020-10-29 17:32:06 -07:00
Mike Ash
782fa27206 [Runtime] Move ElementCapacity into the Elements allocation.
This shrinks ConcurrentReadableHashMap a bit, which will be needed for adapting it for metadata caches.
2020-10-28 10:23:21 -04:00
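One plausible way to realize this, sketched here with made-up names: prefix the elements allocation with a small header that carries the capacity, so the map object itself no longer stores that field.

```cpp
#include <cstddef>
#include <cstdlib>

// Sketch: the capacity lives at the front of the elements allocation,
// so the hash map only keeps a single pointer to this storage.
template <class Element>
struct ElementStorageSketch {
  size_t capacity;  // previously a separate field on the map itself

  // Element payload follows the header in the same allocation.
  Element *elements() { return reinterpret_cast<Element *>(this + 1); }

  static ElementStorageSketch *allocate(size_t capacity) {
    auto *storage = static_cast<ElementStorageSketch *>(std::malloc(
        sizeof(ElementStorageSketch) + capacity * sizeof(Element)));
    storage->capacity = capacity;
    return storage;
  }
};
```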
Mike Ash
9f53d4a52e [Runtime] Have ConcurrentReadableHashMap store indices inline when the table is sufficiently small. 2020-10-27 11:55:31 -04:00
Azoy
4ff28f2b40 Implement Tuple Hashable Conformance 2020-10-22 18:28:02 -04:00
Azoy
fd950ebbf3 Implement Tuple Comparable Conformance
Add protocol witnesses for all Comparable requirements
2020-10-22 18:27:07 -04:00
Azoy
e60ef84bd2 Implement Tuple Equatable Conformance 2020-10-22 18:24:28 -04:00
John McCall
a06d18ce81 Add API for creating unscheduled tasks.
Make all tasks into heap objects.
2020-10-22 00:53:16 -04:00
John McCall
c18331c837 Move swift_task_alloc/dealloc into the Concurrency library.
Also, rename them to follow the general namespacing scheme.
2020-10-22 00:53:15 -04:00
John McCall
8ac4362754 Implement a simple library for task cancellation and status management.
There are things about this that I'm far from sold on.  In
particular, I'm concerned that in order to implement escalation
correctly, we're going to have to add a status record for the
fact that the task is being executed, which means we're going
to have to potentially wait to acquire the status lock; overall,
that means making an extra runtime function call and doing some
atomics whenever we resume or suspend a task, which is an
uncomfortable amount of overhead.

The testing here is pretty grossly inadequate, but I wanted to
lay down the groundwork here.
2020-10-15 00:36:36 -04:00
Mike Ash
daca3d9cf0 Merge pull request #34225 from mikeash/simpleglobalcache-concurrentreadablehashmap
[Runtime] Change SimpleGlobalCache to use ConcurrentReadableHashMap instead of ConcurrentMap.
2020-10-13 19:19:20 -04:00
John McCall
4481e3bf35 Generalize SWIFT_RUNTIME_EXPORT to work for other runtime libraries.
I intend to use this in the _Concurrency library.
2020-10-09 23:51:24 -04:00
Mike Ash
630aff7b19 [Runtime] Change SimpleGlobalCache to use ConcurrentReadableHashMap instead of ConcurrentMap.
This gives us faster lookups and a small advantage in memory usage. Most of these maps need stable addresses for their entries, so we add a level of indirection to ConcurrentReadableHashMap for these cases to accommodate that. This costs some extra memory, but it's still a net win.

A new StableAddressConcurrentReadableHashMap type handles this indirection and adds a convenience getOrInsert to take advantage of it.

ConcurrentReadableHashMap is tweaked to avoid any global constructors or destructors when using it as a global variable.

ForeignWitnessTables does not need stable addresses and it now uses ConcurrentReadableHashMap directly.

rdar://problem/70056398
2020-10-08 14:57:39 -04:00
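A sketch of the indirection mentioned above, with std::unordered_map standing in for the concurrent map so only the stable-address idea shows through: the table holds pointers to separately allocated entries, so growing or rehashing the table never moves an entry.

```cpp
#include <memory>
#include <string>
#include <unordered_map>

// Sketch only: illustrates the indirection, not the concurrency machinery.
template <class Value>
struct StableAddressCacheSketch {
  // The table owns heap-allocated entries; rehashing moves the pointers,
  // never the entries themselves.
  std::unordered_map<std::string, std::unique_ptr<Value>> table;

  Value *getOrInsert(const std::string &key) {
    auto &slot = table[key];
    if (!slot)
      slot = std::make_unique<Value>();  // this address stays stable
    return slot.get();
  }
};
```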
Pavel Yaskevich
e6b140e32b [Runtime] NFC: Fix swift_runtime_unreachable to swift_unreachable 2020-10-08 11:54:12 -07:00
Mike Ash
d369ce1bb5 Merge pull request #34063 from mikeash/concurrenthashmap-indices-shrink
[Runtime] Use 1-byte or 2-byte indices in ConcurrentReadableHashMap when possible.
2020-10-08 13:47:57 -04:00
nate-chandler
515f0db4ed Merge pull request #34141 from nate-chandler/concurrency/irgen/signature
[Concurrency] Async CC, part 1.
2020-10-06 07:43:02 -07:00
John McCall
0fb407943f [NFC] Rename swift_runtime_unreachable to swift_unreachable and make it use LLVM's support when available. 2020-10-03 02:54:56 -04:00
John McCall
66a42d8de7 [NFC] Move some of the runtime's compiler abstraction into Compiler.h 2020-10-03 02:54:56 -04:00
Nate Chandler
607772aaa2 [Runtime] Stubbed entry points for task de/alloc. 2020-09-30 18:57:48 -07:00
Mike Ash
ece0399d60 [Runtime] Have ConcurrentReadableHashMap use 1-byte or 2-byte indices when possible. 2020-09-24 15:28:18 -04:00
3405691582
2dd75857bb Specify Float80 as supported on OpenBSD.
This fixes unit test `stdlib/PrintFloat.swift.gyb` on OpenBSD.
2020-09-17 21:30:54 -04:00
Kuba (Brecka) Mracek
ea6cae658e Make waiting on condition variables error out on SWIFT_STDLIB_SINGLE_THREADED_RUNTIME (#33788) 2020-09-03 18:44:47 -07:00
Mike Ash
50ea66d1d9 Merge pull request #33585 from mikeash/type-lookup-error-reporting
Add error reporting when looking up types by demangled name.
2020-08-28 17:42:47 -04:00
Mike Ash
0990fa90b3 Merge pull request #33647 from mikeash/shrink-concurrentreadablehashmap
[Runtime] Shrink ConcurrentReadableHashMap a bit.
2020-08-28 14:45:17 -04:00
Mike Ash
fd6922f92d Add error reporting when looking up types by demangled name. 2020-08-28 14:43:51 -04:00
Mike Ash
4cd1b715bd [Runtime] Shrink ConcurrentReadableHashMap a bit.
We're using a lot of space on the free lists. Each vector is three words, and we have two of them. Switch to a single linked list. We only need one list, as both kinds of pointers just get free()'d. A linked list optimizes for the common case where the list is empty. This takes us from six words to one.

Also make ReaderCount, ElementCount, and ElementCapacity uint32_ts. The size_ts were unnecessarily large and this saves some space on 64-bit systems.

While we're in there, add 0/NULL initialization to all elements. The current use in the runtime is unaffected (it's statically allocated) but the local variables used in the test were tripping over this.
2020-08-27 13:05:40 -04:00
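A sketch of the single free list, assuming a retired allocation can be reused to hold the next pointer itself: the head is one word, and both push and drain need no auxiliary storage.

```cpp
#include <cstdlib>

// Sketch: retired allocations are chained through their own first word,
// so the whole free list costs one word for the head pointer.
struct FreeListSketch {
  struct Node { Node *next; };
  Node *head = nullptr;

  // Defer freeing an allocation until it is safe to do so
  // (for example, once no readers can still be using it).
  void push(void *allocation) {
    auto *node = static_cast<Node *>(allocation);
    node->next = head;
    head = node;
  }

  // Free everything that was deferred.
  void drain() {
    while (Node *node = head) {
      head = node->next;
      std::free(node);
    }
  }
};
```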
Mike Ash
58604ff1ba [Runtime] Fix memory ordering on a store in ConcurrentReadableArray. 2020-08-26 13:24:44 -04:00
Kuba (Brecka) Mracek
9ead8d57fa Add a single-threaded stdlib mode, use it for the 'minimal' stdlib (#33437) 2020-08-25 06:03:14 -07:00
Kuba (Brecka) Mracek
76da9a064a Use defined(_POSIX_THREADS) to detect pthread support (#33609)
Co-authored-by: Ben Rimmington <me@benrimmington.com>
2020-08-24 17:53:25 -07:00
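For reference, the detection idiom looks roughly like this; POSIX specifies that <unistd.h> defines _POSIX_THREADS when pthreads are available. The macro name below is illustrative only.

```cpp
#include <unistd.h>

#if defined(_POSIX_THREADS)
// pthreads are available on this platform.
#include <pthread.h>
#define RUNTIME_HAS_PTHREADS_SKETCH 1  // hypothetical macro for illustration
#else
#define RUNTIME_HAS_PTHREADS_SKETCH 0
#endif
```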
Mike Ash
6467827add [Runtime] Fix memory ordering on two loads in ConcurrentReadableHashMap. 2020-08-24 15:29:37 -04:00
Mike Ash
af35936589 [Runtime] Fix a race condition in ConcurrentReadableHashMap with concurrent reads and clears. 2020-08-22 10:51:04 -04:00
Mike Ash
ecd6d4ddec Add a new ConcurrentReadableHashMap type. Switch the protocol conformance cache
to use it.

ConcurrentReadableHashMap is lock-free for readers, with writers using a lock to
ensure mutual exclusion amongst each other. The intent is to eventually replace
all uses of ConcurrentMap with ConcurrentReadableHashMap.

ConcurrentReadableHashMap provides for relatively quick lookups by using a hash
table. Readers perform an atomic increment/decrement in order to inform writers
that there are active readers. The design attempts to minimize wasted memory by
storing the actual elements out-of-line, and having the table store indices into
a separate array of elements.

The protocol conformance cache now uses ConcurrentReadableHashMap, which
provides faster lookups and less memory use than the previous ConcurrentMap
implementation. The previous implementation cached
ProtocolConformanceDescriptors and extracted the WitnessTable after the cache
lookup. The new implementation directly caches the WitnessTable, removing an
extra step (potentially quite a slow one) from the fast path.

The previous implementation used a generational scheme to detect when negative
cache entries became obsolete due to new dynamic libraries being loaded, and
update them in place. The new implementation just clears the entire cache when
libraries are loaded, greatly simplifying the code and saving the memory needed
to track the current generation in each negative cache entry. This means we need
to re-cache all requested conformances after loading a dynamic library, but
loading libraries at runtime is rare and slow anyway.

rdar://problem/67268325
2020-08-20 13:05:30 -04:00
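A condensed sketch of the reader/writer protocol described above, with heavily simplified structure: readers announce themselves with an atomic count, writers serialize on a lock, and the table stores small indices into an out-of-line element array. The real type also handles resizing, memory reclamation, and cache clearing, none of which are attempted here.

```cpp
#include <atomic>
#include <cstdint>
#include <mutex>
#include <vector>

// Sketch only; not safe for concurrent resizing or reclamation.
template <class Element>
struct ReadableHashMapSketch {
  std::atomic<uint32_t> readerCount{0};
  std::mutex writerLock;          // writers exclude each other only
  std::vector<uint32_t> indices;  // hash table of indices (0 means empty)
  std::vector<Element> elements;  // elements stored out of line

  template <class Fn>
  void read(Fn fn) {
    readerCount.fetch_add(1, std::memory_order_acquire);  // announce reader
    fn(indices, elements);
    readerCount.fetch_sub(1, std::memory_order_release);  // reader finished
  }

  template <class Fn>
  void write(Fn fn) {
    std::lock_guard<std::mutex> guard(writerLock);
    fn(indices, elements);  // a real writer must also respect active readers
  }
};
```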
Mike Ash
e26aeca7eb [Runtime] Fix the isa mask assert for ARM64e.
Swift's isa mask includes the signature bits. objc_debug_isa_class_mask does not. Switch to objc_absolute_packed_isa_class_mask instead, which does.

While we're at it, get rid of the now-unnecessary guards for back-deployment.

rdar://problem/60148213
2020-08-06 16:33:59 -04:00
Nate Chandler
b0fc8daa83 [Runtime] Add entry point to canonicalize metadata.
The new function swift_getCanonicalSpecializedMetadata takes a metadata
request, a prespecialized non-canonical metadata, and a cache as its
arguments.  The idea of the function is either to bless the provided
prespecialized metadata as canonical if there is not currently a
canonical metadata record for the type it describes or else to return
the actual canonical metadata.

When the function is called, the metadata cache is checked for a preexisting
entry for this metadata.  If none is found, the passed-in prespecialized metadata is
added to the cache.  Otherwise, the metadata record found in the cache
is returned.

rdar://problem/56995359
2020-07-15 15:27:36 -07:00
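A pseudocode-level C++ sketch of the check-or-bless behavior described above, keyed by a placeholder string where the real cache uses the type's identity and generic arguments.

```cpp
#include <mutex>
#include <string>
#include <unordered_map>

// Illustrative stand-in for a metadata record.
struct MetadataSketch {
  std::string key;  // placeholder for the type identity
};

struct CanonicalMetadataCacheSketch {
  std::mutex lock;
  std::unordered_map<std::string, MetadataSketch *> canonical;

  // Bless the provided prespecialized record as canonical if the type has
  // no canonical record yet; otherwise return the record already blessed.
  MetadataSketch *getCanonical(MetadataSketch *prespecialized) {
    std::lock_guard<std::mutex> guard(lock);
    auto result = canonical.try_emplace(prespecialized->key, prespecialized);
    return result.first->second;
  }
};
```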
Karoy Lorentey
f44cbe4697 [Foundation] Update & simplify class name stability check
Move the ObjC class name stability check logic to the Swift runtime, exposing it as a new SPI called _swift_isObjCTypeNameSerializable.

Update the reporting logic. The ObjC names of generic classes are considered stable now, but private classes and classes defined in function bodies or other anonymous contexts are unstable by design.

On the overlay side, rewrite the check’s implementation in Swift and considerably simplify it.

rdar://57809977
2020-07-02 19:32:22 -07:00
Nate Chandler
6d0c34caf3 [Runtime] Add entry point to compare conformance descriptors.
The new function swift_compareProtocolConformanceDescriptors calls
through to the preexisting code in MetadataCacheKey which has been
extracted out from MetadataCacheKey::compareWitnessTables into a new
public static function
MetadataCacheKey::compareProtocolConformanceDescriptors.

The new function's availability is "future" for now.
2020-06-22 15:01:28 -07:00
swift-ci
ecad335f70 Merge pull request #32476 from nate-chandler/runtime/add-swift_compareTypeContextDescriptors 2020-06-22 14:40:35 -07:00
Nate Chandler
2c6d7d78df [Runtime] Add entry point to compare type context descriptors.
The new function `swift_compareTypeContextDescriptors` is equivalent to
a call through to swift::equalContexts.  The implementation is the same
as that of swift::equalContexts with the following removals:
- Handling of context descriptors whose kind falls outside of
  ContextDescriptorKind::Type_First...ContextDescriptorKind::Type_Last.
  Because the arguments are both TypeContextDescriptors, the kinds are
  known to fall within that range.
- Casting to TypeContextDescriptor.  The arguments are already of that
  type.

For now, the new function has "future" availability.
2020-06-22 12:07:02 -07:00
Mike Ash
3fcf86ac01 [Reflection][swift-inspect] Show tag names when dumping raw metadata allocations. 2020-06-17 11:31:12 -04:00