Moved the test for the metadata kind to MetadataLookup.cpp.
Added an assertion (for debug builds) to Metadata.cpp to catch the case where
something manages to bypass that test.
Added a special test for getObjCClassByMangledName; this needs testing
separately as it uses the DecodedMetadataBuilder, which doesn't get exercised
by the normal demangling tests.
Added all the test cases from rdar://63485806, rdar://63488139, rdar://63496478,
rdar://63410196 and rdar://68449341. The test cases from rdar://63485806 are
disabled for now because the problem there is the error-handling mechanism
itself (or the lack of one), rather than a failure to handle the errors.
Fixes the remaining cases from
rdar://63488139
rdar://63496478
Added SWIFT_RUNTIME_WEAK_IMPORT/CHECK/USE macros.
Everything supports fast dealloc except x86 iOS simulators, so we no longer need
to look up objc_has_weak_formation_callout.
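Roughly what these macros can look like; the definitions below are
illustrative assumptions for the sketch, not the runtime's exact ones:
```
// Illustrative definitions only; the runtime's real macros may differ.

// Mark a declaration as weakly imported: its address may be NULL when
// running on an OS that predates the symbol.
#define SWIFT_RUNTIME_WEAK_IMPORT __attribute__((weak_import))

// Test for the symbol's presence. Routing the address through a local
// variable avoids clang's tautological-comparison warning when the
// symbol is in fact linked strongly for the current deployment target.
#define SWIFT_RUNTIME_WEAK_CHECK(x)                                        \
  ({                                                                       \
    void *_ptr = (void *)&(x);                                             \
    _ptr != NULL;                                                          \
  })

// Wrap a use of the symbol after a successful check.
#define SWIFT_RUNTIME_WEAK_USE(x) (x)
```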
Added direct references for
objc_setHook_lazyClassNamer
_objc_realizeClassFromSwift
objc_setHook_getClass
os_system_version_get_current_version
_dyld_is_objc_constant
Implement name mangling, type metadata, runtime demangling, etc. for
global-actor qualified function types. Ensure that the manglings
round-trip through the various subsystems.
Implements rdar://78269642.
Previously, AsyncFunctionPointer constants were signed as code. That
was incorrect: these constants are in fact data. Here, that is fixed.
rdar://76118522
* Move differentiability kinds from target function type metadata to trailing objects so that we don't exhaust all remaining bits of function type metadata.
* Differentiability kind is now stored in a tail-allocated word when function type flags say it's differentiable, located immediately after the normal function type metadata's contents (with proper alignment in between); see the layout sketch after the grammar below.
* Add new runtime function `swift_getFunctionTypeMetadataDifferentiable` which handles differentiable function types.
* Fix mangling of different differentiability kinds in function types. Mangle it like `ConcurrentFunctionType` so that we can drop special cases for escaping functions.
```
function-signature ::= params-type params-type async? sendable? throws? differentiable? // results and parameters
...
differentiable ::= 'jf' // @differentiable(_forward) on function type
differentiable ::= 'jr' // @differentiable(reverse) on function type
differentiable ::= 'jd' // @differentiable on function type
differentiable ::= 'jl' // @differentiable(_linear) on function type
```
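A simplified analog of the tail-allocated layout from the second bullet
above; the real FunctionTypeMetadata has more fields, and the flag bit
position here is an assumption:
```
#include <cstdint>

struct FunctionTypeFlags {
  uintptr_t Bits;
  bool isDifferentiable() const { return Bits & 0x1; } // bit position assumed
};

struct FunctionTypeMetadata {
  FunctionTypeFlags Flags;
  uintptr_t NumParameters;
  // Parameter metadata pointers are tail-allocated here; when the flags
  // say the type is differentiable, one extra word holding the kind
  // follows them.
  const void *const *getParameters() const {
    return reinterpret_cast<const void *const *>(this + 1);
  }
  uintptr_t getDifferentiabilityKind() const {
    if (!Flags.isDifferentiable())
      return 0; // the trailing word is absent for non-differentiable types
    return reinterpret_cast<uintptr_t>(getParameters()[NumParameters]);
  }
};
```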
Resolves rdar://75240064.
The previous fix here switched dyn_cast for dyn_cast_or_null, but this left us with an assertion failure in cast<>. Instead, explicitly check for NULL at the top of the function.
Somehow, clang was generating code that accepted nil and returned nil in swift_getObjCClassFromMetadata, but is no longer doing so. Some code relies on this to work, so switch to dyn_cast_or_null to accept nil explicitly.
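The shape of the final fix, as a self-contained sketch with stand-in
types; the real function lives in the runtime and returns an ObjC class:
```
// Stand-ins for the sketch; not the runtime's real types.
struct Metadata {};
using Class = void *;
Class lookUpObjCClass(const Metadata *metadata); // hypothetical helper

Class swift_getObjCClassFromMetadata(const Metadata *theMetadata) {
  // Some callers pass nil and expect nil back; check explicitly at the
  // top instead of relying on dyn_cast_or_null, which still tripped an
  // assertion in a downstream cast<>.
  if (!theMetadata)
    return nullptr;
  return lookUpObjCClass(theMetadata);
}
```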
rdar://74895271
Take the existing CompatibilityOverride mechanism and generalize it so it can be used in both the runtime and Concurrency libraries. The mechanism is preprocessor-heavy, so this requires some tricks. Use the SWIFT_TARGET_LIBRARY_NAME define to distinguish the libraries, and use a different .def file and mach-o section name accordingly.
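One of the tricks, sketched: select a per-library section name by pasting
SWIFT_TARGET_LIBRARY_NAME onto a prefix. The macro and section names below
are illustrative assumptions, not the runtime's exact spellings:
```
// Two-level concat so SWIFT_TARGET_LIBRARY_NAME expands before pasting.
#define COMPAT_PASTE(a, b) a##b
#define COMPAT_CONCAT(a, b) COMPAT_PASTE(a, b)

// One override section name per library.
#define COMPAT_SECTION_NAME_swiftCore "__swift_hooks"
#define COMPAT_SECTION_NAME_swift_Concurrency "__s_async_hook"

// Resolves to the entry for whichever library is being built.
#define COMPAT_SECTION_NAME \
  COMPAT_CONCAT(COMPAT_SECTION_NAME_, SWIFT_TARGET_LIBRARY_NAME)
```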
We want the global/main executor functions to be a little more flexible. Instead of using the override mechanism, we expose function pointers that can be set by the compatibility library, or by any other code that wants to use a custom implementation.
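A sketch of the hook shape; the hook's name follows the description, but
the exact signature is an assumption:
```
class Job; // stand-in for the runtime's Job type

// Default path used when no hook is installed (stub for the sketch).
static void defaultEnqueueGlobal(Job *job) { (void)job; }

// A compatibility library, or any other client wanting custom behavior,
// can set this pointer to replace the global executor's implementation.
extern "C" void (*swift_task_enqueueGlobal_hook)(Job *job) = nullptr;

void swift_task_enqueueGlobal(Job *job) {
  if (swift_task_enqueueGlobal_hook)
    swift_task_enqueueGlobal_hook(job);
  else
    defaultEnqueueGlobal(job);
}
```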
rdar://73726764
Some ObjC runtime calls are weak or strong depending on the deployment target. When strong, we get warnings that the NULL checks always succeed; silence them.
Some of the adjacent code looked up functions using dlsym when they aren't provided by the SDK. Our current minimum SDK always has them, so remove the dlsym workaround.
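Illustrative before/after, using the SWIFT_RUNTIME_WEAK_* macros sketched
earlier; the hook type, handler, and weak declaration here are assumptions
for the sketch:
```
#include <objc/runtime.h>

// Declared here for the sketch; the real declaration comes from the
// ObjC runtime headers in the current minimum SDK.
using objc_hook_getClass = BOOL (*)(const char *name, Class *outClass);
extern "C" void objc_setHook_getClass(objc_hook_getClass newValue,
                                      objc_hook_getClass *outOldValue)
    __attribute__((weak_import));

static BOOL myGetClassHandler(const char *, Class *) { return NO; }
static objc_hook_getClass oldGetClassHook;

void installGetClassHook() {
  // Before: resolved via dlsym because older SDKs lacked the declaration.
  //   auto setHook = reinterpret_cast<void (*)(objc_hook_getClass,
  //                                            objc_hook_getClass *)>(
  //       dlsym(RTLD_DEFAULT, "objc_setHook_getClass"));
  //   if (setHook) setHook(myGetClassHandler, &oldGetClassHook);

  // After: reference the symbol directly; the weak import plus the new
  // macros cover deployment targets where it may be absent.
  if (SWIFT_RUNTIME_WEAK_CHECK(objc_setHook_getClass))
    SWIFT_RUNTIME_WEAK_USE(
        objc_setHook_getClass(myGetClassHandler, &oldGetClassHook));
}
```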
In the uncached case, we'd scan conformances, cache them, then re-query the cache. This worked fine when the cache always grew, but now we clear the cache when loading new Swift images into the process. If that happens between the scan and the re-query, we lose the entry and return a false negative.
Instead, track what we've found in the scan in a separate local table, then query that after completing the scan.
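A self-contained analog of the race and the fix, with an ordinary map
standing in for the conformance cache:
```
#include <mutex>
#include <optional>
#include <string>
#include <unordered_map>

std::mutex cacheLock;
std::unordered_map<std::string, int> sharedCache; // may be cleared on image load

std::optional<int> scanAndLookup(
    const std::string &key,
    const std::unordered_map<std::string, int> &records) {
  std::optional<int> found; // local table: survives a concurrent cache clear
  for (auto &entry : records) {
    {
      std::lock_guard<std::mutex> guard(cacheLock);
      sharedCache.insert(entry); // still populate the shared cache
    }
    if (entry.first == key)
      found = entry.second; // but remember the match locally
  }
  // Return the locally tracked result instead of re-querying the shared
  // cache, which might have been cleared after the scan completed.
  return found;
}
```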
While we're in there, fix a bug in TypeLookupError where operator= accidentally copied this->Context instead of other.Context. This caused the runtime to crash when trying to print error messages due to the false negative.
Add a no-parameter constructor to TypeLookupErrorOr<> to distinguish the case where it's being initialized with nothing from the case where it's being initialized with nullptr.
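A self-contained analog of the distinction; the real TypeLookupErrorOr
differs in detail:
```
#include <string>
#include <variant>

struct TypeLookupError { std::string message; };

template <typename T>
class TypeLookupErrorOr {
  std::variant<TypeLookupError, T> Value;

public:
  // No-argument construction means "no result", which must not be
  // confused with holding a value that happens to be null.
  TypeLookupErrorOr() : Value(TypeLookupError{"unknown error"}) {}
  TypeLookupErrorOr(const T &value) : Value(value) {}
  bool isError() const {
    return std::holds_alternative<TypeLookupError>(Value);
  }
};

// TypeLookupErrorOr<void *> a;          // error state
// TypeLookupErrorOr<void *> b(nullptr); // holds a null value: not an error
```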
Instead of scribbling each allocation as it's parceled out, scribble the entire chunk up-front. Then, when handing out an allocation, check to make sure it still has the right scribbled data in it.
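A self-contained sketch of the scheme; the chunk management is simplified,
and 0xAA is the scribble value used elsewhere for metadata allocations:
```
#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <cstring>

static constexpr uint8_t Scribble = 0xAA;

struct ChunkAllocator {
  uint8_t *cursor = nullptr, *end = nullptr;

  void addChunk(size_t size) {
    cursor = static_cast<uint8_t *>(malloc(size));
    end = cursor + size;
    memset(cursor, Scribble, size); // scribble the entire chunk up-front
  }

  void *allocate(size_t size) {
    assert(cursor && cursor + size <= end && "chunk exhausted");
    // If anything wrote here before the allocation was handed out, the
    // scribble pattern will have been disturbed.
    for (size_t i = 0; i < size; ++i)
      assert(cursor[i] == Scribble && "use before allocation");
    void *result = cursor;
    cursor += size;
    return result;
  }
};
```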
To prevent rdar://problem/68997282 from regressing, verify at runtime in
debug builds that in calls to swift_allocateGenericValueMetadata the
extraDataSize argument matches the OffsetInWords and SizeInWords
specified by the GenericMetadataPartialPattern available within the
pattern argument.
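The check, roughly; the stand-in pattern layout and word-size arithmetic
below are assumptions:
```
#include <cassert>
#include <cstddef>
#include <cstdint>

struct GenericMetadataPartialPattern {
  uint16_t OffsetInWords;
  uint16_t SizeInWords;
};

void verifyExtraDataSize(size_t extraDataSize,
                         const GenericMetadataPartialPattern *pattern) {
#ifndef NDEBUG
  if (pattern) {
    size_t expected =
        (pattern->OffsetInWords + pattern->SizeInWords) * sizeof(void *);
    assert(extraDataSize == expected &&
           "extraDataSize disagrees with the pattern");
  }
#endif
  (void)extraDataSize;
  (void)pattern;
}
```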
initClassFieldOffsetVector writes the instanceStart and size to the class's rodata. In some cases they already match, and this write will dirty memory unnecessarily, and prevent the compiler from emitting those rodatas into read-only memory.
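The idea, as a sketch with assumed field names:
```
#include <cstdint>

struct ClassROData {
  uint32_t InstanceStart;
  uint32_t InstanceSize;
};

void updateClassExtents(ClassROData *rodata, uint32_t start, uint32_t size) {
  // Skip redundant stores: untouched rodata stays clean and can remain
  // in read-only memory.
  if (rodata->InstanceStart != start)
    rodata->InstanceStart = start;
  if (rodata->InstanceSize != size)
    rodata->InstanceSize = size;
}
```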
rdar://problem/71119533
* [Runtime] Switch MetadataCache to ConcurrentReadableHashMap.
Use StableAddressConcurrentReadableHashMap. MetadataCacheEntry's methods for awaiting a particular state assume a stable address: they repeatedly examine `this` in a loop while waiting on a condition variable, so we give entries stable addresses to accommodate that. Some of these caches could tolerate unstable addresses if the code were changed to redo the table lookup on each pass through the loop; others store metadata inline, and since we assume metadata never moves, they'll have to stay this way.
* Have StableAddressConcurrentReadableHashMap remember the last found entry and check that before doing a more expensive lookup (sketched below).
* Make a SmallMutex type that stores the mutex data out of line, and use it to get LockingConcurrentMapStorage to fit into the available space on 32-bit.
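A simplified, self-contained analog of that last-found fast path;
matchesKey and the map interface here are stand-ins:
```
#include <atomic>

template <class Entry, class Map>
class StableAddressMap {
  Map map;                                  // the slower, full hash map
  std::atomic<Entry *> lastFound{nullptr};  // most recent hit

public:
  template <class Key>
  Entry *find(const Key &key) {
    // Entries have stable addresses, so a stale pointer is still safe
    // to examine; check it before paying for a full lookup.
    if (Entry *last = lastFound.load(std::memory_order_acquire))
      if (last->matchesKey(key))
        return last;
    Entry *entry = map.find(key);
    if (entry)
      lastFound.store(entry, std::memory_order_release);
    return entry;
  }
};
```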
rdar://problem/70220660
Add a new entry point for getting generic metadata which adds the
canonical metadata records attached to the nominal type descriptor to
the metadata cache.
Change the implementation of the primary entry-point
swift_getGenericMetadata to stop looking through canonical
prespecialized records.
Change the implementation of swift_getCanonicalSpecializedMetadata to
use the caching token attached to the nominal type descriptor to add
canonical prespecialized metadata records to the metadata cache only
once rather than using the cache variables to limit the number of times
the attempt was made.
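A simplified analog of the once-only registration, with std::once_flag
standing in for the runtime's caching token:
```
#include <mutex>

struct TypeDescriptor {
  std::once_flag cachingToken; // stand-in for the descriptor's token
};

static void registerCanonicalRecords(TypeDescriptor &) {
  // Hypothetical: add the descriptor's canonical prespecialized metadata
  // records to the global metadata cache.
}

void ensureRecordsRegistered(TypeDescriptor &desc) {
  // Unlike the old cache-variable checks, racing accessors cannot run
  // the registration more than once.
  std::call_once(desc.cachingToken, [&] { registerCanonicalRecords(desc); });
}
```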
When swift_compareTypeContextDescriptors was added, it did not auth the
TypeContextDescriptor arguments that were passed to it. Fix that here.
There are no uses of this function yet, so there are no signing
operations that need to be added at call sites.
This gives us faster lookups and a small advantage in memory usage. Most of these maps need stable addresses for their entries, so we add a level of indirection to ConcurrentReadableHashMap for these cases to accommodate that. This costs some extra memory, but it's still a net win.
A new StableAddressConcurrentReadableHashMap type handles this indirection and adds a convenience getOrInsert to take advantage of it.
ConcurrentReadableHashMap is tweaked to avoid any global constructors or destructors when using it as a global variable.
ForeignWitnessTables does not need stable addresses and it now uses ConcurrentReadableHashMap directly.
rdar://problem/70056398
Previously, the metadata accessor for which canonical prespecializations
had been formed included checks against the passed-in arguments to
determine whether the access matched a prespecialized record or not.
Now that the prespecialized records are attached to the nominal type
descriptor for the type, eliminate this hard-coded generated code and
instead let swift_getGenericMetadata do the work of looking through the
prespecializations.
Previously, noncanonical records were only allowed if one of the
arguments was from the module where the record was to be emitted.
Here, that restriction is lifted. Now getSpecializedGenericMetadata
will, on first run, register all canonical metadata with the global
cache before attempting to bless the passed-in noncanonical record.
That allows noncanonical records to exist for the same arguments as a
canonical record (since, when a canonical record does exist, it is the
one that will be returned).
The new function swift_getCanonicalSpecializedMetadata takes a metadata
request, a prespecialized non-canonical metadata, and a cache as its
arguments. The idea of the function is either to bless the provided
prespecialized metadata as canonical if there is not currently a
canonical metadata record for the type it describes or else to return
the actual canonical metadata.
When called, the metadata cache checks for a preexisting entry for this
metadata. If none is found, the passed-in prespecialized metadata is
added to the cache. Otherwise, the metadata record found in the cache
is returned.
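A self-contained analog of that check-or-insert step; the key type and
map are stand-ins for the real metadata cache, which is keyed on the
generic arguments:
```
#include <unordered_map>

struct Metadata {};
using TypeKey = const void *;

static std::unordered_map<TypeKey, const Metadata *> metadataCache;

const Metadata *blessOrReturnCanonical(TypeKey key,
                                       const Metadata *prespecialized) {
  // If no canonical record exists yet, the passed-in prespecialized
  // record becomes canonical; otherwise the preexisting one wins.
  auto result = metadataCache.try_emplace(key, prespecialized);
  return result.first->second;
}
```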
rdar://problem/56995359
Two protocol conformance descriptors are passed to
swift_compareProtocolConformanceDescriptors from generic metadata
accessors when there is a canonical prespecialization and one of the
generic arguments has a protocol requirement.
Previously, the descriptors were incorrectly being passed without
ptrauth processing: one comes from the witness table among the
arguments passed in to the accessor, and one is known statically.
Here, the descriptor in the witness table is authed using the
ProtocolConformanceDescriptor schema. Then, both descriptors are signed
using the ProtocolConformanceDescriptorsAsArguments schema. Finally, in
the runtime function, the descriptors are authed.
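On arm64e, the auth-then-re-sign step can be expressed with the
<ptrauth.h> builtins; the key and discriminator values below are
illustrative assumptions, not the actual schemas:
```
#if __has_feature(ptrauth_calls)
#include <ptrauth.h>

const void *prepareDescriptorArgument(const void *descriptorInWitnessTable) {
  // Authenticate under the schema the descriptor was stored with in the
  // witness table, then re-sign under the schema the runtime function
  // expects for its arguments.
  return ptrauth_auth_and_resign(
      descriptorInWitnessTable,
      ptrauth_key_process_independent_data, /*stored discriminator*/ 0x1234,
      ptrauth_key_process_independent_data, /*argument discriminator*/ 0x5678);
}
#endif
```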
The new function swift_compareProtocolConformanceDescriptors calls
through to the preexisting code in MetadataCacheKey which has been
extracted out from MetadataCacheKey::compareWitnessTables into a new
public static function
MetadataCacheKey::compareProtocolConformanceDescriptors.
The new function's availability is "future" for now.
The new function `swift_compareTypeContextDescriptors` is equivalent to
a call through to swift::equalContexts. The implementation is the same
as that of swift::equalContexts with the following removals:
- Handling of context descriptors whose kind falls outside
ContextDescriptorKind::Type_First...ContextDescriptorKind::Type_Last.
Because the arguments are both TypeContextDescriptors, the kinds are
known to fall within that range.
- Casting to TypeContextDescriptor. The arguments are already of that
type.
For now, the new function has "future" availability.
Previously, when NDEBUG was not defined, the allocations made for value
metadata records were filled with 0xAA bytes. Here, that behavior is
both expanded to all metadata records and enabled in release builds when
the environment variable SWIFT_DEBUG_ENABLE_MALLOC_SCRIBBLE is set.
During witness table instantiation, the witness table is constructed
from several sources: the pattern, the resilient witnesses, the private
data, and default implementations. The private data area is the only
one that was being zeroed out; for the rest, we rely on always filling
in the data from the conformance descriptor and the provided info.
However, witness table instantiation uses the presence of a NULL
pointer for a particular witness in the resulting table to indicate
that no witness fulfilled that requirement, so that it can fill in the
default witness. Without zeroing that part of the table beforehand, we
aren't guaranteed to have a NULL pointer for witness entries that the
client (protocol conformance) did not know about at the time it was
compiled.
Zero out the entire witness table so default implementations can be
filled in appropriately.
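A simplified sketch of the resulting instantiation order; the layout and
helper parameters are assumptions:
```
#include <cstddef>
#include <cstring>

void **instantiateWitnessTable(size_t numEntries, void *const *pattern,
                               size_t patternCount,
                               void *const *defaultImpls) {
  void **table = new void *[numEntries];
  memset(table, 0, numEntries * sizeof(void *));         // zero everything first
  memcpy(table, pattern, patternCount * sizeof(void *)); // then copy sources in
  // Any slot still NULL was not fulfilled by the conformance; fill in
  // the default implementation.
  for (size_t i = 0; i < numEntries; ++i)
    if (!table[i])
      table[i] = defaultImpls[i];
  return table;
}
```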
Fixes rdar://problem/64295849.