We have a few constructor functions that aren't wrapped in SWIFT_ALLOWED_RUNTIME_GLOBAL_CTOR_BEGIN/SWIFT_ALLOWED_RUNTIME_GLOBAL_CTOR_END and which have started to produce warnings in a new clang version. Explicitly allow these constructors by adding those markers.
rdar://147703947
* [CS] Decline to handle InlineArray in shrink
Previously we would try the contextual type `(<int>, <element>)`,
which is wrong. Given we want to eliminate shrink, let's just bail.
* [Sema] Sink `ValueMatchVisitor` into `applyUnboundGenericArguments`
Make sure it's called for sugared code paths too. Also, let's just always
run it, since it should be a pretty cheap check.
* [Sema] Diagnose passing integer to non-integer type parameter
This was previously missed, though it would have been diagnosed later
as a requirement failure.
* [Parse] Split up `canParseType`
While here, address the FIXME in `canParseTypeSimpleOrComposition`
and only check to see if we can parse a type-simple, including
`each`, `some`, and `any` for better recovery.
* Introduce type sugar for InlineArray
Parse e.g. `[3 x Int]` as type sugar for InlineArray. Gated behind
an experimental feature flag for now.
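For illustration, a minimal sketch of the sugar, assuming the experimental flag is enabled and that `InlineArray` supports array-literal initialization:
```
// `[3 x Int]` is sugar for `InlineArray<3, Int>` (illustrative usage only;
// requires the experimental feature flag mentioned above).
let sugared: [3 x Int] = [1, 2, 3]
let desugared: InlineArray<3, Int> = [1, 2, 3]
```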
Replace the pair of global actor type/conformance we are passing around with
a general "conformance execution context" that could grow new functionality
over time. Add three external symbols to the runtime:
* swift_conformsToProtocolWithExecutionContext: a conforms-to-protocol check
that also captures the execution context that should be checked before
using the conformance for anything. The only execution context right now
is for an isolated conformance.
* swift_isInConformanceExecutionContext: checks whether the function is
being executed in the given execution context, i.e., running on the
executor for the given global actor.
* swift_ConformanceExecutionContextSize: the size of the conformance
execution context. Client code outside of the Swift runtime can allocate
a pointer-aligned region of memory of this size to use with the runtime
functions above.
In the prior implementation of runtime resolution of isolated conformances,
the runtime had to look in both the protocol conformance descriptor and
in all conditional conformance requirements (recursively) to find any
isolated conformances. If it found one, it had to demangle the global
actor type to metadata. Since swift_conformsToProtocol is a hot path through
the runtime, we can't afford this non-constant-time work in the common
case.
Instead, cache the resolved global actor and witness table as part of the
conformance cache, so that we have access to this information every time
we look up a witness table for a conformance. Propagate this up through
various callers (e.g., generic requirement checking) to the point where
we either stash it in the cache or check it at runtime. This gets us down
to a very quick check (basically, NULL-or-not) for nonisolated conformances,
and just one check for isolated conformances.
Following the approach taken with the concurrency-specific type
descriptors, register a hook function for the "is current global actor"
check used for isolated conformances.
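As a point of reference, a minimal Swift-level sketch of the kind of isolated conformance whose use these runtime checks guard; the `@MainActor` conformance spelling and the type names are assumed here for illustration:
```
protocol Observer {
    func update()
}

@MainActor
final class ViewState: @MainActor Observer {   // conformance isolated to the main actor
    func update() { /* touches main-actor state */ }
}
// Before using ViewState's conformance to Observer through generic or
// existential code, the runtime must verify that execution is on the main
// actor's executor; that is the "conformance execution context" above.
```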
The way that we include COMPATIBILITY_OVERRIDE_INCLUDE_PATH freaks out the
syntax highlighting of editors like emacs. It causes the whole file to be
highlighted like it is part of the include string.
To work around this, this patch creates a separate file called
CompatibilityOverrideIncludePath.h that just includes
COMPATIBILITY_OVERRIDE_INCLUDE_PATH. So its syntax highlighting is borked, but
at least in the actual files that contain real code, the syntax highlighting is
restored.
Some requirement machine work:
* Rename requirement to Value
* Rename more things to Value
* Fix integer checking for requirement
* Some docs and parser changes
* Minor fixes
The descriptor map is keyed by a simplified mangling that canonicalizes the differences that we accept in _contextDescriptorMatchesMangling, such as the ability to specify any kind of type with an OtherNominalType node.
This simplified mangling is not necessarily unique, but we use _contextDescriptorMatchesMangling for the final equality checking when looking up entries in the map, so occasional collisions are acceptable and get resolved when probing the table.
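To make that keying scheme concrete, here is a toy sketch; `DescriptorMap` and `matchesExactly` are purely illustrative names, not the runtime's actual types. Entries are bucketed by a lossy simplified key, and an exact matcher resolves collisions on lookup:
```
struct DescriptorMap<Descriptor> {
    // Several descriptors may share one simplified mangling.
    private var buckets: [String: [Descriptor]] = [:]

    mutating func insert(_ descriptor: Descriptor, simplifiedKey: String) {
        buckets[simplifiedKey, default: []].append(descriptor)
    }

    // `matchesExactly` plays the role of _contextDescriptorMatchesMangling:
    // it performs the final, precise equality check while probing a bucket.
    func lookup(simplifiedKey: String, fullMangling: String,
                matchesExactly: (Descriptor, String) -> Bool) -> Descriptor? {
        buckets[simplifiedKey]?.first { matchesExactly($0, fullMangling) }
    }
}
```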
The table is meant to be comprehensive, so it includes all descriptors that can be looked up by name, and a negative result means the descriptor does not exist in the shared cache. We add a flag to the options that can mark it as non-definitive in case we ever need to degrade this, and fall back to a full search after a negative result.
The map encompasses the entire shared cache but we need to reject lookups for types in images that aren't loaded. The map includes an image index which allows us to cheaply query whether a given descriptor is in a loaded image or not, so we can ignore ones which are not.
TypeMetadataPrivateState now has a separate sections array for sections within the shared cache. _searchTypeMetadataRecords consults the map first. If no result is found in the map and the map is marked as comprehensive, then only the sections outside the shared cache need to be scanned.
Replace the SWIFT_DEBUG_ENABLE_LIB_PRESPECIALIZED environment variable with one specifically for metadata and one for descriptor lookup so they can be controlled independently. Also add SWIFT_DEBUG_VALIDATE_LIB_PRESPECIALIZED_DESCRIPTOR_LOOKUP which consults the map and does the full scan, and ensures they produce the same result, for debugging purposes.
Enhance the environment variable code to track whether a variable was set at all. This allows SWIFT_DEBUG_ENABLE_LIB_PRESPECIALIZED to override the default in either direction.
Remove the disablePrespecializedMetadata global and instead modify the mapConfiguration to disable prespecialized metadata when an image is loaded that overrides one in the shared cache.
rdar://113059233
If a type or opaque type descriptor appears inside of a protocol extension of the form:
```
extension P where Self == Nominal { ... }
```
then the runtime representation of the extension context was interpreted by the runtime
demangler as a nominal type extension, even though the parameterization is on the
`<Self>` generic signature of the protocol extension and not the generic signature of
the nominal type. This would cause spurious rejection of mangled names referencing the
extension context, because we mistakenly treated the type arguments as mismatching the
nominal type's signature instead of matching them against the extension context.
Add some code to `_findExtendedTypeContextDescriptor` to detect when an extension is
a protocol extension with a `Self == SameType` constraint, and pass the extension along
rather than treating it as a nominal type extension. Fixes rdar://130168101.
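For illustration (this example is not part of the change itself), a descriptor with this shape can come from an opaque result type declared inside such an extension:
```
protocol P {}
struct Nominal: P {}

extension P where Self == Nominal {
    // The context of this opaque type descriptor is the protocol extension,
    // parameterized over the extension's <Self> signature, not an extension
    // of Nominal itself.
    var opaqueValue: some Equatable { 42 }
}
```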
Track the key argument index separately from the generic parameter index when performing the invertible protocol checking in _checkGenericRequirements. This keeps the indexing correct when a non-key argument is followed by a key argument.
rdar://128774651
The pointer location can be computed from the symbolic reference location and offset, which we already provide, but it's not clear that you should add them together, nor is it clear why this failure would occur. Add the location of the NULL pointer itself to the error message, and also mention that it's probably caused by a missing weak symbol.
This PR implements the first set of changes required to support autodiff for coroutines. It is mostly targeted at `_modify` accessors in the standard library (and beyond), but the overall implementation is quite generic.
Some implementation specifics and known limitations:
- Only `@yield_once` coroutines are naturally supported
- The VJP is a coroutine itself: it yields the results *and* returns a pullback closure as a normal return. This allows us to capture values produced in the resume part of the coroutine (this is required for defers and other cleanups / commits)
- The pullback is a coroutine as well; we assume a coroutine cannot abort, and therefore execute the original coroutine in reverse: from the return, via the yield, and back to the entry
- There seems to be no semantically sane way to support `_read` coroutines (as we would need to "accept" adjoints via yields), so only coroutines with inout yields are supported (`_modify` accessors). Pullbacks of such coroutines take an adjoint buffer as an input argument, yield this buffer (to accumulate adjoint values in the caller), and finally return the adjoints indirectly.
- Coroutines (as opposed to normal functions) are not first-class values: there is no AST type for them, and one cannot, e.g., store them in tuples. So wherever an AST type is required, we have to work around its absence.
- As there is no AST type for coroutines, there is no way to register a custom derivative for a coroutine. So far, only compiler-produced derivatives are supported
- There is a lot in common with normal function applies, but there are subtle yet important differences. I tried to organize the code to enable reuse; still, that was not always possible, so some code duplication remains
- The order in which pullback closures are produced in the VJP is a bit different: for normal applies, the VJP produces both the value and the pullback closure via a single nested VJP apply. This is no longer the case with coroutine VJPs: yielded values are produced at the `begin_apply` site and the pullback closure only becomes available from `end_apply`, so we need to track the order in which pullbacks are produced (and arrange consumption of the values accordingly, effectively delaying them)
- Along the way, some complementary changes were required in, e.g., the mangler / demangler
This patch covers the generation of derivatives up to the SIL level; however, that is not enough on its own, as codegen of `partial_apply` of a coroutine is completely broken. The fix for that will be submitted separately, as it is not directly autodiff-related.
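For context, a minimal sketch (type and property names are illustrative) of the kind of yield-once coroutine this work targets, using the underscored `_modify` accessor:
```
struct Wrapper {
    var storage: Double = 0

    var value: Double {
        get { storage }
        _modify {
            // A @yield_once coroutine with a single inout yield; per the notes
            // above, its VJP yields the value and returns the pullback at
            // end_apply, and its pullback is itself a coroutine.
            yield &storage
        }
    }
}
```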
---------
Co-authored-by: Andrew Savonichev <andrew.savonichev@gmail.com>
Co-authored-by: Richard Wei <rxwei@apple.com>
We need to check for overridden images on every image load; otherwise,
XCTest (among others) may `dlopen()` an image that pulls in something
that is overridden, at which point the prespecialized metadata won't
match the image we loaded.
rdar://125727356
Add more runtime support for checking suppressible protocol requirements:
* Parameter packs now check all of the arguments appropriately
* Most structural types now implement checking (these are hard to test).
Introduce metadata and runtime support for describing conformances to
"suppressible" protocols such as `Copyable`. The metadata changes occur
in several different places:
* Context descriptors gain a flag bit to indicate when the type itself has
suppressed one or more suppressible protocols (e.g., it is `~Copyable`).
When the bit is set, the context will have a trailing
`SuppressibleProtocolSet`, a 16-bit bitfield that records one bit for
each suppressed protocol. Types with no suppressed conformances will
leave the bit unset (so the metadata is unchanged), and older runtimes
don't look at the bit, so they will ignore the extra data.
* Generic context descriptors gain a flag bit to indicate when the type
has conditional conformances to suppressible protocols. When set,
there will be trailing metadata containing another
`SuppressibleProtocolSet` (a subset of the one in the main context
descriptor) indicating which suppressible protocols have conditional
conformances, followed by the actual lists of generic requirements
for each of the conditional conformances. Again, if there are no
conditional conformances to suppressible protocols, the bit won't be
set. Old runtimes ignore the bit and any trailing metadata.
* Generic requirements get a new "kind", which provides an ignored
protocol set (another `SuppressibleProtocolSet`) stating which
suppressible protocols should *not* be checked for the subject type
of the generic requirement. For example, this encodes a requirement
like `T: ~Copyable`. These generic requirements can occur anywhere
that there is a generic requirement list, e.g., conditional
conformances and extended existentials. Older runtimes handle unknown
generic requirement kinds by stating that the requirement isn't
satisfied.
Extend the runtime to perform checking of the suppressible
conformances on generic arguments as part of checking generic
requirements. This checking follows the defaults of the language, which
is that every generic argument must conform to each of the suppressible
protocols unless there is an explicit generic requirement that states
which suppressible protocols to ignore. Thus, a generic parameter list
`<T, U where T: ~Escapable>` will check that `T` is `Copyable` but
not that it is `Escapable`, and check that `U` is both `Copyable` and
`Escapable`. To implement this, we collect the ignored protocol sets
from these suppressed requirements while processing the generic
requirements, then check all of the generic arguments against any
conformances not suppressed.
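A small Swift-level illustration of that defaulting rule, using `~Copyable` (the machinery is the same for any suppressible protocol); the names here are made up:
```
struct Resource: ~Copyable {}

// `T` carries an explicit `~Copyable` suppression, so the runtime does not
// require `T: Copyable`; `U` has no suppression, so it is still checked for
// Copyable (and Escapable) as usual.
func process<T: ~Copyable, U>(_ t: borrowing T, _ u: U) {}

// process(Resource(), 42)   // OK: Resource need not be Copyable
```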
Answering the actual question "does `X` conform to `Copyable`?" (for
any suppressible protocol) looks at the context descriptor metadata to
answer the question, e.g.,
1. If there is no "suppressed protocol set", then the type conforms.
This covers types that haven't suppressed any conformances, including
all types that predate noncopyable generics.
2. If the suppressed protocol set doesn't contain `Copyable`, then the
type conforms.
3. If the type is generic and has a conditional conformance to
`Copyable`, evaluate the generic requirements for that conditional
conformance to answer whether it conforms.
The procedure above handles the bits of a `SuppressibleProtocolSet`
opaquely, with no mapping down to specific protocols. Therefore, the
same implementation will work even with future suppressible protocols,
including back deployment.
The end result of this is that we can dynamically evaluate conditional
conformances to protocols that depend on conformances to suppressible
protocols.
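Case 3 is the familiar conditional-conformance pattern; a minimal example, not taken from this change:
```
struct Box<T: ~Copyable>: ~Copyable {
    var value: T
}

// Box<T> is Copyable exactly when T is, so answering "does Box<Int> conform
// to Copyable?" means evaluating this conditional conformance's generic
// requirements at runtime, as in step 3 above.
extension Box: Copyable where T: Copyable {}
```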
Implements rdar://123466649.
Extend TypeDecoder with support for inverse requirements, passing them
along to the type builder. Then implement support for inverse
requirements within the AST demangler, which addresses the round-trip
demangling failures we've been seeing.
The runtime and remote inspection facilities still need metadata to
deal with inverse requirements.
Fixes rdar://124564447.
LLVM is presumably moving towards `std::string_view`:
`StringRef::startswith` is deprecated on tip, and `SmallString::startswith`
was just renamed there (maybe with a short deprecation period in between,
but if so, we've missed it).
The `SmallString::startswith` references were moved to
`.str().starts_with()`, rather than adding `starts_with` on
`stable/20230725`, as we only had a few of them. Open to switching that
over if anyone feels strongly, though.
Not quite NFC because apparently the representation bleeds into what's
accepted in some situations where we're supposed to be warning about
conflicts and then making an arbitrary choice. But what we're doing
is nonsense, so we definitely need to break behavior here.
This is setting up for isolated(any) and isolated(caller). I tried
to keep that out of the patch as much as possible, though.
This library uses GenericMetadataBuilder with a ReaderWriter that can read data and resolve pointers from MachO files, and emit a JSON representation of a dylib containing the built metadata.
We use LLVM's binary file readers to parse the MachO files and resolve fixups so we can follow pointers. This code is somewhat MachO specific, but could be generalized to other formats that LLVM supports.
rdar://116592577
Extend function type metadata with an entry for the thrown error type,
so that thrown error types are represented at runtime as well. Note
that this required the introduction of "extended" function type
flags into function type metadata, because a new flag would have
consumed the last remaining bit. Do so, and define one extended flag
bit as representing typed throws.
Add `swift_getExtendedFunctionTypeMetadata` to the runtime to build
function types that have the extended flags and a thrown error type.
Teach IR generation to call this function to form the metadata, when
appropriate.
Introduce all of the runtime mangling/demangling support needed for
thrown error types.
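For illustration, a sketch of what this makes visible at runtime (the exact printed form is an assumption, not something specified by this change):
```
enum ParseError: Error { case malformed }

func parse(_ input: String) throws(ParseError) -> Int {
    guard let value = Int(input) else { throw ParseError.malformed }
    return value
}

// The thrown error type is now part of the function type's runtime metadata,
// so reflecting the type can surface something like
// "(String) throws(ParseError) -> Int" instead of losing the typed error.
let fn: (String) throws(ParseError) -> Int = parse
print(type(of: fn))
```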
Using symbolic references instead of a text-based mangling avoids the
expensive type descriptor scan when Objective-C protocols are requested.
rdar://111536582
The more awkward setup with pushGenericParams()/popGenericParams()
is required so that when demangling requirements and field types
we get the correct pack-ness for each type parameter. The pack-ness
is encoded as part of the generic signature, and not as part of
the type parameter mangling itself.
Fixes the ASTDemangler issue from rdar://problem/115459973.
The old behavior was only correct when building substituted types,
i.e., if createTupleType() was never called with a pack expansion type.
This was the case in the runtime's MetadataLookup which applies
substitutions to an interface type to ultimately construct metadata
for a fully-concrete type, but not in the ASTDemangler, where we
actually build interface types.
Since TypeDecoder doesn't have any way to query the kind of type
it just built, let's just instead make this decision inside the
implementation of the type builder concept.
Fixes https://github.com/apple/swift/issues/67322.
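For reference, an illustrative case (not from the issue itself) where the ASTDemangler ends up building such an interface type:
```
// An interface type where createTupleType() is called with a pack expansion:
// the result type `(repeat each T)` must remain a one-element tuple
// containing a pack expansion rather than being unwrapped prematurely.
func wrap<each T>(_ value: repeat each T) -> (repeat each T) {
    return (repeat each value)
}
```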