Some requirement machine work
Rename requirement to Value
Rename more things to Value
Fix integer checking for requirement
some docs and parser changes
Minor fixes
For the purposes of availability calculations, direct use of
`llvm::VersionTuple` and `VersionRange` is discouraged, since these fundamental
version representations are divorced from their context. For example, comparing
an iOS platform version to a visionOS platform version is invalid since the
versioning systems of the two platforms differ. Although visionOS inherits
availability from iOS, an iOS version must be converted to a visionOS version
prior to comparison. In the future, `AvailabilityContext` can be enriched to
carry the information necessary to verify that its algebraic operations are
being performed on compatible values.
NFC.
An `AvailabilityContext` represents an abstract version range in which
something is available. In the future, these version ranges may not necessarily
always correspond to operating system version ranges.
NFC.
This conforms mutable C++ container types, such as `std::vector`, to `MutableCollection` via a new overlay protocol `CxxMutableRandomAccessCollection`.
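A hedged usage sketch: the module name `CxxShims`, the `IntVector` alias for `std::vector<int>`, and `makeNumbers()` are assumptions for illustration, not part of this change.

```swift
import CxxShims  // hypothetical C++ module declaring `using IntVector = std::vector<int>;` and `IntVector makeNumbers();`

var numbers: IntVector = makeNumbers()
// With the new conformance, MutableCollection algorithms mutate the C++
// container in place instead of requiring a copy into a Swift Array.
numbers.sort()
if !numbers.isEmpty {
    numbers[numbers.startIndex] += 1
}
```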
rdar://134531554
Although I don't plan to bring over new assertions wholesale
into the current qualification branch, it's entirely possible
that various minor changes in main will use the new assertions;
having this basic support in the release branch will simplify that.
(This is why I'm adding the includes as a separate pass from
rewriting the individual assertions.)
rdar://127535274
The layout string needs to be assigned before completion, to make it available in recursive metadata initialization. Setting it in the completion function causes a race between other metadata initializers using it and the completion function running.
Resilience support will require changes to the Objective-C runtime to expand support for metadata initialization functions. Add a separate experimental feature flag to help with staging that support in, and modify diagnostics to not suggest increasing the minimum deployment target for now.
We really don’t need ‘em; we can just adjust the direct field offsets.
The runtime entry point currently uses a weird little hack that we will refactor away shortly.
When an @objc @implementation class requires the use of `ClassMetadataStrategy::Update` because some of its stored properties do not have fixed sizes, we adjust the direct field offsets during class realization by emitting a custom metadata update function which calls a new entry point in the Swift runtime. That entry point adjusts field offsets like `swift_updateClassMetadata2()`, but it only assumes that the class has Objective-C metadata, not Swift metadata.
This commit introduces an alternative mechanism which does the same thing without using any Swift-only metadata. It’s a rough implementation with important limitations:
• We’re currently using the field offset vector, which means that field offsets are being emitted into @objc @implementation classes; these will be removed.
• The new Swift runtime entry point duplicates a lot of `swift_updateClassMetadata2()`’s implementation; it will be refactored into something much smaller and more compact.
• Availability bounds for this feature have not yet been implemented.
Future commits in this PR will correct these issues.
Rather than having individual methods return without doing anything, arrange for ClassMetadataVisitor to never call them in the first place. This way, ClassMetadataScanner also knows which fields are omitted and can compute offsets correctly.
The old logic is still present for field offsets to make this change NFC.
Emit metadata for runtime checks of conformances of associated types to
invertible protocols, e.g., `T.Assoc: Copyable`. This allows us to
correctly handle, e.g., dynamic casting involving conditional
conformances that have such constraints.
The model we use here is to emit an invertible-protocol constraint
that leaves only the specific bit clear in the invertible protocol
set.
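A sketch of the kind of code this metadata enables; the names `P`, `Wrapper`, `Q`, and `S` are illustrative only.

```swift
protocol P {
    associatedtype Assoc: ~Copyable
}

struct Wrapper<T: P> {
    var value: T
}

protocol Q {}

// Conditional conformance whose requirement constrains an associated type's
// Copyability; answering `is any Q` dynamically needs the new metadata.
extension Wrapper: Q where T.Assoc: Copyable {}

struct S: P {
    typealias Assoc = Int  // Int is Copyable
}

let boxed: Any = Wrapper(value: S())
print(boxed is any Q)  // true: the runtime evaluates `S.Assoc: Copyable`
```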
Invertible protocols are currently always mangled with `Ri`, followed by
a single letter for each invertible protocol (e.g., `c` and `e` for
`Copyable` and `Escapable`, respectively), followed by the generic
parameter index. However, this requires that we extend the mangling
for any future invertible protocols, which means they won't be
backward compatible.
Replace this mangling with one that mangles the bit # for the
invertible protocol, e.g., `Ri_` (followed by the generic parameter
index) is bit 0, which is `Copyable`. `Ri0_` (then generic parameter
index) is bit 1, which is `Escapable`. This allows us to round-trip
through mangled names for any invertible protocol, without any
knowledge of what the invertible protocol is, providing forward
compatibility. The same forward compatibility is present in all
metadata and the runtime, allowing us to add more invertible
protocols in the future without updating any of them, and also
allowing backward compatibility.
Only the demangling to human-readable strings maps the bit numbers
back to their names, and there's a fallback printing with just the bit
number when appropriate.
Also generalize the mangling a bit to allow for mangling of invertible
requirements on associated types, e.g., `S.Sequence: ~Copyable`. This
is currently unsupported by the compiler or runtime, but that may
change, and it was easy enough to finish off the mangling work for it.
Introduce metadata and runtime support for describing conformances to
"suppressible" protocols such as `Copyable`. The metadata changes occur
in several different places:
* Context descriptors gain a flag bit to indicate when the type itself has
suppressed one or more suppressible protocols (e.g., it is `~Copyable`).
When the bit is set, the context will have a trailing
`SuppressibleProtocolSet`, a 16-bit bitfield that records one bit for
each suppressed protocol. Types with no suppressed conformances will
leave the bit unset (so the metadata is unchanged), and older runtimes
don't look at the bit, so they will ignore the extra data.
* Generic context descriptors gain a flag bit to indicate when the type
has conditional conformances to suppressible protocols. When set,
there will be trailing metadata containing another
`SuppressibleProtocolSet` (a subset of the one in the main context
descriptor) indicating which suppressible protocols have conditional
conformances, followed by the actual lists of generic requirements
for each of the conditional conformances. Again, if there are no
conditional conformances to suppressible protocols, the bit won't be
set. Old runtimes ignore the bit and any trailing metadata.
* Generic requirements get a new "kind", which provides an ignored
protocol set (another `SuppressibleProtocolSet`) stating which
suppressible protocols should *not* be checked for the subject type
of the generic requirement. For example, this encodes a requirement
like `T: ~Copyable`. These generic requirements can occur anywhere
that there is a generic requirement list, e.g., conditional
conformances and extended existentials. Older runtimes handle unknown
generic requirement kinds by stating that the requirement isn't
satisfied.
Extend the runtime to perform checking of the suppressible
conformances on generic arguments as part of checking generic
requirements. This checking follows the defaults of the language, which
is that every generic argument must conform to each of the suppressible
protocols unless there is an explicit generic requirement that states
which suppressible protocols to ignore. Thus, a generic parameter list
`<T, U where T: ~Escapable>` will check that `T` is `Copyable` but
not that it is `Escapable`, and check that `U` is both `Copyable` and
`Escapable`. To implement this, we collect the ignored protocol sets
from these suppressed requirements while processing the generic
requirements, then check all of the generic arguments against any
conformances not suppressed.
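A hedged illustration of that default, using `~Copyable` (the same rule applies to any suppressible protocol):

```swift
// Checking Pair's generic requirements verifies that U is Copyable (and
// Escapable), but skips the Copyable check for T, because `T: ~Copyable`
// places that bit in the ignored protocol set.
struct Pair<T: ~Copyable, U>: ~Copyable {
    var first: T
    var second: U
}
```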
Answering the actual question "does `X` conform to `Copyable`?" (for
any suppressible protocol) consults the context descriptor metadata,
e.g.,
1. If there is no "suppressed protocol set", then the type conforms.
This covers types that haven't suppressed any conformances, including
all types that predate noncopyable generics.
2. If the suppressed protocol set doesn't contain `Copyable`, then the
type conforms.
3. If the type is generic and has a conditional conformance to
`Copyable`, evaluate the generic requirements for that conditional
conformance to answer whether it conforms.
The procedure above handles the bits of a `SuppressibleProtocolSet`
opaquely, with no mapping down to specific protocols. Therefore, the
same implementation will work even with future suppressible protocols,
including back deployment.
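A minimal sketch of that procedure, treating `SuppressibleProtocolSet` as an opaque 16-bit mask; `TypeInfo` and `conditionalRequirementsHold` are hypothetical stand-ins, not runtime API.

```swift
struct SuppressibleProtocolSet: OptionSet, Hashable {
    let rawValue: UInt16
    static let copyable  = SuppressibleProtocolSet(rawValue: 1 << 0)
    static let escapable = SuppressibleProtocolSet(rawValue: 1 << 1)
}

struct TypeInfo {
    // nil when the context descriptor has no trailing SuppressibleProtocolSet.
    var suppressed: SuppressibleProtocolSet?
    // Stand-ins for evaluating a conditional conformance's generic requirements.
    var conditionalRequirementsHold: [SuppressibleProtocolSet: () -> Bool] = [:]
}

func conforms(_ type: TypeInfo, to bit: SuppressibleProtocolSet) -> Bool {
    // 1. No suppressed-protocol set: the type conforms (covers all older metadata).
    guard let suppressed = type.suppressed else { return true }
    // 2. The bit isn't in the suppressed set: the type conforms.
    guard suppressed.contains(bit) else { return true }
    // 3. Otherwise a conditional conformance, if present, decides.
    return type.conditionalRequirementsHold[bit]?() ?? false
}
```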
The end result of this is that we can dynamically evaluate conditional
conformances to protocols that depend on conformances to suppressible
protocols.
Implements rdar://123466649.
Decls with a package access level are currently given public SIL
linkage. This limits the ability to have more fine-grained control
and optimize around resilience and serialization.
This PR introduces a separate SIL linkage and FormalLinkage for
package decls, pipes them down to IRGen, and updates linkage checks
at call sites to include package linkage.
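For illustration only (hypothetical module, package, and function names), a decl like the following, built with `-package-name`, is the kind that now lowers to the dedicated package linkage instead of public:

```swift
// Built as, e.g.: swiftc -module-name Utils -package-name AppPackage Utils.swift
// Visible to other modules in AppPackage; now gets package SIL linkage.
package func sharedHelper() -> Int {
    return 42
}
```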
Resolves rdar://121409846
A package decl's effective access level for optimization is currently
`public`. This requires an extra check for the actual access level that
was declared when determining serialization, since the behavior should
be different.
This PR sets its effective access level to `package` as originally defined,
updates call sites to make appropriate access level comparisons, and removes
`package`-specific checks.
When an actual instance of a distributed actor is on the local node, it
has the capabilities of an `Actor`. This isn't expressible directly in the type
system, because not all `DistributedActor`s are `Actor`s, nor is the
opposite true.
Instead, provide an API `DistributedActor.asLocalActor` that can only
be executed when the distributed actor is known to be local (because
this API is not itself `distributed`), and produces an existential
`any Actor` referencing that actor. The resulting existential value
carries with it a special witness table that adapts any type
conforming to the DistributedActor protocol into a type that conforms
to the Actor protocol. It is "as if" one had written something like this:
extension DistributedActor: Actor { }
which, of course, is not permitted in the language. Nonetheless, we
lovingly craft such a witness table:
* The "type" being extended is represented as an extension context,
rather than as a type context. This hasn't been done before, but all Swift
runtimes support it uniformly.
* A special witness is provided in the Distributed library to implement
the `Actor.unownedExecutor` operation. This witness back-deploys to the
Swift version where distributed actors were introduced (5.7). On Swift
5.9 runtimes (and newer), it will use
`DistributedActor.unownedExecutor` to support custom executors.
* The conformance of `Self: DistributedActor` is represented as a
conditional requirement, which gets satisfied by the witness table
that makes the type a `DistributedActor`. This makes the special
witness work.
* The witness table is *not* visible via any of the normal runtime
lookup tables, because doing so would allow any
`DistributedActor`-conforming type to conform to `Actor`, which would
break the safety model.
* The witness table is emitted on demand in any client that needs it.
In back-deployment configurations, there may be several witness tables
for the same concrete distributed actor conforming to `Actor`.
However, this duplication can only be observed under fairly extreme
circumstances (where one is opening the returned existential and
instantiating generic types with the distributed actor type as an
`Actor`, then performing dynamic type equivalence checks), and will
not be present with a new Swift runtime.
All of these tricks together mean that we need no runtime changes, and
`asLocalActor` back-deploys as far back as distributed actors do, allowing its
use in `#isolation` and the async for...in loop.
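A hedged usage sketch, assuming `asLocalActor` is surfaced as a computed property as described, and using `LocalTestingDistributedActorSystem` from the Distributed module:

```swift
import Distributed

distributed actor Greeter {
    typealias ActorSystem = LocalTestingDistributedActorSystem

    distributed func greet() {
        // Inside the actor's own methods, `self` is known to be local, so the
        // non-distributed asLocalActor requirement is available.
        let localSelf: any Actor = self.asLocalActor
        // `localSelf` can now flow into APIs that take `any Actor`,
        // such as isolation parameters.
        _ = localSelf
    }
}
```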