Decls with a package access level are currently given public SIL
linkage. This limits the ability to have more fine-grained control
and optimize around resilience and serialization.
This PR introduces a separate SIL linkage and FormalLinkage for
package decls, pipes them down to IRGen, and updates linkage checks
at call sites to include package linkage.
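For reference, a minimal sketch of the kind of declaration this affects (the contents and package name are illustrative; the package is named via `-package-name` when building):

```swift
// Built as part of a package, e.g.: swiftc -package-name MyPkg Utils.swift
// `helper` is visible to other modules in the same package, and its
// symbol now gets the dedicated package linkage rather than public linkage.
package func helper() -> Int {
  42
}
```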
Resolves rdar://121409846
Not quite NFC, because the representation apparently bleeds into what's
accepted in some situations where we're supposed to warn about
conflicts and then make an arbitrary choice. But the existing behavior
is nonsense, so we definitely need to break it here.
This is setting up for isolated(any) and isolated(caller). I tried
to keep that out of the patch as much as possible, though.
A swiftmodule can only be correctly ingested by a compiler
that has a matching state of using or not-using
NoncopyableGenerics.
The reason for this is fundamental: the absence of a Copyable
conformance in the swiftmodule indicates that a type is
noncopyable. Thus, if a compiler with NoncopyableGenerics
reads a swiftmodule that was not compiled with that feature,
it will think every type in that module is noncopyable.
Similarly, if a compiler with NoncopyableGenerics produces a
swiftmodule, there will be Copyable requirements on each
generic parameter that the compiler without the feature will
become confused about.
The solution here is to trigger a module mismatch, so that
the compiler re-generates the swiftmodule file using the
swiftinterface, which has been kept compatible with the compiler
regardless of whether the feature is enabled.
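A rough source-level sketch of what the serialized information encodes (type and function names are illustrative):

```swift
// With NoncopyableGenerics, a type is noncopyable precisely because no
// Copyable conformance is recorded for it:
struct FileDescriptor: ~Copyable {
  var fd: Int32
}

// ...and every ordinary generic parameter carries an implicit
// `T: Copyable` requirement, which a compiler without the feature
// would not know how to interpret:
func duplicate<T>(_ value: T) -> (T, T) {
  (value, value)
}
```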
Package decls are currently given a widened effective access level for
optimization: `public`. This requires an extra check, when determining
serialization, for the access level that was actually declared, since the
behavior should be different.
This PR sets the effective access level to `package` as originally defined,
updates call sites to make the appropriate access-level comparisons, and
removes the `package`-specific checks.
When an actual instance of a distributed actor is on the local node, it
has the capabilities of `Actor`. This isn't expressible directly in the type
system, because not all `DistributedActor`s are `Actor`s, nor is the
opposite true.
Instead, provide an API `DistributedActor.asLocalActor` that can only
be executed when the distributed actor is known to be local (because
this API is not itself `distributed`), and produces an existential
`any Actor` referencing that actor. The resulting existential value
carries with it a special witness table that adapts any type
conforming to the DistributedActor protocol into a type that conforms
to the Actor protocol. It is "as if" one had written something like this:
extension DistributedActor: Actor { }
which, of course, is not permitted in the language. Nonetheless, we
lovingly craft such a witness table:
* The "type" being extended is represented as an extension context,
rather than as a type context. This hasn't been done before, but all Swift
runtimes support it uniformly.
* A special witness is provided in the Distributed library to implement
the `Actor.unownedExecutor` operation. This witness back-deploys to the
Swift version where distributed actors were introduced (5.7). On Swift
5.9 runtimes (and newer), it will use
`DistributedActor.unownedExecutor` to support custom executors.
* The conformance of `Self: DistributedActor` is represented as a
conditional requirement, which gets satisfied by the witness table
that makes the type a `DistributedActor`. This makes the special
witness work.
* The witness table is *not* visible via any of the normal runtime
lookup tables, because doing so would allow any
`DistributedActor`-conforming type to conform to `Actor`, which would
break the safety model.
* The witness table is emitted on demand in any client that needs it.
In back-deployment configurations, there may be several witness tables
for the same concrete distributed actor conforming to `Actor`.
However, this duplication can only be observed under fairly extreme
circumstances (where one is opening the returned existential and
instantiating generic types with the distributed actor type as an
`Actor`, then performing dynamic type equivalence checks), and will
not be present with a new Swift runtime.
All of these tricks together mean that we need no runtime changes, and
`asLocalActor` back-deploys as far as distributed actors, allowing its
use in `#isolation` and the async for...in loop.
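A minimal usage sketch (the actor system choice and method name are illustrative; it assumes `asLocalActor` is an instance member usable from code isolated to the distributed actor):

```swift
import Distributed

distributed actor Greeter {
  typealias ActorSystem = LocalTestingDistributedActorSystem

  // A non-distributed method can only run when the actor is local,
  // which is exactly when `asLocalActor` is meaningful.
  func greetLocally() {
    let local: any Actor = self.asLocalActor
    // `local` behaves as an ordinary `any Actor`, backed by the special
    // DistributedActor-to-Actor witness table described above.
    _ = local
  }
}
```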
`@GlobalActor(unsafe)` and `@preconcurrency @GlobalActor` mean the same
thing, but there were two different representations in the actor isolation
checker. Standardize on the preconcurrency representation.
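For example (using `@MainActor` to stand in for any global actor; the declarations are illustrative), these two spellings express the same thing and are now represented identically:

```swift
@MainActor(unsafe)
func legacySpelling() { }           // older `(unsafe)` form

@preconcurrency @MainActor
func preferredSpelling() { }        // equivalent @preconcurrency form
```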
It's not clear that it's worth keeping this as a
base class for SerializedAbstractClosure and
SerializedTopLevelCodeDecl, since most clients are
interested in the concrete kinds rather than merely
whether the context is serialized.
Most clients only want to set one of the two
parameters, so split it into `setPattern` and
`setInitContext` (the latter of which now
handles calling `setBinding`).
Switch from promising a DeclContext to a
PatternBindingInitializer.
This has a couple of benefits:
- It eliminates a few places where we were force
`cast`'ing to PatternBindingInitializer.
- It improves the clarity of what's being stored:
it's not whatever the parent context of the
initializer happens to be, but specifically the
PatternBindingInitializer context, if one exists.
Introduce a new expression macro that produces a value of type
`(any AnyActor)?` that describes the current actor isolation. This
isolation will be `nil` in non-isolated code, and will refer to either the
actor instance or the shared global actor in other cases.
This is currently behind the experimental feature flag
OptionalIsolatedParameters.
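A small sketch, assuming the macro is the `#isolation` spelling mentioned in the `asLocalActor` notes above:

```swift
actor Counter {
  func current() {
    // In actor-isolated code, the macro evaluates to this actor instance.
    let isolation = #isolation
    print(isolation as Any)
  }
}

// In non-isolated code, the macro evaluates to nil.
func detachedWork() {
  let isolation = #isolation
  assert(isolation == nil)
}
```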