* [Concurrency] Fix task executor handling of default actor isolation
The task executor API did not take the default actor's locking into
account when running code on it: we just took the job and ran it
without checking with the serial executor at all, which resulted in
potential concurrent execution inside the actor -- violating actor
isolation.
Here we change the TaskExecutor enqueue API to accept the "target"
serial executor, which in practice will be either the generic executor
or a specific default actor, and coordinate with it when we perform a
runSynchronously.
The SE proposal needs to be amended to showcase this new API; however,
without this change we are introducing races, so we must do this before
the API is stable.
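A usage-level sketch of the guarantee this restores (illustrative only; `withTaskExecutorPreference` is the public entry point for setting a task executor preference):

```swift
actor Counter {
  var value = 0
  func increment() { value += 1 }
}

func hammer(on executor: some TaskExecutor) async {
  let counter = Counter()
  await withTaskExecutorPreference(executor) {
    await withTaskGroup(of: Void.self) { group in
      for _ in 0..<100 {
        // The jobs may run on the task executor's threads, but enqueueing
        // must still coordinate with the actor's serial executor, so these
        // increments can never run concurrently inside the actor.
        group.addTask { await counter.increment() }
      }
    }
  }
}
```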
* Remove _swift_task_enqueueOnTaskExecutor as we don't use it anymore
* no need for the new protocol requirement
* remove the enqueue(_ job: UnownedJob, isolatedTo unownedSerialExecutor: UnownedSerialExecutor)
Thankfully we don't need it after all
* Don't add swift_defaultActor_enqueue_withTaskExecutor; centralize getting the task executor in enqueue()
* move around extern definitions
Out of an abundance of caution, we:
1. Left in parsing support for `transferring`, but internally made it rely on
the internals of `sending`.
2. Added a warning to tell people that `transferring` was going to
be removed very soon.
Now that we have given people some time, remove support for parsing
`transferring`.
rdar://130253724
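For reference, the spelling this removal leaves behind (a minimal sketch; `NonSendableBox` is a made-up type):

```swift
final class NonSendableBox { var value = 0 }

// Previously parsed (now rejected):
//   func stash(_ box: transferring NonSendableBox) { ... }

// Supported spelling going forward:
func stash(_ box: sending NonSendableBox) {
  // 'box' is sent into this function; the caller gives up access to it.
}
```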
When using a future adapter, the resume context may not be valid after the task starts running. Only peer through the adapter when we're starting to run.
rdar://126298035
We still only parse `transferring`... but this sets us up for adding the new
`sending` syntax by first validating that this internal change does not break
the current `transferring` implementation, since we want both to keep working
for now.
rdar://128216574
This fixes a crash at runtime when destroying a Swift array of values of a C++ foreign reference type.
Swift optimizes the amount of metadata emitted for `_ContiguousArrayStorage<Element>` by reusing `_ContiguousArrayStorage<AnyObject>` whenever possible (see `getContiguousArrayStorageType`). However, C++ foreign reference types are not `AnyObject`s, since they have custom retain/release operations.
This change disables the `_ContiguousArrayStorage` metadata optimization for C++ reference types, which makes sure that `swift_arrayDestroy` will call the correct release operation for elements of `[MyCxxRefType]`.
rdar://127154770
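A sketch of the scenario this fixes (`CxxRefType` stands in for a C++ type imported as a foreign reference type with custom retain/release functions):

```swift
// CxxRefType is not an AnyObject, so destroying the array's storage must
// call the type's custom release function rather than swift_release.
func consumeAll(_ objects: [CxxRefType]) {
  var remaining = objects
  remaining.removeAll()  // previously crashed inside swift_arrayDestroy
}
```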
Invertible protocols are currently always mangled with `Ri`, followed by
a single letter for each invertible protocol (e.g., `c` and `e` for
`Copyable` and `Escapable`, respectively), followed by the generic
parameter index. However, this requires that we extend the mangling
for any future invertible protocols, which means they won't be
backward compatible.
Replace this mangling with one that mangles the bit number of the
invertible protocol, e.g., `Ri_` (followed by the generic parameter
index) is bit 0, which is `Copyable`, and `Ri0_` (followed by the
generic parameter index) is bit 1, which is `Escapable`. This allows us to round-trip
through mangled names for any invertible protocol, without any
knowledge of what the invertible protocol is, providing forward
compatibility. The same forward compatibility is present in all
metadata and the runtime, allowing us to add more invertible
protocols in the future without updating any of them, and also
allowing backward compatibility.
Only the demangling to human-readable strings maps the bit numbers
back to their names, and there's a fallback printing with just the bit
number when appropriate.
Also generalize the mangling a bit to allow for mangling of invertible
requirements on associated types, e.g., `S.Sequence: ~Copyable`. This
is currently unsupported by the compiler or runtime, but that may
change, and it was easy enough to finish off the mangling work for it.
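Restating the two schemes side by side (illustrative; the rest of the mangled name is omitted):

```swift
// Old scheme: one letter per invertible protocol, then the generic
// parameter index -- not forward compatible.
//   Ric<index>   // ~Copyable
//   Rie<index>   // ~Escapable
//
// New scheme: the bit number of the invertible protocol, then the index.
//   Ri_<index>   // bit 0 => ~Copyable
//   Ri0_<index>  // bit 1 => ~Escapable
//
// A source-level requirement these manglings encode:
struct UniqueHandle<T: ~Copyable>: ~Copyable { }
```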
Form a set of suppressed protocols for a function type based on
the extended flags (where future compilers can start recording
suppressible protocols) and the existing "noescape" bit. Compare
that against the "ignored" suppressible protocol requirements, as we
do for other types.
This involves a behavior change if any client has managed to evade the
static checking for noescape function types, but it's unlikely that
existing code has done so (and it was unsafe anyway).
Introduce metadata and runtime support for describing conformances to
"suppressible" protocols such as `Copyable`. The metadata changes occur
in several different places:
* Context descriptors gain a flag bit to indicate when the type itself has
suppressed one or more suppressible protocols (e.g., it is `~Copyable`).
When the bit is set, the context will have a trailing
`SuppressibleProtocolSet`, a 16-bit bitfield that records one bit for
each suppressed protocol. Types with no suppressed conformances will
leave the bit unset (so the metadata is unchanged), and older runtimes
don't look at the bit, so they will ignore the extra data.
* Generic context descriptors gain a flag bit to indicate when the type
has conditional conformances to suppressible protocols. When set,
there will be trailing metadata containing another
`SuppressibleProtocolSet` (a subset of the one in the main context
descriptor) indicating which suppressible protocols have conditional
conformances, followed by the actual lists of generic requirements
for each of the conditional conformances. Again, if there are no
conditional conformances to suppressible protocols, the bit won't be
set. Old runtimes ignore the bit and any trailing metadata.
* Generic requirements get a new "kind", which provides an ignored
protocol set (another `SuppressibleProtocolSet`) stating which
suppressible protocols should *not* be checked for the subject type
of the generic requirement. For example, this encodes a requirement
like `T: ~Copyable`. These generic requirements can occur anywhere
that there is a generic requirement list, e.g., conditional
conformances and extended existentials. Older runtimes handle unknown
generic requirement kinds by stating that the requirement isn't
satisfied.
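A hedged Swift-level sketch of declarations whose metadata exercises all three of the pieces above:

```swift
// The struct suppresses Copyable on itself, so its context descriptor sets
// the new flag bit and carries a trailing SuppressibleProtocolSet.
struct Pair<T: ~Copyable, U>: ~Copyable {
  var first: T
  var second: U
}

// A conditional conformance to a suppressible protocol: the generic context
// descriptor records Copyable in its conditional set, followed by the
// `T: Copyable` requirement list.
extension Pair: Copyable where T: Copyable { }

// The `T: ~Copyable` constraint itself is encoded with the new generic
// requirement kind carrying an ignored protocol set for `T`.
```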
Extend the runtime to perform checking of the suppressible
conformances on generic arguments as part of checking generic
requirements. This checking follows the defaults of the language, which
is that every generic argument must conform to each of the suppressible
protocols unless there is an explicit generic requirement that states
which suppressible protocols to ignore. Thus, a generic parameter list
`<T, U where T: ~Escapable>` will check that `T` is `Copyable` but
not that it is `Escapable`, and check that `U` is both `Copyable` and
`Escapable`. To implement this, we collect the ignored protocol sets
from these suppressed requirements while processing the generic
requirements, then check all of the generic arguments against any
conformances not suppressed.
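The same defaults in source form (illustrative; shown with `~Copyable`, though the mechanism is identical for `~Escapable`):

```swift
func combine<T: ~Copyable, U>(_ t: borrowing T, _ u: U) { }
// When checking combine's generic requirements, the runtime verifies:
//   T: Escapable                (Copyable is in T's ignored set)
//   U: Copyable and Escapable   (nothing suppressed for U)
```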
Answering the actual question "does `X` conform to `Copyable`?" (for
any suppressible protocol) looks at the context descriptor metadata to
answer the question, e.g.,
1. If there is no "suppressed protocol set", then the type conforms.
This covers types that haven't suppressed any conformances, including
all types that predate noncopyable generics.
2. If the suppressed protocol set doesn't contain `Copyable`, then the
type conforms.
3. If the type is generic and has a conditional conformance to
`Copyable`, evaluate the generic requirements for that conditional
conformance to answer whether it conforms.
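A self-contained pseudocode rendering of that procedure (the types here are a toy model, not the real runtime structures):

```swift
struct SuppressibleProtocolSet: OptionSet {
  let rawValue: UInt16
  static let copyable  = SuppressibleProtocolSet(rawValue: 1 << 0)
  static let escapable = SuppressibleProtocolSet(rawValue: 1 << 1)
}

struct ConditionalConformance {
  // Evaluates the conditional conformance's generic requirements.
  var requirementsHold: ([Any.Type]) -> Bool
}

struct ContextDescriptorModel {
  var suppressedProtocols: SuppressibleProtocolSet?   // nil: nothing suppressed
  var conditionalConformances: [UInt16: ConditionalConformance]
}

func conforms(_ descriptor: ContextDescriptorModel,
              to bit: SuppressibleProtocolSet,
              genericArguments: [Any.Type]) -> Bool {
  // (1) No suppressed protocol set at all: the type conforms.
  guard let suppressed = descriptor.suppressedProtocols else { return true }
  // (2) This protocol was not suppressed: the type conforms.
  if !suppressed.contains(bit) { return true }
  // (3) Otherwise, it conforms only if a conditional conformance's
  //     requirements are satisfied by the generic arguments.
  if let conditional = descriptor.conditionalConformances[bit.rawValue] {
    return conditional.requirementsHold(genericArguments)
  }
  return false
}
```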
The procedure above handles the bits of a `SuppressibleProtocolSet`
opaquely, with no mapping down to specific protocols. Therefore, the
same implementation will work even with future suppressible protocols,
including back deployment.
The end result of this is that we can dynamically evaluate conditional
conformances to protocols that depend on conformances to suppressible
protocols.
Implements rdar://123466649.
LLVM is presumably moving towards `std::string_view`:
`StringRef::startswith` is deprecated on tip, and `SmallString::startswith`
was just renamed there (maybe with some small deprecation in between, but
if so, we've missed it).
The `SmallString::startswith` references were moved to
`.str().starts_with()`, rather than adding `starts_with` on
`stable/20230725`, as we only had a few of them. Open to switching that
over if anyone feels strongly, though.
This has been the behavior of the runtime since the initial release.
Initially, it was thought that task executors would provide similar
functionality, so they naturally took over the enumerator. After that
changed, we forgot to change it back. Fortunately, we haven't released
any versions of Swift with the task executors feature yet, so it's not
too late to fix this.
The `ABI` headers had accidentally grown an `#include` into compiler headers,
allowing the enum constant values of the `ValueOwnership` enum to leak into
the runtime ABI. Sever this inappropriate relationship by declaring a separate
`ParameterOwnership` enum with ABI-stable values in the ABI headers, and
explicitly converting between the AST and ABI representation where needed.
Fixes rdar://122435628.
This includes runtime support for instantiating transferring params/results in
function types. This is especially important since that is how we instantiate
function types like `typealias Fn = (transferring X) -> ()`.
rdar://123118061
This library uses GenericMetadataBuilder with a ReaderWriter that can read data and resolve pointers from MachO files, and emit a JSON representation of a dylib containing the built metadata.
We use LLVM's binary file readers to parse the MachO files and resolve fixups so we can follow pointers. This code is somewhat MachO specific, but could be generalized to other formats that LLVM supports.
rdar://116592577
When an actual instance of a distributed actor is on the local node, it
has the capabilities of an `Actor`. This isn't expressible directly in the type
system, because not all `DistributedActor`s are `Actor`s, nor is the
opposite true.
Instead, provide an API `DistributedActor.asLocalActor` that can only
be executed when the distributed actor is known to be local (because
this API is not itself `distributed`), and produces an existential
`any Actor` referencing that actor. The resulting existential value
carries with it a special witness table that adapts any type
conforming to the DistributedActor protocol into a type that conforms
to the Actor protocol. It is "as if" one had written something like this:
extension DistributedActor: Actor { }
which, of course, is not permitted in the language. Nonetheless, we
lovingly craft such a witness table:
* The "type" being extended is represented as an extension context,
rather than as a type context. This hasn't been done before, but all Swift
runtimes support it uniformly.
* A special witness is provided in the Distributed library to implement
the `Actor.unownedExecutor` operation. This witness back-deploys to the
Swift version where distributed actors were introduced (5.7). On Swift
5.9 runtimes (and newer), it will use
`DistributedActor.unownedExecutor` to support custom executors.
* The conformance of `Self: DistributedActor` is represented as a
conditional requirement, which gets satisfied by the witness table
that makes the type a `DistributedActor`. This makes the special
witness work.
* The witness table is *not* visible via any of the normal runtime
lookup tables, because doing so would allow any
`DistributedActor`-conforming type to conform to `Actor`, which would
break the safety model.
* The witness table is emitted on demand in any client that needs it.
In back-deployment configurations, there may be several witness tables
for the same concrete distributed actor conforming to `Actor`.
However, this duplication can only be observed under fairly extreme
circumstances (where one is opening the returned existential and
instantiating generic types with the distributed actor type as an
`Actor`, then performing dynamic type equivalence checks), and will
not be present with a new Swift runtime.
All of these tricks together mean that we need no runtime changes, and
`asLocalActor` back-deploys as far back as distributed actors themselves,
allowing its use in `#isolation` and the async for...in loop.
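A usage sketch, assuming `asLocalActor` is surfaced as a computed property as described above (`LocalTestingDistributedActorSystem` comes from the Distributed module):

```swift
import Distributed

distributed actor Worker {
  typealias ActorSystem = LocalTestingDistributedActorSystem

  // Non-distributed, so it can only run where the instance is known to be
  // local; the returned existential is backed by the special witness table
  // described above.
  func isolationForLoops() -> any Actor {
    self.asLocalActor
  }
}
```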
`AbsoluteFunctionPointer` could not be passed to a function that
takes a `TargetCompactFunctionPointer`, because the template parameters
`Nullable` and `Offset` were not parameterized by `AbsoluteFunctionPointer`.
Commit 29c350e813 introduced the first such function,
`InProcessReaderWriter::resolveFunctionPointer`, which revealed
this issue.
Create a version of the metadata specialization code which is abstracted so that it can work in different contexts, such as building specialized metadata from dylibs on disk rather than from inside a running process.
The GenericMetadataBuilder class is templatized on a ReaderWriter. The ReaderWriter abstracts out everything that's different between in-process and external construction of this data. Instead of reading and writing pointers directly, the builder calls the ReaderWriter to resolve and write pointers. The ReaderWriter also handles symbol lookups and looking up other Swift types by name.
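A conceptual sketch of that abstraction boundary (the real implementation is C++ templates; this Swift rendering is illustrative only):

```swift
// What the builder needs from a ReaderWriter, conceptually: it never touches
// memory directly, it always goes through these hooks.
protocol MetadataReaderWriter {
  // In-process: a real pointer. External: a file offset plus fixup info.
  associatedtype StoredPointer

  // Follow a pointer-sized field to whatever it references.
  func resolve(_ pointer: StoredPointer) throws -> StoredPointer

  // Write a reference into the metadata being built.
  func write(_ target: StoredPointer, into field: inout StoredPointer) throws

  // Symbol lookup and looking up other Swift types by name.
  func symbol(named name: String) throws -> StoredPointer
  func type(named mangledName: String) throws -> StoredPointer
}
```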
This is accompanied by a simple implementation of the ReaderWriter which works in-process. The abstracted calls to resolve and write pointers are implemented using standard pointer dereferencing.
A new SWIFT_DEBUG_VALIDATE_EXTERNAL_GENERIC_METADATA_BUILDER environment variable uses the in-process ReaderWriter to validate the builder by running it in parallel with the existing metadata builder code in the runtime. When enabled, the GenericMetadataBuilder is used to build a second copy of metadata built by the runtime, and the two are compared to ensure that they match. When this environment variable is not set, the new builder code is inactive.
The builder is incomplete, and this initial version only works on structs. Any unsupported type produces an error, and skips the validation.
rdar://116592420