Reformatting everything now that we have `llvm` namespaces. I've
separated this from the main commit to help manage merge conflicts and
to make the mega-patch a bit easier to read.
This is phase-1 of switching from llvm::Optional to std::optional in the
next rebranch. llvm::Optional was removed from upstream LLVM, so we need
to migrate off it rather soon. On Darwin, std::optional and llvm::Optional
have the same layout, so we don't need to be as concerned about ABI
beyond the name mangling. `llvm::Optional` is only returned from one
function:
```
getStandardTypeSubst(StringRef TypeName,
                     bool allowConcurrencyManglings);
```
It's the return value, so it should not impact the mangling of the
function, and the layout is the same as `std::optional`, so it should be
mostly okay. This function doesn't appear to have users, and the ABI was
already broken 2 years ago for concurrency and no one seemed to notice,
so this should be "okay".
I'm doing the migration incrementally so that folks working on main can
cherry-pick back to the release/5.9 branch. Once 5.9 is done and locked
away, we can go through and finish the replacement. Since `None`
and `Optional` show up in contexts where they are not `llvm::None` and
`llvm::Optional`, I'm preparing the work now by going through,
removing the namespace unwrapping, and making the `llvm` namespace
explicit. This should make it fairly mechanical to go through and
replace llvm::Optional with std::optional, and llvm::None with
std::nullopt. It's also a change that can be brought onto the
release/5.9 branch with minimal impact. This should be an NFC change.
When `-unavailable-decl-optimization=complete` is specified, exclude
unavailable enum cases from the runtime layout of enums with payloads. Without
this, the type metadata for unavailable types may be referenced by enum cases
with unavailable payloads, causing linker failures.
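As a hedged illustration (the type and case names below are made up, not
taken from the patch), this is the shape of code where an unavailable case's
payload used to drag unavailable type metadata into the enum's layout:
```swift
@available(*, unavailable)
struct LegacyPayload {}

enum Event {
    case ping(Int)
    // Without the optimization, this case makes the enum's runtime layout
    // reference LegacyPayload's type metadata; with
    // -unavailable-decl-optimization=complete the case is excluded.
    @available(*, unavailable)
    case legacy(LegacyPayload)
}
```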
Resolves rdar://107483852
This removes the "optimization" where a function type, metatype, or
tuple type was split up into structural components, because it seems
that in general we need this structural type metadata again.
Similarly, this no longer tries to split up dependent concrete
conformances and instead passes the witness table in the context.
This potentially makes the context larger, but it avoids calls to
metadata access functions and swift_getWitnessTable() every time the
closure is invoked.
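A hedged sketch of the kind of closure this affects (the function and names
are illustrative, not from the patch): `[T]: Equatable` is a dependent
concrete conformance, so the returned closure's context can now carry that
witness table directly instead of recomputing it on every call:
```swift
// The closure captures `values: [T]`; comparing arrays needs the
// Array<T>: Equatable conformance, which depends on T's own conformance.
func makeChecker<T: Equatable>(_ values: [T]) -> ([T]) -> Bool {
    return { $0 == values }
}
```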
* [Executors][Distributed] custom executors for distributed actors
* harden ordering guarantees of synthesised fields
* the issue was that a non-default actor must implement the is-remote check differently
* NonDefaultDistributedActor to complete support and remote flag handling
* invoke nonDefaultDistributedActorInitialize when necessary in SILGen
* refactor inline assertion into method
* cleanup
* [Executors][Distributed] Update module version for NonDefaultDistributedActor
* Minor docs cleanup
* we solved those FIXMEs
* add mangling test for non-def-dist-actor
We don't have any language or runtime support for noncopyable types as generic
or dynamic types yet, and existing reflection code almost certainly assumes it
can copy the values it's working with, and will trap or corrupt state if it does
so with noncopyable types. But a class can have noncopyable fields while the
type itself is copyable, and existing code assumes that it can use `Mirror` or
other reflection mechanisms to safely traverse the contents of an arbitrary
class.
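For instance (a minimal sketch; the types below are illustrative, not from
the patch), a copyable class can contain a noncopyable stored property and
still be handed to `Mirror`:
```swift
struct FileHandle: ~Copyable {
    let fd: Int
}

final class Connection {            // the class itself remains copyable
    let name = "primary"
    let handle = FileHandle(fd: 3)  // noncopyable stored property
}

// The sort of traversal that should keep working on existing runtimes.
for child in Mirror(reflecting: Connection()).children {
    print(child.label ?? "_")
}
```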
Allow this sort of code to continue working, while still preparing for forward
compatibility with future runtimes that do support noncopyable generics, by
emitting the type references for fields using a function that probes the
address of a new symbol in the Swift runtime. The symbol will either be missing
or defined with an absolute address of zero in current or previous runtime
versions, but can be changed to a non-null address in the future.
Previously it was testing for opened existentials specifically.
We should really teach outlining to handle local archetypes properly.
We'd have to build a generic signature for the lowered type, and that
probably means also adding requirements that are relevant to value
operations, but it would mean outlining would benefit all types, and
it would let us avoid bundling in unnecessary information from the
enclosing generic environment.
A minor side-effect of this is that we no longer bind names to
opened element type values. The names were things like τ_1_0,
which is not very useful, especially in LLVM IR, where τ is
printed with two UTF-8 escapes.
A lot of the fixes here are adjustments to compensate in the
fulfillment and metadata-path subsystems for the recent pack
substitutions representation change. I think these adjustments
really make the case for why the change was the right one to make:
the code was clearly not considering the possibility of packs
in these positions, and the need to handle packs makes everything
work out much more cleanly.
There's still some work that needs to happen around type packs;
in particular, we're not caching them or fulfilling them as a
whole, even though we now have the setup to do that properly.
This commit begins to generate correct metadata for @_objcImplementation extensions:
• Swift-specific metadata and symbols are not generated.
• For main-class @_objcImpls, we visit the class to emit metadata, but visit the extension’s members.
• Includes both IR tests and executable tests, including coverage of same-module @objc subclasses, different-module @objc subclasses, and clang subclasses.
The test cases do not yet cover stored properties.
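For context, a hedged sketch of the pattern being compiled (the class name
and header are illustrative, not from the patch): the class is declared in an
Objective-C header and its members are implemented in a Swift extension:
```swift
// Assumed Objective-C header (CoffeeMachine.h), visible to Swift via a
// bridging or umbrella header:
//
//   @interface CoffeeMachine : NSObject
//   - (void)brew;
//   @end

import Foundation

// The extension provides the implementation of the Objective-C-declared
// class; per this commit, no Swift-specific metadata or symbols are
// generated for it.
@_objcImplementation extension CoffeeMachine {
    @objc func brew() {
        print("brewing")
    }
}
```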
In preparation for moving to LLVM's opaque pointer representation,
replace getPointerElementType and CreateCall/CreateLoad/Store uses that
depend on the address operand's pointer element type.
This means an `Address` carries the element type, and we use
`FunctionPointer` in more places or read the function type off the
`llvm::Function`.
We had two notions of canonical type: one is the structural property
of not containing sugared types; the other is not containing reducible
type parameters with respect to a generic signature.
Rename the second one to 'reduced type'.
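A minimal Swift-side sketch of the distinction (names are illustrative):
```swift
// Structurally canonical: the sugared spelling `[Int?]` and the spelling
// `Array<Optional<Int>>` denote the same canonical type.
typealias Sugared = [Int?]
typealias Desugared = Array<Optional<Int>>

protocol P { associatedtype Element }

// Reduced: within this signature, `T.Element` contains no sugar, but it is
// a reducible type parameter; its reduced type with respect to the
// signature is `Int`.
func f<T: P>(_ value: T) where T.Element == Int {}
```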
I wrote out this whole analysis of why different existential types
might have the same logical content, and then I turned around and
immediately uniqued existential shapes purely by logical content
rather than the (generalized) formal type. Oh well. At least it's
not too late to make ABI changes like this.
We now store a reference to a mangling of the generalized formal
type directly in the shape. This type alone is sufficient to unique
the shape:
- By the nature of the generalization algorithm, every type parameter
in the generalization signature should be mentioned in the
generalized formal type in a deterministic order.
- By the nature of the generalization algorithm, every other
requirement in the generalization signature should be implied
by the positions in which generalization type parameters appear
(e.g. because the formal type is C<T> & P, where C constrains
its type parameter for well-formedness).
- The requirement signature and type expression are extracted from
the existential type.
As a result, we no longer rely on computing a unique hash at
compile time.
Storing this separately from the requirement signature potentially
allows runtimes with general shape support to work with future
extensions to existential types even if they cannot demangle the
generalized formal type.
Storing the generalized formal type also allows us to easily and
reliably extract the formal type of the existential. Otherwise,
it's quite a heroic endeavor to match requirements back up with
primary associated types. Doing so would also only allow us to
extract *some* matching formal type, not necessarily the *right*
formal type. So there's some good synergy here.
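As a rough illustration (the protocol and types are illustrative, not from
the patch), parameterized existentials are the surface feature these shapes
describe; because shapes are uniqued by the generalized formal type, both
values below can share a single shape, instantiated with different
generalization arguments:
```swift
protocol Store<Item> {
    associatedtype Item
    func load() -> Item
}

struct IntStore: Store { func load() -> Int { 42 } }
struct NameStore: Store { func load() -> String { "swift" } }

// Both existential types have the same generalized formal type
// (roughly `any Store<τ_0_0>`), differing only in the argument bound
// to the generalization signature.
let a: any Store<Int> = IntStore()
let b: any Store<String> = NameStore()
```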
On some Harvard architectures, like WebAssembly, that allow sliding the
code and data address spaces independently, it's impossible to make a
direct relative reference to code from data because the relative offset
between them is not representable.
Use absolute function references instead of relative ones on such targets.
This adds a new reflection record type carrying spare bit information for multi-payload enums.
The compiler includes this for any type that might need it in order to accurately reflect the contents of the enum. The RemoteMirror library will use this if present to determine how to project the contents of the enum. If not present (for example, in older binaries), the RemoteMirror library falls back on an internal calculation of the spare bitmask.
A few notes:
* The internal calculation is not perfect. In particular, it does not support MPEs that contain other enums (e.g., optionals); see the sketch below. It should accurately refuse to project any MPE that it does not correctly support.
* The new reflection field is designed to be expandable; this might someday avoid the need for a new section.
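A minimal sketch (illustrative type, not from the patch) of a multi-payload
enum that the fallback calculation refuses to project, because one of its
payloads is itself an enum:
```swift
enum Message {
    case text(String)
    case retryCount(Int?)   // this payload is Optional<Int>, i.e. another enum
    case heartbeat
}
```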
Resolves rdar://61158214
`ArchetypeType` has all of the API we need for these cases, so use
that instead to isolate us from `NestedArchetypeType`, which we would
like to eliminate soon-ish.
In general, we need to check the features of a type from newest to
oldest when determining the runtime version that supports demangling
that type.
One of the places where we ask whether a type's metadata should
be obtained via its mangled name was missing the newer, more
robust check against the minimum deployment target.