`shouldCoerceToContextualType` used `solution.getType(ASTNode)`,
which returns a type that still has type variables in it. To properly
check whether the result type needs a coercion, it has to be resolved
first, which is done via `solution.getResultType(ASTNode)`.
Resolves: rdar://88285682
Opened archetypes can be created in the constraint system, and the
existential type they wrap can contain type variables. This can happen
when the existential type is inferred through a typealias inside a
generic type, and a member reference whose base is the opened existential
gets bound before binding the generic arguments of the parent type.
However, simplifying opened archetypes to replace type variables is
not yet supported, which leads to type variables escaping the constraint
system. We can support cases where the underlying existential type doesn't
depend on the type variables by canonicalizing it when opening the
existential. Cases where the underlying type requires resolved generic
arguments are still unsupported for now.
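A hedged sketch of the supported shape described above, with illustrative names; the existential is spelled through a typealias inside a generic type, and the underlying existential does not depend on the type variables:

protocol P { func f() }
struct Impl: P { func f() {} }

struct Box<T> {
  // Existential spelled via a typealias inside a generic type.
  typealias Member = P
  var member: Member
  var value: T
}

// The member reference on the opened existential can get bound while the
// generic argument of Box (inferred from `42`) is still a type variable;
// since `P` itself doesn't depend on that type variable, canonicalizing the
// underlying existential when opening it is sufficient.
Box(member: Impl(), value: 42).member.f()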
Nested archetypes are represented by their base archetype kinds (primary,
opened, or opaque type) with an interface type that is a nested type,
as represented by a DependentMemberType. This provides a more uniform
representation of archetypes throughout the frontend.
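For example (a hedged illustration, not taken from the change itself), the archetype for `T.Element` below is now represented using its base's archetype kind (the primary archetype for `T`) together with the interface type `T.Element`, a DependentMemberType:

func firstElement<T: Collection>(_ xs: T) -> T.Element? {
  // `T.Element` is a nested type of `T`; its archetype is no longer a
  // distinct "nested archetype" node.
  return xs.first
}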
We were never setting these opaque type substitutions, but code
generation was silently failing. Now that we assert, move the code into
the proper common location so that we always set opaque type
substitutions on properties.
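A hedged example (illustrative types) of a property with an opaque result type, which is where these substitutions now always get set:

protocol Greeter {
  func greet() -> String
}

struct English: Greeter {
  func greet() -> String { "Hello" }
}

struct Host {
  // The opaque type declaration behind `some Greeter` needs its underlying
  // type substitution (here, `English`) recorded when this property is
  // type checked.
  var greeter: some Greeter { English() }
}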
Fixes rdar://86800325.
Open opaque types and record them within the "opened types" of the
constraint system, then use that information to compute the set of
substitutions needed for the opaque type declaration using the normal
mechanism of the constraint solver. Record these substitutions within
the underlying-to-opaque conversion.
Use the recorded substitutions in the underlying-to-opaque conversion
to set the underlying substitutions for the opaque type declaration
itself, rather than reconstructing the substitutions in an ad hoc manner
that does not account for structural opaque result types.
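A hedged example (illustrative names) of a structural opaque result type, the case the ad hoc reconstruction did not account for:

protocol Counter {
  mutating func next() -> Int
}

struct Incrementing: Counter {
  var value = 0
  mutating func next() -> Int { value += 1; return value }
}

// The opaque type appears structurally, inside an Optional, so its underlying
// substitutions have to come from what the solver recorded when opening the
// opaque type.
func makeCounter() -> (some Counter)? {
  return Incrementing()
}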
Instead of checking the frontend flag directly, let's use a dedicated
method on the constraint system to determine whether a closure
participated in inference or not.
Insert an implicit conversion from pack types to tuples with equivalent parallel structure. That means:
1) The tuple must have the same arity
2) The tuple may not have any argument labels
3) The tuple may not have any variadic or inout components
4) The tuple must have the same element types as the pack
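A minimal hedged sketch using the experimental `T...` spelling from the examples below; the exact syntax was still in flux, so treat this as an illustration of the conversion rather than final source spelling:

func asTuple<T...>(_ xs: T...) -> (T...) {
  // The value pack `xs` is implicitly converted to a tuple with the same
  // arity and element types, no labels, and no variadic or inout elements.
  return xs
}

// e.g. a pack with elements (String, Int) converts to the tuple (String, Int).
let pair: (String, Int) = asTuple("hello", 42)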
The heart of this patchset: An inference scheme for variadic generic functions and associated pack expansions.
A traditional variadic function looks like
func foo<T>(_ xs: T...) {}
Along with the corresponding function type
<T>(T [variadic]) -> Void
which the constraint system only has to resolve one type variable for. Hence it opens <T> as a type variable and uses each argument to the function to try to solve for <T>. This approach cannot work for variadic generics, as each argument would try to bind to the same <T> and the constraint system would lose coherency in the one case we need it: when the arguments all have different types.
Instead, notice when we encounter expansion types:
func print<T...>(_ xs: T...)
print("Macs say Hello in", 42, "different languages")
We open this type as
print : ($t0...) -> ($t0...)
Now for the brand new stuff: We need to create and bind a pack to $t0, which will trigger the expansion we need for CSApply to see a coherent view of the world. This means we need to examine the argument list and construct the pack type <$t1, $t2, $t3, ...> - one type variable per argument - and bind it to $t0. There's also the catch that each argument that references the opened type $t0 needs to have the same parallel structure, including its arity. The algorithm is thus:
For input type F<... ($t0...), ..., ($tn...) ...> and an apply site F(a0, ..., an), we walk the type `F` and record an entry in a mapping for each opened variadic generic parameter. Now, for each argument ai in (a0, ..., an), we create a fresh type variable corresponding to the argument ai, and record an additional entry in the parameter pack type elements corresponding to that argument.
Concretely, suppose we have
func print2<T..., U...>(first: T..., second: U...) {}
print2(first: "", 42, (), second: [42])
We open print2 as
print2 : ($t0..., $t1...) -> Void
And construct a mapping
$t0 => <$t2, $t3, $t4> // eventually <String, Int, Void>
$t1 => <$t5> // eventually [Int]
We then yield the entries of this map back to the solver, which constructs a bind constraint for each `=>` in the diagram above. The pack type is thus immediately substituted into the corresponding pack expansion type, which produces a fully-expanded pack type of the correct arity that the solver can actually use to make forward progress.
Applying the solution is as simple as pulling out the pack type and coercing arguments element-wise into a pack expression.
The new type, called ExistentialType, is not yet used in type resolution.
Later, existential types written with `any` will resolve to this type, and
bare protocol names will resolve to this type depending on context.
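A hedged illustration of the eventual surface syntax mentioned here; both declarations are intended to resolve to the new ExistentialType once that resolution lands:

protocol Shape {
  func area() -> Double
}

// Written with `any`.
func draw(_ shape: any Shape) {}

// A bare protocol name, resolving to the existential depending on context.
func render(_ shape: Shape) {}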
- Frontend: Implicitly import `_StringProcessing` when frontend flag `-enable-experimental-string-processing` is set.
- Type checker: Set a regex literal expression's type as `_StringProcessing.Regex<(Substring, DynamicCaptures)>`. `(Substring, DynamicCaptures)` is a temporary `Match` type that will help get us to an end-to-end working system. This will be replaced by actual type inference based on a regex's pattern in a follow-up patch (soon).
- SILGen: Lower a regex literal expression to a call to `_StringProcessing.Regex.init(_regexString:)`.
- String processing runtime: Add `Regex`, `DynamicCaptures` (matching actual APIs in apple/swift-experimental-string-processing), and `Regex(_regexString:)`.
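A hedged sketch of the temporary surface this adds (API names as listed above; the exact generic shape of the initializer call is an assumption for illustration):

import _StringProcessing

// A regex literal is type checked as Regex<(Substring, DynamicCaptures)> and
// lowered by SILGen to a call equivalent to:
let r = Regex<(Substring, DynamicCaptures)>(_regexString: "ab+c")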
Upcoming:
- Build `_MatchingEngine` and `_StringProcessing` modules with sources from apple/swift-experimental-string-processing.
- Replace `DynamicCaptures` with inferred capture types.
With `-enable-experimental-string-processing`,
start lexing `'` delimiters as regex literals (this
is just a placeholder delimiter for now). The
contents of the literal are passed to the libswift
library, which can return an error string to be
emitted, or null for success.
The libswift side isn't yet hooked up to the Swift
regex parser, so for now just emit a dummy
diagnostic for regexes starting with quantifiers.
If successful, build an AST node which will be
emitted as an implicit call to an
`init(_regexString:)` initializer of an in-scope
`Regex` decl (which will eventually be a known
stdlib decl).
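A hedged illustration of the placeholder syntax described above:

// Lexes as a regex literal and becomes an implicit call to the in-scope
// `Regex` type's `init(_regexString:)`.
let greeting = 'hello'

// For now the libswift side only performs a placeholder check; a regex
// starting with a quantifier gets the dummy diagnostic.
let invalid = '+oops'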
Now that CSApply just uses components, we can
better split up the key path constructors to either
accept a set of resolved components, or a parsed
root or path.
The logic here could form AST loops due to passing
in `anchor` for the key path's parsed path.
However, setting a parsed path here seems to be a
holdover from the CSDiag days, so set the path to
`nullptr` and rip out the rest of the synthesis
and SanitizeExpr logic for it.
rdar://85236369
This logic cannot live in `ActorIsolationChecker` because
all of the relevant information is only accessible through the
constraint system while applying solutions.
Use the newly added `isDistributedThunk` check to determine whether
the given call is to a remote distributed actor and, if so, mark
it as implicitly throwing.
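A hedged sketch of the user-visible effect, written with today's Distributed module spellings (the change itself predates some of these names):

import Distributed

distributed actor Greeter {
  typealias ActorSystem = LocalTestingDistributedActorSystem

  distributed func greet(name: String) -> String {
    "Hello, \(name)!"
  }
}

func hello(greeter: Greeter) async throws {
  // A potentially remote call goes through the distributed thunk, so it is
  // treated as implicitly throwing (and async) and requires `try await`.
  print(try await greeter.greet(name: "Swift"))
}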
Resolves: rdar://83610106
These restrictions are meant to keep placeholder types from escaping TypeCheckType. But there's really no harm in that happening as long as we diagnose it on the way out in the places where it's banned. (We also need to make sure we're only diagnosing things in primaries, but that's a minor issue.) The end result is that we lose information, because a lot of the AST that has placeholders in it gets filled with error types instead.
Lift the restriction on placeholders appearing in the interface type, teach the mangler to treat them as unresolved types, and teach serialization to treat them as error types.
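A hedged example of a type placeholder in source (illustrative declaration):

// `_` is a type placeholder. Positions where placeholders are banned are still
// diagnosed on the way out, but the placeholder itself is now preserved
// (mangled as an unresolved type, serialized as an error type) rather than
// being replaced with an error type up front.
let counts: [String: _] = ["a": 1, "b": 2]   // inferred as [String: Int]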
When referencing a function that contains parameters with the hidden
`@_unsafeSendable` or `@_unsafeMainActor` attributes, adjust the
function type to make the types of those parameters `@Sendable` or
`@MainActor`, respectively, based on the context and how the expression is used:
* `@Sendable` will be applied when we are in a context with strict
concurrency checking.
* `@MainActor` will be applied when we are in a context with strict
concurrency checking *or* the function is being directly applied so
that an argument is provided in the immediate expression.
The second part of the rule for `@MainActor` reflects the fact that
making the parameter `@MainActor` doesn't break existing code (because
there is a conversion to add a global actor to a function value), but
it does enable such code to synchronously use a `@MainActor`-qualified
API.
The main effect of this change is that, in a strict concurrency
context, the type of a reference to an unapplied function involving
`@_unsafeSendable` or `@_unsafeMainActor` will
make those parameters `@Sendable` or `@MainActor`, which ensures that
these constraints properly work with non-closure arguments. The former
solution only applied to closure literals, which left some holes in
Sendable checking.
Fixes rdar://77753021.
The current IUO design always forms a disjunction
at the overload reference, for both:
- An IUO property `T!`, forming `$T := T? or T`
- An IUO-returning function `() -> T!`, forming `$T := () -> T? or () -> T`
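Source-level shapes of those two cases (a hedged illustration):

struct Lookup {
  // IUO property: the overload reference forms `$T := Int? or Int`.
  var cached: Int!

  // IUO-returning function: previously formed
  // `$T := () -> Int? or () -> Int` at the overload reference.
  func compute() -> Int! { 42 }
}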
This is simple in concept; however, it's suboptimal
for the latter case of IUO-returning functions for
a couple of reasons:
- The arguments cannot be matched independently of
the disjunction
- There's some awkwardness when it comes to e.g. wrapping
the overload type in an outer layer of optionality
such as `(() -> T!)?`:
- The binding logic has to "adjust" the correct
reference type after forming the disjunction.
- The applicable fn solving logic needs a special
case to handle such functions.
- The CSApply logic needs various hacks such as
ImplicitlyUnwrappedFunctionConversionExpr to
make up for the fact that there's no function
conversion for IUO functions; we can only force
unwrap the function result.
- This led to various crashes in cases where
we'd fail to detect the expr and peephole
the force unwrap.
- This also led to crashes where the solver
would have a different view of the world than
CSApply, as the former would consider an
unwrapped IUO function to be of type `() -> T`
whereas CSApply would correctly see the overload
as being of type `() -> T?`.
To remedy these issues, IUO-returning functions no
longer have their disjunction formed at the overload
reference. Instead, a disjunction is formed when
matching result types for the applicable fn
constraint, using the callee locator to determine
if there's an IUO return to consider. CSApply then
consults the callee locator when finishing up
applies, and inserts the force unwraps as needed,
eliminating ImplicitlyUnwrappedFunctionConversionExpr.
This means that now all IUO disjunctions are of the
form `$T := T? or T`. This will hopefully allow a
further refactoring away from using disjunctions
and instead using type variable binding logic to
apply the correct unwrapping.
Fixes SR-10492.
Let's delay solution application to multi-statement closure bodies,
because declarations found in the body of such closures may depend
on the closure's `ExtInfo` flags, e.g. no-escape is set based on the
parameter if the closure is used in a call.
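A hedged illustration of that dependency, with illustrative names:

func withEach(_ body: (Int) -> Void) {   // non-escaping closure parameter
  for i in 0..<3 { body(i) }
}

withEach { value in
  // Multi-statement body: the solution for declarations in here is applied
  // only after the closure's `ExtInfo` (e.g. escaping vs. non-escaping,
  // taken from the parameter it is passed to) is known.
  let doubled = value * 2
  print(doubled)
}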
A contextual purpose for a sequence expression associated with a
`for-in` statement, which decays into a `ConformsTo` constraint
on the `Sequence` or `AsyncSequence` protocol.
Note that CTP_ForEachSequence is almost identical to CTP_ForEachStmt
but the meaning of the latter is overloaded, so to avoid breaking solution
targets I have decided to add a new purpose for now.
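A hedged illustration of where this purpose applies:

func total(of values: [Int]) -> Int {
  var sum = 0
  // The sequence expression `values` gets this contextual purpose, which
  // decays into a ConformsTo constraint on `Sequence` (or `AsyncSequence`
  // for `for await` loops).
  for value in values {
    sum += value
  }
  return sum
}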