The existing overloading rules strongly prefer async functions within
async contexts, and synchronous functions in synchronous contexts.
However, when there are other differences in the
signature, particularly parameters of function type that differ in
async vs. synchronous, the overloading rule would force the use of the
asynchronous function even in cases where the synchronous function
would be better. An example:
```
func f(_: (Int) -> Int) { }
func f(_: (Int) async -> Int) async { }

func g(_ x: Int) -> Int { -x }

func h() async {
  f(g) // currently selects async f, want to select synchronous f
}
```
Effect this semantic change by splitting the "sync/async mismatch"
score in the constraint system into an "async in sync mismatch" score
that is mostly disqualifying (because the call will always fail) and a
less-important score for "sync used in an async context", which also
includes conversion from a synchronous function to an asynchronous
one. This way, only synchronous functions are still considered within
a synchronous context, but we get more natural overloading behavior
within an asynchronous context. The end result is intended to be
equivalent to what one would get with reasync:
```
func f(_: (Int) async -> Int) async { ... }
```
Addresses rdar://74289867.
For arithmetic operators, only sort overloads that are related,
e.g. Sequence overloads. Further, choose which generic overloads to
attempt first based on whether any known argument types conform to one
of the standard arithmetic protocols.
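For illustration, a small hypothetical example of the kind of
expression this ordering helps with (not taken from the change itself):

```
// `value` is known to be Double, which conforms to the standard arithmetic
// protocols, so the generic numeric overloads of `*` and `+` are worth
// attempting before unrelated overloads such as String or Array concatenation.
func scaleAndShift(_ value: Double) -> Double {
    value * 2.0 + 1.0
}
```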
Attempt the most specific choices first. Then, if the solver finds
a solution with one choice, it can skip any subsequent choices that
could be unconditionally used in place of the successful choice and
would produce the same solution.
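As a rough, source-level illustration (a made-up example, not part of
the change):

```
// The concrete Int overload is more specific than the generic one, so it is
// attempted first; once it succeeds, the generic overload (which could be
// used in its place and would produce the same solution) can be skipped.
func describe(_ x: Int) -> String { "concrete Int" }
func describe<T: BinaryInteger>(_ x: T) -> String { "generic BinaryInteger" }

let label = describe(42) // uses the concrete Int overload
```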
For a disjunction choice that does not introduce conversions, check
early whether the known argument types satisfy the generic operator's
conformance requirements, and skip the overload choice if any
requirement fails.
This helps the solver avoid exploring far too much of the search space
when the right solution involves a generic operator but the argument
types are known up front, as in `collection + collection + collection`.
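For example (an illustrative snippet, not from the change itself):

```
// All three operands are known to be [Int] up front, so generic `+` overloads
// whose conformance requirements aren't satisfied by [Int] can be skipped
// early instead of being fully explored by the solver.
let a = [1, 2, 3]
let b = [4, 5]
let c = [6]
let combined = a + b + c // [1, 2, 3, 4, 5, 6]
```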
Remove the SIMD-specific handling in
`DisjunctionStep::shortCircuitDisjunctionAt`.
This code is unnecessary because SIMD overloads are in their own
partition, so the short-circuiting will happen automatically.
Favoring operators already bound elsewhere often lets the solver find
a solution quickly, but it's not necessarily the best solution.
Make sure we don't end up in a situation where we have unsolved
constraints left over yet still consider the system fully solved.
This requires tweaking the type-matching code for dependent members
such that a concrete base is considered a failure rather than being
left unsolved. This should only happen when not in diagnostic mode,
as otherwise we use a hole.
Prefer attempting a closure before a related disjunction, for example:

* if there is a disjunction associated with the closure type, e.g. a
  coercion to some other type: `_ = { $0 } as (Int32) -> Void`
* if there is a disjunction associated with a collection which could
  provide more context to the constraint solver

Even if the contextual closure type has type variables inside, prefer
it over the disjunction, because it provides a lot of contextual
information early, which helps to solve the system faster.
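A small illustrative example (hypothetical; the `apply` function is
made up for this sketch):

```
// Even though the contextual closure type (T) -> T still contains a type
// variable for T, attempting the closure early lets $0's type flow in from
// the generic signature instead of waiting on the `+` operator disjunction.
func apply<T>(_ value: T, _ transform: (T) -> T) -> T {
    transform(value)
}

let result = apply(1) { $0 + 1 } // T inferred as Int
```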
An awful pattern we use throughout the compiler is to save and restore global flags just for little things. In this case, it was just to turn on some extra options in AST printing for type variables. The kicker is that the ASTDumper doesn't even respect this flag. Add this as a PrintOption and remove the offending save-and-restores.
This doesn't quite get them all: we appear to have productized this pattern in the REPL.
This commit also renames `ConstraintSystem::recordHole/isHole` to
`recordPotentialHole` and `isPotentialHole` to make it clear that
we don't know for sure whether a type variable is a hole until it's
bound to unresolved.
Don't prefer binding a hole to `Any` in the constraint system over a
different binding or disjunction. In other words, we will only choose
to bind a hole to `Any` if there are no other bindings and no
disjunctions.
Handle all cases of missing generic parameters.
In `ComponentStep::take`, when there are no bindings or disjunctions,
use hole propagation to default remaining free type variables that
aren't for generic parameters and continue solving. Rather than using
a defaultable constraint for holes, assign a fixed type directly when
we have no bindings to try.
One-way constraint expressions, which are the only things that
introduce one-way constraints at this point, want to look through
lvalue types to produce values. Rename OneWayBind to OneWayEqual, map
it down to an Equal constraint when it is simplified (to drop
lvalue-ness), and apply that coercion during constraint application.
Part of rdar://problem/50150793.
There were a few places where we wanted fast testing to see whether a
particular type variable is currently of interest. Instead of building
local hash tables in those places, keep type variables in a SetVector
for efficient testing.
Introduce the notion of "one-way" binding constraints of the form
$T0 one-way bind to $T1
which treats the type variables $T0 and $T1 as independent up until
the point where $T1 simplifies down to a concrete type, at which point
$T0 will be bound to that concrete type. $T0 won't be bound in any
other way, so type information ends up being propagated right-to-left
only. This allows a constraint system to be broken up into more
components that are solved independently. Specifically, the connected
components algorithm now proceeds as follows:
1. Compute connected components, excluding one-way constraints from
consideration.
2. Compute a directed graph amongst the components using only the
one-way constraints, where an edge A -> B indicates that the type
variables in component A need to be solved before those in component
B.
3. Using the directed graph, compute the set of components that need
to be solved before a given component.
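A minimal sketch of steps 1-3, in Swift for illustration only (the
names `OneWayEdge` and `predecessorSets` are made up here; the real
implementation lives in the constraint graph code):

```
struct OneWayEdge {
    var source: Int // component whose type variables must be solved first
    var target: Int // component that depends on it
}

// For each component, compute the set of components that must be solved
// before it, i.e. every component that can reach it via one-way edges.
func predecessorSets(componentCount: Int, edges: [OneWayEdge]) -> [Set<Int>] {
    var successors = Array(repeating: [Int](), count: componentCount)
    for edge in edges {
        successors[edge.source].append(edge.target)
    }

    var predecessors = Array(repeating: Set<Int>(), count: componentCount)
    for start in 0..<componentCount {
        var stack = successors[start]
        var visited = Set<Int>()
        while let next = stack.popLast() {
            guard visited.insert(next).inserted else { continue }
            predecessors[next].insert(start)
            stack.append(contentsOf: successors[next])
        }
    }
    return predecessors
}
```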
To utilize this, implement a new kind of solver step that handles the
propagation of partial solutions across one-way constraints. This
introduces a new kind of "split" within a connected component, where
we collect each combination of partial solutions for the input
components and (separately) try to solve the constraints in this
component. Any correct solution from any of these attempts will then
be recorded as a (partial) solution for this component.
For example, consider:
```
let _: Int8 = b ? Builtin.one_way(int8Or16(17))
                : Builtin.one_way(int8Or16(42))
```
where int8Or16 is overloaded with types `(Int8) -> Int8` and
`(Int16) -> Int16`. There are two one-way components (`int8Or16(17)`)
and (`int8Or16(42)`), each of which can produce a value of type `Int8`
or `Int16`. Those two components will be solved independently, and the
partial solutions for each will be fed into the component that
evaluates the ternary operator. There are four ways to attempt that
evaluation:
```
[Int8, Int8]
[Int8, Int16]
[Int16, Int8]
[Int16, Int16]
```
Of these four attempts, there are only two (partial) solutions that
can work at all, because the types in the ternary operator need a
common supertype:
```
[Int8, Int8]
[Int16, Int16]
```
Therefore, these are the partial solutions that will be considered the
results of the component containing the ternary expression. Note that
only one of them meets the final constraint (convertibility to
`Int8`), so the expression is well-formed.

To test this, introduce a new expression builtin `Builtin.one_way(x)`
that introduces a one-way expression constraint binding the result of
the expression `x`. The builtin is meant to be used for testing
purposes, and the one-way constraint expression itself can be
synthesized by the type checker to introduce one-way constraints later
on.
Part of rdar://problem/50150793.
Move the logic for creating connected components of orphaned
constraints into the connected-components algorithm code, rather than
making it a special part of SplitterStep.