Create a new namespace, `swift::constraints::inference`, and associate
`PotentialBinding` with it. This way it becomes possible for the constraint
graph to operate on `PotentialBinding`s in the future.
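A minimal sketch of the intended layout; only `PotentialBinding` and the namespace name come from this change, the comments are illustrative:

namespace swift {
namespace constraints {
namespace inference {

/// A single candidate binding for a type variable (members elided).
struct PotentialBinding;

} // end namespace inference
} // end namespace constraints
} // end namespace swift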
Currently, potential bindings are stored in a vector (`SmallVector`),
and every call has to pass an additional set of already-seen types to
the inference methods in order to unique the bindings. Instead, let's
merge these two together and use `SetVector` for binding storage,
which would also be great for incremental mode, which can't
pass additional sets around.
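Roughly, the difference looks like this (sketched with standard containers and a stand-in `Binding` type; the real code uses `PotentialBinding` with LLVM's `SetVector`):

#include <string>
#include <unordered_set>
#include <vector>

// Stand-in for PotentialBinding; the real type carries a Type, a
// binding kind, and the constraint that produced it.
struct Binding {
  std::string type;
};

// Before: every caller threads a separate "seen types" set through the
// inference methods just to keep the vector free of duplicates.
void addBinding(std::vector<Binding> &bindings,
                std::unordered_set<std::string> &seenTypes,
                const Binding &binding) {
  if (seenTypes.insert(binding.type).second)
    bindings.push_back(binding);
}

// After: a set-backed vector (what llvm::SetVector provides) keeps both
// insertion order and uniqueness, so no extra set has to be passed around.
class BindingStorage {
  std::vector<Binding> ordered;
  std::unordered_set<std::string> seen;

public:
  bool insert(const Binding &binding) {
    if (!seen.insert(binding.type).second)
      return false;
    ordered.push_back(binding);
    return true;
  }
  const std::vector<Binding> &bindings() const { return ordered; }
};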
For arithmetic operators, only sort overloads that are related,
e.g. Sequence overloads. Further, choose which generic overloads
to attempt first based on whether any known argument types
conform to one of the standard arithmetic protocols.
When attempting a disjunction choice that does not introduce conversions,
check early whether the known argument types satisfy the generic operator's
conformance requirements, and skip the overload choice if any requirement fails.
This helps the solver avoid exploring far too much of the search space when
the right solution involves a generic operator but the argument types
are known up front, such as `collection + collection + collection`.
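A hedged sketch of that filtering step; every type and name below is a hypothetical stand-in rather than the actual solver API:

#include <functional>
#include <string>
#include <vector>

// Hypothetical stand-ins for solver entities.
struct KnownType { std::string name; };
struct Requirement { std::string protocolName; };
struct OverloadChoice {
  bool isGenericOperator = false;
  std::vector<Requirement> requirements;
};

// Assumed conformance oracle; the real solver would consult its
// conformance lookup machinery instead.
using ConformsTo =
    std::function<bool(const KnownType &, const Requirement &)>;

// Skip a generic operator choice as soon as a requirement is known to
// fail for an argument type that has already been resolved.
bool shouldSkipChoice(const OverloadChoice &choice,
                      const std::vector<KnownType> &knownArgumentTypes,
                      const ConformsTo &conformsTo) {
  if (!choice.isGenericOperator)
    return false;
  for (const auto &argument : knownArgumentTypes)
    for (const auto &requirement : choice.requirements)
      if (!conformsTo(argument, requirement))
        return true; // This choice can't succeed; don't explore it.
  return false;
}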
Wrapping literal protocol requirements in a dedicated struct streamlines
access to the associated information and makes it possible to add more helpers.
Also cache the default type in the struct itself for easy access.
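A rough sketch of what such a wrapper could look like; the member names below are assumptions made for illustration, not the actual struct:

#include <string>

// Hypothetical shape of a literal-protocol requirement wrapper. The real
// code would reference Constraint/ProtocolDecl/Type; strings and flags
// stand in for them here.
struct LiteralRequirementSketch {
  // The literal protocol, e.g. ExpressibleByIntegerLiteral.
  std::string protocolName;

  // Cached default type for the protocol (e.g. Int), so it doesn't have
  // to be re-derived every time it's needed.
  std::string cachedDefaultType;

  // Whether the requirement was stated directly on this type variable
  // or inferred transitively.
  bool isDirect = false;

  // Whether an existing binding already covers this literal requirement,
  // making a default binding unnecessary.
  bool isCovered = false;

  bool hasDefaultType() const { return !cachedDefaultType.empty(); }
};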
One more step towards incrementality of binding inference. Instead of
trying to determine literal protocol coverage during finalization
of the bindings, let's do that as soon as new bindings and/or literal
protocol requirements are discovered.
Literal protocol requirements are handled differently from other
protocols: they require additional contextual information (such
as coverage and the direct/transitive distinction) and participate in
binding inference (they can be turned into default bindings).
Let's keep defaults separate from direct and transitive bindings;
that would make them easier to handle in the incremental model.
Instead of generating bindings for defaults and adding them to the main
set, let's allow the producer to choose what to do with them once a type
variable has been picked for attempting.
As a step towards making binding inference more incremental, let's
make the producer responsible for adding the hole type binding instead of
doing so in `finalize`.
Wrapping bindings in an optional type based on the presence of
an `ExpressibleByNilLiteral` conformance requirement should
be done after the type variable has been selected for attempting.
Doing that work up front would be wasteful, since it doesn't
affect binding ranking in any way.
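A minimal sketch of the intended ordering, with illustrative names rather than the actual solver types; the optional injection happens only once a type variable's bindings are about to be attempted:

#include <string>
#include <vector>

// Illustrative stand-in for a candidate binding's type.
struct CandidateBinding {
  std::string type;
};

// Called lazily, right before the selected type variable's bindings are
// attempted; deferring it is fine because the wrapping doesn't influence
// binding ranking.
std::vector<CandidateBinding>
wrapInOptionalIfNeeded(std::vector<CandidateBinding> bindings,
                       bool requiresNilLiteralConformance) {
  if (!requiresNilLiteralConformance)
    return bindings;
  for (auto &binding : bindings)
    binding.type = "Optional<" + binding.type + ">";
  return bindings;
}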
Replaces the `InvolvesTypeVariables` flag with a set of adjacent type
variables found during binding inference.
This approach is more suitable for incremental binding computation
because it makes it possible to maintain a list of type variables that
affect ranking, and to check whether something has been resolved without
re-evaluating the constraints associated with the given type variable.
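Schematically, the change looks something like this (the surrounding struct and names are assumed for illustration):

#include <unordered_set>

// Illustrative stand-in for a type variable.
struct TypeVariable {
  bool resolved = false;
};

struct PotentialBindingsSketch {
  // Before: a single flag summarizing "some constraint still mentions
  // unresolved type variables".
  //   bool InvolvesTypeVariables = false;

  // After: the concrete set of adjacent type variables discovered during
  // inference, so incremental updates can ask about resolution directly
  // instead of re-walking the constraints.
  std::unordered_set<const TypeVariable *> AdjacentVars;

  bool involvesTypeVariables() const {
    for (const auto *var : AdjacentVars)
      if (!var->resolved)
        return true;
    return false;
  }
};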
- Keep track of all constraints that caused a particular type variable to
be de-prioritized.
- Convert a property into a method that determines whether a
given set of potential bindings is "fully bound" and has to
be delayed until certain constraints are simplified.
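A hedged sketch of both points together; the names here are hypothetical:

#include <vector>

// Illustrative stand-in for a constraint in the system.
struct DelayingConstraint {
  bool simplified = false;
};

struct BindingsDelaySketch {
  // Constraints that caused this type variable to be de-prioritized.
  std::vector<const DelayingConstraint *> DelayedBy;

  // Formerly a cached property; as a method the answer can change as the
  // recorded constraints are simplified.
  bool isDelayed() const {
    for (const auto *constraint : DelayedBy)
      if (!constraint->simplified)
        return true;
    return false;
  }
};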
Instead of attempting to join only the last recorded supertype
binding, let's attempt to join any previously recorded supertype
binding with an incoming one, to make sure the set contains only
the most viable types.
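A rough sketch of that joining step, using a hypothetical `join` callback that yields the common supertype of two types when one exists (the real solver has its own type-join machinery):

#include <functional>
#include <optional>
#include <string>
#include <vector>

// Hypothetical join: returns the common supertype of two types, if any.
using Join = std::function<std::optional<std::string>(
    const std::string &, const std::string &)>;

// Try to fold the incoming supertype binding into *any* previously
// recorded one, not just the most recent, so the set keeps only the
// most viable candidates.
void addSupertypeBinding(std::vector<std::string> &supertypes,
                         const std::string &incoming, const Join &join) {
  for (auto &existing : supertypes) {
    if (auto joined = join(existing, incoming)) {
      existing = *joined; // Replace with the more general type.
      return;
    }
  }
  supertypes.push_back(incoming);
}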
Favoring operators already bound elsewhere helps with successfully
finding a solution, and often finding it fast, but that solution is not
necessarily the best one.
func foo(a: Int, b: Int) {}
func foo(a: String) {}
// Int and String should both be valid, despite the missing argument for the
// first overload, since the second arg may just not have been written yet.
foo(a: <complete here>
func bar(a: (Int) -> ()) {}
func bar(a: (String, Int) -> ()) {}
// $0 being of type String should be valid, rather than just Int, since $1 may
// just not have been written yet.
bar { $0.<complete here> }
This is necessary in order to have access to private members of
a `ConstraintSystem` for testing purposes, such as logic related
to potential binding computation.
Implements iterative protocol requirement inference through
subtype, conversion and equivalence relationships.
This algorithm doesn't depend on the type variable finalization
order (which is currently the order of type variable introduction).
If a given type variable doesn't yet have its transitive protocol
requirements inferred, the algorithm uses an iterative depth-first
walk through its supertypes and equivalences and incrementally
infers transitive protocols for each type variable involved,
transferring new information down the chain, e.g.
T1   T3
  \  /
   T4   T5
     \  /
      T2
Here `T1` and `T3` are supertypes of `T4`, and `T4` and `T5`
are supertypes of `T2`.
Let's assume the algorithm starts at `T2` and none of the involved
type variables have their protocol requirements inferred yet.
First, it considers the supertypes of `T2`, which are `T4` and `T5`.
Since `T5` is the last in its chain, the algorithm transfers its
direct protocol requirements to `T2`. `T4` has supertypes `T1` and
`T3` - they transfer their direct protocol requirements to `T4`,
and `T4` transfers its direct and transitive (from `T1` and `T3`)
protocol requirements to `T2`. At this point all the type variables
in the subtype chain have their transitive protocol requirements resolved
and cached, so they don't have to be re-inferred later.
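A simplified sketch of that traversal; the graph representation and names are assumptions for illustration (and cycles from equivalences are ignored here), while the real implementation works over constraint-system state:

#include <optional>
#include <set>
#include <string>
#include <utility>
#include <vector>

// Hypothetical model of a type variable, for illustration only.
struct TypeVarNode {
  std::set<std::string> directProtocols;           // from its own constraints
  std::vector<TypeVarNode *> supertypes;           // subtype/conversion edges
  std::optional<std::set<std::string>> transitive; // cache, once inferred
};

// Iterative depth-first walk: infer transitive protocols for every type
// variable reachable through supertype edges, caching each result so it
// never has to be recomputed (assumes the supertype graph is acyclic).
void inferTransitiveProtocols(TypeVarNode *root) {
  std::vector<std::pair<TypeVarNode *, bool>> stack = {{root, false}};

  while (!stack.empty()) {
    auto [node, supertypesDone] = stack.back();
    stack.pop_back();

    if (node->transitive)
      continue; // Already inferred and cached.

    if (!supertypesDone) {
      // Revisit this node after its supertypes have been processed.
      stack.push_back({node, true});
      for (auto *super : node->supertypes)
        if (!super->transitive)
          stack.push_back({super, false});
      continue;
    }

    // All supertypes are resolved; fold their direct and transitive
    // protocols into this node's cached transitive set.
    std::set<std::string> result;
    for (auto *super : node->supertypes) {
      result.insert(super->directProtocols.begin(),
                    super->directProtocols.end());
      result.insert(super->transitive->begin(), super->transitive->end());
    }
    node->transitive = std::move(result);
  }
}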