The existing check is a no-op because it would never produce a null for
`paramType` under the conditions in the `else` branch. A better
API to use here is `conformsToKnownProtocol`, just like in other cases.
The `??` operator is overloaded on the optionality of its result. When the
first argument matches exactly, the ranking is going to be skewed
towards selecting an overload choice that returns a non-optional type.
This is not always correct, e.g. when the operator is involved in optional
chaining. To avoid producing an incorrect favoring, let's skip this
disjunction when the constraints associated with the result type indicate
that it should be optional.
Simply adding it as a binding won't work because, if the second argument
is non-optional, the overload that returns `T?` would still have a lower
score.
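A minimal sketch of the optional-chaining case mentioned above, using
hypothetical type and property names:
```swift
struct Settings {
  var timeout: Int? = nil
}

func effectiveTimeout(_ custom: Int?, _ defaults: Settings?) -> Int? {
  // `custom` matches the first parameter of both `??` overloads exactly, but the
  // right-hand side `defaults?.timeout` is optional, so the result of `??` has to
  // stay `Int?`; favoring the non-optional-returning overload here would be wrong.
  custom ?? defaults?.timeout
}
```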
Resolves: rdar://164201746
This is a fix for the ported "calls with a single unlabeled argument"
hack. If an overload doesn't match the context on the async effect, let's not
favor it, because that mismatch is more important than defaulted parameters.
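For illustration, a hedged sketch of that situation with hypothetical `fetch`
overloads, where the synchronous one has a defaulted parameter:
```swift
func fetch(_ id: Int, retries: Int = 3) -> String { "sync" }
func fetch(_ id: Int) async -> String { "async" }

func caller() async -> String {
  // The old favoring hack preferred the overload with the defaulted parameter even
  // though it doesn't match the async context; the async mismatch should win out.
  await fetch(42)
}
```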
Resolves: rdar://164269641
Update the special favoring logic for unlabeled unary calls to support
non-overloaded member references in argument positions.
The original hack missed the case where the type of a member is known
in advance (i.e. a property without overloads) because there was
another hack (shrink) for that.
This helps in situations like `Double(x)` where `x` is a property
of some type that is referenced using an implicit `self.` injected
by the compiler.
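A minimal sketch of that `Double(x)` situation, with hypothetical type and
property names:
```swift
struct Measurement {
  var x: Int = 0

  func scaled() -> Double {
    // `x` here is really `self.x` with an implicit `self.` injected by the
    // compiler; it's a non-overloaded property reference whose type is known,
    // so the favoring logic can use it when ranking `Double(x)`.
    Double(x) * 1.5
  }
}
```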
Resolves: rdar://161419917
This type is only intended for pattern matching against `nil`,
and the solver shouldn't eagerly attempt to infer this type for
`nil` arguments of the `==` and `!=` operators; it should instead
be inferred from the other argument or the result.
Resolves: rdar://158063151
If the parameter is `Any`, we assume that all candidates are
convertible to it, which makes it a perfect match. The solver
would then decide whether erasing to an existential is preferable.
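A hedged illustration of that ranking behavior with hypothetical overloads:
```swift
func record(_ value: Any) {}
func record(_ value: Int) {}

// Every argument converts to `Any`, so the first overload counts as a perfect
// match during favoring; the solver still ends up preferring the more specific
// `Int` overload instead of erasing to an existential.
record(42)
```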
Resolves: rdar://157644867
Since parameters that have function types don't participate in
ranking, function types that are wrapped in optionals should be
excluded as well, because it's possible to overload on optionality and
such overloads with optional function types would gain an undue advantage.
For example:
```swift
func test(_: (() -> Void)?) {}
func test(_: () -> Void) {}

func compute(handler: () -> Void) {
  test(handler)
}
```
Without this change the second overload would be ignored and
the first one would be an exact match.
Resolves: rdar://157234317
This makes sure that optional and non-optional types are ranked
uniformly when matched against a generic parameter type that
could accept either of them.
This is a more general fix for https://github.com/swiftlang/swift/pull/83365
`??` is overloaded on the optionality of its second parameter;
let's prevent ranking the argument candidates for this parameter
if there are candidates that come from failable initializer
overloads, because non-optional candidates are always going
to be better and that can skew the selection.
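A small sketch of the failable-initializer case, using hypothetical values:
```swift
let fallback: Int? = nil

// `Int("42")` comes from a failable initializer and produces `Int?`, so the
// candidates for the second parameter of `??` shouldn't be ranked here;
// otherwise the non-optional overload would always look better.
let value = fallback ?? Int("42")  // `value` is `Int?`
```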
Resolves: rdar://156853018
This fixes a regression introduced in https://github.com/swiftlang/swift/pull/82574.
The test case demonstrates the issue: we would incorrectly choose the base class
overload of `==` if one of the parameters was an archetype or dynamic `Self`.
Fixes rdar://156454697.
Infix logical operators are usually not overloaded and don't
form disjunctions, but when they do, let's prefer them over
other operators when they have fewer choices because it helps
to split operator chains.
- Expand the inference to include prefix and postfix unary operators
- Recognize previously resolved declaration and member references
in argument positions and record their types.
- Expand reconciliation logic from `Double` <-> `Int` to include other
floating-point types and `CGFloat`.
If the scores are the same and both disjunctions are operators,
they could be ranked purely based on whether their candidates
were speculative or not. The one with more context always wins.
Consider the following situation:
```swift
func test(_: Int) { ... }
func test(_: String) { ... }
test("a" + "b" + "c")
```
In this case we should always prefer `... + "c"` over `"a" + "b"`,
because it would fail and prune the other overload if the parameter
type (aka the contextual type) is `Int`.
This is helpful in situations where all of the chained operators
have literal arguments, because it makes sure that every
operator has the same score if there is no contextual type.
These choices could be better than some other non-disfavored ones
in certain situations, e.g. when an `async` overload is disfavored
but appears in an async context, it's preferable to a non-async
overload choice.
Note that the code that mimics the old hacks still needs to filter on
`@_disfavoredOverload` in a few places to maintain source compatibility.
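A hedged sketch of the disfavored-`async`-in-async-context case, with
hypothetical functions:
```swift
func value() -> Int { 0 }

@_disfavoredOverload
func value() async -> Int { 1 }

func caller() async -> Int {
  // Even though the async overload is marked `@_disfavoredOverload`, the favoring
  // heuristics shouldn't rule it out here, since it matches the async context.
  await value()
}
```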
This matches the behavior of the old hack, where favoring choices
were rolled back if `mustConsider` produced `true`, which happened
only for protocol requirements and variadic overload choices, regardless
of their viability.
When matching a candidate like `[Int]` against `Array<Element>`,
we need to conservatively assume that, if the nominals match,
the argument is a viable exact match, because otherwise it's
possible to skip some of the valid matches when another overload
choice has generic parameters at the same parameter position.
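For example, with hypothetical overloads like these, `[Int]` has to count as an
exact match for `Array<Element>` so the generic overload stays in play:
```swift
func consume<Element>(_ values: Array<Element>) {}
func consume(_ values: [Int]) {}

// The nominals match (`Array` vs. `Array`), so the generic overload must be
// treated as a viable exact match rather than being skipped during ranking.
consume([1, 2, 3])
```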
The problem this is trying to solve is eager selection of operators
over unsupported disjunctions; when matching operators, let's take
speculative information into account, because it helps to make better
choices in this case.
We need to have a notion of a "complete" binding set before
we can allow inference from generic parameters and ternary expressions;
otherwise we'd make a favoring decision that might not be
correct, e.g. `v ?? (<<cond>> ? nil : o)` where `o` is `Int`.
`getBindingsFor` doesn't currently infer transitive bindings,
which means that for a ternary we'd only have a single
binding, `Int`, which could lead to favoring the overload of
`??` that has a non-optional parameter on the right-hand side.
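A minimal sketch of the `v ?? (<<cond>> ? nil : o)` shape, with hypothetical
names:
```swift
func pickValue(_ v: Int?, _ o: Int, _ cond: Bool) -> Int? {
  // The ternary can produce `nil`, so the `??` overload with an optional
  // right-hand side parameter has to remain a candidate; favoring the
  // non-optional one based on the single `Int` binding would be incorrect.
  v ?? (cond ? nil : o)
}
```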
Some of the disjunctions are not supported by the optimizers but
could still be a better choice than an operator. Using a non-score-based
preference mechanism first allows us to make sure that
operator disjunctions are not selected too eagerly in situations
where, for example, a member (supported or not) could be a better choice.
`isPreferable` currently targets only operators in result builder
contexts, but it could be expanded to more uses in the future.
The new ranking + selection algorithm suffered from over-eagerly selecting
operator disjunctions vs. unsupported non-operator ones even if the
ranking was based purely on literal candidates.
This change introduces the notion of a speculative candidate - one whose
type is inferred from a literal or from an initializer call that has
failable overloads and/or implicit conversions (e.g. `Double`/`CGFloat`).
`determineBestChoicesInContext` would reset the score of an operator
disjunction that was computed based on speculative candidates alone,
but would preserve the favoring information. This way the selection algorithm
would not be skewed towards operators, and at the same time, if there
is no choice but to select one, we'd still have favoring information
available, which is important for operator chains that consist purely
of literals.
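As a hedged illustration, with hypothetical overloads, of a purely literal
operator chain in which every candidate type is speculative in the sense
described above:
```swift
func render(_ value: Double) -> String { "\(value)" }
func render(_ value: String) -> String { value }

// All of the operands are integer literals, so every candidate type for the
// `+` chain is speculative; resetting the operator disjunction's score while
// keeping the favoring information still lets the chain resolve.
let result = render(1 + 2 + 3)  // resolves with the `Double` overload
```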
Thanks to `LinkedExprAnalyzer`, the unary argument hack was able to
infer matching based on literals and arithmetic operator chains;
let's preserve that behavior in a more principled manner.
The `==` and `!=` operators have special overloads that allow matching
a `nil` literal on either side even if the wrapped type on the other side
doesn't conform to `Equatable`.
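For example (hypothetical type), comparing an optional of a non-`Equatable`
type against `nil` still type-checks thanks to those overloads:
```swift
struct NotEquatable {}

let value: NotEquatable? = nil

// `NotEquatable` doesn't conform to `Equatable`, yet these comparisons are
// accepted because of the special nil-comparison overloads of `==` and `!=`.
let isMissing = value == nil
let isPresent = value != nil
```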
If there are no same-type requirements and the parameters use
either concrete types or generic parameter types directly,
the optimizer should be able to handle ranking. Currently,
candidate arguments are considered in isolation, which makes
it impossible to deal with same-type requirements and
complex generic signatures.
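A sketch (hypothetical function) of why considering arguments in isolation
can't handle same-type requirements:
```swift
// The same-type requirement ties the two parameters together: whether `a` is
// a good match can't be judged without also looking at `b`, which is exactly
// what per-argument candidate ranking can't do today.
func merge<C1: Collection, C2: Collection>(_ a: C1, _ b: C2) -> [C1.Element]
    where C1.Element == C2.Element {
  Array(a) + Array(b)
}

let merged = merge([1, 2], Set([3, 4]))
```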
Disjunctions with a single element are sometimes introduced after
disfavoring, so we need to make sure that they are always preferred
during disjunction selection.
Having it be part of the other matching wasn't a good idea, because
the previous "favoring" happened only in a few situations - if the argument
was a declaration reference, an application, or a (dynamic) subscript that
had an overload choice selected during constraint generation.
Since each candidate and overload choice are considered independently,
there is no way to judge whether a non-default literal type is going
to result in a worse solution than the default one.
For example, the `??` operator could produce an optional type,
so `test(<<something>> ?? 0)` could result in an optional
argument that wraps a type variable. It should be possible
to infer bindings from the underlying type variable and restore
optionality.