Diagnostics can outlive the ConstraintSystem itself if we have a
diagnostic transaction for e.g. `typeCheckParameterDefault`, so make
sure we don't try to use a solver-allocated type as an argument.
Adjust isolation checking to handle a misused `isolated` attribute
and let the attribute checker properly diagnose it.
Resolves: rdar://148076903
Resolves: https://github.com/swiftlang/swift/issues/80363
Instead of checking `EnableConstraintSolverPerformanceHacks`
directly, let's use an option that is easier to set, e.g.
when a block list is implemented.
Infer the result type of a subscript with an Array or Dictionary base
type if the argument type matches the key type exactly or is a
supported literal type.
This helps to maintain the existing behavior without having to resort
to "favored type" computation.
- Expand the inference to include prefix and postfix unary operators
- Recognize previously resolved declaration and member references
in argument positions and record their types.
- Expand reconciliation logic from Double<->Int to include other
floating-point types and `CGFloat` (see the sketch below).
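A hypothetical example of the kind of chain this covers - a prefix
operator, a declaration reference in argument position, and an integer
literal reconciled with a `Double` operand:
```swift
func magnitude(_ value: Double) -> Double { value.magnitude }
func magnitude(_ value: Int) -> Int { abs(value) }

let offset = 2.5

// The prefix `-` participates in the inference, `offset` is a previously
// resolved declaration reference, and the integer literal is reconciled
// with the Double operand, so the Double overload is the better match.
let result = magnitude(-1 + offset)
```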
If the scores are the same and both disjunctions are operators
they could be ranked purely based on whether the candidates
were speculative or not. The one with more context always wins.
Consider the following situation:
```swift
func test(_: Int) { ... }
func test(_: String) { ... }
test("a" + "b" + "c")
```
In this case we should always prefer `... + "c"` over `"a" + "b"`
because it would fail and prune the other overload if the parameter
type (aka contextual type) is `Int`.
This is helpful in situations when all of the chained operators
have literal arguments because it would make sure that every
operator has the same score if there is no contextual type.
These choices could be better than some other non-disfavored ones
in certain situations, e.g. when an `async` overload is disfavored
but appears in an async context it's preferable to a non-async
overload choice.
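A hypothetical sketch of that situation:
```swift
func fetch() -> Int { 0 }

@_disfavoredOverload
func fetch() async -> Int { 0 }

func caller() async -> Int {
  // Even though the async overload is disfavored, in an async context
  // it is still preferable to the synchronous one.
  await fetch()
}
```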
Note that the code that mimics the old hacks still needs to filter on
`@_disfavoredOverload` in a few places to maintain source compatibility.
This matches the behavior of the old hack where favoring choices
were rolled back if `mustConsider` produced `true`, which happened
only for protocol requirements and variadic overload choices,
regardless of their viability.
When matching a candidate like `[Int]` against `Array<Element>`
we need to conservatively assume that if the nominals match
the argument is a viable exact match, because otherwise it's
possible to skip some of the valid matches when another overload
choice has generic parameters at the same parameter position.
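For illustration (hypothetical overloads), the conservative assumption
keeps the `Array<Element>` overload in play:
```swift
func send<Element>(_ items: Array<Element>) {}
func send<T>(_ value: T) {}

let values = [1, 2, 3]
// `[Int]` is treated as a viable exact match for `Array<Element>`, so the
// optimizer doesn't skip this overload in favor of the unconstrained
// generic one at the same parameter position.
send(values)
```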
The problem this is trying to solve is eager selection of operators
over unsupported disjunctions; when matching operators, let's take
speculative information into account because it helps to make better
choices in this case.
We need to have a notion of a "complete" binding set before
we can allow inference from generic parameters and ternary expressions,
otherwise we'd make a favoring decision that might not be
correct, e.g. `v ?? (<<cond>> ? nil : o)` where `o` is `Int`.
`getBindingsFor` doesn't currently infer transitive bindings,
which means that for a ternary we'd only have a single
binding - `Int` - which could lead to favoring an overload of
`??` that has a non-optional parameter on the right-hand side.
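A hypothetical reduction of that example:
```swift
func pick(_ flag: Bool, _ v: Int?, _ o: Int) -> Int? {
  // The ternary can produce `nil`, so its type is `Int?`; favoring an
  // overload of `??` with a non-optional right-hand side parameter here
  // would be incorrect.
  v ?? (flag ? nil : o)
}
```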
Some of the disjunctions are not supported by the optimizer but
could still be a better choice than an operator. Using a non-score
based preference mechanism first allows us to make sure that
operator disjunctions are not selected too eagerly in situations
where e.g. a member (supported or not) could be a better choice.
`isPreferable` currently targets only operators in result builder
contexts but it could be expanded to more uses in the future.
The new ranking + selection algorithm suffered from over-eagerly selecting
operator disjunctions vs. unsupported non-operator ones even if the
ranking was based purely on literal candidates.
This change introduces a notion of a speculative candidate - one which
has a type inferred from a literal or an initializer call that has
failable overloads and/or implicit conversions (i.e. Double/CGFloat).
`determineBestChoicesInContext` would reset the score of an operator
disjunction which was computed based on speculative candidates alone
but would preserve favoring information. This way the selection algorithm
would not be skewed towards operators and, at the same time, if there
is no choice but to select one we'd still have favoring information
available, which is important for operator chains that consist purely
of literals.
Thanks to `LinkedExprAnalyzer`, the unary argument hack was able to
infer matching based on literals and arithmetic operator chains;
let's preserve that behavior in a more principled manner.
`==` and `!=` operators have special overloads that allow matching
a `nil` literal on either side even if the wrapped type on the other
side doesn't conform to `Equatable`.
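For example (hypothetical type):
```swift
struct Token {} // deliberately not Equatable

func hasToken(_ token: Token?) -> Bool {
  // The special `!=` overload matches the `nil` literal even though
  // `Token` doesn't conform to `Equatable`.
  token != nil
}
```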
If there are no same-type requirements and parameters use
either concrete types or generic parameter types directly,
the optimizer should be able to handle ranking. Currently
candidate arguments are considered in isolation, which makes
it impossible to deal with same-type requirements and
complex generic signatures.
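A hypothetical pair of overloads illustrating the distinction:
```swift
// Supported: parameters are concrete types or plain generic parameters,
// so candidate arguments can be ranked in isolation.
func combine<T>(_ value: T, _ count: Int) -> [T] {
  Array(repeating: value, count: count)
}

// Unsupported: the same-type requirement relates the two parameters,
// which per-argument ranking cannot model.
func combine<S: Sequence, T>(_ values: S, _ extra: T) -> [T] where S.Element == T {
  Array(values) + [extra]
}
```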
Since such choices are all but guaranteed to be worse than any
other non-disfavored choice, let's attempt them last to avoid
having to form a complete solution just to filter it out during
ranking.
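For example (hypothetical overloads):
```swift
func render(_ value: String) -> String { value }

@_disfavoredOverload
func render(_ value: Any) -> String { String(describing: value) }

// The disfavored `Any` overload is almost always worse than the `String`
// one, so it's attempted last rather than forming a complete solution
// that ranking would discard anyway.
let output = render("hello")
```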
Disjunctions with a single element are sometimes introduced after
disfavoring, so we need to make sure that they are always preferred
during disjunction selection.
Having it be part of the other matching wasn't a good idea because
the previous "favoring" happened only in a few situations - if the
argument was a declaration reference, application, or (dynamic)
subscript that had an overload choice selected during constraint
generation.
Since each candidate and overload choice are considered independently,
there is no way to judge whether a non-default literal type is going
to result in a worse solution than the default one.
For example, the `??` operator could produce an optional type,
so `test(<<something>> ?? 0)` could result in an optional
argument that wraps a type variable. It should be possible
to infer bindings from the underlying type variable and restore
optionality.
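A hypothetical sketch of that shape of call:
```swift
func test(_: Int) {}

func caller(_ value: Int?) {
  // While solving, the result of `??` may be an optional wrapping a type
  // variable; bindings should still be inferred from the underlying type
  // variable with optionality restored afterwards.
  test(value ?? 0)
}
```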
Don't attempt this optimization if the call has number literals.
This is intended to narrowly fix situations like:
```swift
func test<T: FloatingPoint>(_: T) { ... }
func test<T: Numeric>(_: T) { ... }
test(42)
```
The call should use the `<T: Numeric>` overload even though
`<T: FloatingPoint>` is a more specialized version, because
selecting `<T: Numeric>` doesn't introduce non-default literal
types.
(cherry picked from commit 8d5cb112ef)
`calleeFn` now returns the underlying declaration reference, looking through
`ConstructorRefCallExpr`, which means the downgrade logic needs to check
whether the call is using an initializer reference before making a decision.
This is currently unused because the current mechanism sets favored
choices directly, but it would be utilized by the disjunction optimizer.
(cherry picked from commit e404ed722a)
Previously only nominal types and existentials were allowed, but
that's outdated; it's possible to reference a member of an opaque
result type as well, and the concrete type in this case is going to
be deduced by the solver.
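A sketch of the now-accepted case (hypothetical types):
```swift
protocol Shape {
  var area: Double { get }
}

struct Square: Shape {
  var side: Double
  var area: Double { side * side }
}

func makeShape() -> some Shape {
  Square(side: 2)
}

// Referencing a member of the opaque result type; the underlying
// concrete type is deduced by the solver.
let area = makeShape().area
```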