These will never appear in the source language, but can arise
after substitution when the original type is a tuple type with
a pack expansion type.
Two examples:
- original type: (Int, T...), substitution T := {}
- original type: (T...), substitution T := {Int}
We need to model these correctly to maintain invariants.
Callers that previously relied on TupleType::get()
returning a ParenType now explicitly check for the one-element
case instead.
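As a hedged illustration, here is how such substitutions can arise, written with today's variadic-generics syntax (which postdates this change; the declarations are purely illustrative):

```swift
// Illustrative only: a tuple type containing a pack expansion.
func prepend<each T>(_ tuple: (Int, repeat each T)) {}

prepend((0, "a", true))  // T := {String, Bool}: a three-element tuple
// With T := {}, the substituted parameter type is the one-element
// tuple (Int), which has no spelling in the source language.
```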
To make this test work, fix an issue in `ConstraintSystem::salvage` where a
threshold breach during solving went unnoticed because we exited on ambiguity
before reaching the `isTooComplex` check. Address this by moving the
`isTooComplex` check to before we start processing solutions, and add another
one in `findBestSolution` so it can short-circuit as well.
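A minimal sketch of the new check ordering, using hypothetical Swift names rather than the compiler's actual C++ code:

```swift
// Hypothetical sketch: the complexity check now precedes the
// ambiguity early-exit, so a threshold breach can no longer be
// masked by bailing out on ambiguity first.
struct Solution {}
enum SolverError: Error { case tooComplex, ambiguous }

func salvage(_ solutions: [Solution], isTooComplex: Bool) throws -> Solution {
    if isTooComplex { throw SolverError.tooComplex }  // moved before solution processing
    guard solutions.count == 1, let only = solutions.first else {
        throw SolverError.ambiguous                   // previously we exited here first
    }
    return only
}
```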
Code-completion-related changes introduced a bug which increased
`score2` regardless of which type was an archetype, which surfaced
as a source compatibility regression in ReactiveKit.
This hooks up call argument position completion to the typeCheckForCodeCompletion API to generate completions from all the solutions the constraint solver produces (even those requiring fixes), rather than relying on a single solution being applied to the AST (if any).
Co-authored-by: Nathan Hawes <nathan.john.hawes@gmail.com>
This cleans up 90 instances of this warning and reduces the build spew
when building on Linux. This helps identify actual issues when
building, which can otherwise get lost in the stream of warning messages.
It also helps restore the ability to build the compiler with gcc.
When ranking constructor parameter lists, we
compose them as tuples or parens, and check if
they are subtypes or unlabeled versions of each
other. Previously this was done with the parameter
flags intact, but recently I changed the logic to
explicitly strip parameter flags in preparation
for no longer storing the flags on these types.
This caused a slight behavior change, as it turns
out we have a special case in `TupleType::get`
that allows an unlabeled single parameter to be
composed as a tuple type if its variadic bit is
set. With the parameter flags now stripped, we
produce a paren type. This means that when
comparing the parameter lists, e.g. `(x: Int...)`
and `(Int...)`, instead of comparing two tuple
types we end up comparing a tuple with a paren and
failing.
To preserve the old behavior, implement a special
case for when we have an unlabeled and labeled
variadic comparison for a single parameter. In
this case, add the parameter types directly to the
type diff, and track which one had the label. The
ranking logic can then use this to prefer the
unlabeled variant. This is only needed in the
single parameter case, as other cases will compare
as tuples the same as before. In cases where
variadics aren't used, we may end up trying to
compare parens with tuples, but that's consistent
with what we previously did.
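For illustration, the two parameter lists in question could come from overloads like these (hypothetical declarations, not from the change itself):

```swift
// A single variadic parameter, with and without a label. Per the
// special case above, ranking prefers the unlabeled variant.
func take(_ values: Int...) {}   // parameter list (Int...)
func take(x values: Int...) {}   // parameter list (x: Int...)
```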
rdar://84279742
The removed condition was incorrect: first, it would always be true,
since `losers` is populated with `false` entries based on the number
of viable solutions; second, it would result in a mix of solutions
with and without fixes.
Instead, in ambiguity cases, let's remove all of the solutions
that are worse than others based on score, to avoid doing extra
work in future steps or during diagnostics.
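A minimal sketch of that pruning, with hypothetical types standing in for the solver's actual ones:

```swift
// Hypothetical sketch: in the ambiguity path, drop every solution
// whose score is strictly worse than the best before doing any
// further work or diagnostics.
struct Solution { var score: Int }  // lower score is better here

func pruneWorseSolutions(_ solutions: inout [Solution]) {
    guard let best = solutions.map(\.score).min() else { return }
    solutions.removeAll { $0.score > best }
}
```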
Previously we were introducing a type variable
to mark a constructor's parameter list as
`TVO_PrefersSubtypeBinding`. Unfortunately this
relies on representing the parameter list as a
tuple, which will no longer be properly supported
once param flags are removed from tuple types.
Move the logic into CSRanking such that we pick up
and compare the parameter lists when comparing
overload bindings. For now, this still relies on
comparing the parameter lists as tuples, as there's
some subtle tuple subtyping rules that could
potentially affect source compatibility here, but
at least we can explicitly strip the parameter
flags and localise the hack to CSRanking rather
than exposing it as a constraint.
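As a hedged example of the behavior being preserved (declarations are illustrative):

```swift
// Both initializers are viable for S(5); ranking prefers the
// parameter list whose type is a subtype (Int over Int?), mirroring
// the old TVO_PrefersSubtypeBinding preference.
struct S {
    init(_ x: Int) {}
    init(_ x: Int?) {}
}
_ = S(5)  // picks init(_: Int)
```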
for unapplied references when the choice is a function declaration.
This will allow the solver to prune those overload choices when it
has already found a solution with a property (all else equal in the
score). This is already done as an ambiguity tie-breaker in solution
ranking, but adding this bit to the score will prune a lot of search
space within the solver.
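A hedged example of the kind of choice this score applies to:

```swift
// The base-name reference `first` could resolve to the `first`
// property or to the unapplied `first(where:)` method. Once a
// property solution is found, the new score bit lets the solver
// prune the function-declaration choices instead of waiting for
// solution ranking.
let value = [1, 2, 3].first  // resolves to the property: Int?
```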
Start treating the null {Can}GenericSignature as a regular signature
with no requirements and no parameters. This not only makes for a much
safer abstraction, but allows us to simplify a lot of the clients of
GenericSignature that would previously have to check for null before
using the abstraction.
Not filtering solutions causes unacceptable slowness in some cases.
For now, filter solutions as normal type checking does, to restore
performance.
rdar://76714968
One last usage of getOldType() remains here, but it's actually
meaningful since we want to handle InOutType there, so it will
take more work to eliminate.
The existing overloading rules strongly prefer async functions within
async contexts, and synchronous functions in synchronous contexts.
However, when there are other differences in the
signature, particularly parameters of function type that differ in
async vs. synchronous, the overloading rule would force the use of the
asynchronous function even in cases where the synchronous function
would be better. An example:
```swift
func f(_: (Int) -> Int) { }
func f(_: (Int) async -> Int) async { }
func g(_ x: Int) -> Int { -x }

func h() async {
  f(g) // currently selects async f, want to select synchronous f
}
```
Effect the semantic change by splitting the "sync/async mismatch"
score in the constraint system into an "async in sync mismatch" score
that is mostly disqualifying (because the call will always fail) and a
less-important score for "sync used in an async context", which also
includes conversion from a synchronous function to an asynchronous
one. This way, only synchronous functions are still considered within
a synchronous context, but we get more natural overloading behavior
within an asynchronous context. The end result is intended to be
equivalent to what one would get with reasync:
```swift
func f(_: (Int) async -> Int) async { ... }
```
Addresses rdar://74289867.
We were previously completely skipping the "best" solution filtering the solver
does, to make sure we didn't miss any non-best but still viable solutions, as
the completions generated from them can make them the best solution. For example:
```swift
struct Foo { let onFoo = 10 }
func foo(_ x: Int) -> Int { return 1 }
func foo<T>(_ x: T) -> Foo { return Foo() }
foo(3).<here> // the "best" solution is the one with the more-specialized foo(_: Int) overload
```
In the example above we shouldn't remove the solution for `foo(_: T)` even though
there is a single "best" solution (`foo(_: Int)`), as picking a completion
result generated from it (`onFoo`) would make the `foo(_: T)` overload the best
and only viable solution.
Completely skipping this filtering as we were previously doing is overkill
though and adversely affects performance. E.g. it makes sense to filter out
and stop exploring solutions with overload choices for foo that required fixes
for missing arguments if there is another solution with an overload choice that
didn't require any fixes.
This patch restores best solution filtering during code completion and instead
updates the `compareSolutions` function to compare solutions based purely on
their fixed score.
Resolves rdar://problem/73282163
Following on from updating regular member completion, this hooks up unresolved
member completion (i.e. .<complete here>) to the typeCheckForCodeCompletion API
to generate completions from all solutions the constraint solver produces (even
those requiring fixes), rather than relying on a single solution being applied
to the AST (if any). This lets us produce unresolved member completions even
when the contextual type is ambiguous or involves errors.
Whenever typeCheckExpression is called on an expression containing a code
completion expression and a CompletionCallback has been set, each solution
formed is passed to the callback so the type of the completion expression can
be extracted and used to look up the members to return.
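For example (a hedged sketch of the scenario, not taken from the patch):

```swift
// Completing after "." in `move(.` can now suggest `.north` and
// `.south` from the Direction solution, even though the Double
// overload makes the contextual type ambiguous.
enum Direction { case north, south }
func move(_ direction: Direction) {}
func move(_ speed: Double) {}
```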
Allow an 'async' function to overload a non-'async' one, e.g.,
```swift
func performOperation(_: String) throws -> String { ... }
func performOperation(_: String) async throws -> String { ... }
```
Extend the scoring system in the type checker to penalize cases where
code in an asynchronous context (e.g., an `async` function or closure)
references an asynchronous declaration or vice versa, so that
asynchronous code prefers the 'async' functions and synchronous code
prefers the non-'async' functions. This allows the above overloading
to be a legitimate approach to introducing asynchronous functionality
to existing (blocking) APIs and letting code migrate over.
This approach, suggested by Xiaodi Wu, provides better source
compatibility for existing Swift code, by breaking ties in favor of the
existing Swift semantics. Each time the backward-scan rule is needed
(and differs from the forward-scan result), we will produce a warning
+ Fix-It to prepare for Swift 6 where the backward rule can be
removed.
To better preserve source compatibility, teach the constraint
solver to try both the new forward scanning rule as well as the
backward scanning rule when matching a single, unlabeled trailing
closure. In the extreme case, where the unlabeled trailing closure
matches different parameters with the different rules, and yet both
produce a potential match, introduce a disjunction to explore both
possibilities.
Prefer solutions that involve forward scans to those that involve
backward scans, so we only use the backward scan as a fallback.
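A hedged example of that extreme case (declarations are illustrative):

```swift
// Both parameters can accept the unlabeled trailing closure: the
// forward scan matches `first`, while the backward scan would match
// `second`. Both are potential matches, so the solver forms a
// disjunction and prefers the forward-scan solution.
func run(first: (() -> Void)? = nil, second: (() -> Void)? = nil) {}

run { print("hello") }  // matches `first` under the forward scan
```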
All callers can trivially be refactored to use ModuleDecl::lookupConformance()
instead. Since this was the last flag in ConformanceCheckOptions, we can remove
that, too.
A request is intended to be a pure function of its inputs. That function could, in theory, fail. In practice, there were basically no requests taking advantage of this ability - the few that were using it to explicitly detect cycles can just return reasonable defaults instead of forwarding the error on up the stack.
This is because cycles are checked by *the Evaluator*, and are unwound by the Evaluator.
Therefore, restore the idea that the evaluate functions are themselves pure, but keep the idea that *evaluation* of those requests may fail. This model enables the best of both worlds: we not only keep the evaluator flexible enough to handle future use cases like cancellation and diagnostic invalidation, but also preserve request-based dependencies that use the values computed at the evaluation points. These aforementioned use cases would use the llvm::Expected interface and the regular evaluation-point interface, respectively.
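A minimal sketch of this model, using hypothetical Swift types rather than the compiler's actual API:

```swift
// The request's own evaluate function is pure and cannot fail;
// evaluation *through the evaluator* may still fail, e.g. when the
// evaluator detects a cycle.
enum EvalError: Error { case cycle }

protocol Request: Hashable {
    associatedtype Output
    func evaluate() -> Output  // a pure function of the request's inputs
}

final class Evaluator {
    private var active = Set<AnyHashable>()

    func evaluate<R: Request>(_ request: R) throws -> R.Output {
        guard active.insert(AnyHashable(request)).inserted else {
            throw EvalError.cycle  // cycles are detected and unwound here
        }
        defer { active.remove(AnyHashable(request)) }
        return request.evaluate()
    }
}
```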
Introduce `SK_Hole`, which is used to count the number of "holes" in
a given solution; it is used to distinguish solutions with fewer holes.
It also makes it possible to check whether a solution has holes but
no fixes, which indicates a problem: such a solution shouldn't be
applied to the AST.
If the constraint system is underconstrained, e.g. because there are
editor placeholders, it's possible to end up with multiple solutions
where each ambiguous declaration is going to have its own overload kind:
```swift
func foo(_: Int) -> [Int] { ... }
func foo(_: Double) -> (result: String, count: Int) { ... }
_ = foo(<#arg#>).count
```
In this case the solver would produce two solutions: one where `count`
is a property reference on `[Int]`, and another where it is an access
of the `count:` tuple element.
Resolves: rdar://problem/49712598
A single type of key path dynamic member lookup could refer to different
member overloads; we have to do a pair-wise comparison in such cases,
otherwise ranking would miss some viable information. E.g.
`_ = arr[0..<3]` could refer to the subscript through a writable or read-only
key path, and each of them could also pick an overload which returns `Slice<T>`
or `ArraySlice<T>` (assuming that `arr` is something like `Box<[Int]>`).
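A hedged sketch of that setup (the actual test case may differ):

```swift
// A key-path-based dynamic member lookup wrapper with writable and
// read-only subscript overloads; `arr[0..<3]` may resolve through
// either of them, and through Array's Slice- or ArraySlice-returning
// range subscripts, requiring pair-wise comparison during ranking.
@dynamicMemberLookup
struct Box<Value> {
    var value: Value

    subscript<T>(dynamicMember keyPath: KeyPath<Value, T>) -> T {
        value[keyPath: keyPath]
    }
    subscript<T>(dynamicMember keyPath: WritableKeyPath<Value, T>) -> T {
        get { value[keyPath: keyPath] }
        set { value[keyPath: keyPath] = newValue }
    }
}

let arr = Box(value: [1, 2, 3])
_ = arr[0..<3]
```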