To check whether the code completion node was type checked in a multi-statement closure, we assumed that we had a code completion *expression* and didn't consider that we could also be completing in a key path component. When applicable, check whether the completion key path component has a type to determine if the completion node was type checked as part of a multi-statement closure.
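As a rough sketch (a hypothetical reduction, not the radar's actual test case), the pattern looks like this, with completion happening after `\Person.` inside a closure that has more than one statement:

```swift
// Hypothetical reduction: completing a key path component inside a
// multi-statement closure. The completion point would sit after '\Person.'.
struct Person { let name: String; let age: Int }

func names(of people: [Person]) -> [String] {
    people.map { person in
        let prefix = "name: "                    // extra statement makes this
        _ = prefix                               // a multi-statement closure
        return person[keyPath: \Person.name]     // completion follows '\Person.'
    }
}
```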
Resolves rdar://80271598 [SR-14891]
AutoClosureExprs created by the constraint system used to be constructed
with the decl context of the constraint system itself. This meant that
autoclosures in expressions nested in closures would initially be
parented onto any enclosing functions rather than the deepest closure
context. When we ran capture analysis and lookup from inside of the body
of these nascent values, we would fail to find declarations brought into
scope by those parent closures. This is especially relevant when
pre-typechecked code is involved since captures for those declarations
will be forced before their bodies have been recontextualized. See
issue #34230 for why we need to force things so early.
The attached test case demonstrates both bugs: the former is a bogus lookup
through the parent context that would incorrectly reject this otherwise
well-formed code; the latter is a crash in SILGen when the capture
computation fails to note $0.
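For illustration, a minimal sketch of the affected shape, assuming the `??` operator (whose right-hand side is wrapped in an autoclosure) nested inside a closure:

```swift
let values: [Int?] = [1, nil, 3]
// The right-hand side of '??' is wrapped in an autoclosure. That autoclosure
// must be parented to the enclosing closure so lookup and capture analysis
// can see $0.
let defaulted = values.map { $0 ?? ($0 == nil ? 0 : -1) }
```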
Use the decl context of the solution application target, which is always
going to be the deepest user-written closure expression available to us,
and therefore the deepest scope that can introduce capturable variables.
rdar://79248469
Allow the type variable binding producer to record types as they
are produced. Otherwise it would be possible to re-discover
already-explored types during `computeNext()`, which leads to
duplicate bindings, e.g. inferring the fallback `Void` for a closure
result type when `Void` was already inferred as a direct or transitive
binding.
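A minimal sketch of the idea in Swift, with strings standing in for the solver's actual type representation (all names here are invented; the real producer lives in the constraint system):

```swift
// Hypothetical sketch: record every produced binding so computeNext()
// cannot re-discover a type that was already explored.
struct BindingProducer {
    private var explored: Set<String> = []
    private var pending: [String]
    private let fallbacks: [String]

    init(direct: [String], fallbacks: [String]) {
        self.pending = direct
        self.fallbacks = fallbacks
    }

    mutating func computeNext() -> String? {
        // Record each direct/transitive binding as it is produced.
        while let next = pending.popLast() {
            if explored.insert(next).inserted { return next }
        }
        // A fallback such as "Void" is skipped if it was already produced.
        for fallback in fallbacks where !explored.contains(fallback) {
            explored.insert(fallback)
            return fallback
        }
        return nil
    }
}

var producer = BindingProducer(direct: ["Void"], fallbacks: ["Void"])
while let type = producer.computeNext() { print(type) }  // prints "Void" once
```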
Not filtering solutions causes unacceptable slowdowns in some cases.
For now, filter solutions the same way normal type checking does, to
restore performance.
rdar://76714968
Having a purpose attached to the contextual type element makes it much
easier to diagnose contextual mismatches without involving constraint
system state.
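A rough sketch of the shape this suggests; every name here is invented for illustration and does not match the compiler's actual types:

```swift
// Hypothetical shape: the contextual type carries its purpose, so a
// diagnostic can explain *why* that type was expected without consulting
// constraint-system state.
enum ContextualPurpose { case returnValue, initialization, argument }

struct ContextualTypeInfo {
    let typeName: String            // stand-in for the actual Type
    let purpose: ContextualPurpose
}

func diagnoseMismatch(expected: ContextualTypeInfo, actual: String) -> String {
    "cannot convert '\(actual)' to expected \(expected.purpose) type '\(expected.typeName)'"
}
```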
This saves us from needing to re-match arguments to parameters in CSApply and is
also useful for a forthcoming change migrating code completion in argument
position to use the solver-based `typeCheckForCodeCompletion` API.
rdar://76581093
Conformance constraints could be transferred through conversions,
but that would also require checking implicit conversions
such as optional and pointer promotions for conformance if the
type itself doesn't conform. For that, let's add a special
`TransitivelyConformsTo` constraint.
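For illustration, a case (with a made-up protocol) where the conformance only holds after an implicit conversion such as optional promotion:

```swift
// Illustrative only: 'Int' itself does not conform to P, but the type it
// implicitly promotes to ('Int?') does, so checking conformance on the
// original type alone would come up empty.
protocol P {}
extension Optional: P {}

let direct: Int = 42
let promoted: Int? = direct     // implicit optional promotion
let conforming: P = promoted    // the conformance holds on the converted type
```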
Not all of the unary operators have `CGFloat` overloads,
so in order to preserve previous behavior (and the overall
best solution) with the implicit Double<->CGFloat conversion,
we need to allow attempting generic operators in such cases. For example:
```swift
let _: CGFloat = -.pi / 2
```
`-` doesn't have a `CGFloat` overload (which might be an oversight),
so in order to preserve type-checking behavior the solver can't be
allowed to pick `-(Double) -> Double` based on the overload of `/`;
the best possible solution uses `/` as `(CGFloat, CGFloat) -> CGFloat`
and the `-` from the `FloatingPoint` protocol.
This makes it possible to pick `CGFloat` function/operator overloads even
in the presence of literals. Otherwise the non-default literal score gets in
the way and the solver ends up producing many more solutions with the implicit
Double<->CGFloat conversion than it can disambiguate, so it's better to
just preserve the old behavior and pick concrete `CGFloat` overloads when
appropriate.
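A sketch, with hypothetical overloads, of the behavior being preserved:

```swift
import CoreGraphics

// Hypothetical overload pair: with a literal argument the concrete CGFloat
// overload should still win, rather than spawning extra solutions through
// the implicit Double<->CGFloat conversion.
func scaled(_ value: CGFloat) -> CGFloat { value * 2 }
func scaled(_ value: Double) -> Double { value * 2 }

let result: CGFloat = scaled(0.5)   // picks the CGFloat overload directly
```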
- Prefer the CGFloat -> Double conversion over the other way around to
avoid ambiguities;
- Every new conversion impacts the score by a factor of the number of
previously applied conversions, making it possible to select
solutions that require the fewest such conversions;
- Prefer concrete overloads with the Double <-> CGFloat conversion
over generic ones (see the sketch after this list).
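A minimal sketch of the preferred direction, assuming a hypothetical `takesDouble` function and the Swift 5.5+ implicit conversion:

```swift
import CoreGraphics

func takesDouble(_ x: Double) -> Double { x }

let width: CGFloat = 100
// CGFloat -> Double is the preferred direction, so this resolves without
// ambiguity via a single implicit conversion.
let converted = takesDouble(width)
```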
The existing overloading rules strongly prefer async functions within
async contexts, and synchronous functions in synchronous contexts.
However, when there are other differences in the
signature, particularly parameters of function type that differ in
async vs. synchronous, the overloading rule would force the use of the
async function even in cases where the synchronous function
would be better. An example:
```swift
func f(_: (Int) -> Int) { }
func f(_: (Int) async -> Int) async { }

func g(_ x: Int) -> Int { -x }

func h() async {
  f(g) // currently selects async f, want to select synchronous f
}
```
Effect this semantic change by splitting the "sync/async mismatch"
score in the constraint system into an "async in sync mismatch" score
that is mostly disqualifying (because the call will always fail), and a
less important score for "sync used in an async context", which also
covers conversion from a synchronous function to an asynchronous
one. This way, only synchronous functions are still considered within
a synchronous context, but we get more natural overloading behavior
within an asynchronous context. The end result is intended to be
equivalent to what one would get with reasync:
```swift
func f(_: (Int) async -> Int) async { ... }
```
Addresses rdar://74289867.
Currently, bindings are inferred on every `bindTypeVariable` call,
but that's wasteful because not every bind turns out to be correct. To
avoid unnecessary inference traffic, let's wait until re-activated
constraints are simplified and notify binding inference about the new
fixed type only if all of them simplify successfully.
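A minimal sketch of the deferral, with invented stand-ins for the constraint-system machinery:

```swift
// Hypothetical sketch: defer the "new fixed type" notification until all
// re-activated constraints have simplified successfully.
struct BindingObserver {
    var reactivatedConstraints: [() -> Bool]   // stand-ins: true == simplified
    var onNewFixedType: (String) -> Void       // binding-inference callback

    func bindTypeVariable(to fixedType: String) {
        // Simplify every re-activated constraint first...
        let allSimplified = reactivatedConstraints.allSatisfy { $0() }
        // ...and notify binding inference only when none of them failed,
        // instead of inferring bindings eagerly on every bind.
        if allSimplified { onNewFixedType(fixedType) }
    }
}
```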