Today ParenType is used:
1. As the type of ParenExpr
2. As the payload type of an unlabeled single
associated value enum case (and the type of
ParenPattern).
3. As the type for an `(X)` TypeRepr
For 1, this leads to some odd behavior, e.g. the
type of `(5.0 * 5).squareRoot()` is `(Double)`. For
2, we should be checking the arity of the enum case
constructor parameters and the presence of
ParenPattern respectively. Eventually we ought to
consider replacing Paren/TuplePattern with a
PatternList node, similar to ArgumentList.
3 is one case where it could be argued that there's
some utility in preserving the sugar of the type
that the user wrote. However it's really not clear
to me that this is particularly desirable since a
bunch of diagnostic logic is already stripping
ParenTypes. In cases where we care about how the
type was written in source, we really ought to be
consulting the TypeRepr.
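For reference, a minimal sketch of the three usages (the type and enum names here are just illustrative):
```swift
// 1. ParenExpr: a parenthesized expression; its type used to be
//    reported as (Double) rather than Double.
let root = (5.0 * 5).squareRoot()

// 2. An unlabeled single-associated-value enum case; the payload is
//    matched via ParenPattern.
enum Wrapper { case value(Int) }

// 3. `(X)` written as a type; currently modeled as a ParenType.
let count: (Int) = 1
```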
My change to preserve type sugar in type transforms introduced a
performance regression here, because openType() is called for
every disjunction choice.
My hope is to optimize the TypeAliasType representation and remove
this at some point, but for now, let's just restore the old
desugaring behavior in this case.
To compute the expected type of a call pattern (which is the return type of the function if that call pattern is being used), we called `getTypeForCompletion` for the entire call, in the same way that we do for the code completion token. However, this pattern does not generally work. For the code completion token it worked because the code completion expression doesn’t have an inherent type and it inherits the type solely from its context. Calls, however, have an inherent return type and that type gets assigned as the `typeForCompletion`.
Implement targeted checks for the two most common cases where an expected type exists: if the call that we suggest call patterns for is itself an argument to another function, or if it is used in a place that has a contextual type in the constraint system (e.g. a variable binding or a `return` statement). This means that we no longer return `Convertible` for call patterns in some more complex scenarios. But given that this information was computed based on incorrect results, and that in those cases all call patterns had a `Convertible` type relation, I think that’s acceptable. Fixing this would require recording more information in the constraint system, which is out of scope for now.
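As a rough illustration (function names hypothetical, `#^...^#` marks the completion position), the two covered cases look like this:
```swift
func makeTitle(text: String) -> String { text }
func show(_ title: String) {}

// Case 1: the call we suggest call patterns for is itself an argument
// to another function, so the expected type comes from `show`.
show(makeTitle(#^ARG^#))

// Case 2: the call has a contextual type in the constraint system,
// here from a variable binding.
let title: String = makeTitle(#^BINDING^#)
```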
When completing in cases like `bar(arg: foo(|, option: 1)`, we don’t know if `option` belongs to the call to `foo` or `bar`. Be defensive and also suggest the signature.
This removes the distinction between argument completions and postfix expr paren completions, which was meaningless since solver-based completion.
It then determines whether to suggest the entire function call pattern (with all argument labels) or only a single argument based on whether there are any existing arguments in the call.
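A hypothetical sketch of the two behaviors:
```swift
func move(x: Int, y: Int) {}

// No arguments written yet: suggest the whole call pattern (x: Int, y: Int).
move(#^NO_ARGS^#)

// An argument already exists: only suggest the next label, y:.
move(x: 1, #^HAS_ARGS^#)
```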
For this to work properly, we need to improve parser recovery a little bit so that it parses arguments after the code completion token properly.
This should make call pattern heuristics obsolete.
rdar://84809503
Ignore conversion score increases during code completion to make sure we don't filter out solutions that might end up receiving the best score depending on which choice is made for the code completion token.
We previously asserted that for a call the function type had the same number of parameters as the declaration. But that’s not true for parameter packs anymore because the parameter pack will be exploded in the function type to account for passing multiple arguments to the pack.
To fix this, use `ConcreteDeclRef` instead of a `ValueDecl`, which has a substitution map and is able to account for the exploded parameter packs when accessed using `getParameterAt`.
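A minimal sketch of the situation, assuming Swift 5.9 parameter pack syntax (names hypothetical):
```swift
// The declaration has a single (pack) parameter...
func makeTuple<each T>(_ value: repeat each T) {}

// ...but for this call the opened function type has three parameters,
// one per pack element, so the counts no longer match the declaration.
makeTuple(1, "two", 3.0)
```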
rdar://100066716
In ExprContextAnalyzer, when looking up members, some implicit
members weren't populated. Ensure all implicit members are available
by force-synthesizing them.
rdar://89773376
I think that preferring identical over convertible makes sense in e.g. C++ where we have implicit user-defined type conversions, but since we don’t have them in Swift, the distinction doesn’t make much sense: if we have a `func foo(x: Int?)`, we don’t really want to prioritize variables of type `Int?` over `Int`. Similarly, if we have a `func foo(x: View)`, we don’t want to prioritize a variable of type `View` over e.g. `Text`.
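As an illustrative sketch of the intended ranking:
```swift
func foo(x: Int?) {}

let optional: Int? = 1
let nonOptional: Int = 1

// `optional` (identical type) and `nonOptional` (convertible to Int?)
// should now be ranked the same when completing here.
foo(x: #^HERE^#)
```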
rdar://91349364
During normal type-checking we ignore functions that contain an error. During code completion, we want to consider them and replace all error types by placeholders so we can match up the known types.
We already do this for member types (see `getTypeOfMemberReference`). We should also do it for top-level functions.
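A hypothetical example of the kind of declaration this affects:
```swift
// `Unknown` doesn't resolve, so the function's type contains an error
// type. By replacing it with a placeholder, completion can still match
// the remaining known parameter types.
func broken(first: Unknown, second: Bool) {}

broken(#^HERE^#)
```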
Fixes rdar://81425383 [SR-14992]
This simplifies the logic in `ide::typeCheckContextAt` and fixes an issue that prevent code completion from working inside variables that are initialized by calling a closure.
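A sketch of the pattern that used to fail:
```swift
// Completion inside a variable that is initialized by calling a closure.
let value: Int = {
    return #^INSIDE_CLOSURE^#
}()
```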
Fixes rdar://90455154 [SR-16012]
Fixes rdar://85600167 [SR-15495]
This hooks up call argument position completion to the typeCheckForCodeCompletion API to generate completions from all the solutions the constraint solver produces (even those requiring fixes), rather than relying on a single solution being applied to the AST (if any).
Co-authored-by: Nathan Hawes <nathan.john.hawes@gmail.com>
`CodeCompletionString::getName()` was used only as the sorting key in
`CodeCompletionContext::sortCompletionResults()`, which is effectively
deprecated. There's no reason to check it in `swift-ide-test`. Instead,
check `printCodeCompletionResultFilterName()`, which is actually used for
filtering.
```swift
struct Generic<T> {
  init(destination: T, isActive: Bool) { }
}
invalid {
  Generic(destination: true, #^COMPLETE^#)
}
```
In this case, `invalid { ... }` cannot be type checked, so
`ExprContextAnalysis` tries to type check `Generic` alone, which resolves
to a bound `Generic` type with an unresolved type. Previously, we used to
give up the analysis when seeing an unresolved type, which caused
incorrect callee analysis.
rdar://79092374
Introduce 'CodeCompletionFlair', a set of more descriptive flags for
prioritizing completion items, to describe fine-grained priorities. This
aims to replace 'SemanticContextKind::ExpressionSpecific', which was a
"catch all" prioritization flag.
When completing for an argument expression, make
sure to include keywords that can be used in an
expression position, such as `try`, `await` and
`super`.
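For example (names hypothetical):
```swift
func handle(_ value: Int) {}

// `try` and `await` (and `super`, inside a class method) should now
// appear among the completions in this argument position.
handle(#^ARG^#)
```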
rdar://77869845
When matching an argument to a parameter list, if there is no matching
parameter, the argument list is not applicable to the parameters. In that
case, code completion should not suggest the function's argument labels.
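A hypothetical case where the argument list doesn't apply:
```swift
func resize(width: Int) {}

// `height:` doesn't match any parameter, so the argument list is not
// applicable and `width:` should not be suggested here.
resize(height: 10, #^HERE^#)
```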
rdar://77867723
According to Pavel, we want to eliminate allowing unresolved variables as much as possible. Removing this flag doesn’t break any test cases (at least not in a meaningful way) and fixes a crasher, so it seems reasonable to remove it.
Fixes rdar://76686564
For a function and call like
```swift
func test(_: Foo..., yArg: Baz) {}
test(.bar, #^COMPLETE^#)
```
the parser matches the code completion token to `yArg` with a missing label, because this way all parameters are provided. However, because of this we don’t suggest any variables that could belong to the previous vararg list.
To fix this, if we encounter such a situation (argument without label after vararg), manually adjust the code completion token’s position in params to belong to the vararg list.
Fixes rdar://76977325 [SR-14515]
The actual fix is to perform qualified lookup on a TypeExpr in `collectPossibleCalleesForApply`.
This changes the behaviour of some test cases in `complete_multiple_trailingclosure.swift`, which now provide argument labels. To make the choices suggested less verbose, I refined the parameter matching to only match trailing closures to parameters of function types.
In `INIT_FALLBACK_1` we technically shouldn't be suggesting `arg3` based on the matched argument types (the closure returns `Int` and the constructor with `arg3` requires the closure to return `String`), but AFAICT we aren't doing type-checking at this stage, so there's no way to rule it out.
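A sketch of the refined matching (names hypothetical):
```swift
// The trailing closure is only matched against parameters of function
// type (`body` here), never against `config`.
func section(config: Int = 0, body: () -> String) {}

section { "content" }
```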
Currently, if the code completion token is after a label that refers to a vararg, it is part of that VarargExpansionExpr, so we don’t suggest the subsequent parameter’s label.
Add special handling to detect this situation and suggest the parameter label.
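A sketch of the situation (names hypothetical):
```swift
func draw(points: Int..., color: String) {}

// The completion token is parsed as part of the `points` vararg
// expansion, but we should still suggest `color:` here.
draw(points: 1, 2, #^HERE^#)
```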
Fixes rdar://76355192
If we have a function that takes a trailing closure as follows
```
func sort(callback: (_ left: Int, _ right: Int) -> Bool) {}
```
completing a call to `sort` and expanding the trailing closure results in
```
sort { <#Int#>, <#Int#> in
<#code#>
}
```
We should be doing a better job here and defaulting the trailing closure's parameter names to the internal names specified in the function signature. I.e. the final result should be
```
sort { left, right in
<#code#>
}
```
This commit does exactly that.
Firstly, it keeps track of the closure's internal names (as specified in the declaration of `sort`) in the closure's type through a new `InternalLabel` property in `AnyFunctionType::Param`. Once the type containing the parameter gets canonicalized, the internal label is dropped.
Secondly, it adds a new option to `ASTPrinter` to always try and print parameter labels. With this option set to true, it will always print external parameter labels and, if they are present, print the internal parameter label as `_ <internalLabel>`.
Finally, we can use this new printing mode to print the trailing closure’s type as
```
<#T##callback: (Int, Int) -> Bool##(_ left: Int, _ right: Int) -> Bool#>
```
This is already correctly expanded by code-expand to the desired result. I also added a test case for that behaviour.
Previously, we always assumed that the position in the arguments would match the position in the parameters, which isn’t true in the case of defaulted arguments. Try matching parameters based on argument labels to give better completion results.
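For example (names hypothetical):
```swift
func configure(title: String = "", width: Int = 0, height: Int = 0) {}

// The completion token is the second argument positionally, which would
// point at `width`, but `width` is already provided; matching by label
// lets us suggest `height:` instead.
configure(width: 10, #^HERE^#)
```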
Fixes rdar://60346573
The `init` of `self.init` needs to be looked up on the metatype of `self`, not on the type of `self` itself. We were already doing similar special-casing for `super.init`, which I extended to also cover `self.init`.
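A sketch of the kind of delegating initializer this fixes:
```swift
struct Point {
    var x = 0, y = 0

    init(x: Int, y: Int) { self.x = x; self.y = y }

    init(both value: Int) {
        // `init` must be looked up on Point.Type (the metatype of self),
        // just like we already did for `super.init`.
        self.init(#^HERE^#)
    }
}
```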
Resolves rdar://36521732
Following on from updating regular member completion, this hooks up unresolved
member completion (i.e. .<complete here>) to the typeCheckForCodeCompletion API
to generate completions from all solutions the constraint solver produces (even
those requiring fixes), rather than relying on a single solution being applied
to the AST (if any). This lets us produce unresolved member completions even
when the contextual type is ambiguous or involves errors.
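For example (a sketch with hypothetical names):
```swift
enum Theme { case light, dark }

func apply(_ theme: Theme) {}

// Unresolved member completion: `.light` and `.dark` are suggested from
// the contextual parameter type, even if the surrounding code has errors.
apply(.#^COMPLETE^#)
```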
Whenever typeCheckExpression is called on an expression containing a code
completion expression and a CompletionCallback has been set, each solution
formed is passed to the callback so the type of the completion expression can
be extracted and used to look up the members to return.