Today ParenType is used:
1. As the type of ParenExpr
2. As the payload type of an unlabeled single
associated value enum case (and the type of
ParenPattern).
3. As the type for an `(X)` TypeRepr
For 1, this leads to some odd behavior, e.g. the
type of `(5.0 * 5).squareRoot()` is `(Double)`. For
2, we should instead be checking the arity of the
enum case constructor parameters and the presence of
ParenPattern, respectively. Eventually we ought to
consider replacing Paren/TuplePattern with a
PatternList node, similar to ArgumentList.
3 is one case where it could be argued that there's
some utility in preserving the sugar of the type
that the user wrote. However, it's really not clear
to me that this is particularly desirable, since a
bunch of diagnostic logic already strips
ParenTypes. In cases where we care about how the
type was written in source, we really ought to be
consulting the TypeRepr.
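For reference, a minimal Swift sketch of cases 1 and 2 (the enum is
illustrative):

```
// As described in 1, the type of this expression is reported as
// `(Double)` rather than `Double`.
let root = (5.0 * 5).squareRoot()

// As described in 2, the payload of a single unlabeled associated value
// is currently modeled with ParenType (and matched via ParenPattern).
enum Value {
  case single(Int)
}
```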
Previously, the constraint solver would first attempt member lookup that
excluded members from transitively imported modules. If there were no viable
candidates, it would perform a second lookup that included the previously
excluded members, treating any candidates as unviable. This meant that if the
member reference did resolve to one of the unviable candidates, the resulting
AST would be broken, which could cause unwanted knock-on diagnostics.
Now, members from transitively imported modules are always returned in the set
of viable candidates. However, scoring will always prioritize candidates from
directly imported modules over members from transitive imports. This solves the
ambiguities that `MemberImportVisibility` is designed to prevent. If the only
viable candidates are from transitively imported modules, though, then the
reference will be resolved successfully and diagnosed later in
`MiscDiagnostics.cpp`. The resulting AST will not contain any errors, which
ensures that necessary access levels can be computed correctly for the imports
suggested by `MemberImportVisibility` fix-its.
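For illustration, a hedged sketch of the kind of code this affects (the
module names and the `shifted()` extension are hypothetical):

```
// Hypothetical layout: Core declares `extension Int { public func
// shifted() -> Int }`, UI imports Core, and this file imports only UI,
// so Core is a transitive import here.
import UI

// The member is now returned as a viable (but lower-scored) candidate,
// resolved successfully, and diagnosed later in MiscDiagnostics.cpp with
// a fix-it suggesting a direct `import Core`.
let shifted = 1.shifted()
```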
Resolves rdar://126637855.
Although I don't plan to bring over new assertions wholesale
into the current qualification branch, it's entirely possible
that various minor changes in main will use the new assertions;
having this basic support in the release branch will simplify that.
(This is why I'm adding the includes as a separate pass from
rewriting the individual assertions.)
We have two places where `CompareDeclSpecializationRequest` is used:
- A performance optimization that compares two generic overloads;
- Solution ranking that checks all of the selected overloads against
another solution.
The former can allow missing conformances and shouldn't prevent
the solver from checking overloads that differ on `Sendable`
(because there is no information about what is passed as arguments),
but the latter, since it has a solution, should prefer `Sendable`
overloads over non-`Sendable` ones if possible
(i.e. `init<T: Sendable>(_: T)` is a subtype of `init<T>(_: T)`).
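A minimal Swift sketch of that preference (the `Box` type is illustrative):

```
struct Box {
  // Per the ranking rule above, this initializer should be preferred when
  // the argument is Sendable, since it is a subtype of the overload below.
  init<T: Sendable>(_ value: T) {}
  init<T>(_ value: T) {}
}

let box = Box(42) // Int is Sendable, so the constrained overload should win.
```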
If one of the choices is variadic generic, let's use `matchCallArguments`
to find argument/parameter mappings and form pack expansions for arguments
when necessary.
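A hedged sketch of the kind of overload pair this covers (declarations are
illustrative):

```
func choice<each T>(_ values: repeat each T) {}
func choice(_ value: Int) {}

// Comparing these two choices requires matchCallArguments to compute the
// argument-to-parameter mapping and form a pack expansion for the
// argument of the variadic generic candidate.
choice(1)
```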
Resolves: rdar://112029630
Ignore conversion score increases during code completion to make sure we don't filter out solutions that might end up with the best score depending on the choice made for the code completion token.
This is phase-1 of switching from llvm::Optional to std::optional in the
next rebranch. llvm::Optional was removed from upstream LLVM, so we need
to migrate off rather soon. On Darwin, std::optional and llvm::Optional
have the same layout, so we don't need to be as concerned about ABI
beyond the name mangling. `llvm::Optional` is only returned from one
function:
```
getStandardTypeSubst(StringRef TypeName,
bool allowConcurrencyManglings);
```
It's the return value, so it should not impact the mangling of the
function, and the layout is the same as `std::optional`, so it should be
mostly okay. This function doesn't appear to have users, and the ABI was
already broken 2 years ago for concurrency and no one seemed to notice,
so this should be "okay".
I'm doing the migration incrementally so that folks working on main can
cherry-pick back to the release/5.9 branch. Once 5.9 is done and locked
away, we can go through and finish the replacement. Since `None`
and `Optional` show up in contexts where they are not `llvm::None` and
`llvm::Optional`, I'm preparing the work now by going through and
removing the namespace unwrapping and making the `llvm` namespace
explicit. This should make it fairly mechanical to go through and
replace llvm::Optional with std::optional, and llvm::None with
std::nullopt. It's also a change that can be brought onto the
release/5.9 branch with minimal impact. This should be an NFC change.
`openType` didn't need a locator before because it was simply replacing generic
parameters with corresponding type variables, but now, with the opening of
pack expansion types, a locator is needed for pack expansion variables.
`getValue` -> `value`
`getValueOr` -> `value_or`
`hasValue` -> `has_value`
`map` -> `transform`
The old API will be deprecated in the rebranch.
To avoid merge conflicts, use the new API already in the main branch.
rdar://102362022
These will never appear in the source language, but can arise
after substitution when the original type is a tuple type with
a pack expansion type.
Two examples:
- original type: (Int, T...), substitution T := {}
- original type: (T...), substitution T := {Int}
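A minimal Swift sketch of how these two shapes arise (function names are
illustrative):

```
func prepend<each T>(_ values: repeat each T) -> (Int, repeat each T) {
  (0, repeat each values)
}
// T := {}: the substituted result type is the one-element tuple (Int).
let a = prepend()

func wrap<each T>(_ values: repeat each T) -> (repeat each T) {
  (repeat each values)
}
// T := {Int}: again a one-element tuple (Int) after substitution; neither
// shape can be written directly in source.
let b = wrap(1)
```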
We need to model these correctly to maintain invariants.
Callers that previously used to rely on TupleType::get()
returning a ParenType now explicitly check for the one-element
case instead.
To make this test work, fix an issue in `ConstraintSystem::salvage` where a
threshold breach during solving went unnoticed due to exiting on ambiguity
before reaching the `isTooComplex` check. Address this by moving the
`isTooComplex` check to before we start processing solutions, and stick another
one in `findBestSolution` for short-circuiting while we're here.
Code completion related changes introduced a bug which increased
`score2` regardless of which type was an archetype, which surfaced
as a source compatibility regression in ReactiveKit.
This hooks up call argument position completion to the typeCheckForCodeCompletion API to generate completions from all the solutions the constraint solver produces (even those requiring fixes), rather than relying on a single solution being applied to the AST (if any).
Co-authored-by: Nathan Hawes <nathan.john.hawes@gmail.com>
This cleans up 90 instances of this warning and reduces the build spew
when building on Linux. This helps identify actual issues when building,
which can otherwise get lost in the stream of warning messages. It also
helps restore the ability to build the compiler with gcc.
When ranking constructor parameter lists, we
compose them as tuples or parens, and check if
they are subtypes or unlabeled versions of each
other. Previously this was done with the parameter
flags intact, but recently I changed the logic to
explicitly strip parameter flags in preparation
for no longer storing the flags on these types.
This caused a slight behavior change, as it turns
out we have a special case in `TupleType::get`
that allows an unlabeled single parameter to be
composed as a tuple type if its variadic bit is
set. With the parameter flags now stripped, we
produce a paren type. This means that when
comparing the parameter lists, e.g. `(x: Int...)`
and `(Int...)`, instead of comparing two tuple
types we end up comparing a tuple with a paren and
failing.
To preserve the old behavior, implement a special
case for when we have an unlabeled and labeled
variadic comparison for a single parameter. In
this case, add the parameter types directly to the
type diff, and track which one had the label. The
ranking logic can then use this to prefer the
unlabeled variant. This is only needed in the
single parameter case, as other cases will compare
as tuples the same as before. In cases where
variadics aren't used, we may end up trying to
compare parens with tuples, but that's consistent
with what we previously did.
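A minimal Swift sketch of the pair of parameter lists being compared (the
declarations are illustrative):

```
struct S {
  init(x: Int...) {}
  init(_ x: Int...) {}
}

// Ranking compares the parameter lists `(x: Int...)` and `(Int...)`. With
// parameter flags stripped, the unlabeled list would be composed as a paren
// type rather than a tuple, so the special case records both parameter types
// in the type diff along with which one had the label, letting ranking keep
// preferring the unlabeled variant.
```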
rdar://84279742
The removed condition was incorrect: first, it would always be true,
since `losers` is populated with `false` based on the number of viable
solutions; second, it would result in a mix of solutions with and
without fixes.
Instead, in ambiguity cases, let's remove all of the solutions
that are worse than others based on score to avoid doing any
extra work in the future steps or during diagnostics.
Previously we were introducing a type variable
to mark a constructor's parameter list as
`TVO_PrefersSubtypeBinding`. Unfortunately this
relies on representing the parameter list as a
tuple, which will no longer be properly supported
once param flags are removed from tuple types.
Move the logic into CSRanking such that we pick up
and compare the parameter lists when comparing
overload bindings. For now, this still relies on
comparing the parameter lists as tuples, as there are
some subtle tuple subtyping rules that could
potentially affect source compatibility here, but
at least we can explicitly strip the parameter
flags and localise the hack to CSRanking rather
than exposing it as a constraint.
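A hedged Swift sketch of the kind of overload pair whose parameter lists get
compared here (the types are illustrative):

```
class Base {}
final class Derived: Base {}

struct Wrapper {
  init(_ value: Base) {}
  init(_ value: Derived) {}
}

// When ranking the two overload bindings for this call, CSRanking compares
// the initializers' parameter lists (with parameter flags stripped) and
// prefers the more specific `Derived` overload.
let w = Wrapper(Derived())
```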
for unapplied references when the choice is a function declaration.
This will allow the solver to prune those overload choices when it
has already found a solution with a property (all else equal in the
score). This is already done as an ambiguity tie-breaker in solution
ranking, but adding this bit to the score will prune a lot of search
space within the solver.
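A minimal Swift sketch of the situation (the `value` members are
illustrative):

```
struct S {
  var value: Int { 0 }
  func value() -> Int { 0 }
}

// `S().value` is an unapplied reference; with the new score component the
// solver can prune the function-declaration choice early in favor of the
// property, instead of waiting for solution ranking to break the tie.
let ref: Int = S().value
```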
Start treating the null {Can}GenericSignature as a regular signature
with no requirements and no parameters. This not only makes for a much
safer abstraction, but also allows us to simplify a lot of the clients of
GenericSignature that previously had to check for null before
using the abstraction.
Not filtering solutions causes unacceptable slowdowns in some cases.
For now, filter solutions as normal typechecking does to restore
performance.
rdar://76714968