This allows us to skip attempting actual conversions.
This speeds up one of our slow test cases, and perturbs the output of
another test. In the latter case, we stop emitting conversions as part
of the non-semantic piece of the array_expr. The fact that we don't
insert conversions on that path is something I've seen before in
other instances. I'll open a bug if I cannot find an existing one, although
I believe it's entirely cosmetic in this case since we don't rely on
the conversion being there.
Also move one test from fast to slow, since it wasn't representative
of the original issue (which involved an expression that didn't
typecheck successfully).
In cases where we have multiple designated types, sort the types that
were designated for this operator based on any information we can
gather about actual argument types at usage sites.
We can consider extending this further in a future commit to ignore
designated types when we have concrete type information that we are
confident of.
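A hedged sketch of the idea (the operator and types are illustrative, not actual designated-type declarations):
```
let scale: Double = 2.0
// The left-hand argument is known to be a Double at this usage site, so
// if `*` has several designated types, sorting Double's overloads first
// avoids exploring the Int-based candidates needlessly.
let area = scale * 3
```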
Add the following to all the expression type checker performance tests
that do not regress as a result:
- `-swift-version 5`
- `-solver-disable-shrink`
- `-disable-constraint-solver-performance-hacks`
- `-solver-enable-operator-designated-types`
This is a follow-up to 6de03f709f, where
these should have been included.
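For reference, a typical RUN line with these flags looks like the following (the exact invocation varies per test file):
```
// RUN: %target-typecheck-verify-swift -swift-version 5 -solver-disable-shrink -disable-constraint-solver-performance-hacks -solver-enable-operator-designated-types
```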
Add the following to all the expression type checker performance tests
that do not regress as a result:
- `-swift-version 5`
- `-solver-disable-shrink`
- `-disable-constraint-solver-performance-hacks`
- `-solver-enable-operator-designated-types`
At some point this test case was updated such that it no longer
compiled successfully. The originally reported test case did compile
successfully, just slowly.
This counts the number of leaf scopes we reach while solving the
constraint system, and is a much better measure of the growth of
unnecessary work than the total number of scopes opened.
There were two tests where I had a difficult time getting scale-test
to fit the curve even after adjusting some of the parameters, so I've
left those to use the old stat for now.
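For illustration, a scale test selecting the new counter could look like this (assuming the counter is exposed to scale-test under the name NumLeafScopes):
```
// RUN: %scale-test --begin 1 --end 10 --step 1 --select NumLeafScopes %s
```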
Have the constraint solver consider multiple designated types for an
operator. We currently consider the overloads from each in turn,
stopping as soon as we have a solution. As a result, we can still end
up with exponential type checking in some cases if an operator has
more than a single designated type. This still allows us to reduce the
base of that exponent, though, which makes it possible to increase the
number of expressions we can type check successfully in practice.
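As a rough illustration of where the exponent comes from (the expression is illustrative, not from a real test):
```
// With k viable overloads per `+` and N applications in the chain, the
// solver may explore on the order of k^N combinations.
let total = 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9
```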
Currently (with or without failures) the constraint system is not
returned to its original state after solving, because constraints from
the initial "active" list are not returned to the system. To fix that,
let's allocate an "initial" scope that captures the state right before
solving begins, and add an "active" list to the solver state to capture
information about "active" constraints at the time of its creation.
This is follow-up to https://github.com/apple/swift/pull/19873
Rather than limiting this to protocols, allow any nominal type.
Rename -enable-operator-designated-protocols to
-enable-operator-designated-types to reflect the change.
For operators with default implementations in an extension, we don't
want to typecheck a use both with the overloads from the protocol type
and with the ones from the extension.
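A minimal sketch of that situation (the protocol and operator here are illustrative):
```
infix operator <+> : AdditionPrecedence

protocol Summable {
  static func <+> (lhs: Self, rhs: Self) -> Self
}

extension Summable {
  // Default implementation: without this change, the solver could consider
  // both this witness and the protocol requirement as separate overloads.
  static func <+> (lhs: Self, rhs: Self) -> Self { return lhs }
}
```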
These are cases that I know are faster when this is enabled. The test
updates all additionally disable the existing performance hacks as
well as the shrink phase of the solver.
This will minimize the chance of breaking code. Currently SwiftLint
has one "too complex" expression with this change. Further changes to
the solver may improve that situation, and potentially allow us to
move this out of -swift-version 5 if we're willing to take the risk
of breaking some code as a result.
Attempt to visit disjunctions that are associated with applies where
we have at least some useful information about the types of all of the
arguments before visiting other disjunctions.
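A hedged sketch of the kind of apply this prioritizes (names are illustrative):
```
func blend(_ a: Double, _ b: Double) -> Double {
  // The operands of `*` and `+` below come from parameters with concrete
  // types, so those disjunctions carry useful information about all of
  // their arguments and can be attempted before less-constrained ones.
  return a * 0.5 + b * 0.5
}
```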
Two tests here got faster, and one got slightly slower. One of the
faster tests actually moves from test/ to the slow/ directory in
validation-test because, despite going from 16s to less than 1s, it was
still borderline for what we consider the slow threshold, so I made
the test more complex. The one that got a little slower is
rdar22022980, which I also made more complex so that it is clearly
"slow" by the way we are testing it.
slower:
rdar22022980.swift
faster:
rdar33688063.swift
expression_too_complex_4.swift
* Move the logic ensuring that an r-value type variable gets an r-value type into `PotentialBindings`;
* Strip unnecessary parens directly when creating a `PotentialBinding`;
* Check whether a binding is viable before including it in the set, instead of filtering it out later;
* Assert that bindings don't include types containing errors in the set.
* Obsolete ModifierSlice typealiases in 5.0
* Obsolete *Indexable in 5.0
* Obsolete IteratorOverOne/EmptyIterator in 5.0
* Obsolete lazy typealiases in 5.0
* Drop .characters from tests
* Obsolete old literal protocols in 5.0
* Obsolete Range conversion helpers in 5.0
* Obsolete IndexDistance helpers in 5.0
* Obsolete Unsafe compat helpers in 5.0
* Obsolete flatMap compatibility helper in 5.0
* Obsolete withMutableCharacters in 5.0
* Obsolete customPlaygroundQuickLook in 5.0
* Deprecate Zip2Sequence streams in 5.0
* Replace `*` with `swift` on lotsa availability attributes
* Back off obsoleting playground conformances for now
If the generic parameter associated with a missing conformance comes
from a different context, diagnose the problem as "referencing" a
specific declaration from the affected type.
Instead of simply pointing out which type had conformance failures,
let's use the affected declaration instead, which makes the diagnostics
much richer, e.g.
```
'List<[S], S.Id>' requires that 'S.Id' conform to 'Hashable'
```
versus
```
initializer 'init(_:)' requires that 'E' conform to 'Hashable' [with 'E' = 'S.Id']
```
Since the latter message uses information about the declaration, it can
also point to it in the source. That makes it much easier to understand
when the problem is related to overloaded (function) declarations.
One expression in the new hashing implementation is going exponential,
accounting for a huge amount of type-checking time. Add (admittedly ugly)
`as UInt64` annotations to greatly reduce the time to type-check this
expression.
Type-checking time for the standard library goes from 24s->14s with
this change. Added a type-checker "slow" performance test and captured
the problem in rdar://problem/42672946.
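A hedged sketch of the annotation pattern (not the actual stdlib expression):
```
let seed: UInt64 = 0x9E37_79B9_7F4A_7C15
// Without the `as UInt64` annotations, each literal below could resolve to
// any of several integer types, multiplying the solver's search space.
let mixed = (seed ^ (0xBF58_476D as UInt64)) &* (0x1CE4_E5B9 as UInt64)
```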
Since it's possible to find the same constraint through two different
but equivalent type variables, let's use a set to store constraints
instead of a vector to avoid processing the same constraint multiple
times.
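A minimal sketch of the idea in isolation (not the actual solver code, which is C++):
```
final class Constraint {}

let shared = Constraint()
// The same constraint can be reachable via two equivalent type variables:
let found = [shared, shared]

var seen = Set<ObjectIdentifier>()
for constraint in found where seen.insert(ObjectIdentifier(constraint)).inserted {
  // Each constraint is processed exactly once, even if discovered twice.
  print(constraint)
}
```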
This is how we originally controlled whether or not we printed out ownership
annotations when we printed SIL. Since then, I have changed (a few months ago I
believe) the ownership model eliminator to know how to eliminate these
annotations from the SIL itself. So this hack can be removed.
As an additional benefit, this will let me rename -enable-sil-ownership to
-enable-sil-ownership-verifier. I hope this will eliminate confusion around
this option in the short term while I am preparing to work on semantic SIL again.
rdar://42509812
SE-0213 improved cases like this, but we still have a problem with operator
overloads, so we just need to make this test a bit more complicated to
reproduce the issue again.
Resolves: rdar://problem/42304000
...unless the argument is an `Any?`, in which case we prefer `f(_: Any?)`.
This change also results in our selecting f<T>(_: T) over f(_:
Any). Coercing with 'as Any' makes it possible to explicitly select
the Any overload. Previously there was no way to select the generic
overload.
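A hedged example of the resulting behavior (the function names are illustrative):
```
func f(_ x: Any) { print("Any") }
func f<T>(_ x: T) { print("generic") }

f(42)        // with this change, prefers f<T>(_:) and prints "generic"
f(42 as Any) // `as Any` explicitly selects f(_: Any) and prints "Any"

func g(_ x: Any?) { print("Any?") }
func g<T>(_ x: T) { print("generic") }

let value: Any? = 42
g(value)     // the argument is Any?, so g(_: Any?) is preferred: prints "Any?"
```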
The implementation is as follows: in `preCheckExpression`, try to
detect a `T(literal)` call in the AST and replace it with an
implicit `literal as T`; then, while forming the type-checked AST
after constraint solving, restore the source information and drop the
unnecessary coercion expression.
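For example (the type here is illustrative):
```
// Before this change, `UInt64(42)` was solved as an initializer call with
// many overloads; now preCheckExpression rewrites it to an implicit
// `42 as UInt64`, and the coercion is dropped again after solving.
let x = UInt64(42)
```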
Resolves: rdar://problem/17088188
Resolves: rdar://problem/39120081
Resolves: rdar://problem/23672697
Resolves: rdar://problem/40379985
From the perspective of the compiler implementation, they're elements. But users will think of these as cases—and many diagnostics already refer to these as enum cases.
None of these failed naturally based on the limits of the "too
complex" heuristic, and several were relatively close to the time-out
threshold used in the test RUN line.
Increasing the complexity will ensure that we don't see spurious
failures on builders or on local machines.