This test is technically scaling linearly in the metrics being
checked, but it was still taking 5+ minutes to type check. Reduce the
upper bound so it finishes in a reasonable amount of time.
When favoring particular constraints in a disjunction, don't remove
the old disjunction and create a new one; that's just churn in the
constraint system. Instead, favor the constraints that need it within
the existing disjunction.
This technically flips the SR-139 test case from "too complex" to
being fast enough, but it's still fairly slow.
Looks like the obsoleted initializer has been removed, and there are
no solutions available for 'init(truncatingBitPattern:)', which leads
to a performance regression with designated types enabled.
Resolves: rdar://problem/48061151
`selectApplyDisjunction` only makes sense in conjunction with the
designated types feature, because it's going to prioritize
disjunctions whose arguments are known to conform to literal
protocols. Prioritization like that is harmful without the designated
types feature: it doesn't always lead to better constraint system
splits, and it could prioritize bigger disjunctions, which hurts our
chances of short-circuiting the solver.
Resolves: rdar://problem/47492691
Currently, if a member has been found through a same-type constraint,
it is only handled properly when it comes from a class type, because
that's the only case in which the base type gets replaced with the
"concrete" type from the equivalence class. But nested types can also
come from structs, enums, and sometimes protocols (e.g. via a
typealias), all of which `resolveDependentMemberType` has to handle.
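A hedged illustration of the pattern (the protocol and types here are
invented for the example):

    protocol P {
      associatedtype A
    }
    struct Outer {
      struct Inner {}
    }
    extension P where A == Outer {
      // `A.Inner` must be resolved through the same-type constraint to
      // the concrete nested type, even though Outer is a struct rather
      // than a class.
      func makeInner() -> A.Inner { Outer.Inner() }
    }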
Resolves: rdar://problem/47334176
The problem: assume 'A' and 'B' are enums with cases .a1, .a2, etc.
and .b1, .b2, etc. If we try to typecheck this switch:
switch (a, b) {
case (.a1, .b1),
(.a1, .b2):
break
...
The compiler is going to try to perform the following series of
operations:
> var s = (A, B)
> s -= (.a1, .b1)
((A - .a1, B) | (A, B - .b1))
> s -= (.a1, .b2)
(((A - .a1, B) | (A - .a1, B - .b2)) |
((A - .a1, B - .b1) | (A, B - .b1 - .b2)))
...
As you can see, the disjunction representing the uncovered space is
growing exponentially. Eventually some of these will start
disappearing (for instance, if B only has .b1 and .b2, that last term
can go away), and if the switch is exhaustive they can /all/ start
disappearing. But several of them are also redundant: the second and
third cases are fully covered by the first.
I exaggerated a little: the compiler is already smart enough to know
that the second case is redundant with the first, because it already
knows that (.a1, .b2) isn't a subset of (A - .a1, B). However, the
third and fourth cases are generated separately from the first two,
and so nothing ever checks that the third case is also redundant.
This patch changes the logic for subtracting from a disjunction so
that
1. any resulting disjunctions are flattened, and
2. any later terms that are subspaces of earlier terms are dropped
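Point 2 might look roughly like the following Swift sketch (the space
representation and the subspace predicate are stand-ins for the
exhaustiveness checker's internals):

    // Keep only the terms that are not fully covered by an earlier
    // term, comparing each space against every space kept so far.
    func minimize<S>(_ spaces: [S], isSubspace: (S, S) -> Bool) -> [S] {
      var kept: [S] = []
      for space in spaces
        where !kept.contains(where: { isSubspace(space, $0) }) {
        kept.append(space)
      }
      return kept
    }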
This is a quadratic algorithm in the worst case (compare every space
against every earlier space), but because it saves us from exponential
growth (or at least brings down the exponent) it's worth it. For the
test case now committed in the repository, we went from 40 seconds
(using -switch-checking-invocation-threshold=20000000 to avoid cutting
off early) to 0.2 seconds.
I'll admit I was only timing this one input, and it's possible that
other complex switches will not see the same benefit, or may even see
a slowdown. But I do think this kind of switch is common in both
hand-written and auto-generated code, and therefore this is likely to
be a benefit in many places.
rdar://problem/47365349
After collecting the possible conformances and types for each
argument, `ArgumentInfoCollector` attempts literal protocol
minimization to reduce the set of possible types an argument could
assume. Let's exclude literal protocols without default types, such as
`ExpressibleByNilLiteral`, from consideration by that algorithm.
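For context, most literal protocols carry a default type that can
anchor inference, but `ExpressibleByNilLiteral` does not:

    let i = 42     // ExpressibleByIntegerLiteral defaults to Int
    let d = 4.2    // ExpressibleByFloatLiteral defaults to Double
    let n = nil    // error: 'nil' requires a contextual type; there is
                   // no default type to fall back on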
Resolves: rdar://problem/47266563
Introducing the SIMD operators back into the standard library
regresses one test (SR-139 goes exponential without the designated
types for operators feature). Split that part of the test out.
Moving them out to SIMDOperators didn't help, but the type checker hack
might. Move them back into the Swift standard library where they belong,
but leave SIMDOperators there to smooth over any short-term
incompatibilities.
For operators with multiple designated types, we attempt to decide the
best order for exploring the types. We were doing this by checking
for matching types. Extend this to also consider conforming types.
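A hedged reconstruction of the experimental, gated syntax (the
operator and the designated types here are invented for illustration;
this compiled only behind the designated types flags and has since
been removed from the language):

    // Designated types follow the precedence group; the solver tries
    // overloads from those types first, in the order listed.
    infix operator ~+~ : AdditionPrecedence, BinaryInteger, FloatingPoint

With this change, `x ~+~ y` where `x: Double` can use Double's
conformance to FloatingPoint, rather than only an exact type match, to
explore FloatingPoint's overloads first.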
Fixes: rdar://problem/46687985
The new SIMD proposal introduced a number of new operators, the presence of
which causes more "expression too complex" failures. Route around the
problem by de-prioritizing those operators, visiting them only if no
other operator could be chosen. This should limit the type checker
performance cost of said operators to only those expressions that need
them OR that already failed to type-check.
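For reference, the kind of operators in question, shown with today's
stdlib spellings:

    let a = SIMD4<Float>(1, 2, 3, 4)
    let b = SIMD4<Float>(4, 3, 2, 1)
    let mask = a .< b   // lanewise comparison: (true, true, false, false)
    let sum  = a + b    // lanewise addition: SIMD4<Float>(5, 5, 5, 5)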
Fixes rdar://problem/46541800.
In a previous commit, I changed the verifier to ban any SILValue from
producing ValueOwnershipKind::Any, in preparation for this.
This change arises out of discussions between John, Andy, and me
around ValueOwnershipKind::Trivial. The specific realization was that
this ownership kind was an unnecessary conflation of a type system
idea (triviality) with an ownership idea (@any, an ownership kind that
is compatible with any other ownership kind at value merge points).
This caused the ownership model to contort to handle the non-payloaded
or trivial cases of non-trivial enums. That is unnecessary if we just
eliminate the Trivial case and instead have the verifier separately
verify that trivial => @any (notice that we do not verify that
@any => trivial).
NOTE: This is technically intended to be an NFC change, since I am
just replacing Trivial with Any. That is why, if you look at the
tests, I did not actually need to update anything except removing some
@trivial ownership annotations, since @any ownership is represented
without writing @any in parsed SIL.
rdar://46294760
We try to work more or less bottom-up in the expression when we
attempt apply disjunctions first. This is very helpful for getting
good results with designated types enabled in the solver. Rather than
forcing tests to specify both `-swift-version 5` (which also enables
this bottom-up behavior) and designated types, allow them to enable
just the latter to get this behavior.
We'll eventually have to revisit this when we decide the specific
conditions under which we'll enable the use of designated types.
Also add overloads for these operators to an extension of Array.
This allows us to typecheck array concatenation quickly with
designated type support enabled and the remaining type checker hacks
disabled.
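A hedged sketch of the shape of such an overload (the actual stdlib
declaration may differ):

    extension Array {
      // A concrete overload lets the solver commit to Array's `+`
      // directly instead of exploring the generic
      // RangeReplaceableCollection candidates.
      static func + (lhs: Array, rhs: Array) -> Array {
        var result = lhs
        result.append(contentsOf: rhs)
        return result
      }
    }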
This allows us to skip attempting actual conversions.
This speeds up one of our slow test cases, and perturbs the output of
another test. In the latter case, we stop emitting conversions as part
of the non-semantic piece of the array_expr. The fact that we're not
putting conversions in on that path is something I've seen before in
other instances. I'll open a bug if I cannot find one for it, although
I believe it's entirely cosmetic in this case since we don't rely on
the conversion being there.
Also move one test from fast to slow, since it wasn't representative
of the original issue (which was an expression that didn't typecheck
successfully).
In cases where we have multiple designated types, sort the types that
were designated for this operator based on any information we can
gather about actual argument types at usage sites.
We can consider extending this further in a future commit to ignore
designated types when we have concrete type information that we are
confident of.
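A hedged illustration (the designated-type list for `/` here is
assumed, not quoted from the stdlib):

    // Suppose `/` designates both BinaryInteger and FloatingPoint.
    let scale: Double = 1.5
    let result = scale / 2
    // Knowing `scale` is a Double, the usage site suggests sorting
    // FloatingPoint's overloads ahead of BinaryInteger's.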
Add the following to all the expression type checker performance tests
that do not regress as a result:
- `-swift-version 5`
- `-solver-disable-shrink`
- `-disable-constraint-solver-performance-hacks`
- `-solver-enable-operator-designated-types`
This is a follow-up to 6de03f709f, where
these should have been included.
Add the following to all the expression type checker performance tests
that do not regress as a result:
- `-swift-version 5`
- `-solver-disable-shrink`
- `-disable-constraint-solver-performance-hacks`
- `-solver-enable-operator-designated-types`
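For reference, a hedged example of how a test might combine these
flags in its lit RUN line (the exact test layout is assumed):

    // RUN: %target-typecheck-verify-swift -swift-version 5 \
    // RUN:   -solver-disable-shrink \
    // RUN:   -disable-constraint-solver-performance-hacks \
    // RUN:   -solver-enable-operator-designated-types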
At some point this test case was updated such that it no longer
compiled successfully. The originally reported test case did compile
successfully, just slowly.
This counts the number of leaf scopes we reach while solving the
constraint system, and is a much better measure of the growth of
unnecessary work than the total number of scopes opened.
There were two tests where I had a difficult time getting scale-test
to fit the curve even after adjusting some of the parameters, so I've
left those to use the old stat for now.
Have the constraint solver consider multiple designated types for an
operator. We currently consider the overloads from each in turn,
stopping as soon as we have a solution. As a result, we can still end
up with exponential type checking in some cases if an operator has
more than a single designated type. This still allows us to reduce the
base of that exponent, though, which makes it possible to increase the
number of expressions we can type check successfully in practice.
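A toy Swift model of the search order (all names here are
hypothetical, not the solver's actual API):

    // Try the overload set contributed by each designated type in
    // turn, stopping at the first one that yields a solution.
    func solveUsingDesignatedTypes<Solution>(
      overloadSets: [[String]],          // one candidate set per designated type
      attempt: ([String]) -> Solution?   // try to solve with just that set
    ) -> Solution? {
      for candidates in overloadSets {
        if let solution = attempt(candidates) {
          return solution                // short-circuit on success
        }
      }
      return nil                         // caller falls back to all overloads
    }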
Currently (with or without failures) the constraint system is not
returned to its original state after solving, because constraints from
the initial "active" list are not returned to the system. To fix that,
let's allocate an "initial" scope which captures the state right
before solving begins, and add an "active" list to the solver state to
capture information about the "active" constraints at the time of its
creation.
This is a follow-up to https://github.com/apple/swift/pull/19873
Rather than limiting this to protocols, allow any nominal type.
Rename -enable-operator-designated-protocols to
-enable-operator-designated-types to reflect the change.
For operators with default implementations in an extension, we don't
want to typecheck an expression both with the overloads from the
protocol type and with the ones from the extension.
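A hedged example of the pattern (names invented):

    protocol Concatenable {
      static func + (lhs: Self, rhs: Self) -> Self
    }
    extension Concatenable where Self: RangeReplaceableCollection {
      // Default implementation: for `xs + ys` we don't want the solver
      // to attempt both the protocol requirement and this extension
      // overload as separate candidates.
      static func + (lhs: Self, rhs: Self) -> Self {
        var result = lhs
        result.append(contentsOf: rhs)
        return result
      }
    }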
These are cases that I know are faster when this is enabled. The test
updates all additionally disable the existing performance hacks as
well as the shrink phase of the solver.