Move associated type inference into its own class, to make this
code easier to understand/maintain/improve. Minor diagnostics changes
because we're properly passing uninferred associated type declarations
to the "group" checker.
There is nothing specifically wrong with uttering a same-type
constraint in a where clause where both sides are concrete
types. Downgrade this to a warning; we'll check that the concrete
types match (of course), and such a well-formed constraint will simply
be canonicalized away.
This aids the migration of IndexDistance from an associated type to
Int.
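For illustration, a sketch of the migration scenario with a hypothetical protocol: once an associated type such as IndexDistance becomes a typealias for Int, an existing constraint can end up with concrete types on both sides, and it is now merely warned about.

protocol Measured {
    typealias Distance = Int     // was: associatedtype Distance
}

extension Measured where Distance == Int {
    // Both sides of the same-type constraint are now the concrete type Int;
    // the constraint is checked, warned about, and canonicalized away.
    func report() {}
}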
When forming a specialized protocol conformance, we substitute into the
conditional requirements. Allow this substitution to look into the module
to find conformances, which might be required to accurately represent
the requirements. Otherwise, we can silently end up dropping them.
We should rethink this notion of eagerly substituting conditional
requirements, and consider whether clients should always handle this
substitution. For now, fixes rdar://problem/35837054.
Allow conformance lookup in module context for conditional requirements.
* Refactor Indices and Slice to use conditional conformance
* Replace ReversedRandomAccessCollection with a conditional extension
* Refactor some types into struct+extensions (see the sketch after this list)
* Revise Slice documentation
* Fix test cases for adoption of conditional conformances.
* [RangeReplaceableCollection] Eliminate unnecessary slicing subscript operator.
* Add -enable-experimental-conditional-conformances to test.
* Gruesome workaround for crasher in MutableSlice tests
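A simplified sketch of the struct-plus-conditional-extensions pattern mentioned above, using a hypothetical wrapper type rather than the actual Slice/Indices code:

struct Wrapper<Base: Collection> {
    var base: Base
}

extension Wrapper: Collection {
    var startIndex: Base.Index { return base.startIndex }
    var endIndex: Base.Index { return base.endIndex }
    subscript(position: Base.Index) -> Base.Element { return base[position] }
    func index(after i: Base.Index) -> Base.Index { return base.index(after: i) }
}

// Extra capabilities come from conditional conformances instead of separate
// wrapper types (e.g. a dedicated reversed-random-access variant).
extension Wrapper: BidirectionalCollection where Base: BidirectionalCollection {
    func index(before i: Base.Index) -> Base.Index { return base.index(before: i) }
}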
Conditional conformances aren't quite ready yet for Swift 4.1, so
introduce the flag `-enable-experimental-conditional-conformances` to
enable conditional conformances, and emit an error when one declares a
conditional conformance without specifying the flag.
Add this flag when building the standard library (which will vend
conditional conformances) and to all of the tests that need it.
Fixes rdar://problem/35728337.
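For illustration, the kind of declaration the flag gates (hypothetical type):

struct Box<T> {
    var value: T
}

// Without -enable-experimental-conditional-conformances this conditional
// conformance is diagnosed as an error; with the flag it is accepted.
extension Box: Equatable where T: Equatable {
    static func == (lhs: Box, rhs: Box) -> Bool {
        return lhs.value == rhs.value
    }
}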
Rather than wantonly dropping the conditional requirements when checking
for type erasure, add them in the same way we do for (e.g.) conformance
checking for generics. Fixes rdar://problem/35480860.
Substitute the type arguments of the conforming type into the conditional
requirements of the specialized conformance, so they reflect the specific
requirements of the specialized conformance.
Fixes rdar://problem/34944286.
Rather than storing contextual types in the type witnesses and associated
conformances of NormalProtocolConformance, store only interface types.
@huonw did most of the work here, and @DougGregor patched things up to
complete the change.
Add a verification pass to ensure that all of the generic signatures in
a module are both minimal and canonical. The approach taken is quite
direct: for every (canonical) generic signature in the module, try
removing a single requirement and forming a new generic signature from
the result. If that new generic signature can still prove the removed
requirement, then the original signature was not minimal.
Also canonicalize each resulting signature, to ensure that it meets the
requirements for a canonical signature.
Add a test to ensure that all of the generic signatures in the Swift
module are minimal and canonical, since they are ABI.
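A minimal sketch of the minimality check, using a stand-in model rather than the compiler's own types (everything named here is hypothetical):

struct Requirement {
    var description: String
}

// A signature is minimal if no single requirement is implied by the others.
// `derives` stands in for "can this set of requirements prove the given one?".
func isMinimal(_ requirements: [Requirement],
               derives: ([Requirement], Requirement) -> Bool) -> Bool {
    for index in requirements.indices {
        var remaining = requirements
        let removed = remaining.remove(at: index)
        if derives(remaining, removed) {
            return false   // the removed requirement was redundant
        }
    }
    return true
}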
Previously, we were inferring requirements from types within the definitions
of protocols, e.g., given something like:
protocol P {
    associatedtype A: Collection
    associatedtype B where A.Element == Set<B>
}
we would infer that B: Hashable. The code for doing this was actually
incorrect due to its mis-use of requirement sources, causing a few
crashers. Plus, it's not a good idea in general because it hides the
actual requirements on B. Stop doing this.
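With the inference gone, the requirement presumably has to be stated explicitly for the example above to remain well-formed:

protocol P {
    associatedtype A: Collection
    associatedtype B: Hashable where A.Element == Set<B>
}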
Also stop trying to infer requirements from conditional
requirements---those have already been canonicalized and minimized, so
there's nothing to infer from.
The first step in enumerating the minimal, canonical set of requirements for
a generic signature is identifying which "subject" types will show up in
the left-hand side of the requirements. Previously, this would require us
to realize all of the potential archetypes, and perform a number of
archetype-anchor computations and comparisons.
Replace that with a simpler walk over the equivalence classes,
identifying the anchor types within each derived same-type component
of those equivalence classes, which form the subject types. This is
more straightforward, doesn't rely on potential archetypes, simplifies
the code, and eliminates a silly O(n^2)-for-small-n that's been
bothering me for a while.
Associated type redeclarations are occasionally used to push around
associated type witness inference. Suppress the warning about redeclarations
that add no requirements (i.e., have neither an inheritance nor a
where clause).
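A sketch of the now-unwarned pattern, with hypothetical protocol names:

protocol Base {
    associatedtype Element
}

protocol Derived: Base {
    // Redeclares the inherited associated type with no inheritance clause and
    // no where clause, purely to steer witness inference; no warning is emitted.
    associatedtype Element
}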
Equivalence classes stored their same-type constraints in a MapVector
keyed on the source potential archetype, which allowed traversal along the
paths of the graph. However, this capability isn't required, because we
end up walking all of the edges each time. Flatten the list of same-type
constraints to a single vector, eliminating constraint duplication between
the source and the target out-edge lists.
This cuts down on the number of same-type constraints we record by 50%,
but performance gains are limited (6% of stdlib type-checking time)
because most of the time these same-type constraints were skipped
anyway.
When we have an equivalence class that contains two unrelated
associated types with the same name, infer a same-type constraint
between those two associated types. This is a more principled way to
introduce these constraints than we had before, fixing a
recently-introduced regression.
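An illustrative sketch with hypothetical protocols whose associated types are unrelated but share a name:

protocol First {
    associatedtype A
}

protocol Second {
    associatedtype A
}

func useBoth<T: First & Second>(_ value: T) {
    // First.A and Second.A end up in one equivalence class, so a same-type
    // constraint between them is inferred and T.A names a single type.
    let _: T.A? = nil
}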
Use the "override" information in associated type declarations to provide
AST-level access to the associated type "anchor", i.e., the canonical
associated type that will be used in generic signatures, mangling,
etc.
In the Generic Signature Builder, only build potential archetypes for
associated types that are anchors, which reduces the number of
potential archetypes we build when type-checking the standard library
by 14% and type-checking time for the standard library by 16%.
There's a minor regression here in some generic signatures that were
accidentally getting (correct) same-type constraints. There were
existing bugs in this area already (Huon found some of them), which
will be addressed as a follow-up.
Fixes SR-5726, where we were failing to type-check due to missed
associated type constraints.
If the class itself is generic and we're looking up a nested type
from a conformance, we would pass an interface type to
mapTypeOutOfContext() and crash.
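One shape of code that exercises this path (hypothetical names, not necessarily the exact reproduction): a generic class whose nested type witnesses an associated type, so the nested type is looked up through the conformance.

protocol HasNested {
    associatedtype Nested
    func make() -> Nested
}

class Container<T>: HasNested {
    struct Nested {}                 // nested type of a generic class
    func make() -> Nested { return Nested() }
}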
Introduce (recursive) constraints that make the *Collection constraint
of SubSequence match that of its enclosing *Collection, e.g.,
MutableCollection.SubSequence conforms to MutableCollection.
Fixes rdar://problem/20715031 and more of SR-3453.
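An illustrative use (hypothetical function) of what the recursive constraints permit in generic code: a slice of a MutableCollection can itself be mutated in place.

func zeroPrefix<C: MutableCollection>(_ collection: inout C, upTo end: C.Index)
    where C.Element == Int {
    var prefix = collection[collection.startIndex..<end]   // C.SubSequence
    var i = prefix.startIndex
    while i != prefix.endIndex {
        // Writing through the slice relies on SubSequence conforming to
        // MutableCollection, which the recursive constraint guarantees.
        prefix[i] = 0
        i = prefix.index(after: i)
    }
    collection[collection.startIndex..<end] = prefix
}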
The full state of the GSB isn’t all that useful for testing, creates a ton of noise and gets in the way of some cleanups we’d like to make in the interface.
Stop dumping it as part of `-debug-generic-signatures`.
When type-checking a function or subscript that itself does not have generic
parameters (but is within a generic context), we were creating a generic
signature builder which will always produce the same generic signature as
the enclosing context. Stop creating that generic signature builder.
Instead, teach the CompleteGenericTypeResolver to use the generic signature
+ the canonical generic signature builder for that signature to resolve
types, which also eliminates some extraneous re-type-checking.
Improves type-checking performance of the standard library by 36%.
When we have two nested types of a given potential archetype that have
the same name, introduce a (quietly) inferred constraint. This is
a future-proofing step for canonicalization, for a possible future
where we no longer require all of the nested types of a given name
to be equivalent, e.g., because we have a proper disambiguation
mechanism.
When a potential archetype refers to a concrete (non-associated) type
declaration, we bind to that concrete type. Add a new requirement
source kind for this case that is always derived, separating it from
the nested-type-name-match source.
One important aspect of this is that typealiases in protocols that
"override" an associated type an inherited protocol will generate the
same requirement signature as the equivalent protocol that uses a
same-type constraint, making the suppression of the "hey, this is
equivalent to a same-type constraint now!" warning an ABI-preserving
change.
With this, remove a now-unnecessary hack for nested-name-match
requirement sources.
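A sketch of the equivalence being preserved, with hypothetical protocols: a typealias that pins down an inherited associated type yields the same requirement signature as an explicit same-type constraint.

protocol HasValue {
    associatedtype Value
}

// Spelled with a typealias that "overrides" the inherited associated type...
protocol IntValueA: HasValue {
    typealias Value = Int
}

// ...or with an explicit same-type constraint: the requirement signatures match.
protocol IntValueB: HasValue where Value == Int {
}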
Currently `visitAssignExpr` always attempts to use the type derived from the
destination as a contextual type when type-checking the assignment source,
which doesn't always lead to better results.
Resolves: SR-5081