To preserve correct behavior under substitution, the parent type
of a protocol type alias must be the 'Self' generic parameter and
not the protocol's existential type.
Flush it, along with the early validation hack, now that we can compute the underlying interface type on demand and have taught type resolution to honor the structural type of a typealias.
This changes the way requirement signatures are spelled as a side effect.
We generated a mix of "inferred" and "nested type name match"
constraints for the case where we had two nested types with the same
name and inferred that they are equal. Make them consistent by always
using nested type name match constraints. This fixes a bug where we
would get different canonical generic signatures in different source
files because we inferred the same-type constraint with different
requirement sources.
Fixes rdar://problem/48049725.
When extending a type via a generic typealias, where the type parameters of
the underlying nominal type line up precisely with those of the
generic typealias and its specialization of the underlying nominal
type (a so-called "pass-through" typealias in the new code), maintain
type sugar in the extension declaration.
This new type sugar enables inference of type requirements from the
generic typealias, which is both useful by itself (it lets the type
requirements on generic typealiases be meaningful for extensions like
they are elsewhere), and also addresses a source-compatibility
regression, since an extension of `CountableRange` will now infer the
requirement `Bound: Comparable`.
Fixes SR-6907 / rdar://problem/29066394.
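A hedged sketch of the pass-through pattern using hypothetical names (Buffer and ComparableBuffer are illustrative, not real stdlib types), assuming the sugar-preserving inference described above:

    struct Buffer<Element> {
        var storage: [Element]
    }

    // A "pass-through" typealias: its generic parameters line up one-to-one
    // with Buffer's, but it adds a Comparable requirement.
    typealias ComparableBuffer<Element: Comparable> = Buffer<Element>

    // Because the extension keeps the typealias sugar, Element: Comparable
    // is inferred here and sorted() is available on the storage.
    extension ComparableBuffer {
        func sortedStorage() -> [Element] {
            return storage.sorted()
        }
    }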
Generic typealiases can add requirements that aren't used by their
underlying type. For example, CountableRange in the standard library:
    public typealias CountableRange<Bound: Comparable> = Range<Bound>
Perform requirement inference based on uses of generic typealiases,
such that a generic function like this:
    func f<T>(_: CountableRange<T>) { }
will infer T: Comparable from the use of CountableRange.
Note that this does not yet work for extensions.
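A hedged sketch of the use-site inference with a hypothetical typealias (Box and HashableBox are illustrative, not stdlib types):

    struct Box<Contents> {
        var contents: Contents
    }

    // The typealias adds a requirement its underlying type lacks.
    typealias HashableBox<Contents: Hashable> = Box<Contents>

    // T: Hashable is inferred from the use of HashableBox<T>, so no explicit
    // constraint is needed to build the Set.
    func collect<T>(_ boxes: [HashableBox<T>]) -> Set<T> {
        var result = Set<T>()
        for box in boxes {
            result.insert(box.contents)
        }
        return result
    }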
* Make Range conditionally a Collection
* Make ClosedRange conditionally a Collection
* De-gyb Range/ClosedRange, refactoring some methods.
* Remove use of Countable{Closed}Range from stdlib
* Remove Countable use from Foundation
* Fix test errors and warnings resulting from Range/CountableRange collapse
* Fix prespecialize test for new mangling
* Update CoreAudio use of CountableRange
* Update SwiftSyntax use of CountableRange
* Restore ClosedRange.Index: Hashable conformance
* Move fixed typechecker slowness test for array-of-ranges from slow to fast, yay
* Apply Doug's patch to loosen test to just check for error
A type Foo<...>.Bar may only exist conditionally (i.e., subject to constraints
on the generic parameters of Foo), in which case those conditional requirements
should be implied when Foo<...>.Bar is mentioned.
Fixes SR-6850.
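As a concrete illustration (a sketch, not code from the change itself): Set's Element must be Hashable, so a signature that mentions only Set<T>.Index should imply T: Hashable.

    // T: Hashable is implied by the mention of Set<T>.Index, even though the
    // signature never spells out the constraint.
    func describe<T>(_ index: Set<T>.Index) -> String {
        return String(describing: index)
    }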
* Refactor Indices and Slice to use conditional conformance
* Replace ReversedRandomAccessCollection with a conditional extension
* Refactor some types into struct+extensions
* Revise Slice documentation
* Fix test cases for adoption of conditional conformances.
* [RangeReplaceableCollection] Eliminate unnecessary slicing subscript operator.
* Add -enable-experimental-conditional-conformances to test.
* Gruesome workaround for crasher in MutableSlice tests
Conditional conformances aren't quite ready yet for Swift 4.1, so
introduce the flag `-enable-experimental-conditional-conformances` to
enable conditional conformances, and an error when one declares a
conditional conformance without specifying the flag.
Add this flag when building the standard library (which will vend
conditional conformances) and to all of the tests that need it.
Fixes rdar://problem/35728337.
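A sketch of the kind of declaration gated by the flag (Pair is a hypothetical type); without the flag, declaring this conditional conformance produces the new error:

    struct Pair<A, B> {
        var first: A
        var second: B
    }

    // Pair is Equatable only when both of its generic arguments are.
    extension Pair: Equatable where A: Equatable, B: Equatable {
        static func == (lhs: Pair, rhs: Pair) -> Bool {
            return lhs.first == rhs.first && lhs.second == rhs.second
        }
    }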
Previously, we were inferring requirements from types within the definitions
of protocols, e.g., given something like:
    protocol P {
        associatedtype A: Collection
        associatedtype B where A.Element == Set<B>
    }
we would infer that B: Hashable. The code for doing this was actually
incorrect due to its mis-use of requirement sources, causing a few
crashers. Plus, it's not a good idea in general because it hides the
actual requirements on B. Stop doing this.
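With the inference removed, the requirement has to be spelled out; a sketch of the protocol above with the formerly hidden requirement written explicitly:

    protocol P {
        associatedtype A: Collection
        associatedtype B: Hashable where A.Element == Set<B>
    }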
Also stop trying to infer requirements from conditional
requirements---those have already been canonicalized and minimized, so
there's nothing to infer from.
The first step in enumerating the minimal, canonical set of requirements for
a generic signature is identifying which "subject" types will show up in
the left-hand side of the requirements. Previously, this would require us
to realize all of the potential archetypes, and perform a number of
archetype-anchor computations and comparisons.
Replace that with a simpler walk over the equivalence classes,
identifying the anchor types within each derived same-type component
of those equivalence classes, which form the subject types. This is
more straightforward, doesn't rely on potential archetypes, simplifies
the code, and eliminates a silly O(n^2)-for-small-n that's been
bothering me for a while.
Associated type redeclarations occasionally occur to push around
associated type witness inference. Suppress the warning about redeclarations
that add no requirements (i.e., have neither an inheritance nor a
where clause).
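A sketch of the now-silent pattern with hypothetical protocols: the redeclaration adds no requirements and exists only to steer associated type witness inference.

    protocol Base {
        associatedtype Element
    }

    protocol Derived: Base {
        // No inheritance clause and no where clause, so this redeclaration
        // no longer warns.
        associatedtype Element
    }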
When we have an equivalence class that contains two unrelated
associated types with the same name, infer a same-type constraint
between those two associated types. This is a more principled way to
introduce these constraints than we had before, fixing a
recently-introduced regression.
Use the "override" information in associated type declarations to provide
AST-level access to the associated type "anchor", i.e., the canonical
associated type that will be used in generic signatures, mangling,
etc.
In the Generic Signature Builder, only build potential archetypes for
associated types that are anchors, which reduces the number of
potential archetypes we build when type-checking the standard library
by 14% and type-checking time for the standard library by 16%.
There's a minor regression here in some generic signatures that were
accidentally getting (correct) same-type constraints. There were
existing bugs in this area already (Huon found some of them), which
will be addressed as a follow-up.
Fixes SR-5726, where we were failing to type-check due to missed
associated type constraints.
Introduce (recursive) constraints that make the *Collection constraint
of SubSequence match that of its enclosing *Collection, e.g.,
MutableCollection.SubSequence conforms to MutableCollection.
Fixes rdar://problem/20715031 and more of SR-3453.
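A simplified sketch of the recursive constraint, with a hypothetical protocol name so it does not collide with the standard library:

    // SubSequence recursively conforms to the same protocol.
    protocol MyMutableCollection: Collection
        where SubSequence: MyMutableCollection {
        subscript(position: Index) -> Element { get set }
    }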
The full state of the GSB isn’t all that useful for testing, creates a ton of noise and gets in the way of some cleanups we’d like to make in the interface.
Stop dumping it as part of `-debug-generic-signatures`.
When type-checking a function or subscript that itself does not have generic
parameters (but is within a generic context), we were creating a generic
signature builder which will always produce the same generic signature as
the enclosing context. Stop creating that generic signature builder.
Instead, teach the CompleteGenericTypeResolver to use the generic signature
+ the canonical generic signature builder for that signature to resolve
types, which also eliminates some extraneous re-type-checking.
Improves type-checking performance of the standard library by 36%.
When we have two nested types of a given potential archetype that have
the same name, introduce a (quietly) inferred constraint. This is
a future-proofing step for canonicalization, for a possible future
where we no longer require all of the nested types of a given name
to be equivalent, e.g., because we have a proper disambiguation
mechanism.
When a potential archetype refers to a concrete (non-associated) type
declaration, we bind to that concrete type. Add a new requirement
source kind for this case that is always derived, separating it from
the nested-type-name-match source.
One important aspect of this is that typealiases in protocols that
"override" an associated type an inherited protocol will generate the
same requirement signature as the equivalent protocol that uses a
same-type constraint, making the suppression of the "hey, this is
equivalent to a same-type constraint now!" warning an ABI-preserving
change.
With this, remove a now-unnecessary hack for nested-name-match
requirement sources.
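A sketch of the equivalence with hypothetical protocols: both spellings below now produce the same requirement signature, so suppressing the warning for the first one preserves ABI.

    protocol HasValue {
        associatedtype Value
    }

    // A typealias that "overrides" the inherited associated type...
    protocol HasIntValueA: HasValue {
        typealias Value = Int
    }

    // ...yields the same requirement signature as the explicit constraint.
    protocol HasIntValueB: HasValue where Value == Int { }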
In some circumstances, we could end up growing increasingly-nested
potential archetypes due to a poor choice of representatives and
anchors. Address this in two places:
* Always prefer to use the potential archetype with a lower nesting
depth (== number of nested types) to one with a greater nesting
depth, so we don't accumulate more nested types onto the
already-longer potential archetypes, and
* Prefer archetype anchors with a lower nesting depth *except* that we
always prefer archetype anchors comprised of a sequence of
associated types (i.e., no concrete type declarations), which is
important for canonicalization.
Fixes SR-4757 / rdar://problem/31912838, as well as a regression
involving infinitely-recursive potential archetypes caused by the
previous commit.
Infer same-type requirements among same-named associated
types/typealiases within inherited protocols. This is more staging; it
doesn't really have teeth until we stop wiring together these types as
part of lookup.
Introduce a warning about redeclaring the associated types from an
inherited protocol in the protocol being checked:
* If the new declaration is an associated type, note that the
declaration could be replaced by requirements in the protocol's
where clause.
* If the new declaration is a typealias, note that it could be
replaced by a same-type constraint in the protocol's where clause.
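A sketch of both diagnosed cases, using hypothetical protocols:

    protocol Container {
        associatedtype Item
    }

    protocol HashableContainer: Container {
        // Warned: could instead be `where Item: Hashable` on the protocol.
        associatedtype Item: Hashable
    }

    protocol IntContainer: Container {
        // Warned: could instead be `where Item == Int` on the protocol.
        typealias Item = Int
    }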
This stops after 5 recurrences of the same associated type. It is a
gross hack and a terrible idea, here as a placeholder to prevent us
from running off the rails in ill-formed code. This will go away when
we get further along the path with recursive protocol constraints.
Apply the same logic used for self-derived conformance constraints,
where we drop constraints derived from concrete conformances, to the
remaining kinds of constraints covered by isSelfDerivedSource().
When an otherwise abstract conformance constraint is derived from a
concrete conformance, retain the abstract conformance by removing the
requirement source that involves the concrete conformance. This
eliminates our reliance on the concrete conformance, which is not
retained as part of the generic signature.
Fixes rdar://problem/31163470 and rdar://problem/31520386.
When a requirement mentions a concrete type, that type might mention
other types (e.g., Set<T>) that infer requirements (here, T:
Hashable). Perform requirement inference for such types.
Part of rdar://problem/31520386.
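A hedged sketch: the same-type requirement mentions Set<T>, and Set's own Element: Hashable requirement is inferred for T.

    // T: Hashable is inferred from the mention of Set<T> in the requirement,
    // so the body can build a Set<T> without an explicit constraint.
    func flatten<S: Sequence, T>(_ sets: S) -> Set<T> where S.Element == Set<T> {
        var result = Set<T>()
        for set in sets {
            result.formUnion(set)
        }
        return result
    }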
When a nested type is within the same equivalence class as its parent,
don't emit a redundant same-type-to-concrete constraint for the
corresponding potential archetype. The nested type's constraint will
be derived from the parent... which is technically a self-derived
constraint, yet needs to be suppressed.
Generic signature canonicalization/minimization never removes type
parameters, so we cannot suppress type-parameter-to-concrete
requirements even when they are derived.
Fixes the rest of the known cases of rdar://problem/30478915.
Move the storage for the protocols to which a particular potential
archetype conforms into EquivalenceClass, so that it is more easily
shared. More importantly, keep track of *all* of the constraint
sources that produced a particular conformance requirement, so we can
revisit them later, which provides a number of improvements:
* We can drop self-derived requirements at the end, once we've
established all of the equivalence classes
* We diagnose redundant conformance requirements, e.g., "T: Sequence"
is redundant if "T: Collection" is already specified.
* We can choose the best path when forming the conformance access
path.
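A sketch of the redundancy diagnostic (hypothetical function): Collection refines Sequence, so the explicit Sequence requirement is reported as redundant.

    // The T: Sequence constraint is diagnosed as redundant, since
    // T: Collection already implies it.
    func elementCount<T: Sequence & Collection>(_ values: T) -> Int {
        return values.count
    }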
If a requirement is made redundant due to another requirement that was
inferred from the signature of a generic declaration, don't diagnose
the former as redundant. The user has likely written the requirement
explicitly for clarity purposes (e.g., to emphasize the Hashable
requirement on a function that takes a Set<T>). Removing the
requirement to silence the warning would make the code less clear.
This eliminates all of the annoying, spurious warnings from the build
of the overlays.
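A sketch of the case being exempted: Hashable is already inferred from Set<T> in the signature, but the explicit spelling is kept for clarity and is no longer flagged as redundant.

    // T: Hashable is inferred from Set<T>; writing it explicitly is kept for
    // emphasis and no longer triggers the redundancy warning.
    func insertUnique<T: Hashable>(_ value: T, into set: inout Set<T>) -> Bool {
        return set.insert(value).inserted
    }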