This doesn't fix the fundamental problem of correctly handling such cases, but
it is better than the "error message" that occurred previously:
Assertion failed: ((bool)typeSig == (bool)extensionSig && "unexpected generic-ness mismatch on conformance").
Fixes the crashes rdar://problem/41281406 (the one reported in
https://bugs.swift.org/browse/SR-6569 (rdar://problem/36068136)),
https://bugs.swift.org/browse/SR-8019 (rdar://problem/41216423), and
https://bugs.swift.org/browse/SR-7989 (rdar://problem/41126254).
Same-type constraints are one of the primary places where potential
archetypes are used explicitly in the GenericSignatureBuilder. Switch
the representation over to a dependent type (represented by a `Type`)
instead, furthering the demise of PotentialArchetype.
Same-type constraints are (still) described in terms of potential
archetypes, so push the "realization" operation for potential
archetypes down into the function that adds a same-type constraint
between type parameters.
GenericSignatureBuilder::addSameTypeRewriteRule() was described in
terms of an equivalence class and a potential archetype. Rework it in
terms of two dependent types (i.e., the anchors of their equivalence
classes), so we don't need to rely on the PotentialArchetype
mechanism.
Now that every DependentMemberType within the GSB has an associated
type declaration, unpacking a type into a RewritePath can no longer
fail, so remove a bunch of optionals and null Type checks that are now
superfluous.
Introduce the first step of a minimization algorithm for the
term-rewriting system used to produce anchors of equivalence
classes. This step simplifies the right-hand sides of each rewrite
rule, so that each rewrite step goes to the minimal result.
This code is currently not enabled; it *can* be enabled by minimizing
before computing anchors, but it ends up pessimizing compile times to
do so.
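As a rough illustration of the shape of this step (not the GSB's actual
data structures), rewrite rules can be modeled as pairs of name paths,
and the minimization pass rewrites each rule's right-hand side with the
remaining rules until it reaches a fixed point. The names and helpers
below are hypothetical:

    // Rewrite rules modeled as pairs of associated-type name paths.
    typealias Rule = (lhs: [String], rhs: [String])

    // Applies the first applicable rule once, replacing an occurrence of
    // its left-hand side; returns nil when no rule applies (fixed point).
    func rewriteOnce(_ term: [String], using rules: [Rule]) -> [String]? {
        for rule in rules where !rule.lhs.isEmpty {
            for start in term.indices {
                let end = start + rule.lhs.count
                guard end <= term.count,
                      Array(term[start..<end]) == rule.lhs else { continue }
                return Array(term[..<start]) + rule.rhs + Array(term[end...])
            }
        }
        return nil
    }

    // Simplifies every rule's right-hand side using the other rules, so
    // a single rewrite step always lands on the most-reduced result.
    // Assumes the rule set is terminating.
    func minimizeRightHandSides(_ rules: [Rule]) -> [Rule] {
        var result: [Rule] = []
        for (index, rule) in rules.enumerated() {
            var others = rules
            others.remove(at: index)
            var rhs = rule.rhs
            while let next = rewriteOnce(rhs, using: others) { rhs = next }
            result.append((lhs: rule.lhs, rhs: rhs))
        }
        return result
    }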
Previously, the following code would result in different sets of
conditional requirements:
struct RedundancyOrderDependenceGood<T: P1, U> {}
extension RedundancyOrderDependenceGood: P2 where U: P1, T == U {}
struct RedundancyOrderDependenceBad<T, U: P1> {}
extension RedundancyOrderDependenceBad: P2 where T: P1, T == U {}
The T: P1 requirement is redundant: it is implied from U: P1 and T == U,
but just checking it in the signature of the struct isn't sufficient;
the T == U requirement needs to be considered too. This uses a quadratic
algorithm to identify those cases. We don't think the quadratic-ness is
too bad: most cases have relatively few requirements.
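For reference, a self-contained version of the example, with
hypothetical minimal declarations of P1 and P2 filled in, looks like
this:

    protocol P1 {}
    protocol P2 {}

    struct RedundancyOrderDependenceGood<T: P1, U> {}
    // U: P1 is implied by the struct's T: P1 together with T == U.
    extension RedundancyOrderDependenceGood: P2 where U: P1, T == U {}

    struct RedundancyOrderDependenceBad<T, U: P1> {}
    // T: P1 is implied by the struct's U: P1 together with T == U, so
    // both extensions should report the same conditional requirements.
    extension RedundancyOrderDependenceBad: P2 where T: P1, T == U {}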
Fixes rdar://problem/34944318.
To make the generic signature builder more robust, it's imperative to
eliminate the possibility of out-of-order typealias substitution.
These changes try to ensure that potential archetypes can only be
constructed with associated types, by performing concrete type
(typealias) substitution early while resolving an equivalence class.
Instead of representing every same-type constraint we see as a rewrite rule,
only record rewrite rules when we merge equivalence classes, and record
rules that map between the anchors of the equivalence classes. This
gives us fewer, smaller rewrite rules that (by construction) build correct
anchors.
Use term rewriting exclusively in the computation of the canonical
dependent type (anchor) for an equivalence class, eliminating any dependence
on the specific potential archetypes and eliminating the hacks around
generic type parameters.
Previously, the term rewriting logic for same-type constraints was only
able to express “relative” rewrites, where the base of the type being
rewritten (i.e., the generic type at the root) is unchanged. This meant
that we were unable to capture same-type constraints such as “T == U” or
“T.Foo == T” within the rewriting system.
Separate out the notion of a rewrite path and extend it with an optional
new base. When we’re simplifying a term and we encounter one of these
replacements, the whole type from the base up through the last-matched
path component gets replaced with the replacement path.
Fully rewriting a given type will produce the canonical representation
of that type, because all rewrite rules take a step toward a
more-canonical type. Use this approach to compute the anchor of an
equivalence class, replacing the existing ad hoc approach of
enumerating known potential archetypes, which was overly dependent on
having the “right” set of potential archetypes already computed.
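A minimal sketch of the idea, using plain strings in place of the
compiler's type representation; the names here are illustrative, not
the actual GSB API:

    // A dependent type spelled as a root generic parameter plus a path
    // of associated-type references, e.g. T.SubSequence.Element.
    struct Term {
        var base: String
        var path: [String]
    }

    // A replacement path. A nil base means a "relative" rewrite that
    // leaves the term's root untouched; a non-nil base replaces the
    // whole matched prefix, root included, which is what lets rules
    // like "T == U" or "T.Foo == T" be expressed.
    struct RewritePath {
        var base: String?
        var path: [String]
    }

    // Replaces the first `matchedLength` path components of `term`
    // (and, if the replacement carries a base, the root as well) with
    // the replacement path.
    func apply(_ replacement: RewritePath, to term: Term,
               matchedLength: Int) -> Term {
        let suffix = Array(term.path.dropFirst(matchedLength))
        return Term(base: replacement.base ?? term.base,
                    path: replacement.path + suffix)
    }

    // Example: with "T == U" (T being the more-canonical side),
    // rewriting U.Foo yields T.Foo.
    let tEqualsU = RewritePath(base: "T", path: [])
    let rewritten = apply(tEqualsU, to: Term(base: "U", path: ["Foo"]),
                          matchedLength: 0)
    // rewritten is Term(base: "T", path: ["Foo"])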
Introduce a new representation of same-type constraints as rewrite
rules in a term-rewriting system, where each same-type constraint maps
to a rewrite rule that produces a "more canonical"
term. Fully-simplifying a type according to these rewrite rules will
produce the canonical representation of that type, i.e., the "anchor"
of its equivalence class.
The rewrite rules are stored as a prefix tree, such that a "match"
operation walks the tree matching the prefix of a path of associated
type references, e.g., SubSequence -> SubSequence -> Element, and any
node within the tree can provide a replacement path (e.g.,
"Element"). The "dump" operation provides a visualization of the tree,
e.g.,
::
`--(cont'd)
`--Sequence.Iterator
| `--IteratorProtocol.Element --> [Sequence.Element]
`--Sequence.SubSequence --> []
| `--Sequence.Element --> [Sequence.Iterator -> IteratorProtocol.Element]
| | `--(cont'd) --> [Sequence.Element]
| `--Sequence.Iterator --> [Sequence.Iterator]
| | `--IteratorProtocol.Element --> [Sequence.Iterator -> IteratorProtocol.Element]
| `--Sequence.SubSequence --> []
| | `--Sequence.Element --> [Sequence.Element]
| | `--Sequence.Iterator --> [Sequence.Iterator]
| `--C.Index --> [C.Index]
| `--C.Indices --> [C.Indices]
| `--Sequence.Element --> [C.Indices -> Sequence.Element]
| `--Sequence.SubSequence --> [C.Indices -> Sequence.SubSequence]
| `--C.Index --> [C.Indices -> C.Index]
| `--C.Indices --> [C.Indices -> C.Indices]
`--C.Index --> [Sequence.Element]
`--C.Indices
`--Sequence.Element --> [C.Index]
Thus far, there are no clients for this information: this commit
starts building these rewriting trees and can visualize them for debug
information.
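A rough sketch of such a prefix tree, with associated-type references
as plain strings; this mirrors the shape of the dump above, not the
actual implementation:

    final class RewriteTreeNode {
        // Children keyed by associated-type reference, e.g. "Sequence.Iterator".
        var children: [String: RewriteTreeNode] = [:]
        // Non-nil when a rewrite rule ends at this node.
        var replacement: [String]?

        // Adds the rule "match -> replacement" to the tree.
        func insert(match: [String], replacement: [String]) {
            guard let first = match.first else {
                self.replacement = replacement
                return
            }
            let child = children[first] ?? RewriteTreeNode()
            children[first] = child
            child.insert(match: Array(match.dropFirst()),
                         replacement: replacement)
        }

        // Walks the tree along a prefix of `path`, returning the longest
        // matched prefix length and its replacement, if any rule applies.
        func match(_ path: [String]) -> (length: Int, replacement: [String])? {
            var node = self
            var best: (length: Int, replacement: [String])? = nil
            for (index, step) in path.enumerated() {
                guard let next = node.children[step] else { break }
                node = next
                if let replacement = node.replacement {
                    best = (length: index + 1, replacement: replacement)
                }
            }
            return best
        }
    }

    // Example rule from the dump above:
    //   Sequence.Iterator -> IteratorProtocol.Element rewrites to [Sequence.Element]
    let root = RewriteTreeNode()
    root.insert(match: ["Sequence.Iterator", "IteratorProtocol.Element"],
                replacement: ["Sequence.Element"])
    // root.match(["Sequence.Iterator", "IteratorProtocol.Element", "Element"])
    // returns (length: 2, replacement: ["Sequence.Element"]).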
If we used the requirement signature to create a protocol requirement
element in a requirement source, there's no need to verify that it's
from the requirement signature (duh).
Add a verification pass to ensure that all of the generic signatures in
a module are both minimal and canonical. The approach taken is quite
direct: for every (canonical) generic signature in the module, try
removing a single requirement and forming a new generic signature from
the result. If that new generic signature can still prove the removed
requirement, then the original signature was not minimal.
Also canonicalize each resulting signature, to ensure that it meets the
requirements for a canonical signature.
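The check can be sketched abstractly; the isMinimal helper and the
entails closure below are hypothetical stand-ins for rebuilding a
signature from the remaining requirements and asking whether it can
derive the removed one:

    // Returns true if no requirement is implied by the others, i.e. the
    // requirement list is minimal. Checks each requirement in turn.
    func isMinimal<Requirement>(
        _ requirements: [Requirement],
        entails: ([Requirement], Requirement) -> Bool
    ) -> Bool {
        for index in requirements.indices {
            var remaining = requirements
            let removed = remaining.remove(at: index)
            // If the remaining requirements still prove the removed one,
            // that requirement was redundant.
            if entails(remaining, removed) { return false }
        }
        return true
    }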
Add a test to ensure that all of the generic signatures in the Swift
module are minimal and canonical, since they are ABI.
Replace the pair of PotentialArchetype's getArchetypeAnchor() and
getNestedArchetypeAnchor() with a straightforward, more-efficient
computation based on equivalence classes. This reduces the number of
times we query the archetype anchor cache by 88% when building the
standard library, as well as eliminating some
PotentialArchetype-specific APIs.
Equivalence classes stored their same-type constraints in a MapVector
keyed on the source potential archetype, which allowed traversal along the
paths of the graph. However, this capability isn't required, because we
end up walking all of the edges each time. Flatten the list of same-type
constraints to a single vector, eliminating constraint duplication between
the source and the target out-edge lists.
This cuts down on the number of same-type constraints we record by 50%,
but performance gains are limited (6% of stdlib type-checking time)
because most of the time these same-type constraints were skipped
anyway.
Now that getBuilder() is gone, stash an ASTContext into root
PotentialArchetypes instead of a GenericSignatureBuilder. This eliminates
some ickiness where we had to re-root potential archetypes when moving a
GenericSignatureBuilder.
Refactor the interfaces that involve “walking” a requirement source
(e.g., minimization, computation of the affected type, etc.) to rely
only on interface types and not potential archetypes.
We want to delay the realization of potential archetypes for as long as
possible; for layout constraints, push realization of potential archetypes
down past the point where we know whether we’re learning anything new
about the equivalence class.
No functionality change, yet.