When enumerating same-type-to-concrete requirements, don't emit a
same-type-to-concrete requirement for a nested archetype anchor when
its parent is also equivalent to a concrete type, because the former
can always be derived from the latter.
Fixes SR-4456 / rdar://problem/31286125.
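A minimal sketch of the situation (the protocol and function names here
are hypothetical, not taken from the fix itself):

    protocol Q { associatedtype B }
    protocol P { associatedtype A: Q }
    struct Concrete: Q { typealias B = Int }

    // T.A.B's concrete binding (Int) follows from T.A == Concrete, so a
    // separate same-type-to-concrete requirement for the nested
    // archetype T.A.B does not need to be emitted.
    func f<T: P>(_: T) where T.A == Concrete {}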
When a nested type is within the same equivalence class as its parent,
don't emit a redundant same-type-to-concrete constraint for the
corresponding potential archetype. The nested type's constraint will
be derived from the parent's, which makes it technically a
self-derived constraint, yet one that still needs to be suppressed.
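Roughly the kind of signature involved (hypothetical names):

    protocol P { associatedtype A: P }
    struct S: P { typealias A = S }

    // T.A.A is in the same equivalence class as its parent T.A, so its
    // concrete binding to S is derived from "T.A == S" and the
    // redundant same-type-to-concrete constraint is suppressed.
    func g<T: P>(_: T) where T.A.A == T.A, T.A == S {}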
Generic signature canonicalization/minimization never removes type
parameters, so we cannot suppress type-parameter-to-concrete
requirements even when they are derived.
Fixes the rest of the known cases of rdar://problem/30478915.
The general self-derived check doesn't really make sense for
conformance constraints, because we want to distinguish among
different protocol conformances.
This PR addresses TODOs from #8241.
- It supports merging of layout constraints: e.g., if both a _Trivial constraint and a _Trivial(64) constraint appear on a type parameter, we keep only _Trivial(64), the more specific one. We do the same for ref-counted/native-ref-counted. The overall idea is to keep the more specific of two compatible layout constraints (see the sketch after this list).
- The presence of a superclass constraint now implies a layout constraint, e.g., a superclass constraint implies _Class or _NativeClass.
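A toy model of the merging rule; this is only a sketch of the idea, not
the builder's actual API:

    enum LayoutConstraint: Equatable {
        case trivial                // _Trivial
        case trivialOfSize(Int)     // _Trivial(N)
        case refCounted             // _RefCountedObject
        case nativeRefCounted       // _NativeRefCountedObject
    }

    // Keep the more specific of two compatible layout constraints;
    // return nil if they conflict.
    func merge(_ a: LayoutConstraint, _ b: LayoutConstraint) -> LayoutConstraint? {
        if a == b { return a }
        switch (a, b) {
        case (.trivial, .trivialOfSize(let n)), (.trivialOfSize(let n), .trivial):
            return .trivialOfSize(n)
        case (.refCounted, .nativeRefCounted), (.nativeRefCounted, .refCounted):
            return .nativeRefCounted
        default:
            return nil
        }
    }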
Diagnose redundant same-type constraints using most of the same
machinery for diagnosing other redundant constraints. However,
same-type constraints are particularly interesting because
redundancies can be spelled in a number of different ways. Address
this using the connected components of the subgraph involving only
derived requirements (which is already used for the minimized generic
signature). Then, separate all of the non-derived requirements into
the intracomponent requirements and intercomponent requirements:
* All of the intracomponent requirements are redundant by definition,
because the components are defined by derived constraints.
* For the intercomponent requirements, form a spanning tree among the
various components and diagnose as redundant any edges that do not
extend the spanning tree.
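For example (hypothetical protocol and function), the three constraints
below connect {T.A, T.B, T.C} into one component, but only two edges are
needed for a spanning tree; the third closes a cycle and would be
diagnosed as redundant:

    protocol P { associatedtype A; associatedtype B; associatedtype C }

    func f<T: P>(_: T) where T.A == T.B, T.B == T.C,
                             T.A == T.C {}  // redundant: implied by the
                                            // first two constraints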
It's better to compute this information once while we're sorting
through all of the same-type constraints, so we can use it later when
performing queries (e.g., enumerating requirements).
We were emitting a superclass constraint for each connected component
of derived same-type constraints within an equivalence class, when in
fact we only need one superclass constraint for the entire equivalence
class.
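A small illustration with explicit constraints (hypothetical names):

    class Base {}
    protocol P { associatedtype A; associatedtype B }

    // T.A and T.B form a single equivalence class, so one superclass
    // constraint covers both in the minimized signature; the other is
    // redundant.
    func f<T: P>(_: T) where T.A == T.B, T.A: Base, T.B: Base {}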
As we've done with all of the other kinds of constraints, keep track
of all of the layout constraints on the equivalence class. Use the
normal mechanism to diagnose conflicts between different layout
constraints, warn about duplicate layout constraints, etc.
As we've been doing with other kinds of constraints, track *all* of
the requirement sources for deriving same-type constraints within the
equivalence class, then remove self-derived constraints at the end.
There is no checking for duplicated same-type constraints yet.
Move the storage for the protocols to which a particular potential
archetype conforms into EquivalenceClass, so that it is more easily
shared. More importantly, keep track of *all* of the constraint
sources that produced a particular conformance requirement, so we can
revisit them later, which provides a number of improvements:
* We can drop self-derived requirements at the end, once we've
established all of the equivalence classes.
* We diagnose redundant conformance requirements, e.g., "T: Sequence"
is redundant if "T: Collection" is already specified.
* We can choose the best path when forming the conformance access
path.
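For instance, using the standard library's Collection/Sequence
relationship:

    // "T: Sequence" is implied by "T: Collection" (Collection refines
    // Sequence), so it is diagnosed as a redundant conformance
    // requirement.
    func f<T>(_: T) where T: Collection, T: Sequence {}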
Our handling of nested types was scattered in several places, and
(worse) correct computation of archetype anchors required us to
"explode" out all of the potential archetypes for every associated
type with the given name to ensure that we get the right one.
Make nested type construction somewhat more lazy: if asked for a
nested type for a specific associated type, just create the nested
type for that associated type (instead of *all* of them). If asked for
a nested type by name, either return the one we already have or create
the one that's most likely to be the archetype anchor. Overall, this
should result in many fewer potential archetypes being constructed.
This hack is papering over other issues in the generic signature
builder, where canonicalizing an existing generic signature---in which
we already know the specific associated type declarations---could
introduce "inferred" requirements, breaking the resulting requirement
signatures by producing too-short paths. By delaying same-type
requirements, we make this case less likely.
The correct solution is to eliminate
PotentialArchetype::getNestedType()'s injection of an inferred
requirement, but doing so requires a bit more surgery.
If a requirement is made redundant due to another requirement that was
inferred from the signature of a generic declaration, don't diagnose
the former as redundant. The user has likely written the requirement
explicitly for clarity purposes (e.g., to emphasize the Hashable
requirement on a function that takes a Set<T>). Removing the
requirement to silence the warning would make the code less clear.
This eliminates all of the annoying, spurious warnings from the build
of the overlays.
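For example, the Set case mentioned above, sketched:

    // "T: Hashable" is already inferred from the use of Set<T>, but we
    // do not warn that it is redundant: the author most likely spelled
    // it out deliberately for clarity.
    func f<T: Hashable>(_ values: Set<T>) {}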
The ad hoc substitution functions here were really odd; use
SubstitutionMap directly, and pass it through to
GenericSignatureBuilder::addRequirement().
The stored dependent types in ProtocolRequirement elements within
requirement sources were incorrect for requirements created from the
requirement signature of another protocol, because we picked up the
already-substituted subject type. Thread the optional substitution map
through addRequirement(Requirement) as well, so we maintain the
original spelling of the stored dependent type.
This is a temporary fix; we should be able to recover the stored
dependent types from the potential archetypes in the requirement
source, so that we don't need to specify them explicitly at
construction time.
For a protocol requirement element within a requirement source, track
both the protocol in which the requirement was introduced as well as
the dependent type (relative to that protocol) on which the
requirement was introduced. This information is important when
reconstructing the path from a requirement-as-written to the location
of a desired protocol conformance.
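A sketch of why both pieces of information are needed (hypothetical
protocols):

    protocol R {}
    protocol P { associatedtype A: R }

    // The conformance "T.A: R" is never written on f itself; it is
    // reached through the requirement "A: R", which was introduced in
    // protocol P on the dependent type Self.A. Rebuilding that path
    // requires both the protocol (P) and the dependent type (Self.A).
    func f<T: P>(_: T, _: T.A) {}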
Start reshuffling RequirementSource to store more information about
requirements in protocols. As a small step, track the source locations
for requirements written within the protocols themselves.
Note: there's a QoI regression here where we get duplicated
diagnostics (due to multiple generic signature builders being built
from a bad signature).