When constructing a generic signature, any redundant explicit requirements
are dropped from the final signature.
One would assume this operation is idempotent; that is, building a new
GenericSignatureBuilder from the resulting minimized signature should
produce a GenericSignatureBuilder equivalent to the original one.
Unfortunately, this is not true in the case of conformance requirements.
Namely, if a conformance requirement is made redundant by a superclass
or concrete same-type requirement, then dropping the conformance
requirement changes the canonical type computation.
For example, consider the following:
  public protocol P {
    associatedtype Element
  }

  public class C<O: P>: P {
    public typealias Element = O.Element
  }

  public func toe<T, O, E>(_: T, _: O, _: E, _: T.Element)
    where T : P, O : P, O.Element == T.Element, T : C<E> {}
In the generic signature of toe(), the superclass requirement 'T : C<E>'
implies the conformance requirement 'T : P' because C conforms to P.
However, the presence of the conformance requirement makes it so that
T.Element is the canonical representative, so previously this signature
was minimized down to:
<T : C<E>, O : P, T.Element == O.Element>
If we build the signature again from the above requirements, then we
see that T.Element is no longer the canonical representative; instead,
T.Element canonicalizes as E.Element.
For this reason, we must rebuild the signature to get the correct
canonical type computation.
I realized that this is not an artifact of incorrect design in the
current GSB; my new rewrite system formalism would produce the same
result. Rather, it is a subtle consequence of the specification of our
minimization algorithm, and therefore it must be formalized in this
manner.
We used to sort of do this with the HadAnyRedundantRequirements hack,
but it was both overly broad (we only need to rebuild if a conformance
requirement was implied by a superclass or concrete same-type
requirement) and not sufficient (when rebuilding, we need to strip any
bound associated types from our requirements to ensure the canonical
type anchors are re-computed).
Fixes rdar://problem/65263302, rdar://problem/75010156,
rdar://problem/75171977.
Recall what it means for an explicit requirement to be redundant here:
- If the requirement is part of a root SCC, it is redundant
  unless it is the 'best' requirement from that root SCC.
- If the requirement is part of a non-root SCC, it is always
  redundant.
Instead of computing the set of redundant requirements, we
build a mapping.
The value in the mapping stores the set of root requirements
that imply the given redundant requirement. This mapping is
computed by traversing the graph from each root, recording
which components can be reached from each root.
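The traversal above can be sketched as follows; the integer vertex
numbering, the function name, and the data layout are all invented for
illustration and are not the actual GSB data structures:

```swift
// From each root requirement, walk the implication graph and record,
// for every reachable requirement, the set of roots that imply it.
func redundancyMap(roots: [Int], edges: [[Int]]) -> [Int: Set<Int>] {
    var impliedBy: [Int: Set<Int>] = [:]
    for root in roots {
        var worklist = edges[root]
        var seen: Set<Int> = []
        while let v = worklist.popLast() {
            guard seen.insert(v).inserted else { continue }
            impliedBy[v, default: []].insert(root)
            worklist.append(contentsOf: edges[v])
        }
    }
    return impliedBy
}

// Root 0 implies requirements 1 and 2; root 3 implies requirement 2.
let impliedBy = redundancyMap(roots: [0, 3], edges: [[1, 2], [], [], [2]])
```

Requirement 2 is then known to be implied by both roots, while the roots
themselves never appear as keys in the mapping.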
For now, I'm using this to fix rdar://problem/65263302.
After fixing that bug, this will also allow us to radically
simplify the various callers of checkConstraintList().
If we have a conformance requirement T : P, and a concrete type
requirement T == G<...>, and G _conditionally_ conforms to P,
we would infer the conditional requirements of G needed to
satisfy the conformance.
However, if the conformance requirement T : P was not explicit,
this would mean in practice that we would need to infer an
infinite number of conditional requirements, because there
might be an infinite number of types T for which T : P.
Previously we would infer these up to some limit, based on
how many levels of nested types the GSB had expanded.
Since this is untenable, let's instead change the rules so
that conditional requirement inference is only performed
when the concretizing requirement was explicit.
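For reference, the language feature in play here is conditional
conformance; a minimal compiling sketch (the type and method names are
invented):

```swift
protocol P { func tag() -> String }

// G conforms to P only conditionally: when its payload type does.
struct G<T> { var value: T }
extension G: P where T: P {
    func tag() -> String { "G<\(value.tag())>" }
}

struct Leaf: P { func tag() -> String { "Leaf" } }

// To satisfy a conformance requirement 'T : P' once T is concretized
// to G<U>, the conditional requirement 'U : P' must also hold.
func useTag<U: P>(_ x: G<U>) -> String { x.tag() }

let tag = useTag(G(value: Leaf()))
```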
We shouldn't generate NestedTypeNameMatch same-type constraints
between associated types we haven't realized yet.
Otherwise, since maybeResolveEquivalenceClass() can call
lookupNestedType() before looking up a PotentialArchetype, it
is possible that maybeResolveEquivalenceClass() will return
the newly-realized type even when resolutionKind is AlreadyKnown.
This can cause an infinite recursion in expandConformanceRequirement().
However, we don't really have to do this here at all, because if
a PotentialArchetype is registered with the same name later, we
will introduce the same-type constraint in addedNestedType().
It suffices for lookupNestedType() to return the best associated
type anchor and ignore the rest.
Fixes https://bugs.swift.org/browse/SR-14289 / rdar://problem/74876047.
Doing this when computing a canonical signature didn't really
make sense because canonical signatures are not canonicalized
any more strongly _with respect to the builder_; they just
canonicalize their requirement types.
Instead, let's do these checks after creating the signature in
computeGenericSignature().
The old behavior had another undesirable property; since the
canonicalization was done by registerGenericSignatureBuilder(),
we would always build a new GSB from scratch for every
signature we compute.
The new location also means we perform these checks for protocol
requirement signatures as well. This flagged an existing crasher,
previously marked as fixed, where we still emit bogus same-type
requirements in the requirement signature, so I moved that test back
into an unfixed state.
Generic signature minimization needs to diagnose and remove any
redundant requirements, that is, requirements that can be proven
from some subset of the remaining requirements.
For each requirement on an equivalence class, we record a set of
RequirementSources; a RequirementSource is a "proof" that the
equivalence class satisfies the requirement.
A RequirementSource is either "explicit" -- meaning it
corresponds to a generic requirement written by the user -- or it
is "derived", meaning it can be proven from some other explicit
requirement.
The most naive formulation of the minimization problem is that
we say that an explicit requirement is redundant if there is a
derived source for the same requirement. However, this is not
sufficient, for example:
  protocol P {
    associatedtype A : P where A.A == Self
  }
In the signature <T where T : P>, the explicit requirement
T : P also has a derived source T.A.A : P. However, this source
is "self-derived", meaning that in order to obtain the
witness table for T.A.A : P, we must first obtain the witness
table for T.A : P, and hence for T : P itself.
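The protocol above is satisfiable; here is a concrete model (the type
names X and Y are invented) showing how the derived path circles back
through T itself:

```swift
protocol P {
    associatedtype A: P where A.A == Self
}

// A two-element cycle satisfying A.A == Self.
struct X: P { typealias A = Y }
struct Y: P { typealias A = X }

// For T = X, the derived path T.A.A walks X -> Y -> X: obtaining the
// witness table for X.A.A : P already requires the witness table for
// X : P, which is exactly what makes the derived source self-derived.
let roundTrip = (X.A.A.self == X.self)
```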
The GenericSignatureBuilder handled this kind of 'self-derived'
requirement correctly, by removing it from the list of sources.
This was implemented in the removeSelfDerived() function.
After removeSelfDerived() was called, any remaining derived
requirement sources were assumed to obsolete any explicit
source for the same requirement.
However, even this was not sufficient -- namely, it only handled
the case where an explicit requirement would imply a derived
source for itself, and not a cycle involving multiple explicit
sources that imply each other.
For example, the following generic signature would be misdiagnosed
with *both* conformance requirements as redundant, resulting in
an invalid generic signature:
  protocol P {
    associatedtype X : P
  }

  func f<T : P, U : P>(_: T, _: U) where T.X == U, U.X == T {}
In the above example, T : P has an explicit requirement source,
as well as a derived source (U : P)(U.X : P). Similarly, U : P
has an explicit requirement source, as well as a derived source
(T : P)(T.X : P). Since neither of the derived sources were
"self-derived" according to our definition, we would diagnose
*both* explicit sources as redundant. But of course, after
dropping them, we are left with the following generic signature:
  func f<T, U>(_: T, _: U) where T.X == U, U.X == T {}
This is no longer valid -- since neither T nor U have a conformance
requirement, the nested types T.X and U.X referenced from our
same-type requirements are no longer valid.
The new algorithm abandons the "self-derived" concept. Instead,
we build a directed graph where the vertices are explicit
requirements, and the edges are implications where one explicit
requirement implies another. In the above example, each of the
explicit conformance requirements implies the other. This means
a correct minimization must pick exactly one of the two -- not
zero, and not both.
The set of minimized requirements is formally defined as the
minimum set of requirements whose transitive closure is the
entire graph.
We compute this set by first building the graph of strongly
connected components using Tarjan's algorithm. The graph of SCCs
is a directed acyclic graph, which means we can compute the root
set of the DAG. Finally, we pick a suitable representative
requirement from each SCC in the root set, using the lexshort
order on subject types.
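The graph computation above can be sketched as follows; this is a toy
model with integer vertices, not the actual GSB implementation:

```swift
// Vertices are explicit requirements; an edge v -> w means v implies w.
// Tarjan's algorithm yields the strongly connected components.
func stronglyConnectedComponents(_ edges: [[Int]]) -> [[Int]] {
    var nextIndex = 0
    var index = [Int?](repeating: nil, count: edges.count)
    var lowlink = [Int](repeating: 0, count: edges.count)
    var onStack = [Bool](repeating: false, count: edges.count)
    var stack: [Int] = []
    var sccs: [[Int]] = []

    func visit(_ v: Int) {
        index[v] = nextIndex
        lowlink[v] = nextIndex
        nextIndex += 1
        stack.append(v)
        onStack[v] = true
        for w in edges[v] {
            if index[w] == nil {
                visit(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            } else if onStack[w] {
                lowlink[v] = min(lowlink[v], index[w]!)
            }
        }
        // v is the root of an SCC: pop the whole component off the stack.
        if lowlink[v] == index[v]! {
            var scc: [Int] = []
            repeat {
                let w = stack.removeLast()
                onStack[w] = false
                scc.append(w)
            } while scc.last != v
            sccs.append(scc)
        }
    }

    for v in 0..<edges.count where index[v] == nil { visit(v) }
    return sccs
}

// Root SCCs of the condensation DAG: components with no incoming
// edges from any other component.
func rootSCCs(_ edges: [[Int]], _ sccs: [[Int]]) -> [[Int]] {
    var component = [Int](repeating: 0, count: edges.count)
    for (i, scc) in sccs.enumerated() {
        for v in scc { component[v] = i }
    }
    var hasIncoming = [Bool](repeating: false, count: sccs.count)
    for v in 0..<edges.count {
        for w in edges[v] where component[v] != component[w] {
            hasIncoming[component[w]] = true
        }
    }
    return sccs.indices.filter { !hasIncoming[$0] }.map { sccs[$0] }
}

// The mutually-implying conformance requirements from the example:
// 0 = 'T : P', 1 = 'U : P'; each implies the other, so they form one
// root SCC, and minimization keeps exactly one representative from it.
let sccs = stronglyConnectedComponents([[1], [0]])
let roots = rootSCCs([[1], [0]], sccs)
```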
This commit implements the new algorithm, runs it on each generic
signature and asserts some properties about the results, but
doesn't actually use the algorithm for computing the minimized
signature or diagnosing redundancies; that will come in the next
commit.
RequirementSources with Superclass and Concrete kind would
reference a protocol conformance. However, in the case where
the concrete type was an existential conforming to itself,
the SelfProtocolConformance does not store the original
type. Since self-conforming existentials don't have any
nested types, we don't really need to store this conformance
at all.
In the case where the original type is an existential, store the
original type itself instead of a protocol conformance. This allows
us to recover the requirement from the
RequirementSource, which is important for the new
implementation of computing redundant requirements.
'Derived' was not a great name, since we already use the term
'derived requirement source' to mean something else.
A Derived source was only added in one place, when recording
a superclass constraint -- the idea is that this source
supersedes any explicit layout constraint, e.g.

  class SomeClass {}
  func foo<T>(_: T) where T : SomeClass, T : AnyObject {}

Here we have two sources for the 'T : AnyObject' layout constraint:

  Explicit: T
  Explicit: T -> Derived
Note that the 'Derived' requirement source does not store
a 'proof' -- we can't figure out _how_ we determined that
the explicit 'T : AnyObject' constraint is redundant here.
In the case where a superclass requirement makes a protocol
conformance redundant, we do have a 'proof', because the
'Superclass' requirement source stores a conformance:
  class SomeClass : SomeProto {}
  func foo<T>(_: T) where T : SomeClass, T : SomeProto {}

  Explicit: T
  Explicit: T -> Superclass: [SomeClass : SomeProto]
From looking at the second requirement source, we can
determine that the requirement was imposed by the explicit
constraint 'T : SomeClass'.
For the 'Layout' requirement source, there's not really a
"conformance", so we can just store the superclass type:
  Explicit: T
  Explicit: T -> Layout: SomeClass
The call to enumerateRequirements() here actually makes debugging
more difficult, since it has a lot of side effects, for example
calling maybeResolveEquivalenceClass() and removeSelfDerived().
Also, this is the only usage of enumerateRequirements() other than
collectRequirements(), which allows the two to be merged together
and simplified.
Consider the following program:
  protocol P1 {
    associatedtype A : P2
  }

  protocol P2 {
    associatedtype A
  }

  func f<T>(_: T) where T : P2, T.A : P1, T.A.A == T {}
There are two proofs of T : P2:

- The explicit requirement in f()'s generic signature.
- Since T.A.A == T, we can also prove T : P2 via T.A.A : P2:
  - First, we prove that T.A : P1 via the explicit requirement
    in f()'s generic signature.
  - Second, we prove that T.A.A : P2 via Self.A : P2 in P1's
    requirement signature.
However, the second proof does not render the explicit requirement
T : P2 redundant, because it relies on the existence of the
nested type T.A, which only exists if T : P2.
This is captured in getMinimalConformanceSource(), which returns
nullptr for the requirement source corresponding to the second proof
above. It does this by looking at the root type of the requirement
source, T.A.
Now consider the analogous situation but with protocols -- let's
replace f() with a protocol P3:
  protocol P3 : P2 where Self.A : P1, Self.A.A == Self {}
Here, we also have two proofs of Self : P2:

- The explicit requirement in P3's requirement signature.
- A derived proof: first, we prove that Self.A : P1 via the explicit
  requirement in P3's requirement signature; second, we prove that
  Self.A.A : P2 via Self.A : P2 in P1's requirement signature.
Once again, the second proof implicitly depends on the explicit
requirement, so we cannot use it to mark the explicit requirement
as redundant. However, since the requirement source root type here
is just 'Self', we were unable to recognize this, and we would
diagnose the requirement as redundant and drop it, resulting in
computing an invalid requirement signature for protocol P3.
To fix this, treat requirements at the top level of a protocol
requirement signature just as if they were explicit requirements.
Fixes https://bugs.swift.org/browse/SR-13850 / rdar://problem/71377571.
A DependentMemberType can either have a bound AssociatedTypeDecl,
or it might be 'unresolved' and only store an identifier.
In maybeResolveEquivalenceClass(), we did not handle the unresolved
case when the base type of the DependentMemberType had itself been
resolved to a concrete type.
Fixes <rdar://problem/71162777>.
Compiler:
- Add `Forward` and `Reverse` to `DifferentiabilityKind`.
- Expand `DifferentiabilityMask` in `ExtInfo` to 3 bits so that it now holds all 4 cases of `DifferentiabilityKind`.
- Parse `@differentiable(reverse)` and `@differentiable(_forward)` declaration attributes and type attributes.
- Emit a warning for `@differentiable` without `reverse`.
- Emit an error for `@differentiable(_forward)`.
- Rename `@differentiable(linear)` to `@differentiable(_linear)`.
- Make `@differentiable(reverse)` type lowering go through today's `@differentiable` code path. We will specialize it to reverse-mode in a follow-up patch.
ABI:
- Add `Forward` and `Reverse` to `FunctionMetadataDifferentiabilityKind`.
- Extend `TargetFunctionTypeFlags` by 1 bit to store the highest bit of differentiability kind (linear). Note that there is a 2-bit gap in `DifferentiabilityMask` which is reserved for `AsyncMask` and `ConcurrentMask`; `AsyncMask` is ABI-stable so we cannot change that.
_Differentiation module:
- Replace all occurrences of `@differentiable` with `@differentiable(reverse)`.
- Delete `_transpose(of:)`.
Resolves rdar://69980056.
Instead of recomputing it on every call to getDependentType(),
we can just store it in there instead of the GenericParamKey
or AssociatedTypeDecl.
We still need to 're-sugar' user-visible dependent types to
get the right generic parameter name; a new getSugaredDependentType()
utility method is called in the right places for that.
A nested type of an archetype type might be concrete, for example, via a
same-type constraint:
  extension SomeProtocol where SomeAssoc == Int {
    ... Self.SomeAssoc ...
  }
This can happen in one of two ways; either the EquivalenceClass of the
nested type has a concrete type, or it is "fully concrete" because
there is no equivalence class and maybeResolveEquivalenceClass() returns
a ResolvedType storing the concrete type.
For some reason we didn't handle the second case here.
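A compiling variant of the pattern above; the `assoc` property, the
`doubled()` method, and the `Box` type are invented to make the
concrete nested type observable:

```swift
protocol SomeProtocol {
    associatedtype SomeAssoc
    var assoc: SomeAssoc { get }
}

extension SomeProtocol where SomeAssoc == Int {
    // Within this extension, Self.SomeAssoc is fully concrete: it is
    // Int, so integer operations on it resolve directly.
    func doubled() -> SomeAssoc { assoc * 2 }
}

struct Box: SomeProtocol {
    var assoc: Int   // SomeAssoc is inferred to be Int.
}

let doubled = Box(assoc: 21).doubled()
```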
Fixes https://bugs.swift.org/browse/SR-13519 / rdar://problem/68531679
AbstractGenericSignatureRequest tries to minimize the number of GSBs that we
spin up by only creating a GSB if the generic parameter and requirement types
are canonical. If they're not canonical, it first canonicalizes them, then
kicks off a request to compute the canonical signature, and finally, re-applies
type sugar.
We would do this by building a mapping for re-sugaring generic parameters;
however, this mapping was only populated for the newly-added generic parameters.
If some of the newly-added generic requirements mention the base signature's
generic parameters, they would remain canonicalized.
Fixes <rdar://problem/67579220>.
Previously we had two representations for the 'where' clause of a
parsed declaration; if the declaration had generic parameters of
its own, we would store them in the GenericParamList, otherwise
we would store them separately in a TrailingWhereClause instance.
Since the latter is more general and also used for protocols and
extensions, let's just use it for everything and simplify
GenericParamList in the process.
EquivalenceClass::getAnchor() always returns a canonical type, and
the parent of a canonical type is itself canonical.
This means that EquivalenceClass::getTypeInContext() can safely
assume that the parent type of a DependentMemberType maps to an
ArchetypeType in the GenericEnvironment.
Instead of trying to handle the case where the parent is concrete,
let's just crash by changing the conditional cast to an
unconditional one.
When passing in wantExactPotentialArchetype=false, we don't actually ever
call getDependentType() on the result. So the Type + EquivalenceClass form
of ResolvedType can be removed.
Under certain circumstances, introducing a concrete same-type or
superclass constraint can re-introduce conformance constraints
which were previously redundant.
For example, consider this code, which we correctly support today:
  protocol P {
    associatedtype T : Q
  }

  protocol Q {}

  class SomeClass<U : Q> : P { typealias T = U }

  struct Outer<T> where T : P {
    func inner<U>(_: U) where T == SomeClass<U>, U : Q {}
  }
The constraint 'T == SomeClass<U>' makes the outer constraint
'T : P' redundant, because SomeClass already conforms to P.
It also introduces an implied same-type constraint 'U == T.T'.
However, whereas 'T : P' together with 'U == T.T' makes 'U : Q'
redundant, the introduction of the constraint 'T == SomeClass<U>'
removes 'T : P', so we re-introduce an explicit constraint 'U : Q'
in order to get a valid generic signature.
This code path did the right thing for constraints derived via
concrete same-type constraints, but it did not handle superclass
constraints.
As a result, this case was broken:
  struct Outer<T> where T : P {
    func inner<U>(_: U) where T : SomeClass<U>, U : Q {}
  }
This is the same example as above, except T is related via a
superclass constraint to SomeClass<U>, instead of via a concrete
same-type constraint.
The subtlety here is that we must check if the superclass type
actually conforms to the requirement source's protocol, because it
is possible to have a superclass-constrained generic parameter
where some conformances are abstract. E.g., if SomeClass did not
conform to another protocol P2, we could write

  func foo<T, U>(_: T, _: U) where T : SomeClass<U>, T : P2 {}
In this case, 'T : P2' is an abstract conformance on the type 'T'.
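A compiling sketch of the abstract-conformance case; the subclass `Sub`,
the conforming type `IntQ`, and the return value of `foo` are invented
for illustration:

```swift
protocol Q {}
protocol P2 {}

class SomeClass<U: Q> {}

// 'T : P2' is abstract here: SomeClass itself does not conform to P2,
// so the conformance must be supplied by whatever subclass is
// substituted for T at the call site.
func foo<T, U>(_ t: T, _: U) -> String where T: SomeClass<U>, T: P2 {
    String(describing: type(of: t))
}

struct IntQ: Q {}
final class Sub: SomeClass<IntQ>, P2 {}

let name = foo(Sub(), IntQ())
```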
The common case where this would come up in real code is when you
have a class that conforms to a protocol with an associated type,
and one of the protocol requirements was fulfilled by a default in
a protocol extension, e.g.:

  protocol P {
    associatedtype T : Q
    func foo()
  }

  extension P {
    func foo() {}
  }

  class ConformsWithDefault<T : Q> : P {}
The above used to crash; now it will type-check correctly.
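The previously-crashing pattern, in runnable form; the `IntQ` type and
the String return value are invented so that the default implementation
is observable:

```swift
protocol Q {}

protocol P {
    associatedtype T: Q
    func foo() -> String
}

extension P {
    // Default implementation supplied by the protocol extension.
    func foo() -> String { "default" }
}

struct IntQ: Q {}

// The class fulfills 'associatedtype T : Q' with its generic parameter
// and 'foo()' with the extension default.
class ConformsWithDefault<T: Q>: P {}

let result = ConformsWithDefault<IntQ>().foo()
```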
Fixes <rdar://problem/44736411>, <https://bugs.swift.org/browse/SR-8814>.
Name lookup might find an associated type whose protocol is not in our
conforms-to list, if we have a superclass constraint and the superclass
conforms to the associated type's protocol.
We used to return an unresolved type in this case, which would result in
the constraint getting delayed forever and dropped.
While playing whack-a-mole with regressing crashers, I had to do some
refactoring to get all the tests to pass. Unfortunately these refactorings
don't lend themselves well to being peeled off into their own commits:
- maybeAddSameTypeRequirementForNestedType() was almost identical to
  concretizeNestedTypeFromConcreteParent(), except for superclasses
  instead of concrete same-type constraints. I merged them together.
- We used to drop same-type constraints where the subject type was an
  ErrorType, because maybeResolveEquivalenceClass() would return an
  unresolved type in this case.
  This violated some invariants around nested types of ArchetypeTypes,
  because now it was possible for a nested type of a concrete type to
  be non-concrete, if the type witness in the conformance was missing
  due to an error.
  Fix this by removing the ErrorType hack, and adjusting a couple of
  other places to handle ErrorTypes in order to avoid regressing with
  invalid code.
Fixes <rdar://problem/45216921>, <https://bugs.swift.org/browse/SR-8945>,
<https://bugs.swift.org/browse/SR-12744>.
When adding a superclass constraint, we need to find any nested
types belonging to protocols that the superclass conforms to,
and introduce implicit same-type constraints between each nested
type and the corresponding type witness in the superclass's
conformance to that protocol.
Fixes <rdar://problem/39481178>, <https://bugs.swift.org/browse/SR-11232>.