If you have a pair of requirements T : P and T == G<U>, the conformance
G : P might be conditional, imposing arbitrary requirements on U.
In particular, these conditional requirements can mention arbitrary
protocols on the right hand side.
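As a concrete illustration (a hypothetical declaration, using the standard library's conditional conformance of Array to Equatable):

```swift
// Array conforms to Equatable only when Element : Equatable, so given
// the pair of requirements T : Equatable and T == Array<U>, conditional
// requirement inference introduces U : Equatable.
func f<T, U>(_: T) where T : Equatable, T == Array<U> {}
```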
Introducing these conformance requirements during property map construction
is totally fine when building a top-level generic signature, but when
building a protocol requirement signature, things get a bit tricky.
Because of the design of the requirement machine, it is better if the set
of protocols appearing on the right hand side of conformance requirements
in another protocol (the "protocol dependencies") is known *before* we
start building the requirement signature, because we build the requirement
signatures of all protocols in a connected component of this graph
simultaneously.
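For instance, two protocols whose requirements mention one another end up in the same connected component, so their requirement signatures must be built together (a hypothetical example):

```swift
// Each protocol appears on the right hand side of a conformance
// requirement in the other, so P and Q form a single connected
// component of the protocol dependency graph.
protocol P { associatedtype A : Q }
protocol Q { associatedtype B : P }
```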
Introducing conformance requirements on hitherto-unseen protocols after
the graph of connected components has already been built would require
mutating it in a tricky way, possibly merging connected components.
I didn't find any examples of protocols that rely on conditional
requirement inference in our test suite, or in the source compatibility
suite.
So for now, I'm going to try to disable this feature inside protocols.
Another argument in favor of not doing conditional requirement
inference in protocols is that we don't do the ordinary kind of requirement
inference there either. That is, the following is an error:
protocol P {
  associatedtype T where T == Set<U>
  associatedtype U
}
Unlike with an ordinary top-level generic signature, we don't infer
'U : Hashable' here. So arguably, the current behavior, where protocols
infer these requirements only in the case of a conditional conformance,
is rather odd.
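For contrast, the analogous top-level declaration is accepted, because ordinary requirement inference kicks in (hypothetical function):

```swift
// In a top-level generic signature, writing Set<U> is enough to
// infer U : Hashable, so no explicit constraint on U is needed.
func f<T, U>(_: T) where T == Set<U> {}
```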
This doesn't actually matter, except to make requirement machine
minimization cross-checking work on highly invalid code with
duplicate associated type declarations in the same protocol.
For SIL substituted generic signature construction to work, we must
perform this step if either the conformance requirement or the
concrete type requirement is explicit. Previously, we only did it
if the concrete type requirement was explicit.
This is still somewhat unprincipled and I need to think about it
some more before porting it over to the requirement machine.
Fixes https://bugs.swift.org/browse/SR-15254 / rdar://problem/84827656.
This is a refactoring needed to implement 'verify' mode. The
RequirementMachine computes the requirement signature for an
entire connected component of protocols at once, whereas the
GenericSignatureBuilder only does one protocol at a time.
Using the same request for both in 'verify' mode meant that
we would only call the GSB for the first protocol in a
connected component, and then the RequirementMachine would
fill in the rest.
To fix this, split it up into two requests. The original
RequirementSignatureRequest calls into the GSB, and then
kicks off a RequirementSignatureRequestRQM to get the
requirement signature computed by the RequirementMachine
(possibly cached, if this protocol is part of a connected
component with more than one protocol in it).
We used to assert if a generic requirement of a signature could
be proved from the signature with the requirement removed.
However, in some rare cases minimization can *add* new requirements
as well as remove them.
This is going to come up even more with the RequirementMachine,
where for example the following signature is accepted:
protocol Q { associatedtype B : P }
protocol P { associatedtype A }
<T where T.A : Q, T.A.B == T>
and minimized as follows:
<T where T : P, T.A : Q, T.A.B == T>
I'm looking at a bug where we end up with a signature like
protocol B { ... }
protocol BB : B { }
<Self where Self : B, Self : BB>
While this may be a one-off bug, it's easy enough to check for this
condition with an assert here.
This enabled a gross idiom that should not have been allowed in the first place:
typealias G<T> = Any where T : P
protocol P {}
protocol Q : G<Self> {} // Q inherits from P now!
I'd like to ban this, assuming nothing is actually relying on this behavior.
Returning a null GenericSignature is not the right way to break a cycle,
because then callers have to be careful to handle the case of a null
GenericSignature together with a non-null GenericParamList, for example
in applyGenericArguments().
An even worse problem can occur when a GenericSignatureRequest for a
nested generic declaration requests the signature of the parent context,
which hits a cycle. In this case, we would build a signature where
the first generic parameter did not have depth 0.
This makes the requirement machine upset, so this patch implements a new
strategy to break such cycles. Instead of returning a null
GenericSignature, we build a signature with the correct generic
parameters, but no requirements. The generic parameters can be computed
just by traversing GenericParamLists, which does not trigger more
GenericSignatureRequests, so this should be safe.
I'm about to fix the same bug in the RequirementMachine; to avoid
spurious cross-checking failures in -requirement-machine=verify mode,
just fix this in the GSB as well.
After we drop redundant conformance requirements, the left hand side
of a concrete same-type requirement might become unresolvable:
protocol P {
  associatedtype T where T == Self
}
struct S : P {}
extension P where T == S {}
Here, we begin with <Self where Self : P, Self.T == S>, and then we
drop (Self : P). However, <Self where Self.T == S> is no longer a
valid generic signature.
We can canonicalize Self.T down to Self before we build the new
signature, but we must only do this for concrete same-type requirements,
since canonicalizing the subject type of an abstract same-type
requirement might lose information and produce a trivial requirement
of the form 'T == T'.
This is really unprincipled, and no doubt other counter-examples
exist. The entire procedure for rebuilding a generic signature needs
to be re-designed from first principles.
Fixes rdar://problem/80503090.
A protocol conformance requirement together with a superclass requirement
can collapse down to a same-type requirement if the protocol itself has
a 'where' clause:
protocol P {
  associatedtype T where T == Self
}
class C : P {}
extension P where T : C {}
(Self : P) and (Self.T : C) imply that (Self == C), because protocol P
says that Self.T == Self, and the witness of P.T in C is C, so when we
substitute that in we get Self == C.
Part of rdar://problem/80503090.
Generally we say that a conformance requirement in a generic signature
is redundant if there is some other way to derive this conformance by
starting from another conformance requirement in the same signature,
and possibly following one or more conformance requirements on nested
types defined in protocols.
The notion of a 'valid derivation path' comes into play when you have
something like this:
protocol P {
  associatedtype A
}
protocol Q {
  associatedtype B : P
}
<T where T : P, T.A : Q, T.A.B == T>
Here, we don't want to conclude that (T : P) can be derived from
(T.A : Q)(Self.B : P), because if we drop (T : P) from the signature,
we end up with
<T where T.A : Q, T.A.B == T>
Now in order to recover the witness table for T : P, we need to start
from T.A : Q, which requires the type metadata for T.A, which can
only be recovered from the witness table for T : P, etc. We're stuck.
What we want to do is say that T : P is not redundant, because we
cannot derive it from T.A : Q, because T.A depends on T : P.
However, this check was also too strict. Consider this example:
protocol P {
  associatedtype T where T == Self
}
protocol Q : P {}
<T where T : P, T.T : Q>
The naive algorithm would conclude that T : P is redundant because
it can be derived as (T.T : Q)(Self.T == Self). However, the valid
derivation path check would fail since T.T is derived from (T : P).
The problem is that since T.T is equivalent to T via (Self.T == Self),
we would end up with this minimized signature:
<T where T : P, T : Q>
The derivation path check should canonicalize the type first.
I'm still not 100% convinced this logic is correct, but at least we
have another test case and maybe it's _more_ correct now.
Fixes part of rdar://problem/80503090.
Start treating the null {Can}GenericSignature as a regular signature
with no requirements and no parameters. This not only makes for a much
safer abstraction, but allows us to simplify a lot of the clients of
GenericSignature that would previously have to check for null before
using the abstraction.
If we don't set this flag, we can end up making an invalid GSB into
the canonical builder for some signature. This was caught by
requirement machine cross-checking on the compiler_crashers suite.
When merging two type parameters T1 and T2, if T2 had a concrete type
and T1 had conformance requirements, we did not "concretize" the
conformances.
As a result, the conformance requirements were not marked as redundant,
which would cause a crash (in a no-assert build) or an assertion failure
(in an assert build) later on inside
rebuildSignatureWithoutRedundantRequirements().
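A minimal shape of the problem might look like this (a hypothetical reduction, not the original test case):

```swift
protocol P {}
struct S : P {}

// T1 carries a conformance requirement and is merged with T2, which
// is fixed to a concrete type; (T1 : P) must be "concretized" and
// marked redundant, since S conforms to P concretely.
func f<T1, T2>(_: T1, _: T2) where T1 : P, T1 == T2, T2 == S {}
```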
Fixes rdar://problem/79570734.
When a protocol's requirement signature is being computed by
GenericSignatureBuilder::computeGenericSignature(), we call
checkGenericSignature(), which contains various assertions
that call GenericSignature::isCanonicalTypeInContext().
One of these assertions forgot to pass the GSB instance down
as the 'builder' parameter.
The end result is that we would create a new GSB from the
protocol's requirement signature, which is generally not
well-formed since it does not have a conformance requirement
on 'Self'.
It seems this was harmless, other than the wasted CPU cycles,
but it was caught by some RequirementMachine assertions.