%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% This source file is part of the Swift.org open source project
%
% Copyright (c) 2021 Apple Inc. and the Swift project authors
% Licensed under Apache License v2.0 with Runtime Library Exception
%
% See https://swift.org/LICENSE.txt for license information
% See https://swift.org/CONTRIBUTORS.txt for the list of Swift project authors
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\documentclass[headsepline,bibliography=totoc]{scrreport}

\usepackage{imakeidx}

% FIXME: make links blue?
\usepackage[pdfborder={0 0 0}]{hyperref}
\usepackage{upgreek}
\usepackage{bm}
\usepackage{amsthm}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{multirow}
\usepackage{textgreek}
\usepackage{tikz-cd}
\usepackage{mathtools}
\usepackage{fancyvrb}
\usepackage{framed}

\newcommand{\namesym}[1]{\mathsf{#1}}
\newcommand{\genericparam}[1]{\bm{\mathsf{#1}}}
\newcommand{\proto}[1]{\bm{\mathsf{#1}}}
\newcommand{\protosym}[1]{[\proto{#1}]}
\newcommand{\gensig}[2]{\langle #1\;\textit{where}\;#2\rangle}
\newcommand{\genericsym}[2]{\bm{\uptau}_{#1,#2}}
\newcommand{\assocsym}[2]{[\proto{#1}\colon\namesym{#2}]}
\newcommand{\layoutsym}[1]{[\mathsf{layout\;#1}]}
\newcommand{\supersym}[1]{[\mathsf{superclass}\;#1]}
\newcommand{\concretesym}[1]{[\mathsf{concrete}\;#1]}
\DeclareMathOperator{\gpdepth}{depth}
\DeclareMathOperator{\gpindex}{index}
\DeclareMathOperator{\domain}{domain}
\DeclareNewTOC[
  type = listing,
  counterwithin=chapter,
  float
]{listing}

\newtheorem{theorem}{Theorem}[chapter]
\newtheorem{lemma}{Lemma}[chapter]

\theoremstyle{definition}
\newtheorem{example}{Example}[chapter]

\theoremstyle{definition}
\newtheorem{definition}{Definition}[chapter]

\theoremstyle{definition}
\newtheorem{algorithm}{Algorithm}[chapter]

\RecustomVerbatimEnvironment{Verbatim}{Verbatim}{frame=single,numbers=left}

\makeindex[intoc]

\title{\[
\left<
\begin{array}{cc}
:&=
\\
\uptau&\rightarrow
\end{array}
\right>
\]
The Requirement Machine}
\subtitle{Sound and decidable generic programming for Swift}
\author{Slava Pestov}
\date{}

\pagestyle{headings}

\begin{document}

\maketitle

\tableofcontents

\chapter{Introduction}

The Swift 5.6 compiler incorporates a new generic programming implementation, dubbed the ``requirement machine''. The goals were to improve the correctness, performance and maintainability of this aspect of the compiler. Internally, the requirement machine is based around confluent rewrite systems.

It is really best that you have at least a passing familiarity with the Swift language before reading this book; actually \emph{learning} generic programming by studying how the compiler goes about implementing things might prove to be unnecessarily difficult.

Regardless, in the interest of scoping the effort, defining terminology, and maybe filling in small gaps in your understanding, I'll quickly review Swift's generic programming features in Chapter \ref{languageoverview}.

Chapter~\ref{history} will give the historical context showing how the evolution of the language and implementation motivated work on the requirement machine.

Chapter \ref{genericsignatures} explains the interface to the requirement machine from the point of view of the rest of the compiler; this interface is centered on resolving queries involving type parameters written with respect to a generic signature.

Chapters \ref{monoids}, \ref{monoidsasprotocols} and \ref{rewritesystemintro} give an overview of the mathematical theory of confluent rewrite systems that underpins the requirement machine.

Chapter \ref{protocolsasmonoids} will begin to translate this theory into practice, then Chapter~\ref{associatedtypes} builds up a series of progressively more complex examples showing how rewrite systems can be used to reason about type parameters in a generic signature.

Chapter \ref{requirementmachine} will build upon this intuitive understanding to give a formal definition of the requirement machine rewrite system.

Chapter \ref{propertymap} introduces the ``property map,'' which builds upon the rewrite system and answers queries that cannot be resolved with term rewriting alone. The property map also plays a crucial role in the implementation of superclass and concrete type requirements.

Occasionally, I will talk about some specific part of the compiler codebase. Usually this will be tangential to the main point being discussed. These asides are demarcated with a vertical bar; if you're only interested in learning the theory behind the requirement machine, you can skip them, but if you're planning on contributing to the compiler, they're worth reading. Here's the first one:

\begin{leftbar}
\noindent The source code for the requirement machine is in the \texttt{lib/AST/RequirementMachine} directory of the Swift source repository.
\end{leftbar}

\chapter{Language Overview}\label{languageoverview}

Swift supports \emph{parametric polymorphism}. Functions and types can be equipped with a generic parameter list containing one or more generic parameters, together with a set of requirements which constrain these generic parameters.

\index{substitution}
Wherever a generic definition is used, the use site substitutes concrete types in place of the definition's generic parameters, and these concrete types in turn must satisfy the requirements imposed on those generic parameters. Unlike the C++ template expansion model, where type checking is delayed until concrete substitutions are known, Swift supports separate type checking and compilation of generic definitions across module boundaries. For this reason, the Swift compiler must be able to reason about generic parameters and their requirements abstractly.

The simplest case is a definition with a single unconstrained generic parameter. Here is the identity function over all possible concrete types:
\begin{Verbatim}
func identity<T>(x: T) -> T {
  return x
}
\end{Verbatim}
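At the use site, the compiler infers the concrete substitution for $\genericparam{T}$ from the argument; for example:
\begin{Verbatim}
let n = identity(x: 42)       // T is Int
let s = identity(x: "hello")  // T is String
\end{Verbatim}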

\index{protocol}
\index{inheritance clause}
You can't do much with an unconstrained generic parameter type other than pass its instances around, as the values themselves are completely opaque. In order to manipulate ``the contents'' of generic values, generic parameters must be constrained in some manner. This restricts the set of substitutable concrete types, but in turn introduces capabilities on the generic type.

\index{conformance requirement}
The most fundamental kind of constraint is the \emph{protocol conformance requirement}. A protocol conformance requirement can be stated in a generic parameter's inheritance clause:
\begin{Verbatim}
func draw<S : Shape>(shape: S, at points: [(Int, Int)]) {
  // shape.draw(x:y:) exists because `shape : S'
  // and `S : Shape'.
  points.forEach { shape.draw(x: $0.0, y: $0.1) }
}

protocol Shape {
  func draw(x: Int, y: Int)
}
\end{Verbatim}

\index{associated type}
\index{nested type}
Protocols can define associated types, the canonical example being the lazy iterator protocol from the standard library:
\begin{Verbatim}
protocol IteratorProtocol {
  associatedtype Element
  mutating func next() -> Element?
}
\end{Verbatim}
A generic function that takes an iterator and returns its second element can be defined by constraining a generic parameter $\genericparam{I}$ to $\proto{IteratorProtocol}$, and declaring a return type of $\genericparam{I}.\namesym{Element}$:
\begin{Verbatim}
func second<I : IteratorProtocol>(of iterator: inout I) -> I.Element {
  _ = iterator.next() // drop the first element and advance
  return iterator.next()!
}
\end{Verbatim}
The \texttt{second(of:)} function defines one generic parameter, but two \emph{type parameters} are visible in its lexical scope: $\genericparam{I}$ and $\genericparam{I}.\namesym{Element}$.

Associated types can also state a protocol conformance requirement, just like generic parameters; for example, you can define the protocol of sequence types which vend an iterator (this version is slightly simpler than what's in the standard library; you'll see the real definition shortly):
\begin{Verbatim}
protocol Sequence {
  associatedtype Iterator : IteratorProtocol
  func makeIterator() -> Iterator
}
\end{Verbatim}
% present this after the code example
A different version of the above \texttt{second(of:)} that ranges over types conforming to $\proto{Sequence}$ has \emph{three} type parameters: $\genericparam{S}$, $\genericparam{S}.\namesym{Iterator}$, and $\genericparam{S}.\namesym{Iterator}.\namesym{Element}$. Notice how all three appear in the definition below.
\begin{Verbatim}
func second<S : Sequence>(of sequence: S) -> S.Iterator.Element {
  var iterator: S.Iterator = sequence.makeIterator()
  _ = iterator.next()
  return iterator.next()!
}
\end{Verbatim}
\index{desugaring}
Instead of stating a protocol conformance requirement in the inheritance clause of a generic parameter, you can attach a more general \texttt{where} clause to the parent declaration. A \texttt{where} clause can constrain nested type parameters, not just top-level generic parameters. It can also state more kinds of requirements.
\index{where clause}

\eject
The \texttt{second(of:)} function can be written with a \texttt{where} clause:
\begin{Verbatim}
func second<I>(of iterator: inout I) -> I.Element
    where I : IteratorProtocol {...}
\end{Verbatim}
Similarly, the $\proto{Sequence}$ protocol could have used one instead:
\begin{Verbatim}
protocol Sequence {
  associatedtype Iterator where Iterator : IteratorProtocol
}
\end{Verbatim}
Or even like this---a \texttt{where} clause attached to a protocol is no different than one on an associated type (you can even mix both; it's a purely stylistic choice, and in more complicated examples carefully organizing requirements helps readability):
\begin{Verbatim}
protocol Sequence where Iterator : IteratorProtocol {
  associatedtype Iterator
}
\end{Verbatim}

The next example shows a requirement that cannot be expressed without using \texttt{where}. First define a protocol for linearly-ordered types:
\begin{Verbatim}
protocol Comparable {
  static func <(lhs: Self, rhs: Self) -> Bool
}
\end{Verbatim}
Now you can define a function ranging over types conforming to $\proto{Sequence}$ whose element type conforms to $\proto{Comparable}$:
\begin{Verbatim}
func minimum<S>(of elements: S) -> S.Iterator.Element
    where S : Sequence,
          S.Iterator.Element : Comparable {
  // Elements of this iterator are Comparable
  var iterator = elements.makeIterator()
  ...
}
\end{Verbatim}
A particularly powerful kind of generic requirement---and perhaps the entire reason the requirement machine exists---is the same-type requirement.

Suppose you're working with $\genericparam{S}$, a generic parameter ranging over $\proto{Sequence}$. Writing $\genericparam{S}.\namesym{Iterator}.\namesym{Element}$ everywhere will get awkward fast. As I mentioned, the real version of $\proto{Sequence}$ in the standard library is slightly more complex. Indeed, it defines an $\namesym{Element}$ associated type in the protocol itself, so that the user can write $\genericparam{S}.\namesym{Element}$. Simply adding the associated type definition is not enough, because now $\genericparam{S}.\namesym{Element}$ and $\genericparam{S}.\namesym{Iterator}.\namesym{Element}$ are \emph{different} type parameters and cannot be used interchangeably.

\index{same-type requirement}
To express the restriction that concrete types conforming to $\proto{Sequence}$ must provide an iterator type compatible with the element type, a \texttt{where} clause in the protocol can state a same-type requirement:
\begin{Verbatim}
protocol Sequence {
  associatedtype Element
      where Element == Iterator.Element
  associatedtype Iterator : IteratorProtocol
}
\end{Verbatim}
Now, for a generic parameter $\genericparam{S}$ ranging over $\proto{Sequence}$, $\genericparam{S}.\namesym{Iterator}.\namesym{Element}$ and $\genericparam{S}.\namesym{Element}$ become two equivalent ways to refer to the \emph{same} type parameter.
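
With the same-type requirement in place, the \texttt{second(of:)} function from earlier can now return $\genericparam{S}.\namesym{Element}$ directly:
\begin{Verbatim}
func second<S : Sequence>(of sequence: S) -> S.Element {
  var iterator = sequence.makeIterator()
  _ = iterator.next()
  return iterator.next()!
}
\end{Verbatim}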

\index{requirement}
\index{conformance requirement}
\index{superclass requirement}
\index{layout requirement}
\index{concrete type requirement}
So far you've seen protocol conformance requirements and same-type requirements between type parameters. Here is the full list of requirement kinds (a contrived signature exercising each kind follows the list):

\index{substitution}
\begin{itemize}
\item \textbf{Protocol conformance requirements}: written as $\namesym{T}\colon\proto{P}$ where $\namesym{T}$ is a type parameter and $\proto{P}$ is a protocol. States that the concrete substitution for $\namesym{T}$ must conform to $\proto{P}$.
\item \textbf{Layout requirements}: written as $\namesym{T}\colon\proto{L}$ where $\namesym{T}$ is a type parameter and $\proto{L}$ is a layout constraint. The only kind of layout constraint available in the surface language is $\namesym{AnyObject}$, which states that the concrete type substituted for $\namesym{T}$ must be \emph{some} class type. This makes the reference storage qualifiers \texttt{weak}, \texttt{unowned} and \texttt{unowned(unsafe)} available for use on stored properties of type $\namesym{T}$. Other kinds of layout constraints are available via the unofficial \texttt{@\_specialize} attribute, but other than the fact that they exist, the details of their implementation don't really matter to the requirement machine.
% implies layout requirement
\item \textbf{Superclass requirements}: written as $\namesym{T}\colon\namesym{C}$ where $\namesym{T}$ is a type parameter and $\namesym{C}$ is a class. States that the concrete substitution for $\namesym{T}$ must be an instance of $\namesym{C}$ or some subclass of $\namesym{C}$. As with protocol conformance requirements, superclass requirements can also be stated in the inheritance clause of a generic parameter; this is completely equivalent to the \texttt{where} clause form. If the class is generic, the type constructor can be instantiated with other type parameters; for example,
\begin{Verbatim}
func cache<S : Sequence, C>(elements: S, in cache: C)
    where C : Cache<S.Element>
\end{Verbatim}
\item \textbf{Concrete type requirements}: written as $\namesym{T} == \namesym{U}$, where exactly one of $\namesym{T}$ or $\namesym{U}$ is a type parameter, and the other is a concrete type. This states that the concrete type substitution for the type parameter must exactly equal the concrete type (no subclassing, implicit conversions, etc., are allowed).
\item \textbf{Same-type requirements}: written as $\namesym{T}==\namesym{U}$, where both $\namesym{T}$ and $\namesym{U}$ are type parameters. This requirement states that whatever concrete types are substituted for $\namesym{T}$ and $\namesym{U}$, they must be identical types.
\end{itemize}
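
To make the taxonomy concrete, here is a contrived signature---not from any real API---that states one requirement of each kind; the conformance requirement is the familiar \texttt{S : Sequence} in the generic parameter list, and \texttt{Cache} is the generic class from the superclass example:
\begin{Verbatim}
func demo<S : Sequence, C, T, U>(elements: S, cache: C, t: T, u: U)
    where C : Cache<S.Element>, // superclass requirement
          T : AnyObject,        // layout requirement
          S.Element == Int,     // concrete type requirement
          S.Element == U        // same-type requirement
\end{Verbatim}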

What about a same-type requirement where \emph{both} sides are concrete types? The compiler allows this, but always produces a warning, because it looks a little silly and doesn't give you any more expressive power. Something like $\namesym{Int} == \namesym{Int}$ is tautological, and $\namesym{Int} == \namesym{String}$ is invalid. To be fair, you \emph{can} state a non-trivial requirement with concrete types on both sides if at least one of them involves a generic type constructor instantiated with a type parameter, for example:
\begin{Verbatim}
func sum<S : Sequence>(elements: S) -> Int
    where Array<S.Element> == Array<Int> {...}
\end{Verbatim}
But in this case, you may as well just peel off the $\namesym{Array}\langle\bullet\rangle$ type constructor from both sides and write a simpler concrete type requirement:
\[\genericparam{S}.\namesym{Element} == \namesym{Int}.\]

\index{power set}
One more small language feature is worth mentioning. You might want to define a function that constructs the set of all subsets of a set, i.e., the power set. The $\namesym{Set}\langle\bullet\rangle$ type constructor requires that its argument conforms to $\proto{Hashable}$:
\begin{Verbatim}
struct Set<Element : Hashable> {...}
\end{Verbatim}
When you declare your \texttt{powerSet(of:)} function, the requirement $\genericparam{Element}\colon\proto{Hashable}$ doesn't need to be stated, since it can be inferred from the application $\namesym{Set}\langle\genericparam{Element}\rangle$ appearing in the type signature of \texttt{powerSet(of:)}:
\begin{Verbatim}
func powerSet<Element>(of set: Set<Element>) -> Set<Set<Element>> {...}
\end{Verbatim}
\index{requirement inference}
\index{desugaring}
This feature is known as \emph{requirement inference}, and it is an example of what is called a \emph{desugaring}. By the time the requirement machine comes into play, all inferred requirements have already become explicit.
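
In other words, after desugaring, \texttt{powerSet(of:)} is treated exactly as if the requirement had been written out by hand:
\begin{Verbatim}
func powerSet<Element>(of set: Set<Element>) -> Set<Set<Element>>
    where Element : Hashable {...}
\end{Verbatim}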

% talk about concrete types conforming to protocols and mention associated type inference

We've now seen enough to dive---or at least begin dipping our toes---into the depths of the implementation. But first, a quick history lesson.

\chapter{A Little Bit of History}\label{history}

The original Swift 1.0 language supported all the modern kinds of generic requirements except for layout requirements; those did not exist because $\proto{AnyObject}$ was actually a special protocol with built-in support from the compiler, but it behaved much like the $\proto{AnyObject}$ layout constraint does today.

However, Swift 1.0 imposed two major restrictions on the expressivity of protocol definitions:
\begin{itemize}
\item protocol definitions did not allow \texttt{where} clauses,
\item associated types in protocols could not state a conformance requirement which referenced the protocol containing the associated type, either directly or indirectly.
\end{itemize}
Swift 4.0 introduced \texttt{where} clauses on protocols and associated types \cite{se0142}. Swift 4.1 lifted the restriction prohibiting recursive protocol conformances \cite{se0157}. Both features are desirable, as the modern $\proto{Collection}$ protocol demonstrates as part of the definition of the $\namesym{SubSequence}$ associated type:
\begin{Verbatim}
protocol Collection : Sequence {
  associatedtype SubSequence : Collection
      where SubSequence.Element == Element,
            SubSequence.SubSequence == SubSequence
}
\end{Verbatim}
Intuitively, these requirements have the following interpretation:
\begin{itemize}
\item Slicing a collection can return another, possibly different type of collection, but the slice must have the same element type as the original.
\item If you slice a slice, you get the same type, since it would not be desirable if slices stacked recursively.
\end{itemize}

\index{recursive conformance requirement}
\index{conformance requirement!recursive|see{recursive conformance requirement}}
A requirement like $\namesym{SubSequence}\colon\proto{Collection}$ is called a \emph{recursive conformance requirement}, because it appears inside the definition of the $\proto{Collection}$ protocol itself.

In the $\proto{Collection}$ protocol, the recursion via $\namesym{SubSequence}$ only goes one level deep, because the second same-type requirement ``ties it off''. That is, if $\genericparam{T}$ is constrained to $\proto{Collection}$, $\genericparam{T}.\namesym{SubSequence}$ is a distinct type parameter conforming to $\proto{Collection}$, but $\genericparam{T}.\namesym{SubSequence}.\namesym{SubSequence}$ is equivalent to $\genericparam{T}.\namesym{SubSequence}$.
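
You can see the tying-off in action in a small (hypothetical) generic function; recall that \texttt{dropFirst()} on a collection returns its \texttt{SubSequence}:
\begin{Verbatim}
func slice<T : Collection>(_ collection: T) {
  let slice: T.SubSequence = collection.dropFirst()
  // T.SubSequence.SubSequence == T.SubSequence, so this type checks:
  let sliceOfSlice: T.SubSequence = slice.dropFirst()
  ...
}
\end{Verbatim}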

However, it is also permissible to use an unconstrained recursive conformance requirement to define an infinite sequence of type parameters. The SwiftUI $\proto{View}$ protocol is one example:
\begin{Verbatim}
protocol View {
  associatedtype Body : View
  var body: Body { get }
}
\end{Verbatim}
If $\genericparam{V}$ is constrained to conform to $\proto{View}$, there is an infinite sequence of unique type parameters rooted at $\genericparam{V}$:
\begin{align*}
&\genericparam{V}\\
&\genericparam{V}.\namesym{Body}\\
&\genericparam{V}.\namesym{Body}.\namesym{Body}\\
&\genericparam{V}.\namesym{Body}.\namesym{Body}.\namesym{Body}\\
&\cdots
\end{align*}

In contrast, in the absence of recursive protocol conformances, a generic signature can only induce a \emph{finite} set of distinct type parameters.

In Swift 3.1 and older, the compiler component for reasoning about type parameters had a simple design:
\begin{algorithm}[Old \texttt{ArchetypeBuilder} algorithm]\label{archetypebuilder} The inputs are a list of generic parameters and generic requirements. The output is a directed graph.

\index{equivalence class}
A path beginning at a root node corresponds to a valid type parameter. Multiple type parameters that are equivalent via same-type requirements are different paths that reach the same node. A node corresponds to an equivalence class of type parameters. Nodes store a list of conformance, superclass and concrete type requirements that apply to each type parameter in the equivalence class.

The algorithm proceeds in two phases:
\begin{enumerate}
\item (Expand) Begin by building a ``forest'' of type parameters, with generic parameters at the roots. Each generic parameter node starts out without children.
\begin{enumerate}
\item For every top-level requirement, find the subject generic parameter, record the requirement in the generic parameter's node, and if it's a conformance requirement, add new children corresponding to each associated type of the protocol.
\item Recursively record and expand requirements on any associated type nodes introduced above.
\end{enumerate}
\item (Union-find) Then, process top-level same-type requirements:
\begin{enumerate}
\item First, resolve the left and right-hand sides, and merge the two nodes into an equivalence class.
\item Find any pairs of child nodes in the two merged nodes that have the same name, and recursively merge those child nodes as well.
\end{enumerate}
\end{enumerate}
\end{algorithm}

% add a graph

A side effect of both the recursive expansion and union-find steps is the gathering of a list of requirements in each equivalence class. These gathered requirements were used to answer queries such as ``does this type parameter conform to a protocol''.

This algorithm survived the introduction of protocol \texttt{where} clauses in Swift 4.0 with some relatively minor changes; namely, the processing of same-type requirements became somewhat more complex, since they could be introduced at any level in the graph.

When recursive conformances were introduced in Swift 4.1, the \texttt{ArchetypeBuilder} underwent a major overhaul, where it was renamed to \texttt{GenericSignatureBuilder}. Since the equivalence class graph was no longer necessarily finite, the biggest change was the move to a lazy evaluation approach---traversing a hitherto-unvisited part of the equivalence class graph would now lazily expand conformance requirements as needed.

Unfortunately, the limitations of this lazy expansion approach soon made themselves apparent. The equivalence class graph could be mutated both as a consequence of adding the initial set of requirements in a generic signature, and also by lazy expansion performed while answering queries. The highly mutable nature of the implementation made it difficult to understand and debug. It also became a performance problem, because ``expanding too much'' had to be balanced against ``not expanding enough.''

For example, any generic signature referencing one of the more complicated protocol towers, such as $\proto{RangeReplaceableCollection}$ or $\proto{FixedWidthInteger}$ from the standard library, would re-build the entire sub-graph of all nested associated types of each protocol from scratch. On the other hand, skipping expansion of recursive nested types could lead to same-type requirements being missed, which would result in the incorrect formation of multiple distinct equivalence classes that should actually be a single class.

I later realized the lazy expansion strategy suffers from an even more fundamental problem; as you will see in Chapter~\ref{monoidsasprotocols}, the full generality of the generics system makes generic signature queries undecidable. The design of the \texttt{GenericSignatureBuilder} was not sufficiently principled to determine if the input was too complex for its internal model, either crashing or silently producing incorrect results if this occurred.

The development of the requirement machine was motivated by the desire to find an immutable, closed-form representation of an entire, potentially infinite, type parameter graph. While the undecidability of the problem means this is not possible in the general case, I believe the formulation in terms of a confluent rewrite system should handle any reasonable generic signatures that appear in practice.

\chapter{Generic Signatures}\label{genericsignatures}
\begin{listing}\caption{A singly linked list data type}\label{linkedlist}
\begin{Verbatim}
enum LinkedList<Element> {
  indirect case node(Element, LinkedList<Element>)
  case none

  func map<Result>(_ fn: (Element) -> Result) -> LinkedList<Result> {
    switch self {
    case .none:
      return .none
    case let .node(x, xs):
      return .node(fn(x), xs.map(fn))
    }
  }
}
\end{Verbatim}
\end{listing}

\index{nested generic declaration}
The compiler itself is a large software system, and the generics system is just one component; you can look at its interface and implementation separately. The ``interface'' part does not expose the ``requirement machine,'' rewrite systems, monoids, or any other such nonsense; in fact, the interface of the generics system barely changed when I implemented the requirement machine. In this section, I'll talk about the interface to the generics system, while hand-waving away the implementation details.

Generic declarations can be nested almost arbitrarily in Swift:
\begin{itemize}
\item Generic functions may appear inside other generic functions.
\item Generic functions may appear inside generic types.
\item Generic types may appear inside other generic types.
\end{itemize}
The only restriction at this time is that generic types cannot be nested in generic functions. This mostly stems from some limitations in other parts of the compiler, and is not inherent to the generics system.
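
For instance, the compiler rejects the following at the time of writing:
\begin{Verbatim}
func makeStack<T>(seed: T) {
  struct Stack<U> {...} // error: generic type nested
}                       // inside a generic function
\end{Verbatim}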

Listing \ref{linkedlist} shows the implementation of a linked list with a \texttt{map(\_:)} function. Both $\namesym{LinkedList}$ and \texttt{map(\_:)} introduce a new generic parameter. Since \texttt{map(\_:)} is nested within \texttt{LinkedList}, both generic parameters are visible from the body of \texttt{map(\_:)}.

The compiler doesn't really care about the names of generic parameters, but it still needs some way to talk about them, so it assigns each one a \emph{depth} and \emph{index}. The depth is the level of nesting; every declaration that has a generic parameter list increases the depth of any generic parameters introduced, with generic parameters in the outer-most declaration having a depth of 0. The index is the ordinal of the generic parameter relative to a single generic parameter list. In the linked list example, there are two generic parameters:
\begin{itemize}
\item $\genericparam{Element}$ with depth 0 and index 0, and
\item $\genericparam{Result}$ with depth 1 and index 0.
\end{itemize}
\index{canonical type}
Most of the time I will still refer to generic parameters by name, but in some formal definitions I will use the notation $\uptau_{d,i}$, where of course $d$ is the depth and $i$ is the index.\footnote{And sometimes, the compiler loses track of a name and accidentally spits out $\mathtt{\uptau\_1\_0}$ or something in a diagnostic. That's a bug when it happens.}

While the depth and index uniquely identify a generic parameter visible at a given source location, they do not uniquely identify a generic parameter globally. In order to answer a question about a generic parameter or one of its nested type parameters, the query must be asked with respect to some generic signature.

\index{generic signature}
\begin{definition} A \emph{generic signature} is a ``flat'' list of all generic parameters that are in scope at some source location, together with all generic requirements from every \texttt{where} clause.
\end{definition}
\begin{example}
The generic signature of \texttt{map(\_:)} has two generic parameters and no requirements:
\[\langle\genericparam{Element}, \genericparam{Result}\rangle\]
\end{example}
\begin{example}
\index{where clause}
Suppose we define an extension of $\namesym{LinkedList}$ with a \texttt{where} clause:
\begin{Verbatim}
extension LinkedList where Element == Int {
  func sum() -> Int {
    switch self {
    case .none: return 0
    case let .node(x, xs): return x + xs.sum()
    }
  }
}
\end{Verbatim}
The extension starts from the generic signature of $\namesym{LinkedList}$ and adds a new same-type requirement. The \texttt{sum()} function has this generic signature:
\[\gensig{\genericparam{Element}}{\genericparam{Element}==\namesym{Int}}\]
\end{example}
\begin{example}
Here is a funny example. Suppose we want to build linked lists from the elements of some lazy iterator:
\begin{Verbatim}
extension LinkedList {
  init<I : IteratorProtocol>(from iterator: inout I)
      where I.Element == Element {
    if let elt = iterator.next() {
      self = .node(elt, .init(from: &iterator))
    } else {
      self = .none
    }
  }
}
\end{Verbatim}
The declaration of \texttt{.init(from:)} has the following generic signature:
\[\gensig{\genericparam{Element}, \genericparam{I}}{\genericparam{Element}==\genericparam{I}.\namesym{Element}}\]
Or alternatively, using the ``canonical type'' notation,
\[\gensig{\genericsym{0}{0}, \genericsym{1}{0}}{\genericsym{0}{0}==\genericsym{1}{0}.\namesym{Element}}\]
\end{example}

\index{protocol Self type}
A protocol has a generic signature too, even though it doesn't have a generic parameter list in the source language.

\begin{definition}
The generic signature of a protocol $\proto{P}$ has a single generic parameter $\genericparam{Self}$, together with a conformance requirement that says $\genericparam{Self}$ must conform to $\proto{P}$:
\[\gensig{\genericparam{Self}}{\genericparam{Self}\colon\proto{P}}\]
\end{definition}
This is where the $\genericparam{Self}$ type inside a protocol definition comes from; it is the generic parameter that is substituted with the concrete conforming type when the protocol is used. Since protocols cannot be nested inside other declarations, $\genericparam{Self}$ is always at depth 0 and index 0; or in other words, its canonical type is $\genericsym{0}{0}$.

Generic signatures do not tell you anything about what is ``inside'' protocols. The protocol $\proto{P}$ might have one or more associated types, or a \texttt{where} clause, but none of this information appears in the protocol's generic signature. There is another, related concept that peels back one level of indirection to encode the ``guts'' of a protocol.

\index{requirement signature}
\begin{definition} The \emph{requirement signature} of a protocol $\proto{P}$ is the single generic parameter $\genericparam{Self}$ together with any requirements collected from inheritance clauses and \texttt{where} clauses written inside the protocol.
\end{definition}

For example, here is the requirement signature of the $\proto{Sequence}$ protocol:
\begin{align*}
\gensig{\genericparam{Self}}{&\genericparam{Self}.\namesym{Element}==\genericparam{Self}.\namesym{Iterator}.\namesym{Element},\\
&\genericparam{Self}.\namesym{Iterator}\colon\proto{IteratorProtocol}}
\end{align*}

\index{desugaring}
\index{protocol inheritance}
The definition of a requirement signature mentions that it collects requirements from inheritance clauses and \texttt{where} clauses. Associated types can have inheritance clauses, but so can protocols. How can protocol inheritance be represented in a protocol requirement signature? Well, it's just a protocol conformance requirement on $\genericparam{Self}$. For example, the requirement signature of $\proto{Hashable}$ shown in Listing \ref{hashableequatable} records the inheritance relationship with $\proto{Equatable}$:
\[\gensig{\genericparam{Self}}{\genericparam{Self}\colon\proto{Equatable}}.\]
\begin{listing}\caption{Simplified forms of the $\proto{Hashable}$ and $\proto{Equatable}$ protocols.}\label{hashableequatable}
\begin{Verbatim}
protocol Hashable : Equatable {
  func hash(into: inout Hasher)
}

protocol Equatable {
  static func ==(lhs: Self, rhs: Self) -> Bool
}
\end{Verbatim}
\end{listing}

\index{name lookup}
There is an important caveat here. Protocol inheritance is not equivalent to writing down a conformance requirement on $\genericparam{Self}$ in the \emph{source language}. The reason is that protocol inheritance has a special meaning to name lookup, which sits ``below'' the generics implementation in the compiler. In order to build the requirement signature of a protocol in the first place, the compiler performs name lookup, so name lookup must rely on constructs more primitive than the generic requirements in order to resolve identifiers without causing an infinite recursion through the compiler. For this reason, the compiler emits a warning upon encountering a conformance requirement on $\genericparam{Self}$ not stated via protocol inheritance syntax.
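
For example, assuming some protocol \texttt{Animal}, the first declaration below uses inheritance syntax, while the second states the same conformance requirement in a \texttt{where} clause and draws the warning:
\begin{Verbatim}
protocol Pet : Animal {}             // ok: inheritance syntax
protocol Wild where Self : Animal {} // warning
\end{Verbatim}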
\vfill
\eject
\section{Reasoning About Type Parameters}
\begin{listing}\caption{A function to compare the first elements of two sequences for equality}\label{areequalex}
\begin{Verbatim}
func areFirstElementsEqual<S1 : Sequence, S2 : Sequence>(
    _ first: S1, _ second: S2) -> Bool
    where S1.Element : Equatable,
          S1.Element == S2.Element {
  var firstIter = first.makeIterator()
  var secondIter = second.makeIterator()

  let firstElt = firstIter.next()!
  let secondElt = secondIter.next()!

  // `==' called with S1.Iterator.Element and S2.Iterator.Element
  return firstElt == secondElt
}
\end{Verbatim}
\end{listing}
To understand how generic code might be type checked, let's look at the example in Listing \ref{areequalex}. While it looks rather simple, there's a fair amount of complexity here. The generic signature of the \texttt{areFirstElementsEqual(\_:\_:)} function itself is:
\begin{align*}
\gensig{\genericparam{S1},\genericparam{S2}}
{&\genericparam{S1}\colon\proto{Sequence},\\
&\genericparam{S2}\colon\proto{Sequence},\\
&\genericparam{S1}.\namesym{Element}\colon\proto{Equatable},\\
&\genericparam{S1}.\namesym{Element}==\genericparam{S2}.\namesym{Element}}
\end{align*}
Now consider the call to the \texttt{==} operator on line 11. The \texttt{==} operator is defined in the $\proto{Equatable}$ protocol as taking two parameters, both of type $\genericparam{Self}$, given by the protocol generic signature $\gensig{\genericparam{Self}}{\genericparam{Self}\colon\proto{Equatable}}$. At the call site, the arguments \texttt{firstElt} and \texttt{secondElt} are of type $\genericparam{S1}.\namesym{Iterator}.\namesym{Element}$ and $\genericparam{S2}.\namesym{Iterator}.\namesym{Element}$, respectively.

In order for this application of \texttt{==} to be valid, the type checker must prove two things:
\begin{enumerate}
\item first, that $\genericparam{S1}.\namesym{Iterator}.\namesym{Element}$ and $\genericparam{S2}.\namesym{Iterator}.\namesym{Element}$ are the same type;
\item second, that this common type (either one will do, since they're the same type!) conforms to $\proto{Equatable}$.
\end{enumerate}
Both facts can be proven as follows:
\begin{itemize}
\item $\genericparam{S1}\colon\proto{Sequence}$ together with the same-type requirement in the $\proto{Sequence}$ protocol implies $\genericparam{S1}.\namesym{Element}==\genericparam{S1}.\namesym{Iterator}.\namesym{Element}$.
\item $\genericparam{S2}\colon\proto{Sequence}$ together with the same-type requirement in the $\proto{Sequence}$ protocol implies $\genericparam{S2}.\namesym{Element}==\genericparam{S2}.\namesym{Iterator}.\namesym{Element}$.
\item $\genericparam{S1}.\namesym{Element}==\genericparam{S2}.\namesym{Element}$ together with the above implies the following, which concludes the proof of the first fact:
\[\genericparam{S1}.\namesym{Iterator}.\namesym{Element}==\genericparam{S2}.\namesym{Iterator}.\namesym{Element}.\]
\item $\genericparam{S1}.\namesym{Element}\colon\proto{Equatable}$ together with the above implies the second fact.
\end{itemize}

\section{Generic Signature Queries}\label{intqueries}
% define what an archetype is

Proving properties about type parameters is non-trivial, which is why this functionality is encapsulated in a series of re-usable \emph{generic signature queries}.
Table \ref{querytable} shows a summary, grouping the queries into three sets. You can probably guess what some of them do just by seeing their names. You will see the actual implementation of these queries later, in Section~\ref{implqueries}.
\begin{leftbar}
\noindent In the compiler, each generic signature query is a method in the \texttt{GenericSignature} class.
\end{leftbar}

\begin{table}\caption{Generic signature queries}\label{querytable}
\begin{center}
\begin{tabular}{|l|l|}
\hline
Query kind&Query name\\
\hline
\hline
\multirow{3}{7.5em}{Predicates}&\texttt{requiresProtocol()}\\
&\texttt{requiresClass()}\\
&\texttt{isConcreteType()}\\
\hline
\multirow{4}{7.5em}{Properties}&\texttt{getRequiredProtocols()}\\
&\texttt{getSuperclassBound()}\\
&\texttt{getConcreteType()}\\
&\texttt{getLayoutConstraint()}\\
\hline
\multirow{3}{7.5em}{Canonical types}&\texttt{areSameTypeParameterInContext()}\\
&\texttt{isCanonicalTypeInContext()}\\
&\texttt{getCanonicalTypeInContext()}\\
\hline
\end{tabular}
\end{center}
\end{table}

The simplest of all queries are the binary predicates, which respond with a true/false answer.
\begin{description}
\item [\texttt{requiresProtocol()}] answers if a type parameter conforms to a protocol.
\item [\texttt{requiresClass()}] answers if a type parameter is represented at runtime as a single retainable pointer.
\item [\texttt{isConcreteType()}] answers if a type parameter is fixed to a concrete type.
\end{description}

\begin{example}
\index{protocol inheritance}
\index{conformance requirement}
Recall from Listing \ref{hashableequatable} that there is an inheritance relationship from $\proto{Hashable}$ to $\proto{Equatable}$, and take the generic signature
\[\gensig{\genericparam{T}}{\genericparam{T}\colon\proto{Hashable}}.\]
Since protocol inheritance is expressed as a conformance requirement on $\genericparam{Self}$, both of these queries will return true:
\begin{itemize}
\item \texttt{requiresProtocol(T, Hashable)}
\item \texttt{requiresProtocol(T, Equatable)}
\end{itemize}
\end{example}

\index{layout requirement}
\index{superclass requirement}
Next, I'll show two examples of \texttt{requiresClass()}. The $\namesym{AnyObject}$ layout constraint, which states that an instance of a type is represented as a single retainable pointer, can either be stated explicitly, or be inferred from a superclass requirement.

\begin{example}
An example of the first case, where \texttt{requiresClass(T.Element)} is true:
\[\gensig{\genericparam{T}}{\genericparam{T}\colon\proto{Sequence}, \genericparam{T}.\namesym{Element}\colon\proto{Executor}}\]
This follows from the definition of the $\proto{Executor}$ protocol in the standard library, which constrains $\genericparam{Self}$ to $\namesym{AnyObject}$:
\begin{Verbatim}
protocol Executor : AnyObject {...}
\end{Verbatim}
\end{example}

\begin{example}An example of the second case, where \texttt{requiresClass(C)} is true:
\[\gensig{\genericparam{C}}{\genericparam{C}\colon\namesym{Cache}}\]
\begin{Verbatim}
class Cache {...}
\end{Verbatim}
\end{example}

\begin{example}
\index{concrete type requirement}
Consider this generic signature:
\[\gensig{\genericparam{T},\genericparam{U}}{\genericparam{T}\colon\proto{Sequence},\genericparam{T}.\namesym{Element}==\namesym{Array}\langle\genericparam{U}\rangle}\]
Here both of the following are true:
\begin{itemize}
\item \texttt{isConcreteType(T.Element)}
\item \texttt{isConcreteType(T.Iterator.Element)}
\end{itemize}
\end{example}

% what is 'empty type' or just drop 'if there is one'
% 'empty layout constraint'
The next set of queries derive more complex properties that are not just true/false predicates.
\begin{description}
\item [\texttt{getRequiredProtocols()}] returns the list of all protocols that a type parameter must conform to. The list is minimal in the sense that no protocol inherits from any other protocol in the list (Definition~\ref{minimalproto}), and this list is sorted in canonical protocol order (Definition~\ref{canonicalprotocol}).
\index{superclass requirement}
\item [\texttt{getSuperclassBound()}] returns the superclass bound of a type parameter if there is one, or the empty type otherwise.
\index{concrete type requirement}
\item [\texttt{getConcreteType()}] returns the concrete type to which a type parameter is fixed if there is one, or the empty type otherwise.
\index{layout requirement}
\item [\texttt{getLayoutConstraint()}] returns the layout constraint describing a type parameter's runtime representation if there is one, or the empty layout constraint otherwise.
\end{description}

\begin{example}
% move code example before description
In the following, \texttt{getSuperclassBound(T)} is $\namesym{G}\langle\genericparam{U}\rangle$:
\[\gensig{\genericparam{T}, \genericparam{U}}{\genericparam{T}\colon\namesym{G}\langle\genericparam{U}\rangle}\]
\begin{Verbatim}
class G<A> {}
\end{Verbatim}
\end{example}

\begin{example}
% Range's Element is the same as Range.Element
In the following, \texttt{getConcreteType(T.Index)} is $\namesym{Int}$:
\[\gensig{\genericparam{T}}{\genericparam{T}\colon\proto{Collection},\genericparam{T}.\namesym{Indices}==\namesym{Range}\langle\namesym{Int}\rangle}\]
This is a non-trivial consequence of several requirements:
\begin{itemize}
\item the concrete type requirement $\genericparam{T}.\namesym{Indices}==\namesym{Range}\langle\namesym{Int}\rangle$ stated above;
\item $\genericparam{Self}.\namesym{Indices}.\namesym{Element}==\genericparam{Self}.\namesym{Index}$ in the requirement signature of $\proto{Collection}$;
\item the standard library's conditional conformance of $\namesym{Range}\langle\namesym{Bound}\rangle$ to $\proto{Collection}$, available when $\namesym{Bound}$ conforms to $\proto{Strideable}$.
\end{itemize}
\end{example}

\section{Canonical Types}\label{canonicaltypes}

\index{canonical type}
The final set of generic signature queries concern so-called canonical types, but before I can say what a canonical type \emph{is}, I need to formally define type parameters.

% each associated type is defined in a protocol that the prefix conforms to
% maybe a recursive definition is better

\begin{definition} A \emph{type parameter} is a non-empty sequence where the first element is a generic parameter (or really, a depth and an index), and all remaining elements are associated types. The \emph{length} of a type parameter $T$, denoted $|T|$, is the length of this sequence. For example, $\genericparam{S}.\namesym{Iterator}.\namesym{Element}$ is the sequence $(\genericparam{S}, \namesym{Iterator}, \namesym{Element})$, which has length 3.
\end{definition}
\begin{leftbar}
\noindent In the compiler implementation, type parameters are represented recursively. A type parameter is either a \texttt{GenericTypeParameterType}, or a \texttt{DependentMemberType}\footnote{The ``dependent type'' terminology comes from C++. These are not dependent types in the sense of the lambda cube, but rather dependent on the concrete type substitution when the generic declaration is ultimately referenced from source.} whose base type is some other type parameter. Furthermore, \texttt{DependentMemberType} comes in two flavors:
\begin{itemize}
\item ``resolved'' \texttt{DependentMemberTypes} point to an \texttt{AssociatedTypeDecl}.
\item ``unresolved'' \texttt{DependentMemberTypes} store an \texttt{Identifier}.
\end{itemize}
Most of the time, the requirement machine implementation works on resolved type parameters.
\end{leftbar}

% why do we care about 'non-root' associated types
\index{root associated type}
\begin{definition}\label{rootassoctypedef} A \emph{root associated type} is an associated type defined in a protocol such that no inherited protocol has an associated type with the same name.
\end{definition}
\begin{example}
In Listing \ref{rootassoctypesrc}, $\proto{Q}.\namesym{A}$ is not a root associated type, because $\proto{Q}$ inherits $\proto{P}$ and $\proto{P}$ also declares an associated type named $\namesym{A}$.
\end{example}
\begin{listing}\caption{Examples of root and non-root associated types}\label{rootassoctypesrc}
\begin{Verbatim}
protocol P {
  associatedtype A // root
}

protocol Q : P {
  associatedtype A // not a root
  associatedtype B // root
}
\end{Verbatim}
\end{listing}
Now, I'm going to equip type parameters with a linear order, which is just a way of saying that one type is ``smaller'' than another.
\index{linear order}
\index{protocol order!canonical}
\begin{definition}[Canonical protocol order]\label{canonicalprotocol} If $\bm{\mathsf{P}}$ and $\bm{\mathsf{Q}}$ are protocols, then $\bm{\mathsf{P}} < \bm{\mathsf{Q}}$ if
\begin{enumerate}
\item $\bm{\mathsf{P}}$ is in a different module than $\bm{\mathsf{Q}}$, and the module name of $\bm{\mathsf{P}}$ lexicographically precedes the module name of $\bm{\mathsf{Q}}$, or
\item $\bm{\mathsf{P}}$ is in the same module as $\bm{\mathsf{Q}}$, and the name of $\bm{\mathsf{P}}$ lexicographically precedes the name of $\bm{\mathsf{Q}}$.
\end{enumerate}
\end{definition}

\begin{example} If the $\namesym{Barn}$ module defines a $\proto{Horse}$ protocol, then by Rule 1, $\proto{Horse}<\proto{Collection}$, since $\proto{Collection}$ is defined in a module named $\namesym{Swift}$, and $\namesym{Barn}<\namesym{Swift}$.

If the $\namesym{Barn}$ module also defines a $\proto{Saddle}$ protocol, then by Rule 2, $\proto{Horse}<\proto{Saddle}$.
\end{example}

\index{type parameter order!canonical}
\begin{definition}[Canonical type order]\label{canonicaltypeorder} If $T$ and $U$ are type parameters, then $T < U$ if
\begin{itemize}
\item $|T| < |U|$, or
\item $|T|=|U|$ and there exists $0\le j < |T|$ such that
\begin{itemize}
\item $T_j<U_j$, and
\item $T_i=U_i$ for each $0\le i<j$.
\end{itemize}
\end{itemize}
The order on components $T_j<U_j$ is in turn defined as follows:
\begin{itemize}
\item If $j=0$, then $T_j$ and $U_j$ are generic parameters; in this case, $T_j<U_j$ if
\begin{itemize}
\item $\gpdepth(T_j)<\gpdepth(U_j)$, or
\item $\gpdepth(T_j)=\gpdepth(U_j)$ and $\gpindex(T_j)<\gpindex(U_j)$.
\end{itemize}
\item If $j>0$, then $T_j$ and $U_j$ are associated types; in this case, $T_j<U_j$ if
\begin{itemize}
\item the name of $T_j$ lexicographically precedes the name of $U_j$, or
\item $T_j$ and $U_j$ have the same name, $T_j$ is a root associated type, and $U_j$ is not a root associated type, or
\item $T_j$ and $U_j$ have the same name, both (or neither) are root associated types, and the protocol of $T_j$ precedes the protocol of $U_j$ in the canonical protocol order.
\end{itemize}
\end{itemize}
\end{definition}
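
The following Swift sketch might make the shortlex structure clearer. It uses a hypothetical flattened representation of type parameters---an array of components---rather than the compiler's actual recursive types:
\begin{Verbatim}
enum Component: Equatable {
  case genericParam(depth: Int, index: Int)
  case assocType(name: String, isRoot: Bool,
                 protoModule: String, protoName: String)
}

// The order on single components, as in the definition above.
func componentPrecedes(_ a: Component, _ b: Component) -> Bool {
  switch (a, b) {
  case let (.genericParam(d1, i1), .genericParam(d2, i2)):
    return (d1, i1) < (d2, i2)       // depth first, then index
  case let (.assocType(n1, r1, m1, p1), .assocType(n2, r2, m2, p2)):
    if n1 != n2 { return n1 < n2 }   // compare names first
    if r1 != r2 { return r1 }        // roots precede non-roots
    return (m1, p1) < (m2, p2)       // canonical protocol order
  default:
    return false // a given position always holds the same kind
  }
}

// Shortlex: shorter type parameters precede longer ones; ties are
// broken componentwise, left to right.
func typeParamPrecedes(_ t: [Component], _ u: [Component]) -> Bool {
  if t.count != u.count { return t.count < u.count }
  for (a, b) in zip(t, u) where a != b {
    return componentPrecedes(a, b)
  }
  return false
}
\end{Verbatim}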

\index{well-founded order}
\index{shortlex order}
\index{relation}
Linear orders will be defined formally later (Definition~\ref{partialorderdef}), where you will see that a linear order is an example of a relation (Definition \ref{relationdef}). Furthermore, the above order is an example of a \emph{shortlex order} (Definition~\ref{shortlex}) that is \emph{well-founded} (Definition \ref{wellfounded}).

% its actually source location not memory address
% could be index of assoc type in the protocol
\begin{leftbar}\noindent When type checking invalid code, a protocol might appear that has two associated type declarations with the same name. This breaks the assumption that the canonical type order is linear. To handle this case, the compiler uses the memory addresses of associated type declarations as the final tie-breaker. While this introduces non-determinism between runs, invalid code does not make it to the code generation stage, so it should not matter in practice.
\end{leftbar}

\begin{example} In the signature
$\gensig{\genericparam{T},\genericparam{U}}
{\genericparam{T}\colon \proto{Sequence},
\genericparam{U}\colon \proto{Sequence}}$,
\begin{align*}
\genericparam{T}&<\genericparam{U}\\
&<\genericparam{T}.\namesym{Element}\\
&<\genericparam{T}.\namesym{Iterator}\\
&<\genericparam{U}.\namesym{Element}\\
&<\genericparam{T}.\namesym{Iterator}.\namesym{Element}\\
&<\genericparam{T}.\namesym{SubSequence}.\namesym{Element}
\end{align*}
\end{example}

\index{canonical anchor}
\begin{definition}\label{canonicalanchor}The \emph{canonical anchor} of a type parameter $T$ is the smallest type parameter with respect to the canonical type order that is equivalent to $T$.
\end{definition}
\begin{lemma} The canonical anchor always exists, because the canonical type order is well-founded. The canonical anchor is unique, because the canonical type order is a linear order.
\end{lemma}
\begin{proof} These are standard results about order relations.\end{proof}

Now I will generalize canonical anchors to finally arrive at the concept of a \emph{canonical type}. Note that the canonical anchor of a type parameter is always another type parameter. However, if the type parameter is fixed to a concrete type, the compiler doesn't actually care about the anchor anymore; instead, the concrete type itself is more useful.

\index{canonical type}
\begin{definition} [Canonical type]\label{canonicaltype} If $T$ is a type parameter, the canonical type of $T$ is either:
\begin{itemize}
\item the canonical anchor of $T$, if $T$ is not fixed to a concrete type;
\item otherwise, the canonical type of the concrete type that $T$ is fixed to.
\end{itemize}

% clarify this
If $T$ is an arbitrary type, the canonical type of $T$ is defined by substituting each type parameter appearing in a structural position with its canonical type.
\end{definition}

\begin{example}
In the signature
$\gensig{\genericparam{T},\genericparam{U}}
{\genericparam{T}\colon \proto{Collection},
\genericparam{T}.\namesym{Element}==\namesym{Int}}$,
\begin{itemize}
\item The canonical anchor of $\genericparam{T}.\namesym{SubSequence}.\namesym{Iterator}.\namesym{Element}$ is $\genericparam{T}.\namesym{Element}$,
\item The canonical type of $\genericparam{T}.\namesym{SubSequence}.\namesym{Iterator}.\namesym{Element}$ is $\namesym{Int}$,
\item The canonical type of $\namesym{Array}\langle \genericparam{T}.\namesym{Iterator}.\namesym{Element} \rangle$ is $\namesym{Array}\langle \namesym{Int}\rangle$.
\end{itemize}
\end{example}

Finally, I can describe the remaining set of generic signature queries.

\begin{description}
\item [\texttt{areSameTypeParameterInContext()}] answers if two type parameters have the same canonical anchor. Does not produce a useful result if one or the other is concrete.
\item [\texttt{isCanonicalTypeInContext()}] answers if an arbitrary type is already canonical.
\item [\texttt{getCanonicalTypeInContext()}] computes the canonical type of an arbitrary type.
\end{description}

The \texttt{getCanonicalTypeInContext()} query is more general than the other two, which only exist for performance reasons.

\chapter{Monoids}\label{monoids}

Over the next two chapters, I'm going to take a rather circuitous route through the field of abstract algebra.\footnote{Unfortunately, you won't see a definition of the mathematical concept of a ``field.'' It's not useful here.} It's possible to define finitely-presented monoids without talking about quotients of free monoids. For that matter, rewrite systems can be introduced without using the word ``monoid'' at all. But the various definitions you'll see along the way will come in handy later, and you can always skim this section and come back to it if needed.

\index{monoid}
\index{binary operation}
\index{identity element}
\begin{definition} A \emph{monoid} $(M,\, \otimes,\, \varepsilon)$ is a set $M$ together with a binary operation $\otimes$ and a unique\footnote{Just for fun, you can try showing that the uniqueness of the identity doesn't have to be stated explicitly as an axiom; that is, if $\varepsilon$ and $\epsilon$ both satisfy the conditions of an identity element, it necessarily follows that $\varepsilon=\epsilon$.} identity element $\varepsilon$, where the binary operation satisfies the following pair of axioms:
\begin{itemize}
% FIXME closure
\index{associativity}
\item (Associativity) For $x, y, z \in M$, $x\otimes(y\otimes z)=(x\otimes y)\otimes z$.
\item (Identity) For $x\in M$, $x\otimes \varepsilon=\varepsilon\otimes x=x$.
\end{itemize}
\end{definition}

% FIXME mention 'group'

Often it's convenient to omit the binary operation symbol, for example when it's juxtaposed between two single-letter variable names, and e.g., write $xy$ in place of $x\otimes y$. I will also use exponent notation $a^n$ to denote the product of $n$ copies of $a$:
\[a^n=\underbrace{a\otimes\cdots\otimes a}_{\textrm{$n$ times}}\]

\begin{example}
Once you know what to look for, you'll start seeing monoids everywhere. Some common examples of monoids:

\begin{enumerate}
\index{natural numbers}
\item Natural numbers under addition: $(\mathbb{N},\, +,\, 0)$.
\item Integers modulo 4 under multiplication: $(\mathbb{Z}/4\mathbb{Z},\, \times,\, 1)$.
\index{strings}
\item Strings over the alphabet $\{a,b,c\}$ with string concatenation as the binary operation, with the empty string as the identity element: $(\{a, b, c\}^*,\, \otimes,\, \varepsilon)$.
\item All functions $S\rightarrow S$ for some set $S$, with function composition as the binary operation, and the identity function $\mathrm{id}\colon S\rightarrow S$ as the identity element: $(S^S,\, \circ,\, \mathrm{id})$.
\end{enumerate}
\end{example}
\index{commutativity}
The last two examples show that the binary operation need not be commutative; that is, in general $x\otimes y \neq y\otimes x$.
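
As a small teaser for Chapter \ref{monoidsasprotocols}, here is one way to sketch the monoid operations in Swift; this is purely illustrative, since the two axioms cannot be machine-checked and must hold by convention:
\begin{Verbatim}
protocol Monoid {
  static var identity: Self { get }
  static func combine(_ x: Self, _ y: Self) -> Self
}

// Example 1: the natural numbers under addition (modeled by Int).
extension Int: Monoid {
  static var identity: Int { 0 }
  static func combine(_ x: Int, _ y: Int) -> Int { x + y }
}

// Example 3: strings under concatenation.
extension String: Monoid {
  static var identity: String { "" }
  static func combine(_ x: String, _ y: String) -> String { x + y }
}
\end{Verbatim}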

Examples 1 and 3 are two instances of a particularly important class of monoid.
\index{free monoid}
\begin{definition}
A \emph{free monoid} over some set $A$, denoted $A^*$, is the set of all strings with elements from $A$, together with string concatenation as the binary operation, and the empty string as the identity. Since the empty string is, well, empty, I'm going to write it as $\varepsilon$ (hopefully $\varepsilon$ itself is not an element of $A$, though). The set $A$ is the \emph{alphabet} of $A^*$; another bit of jargon you will see is that $A$ is the \emph{generating set} of $A^*$.
\end{definition}

The third example is of course the free monoid over the 3-symbol alphabet $\{a,\,b,\,c\}$. Two elements of this monoid are $abab$ and $cbca$; their product is the concatenation:
\[abab\otimes cbca=ababcbca.\]

What about the first example of the natural numbers under addition? You should be able to convince yourself that this is the free monoid generated by the singleton set $\{1\}$, or using the \emph{Kleene star} notation, $\{1\}^*$. The identity element, or ``empty string of 1's,'' is denoted $0$. This follows from the fact that any natural number can be written as a (possibly empty) sum of $1$'s; e.g., $3=1+1+1$.

\index{finitely-generated free monoid}
The set $A$ generating a free monoid may be finite or infinite. If the generating set $A$ is finite, you can say that $A^*$ is \emph{finitely generated}. The set of \emph{elements} of $A^*$, on the other hand, is never finite unless $A$ is empty, in which case $A^*$ is the trivial monoid consisting of a single identity element, $\{\varepsilon\}$.

\section{Finitely-Presented Monoids}
|
|
Next, let's generalize finitely-generated free monoids to get what is called a \emph{finitely-presented monoid}. Informally, you can think of a finitely-presented monoid as also being a set of strings over an alphabet under concatenation, except that there are one or more ``equations'' for rewriting substrings, with each element possibly having multiple different equivalent spellings. First, I'm going to formalize what is meant by ``equations.''
|
|
|
|
\index{relation}
|
|
\begin{definition}\label{relationdef} If $S$ is some set, then $R$ is a \emph{relation} over $S$ if $R\subseteq S\times S$. That is, a relation is a set of ordered pairs with elements from $S$. For some $s, t\in S$, $R$ \emph{relates} $s$ and $t$, or more concisely $s\mathrel{R}t$, if $(s, t)\in R$.
|
|
|
|
An alternative definition is that a relation is a function $R\colon S\times S \rightarrow \{0, 1\}$, where $s\mathrel{R}t$ if $R(s, t)=1$. This definition should feel familiar to programmers since most languages model relations as operators or functions returning a boolean value.
|
|
|
|
Some relations have special properties:
|
|
\begin{itemize}
|
|
\index{reflexive relation}
|
|
\item The relation is \emph{reflexive} if for all $s\in S$, $s\mathrel{R}s$.
|
|
\index{transitive relation}
|
|
\item The relation is \emph{transitive} if for all $s, t, u\in S$, $s\mathrel{R}t$ and $t\mathrel{R}u$ implies that $s\mathrel{R}u$.
|
|
\index{symmetric relation}
|
|
\item The relation is \emph{symmetric} if for all $s, t\in S$, $s\mathrel{R}t$ if and only if $t\mathrel{R}s$.
|
|
\index{equivalence relation}
|
|
\index{equivalence class}
|
|
\item The relation is an \emph{equivalence relation} if it is reflexive, transitive, and symmetric.
|
|
\item If $R$ is an equivalence relation over $S$ and $s\in S$, then the \emph{equivalence class} of $s$ is the set of all elements $t\in S$ such that $s\mathrel{R}t$. Every element of $S$ belongs to exactly one equivalence class.
|
|
\end{itemize}
|
|
\end{definition}
|
|
|
|
Some example relations and the properties they satisfy:
|
|
\begin{itemize}
|
|
\item The ``less than or equal to'' relation on natural numbers is reflexive, because $n\le n$ for all $n\in\mathbb{N}$. However, ``strictly less than'' is not reflexive, because $n \nless n$.
|
|
\item Both the ``less than or equal to'' and ``strictly less than'' relations on natural numbers are transitive; if $a\le b\le c$, then $a\le c$.
|
|
\item Neither ``less than or equal to'' nor ``strictly less than'' is symmetric; $4 < 5$ certainly does not imply $5 < 4$.
|
|
\index{power set}
|
|
\item Given some set $S$, the relation $R$ over the power set\footnote{The set of all subsets of $S$, sometimes denoted $2^S$ or $\mathcal{P}(S)$.} of $S$ defined by ``two sets have a non-empty intersection'' is symmetric and reflexive, but not transitive. To see why, let $S=\{x, y, z\}$. The sets $\{x\}$ and $\{x, y\}$ are related by $R$; so are the sets $\{x, y\}$ and $\{y\}$. However, $\{x\}$ and $\{y\}$ are not related since they do not intersect.
|
|
\item The relation over $\mathbb{N}$ given by $x\mathrel{R}y$ iff $x=y \mod 5$ is an equivalence relation.\footnote{You can try showing that this is true. Hint: observe that $x\mathrel{R}y$ iff $x-y=5n$ for some $n\in\mathbb{Z}$, and then prove that $R$ is symmetric, reflexive, and transitive.}
|
|
\end{itemize}
|
|
Now, let's focus on relations over monoids specifically, rather than arbitrary sets.
|
|
\index{translation-invariant relation}
|
|
\begin{definition}\label{transinv} A relation $R$ over the free monoid $A^*$ is \emph{translation-invariant} if for all $x$, $y$, $s$, $t\in A^*$,
|
|
\[x\mathrel{R}y\qquad\hbox{implies}\qquad (s\otimes x\otimes t)\mathrel{R}(s\otimes y\otimes t).\]
|
|
\end{definition}
|
|
|
|
Translation invariance generalizes an intuitive geometric concept---if you take two points on the real number line $x,y\in\mathbb{R}$, then moving both sides by an equal amount preserves inequalities; for example $x<y$ implies $x+5<y+5$. You will encounter translation invariance again in Definition~\ref{transinvdef}.
|
|
|
|
\index{monoid congruence}
|
|
\begin{definition} A relation $R$ over the free monoid $A^*$ is a \emph{monoid congruence} if it is both translation-invariant, and an equivalence relation.
|
|
\end{definition}
|
|
|
|
\index{integers modulo $n$}
|
|
One way to think of a relation over a monoid $M$ is to see it as a set of ``equations''. The running example here will start with the free monoid $M=\{a\}^*$, which as you saw is actually a fancy way of defining the natural numbers $\mathbb{N}$. Let $R$ be the relation consisting of the single equation $a^5\mathrel{R}\varepsilon$. I'm going to use this relation to construct a new monoid, called $M/R$ (more on this notation below) where the equation $a^5=\varepsilon$ holds---which will also imply that, e.g.,
|
|
\begin{align*}
|
|
a^3\otimes a^2&=a^5=\varepsilon\\
|
|
a^3\otimes a^3&=(a^5)\otimes a=a
|
|
\end{align*}
|
|
You might have already guessed\footnote{If not, that's okay.} that this monoid $M/R$ is another way of looking at the set of integers modulo 5 under addition; a more convincing demonstration appears below after more definitions. This construction can be formalized as follows.
|
|
|
|
\begin{definition}
|
|
Every relation $R$ over a monoid $M$ can be extended to a monoid congruence; among all such extensions there is a unique \emph{smallest} one, called the \emph{monoid congruence closure} of $R$ and written as $\sim_R$.
|
|
\end{definition}
|
|
|
|
\begin{example}
|
|
Here are some properties of $\sim_R$, the monoid congruence closure of the relation $a^5\mathrel{R}\varepsilon$ over $M=\{a\}^*$:
|
|
|
|
\begin{itemize}
|
|
\item While the only pair of elements $(x, y)\in M\times M$ that satisfies $x\mathrel{R}y$ is $x=a^5$ and $y=\varepsilon$, $\sim_R$ relates many more (in fact, infinitely many) pairs of elements $x, y\in M$.
|
|
\item First of all, $\sim_R$ contains $R$, so $a^5\sim_R\varepsilon$.
|
|
\item Next, $\sim_R$ is symmetric, so you can flip the initial equation around and get $\varepsilon\sim_R a^5$.
|
|
\item Another great property is that $\sim_R$ is translation-invariant, so it relates the pairs $a^n\sim_R a^{n+5}$ and $a^{n+5}\sim_R a^n$ for all $n\in\mathbb{N}$; these were obtained by taking pairs of elements related by $R$ and multiplying ``both sides'' by $a^n$.
|
|
\item Finally, $\sim_R$ is transitive, so for example, $\varepsilon\sim_R a^5$ and $a^5\sim_R a^{10}$ together imply that $\varepsilon\sim_R a^{10}$.
|
|
\end{itemize}
|
|
|
|
Table \ref{classesmod5} shows the equivalence classes of $\sim_R$. From looking at the exponents you might guess that the equivalence classes are sets of integers modulo 5. Furthermore, if you pick elements $x, y$ out of any two equivalence classes of $\sim_R$, then the equivalence class of $x\otimes y$ under $\sim_R$ does not depend on the choice of $x$ and $y$.
|
|
|
|
\begin{table}\caption{Equivalence classes of $\sim_R$}\label{classesmod5}
|
|
\begin{center}
|
|
{\renewcommand{\arraystretch}{1.5}
|
|
\begin{tabular}{c|l|l}
|
|
Class&List of members&General form\\
|
|
\hline
|
|
\hline
|
|
$0$&$\varepsilon$, $a^5$, $a^{10}$, $a^{15}$, $\ldots$&$a^{5n}$\\
|
|
$1$&$a$, $a^6$, $a^{11}$, $a^{16}$, $\ldots$&$a^{5n+1}$\\
|
|
$2$&$a^2$, $a^7$, $a^{12}$, $a^{17}$, $\ldots$&$a^{5n+2}$\\
|
|
$3$&$a^3$, $a^8$, $a^{13}$, $a^{18}$, $\ldots$&$a^{5n+3}$\\
|
|
$4$&$a^4$, $a^9$, $a^{14}$, $a^{19}$, $\ldots$&$a^{5n+4}$\\
|
|
\end{tabular}
|
|
}
|
|
\end{center}
|
|
\end{table}
|
|
\end{example}
|
|
This method of constructing a new monoid from an existing monoid together with a relation is one of the most fundamental concepts in abstract algebra.
|
|
|
|
% FIXME: maybe a 'diagram' showing how chapters build on each other?
|
|
|
|
% FIXME: probably don't need to define quotient monoids
|
|
\index{quotient monoid}
|
|
\begin{definition}[Quotient monoid] If $M$ is a monoid and $R$ is a set of equations relating pairs of elements of $M$, then the set of equivalence classes of $\sim_R$ is itself a monoid, called the \emph{quotient monoid} and written as $M/R$. The binary operation takes two equivalence classes $x, y\in M/R$ and produces an equivalence class $x\otimes y\in M/R$. It can be defined by picking any pair of elements $x'\in x$ and $y'\in y$, and setting $x\otimes y$ to the equivalence class of $x'\otimes y'$ under $\sim_R$. The definition of a monoid congruence can be used to prove that this is well-defined, because the result does not depend on the initial choice of $x'$ and $y'$.
|
|
\end{definition}
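
For example, in the quotient $\{a\}^*/R$ from the running example, where $a^5\mathrel{R}\varepsilon$, multiplying the equivalence class of $a^2$ by the equivalence class of $a^4$ gives the class of $a^6=a^5\otimes a$, which is the class of $a$. Picking the representatives $a^7$ and $a^9$ instead gives $a^{16}$, which lands in the same class; this is addition of integers modulo 5 in disguise.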
|
|
|
|
Finally, I can formally define the central concept in this chapter.
|
|
|
|
% FIXME present this earlier?
|
|
% FIXME reference definition in later chapters better
|
|
\index{finitely-presented monoid}
|
|
\begin{definition}[Finitely-presented monoid] Let $A$ be a finite alphabet, and $R$ a finite set of equations over the (finitely-generated) free monoid $A^*$. Then $M:=A^*/R$ is the \emph{finitely-presented monoid generated by $A$ with set of relations $R$}. The pair $\langle A;R\rangle$ is a \emph{presentation} of $M$.
|
|
\end{definition}
|
|
|
|
In the theory of rewrite systems, it is conventional to call the elements of a finitely-presented monoid \emph{terms} instead of strings, so I'm going to do that below.
|
|
|
|
The material above defines a lot of notation to digest at once. If it's too much for now, just remember the intuitive model: a finitely-presented monoid is the set of terms written in some alphabet; the binary operation is string concatenation, but you can rewrite subterms using equations that show certain terms are equivalent.
|
|
|
|
\section{Examples}
|
|
I will end this chapter with some examples of finitely-presented monoids. These examples are all here just for fun! You don't have to read or understand any of them in order to proceed with the remaining material.
|
|
|
|
\index{free monoid}
|
|
\begin{example}
|
|
Every finitely-generated free monoid $A^*$ can be finitely presented if you take $R$ as the empty relation $\varnothing\subset A^*\times A^*$.
|
|
\end{example}
|
|
|
|
\index{integers modulo $n$}
|
|
\begin{example}
|
|
You already saw how to present the integers modulo 5 under addition. More generally, the integers modulo $n$ under addition can be presented as
|
|
\[\langle a;\; a^n=\varepsilon \rangle\]
|
|
\end{example}
|
|
|
|
\index{bicyclic monoid}
|
|
\begin{example}\label{bicyclic}
|
|
The so-called \emph{bicyclic monoid} can be presented as
|
|
\[\langle a, b;\; ab=\varepsilon\rangle\]
|
|
Here is an interesting way to think about this monoid:
|
|
\begin{itemize}
|
|
\item Write \texttt{(} instead of $a$
|
|
\item Write \texttt{)} instead of $b$
|
|
\item Write the empty string instead of $\varepsilon$
|
|
\end{itemize}
|
|
If you do this, you've built yourself a little machine for balancing parentheses! A matched pair \texttt{()} ``cancels out,'' and more generally a ``balanced'' string---where each opening parenthesis can be paired with a subsequent closing parenthesis---is equivalent to the empty string.
|
|
|
|
For example, starting from \texttt{((())())} you can repeatedly delete adjacent matched pairs until you get the empty string:
|
|
\[\texttt{((())())}\rightarrow\texttt{(()())}\rightarrow\texttt{(())}\rightarrow\texttt{()}\rightarrow\texttt{""}\]
|
|
On the other hand, you can convince yourself that neither \texttt{()))} nor \texttt{())()(} is equivalent to the empty string, because these strings contain unbalanced parentheses.
|
|
\end{example}
|
|
|
|
\index{commutativity}
|
|
\index{free commutative monoid}
|
|
\begin{example}
|
|
The \emph{free commutative monoid with two generators} can be presented as:
|
|
\[\langle a, b;\; ab=ba\rangle\]
|
|
Another way of looking at this is that this is actually the cartesian product $\mathbb{N}\times\mathbb{N}$. To see why, take an arbitrary string of $a$'s and $b$'s, and swap the $a$'s and $b$'s around until the string consists of a run of all $a$'s, followed by a run of all $b$'s. The number of $a$'s is the first component of the pair, and the number of $b$'s is the second component of the pair. The monoid operation corresponds to pairwise addition.
|
|
\end{example}
|
|
|
|
\index{group}
|
|
\index{group of integers}
|
|
\begin{example}
|
|
The \emph{group of integers under addition} $\mathbb{Z}$ can be presented as:
|
|
\[\langle a, b;\;ab=\varepsilon,\;ba=\varepsilon\rangle\]
|
|
(A \emph{group} is a monoid with the additional property that each element has a unique inverse. Groups have a lot more structure than arbitrary monoids, and their study is a major area of abstract algebra, but they're not really important to us here.)
|
|
|
|
If you spell $a$ as $1$ and $b$ as $-1$ (or vice versa, if you swap $a$ and $b$ below), then the relations become $1+(-1)=0$ and $(-1)+1=0$. You can simplify an arbitrary string of $a$'s and $b$'s by cancelling out adjacent $a$'s and $b$'s until the resulting string is either empty, all $a$'s, or all $b$'s. If it is all $a$'s, it corresponds to a positive integer (the number of $a$'s); if it is all $b$'s, a negative integer.
|
|
\end{example}
|
|
|
|
\index{infinite dihedral group}
|
|
\begin{example}
|
|
The \emph{infinite dihedral group} $D_\infty$ can be presented as:
|
|
\[\langle a, b;\; a^2=\varepsilon,\; b^2=\varepsilon\rangle\]
|
|
|
|
% FIXME: define group earlier
|
|
This is a group because $a$ and $b$ are self-inverses. An element of $D_\infty$ is a string of alternating $a$'s and $b$'s, for example $a$, $ba$, $bab$, $ababa$, etc.
|
|
\end{example}
|
|
|
|
\index{binary icosahedral group}
|
|
\begin{example}
|
|
The \emph{binary icosahedral group} $2I$ can be presented as:
|
|
\[\langle s, t;\; (st)^2=\varepsilon,\; s^3=\varepsilon,\; t^5=\varepsilon\rangle\]
|
|
While this presentation looks rather complex, this is in fact a finite group, albeit one with 120 elements. In the general case when presenting a group as a monoid, you have to introduce generators for the inverses of the group generators. However in this group, the group generators have \emph{finite order}; this shows that $s^{-1}=s^2$ and $t^{-1}=t^4$, since
|
|
\begin{align*}
|
|
(s^2)s=s(s^2)=s^3=\varepsilon\\
|
|
(t^4)t=t(t^4)=t^5=\varepsilon
|
|
\end{align*}
|
|
Therefore, the monoid presentation does not need generators for $s^{-1}$ and $t^{-1}$.
|
|
\end{example}
|
|
|
|
\index{finite monoid}
|
|
\begin{example} Every \emph{finite} monoid $M$ can be finitely presented. You can take the generating set $A$ to be $M$ itself, and use the ``multiplication table'' of the binary operation $\otimes$ to generate the relations $R$ (of which there are finitely many, since the size of the multiplication table will equal $|M|^2$). Finally, add a relation that equates the identity element of $M$ with the ``empty string'' of $A^*$.
|
|
|
|
This might not be the simplest possible presentation of $M$, and in fact it is true in general that a finitely-presented monoid can have many different presentations, and neither the size of the generating set $|A|$ nor the number of relations $|R|$ are invariants.
|
|
|
|
As an exercise, write down a presentation of the integers modulo 3 using this method, and compare its efficiency with the presentation $\langle a;\;a^3=\varepsilon\rangle$.
|
|
\end{example}
|
|
|
|
\begin{example} The monoid $(\mathbb{N},\,\times,\,1)$, that is the natural numbers with the operation of multiplication, \textbf{cannot} be finitely presented, because it is not even finitely generated: any generating set must contain every prime number, since a prime cannot be written as a product of smaller naturals, and there are infinitely many primes.
|
|
\end{example}
|
|
|
|
So what does any of this have to do with Swift you ask? Read on...
|
|
|
|
\chapter{Monoids are Protocols}\label{monoidsasprotocols}
|
|
\begin{listing}\caption{Free monoid with two generators}\label{freetwoproto}
|
|
\begin{Verbatim}
|
|
protocol P {
|
|
associatedtype A : P
|
|
associatedtype B : P
|
|
|
|
var a: A { get }
|
|
var b: B { get }
|
|
var id: Self { get }
|
|
}
|
|
|
|
func multiplyByABA<X : P>(_ x: X) -> X.A.B.A {
|
|
return x.a.b.a
|
|
}
|
|
|
|
func multiplyByBB<X : P>(_ x: X) -> X.B.B {
|
|
return x.b.b
|
|
}
|
|
\end{Verbatim}
|
|
\end{listing}
|
|
|
|
Consider the generic signature $\gensig{\genericparam{T}}{\genericparam{T}\colon\proto{P}}$, with $\proto{P}$ as shown in Listing \ref{freetwoproto}. The two associated types $\namesym{A}$ and $\namesym{B}$ recursively conform to $\proto{P}$, which generates infinitely many type parameters. These type parameters all begin with $\genericparam{T}$ followed by an arbitrary sequence of $\namesym{A}$'s and $\namesym{B}$'s:
|
|
|
|
\begin{quote}
|
|
\noindent$\genericparam{T}$\\
|
|
$\genericparam{T}.\namesym{A}$\\
|
|
$\genericparam{T}.\namesym{B}$\\
|
|
$\genericparam{T}.\namesym{A}.\namesym{A}$\\
|
|
$\genericparam{T}.\namesym{A}.\namesym{B}$\\
|
|
$\genericparam{T}.\namesym{B}.\namesym{A}$\\
|
|
$\genericparam{T}.\namesym{B}.\namesym{B}$\\
|
|
$\ldots$
|
|
\end{quote}
|
|
|
|
You might (correctly) guess that this definition of $\bm{\mathsf{P}}$ is in fact a representation of the free monoid over two generators $\{a, b\}$ in the Swift language. Compositions of the property accessors \texttt{.a}, \texttt{.b} and \texttt{.id} are actually performing the monoid operation $\otimes$ \emph{at compile time}, with type parameters as monoid elements.
|
|
|
|
Listing \ref{freetwoproto} also shows a pair of function definitions,
|
|
\begin{itemize}
|
|
\item \texttt{multiplyByBB(\_:)}, and
|
|
\item \texttt{multiplyByABA(\_:)}.
|
|
\end{itemize}
|
|
These functions implement ``multiplication'' by $bb$ and $aba$, respectively. Say that \texttt{t} is a $\genericparam{T}$, and $\genericparam{T}$ conforms to $\bm{\mathsf{P}}$. If you first apply \texttt{multiplyByABA(\_:)} to \texttt{t}, and then apply \texttt{multiplyByBB(\_:)} to the result, you will have ``multiplied'' the type $\genericparam{T}$ by $ababb$ on the right:
|
|
\begin{itemize}
|
|
\item First, substituting $\genericparam{X}:=\genericparam{T}$ into the type of \texttt{multiplyByABA(\_:)} gives
|
|
\[\genericparam{T}.\namesym{A}.\namesym{B}.\namesym{A}.\]
|
|
\item Then, substituting $\genericparam{X} := \genericparam{T}.\namesym{A}.\namesym{B}.\namesym{A}$ into the type of \texttt{multiplyByBB(\_:)} gives the final result
|
|
\[\genericparam{T}.\namesym{A}.\namesym{B}.\namesym{A}.\namesym{B}.\namesym{B}.\]
|
|
\end{itemize}
|
|
|
|
In a free monoid, each term denotes a unique element; in the world of Swift that means each path of $\namesym{A}$'s and $\namesym{B}$'s is a unique type parameter. This can be formalized as follows:
|
|
\begin{algorithm}[Constructing a protocol from a free monoid]\label{freemonoidproto}
|
|
Let $A^*$ be the free monoid over the alphabet $\{a_1,a_2,\ldots,a_n\}$. A protocol $\proto{P}$ can be constructed from $A^*$ as follows:
|
|
\begin{enumerate}
|
|
\item First, begin with an empty protocol definition:
|
|
\begin{Verbatim}
|
|
protocol P {}
|
|
\end{Verbatim}
|
|
\item Now, for each element $a_i\in A$, declare an associated type conforming to $\proto{P}$ within the protocol's body:
|
|
\begin{Verbatim}
|
|
associatedtype Ai : P
|
|
\end{Verbatim}
|
|
\end{enumerate}
|
|
\end{algorithm}
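
For example, applying this algorithm to the free monoid over the three-letter alphabet $\{a_1, a_2, a_3\}$ yields the protocol below. (The property requirements seen in Listing \ref{freetwoproto} are only needed to perform the monoid operation on values; the associated types alone capture the type-level structure.)

\begin{Verbatim}
protocol P {
  associatedtype A1 : P
  associatedtype A2 : P
  associatedtype A3 : P
}
\end{Verbatim}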
|
|
|
|
\index{lifting map}
|
|
\index{lowering map}
|
|
\begin{definition}[Lowering and lifting maps]\label{liftingloweringmaps}
|
|
Let $\proto{P}$ be a protocol constructed from a free monoid $A^*$ by the above algorithm, and write $\mathsf{Type}$ for the set of all type parameters of $\proto{P}$. The elements of $\mathsf{Type}$ all begin with the protocol $\genericparam{Self}$ type, followed by zero or more associated type names, joined with ``\texttt{.}''.
|
|
|
|
Define a pair of maps, called the \emph{lifting map} and the \emph{lowering map}. The lowering map sends terms to type parameters, and the lifting map sends type parameters to terms:
|
|
\begin{align*}
|
|
\Lambda_{\proto{P}}&\colon \mathsf{Type}\rightarrow A^*\\
|
|
\Lambda^{-1}_{\proto{P}}&\colon A^*\rightarrow\mathsf{Type}
|
|
\end{align*}
|
|
\index{protocol $\genericparam{Self}$ type}
|
|
\begin{itemize}
|
|
\item The lowering map $\Lambda_{\proto{P}}$ drops the $\genericparam{Self}$ parameter, and maps each associated type name $\namesym{Ai}$ to the corresponding element $a_i\in A$, concatenating all elements to form the final result.
|
|
\item The lifting map $\Lambda^{-1}_{\proto{P}}$ operates in reverse; given an arbitrary term in $A^*$, it replaces each element $a_i\in A$ with the associated type name $\namesym{Ai}$, joins the associated type names with ``\texttt{.}'' to form Swift syntax for a nested type, and finally prepends the protocol $\genericparam{Self}$ type to the result.
|
|
\end{itemize}
|
|
Note that applying the lifting map to the identity element $\varepsilon\in A^*$ produces the protocol $\genericparam{Self}$ type.
|
|
\end{definition}
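
For example, take $\proto{P}$ from Listing \ref{freetwoproto}, with $\namesym{A}$ and $\namesym{B}$ corresponding to the generators $a$ and $b$:
\begin{align*}
\Lambda_{\proto{P}}(\genericparam{Self}.\namesym{A}.\namesym{B}.\namesym{A})&=aba,\\
\Lambda^{-1}_{\proto{P}}(bb)&=\genericparam{Self}.\namesym{B}.\namesym{B},\\
\Lambda^{-1}_{\proto{P}}(\varepsilon)&=\genericparam{Self}.
\end{align*}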
|
|
\begin{lemma}
|
|
The lowering and lifting maps have a couple of interesting properties:
|
|
\begin{itemize}
|
|
\item They are inverses of each other; that is, for all $x\in A^*$ and $T\in\mathsf{Type}$,
|
|
\begin{align*}
|
|
\Lambda_{\proto{P}}(\Lambda_{\proto{P}}^{-1}(x))&=x,\\
|
|
\Lambda_{\proto{P}}^{-1}(\Lambda_{\proto{P}}(T))&=T.
|
|
\end{align*}
|
|
\item If $T$, $U\in\mathsf{Type}$, define $T[\genericparam{Self}:=U]$ to be the type parameter obtained by substituting the protocol $\genericparam{Self}$ type of $T$ with $U$. This type satisfies the following identity:
|
|
\[\Lambda_{\proto{P}}(T[\genericparam{Self}:=U]) = \Lambda_{\proto{P}}(U)\otimes \Lambda_{\proto{P}}(T).\]
|
|
That is, the lowering and lifting maps are compatible with the monoid operation in $A^*$.
|
|
\end{itemize}
|
|
\end{lemma}
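
As a concrete check of the second property, take $T=\genericparam{Self}.\namesym{A}$ and $U=\genericparam{Self}.\namesym{B}.\namesym{B}$ in the protocol of Listing \ref{freetwoproto}. Then $T[\genericparam{Self}:=U]=\genericparam{Self}.\namesym{B}.\namesym{B}.\namesym{A}$, which lowers to $bba$; this agrees with $\Lambda_{\proto{P}}(U)\otimes\Lambda_{\proto{P}}(T)=bb\otimes a=bba$.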
|
|
|
|
The construction performed by Algorithm~\ref{freemonoidproto} can be generalized to finitely-presented monoids. The overall idea is the same as with free monoids, except for the addition of relations, which become same-type requirements in the Swift world.
|
|
|
|
Listing \ref{bicyclicproto} shows a Swift protocol representation of the bicyclic monoid from Example \ref{bicyclic}, together with a \texttt{multiplyByBA(\_:)} function that performs some compile-time monoid arithmetic.
|
|
|
|
\begin{listing}\caption{Bicyclic monoid}\label{bicyclicproto}
|
|
\begin{Verbatim}
|
|
protocol Bicyclic {
|
|
associatedtype A : Bicyclic
|
|
associatedtype B : Bicyclic
|
|
where A.B == Self
|
|
|
|
var a: A { get }
|
|
var b: B { get }
|
|
var id: Self { get }
|
|
}
|
|
|
|
func multiplyByBA<X : Bicyclic>(_ x: X) -> X.B.A {
|
|
return x.b.a
|
|
}
|
|
\end{Verbatim}
|
|
\end{listing}
|
|
Unlike our free monoid, the bicyclic monoid's presentation has a relation, so some elements can be spelled in multiple ways; for example, $aabba=a$. What does this identity mean in Swift? Well, an equivalence of monoid elements becomes an equivalence of type parameters.
|
|
|
|
You can write down some type parameters in the signature $\gensig{\genericparam{T}}{\genericparam{T}\colon\proto{Bicyclic}}$, and then pass a pair of values to the \texttt{same(\_:\_:)} function, which will only type check if both values have equivalent types. In Listing \ref{bicycliccheck}, the first call to \texttt{same(\_:\_:)} will type check, since $aabba=a$. The second call will not type check, since $ab\ne ba$.
|
|
\begin{listing}\caption{Checking equivalences in the bicyclic monoid}\label{bicycliccheck}
|
|
\begin{Verbatim}
|
|
func same<X>(_: X, _: X) {}
|
|
|
|
func bicyclicTest<X : Bicyclic>(_ x: X) {
|
|
let s: X.A.A.B.B.A = x.a.a.b.b.a
|
|
let t: X.A = x.a
|
|
|
|
same(s, t) // this is OK
|
|
|
|
let u: X.A.B = x.a.b
|
|
let v: X.B.A = x.b.a
|
|
|
|
same(u, v) // type check failure
|
|
}
|
|
\end{Verbatim}
|
|
\end{listing}
|
|
|
|
\index{monoid}
|
|
You can construct similar code examples for any finitely-presented monoid; there is nothing special about the bicyclic monoid here.
|
|
\begin{algorithm}[Constructing a protocol from a finitely-presented monoid]\label{protocolmonoidalgo} Let $\langle A;R\rangle$ be a finitely-presented monoid. A protocol $\proto{P}$ can be constructed from $\langle A;R\rangle$ as follows:
|
|
\begin{enumerate}
|
|
\item First, build $\proto{P}$ from the free monoid $A^*$ using Algorithm~\ref{freemonoidproto}.
|
|
\item Second, introduce an empty \texttt{where} clause in the declaration of $\proto{P}$.
|
|
\item Finally, for every equation $s=t$ of $R$, add a same-type requirement to this \texttt{where} clause, using the lifting map to obtain a pair of type parameters from $s$ and $t$:
|
|
\[\Lambda_{\proto{P}}^{-1}(s)==\Lambda_{\proto{P}}^{-1}(t)\]
|
|
\end{enumerate}
|
|
\end{algorithm}
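
For example, applying this algorithm to the presentation $\langle a, b;\; ab=\varepsilon\rangle$ of the bicyclic monoid recovers the associated type structure of Listing \ref{bicyclicproto}: the equation $ab=\varepsilon$ lifts to the same-type requirement $\genericparam{Self}.\namesym{A}.\namesym{B}==\genericparam{Self}$, which is written as \texttt{A.B == Self} in the listing.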
|
|
\begin{theorem}\label{protocolmonoidthm} Let $\langle A;R\rangle$ be a finitely-presented monoid. The protocol~\proto{P} constructed by Algorithm~\ref{protocolmonoidalgo} has the property that if $x$, $y \in A^*$ are equal as elements of $\langle A;R\rangle$, then applying the \texttt{areSameTypeParametersInContext()} generic signature query to the type parameters $\Lambda_{\proto{P}}^{-1}(x)$ and $\Lambda_{\proto{P}}^{-1}(y)$ should return true. The other direction also holds, showing that type parameter equality is exactly the monoid congruence $\sim_R$ on $A^*$.
|
|
\end{theorem}
|
|
\begin{proof}
|
|
If $x$, $y\in A^*$ are equal as elements of $\langle A;R\rangle$, it means that they satisfy $x\sim_R y$, where $\sim_R$ is the monoid congruence generated by $R$. This means that $y$ can be obtained from $x$ by applying a series of equations from $R$, replacing subterms at different positions.
|
|
\index{derivation path}
|
|
This can be formalized by writing down a \emph{derivation path}, which is a sequence of pairs $(s_i \Rightarrow t_i, k_i)$ where $s_i=t_i$ or $t_i=s_i$ is an equation of $R$ (depending on the direction in which the equation is applied), and $k_i\in\mathbb{N}$ is a non-negative number indicating the position in the intermediate term at which $s_i$ is replaced with $t_i$.
|
|
|
|
Derivation paths satisfy a validity property. Let $x_i\in A^*$ be the $i$th intermediate term, obtained by applying the first $i$ components of the derivation path to the initial term $x$. Notice that $x_0=x$ since no components have been applied yet, and if $n$ is the total number of components, then $x_n=y$ is the final term. Also, if the $i$th derivation path component is $(s_i\Rightarrow t_i, k_i)$, the subterm of $x_i$ beginning at position $k_i$ is equal to $s_i$, and the subterm of $x_{i+1}$ beginning at position $k_i$ is equal to $t_i$.
|
|
|
|
Each associated type of $\proto{P}$ conforms to $\proto{P}$ itself, which implies that every nested type parameter also conforms to $\proto{P}$. So the $k_i$-length prefix of $\Lambda_{\proto{P}}^{-1}(x_i)$ also conforms to $\proto{P}$. By construction, one of the equations $s_i=t_i$ or $t_i=s_i$ corresponds to a same-type requirement of $\proto{P}$. By the validity property, this same-type requirement can be applied to the type parameter $\Lambda_{\proto{P}}^{-1}(x_i)$ at position $k_i$ to obtain $\Lambda_{\proto{P}}^{-1}(x_{i+1})$.
|
|
|
|
The derivation path that witnesses an equivalence between $x$ and $y$ via the intermediate terms $x_i$ can be viewed as a proof of the equivalence of the type parameters $\Lambda_{\proto{P}}^{-1}(x)$ and $\Lambda_{\proto{P}}^{-1}(y)$ via a series of same-type requirements applied to the intermediate type parameters $\Lambda_{\proto{P}}^{-1}(x_i)$.
|
|
|
|
The other direction can be shown to hold via a similar argument.
|
|
\end{proof}
|
|
\section{Examples}
|
|
This section re-states examples of finitely-presented monoids from the previous chapter as Swift protocol definitions using Algorithm~\ref{protocolmonoidalgo}. Feel free to skip ahead if you're not interested.
|
|
|
|
\index{integers modulo $n$}
|
|
\begin{example}
|
|
% FIXME: re-state monoid presentation here
|
|
The monoid of integers modulo 5 under addition:
|
|
\begin{Verbatim}
|
|
protocol Z5 {
|
|
associatedtype A : Z5 where A.A.A.A.A == Self
|
|
}
|
|
\end{Verbatim}
|
|
\end{example}
|
|
|
|
\index{free commutative monoid}
|
|
\begin{example}
|
|
The free commutative monoid with two generators:
|
|
\begin{Verbatim}
|
|
protocol F2 {
|
|
associatedtype A : F2
|
|
associatedtype B : F2 where A.B == B.A
|
|
}
|
|
\end{Verbatim}
|
|
\end{example}
|
|
|
|
\index{group of integers}
|
|
\begin{example}
|
|
The group of integers under addition:
|
|
\begin{Verbatim}
|
|
protocol Z {
|
|
associatedtype A : Z where A.B == Self
|
|
associatedtype B : Z where B.A == Self
|
|
}
|
|
\end{Verbatim}
|
|
\end{example}
|
|
|
|
\index{infinite dihedral group}
|
|
\begin{example}
|
|
The infinite dihedral group:
|
|
\begin{Verbatim}
|
|
protocol DInf {
|
|
associatedtype A : DInf where A.A == Self
|
|
associatedtype B : DInf where B.B == Self
|
|
}
|
|
\end{Verbatim}
|
|
\end{example}
|
|
\index{binary icosahedral group}
|
|
\begin{example}
|
|
The binary icosahedral group:
|
|
\begin{Verbatim}
|
|
protocol TwoI {
|
|
associatedtype S : TwoI where S.S.S == Self
|
|
associatedtype T : TwoI where T.T.T.T.T == Self, S.T.S.T == Self
|
|
}
|
|
\end{Verbatim}
|
|
\end{example}
|
|
\section{Undecidability}
|
|
Algorithm~\ref{protocolmonoidalgo} allows you to write down a ``well-formed'' protocol definition isomorphic to an arbitrary finitely-presented monoid, and Theorem~\ref{protocolmonoidthm} shows this construction can express computations in the monoid at compile-time. Note that I was very careful with the use of ``should'' in the statement of Theorem~\ref{protocolmonoidthm}. This is because it describes the operation of a ``platonic ideal'' Swift compiler. As it turns out, this is unimplementable in the real world, because Swift generics as specified are a little bit \emph{too} expressive.
|
|
|
|
\index{decidability}
|
|
\index{word problem}
|
|
The \emph{word problem on finitely-presented monoids} asks if two strings in the free monoid $A^*$ are equivalent as elements of a finitely-presented monoid $\langle A; R\rangle$. All examples of monoids I've shown so far have decidable word problems. However, finitely-presented monoids with undecidable word problems do exist, meaning there is no computable function which can solve it in the general case.
|
|
\begin{theorem}[From \cite{undecidablegroup}]\label{undecidablemonoid} The monoid presented by the following set of generators and relations has an undecidable word problem:
|
|
\[\langle a, b, c, d, e;\;ac=ca,\;ad=da,\;bc=cb,\;bd=db,\;ce=eca,\;de=edb,\;cca=ccae\rangle\]
|
|
\end{theorem}
|
|
Applying Algorithm~\ref{protocolmonoidalgo} to the above presentation produces the Swift program in Listing \ref{undecidableproto}. The requirement machine must be able to solve the word problem in any protocol definition it does accept. Therefore, this protocol definition must be rejected as invalid by the requirement machine. The best we can do is carve out a useful sub-class of protocols where the word problem is decidable, and reject all other protocol definitions. This is the focus of the next chapter.
|
|
\begin{listing}\caption{Protocol with undecidable word problem}\label{undecidableproto}
|
|
\begin{Verbatim}
|
|
protocol Impossible {
|
|
associatedtype A : Impossible
|
|
associatedtype B : Impossible
|
|
associatedtype C : Impossible
|
|
associatedtype D : Impossible
|
|
associatedtype E : Impossible
|
|
where A.C == C.A, A.D == D.A,
      B.C == C.B, B.D == D.B,
      C.E == E.C.A,
      D.E == E.D.B,
      C.C.A == C.C.A.E
|
|
}
|
|
\end{Verbatim}
|
|
\end{listing}
|
|
|
|
\chapter{Rewrite Systems}\label{rewritesystemintro}
|
|
|
|
This chapter presents an informal introduction to the theory of \emph{rewrite systems}. A very thorough treatment of this subject can be found in \cite{andallthat}. That book covers rewrite systems that manipulate more general ``tree-like'' algebraic terms, of which strings are just a special case. The requirement machine only needs \emph{string} rewriting, which greatly simplifies the theory, so I will re-state some of the key ideas in a self-contained manner below.
|
|
|
|
To motivate some formal definitions, let's look at another finitely-presented monoid:
|
|
\[\langle a,b,c;\; cc=c,\; a=ba,\; ca=a\rangle\]
|
|
The intuitive mental model is that these equations are bi-directional; the equation $c=cc$ could just as easily have been written as $cc=c$.
|
|
|
|
The bi-directional nature of these equations will become apparent in the proof that $acca=bcaa$. First, let's list and number the relations:
|
|
\begin{align}
|
|
cc&\Longleftrightarrow c\tag{1}\\
|
|
a&\Longleftrightarrow ba\tag{2}\\
|
|
ca&\Longleftrightarrow a\tag{3}
|
|
\end{align}
|
|
Starting with the term $acca$, you can replace the $cc$ with $c$ by applying equation (1) in the $\Rightarrow$ direction, leaving us with $aca$. Then you can continue applying equations as follows:
|
|
\begin{align}
|
|
a\underline{cc}a&\rightarrow a\underline{c}a\tag{Eq 1, $\Rightarrow$}\\
|
|
\underline{a}ca&\rightarrow \underline{ba}ca\tag{Eq 2, $\Rightarrow$}\\
|
|
b\underline{a}ca&\rightarrow b\underline{ca}ca\tag{Eq 3, $\Leftarrow$}\\
|
|
bca\underline{ca}&\rightarrow bca\underline{a}\tag{Eq 3, $\Rightarrow$}
|
|
\end{align}
|
|
It so happens that the monoid presented above has a decidable word problem. Despite that, from looking at the presentation of the monoid, it is not immediately apparent that $acca=bcaa$, and proving this fact required applying equations in both directions, making the intermediate terms ``larger'' and ``smaller'' at different steps. This doesn't seem to produce a viable evaluation strategy. So clearly some additional structure is needed, even for this simple example.
|
|
|
|
\index{rewrite rules}
|
|
\index{irreducible term}
|
|
\index{reducing a term}
|
|
Instead of looking for a way to transform one term into another by applying equations in both directions, you can ``orient'' the relations by choosing to only apply the direction where the left-hand side is always larger than the right-hand side. This turns the equations into unidirectional rewrite rules:
|
|
\begin{align}
|
|
cc&\Longrightarrow c\tag{1}\\
|
|
ba&\Longrightarrow a\tag{2}\\
|
|
ca&\Longrightarrow a\tag{3}
|
|
\end{align}
|
|
This guarantees that at each step, the original term can only become shorter. If a term can be transformed into another term by applying zero or more unidirectional rewrite rules, the original term is said to \emph{reduce} to the other term. A term which cannot be reduced further is said to be \emph{irreducible}. Now you can reformulate the word problem slightly. Instead of attempting to transform an arbitrary term into another, you reduce both terms as much as possible. If both terms have the same irreducible form, they must be equivalent.
|
|
|
|
Let's attempt this new strategy with our original inputs, $acca$ and $bcaa$. First, $acca$ reduces to $aa$:
|
|
\begin{align}
|
|
a\underline{cc}a&\rightarrow a\underline{c}a\tag{Rule 1}\\
|
|
a\underline{ca}&\rightarrow a\underline{a}\tag{Rule 3}
|
|
\end{align}
|
|
At this point, the term $aa$ is irreducible. Now, $bcaa$ also reduces to $aa$:
|
|
\begin{align}
|
|
b\underline{ca}a&\rightarrow b\underline{a}a\tag{Rule 3}\\
|
|
\underline{ba}a&\rightarrow \underline{a}a\tag{Rule 2}
|
|
\end{align}
|
|
This shows that both $acca$ and $bcaa$ reduce to $aa$; therefore, $acca=bcaa$. In fact, this strategy completely solves the word problem in this specific monoid at least. It won't work in many other interesting cases, as you will see below, but it forms the basis for what comes next.
|
|
|
|
Now, I will formalize what is meant by rewrite rules producing a ``shorter'' term at every step. The following definitions are standard.
|
|
\index{partial order}
|
|
\index{linear order}
|
|
\begin{definition}\label{partialorderdef}
|
|
A \emph{partial order} on a set $S$ is a relation $<$ satisfying two properties:
|
|
\begin{itemize}
|
|
\item (Transitivity) If $x<y$ and $y<z$, then $x<z$ for all $x,y,z \in S$.
|
|
\item (Anti-reflexivity) $x\nless x$ for all $x\in S$.
|
|
\end{itemize}
|
|
A \emph{linear order} is a partial order with the additional property that for all $x, y \in S$, exactly one of $x<y$, $y<x$, or $x=y$ holds.
|
|
\end{definition}
|
|
|
|
The reduction process will always terminate if the order satisfies a special condition.
|
|
\index{well-founded order}
|
|
\index{infinite descending chain}
|
|
\begin{definition}\label{wellfounded}
|
|
A partial order over a set $S$ is \emph{well-founded} if there are no infinite descending chains:
|
|
\[x_1>x_2>x_3>\ldots>x_n>\ldots\]
|
|
\end{definition}
|
|
|
|
Using a well-founded order guarantees that applying a reduction relation until fixed point will always terminate, since a non-terminating reduction sequence would witness an infinite descending chain, contradicting the assumption of well-foundedness.
|
|
|
|
\index{translation-invariant relation}
|
|
A partial order used for reduction must also be translation-invariant (Definition~\ref{transinv}). Translation invariance means that if you have a rule like $ca\Rightarrow a$, then not only is it true that $ca>a$, but also replacing $ca$ with $a$ anywhere in the \emph{middle} of a term produces a smaller term; for example $bcab$ can be reduced to $bab$, because $ca>a$ implies that $bcab>bab$.
|
|
\begin{definition}A \emph{reduction order} on the free monoid $A^*$ is a well-founded and translation-invariant partial order.
|
|
\end{definition}
|
|
Next, I will define the specific reduction order used from here on out. This generalizes the canonical type order from Section~\ref{canonicaltypes}.
|
|
|
|
\index{shortlex order}
|
|
\index{translation-invariant relation}
|
|
\index{linear order}
|
|
\index{well-founded order}
|
|
\begin{definition}[Shortlex order]\label{shortlex}
|
|
Suppose $A^*$ is a free monoid where the generating set $A$ is equipped with a partial order $<$. This partial order can be extended to the \emph{shortlex order} on $A^*$, also written as $<$. For $x, y\in A^*$, $x<y$ if:
|
|
\begin{itemize}
|
|
\item $|x|<|y|$, or
|
|
\item $|x|=|y|$, and there exists $0\le j<|x|$ such that
|
|
\begin{itemize}
|
|
\item $x_j<y_j$, and
|
|
\item $x_i=y_i$ for each $0\le i<j$.
|
|
\end{itemize}
|
|
\end{itemize}
|
|
\end{definition}
|
|
\begin{lemma} The shortlex order on $A^*$ has a number of useful properties:
|
|
\begin{itemize}
|
|
\item It is a linear order if the original order on $A$ is a linear order.
|
|
\item It is well-founded if the original order on $A$ is well-founded.
|
|
\item It is translation-invariant.
|
|
\end{itemize}
|
|
\end{lemma}
|
|
Since the shortlex order on strings of length one coincides with the order on the generating set $A$, there is no ambiguity in using the same symbol $<$ for both.
|
|
|
|
\begin{example} If $A=\{a,b\}$ and $a<b$ defines the order on $A$, then the below equation lists a few elements of $A^*$, ordered from smallest to largest by the shortlex order on $A^*$:
|
|
\[a<b<ab<ba<aab<bbb\]
|
|
\end{example}
|
|
|
|
Note that the shortlex order is not the same as the standard lexicographic order on strings. The standard lexicographic order is not well-founded, because it can have infinite descending chains, making it unsuitable for use as a reduction order:
|
|
\[b>ab>aab>aaab>aaaab>\ldots\]
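
The shortlex order is straightforward to compute. Here is a minimal sketch in Swift, modeling terms as arrays of \texttt{Character} and assuming the generators are ordered by their \texttt{Character} values; the name \texttt{shortlexLess} is illustrative and not taken from the compiler sources:

\begin{Verbatim}
func shortlexLess(_ x: [Character], _ y: [Character]) -> Bool {
  // Shorter terms always precede longer terms.
  if x.count != y.count { return x.count < y.count }
  // Otherwise, compare the first position where the terms differ.
  for (cx, cy) in zip(x, y) where cx != cy { return cx < cy }
  return false  // the terms are equal
}

// With a < b: shortlexLess(Array("ab"), Array("ba")) is true,
// and shortlexLess(Array("ba"), Array("aab")) is true.
\end{Verbatim}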
|
|
|
|
Finally, I can formalize the notion of ``reducing'' terms by making them ``smaller''.
|
|
\index{relation}
|
|
\index{reduction relation}
|
|
\begin{definition}[Reduction relation]\label{transinvdef} If $A^*$ is equipped with a reduction order $<$, then a relation $\rightarrow$ is a \emph{reduction relation} with respect to $<$ if $x\rightarrow y$ implies that $y<x$. Another equivalent definition is that a reduction relation $\rightarrow$ is a \emph{sub-relation} of $>$, the inverse relation of $<$.
|
|
\end{definition}
|
|
\index{rewrite system}
|
|
As you saw in the previous example, a reduction relation for a finitely-presented monoid can be constructed by orienting the equations from the presentation with respect to some reduction order, a process which converts the equations into rewrite rules. Such a set of rewrite rules is called a \emph{rewrite system}. There is a simple algorithm for reducing a term into an irreducible form:
|
|
\begin{algorithm}[Reducing a term]\label{reducingaterm} Let $t$ be a term in some rewrite system $R$.
|
|
\begin{enumerate}
|
|
\item Initialize the boolean flag to false.
|
|
\item If there is some rewrite rule $x\Rightarrow y$ in $R$ such that $t$ contains $x$ as a subterm,
|
|
\begin{itemize}
|
|
\item write $t=u\otimes x\otimes v$ for some prefix $u$ and suffix $v$,
|
|
\item set $t$ to $u\otimes y\otimes v$, replacing the occurrence of $x$ with $y$,
|
|
\item set the flag to true.
|
|
\end{itemize}
|
|
\item If the flag is now true, go back to Step 1.
|
|
\item Otherwise, the algorithm returns with the final value of $t$.
|
|
\end{enumerate}
|
|
\end{algorithm}
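
To make Algorithm \ref{reducingaterm} concrete, here is a minimal sketch in Swift, modeling terms as arrays of \texttt{Character}. The \texttt{Rule} type and the \texttt{reduce(\_:using:)} function are illustrative names, not the data structures actually used by the compiler:

\begin{Verbatim}
struct Rule {
  let lhs: [Character]  // the left-hand side; always non-empty
  let rhs: [Character]  // the right-hand side; smaller in the order
}

func reduce(_ term: [Character], using rules: [Rule]) -> [Character] {
  var t = term
  var changed = true
  while changed {
    changed = false
    search: for rule in rules where rule.lhs.count <= t.count {
      for start in 0...(t.count - rule.lhs.count)
          where Array(t[start..<(start + rule.lhs.count)]) == rule.lhs {
        // Replace this occurrence of lhs with rhs and start over.
        t.replaceSubrange(start..<(start + rule.lhs.count),
                          with: rule.rhs)
        changed = true
        break search
      }
    }
  }
  return t
}

// The rules cc => c, ba => a, ca => a from the running example:
let rules = [Rule(lhs: Array("cc"), rhs: Array("c")),
             Rule(lhs: Array("ba"), rhs: Array("a")),
             Rule(lhs: Array("ca"), rhs: Array("a"))]
print(String(reduce(Array("acca"), using: rules)))  // prints "aa"
print(String(reduce(Array("bcaa"), using: rules)))  // prints "aa"
\end{Verbatim}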
|
|
|
|
\section{Confluence and Completion}\label{confluenceandcompletion}
|
|
Applying Algorithm \ref{reducingaterm} to the relations of a finitely-presented monoid lets us solve the word problem, at least in one case. Of course this can't solve the word problem in all cases, since the word problem is undecidable in general. So where does this approach go wrong?
|
|
|
|
The basic problem is right there in the name---we're using a reduction \emph{relation}, not a ``reduction function,'' so for a given term $x\in A^*$, there might be two (or more) distinct terms $y$, $z\in A^*$ such that both $x\rightarrow y$ and $x\rightarrow z$ can apply. This corresponds to non-determinism in Step~2 of Algorithm~\ref{reducingaterm}, where a choice has to be made between multiple rules which could all apply at a single step.
|
|
|
|
To see this phenomenon in action, consider the following finitely-presented monoid:
|
|
\[\langle a, b, c, d;\; ab=a,\; bc=b,\; d=b\rangle\]
|
|
I'm going to use the order $a<b<c<d$ on the generating set $A$, and extend this to the shortlex order on $A^*$. Orienting the relations gives the following three rewrite rules:
|
|
\begin{align}
|
|
ab&\Rightarrow a\tag{1}\\
|
|
bc&\Rightarrow b\tag{2}\\
|
|
d&\Rightarrow b\tag{3}
|
|
\end{align}
|
|
Let's use these rules to reduce the string $adc$. The only reduction you can apply initially is Rule 3, which turns $a\underline{d}c$ into $a\underline{b}c$. At this point though, you have two choices on how to proceed: Rule 1 or Rule 2:
|
|
\begin{enumerate}
|
|
\item Applying Rule 1 turns $\underline{ab}c$ into $\underline{a}c$, which cannot be reduced any further.
|
|
\item Applying Rule 2 turns $a\underline{bc}$ into $a\underline{b}$, which can be reduced via an application of Rule 1, leaving you with $\underline{a}$, which is irreducible.
|
|
\end{enumerate}
|
|
This is a problem. Algorithm \ref{reducingaterm} can only solve the word problem if every term has a unique irreducible form, and yet, the term $adc$ can be reduced to two distinct and irreducible terms $ac$ and $a$ by applying rules in a different order.
|
|
|
|
However, this non-uniqueness can be repaired, in this case at least. Since $adc$ has two distinct irreducible forms $ac$ and $a$, I'm just going to wing it and add a new rule $ac\Rightarrow a$ to our rewrite system:
|
|
\setcounter{equation}{0}
|
|
\begin{align}
|
|
ab&\Rightarrow a\tag{1}\\
|
|
bc&\Rightarrow b\tag{2}\\
|
|
d&\Rightarrow b\tag{3}\\
|
|
ac&\Rightarrow a\tag{4}
|
|
\end{align}
|
|
Now if you try to reduce $adc$ using the above set of rewrite rules, you will always arrive at $a$, regardless of the order in which you apply the rules---try it!
|
|
|
|
\index{canonical form of a term}
|
|
\index{completion}
|
|
A rewrite system that has this property is said to be \emph{confluent}. The process whereby new rules are added to a rewrite system to repair confluence violations is known as \emph{completion}.
|
|
|
|
\index{confluent rewrite system}
|
|
\begin{definition}[Confluence]
|
|
Suppose $\rightarrow$ is a reduction relation on $A^*$ with respect to a well-founded, translation-invariant order $<$. Then $\rightarrow$ is \emph{confluent}\footnote{The proper way of defining this is to introduce confluence and local confluence as separate concepts, then show that local confluence implies confluence if the rewrite system can guarantee termination. What I just showed you is actually the definition of local confluence. This is fine, since all rewrite systems are terminating here.} if whenever $x\rightarrow y$ and $x\rightarrow z$, there exists $t$ such that $y\rightarrow t$ and $z\rightarrow t$.
|
|
|
|
In a confluent rewrite system, Algorithm \ref{reducingaterm} will compute a unique irreducible form for every term $t$, written as $t\,{\downarrow}$ and called the \emph{canonical form} of $t$.
|
|
\begin{figure}\caption{Confluence}
|
|
\begin{center}
|
|
\begin{tikzcd}
|
|
&x \arrow[ld] \arrow[rd]&\\
|
|
y\arrow[dr, dashed]&&z\arrow[dl, dashed]\\
|
|
&t&
|
|
\end{tikzcd}
|
|
\end{center}
|
|
\end{figure}
|
|
\end{definition}
|
|
|
|
Recall the monoid defined earlier:
|
|
\[\langle a, b, c, d;\; ab=a,\; bc=b,\; d=b\rangle\]
|
|
You saw that the reduction relation obtained from the equations in the presentation was \emph{not} confluent. The existence of the term $abc$, which has two distinct irreducible forms $a$ and $ac$, witnesses the confluence violation. Such pairs of elements have a name.
|
|
|
|
\index{critical pair}
|
|
\begin{definition}[Critical pair] If $x, y, z\in A^*$ and $\rightarrow$ is a reduction relation on $A^*$, then $(y, z)$ is a \emph{critical pair} if $x\rightarrow y$ and $x\rightarrow z$, and both $y$ and $z$ are irreducible.
|
|
\end{definition}
|
|
|
|
\index{completion procedure}
|
|
\index{Knuth-Bendix algorithm}
|
|
The definition of a critical pair suggests an implementation strategy for constructing a confluent completion: if you have some way to identify all critical pairs, then you can introduce new rewrite rules to repair confluence violations by \emph{resolving} each critical pair.
|
|
|
|
\index{overlapping rules}
|
|
The \emph{Knuth-Bendix completion procedure} (\cite{Knuth1983}) is the standard algorithm for computing a confluent completion and does exactly this. The key insight is that critical pairs arise when a rewrite system has rules whose left-hand sides ``overlap'' in some manner.
|
|
|
|
In our example, the two rules $ab\Rightarrow a$ and $bc\Rightarrow b$ have the property that the suffix of the left-hand side of the first rule, $b$, just happens to equal a prefix of the left-hand side of the second rule. I was able to exhibit the overlapped term $abc$ by concatenating the left-hand sides of the two rules and deleting the overlapping part in the middle.
|
|
|
|
This ``suffix of one is the prefix of the other'' is one kind of overlap; the other kind is when one rule's left-hand side is entirely contained in the left-hand side of the other.
|
|
|
|
\begin{definition}[Overlapping rules]\label{overlappingrules}
|
|
Two rules $x\Rightarrow y$ and $x'\Rightarrow y'$ \emph{overlap} on the term $t=uvw$ if either of the following holds:
|
|
\begin{itemize}
|
|
\item If $x=uvw$ and $x'=v$ for some $u$, $v$, $w\in A^*$; that is, the second rule's left-hand side is contained entirely within the left-hand side of the first.
|
|
\item If $x=uv$ and $x'=vw$ for some $u$, $v$, $w\in A^*$; that is, the second rule's left-hand side has a prefix equal to a suffix of the left-hand side of the first.
|
|
\end{itemize}
|
|
If the two rules have left-hand sides that are exactly equal, the rules overlap because both conditions hold.
|
|
\end{definition}
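
Here is a minimal sketch of this definition in Swift, re-using the illustrative \texttt{Rule} type from the earlier reduction sketch. It returns the overlapped terms $t=uvw$ for a given ordered pair of rules; a real implementation would also check the pair in the opposite order:

\begin{Verbatim}
func overlappedTerms(_ r1: Rule, _ r2: Rule) -> [[Character]] {
  let x = r1.lhs, x2 = r2.lhs
  var terms: [[Character]] = []

  // First kind: x' occurs somewhere inside x, so t = x itself.
  if x2.count <= x.count {
    for start in 0...(x.count - x2.count)
        where Array(x[start..<(start + x2.count)]) == x2 {
      terms.append(x)
    }
  }

  // Second kind: a proper suffix of x equals a proper prefix of x'.
  for n in 1..<min(x.count, x2.count) {
    if Array(x.suffix(n)) == Array(x2.prefix(n)) {
      terms.append(x + Array(x2.dropFirst(n)))  // t = uvw
    }
  }
  return terms
}

// Example: ab => a and bc => b overlap on the single term "abc".
\end{Verbatim}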
|
|
|
|
\begin{algorithm}[Knuth-Bendix completion procedure]\label{knuthbendix} This algorithm takes a rewrite system over some alphabet $A$ as input, and attempts to add rewrite rules until all critical pairs have been resolved.
|
|
\begin{enumerate}
|
|
\item Test all pairs of rules $x\Rightarrow y$ and $x'\Rightarrow y'$ for overlapping left hand sides.
|
|
\item For every pair of overlapping rules, compute the overlapped term $t=uvw$, and reduce $uvw$ with both $x\Rightarrow y$ and $x'\Rightarrow y'$ to produce a pair of terms. This result takes one of the following two forms, depending on whether the overlap was of the first or second kind:
|
|
\begin{enumerate}
|
|
\item For an overlap of the first kind, $x=uvw$ and $x'=v$. Reducing $uvw$ with $x\Rightarrow y$ yields $y$, and reducing $uvw$ with $x'\Rightarrow y'$ yields $uy'w$. The pair is $(y, uy'w)$.
|
|
\item For an overlap of the second kind, $x=uv$ and $x'=vw$. Reducing $uvw$ with $x\Rightarrow y$ yields $yw$, and reducing $uvw$ with $x'\Rightarrow y'$ yields $uy'$. The pair is $(yw, uy')$.
|
|
\end{enumerate}
|
|
In both cases, call this pair $(t_1, t_2)$, and proceed as follows:
|
|
\begin{enumerate}
|
|
\item Reduce $t_1$ and $t_2$ as much as possible using the set of rewrite rules added so far to produce the critical pair $({t}_1{\downarrow},\, {t}_2{\downarrow})$.
|
|
\item If ${t}_1{\downarrow}={t}_2{\downarrow}$, the critical pair is \emph{trivial} and can be discarded.

\item Otherwise, add a new rewrite rule
|
|
\[{t}_1{\downarrow}\Rightarrow {t}_2{\downarrow}\qquad\hbox{(if ${t}_1{\downarrow}>{t}_2{\downarrow}$),}\]
|
|
or
|
|
\[{t}_2{\downarrow}\Rightarrow {t}_1{\downarrow}\qquad\hbox{(if ${t}_2{\downarrow}>{t}_1{\downarrow}$).}\]
|
|
The process for resolving critical pairs is summarized in Figure \ref{criticalfig}.
|
|
\end{enumerate}
|
|
\item If the above loop added new rules, go back to Step 1 to check if any of the new rules overlap with existing rules. Otherwise, all critical pairs have been resolved and the completion procedure has produced a confluent rewrite system.
|
|
\item There is a final simplification step. For each rule $x\Rightarrow y$,
|
|
\begin{enumerate}
|
|
\item If $x$ can be reduced by some other rule $x'\Rightarrow y'$, meaning $x=ux'w$ for some $u$, $w\in A^*$, delete $x\Rightarrow y$.
|
|
|
|
This deletion is valid; since the rewrite system is now confluent, rewrite rules can be applied in any order, meaning $x'\Rightarrow y'$ can always be applied before $x\Rightarrow y$, so there is never any reason to apply $x\Rightarrow y$.
|
|
\item Otherwise, reduce $y$ to canonical form ${y}\,{\downarrow}$, and replace the rule $x\Rightarrow y$ with $x\Rightarrow {y}\,{\downarrow}$.
|
|
\end{enumerate}
|
|
\end{enumerate}
|
|
\end{algorithm}
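
Putting the earlier sketches together, here is a toy version of the completion loop in Swift, building on the illustrative \texttt{Rule} type and the \texttt{reduce(\_:using:)}, \texttt{shortlexLess}, and \texttt{overlappedTerms} helpers sketched previously. To keep it short, it pairs up \emph{all} one-step rewrites of each overlapped term, which is a superset of the critical pairs but still sound, and it omits the final simplification pass of Step 4:

\begin{Verbatim}
// All terms obtained from t by applying the rule at one position.
func rewriteOnce(_ t: [Character], with rule: Rule) -> [[Character]] {
  var results: [[Character]] = []
  guard rule.lhs.count <= t.count else { return results }
  for start in 0...(t.count - rule.lhs.count)
      where Array(t[start..<(start + rule.lhs.count)]) == rule.lhs {
    var u = t
    u.replaceSubrange(start..<(start + rule.lhs.count), with: rule.rhs)
    results.append(u)
  }
  return results
}

// Returns a confluent rule set, or nil if the rule limit is exceeded
// (which suggests, but does not prove, a non-convergent presentation).
func complete(_ initial: [Rule], maxRules: Int = 64) -> [Rule]? {
  var rules = initial
  var again = true
  while again {
    again = false
    for r1 in rules {
      for r2 in rules {
        for t in overlappedTerms(r1, r2) {
          for t1 in rewriteOnce(t, with: r1) {
            for t2 in rewriteOnce(t, with: r2) {
              let u = reduce(t1, using: rules)
              let v = reduce(t2, using: rules)
              if u == v { continue }  // trivial critical pair
              // Orient: the shortlex-larger side becomes the new
              // rule's left-hand side.
              rules.append(shortlexLess(u, v) ? Rule(lhs: v, rhs: u)
                                              : Rule(lhs: u, rhs: v))
              if rules.count > maxRules { return nil }
              again = true
            }
          }
        }
      }
    }
  }
  return rules
}
\end{Verbatim}

Running \texttt{complete} on the rules $ab\Rightarrow a$, $bc\Rightarrow b$, $d\Rightarrow b$ from the earlier example adds the single rule $ac\Rightarrow a$ and then stops, as expected.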
|
|
\begin{figure}\caption{Resolving critical pairs in Algorithm \ref{knuthbendix}}\label{criticalfig}
|
|
\begin{center}
|
|
\begin{tikzcd}
|
|
&uvw \arrow[ld, bend right] \arrow[rd, bend left] \\
|
|
t_1\arrow[d]&&t_2\arrow[d]\\
|
|
{t}_1{\downarrow}\arrow[rr, leftrightarrow, dashed]&&{t}_2{\downarrow}
|
|
\end{tikzcd}
|
|
\end{center}
|
|
\end{figure}
|
|
\index{convergent presentation}
|
|
If the Knuth-Bendix completion procedure terminates after a finite number of steps, the presentation is said to be \emph{convergent}. If the presentation is not convergent, the algorithm will continue adding new rewrite rules forever, as longer and longer overlapped terms are discovered in Step 2. In practice, you want an algorithm that either succeeds or fails after a finite number of steps. This can be handled by limiting the maximum number of iterations or the maximum length of the left-hand side of a rewrite rule. If either limit is exceeded, the rewrite system is rejected.
|
|
|
|
Previously I showed you a couple of finitely-presented monoids and made some hand-wavy claims about the resulting rewrite system. By applying the Knuth-Bendix algorithm we can verify that those claims were correct.
|
|
|
|
\begin{example}[Trivial case]\label{trivialex}
|
|
In this example, I claimed that orienting the equations to form rewrite rules and applying them in any order is sufficient to solve the word problem:
|
|
\[\langle a,b,c;\; cc=c,\; a=ba,\; ca=a\rangle\]
|
|
To see why, you can check for overlapping rules. Apart from self-overlaps, which here only produce trivial critical pairs (for example, $cc\Rightarrow c$ overlaps with itself on $ccc$, and reducing $ccc$ either way gives $cc$), there is a single overlapping pair of distinct rules, $cc\Rightarrow c$ and $ca\Rightarrow a$. The overlapped term is $cca$. Reducing this term with both rules produces the pair $(ca,ca)$. Reducing both sides yields the critical pair $(a, a)$. This critical pair is trivial, so the Knuth-Bendix algorithm terminates successfully without adding any new rules; the rewrite system is already confluent. Figure \ref{trivialfig} summarizes this.\footnote{The idea of representing critical pairs as diagrams comes from \cite{guiraud:hal-00818253}.}
|
|
\begin{figure}\caption{Trivial critical pair in Example \ref{trivialex}}\label{trivialfig}
|
|
\begin{center}
|
|
\begin{tikzcd}
|
|
&cca \arrow[ld, bend right] \arrow[rd, bend left] \\
|
|
ca\arrow[rr, equal]&&ca
|
|
\end{tikzcd}
|
|
\end{center}
|
|
\end{figure}
|
|
\end{example}
|
|
|
|
\begin{example}[Adding a single rule]\label{singleruleex}
|
|
In this example, I claimed that adding the single rule $ac\Rightarrow a$ ensures the resulting rewrite system is confluent:
|
|
\[\langle a, b, c, d;\; ab=a,\; bc=b,\; d=b\rangle\]
|
|
Once again, you can check for overlapping rules. There is a single overlapping pair of rules, $ab\Rightarrow a$ and $bc\Rightarrow b$. The overlapped term is $abc$. Reducing this term with both rules produces the pair $(ac,ab)$. While $ac$ is irreducible, you can further reduce $ab$ to $a$. This yields the critical pair $(ac,a)$, which is resolved by adding a new rule $ac\Rightarrow a$. A second iteration of the Knuth-Bendix algorithm does not discover any new critical pairs, so the algorithm terminates successfully. Once again, this can be summarized in a diagram, shown in Figure \ref{singlerulefig}.
|
|
\begin{figure}\caption{Critical pair in Example \ref{singleruleex}}\label{singlerulefig}
|
|
\begin{center}
|
|
\begin{tikzcd}
|
|
&abc \arrow[ld, bend right] \arrow[rd, bend left] \\
|
|
ac\arrow[d, equal]&&ab \arrow[d]\\
|
|
ac\arrow[rr, dashed]&&a
|
|
\end{tikzcd}
|
|
\end{center}
|
|
\end{figure}
|
|
\end{example}
|
|
|
|
\index{convergent presentation}
|
|
Now I will show you a finitely-presented monoid where the presentation is not convergent.
|
|
\begin{example}[Infinite case]\label{infiniteex}
|
|
Consider the following finitely-presented monoid $M$:
|
|
\[\langle a, b;\; aba=ab\rangle\]
|
|
The rule $aba\Rightarrow ab$ overlaps with itself. The overlapped term is $ababa$. There are two ways to reduce this term using our rule, which yields the pair $(abba, abab)$. The second term in the pair, $abab$, can be reduced with a further application of our original rule, producing the critical pair $(abba, abb)$. Resolving this critical pair adds a new rewrite rule $abba\Rightarrow abb$.
|
|
|
|
A new rule was added, so the algorithm runs again. This time, we have an overlap between the new rule $abba\Rightarrow abb$ and the original rule $aba\Rightarrow ab$. The overlapped term is $abbaba$. Reducing this term with both rules produces the pair $(abbba, abbab)$. The second term in the pair, $abbab$, can be reduced with a further application of $abba\Rightarrow abb$, yielding the critical pair $(abbba, abbb)$. Resolving this critical pair adds a new rewrite rule $abbba\Rightarrow abbb$.
|
|
|
|
This process continues forever, adding an infinite series of rewrite rules of the form
|
|
\[ab^na\Rightarrow ab^n\]
|
|
Figure \ref{infinitefig} shows these ``runaway'' critical pairs in diagram form.
|
|
\begin{figure}\caption{Infinitely many critical pairs in Example \ref{infiniteex}}\label{infinitefig}
|
|
\begin{center}
|
|
\begin{tikzcd}
|
|
&ababa \arrow[ld, bend right] \arrow[rd, bend left] \\
|
|
abba\arrow[d, equal]&&abab \arrow[d]\\
|
|
abba\arrow[rr, dashed]&&abb
|
|
\end{tikzcd}
|
|
\begin{tikzcd}
|
|
&abbaba \arrow[ld, bend right] \arrow[rd, bend left] \\
|
|
abbba\arrow[d, equal]&&abbab \arrow[d]\\
|
|
abbba\arrow[rr, dashed]&&abbb
|
|
\end{tikzcd}
|
|
\begin{tikzcd}
|
|
&ab^{n-1}aba \arrow[ld, bend right] \arrow[rd, bend left] \\
|
|
ab^na\arrow[d, equal]&&ab^{n-1}ab \arrow[d]\\
|
|
ab^na\arrow[rr, dashed]&&ab^n
|
|
\end{tikzcd}
|
|
\end{center}
|
|
\end{figure}
|
|
\end{example}
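
The overlaps driving this runaway completion are mechanical to detect. The following Swift sketch, under the same illustrative conventions as the earlier one, finds overlaps of the suffix/prefix kind used in this example; overlaps where one left-hand side occurs entirely inside another are omitted for brevity.
\begin{Verbatim}
// Glue lhs1 and lhs2 along each shared proper suffix/prefix to
// produce the overlapped terms; reducing such a term in the two
// possible ways yields a critical pair.
func overlappedTerms(_ lhs1: String, _ lhs2: String) -> [String] {
    let a = Array(lhs1), b = Array(lhs2)
    var result: [String] = []
    var n = 1
    while n < min(a.count, b.count) {
        if Array(a.suffix(n)) == Array(b.prefix(n)) {
            result.append(String(a) + String(b.dropFirst(n)))
        }
        n += 1
    }
    return result
}

// The rule aba => ab overlaps with itself:
print(overlappedTerms("aba", "aba"))  // ["ababa"]
\end{Verbatim}
Reducing the overlapped term $ababa$ at the two possible positions yields exactly the pair $(abba, abab)$ from the example.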

The interesting thing about Example \ref{infiniteex} is that the word problem in this monoid is still decidable, just not via this particular application of the Knuth-Bendix algorithm. Indeed, applying the Knuth-Bendix algorithm to a different presentation of the same monoid can still produce a confluent rewrite system.

\begin{example}[A different presentation]\label{diffpresex}
Consider the following equivalent presentation of the above monoid; call it $M'$:
\[\langle t, u, v;\; uv=t,\; tu=t\rangle\]
\index{isomorphism}
First of all, I should prove that $M$ and $M'$ are isomorphic by exhibiting an isomorphism $\varphi\colon~M'\rightarrow M$:
\begin{align*}
t&\leftrightarrow ab\\
u&\leftrightarrow a\\
v&\leftrightarrow b
\end{align*}
To convince yourself that this is an isomorphism, apply $\varphi$ to both sides of the relations in the presentation of $M'$:
\begin{itemize}
\item Applying $\varphi$ to $uv=t$ gives $ab=ab$, which is trivial.
\item Applying $\varphi$ to $tu=t$ gives $aba=ab$, which is the defining relation of $M$.
\end{itemize}
Going in the other direction, there is only the single relation in the presentation of $M$ to check:
\begin{itemize}
\item $\varphi^{-1}$ applied to $aba=ab$ becomes $tu=t$, which is one of the defining relations of $M'$.
\end{itemize}

Now, if you run the Knuth-Bendix algorithm on $M'$, you will see that $tu\Rightarrow t$ overlaps with $uv\Rightarrow t$. The overlapped term is $tuv$. Reducing this term with both rules produces the critical pair $(tv, tt)$. Orienting this pair produces a new rewrite rule $tv\Rightarrow tt$. This is shown in Figure \ref{diffpresfig}.
\begin{figure}\caption{Critical pair in Example \ref{diffpresex}}\label{diffpresfig}
\begin{center}
\begin{tikzcd}
&tuv \arrow[ld, bend right] \arrow[rd, bend left] \\
tv\arrow[rr, dashed]&&tt
\end{tikzcd}
\end{center}
\end{figure}

A second iteration of the Knuth-Bendix algorithm does not discover any new critical pairs, so the algorithm terminates successfully. You will encounter this example again in Chapter \ref{associatedtypes}.
\end{example}

You might ask, can \emph{any} finitely-presented monoid with a decidable word problem be presented as a confluent rewrite system, perhaps with a different set of generators and relations? Unfortunately, the answer is ``no'': there are ``bespoke'' monoids where the word problem is decidable, just not via the Knuth-Bendix algorithm.

\begin{theorem}[From \cite{SQUIER1994271}] The following finitely-presented monoid has a decidable word problem, but cannot be presented as a confluent rewrite system:
\[\langle a, b, t, x, y;\; ab=\varepsilon,\; xa=atx,\; xt=tx,\; xb=bx,\; xy=\varepsilon\rangle\]
\end{theorem}

This result, together with Theorem \ref{undecidablemonoid}, shows that the following inclusions are proper:
\[
\begin{array}{c}
\hbox{Confluent rewrite systems} \\
\subsetneq \\
\hbox{Decidable word problems} \\
\subsetneq \\
\hbox{Finitely-presented monoids}
\end{array}
\]

\chapter{Protocols are Monoids}\label{protocolsasmonoids}

To recap the most important results from the two previous chapters:
\begin{itemize}
\item Algorithm \ref{protocolmonoidalgo} shows how to construct a well-formed protocol definition from a finitely-presented monoid,
\item Theorem \ref{protocolmonoidthm} shows that generic signature queries on this protocol can express the word problem,
\item Theorem \ref{undecidablemonoid} shows that the word problem on finitely-presented monoids is in general undecidable,
\item Algorithm \ref{knuthbendix} shows how to build a confluent rewrite system to solve the word problem on a finitely-presented monoid with a convergent presentation.
\end{itemize}
The ultimate goal here is to solve generic signature queries using a confluent rewrite system, but first a more general method for constructing a finitely-presented monoid from a set of Swift protocol definitions is needed. While Theorem \ref{protocolmonoidthm} defined an isomorphism between finitely-presented monoids and a restricted subset of Swift protocols, it doesn't immediately generalize beyond protocols that satisfy some rather stringent restrictions:
\begin{itemize}
\item every associated type must conform to the same protocol,
\item conformance requirements to other protocols are not allowed,
\item the only kind of generic requirement allowed in a \texttt{where} clause is a same-type requirement between type parameters.
\end{itemize}
\index{conformance requirement}
\index{same-type requirement}
This chapter sketches an overview of the more general construction by building the rewrite system from a stripped-down set of standard library protocols, shown in Listing \ref{protocolrewritesystemex}. I will call the formulation presented in this chapter ``the requirement machine with name and protocol symbols,'' to distinguish it from the real formulation, introduced in the next chapter.

\index{associated type}
\index{name symbol}
\index{protocol symbol}
\index{protocol Self type}
First of all, the alphabet of this rewrite system will include the names of the associated types: $\namesym{Element}$, $\namesym{Iterator}$, and $\namesym{SubSequence}$. Since there are multiple protocols in play, the alphabet also needs to be extended with additional symbols that represent protocol names.

These new protocol symbols are distinct from name symbols, so I'm going to write them differently:
\begin{itemize}
\item $\namesym{Horse}$ is a name symbol;
\item $\protosym{Horse}$ is a protocol symbol.
\end{itemize}
The four protocol symbols that will be used in this example are $\protosym{IteratorProtocol}$, $\protosym{Sequence}$, $\protosym{Collection}$, and $\protosym{OptionSet}$. Equipped with these, the type lowering map from Definition~\ref{liftingloweringmaps} can be generalized to produce terms that are ``rooted'' in a protocol symbol.

\begin{listing}\caption{Example protocols for building a rewrite system}\label{protocolrewritesystemex}
\begin{Verbatim}
protocol IteratorProtocol {
    associatedtype Element
}

protocol Sequence {
    associatedtype Element
        where Iterator.Element == Element
    associatedtype Iterator : IteratorProtocol
}

protocol Collection : Sequence {
    associatedtype SubSequence : Collection
        where SubSequence.SubSequence == SubSequence,
              SubSequence.Element == Element
}

protocol OptionSet : Collection
    where Element == Self {
}
\end{Verbatim}
\end{listing}

\index{type lowering map}
\begin{definition}\label{typelowering1} For each protocol $\proto{P}$, define the \emph{type lowering map} $\Lambda_{\proto{P}}:\mathsf{Type}\rightarrow\mathsf{Term}$ as follows:
\begin{itemize}
\item The protocol $\genericparam{Self}$ type appearing at the root of a type parameter maps to the protocol symbol $\protosym{P}$.
\item Each subsequent associated type name $\namesym{T}$ maps to a name symbol $\namesym{T}$.
\end{itemize}
This definition will be refined further in Chapter~\ref{requirementmachine}, but it is good enough for now.
\end{definition}
With this new formulation, when a type parameter like $\genericparam{Self}.\namesym{Iterator}.\namesym{Element}$ appears in the requirement signature of $\proto{Sequence}$, the lowered term is now ``qualified'' with the protocol whence it came:
\[\protosym{Sequence}.\namesym{Iterator}.\namesym{Element}\]

\index{requirement lowering map}
The final step is to encode conformance requirements and same-type requirements as rewrite rules using a requirement lowering map.
\begin{definition}\label{reqlowering1} The \emph{requirement lowering map} $\Lambda_{\proto{P}}\colon\namesym{Requirement}\rightarrow\namesym{Rule}$ takes as input a generic requirement in the protocol $\proto{P}$, and produces a rewrite rule using the type lowering map $\Lambda_{\proto{P}}\colon\namesym{Type}\rightarrow\namesym{Term}$ to lower types to terms:
\begin{itemize}
\item \textbf{Protocol conformance requirements} $\namesym{T}\colon\proto{Q}$ lower to a rule eliminating a protocol symbol from the end of the lowered term for $\namesym{T}$:
\[\Lambda_{\proto{P}}(\namesym{T}).\protosym{Q} \Rightarrow \Lambda_{\proto{P}}(\namesym{T})\]
\item \textbf{Same-type requirements} $\namesym{T}==\namesym{U}$ lower to an equivalence of terms. Assuming that $\Lambda_{\proto{P}}(\namesym{T}) > \Lambda_{\proto{P}}(\namesym{U})$ in the reduction order on terms (if not, flip the terms around):
\[\Lambda_{\proto{P}}(\namesym{T}) \Rightarrow \Lambda_{\proto{P}}(\namesym{U})\]
\end{itemize}
This definition does not support layout, superclass or concrete type requirements. Those will be addressed in Chapter~\ref{requirementmachine}.
\end{definition}
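
To make the lowering concrete, here is a hypothetical Swift sketch of Definition~\ref{reqlowering1}; \texttt{Symbol}, \texttt{Term} and \texttt{RewriteRule} are illustrative types invented for this presentation, not actual compiler data structures.
\begin{Verbatim}
// For now a symbol is either a name like Element, or a protocol
// symbol like [Sequence]; terms are sequences of symbols.
enum Symbol: Equatable {
    case name(String)
    case proto(String)
}

typealias Term = [Symbol]

struct RewriteRule {
    let lhs: Term
    let rhs: Term
}

// A conformance requirement T : Q lowers to T.[Q] => T.
func lowerConformance(_ subject: Term, to q: String) -> RewriteRule {
    return RewriteRule(lhs: subject + [.proto(q)], rhs: subject)
}

// A same-type requirement T == U lowers to T => U, with the
// larger term in the reduction order on the left-hand side.
func lowerSameType(_ t: Term, _ u: Term,
                   isGreater: (Term, Term) -> Bool) -> RewriteRule {
    return isGreater(t, u) ? RewriteRule(lhs: t, rhs: u)
                           : RewriteRule(lhs: u, rhs: t)
}
\end{Verbatim}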

Applying the requirement lowering map to the requirements in our example produces eight rules: four from same-type requirements, and four from conformance requirements:
\begin{align}
\protosym{Sequence}.\namesym{Iterator}.\namesym{Element} &\Rightarrow \protosym{Sequence}.\namesym{Element}\tag{1}\\
\protosym{Collection}.\namesym{SubSequence}.\namesym{SubSequence} &\Rightarrow \protosym{Collection}.\namesym{SubSequence}\tag{2}\\
\protosym{Collection}.\namesym{SubSequence}.\namesym{Element} &\Rightarrow \protosym{Collection}.\namesym{Element}\tag{3}\\
\protosym{OptionSet}.\namesym{Element} &\Rightarrow \protosym{OptionSet}\tag{4}\\
\protosym{Sequence}.\namesym{Iterator}.\protosym{IteratorProtocol} &\Rightarrow \protosym{Sequence}.\namesym{Iterator}\tag{5}\\
\protosym{Collection}.\protosym{Sequence} &\Rightarrow \protosym{Collection}\tag{6}\\
\protosym{Collection}.\namesym{SubSequence}.\protosym{Collection} &\Rightarrow \protosym{Collection}.\namesym{SubSequence}\tag{7}\\
\protosym{OptionSet}.\protosym{Collection} &\Rightarrow \protosym{OptionSet}\tag{8}
\end{align}
(Note that protocol inheritance is just the trivial case of a conformance requirement on $\genericparam{Self}$, which explains why Rule 6 and Rule 8 look the way they do.)

Intuitively, a protocol symbol at the \emph{beginning} of a term means ``this rule applies to type parameters that conform to this protocol''; a protocol symbol at the \emph{end} of a term means ``if you can construct this term, you \emph{know} it conforms to this protocol''.

\index{confluence}
\index{Knuth-Bendix algorithm}
There's one more thing: this rewrite system is not confluent! For example, Rule~6 and Rule~1 overlap on the following term:
\[\protosym{Collection}.\protosym{Sequence}.\namesym{Iterator}.\namesym{Element}\]
\index{canonical form of a term}
Thankfully, the Knuth-Bendix algorithm finishes successfully after three rounds, albeit adding a very large number of new rules, as you will see shortly. Nevertheless, this construction is good enough to solve a couple of generic signature queries, at least for type parameters from generic signatures of the form $\gensig{\genericparam{Self}}{\genericparam{Self}\colon\proto{P}}$. The two queries and their implementation (see the sketch after this list):
\begin{itemize}
\item \texttt{areSameTypeParametersInContext(T, U)} answers true if the terms $\Lambda_{\proto{P}}(T)$ and $\Lambda_{\proto{P}}(U)$ both reduce to the same canonical form.
\item \texttt{requiresProtocol(T, Q)} answers true if the terms $\Lambda_{\proto{P}}(T)$ and $\Lambda_{\proto{P}}(T).\protosym{Q}$ both reduce to the same canonical form.
\end{itemize}
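
Continuing the illustrative sketch, both queries bottom out in a comparison of canonical forms. The \texttt{reduce} function below is the term analogue of the string reducer sketched earlier; none of this is actual compiler code.
\begin{Verbatim}
// Rewrite a term to its canonical form by exhaustively applying
// rules; assumes the rule set has already been completed.
func reduce(_ input: Term, rules: [RewriteRule]) -> Term {
    var term = input
    var changed = true
    while changed {
        changed = false
        for rule in rules {
            if let i = term.indices.first(where: { i in
                i + rule.lhs.count <= term.count &&
                Array(term[i..<(i + rule.lhs.count)]) == rule.lhs
            }) {
                term.replaceSubrange(i..<(i + rule.lhs.count),
                                     with: rule.rhs)
                changed = true
            }
        }
    }
    return term
}

func areSameTypeParametersInContext(
    _ t: Term, _ u: Term, rules: [RewriteRule]) -> Bool {
    return reduce(t, rules: rules) == reduce(u, rules: rules)
}

func requiresProtocol(
    _ t: Term, _ q: String, rules: [RewriteRule]) -> Bool {
    return reduce(t + [.proto(q)], rules: rules)
        == reduce(t, rules: rules)
}
\end{Verbatim}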

\begin{example}
You can show that $\genericparam{Self}.\namesym{SubSequence}.\namesym{SubSequence}.\namesym{Iterator}.\namesym{Element}$ is equivalent to $\genericparam{Self}$ in the $\proto{OptionSet}$ protocol:
\begin{align}
\protosym{OptionSet}.\namesym{SubSequence}.&\namesym{SubSequence}.\namesym{Iterator}.\namesym{Element}\nonumber\\
&\rightarrow\protosym{OptionSet}.\namesym{SubSequence}.\namesym{Iterator}.\namesym{Element}\tag{Rule 11}\\
&\rightarrow\protosym{OptionSet}\tag{Rule 20}
\end{align}
Rule 11 was added by resolving the overlap between Rule~8 and Rule~3. Rule 20 was added by resolving the overlap between Rule~8 and Rule~15, which was added when resolving the overlap between Rule~10 and Rule~1, and finally, Rule~10 was added by resolving the overlap between Rule~7 and Rule~6.
\end{example}

\begin{example}
The $\genericparam{Self}.\namesym{SubSequence}.\namesym{SubSequence}$ type parameter in the $\proto{Collection}$ protocol conforms to $\proto{Sequence}$:
\begin{align}
\protosym{Collection}.\namesym{SubSequence}.&\namesym{SubSequence}.\protosym{Sequence}\nonumber\\
&\rightarrow\protosym{Collection}.\namesym{SubSequence}.\protosym{Sequence}\tag{Rule 2}\\
&\rightarrow\protosym{Collection}.\namesym{SubSequence}\tag{Rule 10}
\end{align}
\end{example}

\begin{listing}\caption{Rewrite system for $\proto{IteratorProtocol}$, $\proto{Sequence}$, $\proto{Collection}$ and $\proto{OptionSet}$}\label{rewritesystemcompleted}
\begin{itemize}
\item The initial set of rules obtained by lowering protocol requirement signatures:
\begin{align}
\protosym{S}.\namesym{Iterator}.\namesym{Element}&\Rightarrow\protosym{S}.\namesym{Element}\tag{1}\\
\protosym{C}.\namesym{SubSequence}.\namesym{SubSequence}&\Rightarrow\protosym{C}.\namesym{SubSequence}\tag{2}\\
\protosym{C}.\namesym{SubSequence}.\namesym{Element}&\Rightarrow\protosym{C}.\namesym{Element}\tag{3}\\
\protosym{O}.\namesym{Element}&\Rightarrow\protosym{O}\tag{4}\\
\protosym{S}.\namesym{Iterator}.\protosym{I}&\Rightarrow\protosym{S}.\namesym{Iterator}\tag{5}\\
\protosym{C}.\protosym{S}&\Rightarrow\protosym{C}\tag{6}\\
\protosym{C}.\namesym{SubSequence}.\protosym{C}&\Rightarrow\protosym{C}.\namesym{SubSequence}\tag{7}\\
\protosym{O}.\protosym{C}&\Rightarrow\protosym{O}\tag{8}
\end{align}
\item New rules added by the first round of the completion procedure:
\begin{align}
\protosym{C}.\namesym{Iterator}.\namesym{Element}&\Rightarrow\protosym{C}.\namesym{Element}\tag{9}\\
\protosym{C}.\namesym{SubSequence}.\protosym{S}&\Rightarrow\protosym{C}.\namesym{SubSequence}\tag{10}\\
\protosym{O}.\namesym{SubSequence}.\namesym{SubSequence}&\Rightarrow\protosym{O}.\namesym{SubSequence}\tag{11}\\
\protosym{O}.\namesym{SubSequence}.\namesym{Element}&\Rightarrow\protosym{O}\tag{12}\\
\protosym{O}.\protosym{S}&\Rightarrow\protosym{O}\tag{13}\\
\protosym{O}.\namesym{SubSequence}.\protosym{C}&\Rightarrow\protosym{O}.\namesym{SubSequence}\tag{14}
\end{align}
\item New rules added by the second round of the completion procedure:
\begin{align}
\protosym{C}.\namesym{SubSequence}.\namesym{Iterator}.\namesym{Element}&\Rightarrow\protosym{C}.\namesym{Element}\tag{15}\\
\protosym{C}.\namesym{SubSequence}.\namesym{Iterator}.\protosym{I}&\Rightarrow\protosym{C}.\namesym{SubSequence}.\namesym{Iterator}\tag{16}\\
\protosym{O}.\namesym{Iterator}.\namesym{Element}&\Rightarrow\protosym{O}\tag{17}\\
\protosym{O}.\namesym{Iterator}.\protosym{I}&\Rightarrow\protosym{O}.\namesym{Iterator}\tag{18}\\
\protosym{O}.\namesym{SubSequence}.\protosym{S}&\Rightarrow\protosym{O}.\namesym{SubSequence}\tag{19}
\end{align}
\item New rules added by the third and final round of the completion procedure:
\begin{align}
\protosym{O}.\namesym{SubSequence}.\namesym{Iterator}.\namesym{Element}&\Rightarrow\protosym{O}\tag{20}\\
\protosym{O}.\namesym{SubSequence}.\namesym{Iterator}.\protosym{I}&\Rightarrow\protosym{O}.\namesym{SubSequence}.\namesym{Iterator}\tag{21}
\end{align}
\end{itemize}
\end{listing}

\begin{figure}\caption{Non-trivial critical pairs resolved on the first iteration of the Knuth-Bendix algorithm.}\label{rewritesystemfig1}
\begingroup
\tiny
\begin{center}

\begin{tikzcd}
&\mathmakebox[0pt][c]{
\protosym{C}.\protosym{S}.\namesym{Iterator}.\namesym{Element}
}\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
\protosym{C}.\namesym{Iterator}.\namesym{Element}
\arrow[d, equal] &&
\protosym{C}.\protosym{S}.\namesym{Element}
\arrow[d] \\
\protosym{C}.\namesym{Iterator}.\namesym{Element}
\arrow[rr, dashed] &&
\protosym{C}.\namesym{Element}
\end{tikzcd}

\vspace{10mm}

\begin{tikzcd}
&\mathmakebox[0pt][c]{
\protosym{C}.\namesym{SubSequence}.\protosym{C}.\protosym{S}
}\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
\protosym{C}.\namesym{SubSequence}.\protosym{S}
\arrow[d, equal] &&
\protosym{C}.\namesym{SubSequence}.\protosym{C}
\arrow[d] \\
\protosym{C}.\namesym{SubSequence}.\protosym{S}
\arrow[rr, dashed] &&
\protosym{C}.\namesym{SubSequence}
\end{tikzcd}

\vspace{10mm}

\begin{tikzcd}
&\mathmakebox[0pt][c]{
\protosym{O}.\protosym{C}.\namesym{SubSequence}.\namesym{SubSequence}
}\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
\protosym{O}.\namesym{SubSequence}.\namesym{SubSequence}
\arrow[d, equal] &&
\protosym{O}.\protosym{C}.\namesym{SubSequence}
\arrow[d] \\
\protosym{O}.\namesym{SubSequence}.\namesym{SubSequence}
\arrow[rr, dashed] &&
\protosym{O}.\namesym{SubSequence}
\end{tikzcd}

\vspace{10mm}

\begin{tikzcd}
&\mathmakebox[0pt][c]{
\protosym{O}.\protosym{C}.\namesym{SubSequence}.\namesym{Element}
}\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
\protosym{O}.\namesym{SubSequence}.\namesym{Element}
\arrow[d, equal] &&
\protosym{O}.\protosym{C}.\namesym{Element}
\arrow[d] \\
\protosym{O}.\namesym{SubSequence}.\namesym{Element}
\arrow[rr, dashed] &&
\protosym{O}
\end{tikzcd}

\vspace{10mm}

\begin{tikzcd}
&\mathmakebox[0pt][c]{
\protosym{O}.\protosym{C}.\protosym{S}
}\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
\protosym{O}.\protosym{S}
\arrow[d, equal] &&
\protosym{O}.\protosym{C}
\arrow[d] \\
\protosym{O}.\protosym{S}
\arrow[rr, dashed] &&
\protosym{O}
\end{tikzcd}

\vspace{10mm}

\begin{tikzcd}
&\mathmakebox[0pt][c]{
\protosym{O}.\protosym{C}.\namesym{SubSequence}.\protosym{C}
}\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
\protosym{O}.\namesym{SubSequence}.\protosym{C}
\arrow[d, equal] &&
\protosym{O}.\protosym{C}.\namesym{SubSequence}
\arrow[d] \\
\protosym{O}.\namesym{SubSequence}.\protosym{C}
\arrow[rr, dashed] &&
\protosym{O}.\namesym{SubSequence}
\end{tikzcd}

\end{center}
\endgroup
\end{figure}

\begin{figure}\caption{Non-trivial critical pairs resolved on the second iteration of the Knuth-Bendix algorithm.}\label{rewritesystemfig2}
\begin{center}
\begingroup
\tiny

\begin{tikzcd}
&\mathmakebox[0pt][c]{
\protosym{C}.\namesym{SubSequence}.\protosym{C}.\namesym{Iterator}.\namesym{Element}
}\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
\protosym{C}.\namesym{SubSequence}.\namesym{Iterator}.\namesym{Element}
\arrow[d, equal] &&
\protosym{C}.\namesym{SubSequence}.\namesym{Element}
\arrow[d] \\
\protosym{C}.\namesym{SubSequence}.\namesym{Iterator}.\namesym{Element}
\arrow[rr, dashed] &&
\protosym{C}.\namesym{Element}
\end{tikzcd}

\vspace{10mm}

\begin{tikzcd}
&\mathmakebox[0pt][c]{
\protosym{C}.\namesym{SubSequence}.\protosym{S}.\namesym{Iterator}.\protosym{I}
}\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
\protosym{C}.\namesym{SubSequence}.\namesym{Iterator}.\protosym{I}
\arrow[rr, dashed]
&&
\protosym{C}.\namesym{SubSequence}.\namesym{Iterator}
\end{tikzcd}

\vspace{10mm}

\begin{tikzcd}
&\mathmakebox[0pt][c]{
\protosym{O}.\protosym{S}.\namesym{Iterator}.\namesym{Element}
}\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
\protosym{O}.\namesym{Iterator}.\namesym{Element}
\arrow[d, equal] &&
\protosym{O}.\protosym{S}.\namesym{Element}
\arrow[d] \\
\protosym{O}.\namesym{Iterator}.\namesym{Element}
\arrow[rr, dashed] &&
\protosym{O}
\end{tikzcd}

\vspace{10mm}

\begin{tikzcd}
&\mathmakebox[0pt][c]{
\protosym{O}.\protosym{S}.\namesym{Iterator}.\protosym{I}
}\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
\protosym{O}.\namesym{Iterator}.\protosym{I}
\arrow[d, equal] &&
\protosym{O}.\protosym{S}.\namesym{Iterator}
\arrow[d] \\
\protosym{O}.\namesym{Iterator}.\protosym{I}
\arrow[rr, dashed] &&
\protosym{O}.\namesym{Iterator}
\end{tikzcd}

\vspace{10mm}

\begin{tikzcd}
&\mathmakebox[0pt][c]{
\protosym{O}.\namesym{SubSequence}.\protosym{C}.\protosym{S}
}\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
\protosym{O}.\namesym{SubSequence}.\protosym{S}\arrow[rr, dashed]
&&
\protosym{O}.\namesym{SubSequence}
\end{tikzcd}

\vspace{10mm}

\begin{tikzcd}
&\mathmakebox[0pt][c]{
\protosym{O}.\namesym{SubSequence}.\protosym{C}.\namesym{SubSequence}.\protosym{S}
}\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
\protosym{O}.\namesym{SubSequence}.\namesym{SubSequence}.\protosym{S}
\arrow[d] &&
\protosym{O}.\namesym{SubSequence}.\protosym{C}.\namesym{SubSequence}
\arrow[d] \\
\protosym{O}.\namesym{SubSequence}.\protosym{S}
\arrow[rr, dashed] &&
\protosym{O}.\namesym{SubSequence}
\end{tikzcd}

\endgroup
\end{center}
\end{figure}

\begin{figure}\caption{Non-trivial critical pairs resolved on the third iteration of the Knuth-Bendix algorithm.}\label{rewritesystemfig3}
\begin{center}
\begingroup
\tiny

\begin{tikzcd}
&\mathmakebox[0pt][c]{
\protosym{O}.\namesym{SubSequence}.\protosym{S}.\namesym{Iterator}.\namesym{Element}
}\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
\protosym{O}.\namesym{SubSequence}.\namesym{Iterator}.\namesym{Element}
\arrow[d, equal] &&
\protosym{O}.\namesym{SubSequence}.\protosym{S}.\namesym{Element}
\arrow[d] \\
\protosym{O}.\namesym{SubSequence}.\namesym{Iterator}.\namesym{Element}
\arrow[rr, dashed] &&
\protosym{O}
\end{tikzcd}

\vspace{10mm}

\begin{tikzcd}
&\mathmakebox[0pt][c]{
\protosym{O}.\namesym{SubSequence}.\protosym{S}.\namesym{Iterator}.\protosym{I}
}\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
\protosym{O}.\namesym{SubSequence}.\namesym{Iterator}.\protosym{I}
\arrow[rr, dashed]&&
\protosym{O}.\namesym{SubSequence}.\namesym{Iterator}
\end{tikzcd}

\endgroup
\end{center}
\end{figure}

Listing \ref{rewritesystemcompleted} shows the full list of rules in the confluent rewrite system output by the Knuth-Bendix algorithm. Figures \ref{rewritesystemfig1}, \ref{rewritesystemfig2}, and \ref{rewritesystemfig3} show the non-trivial critical pairs resolved on each iteration, using the diagram notation first introduced in Section \ref{confluenceandcompletion}.

To get some of these listings and diagrams to fit, I abbreviated the protocol symbols, showing only the first letter of each protocol's name---think of it as a particularly silly example of a rewrite system if you want:
\begin{itemize}
\item $\protosym{IteratorProtocol}$ becomes $\protosym{I}$,
\item $\protosym{Sequence}$ becomes $\protosym{S}$,
\item $\protosym{Collection}$ becomes $\protosym{C}$,
\item $\protosym{OptionSet}$ becomes $\protosym{O}$.
\end{itemize}

\section{Some Proofs}

Now, I'm going to prove some results about the requirement machine with name and protocol symbols.

\index{domain of a term}
\begin{definition}\label{domain1} The \emph{domain map} is a partial function that maps a term $T$ to a set of protocol names. If the first symbol in the term is a protocol symbol $\protosym{P}$, then $\domain(T)=\{\proto{P}\}$. Otherwise, the domain map is not defined. If $\domain(T)$ is defined, then the term $T$ is \emph{valid}.

For now, the domain of a term always contains exactly one protocol, if the mapping is defined at all. Later on, you will see that in the full requirement machine, certain kinds of valid terms can have an empty domain, or a domain of multiple protocols.
\end{definition}
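
In the illustrative sketch from the previous chapter, the domain map is a simple partial function on the first symbol of a term; a term is valid exactly when it returns a value.
\begin{Verbatim}
// Domain of a term, as an optional set of protocol names; nil
// means the domain map is not defined and the term is invalid.
func domain(_ term: Term) -> Set<String>? {
    switch term.first {
    case .proto(let p)?:
        return [p]
    default:
        return nil
    }
}
\end{Verbatim}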

\begin{lemma}\label{domainoflowered} For any type parameter $T$ of a protocol $\proto{P}$, $\domain(\Lambda_{\proto{P}}(T))=\{\proto{P}\}$. In particular, this means the type lowering map $\Lambda_{\proto{P}}$ always outputs valid terms.
\end{lemma}
\begin{proof}
This follows from Definition~\ref{typelowering1}, where the term constructed by $\Lambda_{\proto{P}}$ always begins with the protocol symbol $\protosym{P}$.
\end{proof}
The notion of validity can be extended to rewrite rules.
\index{domain of a term}
\begin{definition}\label{validrule}
A rewrite rule $T\Rightarrow U$ is \emph{valid} if the terms $T$ and $U$ are both valid, and $\domain(T)=\domain(U)$.
\end{definition}

The next proof shows that equivalent terms always have the same domain. Intuitively, this makes sense, since terms are always ``relative'' to the protocol in which they appear, and it would not make sense for a type parameter in one protocol to be equivalent to a type parameter in another protocol.
\begin{lemma} If ${T}{\downarrow}={U}{\downarrow}$ and $\domain(T)$ is defined, then $\domain(U)=\domain(T)$.
\end{lemma}
\begin{proof}\index{derivation path}\index{monoid congruence}
Since ${T}{\downarrow}={U}{\downarrow}$, $T$ and $U$ are equivalent under the monoid congruence generated by the initial set of rewrite rules, and there exists a derivation path transforming $T$ to $U$ using these rules. Say this derivation path has length $n$, and let $(s_i\Rightarrow t_i, k_i)$ be the $i$th component. For each $i$, one of $s_i\Rightarrow t_i$ or $t_i\Rightarrow s_i$ is an initial rewrite rule produced by the requirement lowering map $\Lambda_{\proto{Q}_i}$ for some $\proto{Q}_i$. Let $x_i$ be the $i$th intermediate term in the derivation path, where $x_0=T$ and $x_n=U$.

Let $\domain(T)=\{\proto{P}\}$. Induction on the length of the derivation path will show that $\domain(U)=\{\proto{P}\}$.
\begin{enumerate}
\item The base case is where $i=0$. Here, $x_0=T$, so $\domain(x_0)=\{\proto{P}\}$.
\item Now, assume that all derivation paths of length $i-1$ are domain-preserving, so that $\domain(x_{i-1})=\{\proto{P}\}$. There are two cases to consider.
\begin{enumerate}
\item The first case is where the $i$th component of the derivation path has $k_i>0$, leaving the first symbol of the intermediate term untouched. In this case, $\domain(x_i)=\domain(x_{i-1})$.
\item The second case is where $k_i=0$, meaning that the derivation path component replaces a prefix $s_i$ with $t_i$. Since $s_i$ is a prefix of $x_{i-1}$, $\domain(s_i)=\{\proto{P}\}$ by the induction hypothesis. By Definition~\ref{reqlowering1} and Lemma~\ref{domainoflowered}, it is also the case that $\domain(t_i)=\{\proto{P}\}$. Therefore, $\domain(x_i)=\{\proto{P}\}$.
\end{enumerate}
\item Since the final term $x_n$ is equal to $U$, it follows that $\domain(U)=\{\proto{P}\}$.
\end{enumerate}
\end{proof}
The above lemma has a couple of interesting consequences.
\begin{lemma} If a term $T$ is valid, then its canonical form ${T}{\downarrow}$ is also valid. Furthermore, the identity element $\varepsilon$ is not the canonical form of any valid term.
\end{lemma}
\begin{proof} If $T$ is a valid term, then $\domain(T)$ is defined, and since ${T}{\downarrow}$ is equivalent to $T$, $\domain({T}{\downarrow})=\domain(T)$. Therefore ${T}{\downarrow}$ is a valid term.

For the second part, assume that $T$ is a valid term, and ${T}{\downarrow}=\varepsilon$. Since $|\varepsilon|=0$, $\domain(\varepsilon)$ is not defined, and so $\domain(T)$ must not be defined either. This contradicts the assumption that $T$ is a valid term. Therefore, ${T}{\downarrow}\neq\varepsilon$.
\end{proof}

The next results concern the structure of rules derived from conformance requirements.
\begin{definition} A \emph{type-like} term is one where the first symbol is a protocol symbol, and all subsequent symbols, if there are any, are name symbols.
\end{definition}
\begin{lemma} Rewrite rules produced by the requirement lowering map $\Lambda_{\proto{P}}$ take on one of the following two forms:
\begin{enumerate}
\item $T.\protosym{Q}\Rightarrow T$ where $T$ is a type-like term and $\proto{Q}$ is some protocol.
\item $T\Rightarrow U$ where $T$ and $U$ are type-like terms.
\end{enumerate}
\end{lemma}
\begin{proof} This follows from Definition~\ref{reqlowering1}.
\end{proof}

\begin{lemma}\label{propliketerm1} Suppose that $T$ and $U$ are type-like terms with domain $\{\proto{P}\}$, and the canonical form of $T.\protosym{Q}$ is $U$. Then the canonical form of $T$ is also $U$.
\end{lemma}
\begin{proof}
TODO.
\end{proof}

The notion of canonical types defined in Section~\ref{canonicaltypes} corresponds to canonical terms in this rewrite system.
\begin{lemma} If $T$ is a canonical type parameter in the protocol $\proto{P}$, then $\Lambda_{\proto{P}}(T)$ is a canonical term.
\end{lemma}
\begin{proof}
TODO.
\end{proof}

Putting all this together, you can prove the following.
\begin{theorem}\label{finiterqm} Let $\proto{P}$ be a protocol defining an infinite set of unique canonical type parameters $\{T_i\}$, where each $T_i$ conforms to some protocol $\proto{Q}$. Then the rewrite system constructed from this protocol is not convergent.
\end{theorem}
\begin{proof}
For any type parameter $T_i$, the term $\Lambda_{\proto{P}}(T_i)$ is canonical, since $T_i$ is canonical. Together with the fact that $T_i$ conforms to $\proto{Q}$, Lemma~\ref{propliketerm1} shows that the term $\Lambda_{\proto{P}}(T_i).\protosym{Q}$ reduces to $\Lambda_{\proto{P}}(T_i)$. Since no proper suffix of a valid type-like term is valid, this means for each $T_i$, the confluent completion of the rewrite system must contain a valid rewrite rule of the form $\Lambda_{\proto{P}}(T_i).\protosym{Q}\Rightarrow \Lambda_{\proto{P}}(T_i)$. Each of these rules is distinct and there are infinitely many of them, therefore the rewrite system is not convergent.
\end{proof}

Indeed, the toy requirement machine with name and protocol symbols is somewhat limited in what it can do:
\begin{enumerate}
\item Perhaps the most serious issue is shown in Theorem~\ref{finiterqm}: this rewrite system cannot cope with recursive conformance requirements. In fact, it is no more expressive than the ancient \texttt{ArchetypeBuilder}, described in Algorithm~\ref{archetypebuilder}.

\item Even the \emph{statement} of Theorem~\ref{finiterqm} is awkward. There is no way to talk about what it means for a type parameter to actually ``exist,'' so the theorem must be phrased in terms of there being an infinite number of conformance requirements, when really it should talk about there being infinitely many unique type parameters.

\item Reducing terms to canonical form lets you determine if two type parameters are equivalent, but it's insufficient for \texttt{getCanonicalTypeInContext()}.

The latter is expected to produce ``resolved'' \texttt{DependentMemberTypes}, where the type is equipped with a pointer to an \texttt{AssociatedTypeDecl}; so the canonical form of $\genericparam{Self}.\namesym{SubSequence}$ actually needs to point at the declaration of $\namesym{SubSequence}$ in the $\proto{Collection}$ protocol, rather than the ``unresolved'' form which consists of a bare identifier.

The issue is that this association between associated types and protocols is ``erased'' in this formulation, and there is no way to define a \emph{lifting map} taking terms to types, to serve as the inverse of the type lowering map.

\item This rewrite system can only reason about type parameters in the trivial protocol generic signature $\gensig{\genericparam{Self}}{\genericparam{Self}\colon\proto{P}}$ for some protocol $\proto{P}$. This restricts it to answering queries about type parameters written inside protocols, and not top-level generic signatures attached to generic functions and types, which can have multiple generic parameters and requirements.
\item While the \texttt{requiresProtocol()} generic signature query can be made to work, there doesn't seem to be any easy way to implement \texttt{getRequiredProtocols()}, which returns \emph{all} protocols that a type parameter must conform to.
\item Layout, superclass and concrete type requirements are not supported.
\end{enumerate}

The first three problems are closely intertwined, and the full solution is the subject of the next chapter.

Problem 4 has a straightforward solution, described in Section \ref{genericparamsym}; it requires adding more symbols to the alphabet.

Problem 5 is really not a shortcoming of the rewrite system itself, but rather something that requires building some machinery on top; that is the topic of Chapter \ref{propertymap}. Problem 6 was already deferred to Chapter~\ref{requirementmachine} in Definition~\ref{reqlowering1}.

\chapter{Associated Types}\label{associatedtypes}

\index{recursive conformance requirement}
\begin{listing}\caption{The SwiftUI $\proto{View}$ protocol.}\label{viewproto}
\begin{Verbatim}
protocol View {
    associatedtype Body : View
}
\end{Verbatim}
\end{listing}

To motivate the introduction of the next concept, consider the SwiftUI $\proto{View}$ protocol shown in Listing~\ref{viewproto}. The protocol's requirement signature contains a recursive conformance requirement, $\genericparam{Self}.\namesym{Body}\colon \proto{View}$. By Theorem~\ref{finiterqm}, the rewrite system constructed in the requirement machine with name and protocol symbols is not convergent.

First, let's take a closer look at where exactly things go wrong. The initial rewrite system consists of a single rule:
\[\protosym{View}.\namesym{Body}.\protosym{View}\Rightarrow\protosym{View}.\namesym{Body}\]
The rule's left-hand side has a prefix $\protosym{View}$ equal to its own suffix, so the rule overlaps with itself with an overlap of the second kind. The overlapped term that can be reduced in two different ways is:
\[\protosym{View}.\namesym{Body}.\protosym{View}.\namesym{Body}.\protosym{View}\]
Applying the rule both ways and reducing the result produces the new rule:
\[\protosym{View}.\namesym{Body}.\namesym{Body}.\protosym{View}\Rightarrow\protosym{View}.\namesym{Body}.\namesym{Body}\]
The new rule, in turn, overlaps with the first rule, and the process continues forever (or until your algorithm's maximum iteration count is reached, or \emph{in extremis}, when your computer runs out of memory). Figure \ref{swiftuirunaway} shows what this infinite sequence of critical pairs looks like.
\begin{figure}\caption{Infinitely many critical pairs while completing the $\proto{View}$ protocol}\label{swiftuirunaway}
\begin{center}
\begin{tikzcd}
&\mathmakebox[0pt][c]{
\protosym{View}.\namesym{Body}.\protosym{View}.\namesym{Body}.\protosym{View}
}
\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
\protosym{View}.\namesym{Body}.\namesym{Body}.\protosym{View}
\arrow[d, equal]&&
\protosym{View}.\namesym{Body}.\protosym{View}.\namesym{Body}
\arrow[d]\\
\protosym{View}.\namesym{Body}.\namesym{Body}.\protosym{View}
\arrow[rr, dashed]&&
\protosym{View}.\namesym{Body}.\namesym{Body}
\end{tikzcd}

\vspace{10mm}

\begin{tikzcd}
&\mathmakebox[0pt][c]{
\protosym{View}.\namesym{Body}.\namesym{Body}.\protosym{View}.\namesym{Body}.\protosym{View}
}
\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
\protosym{View}.\namesym{Body}.\namesym{Body}.\namesym{Body}.\protosym{View}
\arrow[d, equal]&&
\protosym{View}.\namesym{Body}.\namesym{Body}.\protosym{View}.\namesym{Body}
\arrow[d]\\
\protosym{View}.\namesym{Body}.\namesym{Body}.\namesym{Body}.\protosym{View}
\arrow[rr, dashed]&&
\protosym{View}.\namesym{Body}.\namesym{Body}.\namesym{Body}
\end{tikzcd}

\vspace{10mm}

\begin{tikzcd}
&\mathmakebox[0pt][c]{
\protosym{View}.\namesym{Body}^{n-1}.\protosym{View}.\namesym{Body}.\protosym{View}
}
\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
\protosym{View}.\namesym{Body}^{n-1}.\namesym{Body}.\protosym{View}
\arrow[d, equal]&&
\protosym{View}.\namesym{Body}^{n-1}.\protosym{View}.\namesym{Body}
\arrow[d]\\
\protosym{View}.\namesym{Body}^n.\protosym{View}
\arrow[rr, dashed]&&
\protosym{View}.\namesym{Body}^n
\end{tikzcd}
\end{center}
\end{figure}

\index{convergent presentation}
In fact, this is exactly the same setup as monoid $M$ in Example \ref{infiniteex} from earlier, only $a$ is $\protosym{View}$, and $b$ is $\namesym{Body}$:
\[\langle a, b;\;aba=ab \rangle\]
While that presentation stumps the Knuth-Bendix algorithm, Example \ref{diffpresex} gave an isomorphic monoid $M'$ with a different presentation which worked just fine:
\[\langle t, u, v;\;uv=t,\;tu=t\rangle\]
This seems awfully convenient, almost as if I introduced these examples with the full intention of revisiting them later. Let's take a closer look at the isomorphism $\varphi\colon M'\rightarrow M$ exhibited in Example \ref{diffpresex}:
\begin{align*}
t&\leftrightarrow ab\\
u&\leftrightarrow a\\
v&\leftrightarrow b
\end{align*}
This means that adding a new \emph{generator} to $M$ made the presentation convergent. What does this generator represent in the world of Swift? Well, $u\in M'$ is $a\in M$, which is $\protosym{View}$ in Swift; and $v\in M'$ is $b\in M$, which is $\namesym{Body}$. Therefore $t\in M'$ is $ab\in M$, which is $\protosym{View}.\namesym{Body}$. You may guess that $t$ could be a new kind of symbol, perhaps representing a ``bound'' associated type inside a specific protocol.

\index{associated type symbol}
The crux of the issue is that name symbols like $\namesym{Body}$ don't carry any information, and little can be said about them unless they're prefixed with some other term that is known to conform to a protocol. Thus, you cannot simply add a rewrite rule to say that $\namesym{Body}$ conforms to $\protosym{View}$:
\[\namesym{Body}.\protosym{View}\Rightarrow\namesym{Body}\]
A rule like this would apply to all associated types named $\namesym{Body}$, ever, \emph{in all protocols}, which is wrong. The best we could do until now is try to introduce a rule for each valid prefix term that conforms to $\proto{View}$, of which there are infinitely many here:
\[\protosym{View}.\underbrace{\namesym{Body}\ldots\namesym{Body}}_{\textrm{$n$ times}}.\protosym{View}\]

If there were instead a symbol representing the unique associated type $\namesym{Body}$ defined in protocol $\proto{View}$, you could introduce a single rewrite rule modeling the conformance requirement on that associated type for a ``prefix'' of any length. This is exactly how it works. An \emph{associated type symbol} is uniquely identified by the combination of a protocol name and an associated type name; they're written like so, where $\proto{P}$ is the protocol and $\namesym{A}$ is an identifier:
\[\assocsym{P}{A}\]
The notation is to enclose the entire symbol in square brackets [ and ] to remind you that it is one symbol, and not two; that is, $\assocsym{P}{A}$ is a term of length 1. While I still haven't formally defined the reduction order here, it is also important that associated type symbols come before name symbols. You will see why shortly.

To be of any use, rules involving associated type symbols must be introduced when the rewrite system is built. Since rewrite system construction is starting to get more complex, I'm going to encapsulate it with the \emph{protocol lowering map}.
\begin{definition}[Protocol lowering map]\label{protoloweringmap}
The map $\Lambda\colon\namesym{Proto}\rightarrow\namesym{Rule}^*$ takes a protocol and outputs a list of zero or more rewrite rules. This list contains two kinds of rules:
\begin{enumerate}
\item Every associated type $\namesym{A}$ of $\proto{P}$ adds an \emph{introduction rule}:
\[\protosym{P}.\namesym{A}\Rightarrow\assocsym{P}{A}.\]
\item Every generic requirement of $\proto{P}$ adds a rewrite rule using the requirement lowering map $\Lambda_{\proto{P}}:\namesym{Requirement}\rightarrow\namesym{Rule}$ from Definition~\ref{reqlowering1}.
\end{enumerate}
This map is further amended in Definition~\ref{protoloweringmap2} in the next section.
\end{definition}
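
In the illustrative Swift sketch, this amounts to one more case of the \texttt{Symbol} type, plus a generator for introduction rules; as before, these names are inventions for exposition, and \texttt{RewriteRule} is the type from the earlier sketch.
\begin{Verbatim}
// An associated type symbol is a single symbol naming both the
// protocol and the associated type.
enum Symbol: Equatable {
    case name(String)               //  Body
    case proto(String)              // [View]
    case assocType(String, String)  // [View:Body]
}

// Step 1 of the protocol lowering map: [P].A => [P:A].
func introductionRule(proto p: String,
                      assocType a: String) -> RewriteRule {
    return RewriteRule(lhs: [.proto(p), .name(a)],
                       rhs: [.assocType(p, a)])
}
\end{Verbatim}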

With this amended rewrite system construction process, the initial rewrite system for the $\proto{View}$ protocol now has two rules, the first one describing the associated type itself, and the second one describing the protocol conformance requirement on the associated type:
\begin{align}
\protosym{View}.\namesym{Body}&\Rightarrow\assocsym{View}{Body}\tag{1}\\
\protosym{View}.\namesym{Body}.\protosym{View}&\Rightarrow\protosym{View}.\namesym{Body}\tag{2}
\end{align}
Rule 1 overlaps with Rule 2 on this term:
\[\protosym{View}.\namesym{Body}.\protosym{View}.\]
Resolving this first critical pair introduces a new rewrite rule:
\begin{align}
\assocsym{View}{Body}.\protosym{View}&\Rightarrow\assocsym{View}{Body}\tag{3}
\end{align}
Next, swapping things around, Rule 2 overlaps with Rule 1 on this term:
\[\protosym{View}.\namesym{Body}.\protosym{View}.\namesym{Body}.\]
Resolving this second critical pair also introduces a new rewrite rule:
\begin{align}
\assocsym{View}{Body}.\namesym{Body}&\Rightarrow\assocsym{View}{Body}.\assocsym{View}{Body}\tag{4}
\end{align}
(Incidentally, this is why it is important that $\assocsym{View}{Body}<\namesym{Body}$. If the above rule were oriented in the other direction, completion would run off into the weeds again.)

Finally, Rule 2 overlaps with itself on this term:
\[\protosym{View}.\namesym{Body}.\protosym{View}.\namesym{Body}.\protosym{View}.\]
This is the same overlapped term that caused trouble before, and once again this overlap produces the same critical pair:
\[(\protosym{View}.\namesym{Body}.\namesym{Body}.\protosym{View}, \protosym{View}.\namesym{Body}.\protosym{View}.\namesym{Body})\]
However, everything gets better from here. The reduced form of the left-hand side is different:
\begin{align}
\protosym{View}.\namesym{Body}.\namesym{Body}.\protosym{View}
&\rightarrow\assocsym{View}{Body}.\namesym{Body}.\protosym{View}\tag{Rule 1}\\
&\rightarrow\assocsym{View}{Body}.\assocsym{View}{Body}.\protosym{View}\tag{Rule 4}\\
&\rightarrow\assocsym{View}{Body}.\assocsym{View}{Body}\tag{Rule 3}
\end{align}
And the best part is, the right-hand side reduces to the same term:
\begin{align}
\protosym{View}.\namesym{Body}.\protosym{View}.\namesym{Body}
&\rightarrow\protosym{View}.\namesym{Body}.\namesym{Body}\tag{Rule 2}\\
&\rightarrow\assocsym{View}{Body}.\namesym{Body}\tag{Rule 1}\\
&\rightarrow\assocsym{View}{Body}.\assocsym{View}{Body}\tag{Rule 4}
\end{align}
\begin{listing}\caption{Rewrite system of $\proto{View}$ protocol after completion}\label{swiftuiviewcompleterules}
\begin{align}
\protosym{View}.\namesym{Body}&\Rightarrow\assocsym{View}{Body}\tag{1}\\
\protosym{View}.\namesym{Body}.\protosym{View}&\Rightarrow\protosym{View}.\namesym{Body}\tag{\textbf{Deleted}}\\
\assocsym{View}{Body}.\protosym{View}&\Rightarrow\assocsym{View}{Body}\tag{3}\\
\assocsym{View}{Body}.\namesym{Body}&\Rightarrow\assocsym{View}{Body}.\assocsym{View}{Body}\tag{4}
\end{align}
\end{listing}
How exciting---the third critical pair can be discarded, and no more overlaps remain. Figure \ref{swiftuiassocfig} presents this process in diagram form, and Listing \ref{swiftuiviewcompleterules} shows the final list of rules. Note that the left-hand side of Rule 2 contains the left-hand side of Rule 1, so the post-processing step of the Knuth-Bendix algorithm deletes Rule 2.

\begin{figure}\caption{Successful completion of $\proto{View}$ protocol with an associated type symbol}\label{swiftuiassocfig}
\begin{center}

\begin{tikzcd}
&\mathmakebox[0pt][c]{
\protosym{View}.\namesym{Body}.\protosym{View}
}
\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
\assocsym{View}{Body}.\protosym{View}
\arrow[d, equal]&&
\protosym{View}.\namesym{Body}
\arrow[d]\\
\assocsym{View}{Body}.\protosym{View}
\arrow[rr, dashed]&&
\assocsym{View}{Body}
\end{tikzcd}

\vspace{10mm}

\begin{tikzcd}
&\mathmakebox[0pt][c]{
\protosym{View}.\namesym{Body}.\protosym{View}.\namesym{Body}
}
\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
\protosym{View}.\namesym{Body}.\namesym{Body}
\arrow[d]&&
\protosym{View}.\namesym{Body}.\assocsym{View}{Body}
\arrow[d]\\
\assocsym{View}{Body}.\namesym{Body}
\arrow[rr, dashed]&&
\assocsym{View}{Body}.\assocsym{View}{Body}
\end{tikzcd}

\vspace{10mm}

\begin{tikzcd}
&\mathmakebox[0pt][c]{
\protosym{View}.\namesym{Body}.\protosym{View}.\namesym{Body}.\protosym{View}
}
\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
\protosym{View}.\namesym{Body}.\namesym{Body}.\protosym{View}
\arrow[d]&&
\protosym{View}.\namesym{Body}.\protosym{View}.\namesym{Body}
\arrow[d]\\
\assocsym{View}{Body}.\assocsym{View}{Body}
\arrow[rr, dashed, equal]&&
\assocsym{View}{Body}.\assocsym{View}{Body}
\end{tikzcd}

\end{center}
\end{figure}

I'm going to call this ``the requirement machine with name, protocol and associated type symbols.'' Since the rewrite system generated by the $\proto{View}$ protocol now has a confluent completion, the addition of associated type symbols gives you a strictly more powerful formalism.

One interesting phenomenon is when terms containing name symbols reduce to associated type symbols:
\begin{align}
\protosym{View}.\namesym{Body}.\namesym{Body}.&\namesym{Body}\nonumber\\
&\rightarrow\assocsym{View}{Body}.\namesym{Body}.\namesym{Body}\tag{Rule 1}\\
&\rightarrow\assocsym{View}{Body}.\assocsym{View}{Body}.\namesym{Body}\tag{Rule 4}\\
&\rightarrow\assocsym{View}{Body}.\assocsym{View}{Body}.\assocsym{View}{Body}\tag{Rule 4}
\end{align}

\index{domain of a term}
Some terms no longer start with a protocol symbol, so an amended definition of the domain map is needed.
\begin{definition} If the first symbol of a term $T$ is an associated type symbol $\assocsym{P}{A}$, then $\domain(T):=\{\proto{P}\}$.
\end{definition}
As before, the rewrite system is domain-preserving. Indeed, the only newly added rules are the associated type introduction rules, $\protosym{P}.\namesym{A}\Rightarrow\assocsym{P}{A}$; these rules are domain-preserving by the above definition.

Previously, I mentioned that the earlier formulation of the requirement machine with name and protocol symbols could not express the fact that a type parameter actually exists. Now, associated type symbols offer a solution. If a fully-reduced term contains name symbols, it means that the type parameter represented by this term does not exist:
\[\protosym{View}.\namesym{Body}.\namesym{Goat}\rightarrow\assocsym{View}{Body}.\namesym{Goat}.\]
The converse is not true, however. A term that only contains associated type symbols might still not be a ``real'' type parameter. For example, the $\proto{View}$ protocol's $\namesym{Body}$ associated type does not conform to $\proto{Sequence}$, so the following term does not represent an actual type parameter, even though it looks legitimate:
\[\assocsym{View}{Body}.\assocsym{Sequence}{Element}\]

This new validity condition can be expressed using the tools I've already introduced. The domain of $\assocsym{Sequence}{Element}$ is $\{\proto{Sequence}\}$, but $\assocsym{View}{Body}$ does not conform to $\proto{Sequence}$, because $\assocsym{View}{Body}.\protosym{Sequence}$ does not reduce to $\assocsym{View}{Body}$. This can be generalized.
\begin{definition} A term $T=T_1.\ldots.T_n$ is \emph{conformance-valid} if at every position, the prefix conforms to each protocol in the domain of the corresponding suffix.

More specifically, for every $1\leq i<n$, let $U=T_1.\ldots.T_i$ and $V=T_{i+1}.\ldots.T_n$, so that $T=UV$. Note that both $U$ and $V$ are non-empty. Then $T$ is conformance-valid if $V$ is valid, and $U.\protosym{P}$ and $U$ reduce to the same canonical form for all $\proto{P}\in\domain(V)$.
\end{definition}
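
Continuing the illustrative Swift sketch, with the domain map amended to accept associated type symbols, the conformance-validity check can be written directly from the definition; \texttt{reduce} is still the assumed canonical-form function from earlier, not compiler API.
\begin{Verbatim}
// Amended domain map: associated type symbols also have a domain.
func domain(_ term: Term) -> Set<String>? {
    switch term.first {
    case .proto(let p)?:        return [p]
    case .assocType(let p, _)?: return [p]
    default:                    return nil
    }
}

// At every split point, the suffix must be valid, and the prefix
// must conform to each protocol in the suffix's domain.
func isConformanceValid(_ term: Term,
                        rules: [RewriteRule]) -> Bool {
    guard !term.isEmpty else { return false }
    for i in 1..<term.count {
        let prefix = Array(term.prefix(i))
        let suffix = Array(term.suffix(from: i))
        guard let protos = domain(suffix) else { return false }
        for p in protos {
            if reduce(prefix + [.proto(p)], rules: rules)
                != reduce(prefix, rules: rules) {
                return false
            }
        }
    }
    return true
}
\end{Verbatim}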

Conformance validity has some useful properties.
\begin{lemma} If the terms appearing in the rewrite rules produced by lowering the initial generic requirements all reduce to conformance-valid terms, then the rewrite system preserves conformance validity.
\end{lemma}
\begin{proof} TODO.
\end{proof}

\begin{lemma} If $T$ is conformance-valid, then so is its canonical form ${T}{\downarrow}$.
\end{lemma}
\begin{proof} TODO.
\end{proof}

\iffalse
This solves the quandary presented at the beginning of this chapter, where the earlier formulation of the rewrite system that lacked associated type symbols had no apparent way of implementing the \texttt{getCanonicalTypeInContext()} generic signature query.

Now that a term containing name symbols reduces to a canonical term with associated type symbols, it appears that mapping terms back to fully resolved type parameters is now a possibility. Unfortunately, the reality is a little bit more complicated. The inherited associated type construction I will introduce in the next section once again breaks the correspondence between terms and canonical types; this will be repaired in Section \ref{implqueries}.
\fi

\section{Inherited Associated Types}\label{inheritedassoctypes}
\index{recursive conformance requirement}
\index{same-type requirement}
Now that recursive protocol conformances are handled correctly, I can finally show you the full definition of the $\proto{Collection}$ protocol. The previous toy $\proto{Collection}$ protocol did have one recursive associated type, $\namesym{SubSequence}$, but it was ``tied off'' with the same-type requirement:
\[\genericparam{Self}.\namesym{SubSequence}.\namesym{SubSequence}==\genericparam{Self}.\namesym{SubSequence}\]
The real protocol also includes an $\namesym{Indices}$ associated type conforming to $\proto{Collection}$, but the recursion is not constrained in any way, so $\genericparam{Self}.\namesym{Indices}$, $\genericparam{Self}.\namesym{Indices}.\namesym{Indices}$, and so on are distinct type parameters, just like the $\namesym{Body}$ associated type in the $\proto{View}$ protocol.

Listing \ref{fullcollectionproto} shows the full definition of the protocol. The rewrite system built from this protocol's requirements is convergent with the latest formulation incorporating associated type symbols.
\begin{listing}\caption{The $\proto{Collection}$ and $\proto{BidirectionalCollection}$ protocols}\label{fullcollectionproto}
\begin{Verbatim}
protocol Collection : Sequence {
    associatedtype Index : Comparable

    associatedtype SubSequence : Collection
        where SubSequence.Index == Index,
              SubSequence.Element == Element,
              SubSequence.SubSequence == SubSequence

    associatedtype Indices : Collection
        where Indices.Element == Index,
              Indices.Index == Index,
              Indices.SubSequence == Indices
}

protocol BidirectionalCollection : Collection
    where SubSequence : BidirectionalCollection,
          Indices : BidirectionalCollection {
}
\end{Verbatim}
\end{listing}

The next problem comes up when you try to build a confluent rewrite system for $\proto{BidirectionalCollection}$. The rewrite system for $\proto{BidirectionalCollection}$ inherits all of the rewrite rules from $\proto{Collection}$, $\proto{Sequence}$ and $\proto{IteratorProtocol}$, and also adds three more rules, corresponding to these requirements:
\begin{itemize}
\item $\genericparam{Self}\colon\proto{Collection}$
\item $\genericparam{Self}.\namesym{SubSequence}\colon\proto{BidirectionalCollection}$
\item $\genericparam{Self}.\namesym{Indices}\colon\proto{BidirectionalCollection}$
\end{itemize}
I'm not going to talk about the new $\namesym{SubSequence}$ requirement; again, the recursion via $\namesym{SubSequence}$ is ``tied off,'' so it doesn't do anything interesting. Also once again, to keep long lines in check, I will abbreviate protocol names, writing $\proto{C}$ in place of $\proto{Collection}$ and $\proto{BC}$ in place of $\proto{BidirectionalCollection}$. The initial rewrite system is now rather large, so I'm only going to show the relevant rewrite rules below:
\begin{align}
&\cdots\nonumber\\
\protosym{C}.\namesym{Indices}&\Rightarrow\assocsym{C}{Indices}\tag{1}\\
\assocsym{C}{Indices}.\protosym{C}&\Rightarrow\assocsym{C}{Indices}\tag{2}\\
&\cdots\nonumber\\
\protosym{BC}.\protosym{C}&\Rightarrow\protosym{BC}\tag{3}\\
\protosym{BC}.\namesym{Indices}.\protosym{BC}&\Rightarrow\protosym{BC}.\namesym{Indices}\tag{4}
\end{align}
Let's take a closer look at a handful of critical pairs introduced by the new rules. First of all, Rule~3 overlaps with Rule~1 on the term $\protosym{BC}.\protosym{C}.\namesym{Indices}$. Resolving this first critical pair introduces a rewrite rule:
|
|
\begin{align}
|
|
\protosym{BC}.\namesym{Indices}&\Rightarrow\protosym{BC}.\assocsym{C}{Indices}\tag{5}
|
|
\end{align}
|
|
Next, Rule 5 overlaps with Rule 4 on the term $\protosym{BC}.\namesym{Indices}.\protosym{BC}$.
|
|
Resolving this critical pair introduces a rewrite rule:
|
|
\begin{align}
|
|
\protosym{BC}.\assocsym{C}{Indices}.\protosym{BC}&\Rightarrow\protosym{BC}.\assocsym{C}{Indices}\tag{6}
|
|
\end{align}
|
|
The new Rule~6 overlaps with itself, and resolving this critical pair produces yet another rule:
|
|
\begin{align*}
|
|
\protosym{BC}.\assocsym{C}{Indices}.\assocsym{C}{Indices}.\protosym{BC}&\Rightarrow\protosym{BC}.\assocsym{C}{Indices}.\assocsym{C}{Indices}
|
|
\end{align*}
|
|
Oops! The annoying problem with the recursive associated type is back. The new rule also overlaps with Rule~6, and this continues forever, producing ever-longer rules of the form:
|
|
\begin{align*}
|
|
\protosym{BC}.\underbrace{\assocsym{C}{Indices}\ldots\assocsym{C}{Indices}}_{\text{$n$ times}}.\protosym{BC}
|
|
&\Rightarrow\protosym{BC}.\underbrace{\assocsym{C}{Indices}\ldots\assocsym{C}{Indices}}_{\text{$n$ times}}
|
|
\end{align*}
|
|
Introducing associated type symbols was supposed to have solved this problem already! What went wrong? The issue is that, yet again, you want to be able to impose a conformance requirement on an arbitrary path comprised of $\assocsym{C}{Indices}$ that is ``rooted'' at $\protosym{BC}$. The conformance requirement doesn't apply to \emph{all} instances of $\assocsym{C}{Indices}$, only those that come from a $\protosym{BC}$.
|
|
|
|
The solution is to generalize associated type symbols to encode associated type inheritance. That is, rewrite rules should be able to talk about the symbol $\assocsym{P}{A}$ if $\proto{P}$ \emph{inherits} some other protocol $\proto{Q}$, and $\proto{Q}$ \emph{defines} $\namesym{A}$. Then, $\proto{P}$ can impose additional requirements on $\assocsym{P}{A}$ without blowing up the completion procedure.
|
|
|
|
\begin{leftbar}
|
|
\noindent For this reason, associated type symbols can't be represented by a pointer to an \texttt{AssociatedTypeDecl} in the actual implementation, because inherited associated type symbols correspond to ``virtual'' associated types which have no corresponding declaration node in the AST. Instead, associated type symbols store a pointer to a \texttt{ProtocolDecl} together with an \texttt{Identifier}.
|
|
\end{leftbar}
|
|
|
|
These new inherited associated type symbols are introduced by extending the protocol lowering map from Definition~\ref{protoloweringmap}.
|
|
\begin{definition}[Protocol lowering map with inheritance]\label{protoloweringmap2}
|
|
The map $\Lambda\colon\namesym{Proto}\rightarrow\namesym{Rule}^*$ takes a protocol and outputs a list of zero or more rewrite rules. This list contains three kinds of rules:
|
|
\begin{enumerate}
|
|
\item Every associated type $\namesym{A}$ of $\proto{P}$ adds an \emph{introduction rule}:
|
|
\[\protosym{P}.\namesym{A}\Rightarrow\assocsym{P}{A}.\]
|
|
\item Every associated type $\namesym{A}$ of $\proto{Q}$, where $\proto{P}$ inherits from $\proto{Q}$, possibly transitively via some intermediate protocol, adds an \emph{inheritance rule}:
|
|
\[\protosym{P}.\assocsym{Q}{A}\Rightarrow\assocsym{P}{A}.\]
|
|
\item Every generic requirement of $\proto{P}$ adds a rewrite rule using the requirement lowering map $\Lambda_{\proto{P}}:\namesym{Requirement}\rightarrow\namesym{Rule}$ from Definition~\ref{reqlowering1}.
|
|
\end{enumerate}
|
|
\end{definition}
|
|
Coming back to the $\proto{BidirectionalCollection}$ example, we're going to start over with the following initial rules, where the first four are the same as before, but Rule 5 is new, coming from Step~2 of the amended protocol lowering map:
|
|
\begin{align}
|
|
&\cdots\nonumber\\
|
|
\protosym{C}.\namesym{Indices}&\Rightarrow\assocsym{C}{Indices}\tag{1}\\
|
|
\assocsym{C}{Indices}.\protosym{C}&\Rightarrow\assocsym{C}{Indices}\tag{2}\\
|
|
&\cdots\nonumber\\
|
|
\protosym{BC}.\protosym{C}&\Rightarrow\protosym{BC}\tag{3}\\
|
|
\protosym{BC}.\namesym{Indices}.\protosym{BC}&\Rightarrow\protosym{BC}.\namesym{Indices}\tag{4}\\
|
|
\protosym{BC}.\assocsym{C}{Indices}&\Rightarrow\assocsym{BC}{Indices}\tag{5}
|
|
\end{align}
|
|
Recall that Rule 3 overlaps with Rule 1 on the term $\protosym{BC}.\protosym{C}.\namesym{Indices}$. Resolving this critical pair introduces a slightly different rewrite rule this time, though; look at the right-hand side:
|
|
\begin{align}
|
|
\protosym{BC}.\namesym{Indices}&\Rightarrow\assocsym{BC}{Indices}\tag{6}
|
|
\end{align}
|
|
Rule 4 overlaps with Rule 6 on the term $\protosym{BC}.\namesym{Indices}.\protosym{BC}$. Resolving the critical pair introduces this rewrite rule:
|
|
\begin{align}
|
|
\assocsym{BC}{Indices}.\protosym{BC}&\Rightarrow\assocsym{BC}{Indices}\tag{7}
|
|
\end{align}
|
|
Rule 7 overlaps with Rule 6 on the term $\assocsym{BC}{Indices}.\protosym{BC}.\namesym{Indices}$. Resolving the critical pair introduces this rewrite rule:
|
|
\begin{align}
|
|
\assocsym{BC}{Indices}.\namesym{Indices}&\Rightarrow\assocsym{BC}{Indices}.\assocsym{BC}{Indices}\tag{8}
|
|
\end{align}
|
|
At this point, you might guess that we've closed off the runaway recursion. For instance, look at the overlap of Rule 4 with itself on the term $\protosym{BC}.\namesym{Indices}.\protosym{BC}.\namesym{Indices}.\protosym{BC}$. The critical pair from reducing the term both ways is
|
|
\[(\protosym{BC}.\namesym{Indices}.\namesym{Indices}.\protosym{BC}, \protosym{BC}.\namesym{Indices}.\protosym{BC}.\namesym{Indices})\]
|
|
The left-hand side reduces as follows:
|
|
\begin{align*}
|
|
\protosym{BC}.\namesym{Indices}.\namesym{Indices}.\protosym{BC}&\rightarrow
|
|
\assocsym{BC}{Indices}.\namesym{Indices}.\protosym{BC}\tag{Rule 6}\\
|
|
&\rightarrow
|
|
\assocsym{BC}{Indices}.\assocsym{BC}{Indices}.\protosym{BC}\tag{Rule 8}\\
|
|
&\rightarrow
|
|
\assocsym{BC}{Indices}.\assocsym{BC}{Indices}\tag{Rule 7}
|
|
\end{align*}
|
|
The right-hand side similarly reduces to the same term:
|
|
\begin{align*}
|
|
\protosym{BC}.\namesym{Indices}.\protosym{BC}.\namesym{Indices}&\rightarrow
|
|
\assocsym{BC}{Indices}.\protosym{BC}.\namesym{Indices}\tag{Rule 6}\\
|
|
&\rightarrow
|
|
\assocsym{BC}{Indices}.\namesym{Indices}\tag{Rule 7}\\
|
|
&\rightarrow
|
|
\assocsym{BC}{Indices}.\assocsym{BC}{Indices}\tag{Rule 8}
|
|
\end{align*}
|
|
This critical pair is now trivial and can be discarded. We've successfully neutralized a formidable adversary---the recursive inherited associated type. Figure~\ref{bidirectionalfig} shows the resolution of the above critical pairs in diagram form.
|
|
|
|
Before moving on to the next topic, there is one final thing worth mentioning. While it's not apparent from looking at this example, for the inherited associated type trick to work, the reduction order must be defined so that if a protocol $\proto{P}$ inherits from $\proto{Q}$, then $\protosym{P}<\protosym{Q}$ as well as $\assocsym{P}{A}<\assocsym{Q}{A}$ for all associated types $\namesym{A}$ of $\proto{Q}$. That is, protocols with a ``deeper'' inheritance graph order lower in the order.
|
|
|
|
In this section, I've been implicitly using a lexicographic order on protocol names, and everything worked ``by accident'' because $\proto{BidirectionalCollection}<\proto{Collection}$ anyway; however, this would not have been the case if $\proto{BidirectionalCollection}$ were named differently, for example $\proto{TwoWayCollection}$. The formal definition of the reduction order in Definition \ref{protocolorder} will take protocol inheritance into account.
|
|
|
|
\begin{figure}\caption{A few critical pairs resolved while completing the $\proto{BidirectionalCollection}$ protocol with an inherited associated type symbol}\label{bidirectionalfig}
|
|
\begin{center}
|
|
|
|
\vspace{10mm}
|
|
|
|
\begin{tikzcd}
|
|
&\mathmakebox[0pt][c]{
|
|
\protosym{BC}.\protosym{C}.\namesym{Indices}
|
|
}
|
|
\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
|
|
\protosym{BC}.\namesym{Indices}
|
|
\arrow[d, equal]&&
|
|
\protosym{BC}.\assocsym{C}{Indices}
|
|
\arrow[d]\\
|
|
\protosym{BC}.\namesym{Indices}
|
|
\arrow[rr, dashed]&&
|
|
\assocsym{BC}{Indices}
|
|
\end{tikzcd}
|
|
|
|
\vspace{10mm}
|
|
|
|
\begin{tikzcd}
|
|
&\mathmakebox[0pt][c]{
|
|
\protosym{BC}.\namesym{Indices}.\protosym{BC}
|
|
}
|
|
\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
|
|
\protosym{BC}.\namesym{Indices}
|
|
\arrow[d]&&
|
|
\assocsym{BC}{Indices}.\protosym{BC}
|
|
\arrow[d, equal]\\
|
|
\assocsym{BC}{Indices}
|
|
\arrow[rr, dashed, leftarrow]&&
|
|
\assocsym{BC}{Indices}.\protosym{BC}
|
|
\end{tikzcd}
|
|
|
|
\vspace{10mm}
|
|
|
|
\begin{tikzcd}
|
|
&\mathmakebox[0pt][c]{
|
|
\assocsym{BC}{Indices}.\protosym{BC}.\namesym{Indices}
|
|
}
|
|
\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
|
|
\assocsym{BC}{Indices}.\namesym{Indices}
|
|
\arrow[d, equal]&&
|
|
\assocsym{BC}{Indices}.\assocsym{BC}{Indices}
|
|
\arrow[d, equal]\\
|
|
\assocsym{BC}{Indices}.\namesym{Indices}
|
|
\arrow[rr, dashed]&&
|
|
\assocsym{BC}{Indices}.\assocsym{BC}{Indices}
|
|
\end{tikzcd}
|
|
|
|
\vspace{10mm}
|
|
|
|
\begin{tikzcd}
|
|
&\mathmakebox[0pt][c]{
|
|
\protosym{BC}.\namesym{Indices}.\protosym{BC}.\namesym{Indices}.\protosym{BC}
|
|
}
|
|
\arrow[ld, yshift=-3pt, shorten=6pt] \arrow[rd, yshift=-3pt, shorten=6pt] \\
|
|
\protosym{BC}.\namesym{Indices}.\namesym{Indices}.\protosym{BC}
|
|
\arrow[d, equal]&&
|
|
\protosym{BC}.\namesym{Indices}.\protosym{BC}.\namesym{Indices}
|
|
\arrow[d, equal]\\
|
|
\assocsym{BC}{Indices}.\assocsym{BC}{Indices}
|
|
\arrow[rr, dashed, equal]&&
|
|
\assocsym{BC}{Indices}.\assocsym{BC}{Indices}
|
|
\end{tikzcd}
|
|
|
|
\end{center}
|
|
\end{figure}
|
|
|
|
\section{Merged Associated Types}\label{mergedassoctypes}
|
|
|
|
The previous section showed an example of what can be called ``vertical composition,'' where the $\proto{BidirectionalCollection}$ protocol imposed an additional protocol conformance requirement on the $\namesym{Indices}$ associated type it inherits from $\proto{Collection}$. The recursive conformance on the $\namesym{Indices}$ associated type caused some trouble, which was resolved with the introduction of inherited associated type symbols.
|
|
|
|
Now consider ``horizontal composition,'' where a type parameter conforms to two unrelated protocols, and both protocols define nested associated types with the same name. In this case, the requirements on both nested types are ``merged'' to form a single type parameter. Listing~\ref{horizontalcomp} shows a contrived example, which once again prominently features recursive protocol conformances.
|
|
\begin{listing}\caption{Example of horizontal composition}\label{horizontalcomp}
|
|
\begin{Verbatim}
|
|
protocol P1 {
|
|
associatedtype A : P1
|
|
}
|
|
|
|
protocol P2 {
|
|
associatedtype A : P2
|
|
}
|
|
|
|
protocol P3 {
|
|
associatedtype T : P1, P2
|
|
}
|
|
\end{Verbatim}
|
|
\end{listing}
|
|
|
|
Note that $\genericparam{Self}.\namesym{T}$ conforms to both $\proto{P1}$ and $\proto{P2}$. Furthermore, $\proto{P1}$ and $\proto{P2}$ both define an associated type named $\namesym{A}$. The associated type in $\proto{P1}$ conforms to $\proto{P1}$, and the associated type in $\proto{P2}$ conforms to $\proto{P2}$. This means that the associated type $\namesym{T}$ of $\proto{P3}$ defines an infinite sequence of nested type parameters, all of which conform to \emph{both} $\proto{P1}$ and $\proto{P2}$:
|
|
\begin{align*}
|
|
&\genericparam{Self}.\namesym{T}\\
|
|
&\genericparam{Self}.\namesym{T}.\namesym{A}\\
|
|
&\genericparam{Self}.\namesym{T}.\namesym{A}.\namesym{A}\\
|
|
&\cdots
|
|
\end{align*}
|
|
Once again, this causes trouble with the rewrite system, and fixing this requires one final generalization of the concept of associated type symbols. Let's start by listing the initial rewrite rules for the above three protocols. First, $\proto{P1}$ defines an associated type introduction rule, and a conformance rule for this associated type:
|
|
\begin{align}
|
|
\protosym{P1}.\namesym{A}&\Rightarrow\assocsym{P1}{A}\tag{1}\\
\assocsym{P1}{A}.\protosym{P1}&\Rightarrow\assocsym{P1}{A}\tag{2}
|
|
\end{align}
|
|
Similarly for $\proto{P2}$:
|
|
\begin{align}
|
|
\protosym{P2}.\namesym{A}&\Rightarrow\assocsym{P2}{A}\tag{3}\\
\assocsym{P2}{A}.\protosym{P2}&\Rightarrow\assocsym{P2}{A}\tag{4}
|
|
\end{align}
|
|
Finally, $\proto{P3}$ defines an associated type conforming to both $\proto{P1}$ and $\proto{P2}$:
|
|
\begin{align}
|
|
\protosym{P3}.\namesym{T}&\Rightarrow\assocsym{P3}{T}\tag{5}\\
\assocsym{P3}{T}.\protosym{P1}&\Rightarrow\assocsym{P3}{T}\tag{6}\\
\assocsym{P3}{T}.\protosym{P2}&\Rightarrow\assocsym{P3}{T}\tag{7}
|
|
\end{align}
|
|
The completion procedure looks for overlaps in the above set of rules. The first critical pairs to be resolved are the overlap of Rule~2 with Rule~1 on $\assocsym{P1}{A}.\protosym{P1}.\namesym{A}$, and Rule~4 with Rule~3 on $\assocsym{P2}{A}.\protosym{P2}.\namesym{A}$:
|
|
\begin{align}
|
|
\assocsym{P1}{A}.\namesym{A}&\Rightarrow\assocsym{P1}{A}.\assocsym{P1}{A}\tag{8}\\
|
|
\assocsym{P2}{A}.\namesym{A}&\Rightarrow\assocsym{P2}{A}.\assocsym{P2}{A}\tag{9}
|
|
\end{align}
|
|
Next, let's look at overlaps arising from the rules introduced by protocol $\proto{P3}$. Rule~6 overlaps with Rule~1 on the term $\assocsym{P3}{T}.\protosym{P1}.\namesym{A}$. Resolving this critical pair introduces the rewrite rule:
|
|
\begin{align}
|
|
\assocsym{P3}{T}.\namesym{A}\Rightarrow\assocsym{P3}{T}.\assocsym{P1}{A}\tag{10}
|
|
\end{align}
|
|
Similarly, Rule~7 overlaps with Rule~3 on the term $\assocsym{P3}{T}.\protosym{P2}.\namesym{A}$. Resolving this critical pair introduces the rewrite rule:
|
|
\begin{align}
|
|
\assocsym{P3}{T}.\namesym{A}\Rightarrow\assocsym{P3}{T}.\assocsym{P2}{A}\tag{11}
|
|
\end{align}
|
|
These two new rules, Rule~10 and Rule~11, have identical left hand sides, so they overlap on the term $\assocsym{P3}{T}.\namesym{A}$. Resolving this critical pair introduces the rewrite rule:
|
|
\begin{align}
|
|
\assocsym{P3}{T}.\assocsym{P2}{A}\Rightarrow\assocsym{P3}{T}.\assocsym{P1}{A}\tag{12}
|
|
\end{align}
|
|
The new Rule~12 overlaps with Rule~4 on the term $\assocsym{P3}{T}.\assocsym{P2}{A}.\protosym{P2}$. Resolving this critical pair introduces the rewrite rule:
|
|
\begin{align}
|
|
\assocsym{P3}{T}.\assocsym{P1}{A}.\protosym{P2}\Rightarrow\assocsym{P3}{T}.\assocsym{P1}{A}\tag{13}
|
|
\end{align}
|
|
The new Rule~12 also overlaps with Rule~9 on the term $\assocsym{P3}{T}.\assocsym{P2}{A}.\namesym{A}$. Let's look at this one more carefully. Call the overlapped term $t$. Reducing $t$ with each one of Rule~12 and Rule~9 produces the following pair of terms:
|
|
\begin{align*}
|
|
t_0&=\assocsym{P3}{T}.\assocsym{P1}{A}.\underline{\namesym{A}}\\
|
|
t_1&=\assocsym{P3}{T}.\assocsym{P2}{A}.\underline{\assocsym{P2}{A}}
|
|
\end{align*}
|
|
The term $t_0$ can be further reduced by an application of Rule~8:
|
|
\begin{align*}
|
|
t_0=&\assocsym{P3}{T}.\assocsym{P1}{A}.\underline{\namesym{A}}\\
|
|
\rightarrow&\assocsym{P3}{T}.\assocsym{P1}{A}.\underline{\assocsym{P1}{A}}
|
|
\end{align*}
|
|
At this point, $t_0$ is irreducible. The term $t_1$ can be further reduced by another application of Rule~12:
|
|
\begin{align*}
|
|
t_1=&\underline{\assocsym{P3}{T}.\assocsym{P2}{A}}.\assocsym{P2}{A}\\
|
|
\rightarrow&\underline{\assocsym{P3}{T}.\assocsym{P1}{A}}.\assocsym{P2}{A}
|
|
\end{align*}
|
|
If you orient this critical pair, you get the rewrite rule:
|
|
\begin{align}
|
|
\assocsym{P3}{T}.\assocsym{P1}{A}.\assocsym{P2}{A}&\Rightarrow \assocsym{P3}{T}.\assocsym{P1}{A}.\assocsym{P1}{A}\tag{14}
|
|
\end{align}
|
|
Now, you're surely experiencing d\'ej\`a vu, because the completion procedure is about to wander off into the weeds with an infinite sequence of critical pairs. Rule~14 overlaps with Rule~4 on the term $\assocsym{P3}{T}.\assocsym{P1}{A}.\assocsym{P2}{A}.\protosym{P2}$. Resolving this critical pair introduces the rewrite rule:
|
|
\begin{align}
|
|
\assocsym{P3}{T}.\assocsym{P1}{A}.\assocsym{P1}{A}.\protosym{P2}&\Rightarrow \assocsym{P3}{T}.\assocsym{P1}{A}.\assocsym{P1}{A}\tag{15}
|
|
\end{align}
|
|
Similarly, Rule~14 overlaps with Rule~9 on the term $\assocsym{P3}{T}.\assocsym{P1}{A}.\assocsym{P2}{A}.\namesym{A}$. Resolving this critical pair introduces the rewrite rule:
|
|
\begin{align}
|
|
\assocsym{P3}{T}.\assocsym{P1}{A}.\assocsym{P1}{A}.\assocsym{P2}{A}&\Rightarrow \assocsym{P3}{T}.\assocsym{P1}{A}.\assocsym{P1}{A}.\assocsym{P1}{A}\tag{16}
|
|
\end{align}
|
|
This process will continue forever, introducing an infinite sequence of rewrite rules, for all $n\in\mathbb{N}$:
|
|
\begin{align*}
|
|
\assocsym{P3}{T}.\underbrace{\assocsym{P1}{A}.\assocsym{P1}{A}.\assocsym{P1}{A}}_{\text{$n$ times}}.\protosym{P2}&\Rightarrow \assocsym{P3}{T}.\underbrace{\assocsym{P1}{A}.\assocsym{P1}{A}.\assocsym{P1}{A}}_{\text{$n$ times}}\\
|
|
\assocsym{P3}{T}.\underbrace{\assocsym{P1}{A}.\assocsym{P1}{A}.\assocsym{P1}{A}}_{\text{$n$ times}}.\assocsym{P2}{A}&\Rightarrow \assocsym{P3}{T}.\underbrace{\assocsym{P1}{A}.\assocsym{P1}{A}.\assocsym{P1}{A}}_{\text{$n$ times}}.\assocsym{P1}{A}
|
|
\end{align*}
|
|
What happened here is that the rewrite system wants to normalize a mix of $\assocsym{P1}{A}$ and $\assocsym{P2}{A}$ that follows $\assocsym{P3}{T}$ into the same-length sequence of $\assocsym{P1}{A}$ alone. Additionally, each one of these type parameters needs to conform to $\proto{P2}$ as well, even though $\assocsym{P1}{A}$ alone does \emph{not} conform to $\proto{P2}$.
|
|
|
|
Unfortunately, this cannot be expressed with a convergent rewrite system over the existing alphabet. Recall the two other examples of this phenomenon shown in this chapter:
|
|
\begin{enumerate}
|
|
\item Introducing associated type symbols made recursive protocol conformance requirements convergent.
|
|
\item Introducing inherited associated type symbols made vertical composition of recursive protocol conformance requirements convergent.
|
|
\end{enumerate}
|
|
Similarly, the solution here is to introduce a new kind of symbol, the merged associated type symbol, which will make horizontal composition of recursive conformance requirements convergent.
|
|
|
|
\index{merged associated type symbol}
|
|
\begin{definition}[Merged associated type symbol]
|
|
If $P:=\{\proto{P}_1,\ldots,\proto{P}_n\}$ is an ordered list of protocols and $\namesym{A}$ is an associated type present in each $\proto{P}_i\in P$, the \emph{merged associated type symbol} $[\proto{P}_1\cap\ldots\cap\proto{P}_n\colon\namesym{A}]$ represents the horizontal composition of the associated type symbols $[\proto{P}_i\colon\namesym{A}]$ for all $\proto{P}_i\in P$.
|
|
|
|
Note that no protocol $\proto{P}_i$ in the list may inherit from any other protocol $\proto{P}_j$ in the list. If $\proto{P}_i$ inherits from $\proto{P}_j$, then $\proto{P}_j$ can be deleted from the list. Taken to the limit, this reduces to the inherited associated type case from the previous section.
|
|
\end{definition}
|
|
|
|
In the reduction order, merged associated type symbols must precede plain associated type symbols, as well as other merged associated type symbols with fewer protocols. That is, the following inequalities must hold:
|
|
\begin{align*}
|
|
[\proto{P1}\cap\proto{P2}\colon\namesym{A}]&<\assocsym{P1}{A}\\
|
|
[\proto{P1}\cap\proto{P2}\colon\namesym{A}]&<\assocsym{P2}{A}
|
|
\end{align*}
|
|
The full definition of the reduction order that you will see described in Definition~\ref{symbolorder} takes this into account.
|
|
|
|
Whereas plain associated type symbols and inherited associated type symbols come into existence when the initial set of rewrite rules are built for a protocol, merged associated type symbols are introduced by the completion procedure itself. This is done via a heuristic which attempts to detect when they would be beneficial.
|
|
|
|
\begin{definition}[Merged associated type candidate] If $\{\proto{P}_i\}_{1\leq i\leq m}$ and $\{\proto{Q}_i\}_{1\leq i\leq n}$ are two lists of protocols, $\namesym{A}$ is the name of an associated type, and $T$ is some term, then a rewrite rule of the form
|
|
\[T.[\proto{P}_1\cap\ldots\cap\proto{P}_m\colon\namesym{A}] \Rightarrow T.[\proto{Q}_1\cap\ldots\cap\proto{Q}_n\colon\namesym{A}]\]
|
|
is a \emph{merged associated type candidate}.
|
|
\end{definition}
|
|
|
|
Merged associated type candidates are identified when new rules are added by the completion procedure. Note that the candidate rule itself might already contain merged associated type symbols. The merging procedure can run multiple times, building up wider and wider merged associated type symbols. Initially, the first merge candidate necessarily only involves plain associated type symbols containing one protocol each.
|
|
|
|
In the running example of this section, Rule~12 described above is a merged associated type candidate:
|
|
\begin{align}
|
|
\assocsym{P3}{T}.\assocsym{P2}{A}\Rightarrow\assocsym{P3}{T}.\assocsym{P1}{A}\tag{12}
|
|
\end{align}
|
|
The requirement machine uses a modified form of the Knuth-Bendix completion procedure described in Algorithm~\ref{knuthbendix}. When a merged associated type candidate is detected, the completion procedure stops, and the candidate rule is processed via an algorithm, which possibly adds new rewrite rules.
|
|
|
|
\begin{algorithm}[Processing a merged associated type candidate]
|
|
This algorithm takes a rewrite system $S$, and a merged associated type candidate rule of the form
|
|
\[T.[\proto{P}_1\cap\ldots\cap\proto{P}_m\colon\namesym{A}] \Rightarrow T.[\proto{Q}_1\cap\ldots\cap\proto{Q}_n\colon\namesym{A}],\]
|
|
where $\{\proto{P}_i\}_{1\leq i\leq m}$ and $\{\proto{Q}_i\}_{1\leq i\leq n}$ are two lists of protocols, and $\namesym{A}$ is the name of an associated type.
|
|
|
|
The algorithm adds zero or more new rewrite rules to $S$ to resolve the merged associated type candidate, at which point the completion procedure can continue checking for overlaps and resolving critical pairs.
|
|
\begin{enumerate}
|
|
\item First, merge the two lists of protocols $\{\proto{P}_i\}_{1\leq i\leq m}$ and $\{\proto{Q}_i\}_{1\leq i\leq n}$, deleting any protocols that are inherited by other protocols in the list. Let $\{\proto{R}_i\}_{1\leq i\leq k}$ be the merged list of protocols.
|
|
|
|
If the list $\{\proto{R}_i\}_{1\leq i\leq k}$ is exactly equal to $\{\proto{Q}_i\}_{1\leq i\leq n}$, return immediately, since no new rules would be added to $S$ below. One example of this is a merge candidate $T.\assocsym{P}{A}\Rightarrow T.\assocsym{Q}{A}$ where $\proto{P}$ inherits from $\proto{Q}$.
|
|
\item Construct the merged associated type symbol $[\proto{R}_1\cap\ldots\cap\proto{R}_k\colon\namesym{A}]$ and add it to the rewrite system's alphabet.
|
|
|
|
\item Next, add a rewrite rule to $S$:
|
|
\begin{align*}
|
|
T.[\proto{Q}_1\cap\ldots\cap\proto{Q}_n\colon\namesym{A}] &\Rightarrow T.[\proto{R}_1\cap\ldots\cap\proto{R}_k\colon\namesym{A}]
|
|
\end{align*}
|
|
|
|
\item Finally, lift conformance requirements on $[\proto{P}_1\cap\ldots\cap\proto{P}_m\colon\namesym{A}]$ and $[\proto{Q}_1\cap\ldots\cap\proto{Q}_n\colon\namesym{A}]$. For each rewrite rule $R$ in $S$,
|
|
\begin{enumerate}
|
|
\item If $R$ is of the following form, where $\proto{P'}$ is some protocol,
|
|
\[[\proto{P}_1\cap\ldots\cap\proto{P}_m\colon\namesym{A}].\protosym{P'}\Rightarrow [\proto{P}_1\cap\ldots\cap\proto{P}_m\colon\namesym{A}],\]
|
|
then add a new rewrite rule to $S$:
|
|
\[[\proto{R}_1\cap\ldots\cap\proto{R}_k\colon\namesym{A}].\protosym{P'}\Rightarrow [\proto{R}_1\cap\ldots\cap\proto{R}_k\colon\namesym{A}]\]
|
|
\item If $R$ is of the following form, where $\proto{P'}$ is some protocol,
|
|
\[[\proto{Q}_1\cap\ldots\cap\proto{Q}_n\colon\namesym{A}].\protosym{P'}\Rightarrow [\proto{Q}_1\cap\ldots\cap\proto{Q}_n\colon\namesym{A}]\]
|
|
then add a new rewrite rule to $S$:
|
|
\[[\proto{R}_1\cap\ldots\cap\proto{R}_k\colon\namesym{A}].\protosym{P'}\Rightarrow [\proto{R}_1\cap\ldots\cap\proto{R}_k\colon\namesym{A}]\]
|
|
\end{enumerate}
|
|
|
|
\end{enumerate}
|
|
\end{algorithm}
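To make Step~1 concrete, here is a small Swift sketch of the protocol list merge, using strings for protocol names; the \texttt{inherits} predicate, answering whether one protocol transitively inherits from another, is a hypothetical stand-in for the compiler's real query:

\begin{Verbatim}
// Merge two protocol lists, then delete any protocol that is
// inherited by another protocol in the merged list.
func mergeProtocols(_ lhs: [String], _ rhs: [String],
                    inherits: (String, String) -> Bool) -> [String] {
  var merged = lhs
  for p in rhs where !merged.contains(p) { merged.append(p) }
  return merged.filter { p in
    !merged.contains { q in q != p && inherits(q, p) }
  }
}
\end{Verbatim}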
|
|
|
|
Returning to the example, consider how the completion procedure proceeds once equipped with the above algorithm. Let's rewind back to Rule~12 and discard Rules~13--16, since they were added after the train went off the rails.
|
|
|
|
Indeed, once Rule~12 is introduced, it can be processed by the merged associated type candidate algorithm, which adds a new symbol and three new rewrite rules:
|
|
\begin{enumerate}
|
|
\item Step~2 adds the merged associated type symbol $[\proto{P1}\cap\proto{P2}\colon\namesym{A}]$ to the rewrite system's alphabet.
|
|
|
|
\item Next, Step~3 adds the rule:
|
|
\begin{align}
|
|
\assocsym{P3}{T}.\assocsym{P1}{A}&\Rightarrow\assocsym{P3}{T}.[\proto{P1}\cap\proto{P2}\colon\namesym{A}]\tag{13}
|
|
\end{align}
|
|
|
|
\item Finally, Step~4 lifts the conformance requirements on $\assocsym{P1}{A}$ and $\assocsym{P2}{A}$ to the merged associated type symbol $[\proto{P1}\cap\proto{P2}\colon\namesym{A}]$:
|
|
\begin{align*}
|
|
[\proto{P1}\cap\proto{P2}\colon\namesym{A}].\protosym{P1}&\Rightarrow [\proto{P1}\cap\proto{P2}\colon\namesym{A}]\tag{14}\\
|
|
[\proto{P1}\cap\proto{P2}\colon\namesym{A}].\protosym{P2}&\Rightarrow [\proto{P1}\cap\proto{P2}\colon\namesym{A}]\tag{15}
|
|
\end{align*}
|
|
|
|
\end{enumerate}
|
|
The new rules added by the merged associated type algorithm have some overlaps with a couple of existing rules. In particular, Rule~14 and Rule~15 overlap with Rule~1 and Rule~3 respectively. Resolving these critical pairs introduces two new rewrite rules:
|
|
\begin{align}
|
|
[\proto{P1}\cap\proto{P2}\colon\namesym{A}].\namesym{A}&\Rightarrow[\proto{P1}\cap\proto{P2}\colon\namesym{A}].\assocsym{P1}{A}\tag{16}\\
|
|
[\proto{P1}\cap\proto{P2}\colon\namesym{A}].\namesym{A}&\Rightarrow[\proto{P1}\cap\proto{P2}\colon\namesym{A}].\assocsym{P2}{A}\tag{17}
|
|
\end{align}
|
|
Now, the newly-added Rule~16 and Rule~17 overlap with each other, since they have equal left hand sides. Resolving this critical pair introduces a new rewrite rule:
|
|
\begin{align}
|
|
[\proto{P1}\cap\proto{P2}\colon\namesym{A}].\assocsym{P2}{A}\Rightarrow [\proto{P1}\cap\proto{P2}\colon\namesym{A}].\assocsym{P1}{A}\tag{18}
|
|
\end{align}
|
|
The above rule is actually another merged associated type candidate, so the merged associated type candidate algorithm will run again.
|
|
\begin{enumerate}
|
|
\item Step~2 constructs the merged associated type symbol $[\proto{P1}\cap\proto{P2}\colon\namesym{A}]$, which was already added to the rewrite system when Rule~12 was processed by this algorithm.
|
|
\item Step~3 adds the rule:
|
|
\begin{align}
|
|
[\proto{P1}\cap\proto{P2}\colon\namesym{A}].\assocsym{P1}{A}&\Rightarrow [\proto{P1}\cap\proto{P2}\colon\namesym{A}].[\proto{P1}\cap\proto{P2}\colon\namesym{A}]\tag{19}
|
|
\end{align}
|
|
\item Step~4 looks for conformance requirements on $\assocsym{P1}{A}$ and $\assocsym{P2}{A}$, which were already lifted to $[\proto{P1}\cap\proto{P2}\colon\namesym{A}]$ when Rule~12 was processed by this algorithm.
|
|
\end{enumerate}
|
|
|
|
The merged associated type candidate algorithm has now run twice, introducing some new rewrite rules involving the merged associated type symbol $[\proto{P1}\cap\proto{P2}\colon\namesym{A}]$. It is now time to revisit some of the critical pairs which caused trouble before. Recall that Rule~12 overlapped with Rule~4 on the term $\assocsym{P3}{T}.\assocsym{P2}{A}.\protosym{P2}$. Reducing this term with both rules yields a pair of terms as before:
|
|
\begin{align*}
|
|
t_0&=\assocsym{P3}{T}.\assocsym{P1}{A}.\protosym{P2}\\
|
|
t_1&=\assocsym{P3}{T}.\assocsym{P2}{A}
|
|
\end{align*}
|
|
With the newly-added rewrite rules, the first term $t_0$ now reduces as follows:
|
|
\begin{align}
|
|
t_0=&\assocsym{P3}{T}.\assocsym{P1}{A}.\protosym{P2}\nonumber\\
|
|
\rightarrow&\assocsym{P3}{T}.[\proto{P1}\cap\proto{P2}\colon\namesym{A}].\protosym{P2}\tag{Rule 13}\\
|
|
\rightarrow&\assocsym{P3}{T}.[\proto{P1}\cap\proto{P2}\colon\namesym{A}]\tag{Rule 15}
|
|
\end{align}
|
|
The second term $t_1$ serendipitously reduces to the same term:
|
|
\begin{align}
|
|
t_1=&\assocsym{P3}{T}.\assocsym{P2}{A}\nonumber\\
|
|
\rightarrow&\assocsym{P3}{T}.\assocsym{P1}{A}\tag{Rule 12}\\
|
|
\rightarrow&\assocsym{P3}{T}.[\proto{P1}\cap\proto{P2}\colon\namesym{A}]\tag{Rule 13}
|
|
\end{align}
|
|
This critical pair is now trivial, and can be discarded.
|
|
|
|
Rule~12 also overlapped with Rule~9 on the term $\assocsym{P3}{T}.\assocsym{P2}{A}.\namesym{A}$. Again, reducing this term with both rules yields a pair of terms as before:
|
|
\begin{align*}
|
|
t_0&=\assocsym{P3}{T}.\assocsym{P1}{A}.\namesym{A}\\
|
|
t_1&=\assocsym{P3}{T}.\assocsym{P2}{A}.\assocsym{P2}{A}
|
|
\end{align*}
|
|
This time though, the first term $t_0$ reduces as follows:
|
|
\begin{align}
|
|
t_0=&\assocsym{P3}{T}.\assocsym{P1}{A}.\namesym{A}\nonumber\\
|
|
\rightarrow&\assocsym{P3}{T}.[\proto{P1}\cap\proto{P2}\colon\namesym{A}].\namesym{A}\tag{Rule 13}\\
|
|
\rightarrow&\assocsym{P3}{T}.[\proto{P1}\cap\proto{P2}\colon\namesym{A}].\assocsym{P1}{A}\tag{Rule 16}\\
|
|
\rightarrow&\assocsym{P3}{T}.[\proto{P1}\cap\proto{P2}\colon\namesym{A}].[\proto{P1}\cap\proto{P2}\colon\namesym{A}]\tag{Rule 19}
|
|
\end{align}
|
|
The second term $t_1$ miraculously reduces to the same term:
|
|
\begin{align}
|
|
t_1=&\assocsym{P3}{T}.\assocsym{P2}{A}.\assocsym{P2}{A}\nonumber\\
|
|
\rightarrow&\assocsym{P3}{T}.[\proto{P1}\cap\proto{P2}\colon\namesym{A}].\assocsym{P2}{A}\tag{Rule 13}\\
|
|
\rightarrow&\assocsym{P3}{T}.[\proto{P1}\cap\proto{P2}\colon\namesym{A}].\assocsym{P1}{A}\tag{Rule 18}\\
|
|
\rightarrow&\assocsym{P3}{T}.[\proto{P1}\cap\proto{P2}\colon\namesym{A}].[\proto{P1}\cap\proto{P2}\colon\namesym{A}]\tag{Rule 19}
|
|
\end{align}
|
|
Yet again, the critical pair is trivial and can be discarded.
|
|
|
|
In fact, all remaining overlaps have been resolved, and we have a confluent rewrite system. Listing~\ref{mergedlisting} shows the full list of rewrite rules. Note that the final post-processing cleanup pass deletes Rule~10 and Rule~16, which become duplicates of Rule~11 and Rule~17 once right hand sides are simplified. Furthermore, the right hand sides of Rule~11, Rule~12, Rule~17 and Rule~18 are simplified by applying other rules.
|
|
\begin{listing}\caption{Confluent rewrite system for merged associated type example}\label{mergedlisting}
|
|
\begin{align}
|
|
\protosym{P1}.\namesym{A}&\Rightarrow\assocsym{P1}{A}\tag{1}\\
|
|
\assocsym{P1}{A}.\protosym{P1}&\Rightarrow\assocsym{P1}{A}\tag{2}\\
|
|
\protosym{P2}.\namesym{A}&\Rightarrow\assocsym{P2}{A}\tag{3}\\
|
|
\assocsym{P2}{A}.\protosym{P2}&\Rightarrow\assocsym{P2}{A}\tag{4}\\
|
|
\protosym{P3}.\namesym{T}&\Rightarrow\assocsym{P3}{T}\tag{5}\\
|
|
\assocsym{P3}{T}.\protosym{P1}&\Rightarrow\assocsym{P3}{T}\tag{6}\\
|
|
\assocsym{P3}{T}.\protosym{P2}&\Rightarrow\assocsym{P3}{T}\tag{7}\\
|
|
\assocsym{P1}{A}.\namesym{A}&\Rightarrow\assocsym{P1}{A}.\assocsym{P1}{A}\tag{8}\\
|
|
\assocsym{P2}{A}.\namesym{A}&\Rightarrow\assocsym{P2}{A}.\assocsym{P2}{A}\tag{9}\\
|
|
\assocsym{P3}{T}.\namesym{A}&\Rightarrow\assocsym{P3}{T}.\assocsym{P1}{A}\tag{\textbf{Deleted}}\\
|
|
\assocsym{P3}{T}.\namesym{A}&\Rightarrow\assocsym{P3}{T}.[\proto{P1}\cap\proto{P2}\colon\namesym{A}]\tag{11}\\
|
|
\assocsym{P3}{T}.\assocsym{P2}{A}&\Rightarrow\assocsym{P3}{T}.[\proto{P1}\cap\proto{P2}\colon\namesym{A}]\tag{12}\\
|
|
\assocsym{P3}{T}.\assocsym{P1}{A}&\Rightarrow\assocsym{P3}{T}.[\proto{P1}\cap\proto{P2}\colon\namesym{A}]\tag{13}\\
|
|
[\proto{P1}\cap\proto{P2}\colon\namesym{A}].\protosym{P1}&\Rightarrow [\proto{P1}\cap\proto{P2}\colon\namesym{A}]\tag{14}\\
|
|
[\proto{P1}\cap\proto{P2}\colon\namesym{A}].\protosym{P2}&\Rightarrow [\proto{P1}\cap\proto{P2}\colon\namesym{A}]\tag{15}\\
|
|
[\proto{P1}\cap\proto{P2}\colon\namesym{A}].\namesym{A}&\Rightarrow[\proto{P1}\cap\proto{P2}\colon\namesym{A}].\assocsym{P1}{A}\tag{\textbf{Deleted}}\\
|
|
[\proto{P1}\cap\proto{P2}\colon\namesym{A}].\namesym{A}&\Rightarrow[\proto{P1}\cap\proto{P2}\colon\namesym{A}].[\proto{P1}\cap\proto{P2}\colon\namesym{A}]\tag{17}\\
|
|
[\proto{P1}\cap\proto{P2}\colon\namesym{A}].\assocsym{P2}{A}&\Rightarrow [\proto{P1}\cap\proto{P2}\colon\namesym{A}].[\proto{P1}\cap\proto{P2}\colon\namesym{A}]\tag{18}\\
|
|
[\proto{P1}\cap\proto{P2}\colon\namesym{A}].\assocsym{P1}{A}&\Rightarrow [\proto{P1}\cap\proto{P2}\colon\namesym{A}].[\proto{P1}\cap\proto{P2}\colon\namesym{A}]\tag{19}
|
|
\end{align}
|
|
\end{listing}
|
|
|
|
\iffalse
|
|
For example, $\proto{MutableCollection}$ is another protocol inheriting from $\proto{Collection}$ which imposes additional requirements on $\proto{Collection}$'s associated types. Listing~\ref{bidirectionalandmutable} shows a protocol with an associated type $\namesym{A}$ that conforms to \emph{both} $\proto{BidirectionalCollection}$ and $\proto{MutableCollection}$. The type parameter $\genericparam{Self}.\namesym{A}.\namesym{SubSequence}$ (and $\genericparam{Self}.\namesym{A}.\namesym{SubSequence}.\namesym{SubSequence}$, and so on) will conform to both $\proto{BidirectionalCollection}$ and $\proto{MutableCollection}$.
|
|
\begin{listing}\caption{Example of horizontal composition}\label{bidirectionalandmutable}
|
|
\begin{Verbatim}
|
|
protocol BidirectionalCollection : Collection
|
|
where SubSequence : BidirectionalCollection,
|
|
Indices : BidirectionalCollection {
|
|
...
|
|
}
|
|
|
|
protocol MutableCollection : Collection
|
|
where SubSequence : MutableCollection {
|
|
...
|
|
}
|
|
|
|
protocol P {
|
|
// A.SubSequence is both a BidirectionalCollection, and a
|
|
// MutableCollection.
|
|
associatedtype A : BidirectionalCollection, MutableCollection
|
|
}
|
|
\end{Verbatim}
|
|
\end{listing}
|
|
\fi
|
|
|
|
\chapter{The Requirement Machine}\label{requirementmachine}
|
|
Chapter \ref{rewritesystemintro} introduced rewrite systems, and Chapter~\ref{protocolsasmonoids} and Chapter~\ref{associatedtypes} worked through a series of examples to show how a rewrite system can be constructed to answer a couple of simple generic signature queries. In this chapter, I will define the Swift generics rewrite system in its entirety.
|
|
|
|
First, let's define the alphabet of symbols used by the requirement machine. The symbols are categorized into seven kinds. You've already seen name symbols, protocol symbols and associated type symbols. The last four symbol kinds are new.
|
|
|
|
\index{symbol}
|
|
\index{term}
|
|
\begin{definition}[Symbols]\label{symboldef}
|
|
The requirement machine operates on the below alphabet:
|
|
\begin{itemize}
|
|
\index{protocol symbol}
|
|
\item \textbf{Protocol symbols}: $\protosym{P}$ where $\proto{P}$ is a Swift protocol.
|
|
\index{associated type symbol}
|
|
\index{merged associated type symbol}
|
|
\item \textbf{Associated type symbols}: $[\proto{P}_1\cap\ldots\cap\proto{P}_n\colon\namesym{T}]$ where the $\proto{P}_i$ are protocols and $\namesym{T}$ is an associated type name. Each protocol must define or inherit an associated type named $\namesym{T}$. The list of protocols always contains at least one protocol, so $n\geq 1$. If $n > 1$, the associated type symbol is called a merged associated type symbol. These were discussed in detail in Section~\ref{mergedassoctypes}.
|
|
\index{name symbol}
|
|
\item \textbf{Name symbols}: $\namesym{T}$ for any valid Swift identifier.
|
|
\index{generic parameter symbol}
|
|
\item \textbf{Generic parameter symbols}: $\genericsym{d}{i}$ where $d$, $i\geq 0$ are the depth and index of the generic parameter, respectively.
|
|
\index{layout symbol}
|
|
\item \textbf{Layout symbols}: $\layoutsym{L}$ where $\namesym{L}$ is a Swift layout constraint.
|
|
\index{substitution}
|
|
\index{superclass symbol}
|
|
\item \textbf{Superclass symbols}: $\supersym{\namesym{T};\;\sigma_0,\ldots,\sigma_n}$ where $\namesym{T}$ is a Swift type, and the $\{\sigma_i\}_{0\le i \le n}$ are a (possibly empty) ordered list of terms, referred to as \emph{substitutions}.
|
|
\item \index{concrete type symbol} \textbf{Concrete type symbols}: $\concretesym{\namesym{T};\;\sigma_0,\ldots,\sigma_n}$ where $T$ and $\sigma_i$ are as above.
|
|
\end{itemize}
|
|
\end{definition}
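As a mental model, the alphabet can be pictured as a small algebraic data type. The following Swift sketch is purely illustrative, with strings standing in for Swift types, layout constraints and protocol declarations; it is not the compiler's actual representation:

\begin{Verbatim}
enum Symbol {
  case protocolSymbol(String)                     // [P]
  case associatedType([String], String)           // [P1&...&Pn:T]
  case name(String)                               // T
  case genericParameter(depth: Int, index: Int)   // tau_d_i
  case layout(String)                             // [layout L]
  case superclass(String, substitutions: [[Symbol]])
  case concreteType(String, substitutions: [[Symbol]])
}

typealias Term = [Symbol]
\end{Verbatim}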
|
|
|
|
Generic parameter symbols are the subject of Section \ref{genericparamsym}. Layout, superclass and concrete type symbols are described in excruciating detail in Section~\ref{concretetypes}.
|
|
|
|
\index{reduction order}
|
|
First though, I will define the reduction order on symbols.
|
|
|
|
\index{linear order}
|
|
\index{reduction protocol order}
|
|
\begin{definition}[Reduction protocol order]\label{protocolorder} First, for each protocol $\proto{P}$, define the \emph{depth} of $\proto{P}$ as one greater than the maximum depth of each protocol $\proto{Q}_i$ inherited by $\proto{P}$:
|
|
\[\gpdepth(\proto{P}) = 1 + \max(\gpdepth(\proto{Q}_i))\quad\hbox{where}\quad \bm{\mathsf{Q}}_i \in \hbox{protocols inherited by $\proto{P}$}\]
|
|
If $\proto{P}$ does not inherit from any protocols, then $\max(\varnothing)=0$, and $\gpdepth(\proto{P})=1$. Now, given two protocols $\proto{P}$ and $\proto{Q}$, $\proto{P}<\proto{Q}$ in the reduction protocol order if:
|
|
|
|
\begin{itemize}
|
|
\item $\gpdepth(\proto{P}) > \gpdepth(\proto{Q})$, or
|
|
\item $\gpdepth(\proto{P}) = \gpdepth(\proto{Q})$, and $\proto{P}$ precedes $\proto{Q}$ with the canonical protocol order from Definition~\ref{canonicalprotocol}.
|
|
\end{itemize}
|
|
\end{definition}
|
|
\begin{listing}\caption{The standard library's Collection protocol tower}\label{collectiontower}
|
|
\begin{Verbatim}
|
|
protocol Sequence {}
|
|
protocol Collection : Sequence {}
|
|
protocol BidirectionalCollection : Collection {}
|
|
protocol MutableCollection : Collection {}
|
|
protocol RangeReplaceableCollection : Collection {}
|
|
protocol RandomAccessCollection : BidirectionalCollection {}
|
|
\end{Verbatim}
|
|
\end{listing}
|
|
\begin{example}
|
|
Consider the collection protocol tower from the standard library, shown in Listing \ref{collectiontower}. The depth of each protocol is as follows:
|
|
\begin{itemize}
|
|
\item $\proto{Sequence}$ has depth 1.
|
|
\item $\proto{Collection}$ has depth 2.
|
|
\item $\proto{BidirectionalCollection}$, $\proto{MutableCollection}$, and $\proto{RangeReplaceableCollection}$ all have depth 3.
|
|
\item $\proto{RandomAccessCollection}$ has depth 4.
|
|
\end{itemize}
|
|
Here is the linear order among these protocols:
|
|
\begin{align*}\proto{RandomAccessCollection}<\proto{BidirectionalCollection}&<\proto{MutableCollection}\\
|
|
<\proto{RangeReplaceableCollection}&<\proto{Collection}<\proto{Sequence}
|
|
\end{align*}
|
|
You can see that protocols deeper in the inheritance graph precede other protocols. If you recall, associated type inheritance relies on this in Section \ref{inheritedassoctypes}.
|
|
\end{example}
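The depth computation lends itself to a direct recursive implementation. Here is a minimal Swift sketch, assuming a hypothetical \texttt{inherited} map from each protocol name to the protocols it directly inherits:

\begin{Verbatim}
func depth(of proto: String,
           inherited: [String: [String]]) -> Int {
  // max over an empty list is nil, so a root protocol has depth 1.
  1 + (inherited[proto, default: []]
         .map { depth(of: $0, inherited: inherited) }
         .max() ?? 0)
}

// With `tower` built from the Collection tower above:
// depth(of: "Sequence", inherited: tower)               == 1
// depth(of: "RandomAccessCollection", inherited: tower) == 4
\end{Verbatim}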
|
|
|
|
\index{partial order}
|
|
\begin{definition}[Reduction order on symbols]\label{symbolorder}
|
|
Let $\alpha$ and $\beta$ be two symbols.
|
|
\begin{figure}\caption{symbol kind order}\label{kindorder}
|
|
\[
|
|
\begin{array}{c}
|
|
\text{Protocol symbol}\\
|
|
<\\
|
|
\text{Associated type symbol}\\
|
|
<\\
|
|
\text{Name symbol}\\
|
|
<\\
|
|
\text{Generic parameter symbol}\\
|
|
<\\
|
|
\text{Layout symbol}\\
|
|
<\\
|
|
\text{Superclass symbol}\\
|
|
<\\
|
|
\text{Concrete type symbol}
|
|
\end{array}
|
|
\]
|
|
\end{figure}
|
|
|
|
If $\alpha$ and $\beta$ have different kinds, then $\alpha<\beta$ if the kind of $\alpha$ precedes the kind of $\beta$ in Figure~\ref{kindorder}. Note that this is the same kind order as in Definition \ref{symboldef}.
|
|
If $\alpha$ and $\beta$ have the same kind, then they are compared as follows:
|
|
|
|
\begin{itemize}
|
|
\item \textbf{Protocol symbols:} Let $\alpha=\protosym{P}$ and $\beta=\protosym{Q}$. Then $\alpha<\beta$ if $\proto{P}<\proto{Q}$ in the reduction protocol order from Definition \ref{protocolorder}.
|
|
\item \textbf{Associated type symbols:} Let $\alpha=[\proto{P}_1\cap\ldots\cap\proto{P}_m:\namesym{T}]$ and $\beta=[\proto{Q}_1\cap\ldots\cap\proto{Q}_n:\namesym{U}]$.
|
|
\begin{itemize}
|
|
\item If the name $\namesym{T}$ precedes $\namesym{U}$ in lexicographic order, then $\alpha < \beta$.

\item If $\namesym{T}=\namesym{U}$ and $m>n$, then $\alpha < \beta$.

\item If $\namesym{T}=\namesym{U}$ and $m=n$, and there is some $j$ such that $\proto{P}_i=\proto{Q}_i$ for all $i<j$ and $\proto{P}_j<\proto{Q}_j$, then $\alpha<\beta$.
|
|
\end{itemize}
|
|
\item \textbf{Name symbols:} $\alpha<\beta$ if the identifier of $\alpha$ precedes the identifier of $\beta$ lexicographically.

\item \textbf{Generic parameter symbols:} Let $\alpha=\genericsym{d}{i}$ and $\beta=\genericsym{d'}{i'}$. Then $\alpha < \beta$ if either $d<d'$, or if $d=d'$ and $i<i'$.
|
|
\end{itemize}
|
|
\end{definition}
|
|
The ordering of merged associated type symbols is subtle. A symbol with more protocols precedes one with fewer, so for example $[\proto{P}\cap\proto{Q}:\namesym{A}]<\assocsym{P}{A}$. The order remains well-founded because there are only finitely many protocol symbols, which imposes an upper bound on the size of a merged associated type symbol.
|
|
|
|
\index{shortlex order}
|
|
\begin{definition}[Reduction order on terms]
|
|
The reduction order on terms is the shortlex order obtained from the reduction order on symbols. The shortlex order is defined in Definition~\ref{shortlex}.
|
|
\end{definition}
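In code, the shortlex comparison is short. Here is a minimal sketch, generic over any symbol type that already carries a linear order:

\begin{Verbatim}
func shortlexLess<S: Comparable>(_ a: [S], _ b: [S]) -> Bool {
  // Shorter terms always precede longer terms.
  if a.count != b.count { return a.count < b.count }
  // Equal-length terms are compared symbol by symbol.
  for (x, y) in zip(a, b) where x != y { return x < y }
  return false  // the terms are equal
}
\end{Verbatim}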
|
|
|
|
\index{type parameter}
|
|
\index{requirement signature}
|
|
Next, we define a way to build terms from type parameters written in a protocol requirement signature. This supersedes Definition~\ref{typelowering1}. The new definition builds terms with associated type symbols, instead of a protocol symbol followed by zero or more name symbols. The two possible definitions turn out to be functionally equivalent, because the latter kind of term reduces to the former by repeated application of the associated type introduction rules $\protosym{P}.\namesym{A}\Rightarrow\assocsym{P}{A}$. The definition in Algorithm~\ref{lowertypeinproto} simply produces a term that is a few steps closer to canonical form, saving the overhead of applying some rewrite rules.
|
|
|
|
\begin{algorithm}[Type parameter lowering for protocol requirement signatures]\label{lowertypeinproto}
|
|
\index{requirement signature}
|
|
Each protocol $\proto{P}$ defines a lowering map $\Lambda_{\proto{P}}\colon\namesym{Type}\rightarrow\namesym{Term}$. Let $X$ be a type parameter in the requirement signature of $\proto{P}$:
|
|
\[X:=\genericparam{Self}.X_1.X_2\ldots X_n\]
|
|
This algorithm constructs a new term $Y:=\Lambda_{\proto{P}}(X)$ from the type parameter $X$.
|
|
\begin{itemize}
|
|
\item
|
|
\index{protocol Self type}
|
|
\index{protocol symbol}
|
|
If $|X|=1$, $X$ is just the protocol $\genericparam{Self}$ type. Set $Y:=\protosym{P}$ and return.
|
|
\item
|
|
\index{associated type symbol}
|
|
Otherwise, initialize $Y$ to an empty term, and build up $Y$ by adding associated type symbols corresponding to each $X_i$ in order.
|
|
\item
|
|
The first associated type $X_1$ is handled differently from the rest. It must be defined either in $\proto{P}$ itself, or in some protocol $\proto{Q}$ that $\proto{P}$ inherits from; either way, add the symbol $[\proto{P}\colon\namesym{X}_1]$ to $Y$.
|
|
\item
|
|
Each remaining associated type of $X$ is handled as follows. Let $X_i$ be the name of the $i$th associated type of $X$, defined in some protocol $\proto{P}_i$. Add the symbol $[\proto{P}_i\colon\namesym{X}_i]$ to $Y$.
|
|
\end{itemize}
|
|
\end{algorithm}
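To make the algorithm concrete, here is a toy string-based sketch; the bracketed strings play the role of protocol and associated type symbols, and \texttt{definingProtocol} is a hypothetical stand-in for the compiler's name lookup:

\begin{Verbatim}
func lowerTypeParameter(inProtocol p: String, path: [String],
                        definingProtocol: (String) -> String)
    -> [String] {
  // |X| = 1: the bare protocol Self type lowers to [P].
  guard let first = path.first else { return ["[\(p)]"] }
  // The first associated type is always rooted at P itself.
  var term = ["[\(p):\(first)]"]
  // Remaining associated types use their defining protocol.
  for name in path.dropFirst() {
    term.append("[\(definingProtocol(name)):\(name)]")
  }
  return term
}
\end{Verbatim}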
|
|
The special handling of the first associated type ensures that the first symbol of $\Lambda_{\proto{P}}(X)$ is either the protocol symbol $\protosym{P}$, or some associated type symbol $\assocsym{P}{A}$. This is important when the requirement signature of a protocol constrains an inherited associated type.
|
|
\begin{example}
|
|
Recall the $\proto{BidirectionalCollection}$ protocol from Section \ref{inheritedassoctypes}. The requirement signature defines the requirement
|
|
\[\genericparam{Self}.\namesym{SubSequence}\colon\proto{BidirectionalCollection}.\]
|
|
The requirement is stated on the inherited $\namesym{SubSequence}$ associated type from $\proto{Collection}$; however, the above algorithm ensures that this type lowers to
|
|
\[\assocsym{BidirectionalCollection}{SubSequence},\]
|
|
and not
|
|
\[\assocsym{Collection}{SubSequence}.\]
|
|
The latter would certainly be wrong; the requirement needs to apply to $\namesym{SubSequence}$ of $\proto{BidirectionalCollection}$ only.
|
|
\end{example}
|
|
|
|
Finally, I will generalize the requirement lowering map from Definition~\ref{reqlowering1} to handle layout, superclass and concrete type requirements.
|
|
|
|
\index{generic requirement}
|
|
\index{requirement signature}
|
|
\begin{algorithm}[Protocol requirement lowering]\label{lowerreqinproto}
|
|
The map $\Lambda_{\proto{P}}\colon\namesym{Requirement}\rightarrow\namesym{Rule}^+$ takes a generic requirement and produces one or more rewrite rules (that's what the ``$+$'' in $\namesym{Rule}^+$ means). In practice, the only kind of requirement that produces more than one rule is a superclass requirement, which introduces an additional implied $\namesym{AnyObject}$ layout requirement. The requirement lowering map builds on top of the type lowering map $\Lambda_{\proto{P}}\colon \namesym{Type} \rightarrow \namesym {Term}$.
|
|
|
|
Each kind of generic requirement is handled as follows.
|
|
\begin{itemize}
|
|
\index{conformance requirement}
|
|
\item \textbf{Protocol conformance requirements} $\namesym{T}\colon\proto{P}$ lower to a rule eliminating a protocol symbol from the end of the type term for $\namesym{T}$:
|
|
\[\Lambda_{\proto{P}}(\namesym{T}).\protosym{P} \Rightarrow \Lambda_{\proto{P}}(\namesym{T})\]
|
|
\index{layout requirement}
|
|
\item \textbf{Layout requirements} $\namesym{T}\colon\namesym{L}$ lower to a rule eliminating a layout symbol from the end of the type term for $\namesym{T}$:
|
|
\[\Lambda_{\proto{P}}(\namesym{T}).\layoutsym{L} \Rightarrow \Lambda_{\proto{P}}(\namesym{T})\]
|
|
\index{superclass requirement}
|
|
\item \textbf{Superclass requirements} $\namesym{T}\colon\namesym{C}$ lower to \emph{two} rules, the first one eliminating a superclass symbol from the end of the type term for $\namesym{T}$:
|
|
\[\Lambda_{\proto{P}}(\namesym{T}).\supersym{\namesym{C}} \Rightarrow \Lambda_{\proto{P}}(\namesym{T}),\]
|
|
and the second one eliminating an $\namesym{AnyObject}$ layout symbol:
|
|
\[\Lambda_{\proto{P}}(\namesym{T}).\layoutsym{\namesym{AnyObject}} \Rightarrow \Lambda_{\proto{P}}(\namesym{T}).\]
|
|
\index{concrete type requirement}
|
|
\item \textbf{Concrete type requirements} $\namesym{T}==\namesym{C}$ (where the right-hand side is a concrete type) lower to a rule eliminating a concrete type symbol from the end of the type term for $\namesym{T}$:
|
|
\[\Lambda_{\proto{P}}(\namesym{T}).\concretesym{\namesym{C}}\Rightarrow\Lambda_{\proto{P}}(\namesym{T})\]
|
|
\index{same-type requirement}
|
|
\index{shortlex order}
|
|
\index{reduction order}
|
|
\item \textbf{Same-type requirements} $\namesym{T}==\namesym{U}$ (where the right-hand side is a type parameter) lower to an equivalence of terms. I'm going to assume that $\Lambda_{\proto{P}}(\namesym{T}) > \Lambda_{\proto{P}}(\namesym{U})$ in the shortlex order on terms (if not, flip the terms around):
|
|
\[\Lambda_{\proto{P}}(\namesym{T}) \Rightarrow \Lambda_{\proto{P}}(\namesym{U})\]
|
|
\end{itemize}
|
|
\end{algorithm}
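The requirement lowering map can be sketched over the same string-based terms as before. Note how the superclass case produces two rules, and how the same-type case orients the rule with a shortlex comparison:

\begin{Verbatim}
typealias Term = [String]
typealias Rule = (lhs: Term, rhs: Term)

enum Requirement {
  case conformance(Term, proto: String)
  case layout(Term, constraint: String)
  case superclass(Term, symbol: String)
  case concreteType(Term, symbol: String)
  case sameType(Term, Term)
}

// Shortlex on string symbols: shorter first, then lexicographic.
func shortlexLess(_ a: Term, _ b: Term) -> Bool {
  a.count != b.count ? a.count < b.count
                     : a.lexicographicallyPrecedes(b)
}

func lower(_ r: Requirement) -> [Rule] {
  switch r {
  case .conformance(let t, let p):
    return [(t + ["[\(p)]"], t)]
  case .layout(let t, let l):
    return [(t + ["[layout \(l)]"], t)]
  case .superclass(let t, let c):
    // A superclass bound also implies an AnyObject layout rule.
    return [(t + ["[superclass \(c)]"], t),
            (t + ["[layout AnyObject]"], t)]
  case .concreteType(let t, let c):
    return [(t + ["[concrete \(c)]"], t)]
  case .sameType(let t, let u):
    // The larger term rewrites to the smaller one.
    return [shortlexLess(t, u) ? (u, t) : (t, u)]
  }
}
\end{Verbatim}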
|
|
|
|
\section{Generic Parameters}\label{genericparamsym}
|
|
\index{generic parameter symbol}
|
|
So far, I've only shown you how to build rewrite rules from requirements in the requirement signature of some protocol $\proto{P}$. When lowering a type parameter, the protocol $\genericparam{Self}$ type lowers to the protocol symbol $\protosym{P}$. Once such a rewrite system is built, queries can be performed against the protocol generic signature $\gensig{\genericparam{Self}}{\genericparam{Self}\colon\proto{P}}$. When lowering type parameters and requirements in an arbitrary generic signature, generic parameter types instead become generic parameter symbols.
|
|
|
|
Generic parameter symbols should only ever appear as the initial symbol in a term. In the abstract, the rewrite system would have no trouble with terms where generic parameter symbols appear elsewhere, but such terms don't make sense semantically, since they do not correspond to valid Swift type parameter types.
|
|
|
|
The lowering of type parameters in a generic signature is similar to Algorithm \ref{lowertypeinproto}. The first associated type no longer plays a special role, since the term is always ``rooted'' at a generic parameter symbol.
|
|
|
|
\index{type parameter}
|
|
\index{generic requirement}
|
|
\index{generic signature}
|
|
\begin{algorithm}[Type parameter lowering for generic signatures]\label{lowertypeinsig}
|
|
The lowering map $\Lambda\colon\namesym{Type}\rightarrow\namesym{Term}$ takes a type parameter $X$ as input:
|
|
\[X:=\genericsym{d}{i}.X_1.X_2\ldots X_n\]
|
|
This algorithm constructs a new term $Y:=\Lambda(X)$ from the type parameter $X$ as follows:
|
|
\begin{itemize}
|
|
\item
|
|
The first element of $X$ is a generic parameter type at depth $d$ and index $i$, so set $Y_0:=\genericsym{d}{i}$.
|
|
\item
|
|
All subsequent elements are associated types. If the $i$th element is an associated type $\namesym{A}_i$ defined in a protocol $\proto{P}_i$, then set $Y_i:=[\proto{P}_i\colon\namesym{A}_i]$.
|
|
\end{itemize}
|
|
\end{algorithm}
|
|
|
|
\begin{algorithm}[Generic requirement lowering]
|
|
The generic signature requirement lowering map $\Lambda\colon \namesym{Requirement}\rightarrow\namesym{Rule}^+$ is virtually identical to the protocol requirement lowering map in Algorithm \ref{lowerreqinproto}. The only difference is that types should be lowered to terms via $\Lambda\colon\namesym{Type}\rightarrow\namesym{Term}$ defined above, in place of $\Lambda_{\proto{P}}\colon\namesym{Type}\rightarrow\namesym{Term}$ from Algorithm~\ref{lowertypeinproto}.
|
|
\end{algorithm}
|
|
|
|
\begin{example}
|
|
Consider the following generic signature:
|
|
\[\gensig{\genericsym{0}{0},\genericsym{0}{1}}{\genericsym{0}{0}\colon\proto{Collection},\;\genericsym{0}{1}\colon\proto{Collection},\;\genericsym{0}{0}.\namesym{Element}==\genericsym{0}{1}.\namesym{Element}}\]
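For illustration, this is the generic signature of a declaration along the lines of the following (hypothetical) function:

\begin{Verbatim}
func haveSameElements<C1: Collection, C2: Collection>(
    _ lhs: C1, _ rhs: C2) -> Bool
    where C1.Element == C2.Element {
  ...
}
\end{Verbatim}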
|
|
The signature's requirements lower to the following rewrite rules:
|
|
\begin{align}
|
|
\genericsym{0}{0}.\protosym{Collection}&\Rightarrow\genericsym{0}{0}\tag{1}\\
|
|
\genericsym{0}{1}.\protosym{Collection}&\Rightarrow\genericsym{0}{1}\tag{2}\\
|
|
\genericsym{0}{1}.\namesym{Element}&\Rightarrow\genericsym{0}{0}.\namesym{Element}\tag{3}
|
|
\end{align}
|
|
Rule 1 and Rule 2 are lowered conformance requirements of the form $\namesym{T}.\protosym{P}\Rightarrow\namesym{T}$ just like before, and Rule 3 is the lowered same-type requirement.
|
|
|
|
This rewrite system will also need to include the requirements of the $\proto{Collection}$ protocol, as well as $\proto{Sequence}$ and $\proto{IteratorProtocol}$, which are referenced from the requirement signatures of $\proto{Collection}$ and $\proto{Sequence}$.
|
|
\end{example}
|
|
\begin{definition}The \emph{protocol dependencies} of a generic signature (or protocol requirement signature) are all the protocols that appear on the right-hand side of its conformance requirements. The \emph{complete} protocol dependencies of a generic signature are the transitive closure of its protocol dependencies.
|
|
\end{definition}
|
|
|
|
While generic parameters are uniquely identified by their depth and index within a \emph{single} generic signature, they are not unique \emph{between} generic signatures, so each generic signature needs its own requirement machine. This process of constructing a requirement machine from a generic signature can be formalized as follows.
|
|
\begin{algorithm}[Requirement machine construction]\label{rqmalgo}
|
|
Let $G$ be the input generic signature $\gensig{\genericsym{0}{0},\;\ldots,\;\genericsym{m}{n}}{R_0,\;\ldots,\;R_i}$. The algorithm outputs a confluent rewrite system, or fails with a timeout.
|
|
\begin{enumerate}
|
|
\item Let $S$ be an empty rewrite system.
|
|
\item Let $W$ be an empty stack of protocols.
|
|
\item Let $V$ be an empty set of protocols.
|
|
\item For each requirement $R$ of $G$:
|
|
\begin{enumerate}
|
|
\item Lower $R$ to a rewrite rule $\Lambda(R)$, and add the new rule to $S$.
|
|
\item If $R$ is a conformance requirement $\namesym{T}\colon\proto{P}$ and $\proto{P}\notin V$, push $\proto{P}$ onto $W$, and insert $\proto{P}$ into $V$.
|
|
\end{enumerate}
|
|
\item While $W$ is not empty,
|
|
\begin{enumerate}
|
|
\item Pop the next protocol $\proto{P}$ from $W$.
|
|
\item Add rewrite rules corresponding to the associated types and requirements of $\proto{P}$ using the protocol lowering map from Definition~\ref{protoloweringmap2}.
|
|
\item For each conformance requirement $\namesym{T}\colon\proto{Q}$ in the requirement signature of $\proto{P}$,
|
|
\begin{enumerate}
|
|
\item If $\proto{Q}\notin V$, push $\proto{Q}$ onto $W$, and insert $\proto{Q}$ into $V$.
|
|
\end{enumerate}
|
|
\end{enumerate}
|
|
\item Run the Knuth-Bendix completion procedure on $S$ (Algorithm~\ref{knuthbendix}).
|
|
\item If the completion succeeds within the configured iteration and depth limits, return $S$; otherwise, diagnose an error.
|
|
\end{enumerate}
|
|
\end{algorithm}
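Steps 2 through 5 amount to computing the complete protocol dependencies with a worklist. Here is a minimal sketch, assuming a hypothetical \texttt{directDependencies} map from each protocol to the protocols named by the conformance requirements in its requirement signature:

\begin{Verbatim}
func completeDependencies(
    roots: [String],
    directDependencies: [String: [String]]) -> Set<String> {
  var visited = Set<String>()
  var stack = [String]()
  for p in roots where visited.insert(p).inserted { stack.append(p) }
  while let p = stack.popLast() {
    // Visit each protocol's dependencies exactly once.
    for q in directDependencies[p, default: []]
        where visited.insert(q).inserted {
      stack.append(q)
    }
  }
  return visited
}
\end{Verbatim}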
|
|
|
|
\section{Concrete Types}\label{concretetypes}
|
|
|
|
\index{concrete type symbol}
|
|
\index{superclass symbol}
|
|
\index{layout symbol}
|
|
The time has come to reveal the mystery of how layout, superclass and concrete type requirements work.
|
|
|
|
Algorithm~\ref{lowerreqinproto} showed that just like a conformance requirement $\namesym{T}\colon\proto{P}$ becomes a rewrite rule $\namesym{T}.\protosym{P}\Rightarrow\namesym{T}$, a layout requirement $\namesym{T}\colon\namesym{L}$ becomes a rewrite rule $\namesym{T}.\layoutsym{L}\Rightarrow\namesym{T}$. The situation with superclass and concrete type requirements is analogous, except that superclass and concrete type symbols are constructed in a more elaborate manner, described in Algorithm~\ref{concretesymbolcons}.
|
|
|
|
In fact, this phenomenon where rules eliminate a symbol from the end of a term can be formalized. Figure \ref{symbolclass} classifies the alphabet into \emph{property-like} and \emph{type-like} symbols (protocol symbols straddle both classifications, because they can also arise as the protocol $\genericparam{Self}$ type). This notion of property-like symbols also generalizes to property-like rules.
|
|
\begin{definition} A rewrite rule is \emph{property-like} if it is of the form $\namesym{T}.\namesym{\Pi}\Rightarrow\namesym{T}$ with $\namesym{\Pi}$ a property-like symbol.
|
|
\end{definition}
|
|
|
|
\index{property-like!symbol}
|
|
\index{type-like!symbol}
|
|
\begin{figure}\caption{symbol kind classification}\label{symbolclass}
|
|
\begin{center}
|
|
\begin{tabular}{|l|l|}
|
|
\hline
|
|
\multirow{3}{14em}{Property-like}& layout\\
|
|
&superclass\\
|
|
&concrete type\\
|
|
\hline
|
|
\multirow{3}{14em}{Type-like}& associated type\\
|
|
&name\\
|
|
&generic parameter\\
|
|
\hline
|
|
\multirow{1}{14em}{Both property and type-like}& protocol\\
|
|
\hline
|
|
\end{tabular}
|
|
\end{center}
|
|
\end{figure}
|
|
|
|
\index{substitution}
|
|
Recall that superclass and concrete type symbols store a Swift type together with a list of substitutions. What do these represent exactly, and why is it not enough to store a Swift type alone? Well, consider this generic signature:
|
|
\begin{align*}
|
|
\gensig{\genericsym{0}{0},\;\genericsym{0}{1}}
|
|
{
|
|
&\genericsym{0}{0}\colon\proto{Sequence},\\
|
|
&\genericsym{0}{1}\colon\proto{Sequence},\\
|
|
&\genericsym{0}{0}.\namesym{Element}==
|
|
\namesym{Array}\langle\genericsym{0}{1}.\namesym{Element}\rangle
|
|
}
|
|
\end{align*}
|
|
The right-hand side of the concrete type requirement contains the type parameter $\genericsym{0}{1}.\namesym{Element}$. This type parameter lowers to the term $\genericsym{0}{1}.\assocsym{Sequence}{Element}$. It would be nice if the Swift type could directly contain this term, but a \texttt{BoundGenericType} like $\namesym{Array}\langle\bullet\rangle$ only contains other types, not terms or any other arbitrary objects.
|
|
\begin{leftbar}
|
|
\noindent In theory, it would be possible to duplicate the entire Swift type hierarchy in the world of concrete type symbols, but it would not be very practical. The parallel hierarchy would be quite large, with its own versions of metatypes, function types, tuple types, and so on. This would also be a maintenance burden going forward, since any addition to the Swift type representation, for example a new attribute on function types, would have to be mirrored in the world of concrete type terms.
|
|
|
|
Another option would be to introduce a special kind of placeholder type in the Swift AST, which can store a term, but this would also have undesirable ripple effects throughout the codebase.
|
|
\end{leftbar}
|
|
There is a simple solution. Concrete type symbols store the child terms off to the side in a list of substitutions. The substitution terms are referenced from within the Swift type using ``phantom'' generic parameters, disregarding the depth and using the index to refer to an element of the substitution list.
|
|
|
|
\begin{algorithm}[Concrete type symbol construction]\label{concretesymbolcons}
|
|
This algorithm takes as input a Swift type $X$ containing arbitrary type parameters. It returns a new type in which each type parameter has been replaced with a generic parameter indexing into a substitution list, together with the substitution list itself.
|
|
|
|
This algorithm can build a symbol for use in a rule constructed from a requirement in a protocol $\proto{P}$, or a requirement in the generic signature of a function or type. The only difference is whether types are lowered via $\Lambda_{\proto{P}}$ (Algorithm~\ref{lowertypeinproto}) or $\Lambda$ (Algorithm~\ref{lowertypeinsig}).
|
|
\begin{enumerate}
|
|
\item Initialize $S$ with an empty list of terms.
|
|
\item For each position $\pi$ where $X|_{\pi}$ is a type parameter,
|
|
\begin{enumerate}
|
|
\item Get the type parameter $T$ stored at $X|_{\pi}$.
|
|
\item Replace $X|_{\pi}$ with a new generic parameter type $\genericsym{0}{j}$, where $j$ is the number of elements in $S$ so far.
|
|
\item Append the term $\Lambda(T)$ (or $\Lambda_{\proto{P}}(T)$) to $S$.
|
|
\end{enumerate}
|
|
\item Build the concrete type symbol $\concretesym{X;S}$ with the modified Swift type $X$ and substitutions $S$.
|
|
\end{enumerate}
|
|
\end{algorithm}
The same algorithm also constructs superclass symbols, if you replace $\concretesym{X;S}$ with $\supersym{X;S}$ in Step~3.
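To make the bookkeeping concrete, here is a minimal sketch of Algorithm~\ref{concretesymbolcons} in Swift. The \texttt{TypeNode} and \texttt{Term} representations, and the \texttt{lower()} function standing in for $\Lambda$, are hypothetical simplifications rather than the compiler's actual data structures:
\begin{Verbatim}
// A minimal sketch of concrete type symbol construction.
// TypeNode, Term and lower() are hypothetical stand-ins for
// the compiler's Type, rewrite system term, and lowering map.
indirect enum TypeNode {
    case typeParameter(String)        // e.g. "B.Element"
    case nominal(String, [TypeNode])  // e.g. Array<...>
    case phantom(Int)                 // sigma_i: index into S
}

typealias Term = String

func lower(_ typeParameter: String) -> Term {
    // Stand-in for the lowering map; the real one produces a
    // term built out of associated type symbols.
    return typeParameter
}

// Replaces every type parameter with a phantom parameter
// sigma_j and appends the lowered term to the substitutions.
func buildConcreteSymbol(_ type: TypeNode,
                         _ substitutions: inout [Term]) -> TypeNode {
    switch type {
    case .typeParameter(let name):
        let index = substitutions.count
        substitutions.append(lower(name))
        return .phantom(index)
    case .nominal(let name, let children):
        let newChildren = children.map {
            buildConcreteSymbol($0, &substitutions)
        }
        return .nominal(name, newChildren)
    case .phantom:
        return type
    }
}

// Example: Array<B.Element> becomes Array<sigma_0> together
// with the one-element substitution list [lower("B.Element")].
var substitutions: [Term] = []
let adjusted = buildConcreteSymbol(
    .nominal("Array", [.typeParameter("B.Element")]),
    &substitutions)
\end{Verbatim}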
\begin{example}
The type $\namesym{Array}\langle\genericsym{0}{1}.\namesym{Element}\rangle$, when written in a generic signature where $\genericsym{0}{1}\colon\proto{Sequence}$, becomes the following concrete type symbol:
\[\concretesym{\namesym{Array}\langle\genericsym{0}{0}\rangle;\;\sigma_0:=\genericsym{0}{1}.\assocsym{Sequence}{Element}}.\]
The generic parameter $\genericsym{0}{0}$ appearing within $\namesym{Array}\langle\genericsym{0}{0}\rangle$ is not a real generic parameter from the current generic context; instead, it's just an index into the substitution list, here referring to the first element, $\genericsym{0}{1}.\assocsym{Sequence}{Element}$.

To aid with readability, let's write a ``phantom'' generic parameter $\genericsym{0}{i}$ as $\sigma_i$. Now, the above symbol looks a little neater:
\[\concretesym{\namesym{Array}\langle\sigma_0\rangle;\;\sigma_0:=\genericsym{0}{1}.\assocsym{Sequence}{Element}}.\]
\end{example}
\begin{example}
The function type $(\namesym{Array}\langle\genericsym{1}{0}\rangle,\; \namesym{Int})\rightarrow \genericsym{1}{1}.\namesym{Element}$, when written in a generic signature where $\genericsym{1}{1}\colon\proto{Sequence}$, maps to the following concrete type symbol:
\begin{align*}
\concretesym{&(\namesym{Array}\langle\sigma_0\rangle,\; \namesym{Int})\rightarrow \sigma_1;\\
&\sigma_0:=\genericsym{1}{0},\\
&\sigma_1:=\genericsym{1}{1}.\assocsym{Sequence}{Element}}.
\end{align*}
\end{example}

\begin{example}
The tuple type $(\genericparam{Self}.\namesym{Magnitude},\; \genericparam{Self}.\namesym{Words})$, when written in a protocol $\proto{P}$ that defines associated types $\namesym{Magnitude}$ and $\namesym{Words}$, maps to the following concrete type symbol:
\begin{align*}
\concretesym{&(\sigma_0,\;\sigma_1);\\
&\sigma_0:=\assocsym{P}{Magnitude},\\
&\sigma_1:=\assocsym{P}{Words}}.
\end{align*}
\end{example}

Note that the Swift type in a superclass or concrete type symbol cannot itself be a type parameter. That is, the following is never valid:
\[\concretesym{\sigma_0;\; \sigma_0=\hbox{some term}}\]
A same-type requirement between one type parameter and another is always represented as an equivalence of terms; no concrete type symbols are involved.
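For instance, in the following definitions the requirement $\namesym{A}==\namesym{B}$ relates two type parameters, so it lowers to a plain rewrite rule between type terms, whereas $\namesym{C}==\namesym{Array}\langle\namesym{A}\rangle$ is the kind of requirement that produces a concrete type symbol:
\begin{Verbatim}
protocol P {
  associatedtype A
  associatedtype B where A == B        // equivalence of terms
  associatedtype C where C == Array<A> // concrete type symbol
}
\end{Verbatim}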
\index{partial order}
One thing to note is that the reduction order on symbols (Definition~\ref{symbolorder}) is a partial order, as layout, superclass and concrete type symbols are incomparable amongst themselves. Ordinarily, this would imply that the Knuth-Bendix completion procedure (Algorithm~\ref{knuthbendix}) can fail in a new way: when resolving a critical pair $(t_0, t_1)$ to a rewrite rule, it might be the case that there is no way to orient the rule; that is, neither $t_0<t_1$ nor $t_1<t_0$. In practice, because of the special way in which property-like symbols only appear in property-like rules, this cannot happen with a rewrite system constructed from generic requirements. I'll show you the proof later.
\section{Overlapping Concrete Type Adjustment}\label{concretetypeadjust}

\index{overlapping rules}
\index{critical pair}
The next issue is rather subtle. An adjustment needs to be made to the Knuth-Bendix completion procedure to correctly handle overlaps between rules involving concrete type and superclass symbols. In what follows, I will only talk about concrete type symbols, but as usual, superclass symbols behave identically; just replace $\concretesym{\cdots}$ with $\supersym{\cdots}$.

First, let's extend the notion of validity of rules to add a new precondition that a concrete type rule must satisfy in order to be valid. This will be important in the implementation of generic signature queries, described in the next chapter.
\begin{definition}\label{concretevalid}
\index{domain of a term}
A concrete type rule $\namesym{T}.\concretesym{\namesym{C};\,\sigma_0,\ldots,\sigma_n}\Rightarrow\namesym{T}$ is \emph{valid} if
\begin{enumerate}
\item the rule is valid in the sense of Definition~\ref{validrule}, and additionally,
\item for each $\sigma_i$, $\domain(\sigma_i)=\domain(\namesym{T})$.
\end{enumerate}
\end{definition}
Recall Definition~\ref{overlappingrules}, and suppose the two overlapping rules are $x\Rightarrow y$ and $x'\Rightarrow y'$, where the overlap is of the second kind; that is, $x$ has a suffix equal to a prefix of $x'$. Furthermore, assume $x'$ ends with a concrete type symbol. Borrowing the notation from this definition, this means that
\begin{align*}
x&=uv\\
x'&=vw.\concretesym{\namesym{T};\;\sigma_0,\ldots,\sigma_n}
\end{align*}
Going by that definition alone, the overlapped term $t$ is
\[t=uvw.\concretesym{\namesym{T};\;\sigma_0,\ldots,\sigma_n}.\]
Reducing $t$ with $x\Rightarrow y$ and $x'\Rightarrow y'$ gives the critical pair:
\begin{align*}
t_0&=yw.\concretesym{\namesym{T};\;\sigma_0,\ldots,\sigma_n}\\
t_1&=uy'
\end{align*}
Unfortunately, this violates the new notion of validity in Definition~\ref{concretevalid}: the substitution terms $\sigma_i$ were left unchanged, so they no longer satisfy the domain condition relative to the new subject term. The trick is that when forming the overlapped term $t$, you need to prepend $u$ to each substitution term $\sigma_i$. So the adjusted overlapped term $t$ is
\[t=uvw.\concretesym{\namesym{T};\;u\sigma_0,\ldots,u\sigma_n}.\]
Reducing the adjusted term $t$ with $x\Rightarrow y$ and $x'\Rightarrow y'$ gives the correct critical pair:
\begin{align*}
t_0&=yw.\concretesym{\namesym{T};\;u\sigma_0,\ldots,u\sigma_n}\\
t_1&=uy'
\end{align*}
Note the substitutions in $t_0$ now have $u$ prepended to each term $\sigma_i$.

(Incidentally, can we say anything more about $t_1$? Is it the case that $t_0>t_1$? Since $x'\Rightarrow y'$ is a property-like rule, $x'$ is equal to $y'$ with a concrete type symbol appended; in other words, $y'=vw$, so $t_1=uvw$. But $x=uv$, so $t_1=uvw$ reduces to $yw$. So indeed, the above critical pair either becomes trivial if $t_0$ can be reduced by some other rule, or it introduces the rewrite rule $t_0\Rightarrow yw$.)
\begin{example}
Consider the generic signature of class $\namesym{C}$ from Listing~\ref{overlapconcreteex}:
\begin{listing}\caption{Example with overlap involving concrete type term}\label{overlapconcreteex}
\begin{Verbatim}
struct G<A> {}

protocol S {
  associatedtype E
}

protocol P {
  associatedtype T
  associatedtype U where U == G<V>
  associatedtype V
}

class C<X>
  where X : S,
        X.E : P,
        X.E.U == X.E.T {}
\end{Verbatim}
\end{listing}
\begin{align*}
\gensig{\genericsym{0}{0}}{&\genericsym{0}{0}\colon\proto{S},\\
&\genericsym{0}{0}.\namesym{E}\colon\proto{P},\\
&\genericsym{0}{0}.\namesym{E}.\namesym{U}==\genericsym{0}{0}.\namesym{E}.\namesym{T}}
\end{align*}
The relevant subset of this generic signature's rewrite rules:
\begin{align}
&\assocsym{P}{U}.\concretesym{\namesym{G}\langle\sigma_0\rangle;\;\sigma_0:=\assocsym{P}{V}}&\Rightarrow\;&\assocsym{P}{U}\tag{Rule 1}\\
&\genericsym{0}{0}.\protosym{S}&\Rightarrow\;&\genericsym{0}{0}\tag{Rule 2}\\
&\genericsym{0}{0}.\assocsym{S}{E}.\protosym{P}&\Rightarrow\;&\genericsym{0}{0}.\assocsym{S}{E}\tag{Rule 3}\\
&\genericsym{0}{0}.\assocsym{S}{E}.\assocsym{P}{U}&\Rightarrow\;&\genericsym{0}{0}.\assocsym{S}{E}.\assocsym{P}{T}\tag{Rule 4}
\end{align}
Observe that Rule~4 overlaps with Rule~1. The prefix $\genericsym{0}{0}.\assocsym{S}{E}$ must be prepended to the substitution $\sigma_0$ in the concrete type symbol when computing the critical pair:
\begin{align*}
t_0&=\genericsym{0}{0}.\assocsym{S}{E}.\assocsym{P}{T}.\concretesym{\namesym{G}\langle\sigma_0\rangle;\;\sigma_0:=\genericsym{0}{0}.\assocsym{S}{E}.\assocsym{P}{V}}\\
t_1&=\genericsym{0}{0}.\assocsym{S}{E}.\assocsym{P}{U}
\end{align*}
Now, $t_0$ cannot be reduced further, whereas Rule~4 reduces $t_1$ to $\genericsym{0}{0}.\assocsym{S}{E}.\assocsym{P}{T}$.
This means that resolving the critical pair introduces the new rewrite rule:
\[\genericsym{0}{0}.\assocsym{S}{E}.\assocsym{P}{T}.\concretesym{\namesym{G}\langle\sigma_0\rangle;\;\sigma_0:=\genericsym{0}{0}.\assocsym{S}{E}.\assocsym{P}{V}}\Rightarrow\genericsym{0}{0}.\assocsym{S}{E}.\assocsym{P}{T}.
\]
Intuitively, the completion process began with the fact that
\[\assocsym{P}{U}==\namesym{G}\langle\assocsym{P}{V}\rangle,\]
and derived that
\[\genericsym{0}{0}.\assocsym{S}{E}.\assocsym{P}{T}==\namesym{G}\langle\genericsym{0}{0}.\assocsym{S}{E}.\assocsym{P}{V}\rangle.\]
Adjusting the concrete type symbol by prepending the prefix $\genericsym{0}{0}.\assocsym{S}{E}$ to the substitution $\sigma_0$ appearing in the left-hand side of Rule~1 ``re-rooted'' the concrete type, giving the correct result shown above and preserving validity as per Definition~\ref{concretevalid}. Without the adjustment, we would instead have derived the fact
\[\genericsym{0}{0}.\assocsym{S}{E}.\assocsym{P}{T}==\namesym{G}\langle\assocsym{P}{V}\rangle,\]
which does not make sense.
\end{example}
The concrete type adjustment comes up again in the next chapter, during property map construction (Algorithm~\ref{propmapconsalgo}) and lookup (Algorithm~\ref{propmaplookupalgo}).
\chapter{The Property Map}\label{propertymap}

Until now, you've seen how to solve the \texttt{requiresProtocol()} generic signature query. If $T$ is a type term, then the type parameter represented by $T$ conforms to a protocol $\proto{P}$ if $T$ and $T.\protosym{P}$ both reduce to the same canonical form ${T}{\downarrow}$. The next step is to solve more general queries, such as \texttt{getRequiredProtocols()}. Here, you want to find \emph{all} protocol symbols $\protosym{P}$ such that $T.\protosym{P}$ and $T$ reduce to some ${T}{\downarrow}$.
\index{layout requirement}
\index{conformance requirement}
\index{concrete type requirement}
\index{property-like symbol}
One potential implementation would use exhaustive enumeration. A rewrite system's rules only mention a finite set of protocol symbols, so it would be enough to suffix a type term with every known protocol symbol and attempt to reduce the result. While this shows that the query is implementable, it is an unsatisfying solution. The approach outlined below is more efficient, and it generalizes to queries involving layout, superclass and concrete type requirements as well.
\begin{definition} If $T$ and $U$ are terms and there is some term $Z$ such that $T\rightarrow Z$ and $U\rightarrow Z$, then $T$ and $U$ are said to be \emph{joinable}, and this is written as $T\downarrow U$.
\end{definition}

\begin{definition}
If $\Pi$ is a property-like symbol and $T$ is a term, then $T$ \emph{satisfies} $\Pi$ if $T.\Pi\downarrow T$. The set of properties satisfied by $T$ is defined as the set of all symbols $\Pi$ such that $T.\Pi\downarrow T$.
\end{definition}
\begin{theorem}\label{propertymaptheorem} If $T$ is a type term with canonical form ${T}{\downarrow}$, $\Pi$ is a property-like symbol, and $T$ satisfies $\Pi$, then ${T}{\downarrow}.\Pi\rightarrow{T}{\downarrow}$. Furthermore, this reduction sequence consists of a single rule $V.\Pi\Rightarrow V$, for some non-empty suffix $V$ of ${T}{\downarrow}$.
\end{theorem}
\begin{proof}
Since $T$ can be reduced to ${T}{\downarrow}$, the same reduction sequence when applied to $T.\Pi$ will produce ${T}{\downarrow}.\Pi$. This means that $T.\Pi$ can be reduced to both ${T}{\downarrow}$ (by assumption), and ${T}{\downarrow}.\Pi$. By confluence, ${T}{\downarrow}.\Pi$ must reduce to ${T}{\downarrow}$.

Since ${T}{\downarrow}$ is canonical, ${T}{\downarrow}.\Pi$ cannot be reduced further except with a rewrite rule of the form $V.\Pi\Rightarrow V'$, where ${T}{\downarrow}=UV$, for terms $U$, $V$ and $V'$. It remains to show that $V=V'$. (TODO: This needs an additional assumption about conformance-valid rules.)
\end{proof}
By Theorem~\ref{propertymaptheorem}, the properties satisfied by a type term can be discovered by considering all non-empty suffixes of ${T}{\downarrow}$, and collecting rewrite rules of the form $V.\Pi\Rightarrow V$ where $\Pi$ is some property-like symbol.
\begin{listing}\caption{Motivating example for property map}\label{propmaplisting1}
\begin{Verbatim}
protocol P1 {}

protocol P2 {}

protocol P3 {
  associatedtype T : P1
  associatedtype U : P2
}

protocol P4 {
  associatedtype A : P3 where A.T == A.U
  associatedtype B : P3
}
\end{Verbatim}
\end{listing}
\begin{example}\label{propmapexample1}
Consider the protocol definitions in Listing~\ref{propmaplisting1}. These definitions are used in a couple of examples below, so let's look at the constructed rewrite system first. Protocols $\proto{P1}$ and $\proto{P2}$ do not define any associated types or requirements, so they do not contribute any initial rewrite rules. Protocol $\proto{P3}$ has two associated types $\namesym{T}$ and $\namesym{U}$ conforming to $\proto{P1}$ and $\proto{P2}$ respectively, so a pair of rules introduce each associated type, and another pair impose conformance requirements:
\begin{align}
\protosym{P3}.\namesym{T}&\Rightarrow\assocsym{P3}{T}\tag{1}\\
\protosym{P3}.\namesym{U}&\Rightarrow\assocsym{P3}{U}\tag{2}\\
\assocsym{P3}{T}.\protosym{P1}&\Rightarrow\assocsym{P3}{T}\tag{3}\\
\assocsym{P3}{U}.\protosym{P2}&\Rightarrow\assocsym{P3}{U}\tag{4}
\end{align}
Protocol $\proto{P4}$ adds five additional rules. A pair of rules introduce the associated types $\namesym{A}$ and $\namesym{B}$. Next, both associated types conform to $\proto{P3}$, and $\namesym{A}$ has a same-type requirement between its nested types $\namesym{T}$ and $\namesym{U}$:
\begin{align}
\protosym{P4}.\namesym{A}&\Rightarrow\assocsym{P4}{A}\tag{5}\\
\protosym{P4}.\namesym{B}&\Rightarrow\assocsym{P4}{B}\tag{6}\\
\assocsym{P4}{A}.\protosym{P3}&\Rightarrow\assocsym{P4}{A}\tag{7}\\
\assocsym{P4}{B}.\protosym{P3}&\Rightarrow\assocsym{P4}{B}\tag{8}\\
\assocsym{P4}{A}.\assocsym{P3}{U}&\Rightarrow\assocsym{P4}{A}.\assocsym{P3}{T}\tag{9}
\end{align}
When applied to the above initial rewrite system, the Knuth-Bendix algorithm adds a handful of new rules to resolve critical pairs. First, there are four overlaps between the conformance requirements of $\proto{P4}$ and the associated type introduction rules of $\proto{P3}$:
\begin{align}
\assocsym{P4}{A}.\namesym{T}&\Rightarrow\assocsym{P4}{A}.\assocsym{P3}{T}\tag{10}\\
\assocsym{P4}{A}.\namesym{U}&\Rightarrow\assocsym{P4}{A}.\assocsym{P3}{T}\tag{11}\\
\assocsym{P4}{B}.\namesym{T}&\Rightarrow\assocsym{P4}{B}.\assocsym{P3}{T}\tag{12}\\
\assocsym{P4}{B}.\namesym{U}&\Rightarrow\assocsym{P4}{B}.\assocsym{P3}{U}\tag{13}
\end{align}
Finally, there is an overlap between Rule~9 and Rule~4:
\begin{align}
\assocsym{P4}{A}.\assocsym{P3}{T}.\protosym{P2}&\Rightarrow\assocsym{P4}{A}.\assocsym{P3}{T}\tag{14}
\end{align}
Consider the type parameter $\genericparam{Self}.\namesym{A}.\namesym{U}$ in the generic signature of $\proto{P4}$. This type parameter is equivalent to $\genericparam{Self}.\namesym{A}.\namesym{T}$ via the same-type requirement in $\proto{P4}$. The associated type $\namesym{T}$ of $\proto{P3}$ conforms to $\proto{P1}$, and $\namesym{U}$ conforms to $\proto{P2}$. This means that $\genericparam{Self}.\namesym{A}.\namesym{U}$ conforms to \emph{both} $\proto{P1}$ and $\proto{P2}$.

Let's see how this fact can be derived from the rewrite system. Applying $\Lambda_{\proto{P4}}$ to $\genericparam{Self}.\namesym{A}.\namesym{U}$ produces the type term $\assocsym{P4}{A}.\assocsym{P3}{U}$. This type term can be reduced to the canonical term $\assocsym{P4}{A}.\assocsym{P3}{T}$ with a single application of Rule~9. By the result in Theorem~\ref{propertymaptheorem}, it suffices to look at rules of the form $V.\Pi\Rightarrow V$, where $V$ is some suffix of $\assocsym{P4}{A}.\assocsym{P3}{T}$. There are two such rules:
\begin{enumerate}
\item Rule~3, which says that $\assocsym{P3}{T}$ conforms to $\proto{P1}$.
\item Rule~14, which says that $\assocsym{P4}{A}.\assocsym{P3}{T}$ conforms to $\proto{P2}$.
\end{enumerate}
This shows that the set of properties satisfied by the type parameter $\genericparam{Self}.\namesym{A}.\namesym{U}$ is exactly $\{\protosym{P1},\protosym{P2}\}$.
\end{example}
The above example might suggest that looking up the set of properties satisfied by a type parameter requires iterating over the list of rewrite rules, but in reality, it is possible to construct a multi-map of pairs $(V, \Pi)$ once, after the completion procedure ends.

As you saw in the example, a type term can satisfy multiple properties via different suffixes. For the material presented in Section~\ref{moreconcretetypes}, it is convenient to avoid having to take the union of sets in the lookup path. For this reason, the construction algorithm explicitly ``inherits'' the symbols associated with a term $V$ when adding an entry for a term $UV$ that has $V$ as a suffix. As a result, the lookup algorithm only has to find the longest suffix that appears in the multi-map to obtain all properties satisfied by a term.

The multi-map construction and lookup can be formalized in a pair of algorithms.
\begin{algorithm}[Property map construction]\label{propmapconsalgo}
This algorithm runs after the completion procedure has constructed a confluent rewrite system with simplified right-hand sides. As output, it produces a multi-map mapping terms to sets of symbols.
\begin{enumerate}
\item Initialize $S$ to the list of all rewrite rules of the form $V.\Pi\Rightarrow V$.
\item Initialize $P$ to a multi-map mapping terms to sets of symbols, initially empty.
\item Sort $S$ in ascending order by the lengths of the rewrite rules' left-hand sides. The relative order of rules whose left-hand sides have the same length is irrelevant.
\item For each rule $V.\Pi\Rightarrow V\in S$,
\begin{enumerate}
\item If $V\notin P$, initialize $P[V]$ first as follows.

If $P$ contains some $V''$ where $V=V'V''$, copy the symbols from $P[V'']$ to $P[V]$. When copying superclass or concrete type symbols, the substitution terms inside the symbol must be adjusted by prepending $V'$.

This is the point where the algorithm relies on the sorting of rules done in Step~3. Since $|V''|<|V|$, all rules of the form $V''.\Pi\Rightarrow V''$ have already been visited by the time the algorithm can encounter a rule involving $V$.

In fact, since the map is constructed in bottom-up order, it suffices to only check the \emph{longest} suffix $V''$ of $V$ such that $V''\in P$.
\item Insert $\Pi$ into $P[V]$.
\end{enumerate}
\end{enumerate}
\end{algorithm}
Once the property map has been built, lookup is very simple.
\begin{algorithm}[Property map lookup]\label{propmaplookupalgo} Given a type parameter $T$ and a property map $P$, this algorithm outputs the set of properties satisfied by $T$.
\begin{enumerate}
\item First, lower $T$ to a type term $\Lambda(T)$, and reduce this term to canonical form $\Lambda(T){\downarrow}$.
\item If no suffix of $\Lambda(T){\downarrow}$ appears in $P$, return the empty set.
\item Otherwise, let $\Lambda(T){\downarrow}:=UV$, where $V$ is the longest suffix of $\Lambda(T){\downarrow}$ appearing in $P$.
\item Let $S:=P[V]$, the set of property symbols associated with $V$ in $P$.
\item For each superclass or concrete type symbol $\Pi\in S$, prepend $U$ to every substitution term inside the symbol. Return the adjusted set $S$.
\end{enumerate}
\end{algorithm}
Notice how in both algorithms, superclass and concrete type symbols are adjusted by prepending a prefix to each substitution. This is the same operation as described in Section~\ref{concretetypeadjust}.
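To summarize both algorithms, here is a hedged Swift sketch over a simplified model, where a term is an array of symbol names and property-like symbols are plain strings; the superclass and concrete type substitution adjustment is noted but elided:
\begin{Verbatim}
// A sketch of property map construction and lookup. Rule
// models the shape V.Pi => V; subjects are assumed non-empty.
typealias Term = [String]
struct Rule {
    var subject: Term      // V
    var property: String   // Pi
}

func buildPropertyMap(_ rules: [Rule]) -> [Term: Set<String>] {
    var map: [Term: Set<String>] = [:]
    // Step 3: sort ascending by left-hand side length.
    for rule in rules.sorted(by: {
        $0.subject.count < $1.subject.count }) {
        let v = rule.subject
        if map[v] == nil {
            // Inherit from the longest proper suffix already
            // present. (The real algorithm also adjusts any
            // superclass/concrete type symbols by prepending
            // the prefix V'.)
            for start in 1 ..< v.count {
                if let inherited = map[Array(v[start...])] {
                    map[v] = inherited
                    break
                }
            }
        }
        map[v, default: []].insert(rule.property)
    }
    return map
}

// Lookup: find the longest suffix with an entry.
func properties(of term: Term,
                in map: [Term: Set<String>]) -> Set<String> {
    for start in 0 ..< term.count {
        if let entry = map[Array(term[start...])] {
            return entry
        }
    }
    return []
}
\end{Verbatim}
Running the construction on the five property-like rules of Example~\ref{propmapexample2} below should reproduce Table~\ref{propmapexample2table}: when the length-two subject $\assocsym{P4}{A}.\assocsym{P3}{T}$ is first encountered, it inherits $\protosym{P1}$ from its suffix $\assocsym{P3}{T}$ before $\protosym{P2}$ is inserted.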
\begin{example}\label{propmapexample2}
Recall Example~\ref{propmapexample1}, where a rewrite system was constructed from Listing~\ref{propmaplisting1}. The complete rewrite system contains five rewrite rules of the form $V.\Pi\Rightarrow V$:
\begin{enumerate}
\item Rule~3 and Rule~4, which state that the associated types $\namesym{T}$ and $\namesym{U}$ of $\proto{P3}$ conform to $\proto{P1}$ and $\proto{P2}$, respectively.
\item Rule~7 and Rule~8, which state that the associated types $\namesym{A}$ and $\namesym{B}$ of $\proto{P4}$ both conform to $\proto{P3}$.
\item Rule~14, which states that the nested type $\namesym{A}.\namesym{T}$ of $\proto{P4}$ also conforms to $\proto{P2}$.
\end{enumerate}
The property map constructed by Algorithm~\ref{propmapconsalgo} from the above rules is shown in Table~\ref{propmapexample2table}.
\end{example}
\begin{table}\caption{Property map constructed from Example~\ref{propmapexample2}}\label{propmapexample2table}
\begin{center}
\begin{tabular}{|l|l|}
\hline
Key&Values\\
\hline
\hline
$\assocsym{P3}{T}$&$\protosym{P1}$\\
$\assocsym{P3}{U}$&$\protosym{P2}$\\
$\assocsym{P4}{A}$&$\protosym{P3}$\\
$\assocsym{P4}{B}$&$\protosym{P3}$\\
$\assocsym{P4}{A}.\assocsym{P3}{T}$&$\protosym{P1}$, $\protosym{P2}$\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{example}\label{propmapexample3}
The second example explores layout, superclass and concrete type requirements. Consider the protocol definitions in Listing~\ref{propmaplisting} together with the generic signature:
\[\gensig{\genericsym{0}{0}}{\genericsym{0}{0}\colon\proto{P}, \genericsym{0}{0}.\namesym{B}\colon\proto{Q}}\]
The three protocols $\proto{R}$, $\proto{Q}$ and $\proto{P}$ together with the generic signature generate the following initial rewrite rules:
\begin{align*}
\protosym{Q}.\protosym{R}&\Rightarrow\protosym{Q}\tag{1}\\
\protosym{P}.\namesym{A}&\Rightarrow\assocsym{P}{A}\tag{2}\\
\protosym{P}.\namesym{B}&\Rightarrow\assocsym{P}{B}\tag{3}\\
\protosym{P}.\namesym{C}&\Rightarrow\assocsym{P}{C}\tag{4}\\
\assocsym{P}{A}.\layoutsym{AnyObject}&\Rightarrow\assocsym{P}{A}\tag{5}\\
\assocsym{P}{B}.\supersym{\namesym{Cache}\langle\sigma_0\rangle;\,\sigma_0:=\assocsym{P}{A}}&\Rightarrow\assocsym{P}{B}\tag{6}\\
\assocsym{P}{B}.\layoutsym{\_NativeClass}&\Rightarrow\assocsym{P}{B}\tag{7}\\
\assocsym{P}{C}.\concretesym{\namesym{Array}\langle\sigma_0\rangle;\,\sigma_0:=\assocsym{P}{A}}&\Rightarrow\assocsym{P}{C}\tag{8}\\
\genericsym{0}{0}.\protosym{P}&\Rightarrow\genericsym{0}{0}\tag{9}\\
\genericsym{0}{0}.\assocsym{P}{B}.\protosym{Q}&\Rightarrow\genericsym{0}{0}.\assocsym{P}{B}\tag{10}
\end{align*}
The Knuth-Bendix algorithm adds the following rules to make the rewrite system confluent:
\begin{align*}
\genericsym{0}{0}.\namesym{A}&\Rightarrow\genericsym{0}{0}.\assocsym{P}{A}\tag{11}\\
\genericsym{0}{0}.\namesym{B}&\Rightarrow\genericsym{0}{0}.\assocsym{P}{B}\tag{12}\\
\genericsym{0}{0}.\namesym{C}&\Rightarrow\genericsym{0}{0}.\assocsym{P}{C}\tag{13}\\
\genericsym{0}{0}.\assocsym{P}{B}.\protosym{R}&\Rightarrow\genericsym{0}{0}.\assocsym{P}{B}\tag{14}
\end{align*}
\begin{listing}\caption{Protocol with concrete type requirements}\label{propmaplisting}
\begin{Verbatim}
class Cache<Key> {}

protocol R {}
protocol Q : R {}

protocol P {
  associatedtype A : AnyObject
  associatedtype B : Cache<A>
  associatedtype C where C == Array<A>
}
\end{Verbatim}
\end{listing}
The following rewrite rules take the form $V.\Pi\Rightarrow V$, where $\Pi$ is a property-like symbol:
\begin{enumerate}
\item Rule~1, which states that protocol $\proto{Q}$ inherits from $\proto{R}$.
\item Rule~5, which states that the associated type $\namesym{A}$ in protocol $\proto{P}$ satisfies the $\layoutsym{AnyObject}$ layout constraint.
\item Rule~6, which states that the associated type $\namesym{B}$ in protocol $\proto{P}$ must inherit from $\namesym{Cache}\langle\namesym{A}\rangle$.
\item Rule~7, which states that the associated type $\namesym{B}$ in protocol $\proto{P}$ also satisfies the $\layoutsym{\_NativeClass}$ layout constraint.
\item Rule~8, which states that the associated type $\namesym{C}$ in protocol $\proto{P}$ is fixed to the concrete type $\namesym{Array}\langle\namesym{A}\rangle$.
\item Rule~9, which states that the generic parameter $\genericsym{0}{0}$ conforms to $\proto{P}$.
\item Rule~10, which states that the type parameter $\genericsym{0}{0}.\namesym{B}$ conforms to $\proto{Q}$.
\item Rule~14, which states that the type parameter $\genericsym{0}{0}.\namesym{B}$ conforms to $\proto{R}$.

This final rule was added by the completion procedure to resolve the overlap of Rule~10 and Rule~1 on the term $\genericsym{0}{0}.\assocsym{P}{B}.\protosym{Q}.\protosym{R}$.
\end{enumerate}
When constructing the property map, sorting the rules by the length of their left-hand sides guarantees that Rule~6 and Rule~7 are processed before Rule~10 and Rule~14. This is important because the subject type of Rule~6 and Rule~7 ($\assocsym{P}{B}$) is a suffix of the subject type of Rule~10 and Rule~14 ($\genericsym{0}{0}.\assocsym{P}{B}$), which means that the property map entries for both Rule~10 and Rule~14 inherit the superclass and layout requirements from Rule~6 and Rule~7. Furthermore, the substitution $\sigma_0:=\assocsym{P}{A}$ in the superclass requirement is adjusted by prepending the prefix $\genericsym{0}{0}$.

The property map constructed by Algorithm~\ref{propmapconsalgo} from the above rules is shown in Table~\ref{propmapexample3table}.
In the next section, you will see how this example property map can solve generic signature queries.
\begin{table}\caption{Property map constructed from Example~\ref{propmapexample3}}\label{propmapexample3table}
\begin{center}
\begin{tabular}{|l|l|}
\hline
Key&Values\\
\hline
\hline
$\protosym{Q}$&$\protosym{R}$\\
$\assocsym{P}{A}$&$\layoutsym{AnyObject}$\\
$\assocsym{P}{B}$&$\supersym{\namesym{Cache}\langle\sigma_0\rangle;\,\sigma_0:=\assocsym{P}{A}}$, $\layoutsym{\_NativeClass}$\\
$\assocsym{P}{C}$&$\concretesym{\namesym{Array}\langle\sigma_0\rangle;\,\sigma_0:=\assocsym{P}{A}}$\\
$\genericsym{0}{0}$&$\protosym{P}$\\
$\genericsym{0}{0}.\assocsym{P}{B}$&$\protosym{Q}$, $\protosym{R}$, $\supersym{\namesym{Cache}\langle\sigma_0\rangle;\,\sigma_0:=\genericsym{0}{0}.\assocsym{P}{A}}$, $\layoutsym{\_NativeClass}$\\
\hline
\end{tabular}
\end{center}
\end{table}
\end{example}
\section{Generic Signature Queries}\label{implqueries}

Recall the categorization of generic signature queries into predicates, properties and canonical type queries previously shown in Section~\ref{intqueries}. The predicates can be implemented in a straightforward manner using the property map. Each predicate takes a subject type parameter $T$.

Generic signature queries are always posed relative to a generic signature, and not a protocol requirement signature, hence the type parameter $T$ is lowered with the generic signature type lowering map $\Lambda\colon\namesym{Type}\rightarrow\namesym{Term}$ (Definition~\ref{lowertypeinsig}) and not a protocol type lowering map $\Lambda_{\proto{P}}\colon\namesym{Type}\rightarrow\namesym{Term}$ for some protocol $\proto{P}$ (Definition~\ref{lowertypeinproto}).

The first step is to look up the set of properties satisfied by $T$ using Algorithm~\ref{propmaplookupalgo}. Then, each predicate can be tested as follows:
\begin{description}
\item[\texttt{requiresProtocol()}] A type parameter $T$ conforms to a protocol $\proto{P}$ if the property map entry for some suffix of $T$ stores $\protosym{P}$.
\index{layout constraints}
\index{join of layout constraints}
\item[\texttt{requiresClass()}] A type parameter $T$ is represented as a retainable pointer if the property map entry for some suffix of $T$ stores a layout symbol $L$ such that $L\leq\layoutsym{AnyObject}$.

The order relation comes into play because there exist layout symbols which further refine $\layoutsym{AnyObject}$, for example $\layoutsym{\_NativeClass}$, so it is not enough to check for a layout symbol exactly equal to $\layoutsym{AnyObject}$.

Given two layout symbols $A$ and $B$, $A\wedge B$ is the most general symbol that satisfies both $A$ and $B$. The two elements are ordered $A\leq B$ if $A=A\wedge B$. (A small sketch of this ordering follows the note below.)
\item[\texttt{isConcreteType()}] A type parameter $T$ is fixed to a concrete type if the property map entry for some suffix of $T$ stores a concrete type symbol.
\end{description}
\begin{leftbar}
\noindent Layout symbols store a layout constraint as an instance of the \texttt{LayoutConstraint} class. The join operation used in the implementation of the \texttt{requiresClass()} query is defined in the \texttt{merge()} method on \texttt{LayoutConstraint}.
\end{leftbar}
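As a toy illustration of this ordering (and emphatically not the real \texttt{LayoutConstraint} lattice, which has many more elements), here is a two-element Swift model where \texttt{meet} plays the role of $\wedge$:
\begin{Verbatim}
// A hypothetical two-element fragment of the layout
// constraint ordering.
enum Layout: Equatable {
    case anyObject    // [layout: AnyObject]
    case nativeClass  // [layout: _NativeClass]

    // The most general layout satisfying both constraints.
    func meet(_ other: Layout) -> Layout {
        if self == other { return self }
        return .nativeClass  // the more refined element
    }

    // A <= B holds exactly when A == A.meet(B).
    static func <= (a: Layout, b: Layout) -> Bool {
        return a.meet(b) == a
    }
}

// requiresClass() checks L <= .anyObject for a stored L;
// both elements of this toy lattice satisfy it.
let isClass = Layout.nativeClass <= .anyObject  // true
\end{Verbatim}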
You've already seen the \texttt{requiresProtocol()} query in Chapter~\ref{protocolsasmonoids}, where it was shown that it can be implemented by checking if $\Lambda(T).\protosym{P}\downarrow\Lambda(T)$. The property map implementation is perhaps slightly more efficient, since it only simplifies a single term and not two. The \texttt{requiresClass()} and \texttt{isConcreteType()} queries, on the other hand, are new, and demonstrate the power of the property map. With the rewrite system alone, they cannot be implemented except by exhaustive enumeration over all known layout and concrete type symbols.

All of the subsequent examples reference the protocol definitions from Example~\ref{propmapexample3}, and the resulting property map shown in Table~\ref{propmapexample3table}.
\begin{example} Consider the canonical type term $\genericsym{0}{0}.\assocsym{P}{B}$. This type parameter conforms to $\proto{Q}$ via a requirement stated in the generic signature, and also to $\proto{R}$, because $\proto{Q}$ inherits from $\proto{R}$. The \texttt{requiresProtocol()} query will confirm these facts, because the property map entry for $\genericsym{0}{0}.\assocsym{P}{B}$ contains the protocol symbols $\protosym{Q}$ and $\protosym{R}$:
\begin{enumerate}
\item The conformance to $\proto{Q}$ is witnessed by the rewrite rule $\genericsym{0}{0}.\assocsym{P}{B}.\protosym{Q}\Rightarrow \genericsym{0}{0}.\assocsym{P}{B}$, which is Rule~10 in Example~\ref{propmapexample3}. This is the initial rule generated by the conformance requirement.
\item The conformance to $\proto{R}$ is witnessed by the rewrite rule $\genericsym{0}{0}.\assocsym{P}{B}.\protosym{R}\Rightarrow \genericsym{0}{0}.\assocsym{P}{B}$, which is Rule~14 in Example~\ref{propmapexample3}. This rule was added by the completion procedure to resolve the overlap between Rule~10 above, which states that $\genericsym{0}{0}.\assocsym{P}{B}$ conforms to $\proto{Q}$, and Rule~1, which states that anything conforming to $\proto{Q}$ also conforms to $\proto{R}$.
\end{enumerate}
\end{example}
\begin{example} This example shows the \texttt{requiresClass()} query on two different type terms.

First, consider the canonical type term $\genericsym{0}{0}.\assocsym{P}{A}$. The query returns true, because the longest suffix with an entry in the property map is $\assocsym{P}{A}$, which stores a single symbol, $\layoutsym{AnyObject}$. The corresponding rewrite rule is $\assocsym{P}{A}.\layoutsym{AnyObject}\Rightarrow\assocsym{P}{A}$, or Rule~5 in Example~\ref{propmapexample3}. This is the initial rule generated by the $\namesym{A}\colon\namesym{AnyObject}$ layout requirement in protocol $\proto{P}$.

Now, consider the canonical type term $\genericsym{0}{0}.\assocsym{P}{B}$. The query also returns true. Here, the longest suffix is the entire type term, because the property map stores an entry for $\genericsym{0}{0}.\assocsym{P}{B}$ with layout symbol $\layoutsym{\_NativeClass}$. This symbol satisfies
\[\layoutsym{\_NativeClass}\leq\layoutsym{AnyObject},\]
because
\[\layoutsym{\_NativeClass}\wedge \layoutsym{AnyObject}=\layoutsym{\_NativeClass}.\]
\end{example}
\begin{example}
The final predicate is the \texttt{isConcreteType()} query. Consider the canonical type term $\genericsym{0}{0}.\assocsym{P}{C}$. The longest suffix that appears in the property map is $\assocsym{P}{C}$. This entry stores the concrete type symbol $\concretesym{\namesym{Array}\langle\sigma_0\rangle;\,\sigma_0:=\assocsym{P}{A}}$, and so the query returns true.
\end{example}

Next, I will describe the generic signature queries that return properties of type parameters, but this requires building up a little more machinery first. The first step is to define the invariants satisfied by the list of protocols returned by the \texttt{getRequiredProtocols()} query.
\begin{definition}\label{minimalproto} A list of protocols $\{\proto{P}_i\}$ is \emph{minimal} if no protocol inherits from any other protocol in the list; that is, there do not exist $i, j\in\mathbb{N}$ such that $i\neq j$ and $\proto{P}_i$ inherits from $\proto{P}_j$. The list is \emph{canonical} if it is sorted in canonical protocol order.

A minimal canonical list of protocols can be constructed from an arbitrary list of protocols $P=\{\proto{P}_1,\ldots,\proto{P}_n\}$ via the following algorithm:
\begin{enumerate}
\item Let $G=(V, E)$ be the directed acyclic graph\footnote{Invalid code seen by the type checker can have circular protocol inheritance. The ``request evaluator'' framework in the compiler handles cycle breaking in a principled manner, so the requirement machine does not have to deal with this explicitly.} where $V$ is the set of all protocols, and an edge in $E$ connects $\proto{P}\in V$ to $\proto{Q}\in V$ if $\proto{P}$ inherits from $\proto{Q}$.
\item Construct the subgraph $H\subseteq G$ generated by $P$.
\item Compute the set of root nodes of $H$ (that is, the nodes with no incoming edges, or zero in-degree) to obtain the minimal set of protocols of $P$.
\item Sort the elements of this set using the canonical protocol order (Definition~\ref{canonicalprotocol}) to obtain the final minimal canonical list of protocols from $P$.
\end{enumerate}
\end{definition}
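Here is a minimal Swift sketch of this construction, assuming a hypothetical transitive \texttt{inheritsFrom} predicate and a \texttt{canonicalOrder} comparator standing in for Definition~\ref{canonicalprotocol}:
\begin{Verbatim}
// Sketch of the minimal canonical list construction. The
// inheritsFrom predicate (assumed transitive) and the
// canonicalOrder comparator are hypothetical stand-ins.
func minimalCanonical(
    _ protocols: [String],
    inheritsFrom: (String, String) -> Bool,
    canonicalOrder: (String, String) -> Bool
) -> [String] {
    // A protocol survives unless some other protocol in the
    // list inherits from it; the survivors are the roots of
    // the inheritance subgraph.
    let minimal = protocols.filter { p in
        !protocols.contains { q in q != p && inheritsFrom(q, p) }
    }
    // Sort the roots into the canonical protocol order.
    return minimal.sorted(by: canonicalOrder)
}

// Example: Q inherits from R, so [Q, R] minimizes to [Q].
let result = minimalCanonical(
    ["Q", "R"],
    inheritsFrom: { $0 == "Q" && $1 == "R" },
    canonicalOrder: { $0 < $1 })
\end{Verbatim}
The example at the end of the sketch matches the \texttt{getRequiredProtocols()} example later in this section, where $\{\protosym{Q},\protosym{R}\}$ minimizes to $\{\protosym{Q}\}$.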
The second step is to define a mapping from type terms to Swift type parameters, for use by the \texttt{getSuperclassBound()} and \texttt{getConcreteType()} queries when mapping substitutions back to Swift types.
\begin{algorithm} The type lifting map $\Lambda^{-1}:\namesym{Term}\rightarrow\namesym{Type}$ takes as input a type term $T$ and maps it back to a Swift type parameter. This is the inverse of the type lowering map $\Lambda\colon\namesym{Type}\rightarrow\namesym{Term}$ from Algorithm~\ref{lowertypeinproto}.
\begin{enumerate}
\item Initialize $S$ to an empty type parameter.
\item The first symbol of $T$ must be a generic parameter symbol $\genericsym{d}{i}$, which is mapped to a \texttt{GenericTypeParamType} with depth $d$ and index $i$. Set $S$ to this type.
\item Any subsequent symbol in $T$ must be some associated type symbol $[\proto{P}_1\cap\ldots\cap\proto{P}_n\colon\namesym{A}]$. This symbol is mapped to a \texttt{DependentMemberType} whose base type is the previous value of $S$, and the associated type declaration is found as follows:
\begin{enumerate}
\item For each $\proto{P}_i$, either $\proto{P}_i$ directly defines an associated type named $\namesym{A}$, or $\namesym{A}$ was declared in some protocol $\proto{Q}$ such that $\proto{P}_i$ inherits from $\proto{Q}$. In both cases, collect all associated type declarations in a list.
\item If any associated type found above is a non-root associated type declaration, replace it with its anchor (Definition~\ref{rootassoctypedef}).
\item Pick the associated type declaration from the above set that is minimal with respect to the associated type order (Definition~\ref{canonicaltypeorder}).
\end{enumerate}
\end{enumerate}
\end{algorithm}
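The overall shape of the lifting map can be sketched as follows. The \texttt{ParamType} enum is a hypothetical stand-in for the compiler's \texttt{GenericTypeParamType} and \texttt{DependentMemberType} nodes, and the associated type declaration lookup from Step~3 is elided entirely:
\begin{Verbatim}
// Sketch of the type lifting map: a term lifts to a generic
// parameter wrapped in one dependent member type per
// associated type symbol.
indirect enum ParamType {
    case genericParam(depth: Int, index: Int)
    case dependentMember(base: ParamType, name: String)
}

func lift(depth: Int, index: Int,
          memberNames: [String]) -> ParamType {
    // The first symbol must be a generic parameter symbol.
    var result = ParamType.genericParam(depth: depth,
                                        index: index)
    // Each subsequent associated type symbol wraps the
    // previous value in a dependent member type.
    for name in memberNames {
        result = .dependentMember(base: result, name: name)
    }
    return result
}

// Example: lifting the term for tau_0_0.[P:B] produces the
// type parameter tau_0_0.B.
let lifted = lift(depth: 0, index: 0, memberNames: ["B"])
\end{Verbatim}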
The third and final step before the queries themselves can be presented is the algorithm for mapping a superclass or concrete type symbol back to a Swift type. This algorithm uses the above type lifting map on the type parameters appearing in substitutions.
\begin{algorithm}[Constructing a concrete type from a symbol]\label{symboltotype} As input, this algorithm takes a superclass symbol $\supersym{\namesym{T};\,\sigma_0,\ldots,\sigma_n}$ or concrete type symbol $\concretesym{\namesym{T};\,\sigma_0,\ldots,\sigma_n}$. This is the inverse of Algorithm~\ref{concretesymbolcons}.
\begin{enumerate}
\item Let $\pi_0,\ldots,\pi_n$ be the set of positions such that $\namesym{T}|_{\pi_i}$ is a \texttt{GenericTypeParamType} with index $i$.
\item For each $i$, replace $\namesym{T}|_{\pi_i}$ with $\Lambda^{-1}(\sigma_i)$, the type parameter obtained by applying the lifting map to $\sigma_i$.
\item Return the final value of type $\namesym{T}$ after performing all substitutions above.
\end{enumerate}
\end{algorithm}
Now, the time has finally come to describe the implementation of the four property queries.
\begin{description}
\item[\texttt{getRequiredProtocols()}] The list of protocol requirements satisfied by a type parameter $T$ is recorded in the form of protocol symbols in the property map. This list is transformed into a minimal canonical list of protocols using Definition~\ref{minimalproto}.
\index{layout constraints}
\index{join of layout constraints}
\item[\texttt{getLayoutConstraint()}] A type parameter $T$ might be subject to multiple layout constraints, in which case the property map entry will store a list of layout constraints $L_1,\ldots,L_n$. This query needs to compute their join, which is the largest layout constraint that simultaneously satisfies all of them:
\[L_1\wedge\cdots\wedge L_n.\]
Some layout constraints are disjoint on concrete types, meaning their join is the uninhabited ``bottom'' layout constraint, which precedes all other layout constraints in the partial order. In this case, the original generic signature is said to have conflicting requirements. While such a signature does not violate the requirement machine's invariants, it cannot be satisfied by any valid set of concrete substitutions. Detecting and diagnosing conflicting requirements is discussed later.

\item[\texttt{getSuperclassBound()}] If the type parameter $T$ does not satisfy any superclass symbols, this query returns the empty type.
Otherwise, $T$ can be written as $T=UV$, where $V$ is the longest suffix of $T$ present in the property map. Let $\supersym{\namesym{C};\,\sigma_0,\ldots,\sigma_n}$ be a superclass symbol in $P[V]$.

The first step is to adjust the symbol by prepending $U$ to each substitution $\sigma_i$, to produce the superclass symbol
\[\supersym{\namesym{C};\,U\sigma_0,\ldots,U\sigma_n}.\]
Then, Algorithm~\ref{symboltotype} can be applied to convert the symbol to a Swift type.
\item[\texttt{getConcreteType()}] This query is almost identical to \texttt{getSuperclassBound()}; you can replace ``superclass symbol'' with ``concrete type symbol'' above.
\end{description}
Note how the \texttt{getLayoutConstraint()} query deals with a multiplicity of layout symbols by computing their join, whereas the \texttt{getSuperclassBound()} and \texttt{getConcreteType()} queries just arbitrarily pick one superclass or concrete type symbol. Indeed, in Section~\ref{moreconcretetypes} you will see that picking one symbol is not always sufficient: a complete implementation must perform joins on superclass and concrete type symbols as well. Furthermore, a situation analogous to the uninhabited layout constraint can arise, where a type parameter is subject to conflicting superclass or concrete type requirements. For now though, the current formulation is sufficient.
Now, let's look at some examples of the four property queries. Once again, these examples use the property map shown in Table~\ref{propmapexample3table}.
\begin{example}
Consider the computation of the \texttt{getRequiredProtocols()} query on the canonical type term $\genericsym{0}{0}.\assocsym{P}{B}$. The property map stores the protocol symbols $\{\protosym{Q},\protosym{R}\}$, but $\proto{Q}$ inherits from $\proto{R}$, so the minimal canonical list of protocols is just $\{\protosym{Q}\}$.
\end{example}
\begin{example}
Consider the computation of the \texttt{getSuperclassBound()} query on the canonical type term $\genericsym{0}{0}.\assocsym{P}{B}$.
The superclass symbol $\supersym{\namesym{Cache}\langle\sigma_0\rangle;\,\sigma_0:=\genericsym{0}{0}.\assocsym{P}{A}}$ does not need to be adjusted by prepending a prefix to each substitution term, because the property map entry is associated with the entire term $\genericsym{0}{0}.\assocsym{P}{B}$.

Applying Algorithm~\ref{symboltotype} to the superclass symbol produces the Swift type:
\[\namesym{Cache}\langle\genericsym{0}{0}.\namesym{A}\rangle.\]
\end{example}
\begin{example}
Consider the computation of the \texttt{getConcreteType()} query on the canonical type term $\genericsym{0}{0}.\assocsym{P}{C}$. Here, the property map entry is associated with the suffix $\assocsym{P}{C}$, which means an adjustment must be performed on the concrete type symbol $\concretesym{\namesym{Array}\langle\sigma_0\rangle;\,\sigma_0:=\assocsym{P}{A}}$. The adjusted symbol is
\[\concretesym{\namesym{Array}\langle\sigma_0\rangle;\,\sigma_0:=\genericsym{0}{0}.\assocsym{P}{A}}.\]
Applying Algorithm~\ref{symboltotype} to the adjusted concrete type symbol produces the Swift type:
\[\namesym{Array}\langle\genericsym{0}{0}.\namesym{A}\rangle.\]
\end{example}
\section{Canonical Types}

\index{canonical anchor}
\index{concrete type requirement}
The canonical type queries pull everything together.
\begin{description}
\item[\texttt{areSameTypeParametersInContext()}] Two type parameters $T$ and $U$ are equivalent if $\Lambda(T)\downarrow\Lambda(U)$, which is true if and only if $\Lambda(T){\downarrow}=\Lambda(U){\downarrow}$.

Note that this query doesn't do anything useful if either $T$ or $U$ is fixed to a concrete type.

This is also the one and only generic signature query that is solved with the rewrite system alone, and not the property map, but it is presented here for completeness.
\item[\texttt{isCanonicalTypeInContext()}] This query performs a series of checks on a type $T$; if any of them fail, then $T$ is not canonical and false is returned.

There are two cases to consider; $T$ is either a type parameter, or a concrete type (which might possibly contain type parameters in nested positions):
\begin{enumerate}
\item If $T$ is a type parameter, then $T$ is a canonical type if it is both a canonical anchor, and not fixed to a concrete type.
\begin{enumerate}
\item
Peculiarities of inherited and merged associated types mean that a type $T$ can be a canonical anchor at the \emph{type} level, even if $\Lambda(T)$ is not a canonical \emph{term}. However, there is a weaker condition that relates the two notions of canonical-ness: $T$ is a canonical anchor if and only if applying the type lowering map to $T$, reducing the result, and then finally applying the type lifting map produces $T$:
\[\Lambda^{-1}(\Lambda(T){\downarrow})=T.\]
\item
Once a type parameter $T$ is known to be a canonical anchor, checking that the \texttt{isConcreteType()} query returns false is enough to determine that it is a canonical type parameter.
\end{enumerate}
\item Otherwise, $T$ is a concrete type. Let $\pi_0,\ldots,\pi_n$ be the set of positions of $T$ such that $T|_{\pi_i}$ is a type parameter. Then $T$ is canonical if and only if all projections $T|_{\pi_i}$ are canonical type parameters.
\end{enumerate}

\item[\texttt{getCanonicalTypeInContext()}] Once again, $T$ is either a type parameter, or a concrete type. The type parameter case is described first, and the concrete type case is implemented recursively by considering all nested positions that contain type parameters. (A sketch of the recursion follows this list.)
\begin{enumerate}
\item
If $T$ is a type parameter, the \texttt{isConcreteType()} query will determine if $T$ is fixed to a concrete type or not.
\begin{enumerate}
\item If $T$ is fixed to some concrete type $T'$, the canonical type of $T$ is equal to the canonical type of $T'$. This can be computed by recursively calling \texttt{getCanonicalTypeInContext()} on the result of \texttt{getConcreteType()}.
\item Otherwise, $T$ is not fixed to a concrete type, which means that the canonical type of $T$ is the canonical anchor of $T$. Let $\Lambda(T)$ be the type term corresponding to $T$, and let $\Lambda(T){\downarrow}$ be the canonical form of the term $\Lambda(T)$. The canonical anchor of $T$ is $\Lambda^{-1}(\Lambda(T){\downarrow})$.
\end{enumerate}

\item
Otherwise, $T$ is a concrete type. Let $\pi_0,\ldots,\pi_n$ be the set of positions of $T$ such that $T|_{\pi_i}$ is a type parameter. The canonical type of $T$ is the type obtained by substituting the type parameter at each position $\pi_i$ with the result of a recursive call to \texttt{getCanonicalTypeInContext()} on $T|_{\pi_i}$.
\end{enumerate}
\end{description}
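As promised, here is a rough sketch of the \texttt{getCanonicalTypeInContext()} recursion over a toy type tree. The \texttt{concreteType} and \texttt{canonicalAnchor} closures are hypothetical hooks standing in for the \texttt{getConcreteType()} query and the $\Lambda^{-1}(\Lambda(T){\downarrow})$ round trip:
\begin{Verbatim}
// Toy type tree: a type is either a type parameter or a
// nominal type with children.
indirect enum Ty {
    case parameter(String)
    case nominal(String, [Ty])
}

func canonicalize(_ type: Ty,
                  concreteType: (String) -> Ty?,
                  canonicalAnchor: (String) -> String) -> Ty {
    switch type {
    case .parameter(let name):
        if let concrete = concreteType(name) {
            // Fixed to a concrete type: canonicalize that
            // type instead.
            return canonicalize(concrete,
                                concreteType: concreteType,
                                canonicalAnchor: canonicalAnchor)
        }
        // Not concrete: the canonical anchor is the answer.
        return .parameter(canonicalAnchor(name))
    case .nominal(let name, let children):
        // Recurse into nested positions containing type
        // parameters.
        return .nominal(name, children.map {
            canonicalize($0, concreteType: concreteType,
                         canonicalAnchor: canonicalAnchor)
        })
    }
}
\end{Verbatim}
With hooks modeled on the second example below, this sketch should canonicalize the type parameter fixed to $\namesym{Array}\langle\genericsym{0}{0}.\namesym{B}\rangle$ all the way down to $\namesym{Array}\langle\namesym{Int}\rangle$.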
\begin{example} This example shows how protocol inheritance leads to a situation where a canonical anchor $T$ lowers to a non-canonical term $\Lambda(T)$. Consider the generic signature $\gensig{\genericsym{0}{0}}{\genericsym{0}{0}\colon\proto{P}}$ with the protocol definitions below:
\begin{Verbatim}
protocol Q {
  associatedtype A
}

protocol P : Q {}
\end{Verbatim}
The rewrite system has two associated type introduction rules, one for the declaration of $\namesym{A}$ in $\proto{Q}$, and another for the inherited type $\namesym{A}$ in $\proto{P}$:
\begin{align}
\protosym{Q}.\namesym{A}&\Rightarrow\assocsym{Q}{A}\tag{1}\\
\protosym{P}.\assocsym{Q}{A}&\Rightarrow \assocsym{P}{A}\tag{2}
\end{align}
The protocol inheritance relationship also introduces a rewrite rule:
\begin{align}
\protosym{P}.\protosym{Q}&\Rightarrow\protosym{P}\tag{3}
\end{align}
Finally, the conformance requirement in the generic signature adds the rewrite rule:
\begin{align}
\genericsym{0}{0}.\protosym{P}&\Rightarrow\genericsym{0}{0}\tag{4}
\end{align}
Resolving critical pairs adds a few additional rules:
\begin{align}
\protosym{P}.\namesym{A}&\Rightarrow\assocsym{P}{A}\tag{5}\\
\genericsym{0}{0}.\protosym{Q}&\Rightarrow\genericsym{0}{0}\tag{6}\\
\genericsym{0}{0}.\assocsym{Q}{A}&\Rightarrow\genericsym{0}{0}.\assocsym{P}{A}\tag{7}\\
\genericsym{0}{0}.\namesym{A}&\Rightarrow\genericsym{0}{0}.\assocsym{P}{A}\tag{8}
\end{align}
Now consider the type parameter $T:=\genericsym{0}{0}.\namesym{A}$. This type parameter is a canonical anchor by Definition~\ref{canonicalanchor}. Since Swift type parameters always point to an actual associated type declaration, the type term $\Lambda(T)$ is $\genericsym{0}{0}.\assocsym{Q}{A}$, and not $\genericsym{0}{0}.\assocsym{P}{A}$. However, $\genericsym{0}{0}.\assocsym{Q}{A}$ is not canonical as a term, and reduces to $\genericsym{0}{0}.\assocsym{P}{A}$ via Rule~7; therefore $T$ is a canonical anchor and yet $\Lambda(T)$ is not a canonical term.

Essentially, the term $\genericsym{0}{0}.\assocsym{P}{A}$ is ``more canonical'' than any type parameter that can be output by $\Lambda:\namesym{Type}\rightarrow\namesym{Term}$. Protocol $\proto{P}$ does not actually define an associated type named $\namesym{A}$, therefore $\Lambda$ can only construct terms containing the symbol $\assocsym{Q}{A}$, and yet $\assocsym{P}{A}<\assocsym{Q}{A}$.

The key invariant here, though, is that $\Lambda^{-1}(\genericsym{0}{0}.\assocsym{Q}{A})=\Lambda^{-1}(\genericsym{0}{0}.\assocsym{P}{A})=T$, or in other words:
\[\Lambda^{-1}(\Lambda(T){\downarrow})=T.\]

A similar situation arises with merged associated type symbols, which are also smaller than any ``real'' associated type symbol output by $\Lambda$. Once again, you can have a canonical type parameter $T$ whose lowered type term $\Lambda(T)$ is not canonical, but just as before, $\Lambda^{-1}$ will map both $\Lambda(T)$ and its canonical form $\Lambda(T){\downarrow}$ back to $T$, because the only possible reduction path from $\Lambda(T)$ to $\Lambda(T){\downarrow}$ introduces merged associated type symbols, and these are invariant under the type lifting map.
\end{example}
\begin{example} \label{concretecanonicalpropertymapex}
The next example demonstrates canonical type computation in the presence of concrete types. Table~\ref{concretecanonicalpropertymap} shows the property map built from the generic signature:
\[\gensig{\genericsym{0}{0}}{\genericsym{0}{0}\colon\proto{P},\,\genericsym{0}{0}.\namesym{B}==\namesym{Int}},\]
together with the below protocol definition:
\begin{Verbatim}
protocol P {
  associatedtype A where A == Array<B>
  associatedtype B
}
\end{Verbatim}
\begin{table}\caption{Property map from Example~\ref{concretecanonicalpropertymapex}}\label{concretecanonicalpropertymap}
\begin{center}
\begin{tabular}{|l|l|}
\hline
Key&Values\\
\hline
\hline
$\assocsym{P}{A}$&$\concretesym{\namesym{Array}\langle\sigma_0\rangle;\,\sigma_0:=\assocsym{P}{B}}$\\
$\genericsym{0}{0}$&$\protosym{P}$\\
$\genericsym{0}{0}.\assocsym{P}{B}$&$\concretesym{\namesym{Int}}$\\
\hline
\end{tabular}
\end{center}
\end{table}

Consider the type parameter $T:=\genericsym{0}{0}.\namesym{A}$. This type parameter is a canonical anchor because $\Lambda(T)=\genericsym{0}{0}.\assocsym{P}{A}$ is a canonical term; however, $T$ is still not a canonical type, because it is fixed to a concrete type. Therefore \texttt{isCanonicalTypeInContext()} returns false on $T$.

The \texttt{getConcreteType()} query on $T$ finds that the longest suffix of $\Lambda(T)$ with a property map entry is $\assocsym{P}{A}$, and the corresponding prefix is $\genericsym{0}{0}$. This property map entry stores the concrete type symbol
\[\concretesym{\namesym{Array}\langle\sigma_0\rangle;\,\sigma_0:=\assocsym{P}{B}}.\]
Prepending $\genericsym{0}{0}$ to the substitution term $\sigma_0$ produces the adjusted concrete type symbol:
\[\concretesym{\namesym{Array}\langle\sigma_0\rangle;\,\sigma_0:=\genericsym{0}{0}.\assocsym{P}{B}}.\]
Converting this symbol to a Swift type yields $\namesym{Array}\langle\genericsym{0}{0}.\namesym{B}\rangle$. However, this is still not a canonical type, because the type parameter $\genericsym{0}{0}.\namesym{B}$ appearing in nested position is not canonical. A recursive application of \texttt{getCanonicalTypeInContext()} on the type parameter $\genericsym{0}{0}.\namesym{B}$ returns $\namesym{Int}$. Therefore, the original call to \texttt{getCanonicalTypeInContext()} on $T$ returns
\[\namesym{Array}\langle\namesym{Int}\rangle.\]
\end{example}
\section{More Concrete Types}\label{moreconcretetypes}

\bibliographystyle{IEEEtran}
\bibliography{RequirementMachine}

\printindex

\end{document}