The AST walker calling getGenericParams on a protocol decl would eventually
call computeNominalType. computeNominalType checks that the protocol (as a
nominal type decl) appears within a type context, and if so, asks for the
SelfNominalTypeDecl of that context. If the context is an extension, that
ends up asking for its extended nominal, which is a problem if we're walking
a pre-typechecked AST – we hit the assertion "Extension must have already been
bound (by bindExtensions)").
Unlike other nominal types, it's not valid Swift for protocols to appear within
type contexts, so exclude protocol decls from taking this code path. This
results in us providing Type() as the parent type of the produced NominalType,
just like we do for protocols inside functions or other invalid contexts.
Resolves <rdar://problem/58549036>
This commit introduces a request to type-check a
default argument expression and splits
`getDefaultValue` into 2 accessors:
- `getStructuralDefaultExpr` which retrieves the
potentially un-type-checked default argument
expression.
- `getTypeCheckedDefaultExpr` which retrieves a
fully type-checked default argument expression.
In addition, this commit adds `hasDefaultExpr`,
which allows checking for a default expr without
kicking off a request.
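For illustration (a sketch; `greet` is a made-up function, the accessor names are the ones above):
```
// The expression `"guest".uppercased()` is the default argument expression:
// getStructuralDefaultExpr() hands it back as parsed, possibly before type
// checking, while getTypeCheckedDefaultExpr() runs the new request and returns
// the fully type-checked form. hasDefaultExpr() just answers whether such an
// expression exists without triggering the request.
func greet(name: String = "guest".uppercased()) {
    print("Hello, \(name)")
}
```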
Switch most callers to explicit indices. The exceptions are things that need to manipulate the parsed output directly, including the Parser and components of the ASTScope. These are included as friend class exceptions.
TypeCheckPattern used to splat the interface type into this, and
different parts of the compiler would check one or the other. There is
now one source of truth: The interface type. The type repr is now just
a signal that the user has written an explicit type annotation on
a parameter. For variables, we will eventually be able to just grab
this information from the parent pattern.
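Roughly, the distinction being tracked (a small sketch; the function and closure are illustrative only):
```
// The parameter `x` carries a TypeRepr because the user wrote `: Int`; its
// interface type (Int) is the single source of truth either way.
func double(x: Int) -> Int { x * 2 }

// The closure parameter has no written annotation, so no TypeRepr; its
// interface type still comes from the one source of truth, here supplied by
// the contextual type.
let halve: (Int) -> Int = { x in x / 2 }
```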
Make getRawValueExpr() return a checked value.
This entails a strange kind of request that effectively acts like
a cache warmer. In order to properly check the raw value expression for
a single case, we actually need all the other cases for the
autoincrementing synthesis logic. The strategy is therefore to have the
request act at the level of the parent EnumDecl and check all the values
at once. We also cache at the level of the EnumDecl so the cache
"warms" for all enum elements simultaneously.
The request also abuses TypeResolutionStage to act as an indicator for
how much information to compute. In the minimal case, we will return
a complete accounting of (auto-incremented) raw values. In the maximal
case we will also check and record types and emit diagnostics. The
minimal case is uncached to support repeated evaluation.
Note that computing the interface type of an @objc enum decl *must*
force this request. The enum's raw values are part of the ABI, and we
should not get all the way to IRGen before discovering that we cannot
possibly lay out the enum. In the future, we might want to consider
moving this check earlier or having IRGen tolerate broken cases, but for
now we will maintain the status quo and not have IRGen emit
diagnostics.
The distinction between the type checked raw value expression and the regular raw value expression was never important. Downstream clients were ignoring the type checked form and pulling the text out of the supposed "plain" form. Drop the distinction and simply don't set back into the raw value expr if we don't have to.
Pushing this through naturally enables some cleanup in checkEnumRawValues. Factor out type checking the literal value into a helper on the typechecker and pull a common diagnostic into the decl checker.
Computing the interface type of a typealias used to push validation forward and recompute the interface type on the fly. This was fragile and inconsistent with the way interface types are computed in the rest of the decls. Separate these two notions, and plumb through explicit interface type computations with the same "computeType" idiom. This will better allow us to identify the places where we have to force an interface type computation.
Also remove access to the underlying type loc. It's now just a cache location the underlying type request will use. Push a type repr accessor to the places that need it, and use the underlying type accessor everywhere else. Getting the structural type is still preferred for pre-validated computations.
This required revisiting a number of places where we were - in many cases tacitly - asking the question "does the interface type exist". This enables the removal of validateDeclForNameLookup.
Introduce the notion of "one-way" binding constraints of the form
$T0 one-way bind to $T1
which treats the type variables $T0 and $T1 as independent up until
the point where $T1 simplifies down to a concrete type, at which point
$T0 will be bound to that concrete type. $T0 won't be bound in any
other way, so type information ends up being propagated right-to-left,
only. This allows a constraint system to be broken up into more
components that are solved independently. Specifically, the connected
components algorithm now proceeds as follows:
1. Compute connected components, excluding one-way constraints from
consideration.
2. Compute a directed graph amongst the components using only the
one-way constraints, where an edge A -> B indicates that the type
variables in component A need to be solved before those in component
B.
3. Using the directed graph, compute the set of components that need
to be solved before a given component.
To utilize this, implement a new kind of solver step that handles the
propagation of partial solutions across one-way constraints. This
introduces a new kind of "split" within a connected component, where
we collect each combination of partial solutions for the input
components and (separately) try to solve the constraints in this
component. Any correct solution from any of these attempts will then
be recorded as a (partial) solution for this component.
For example, consider:
let _: Int8 = b ? Builtin.one_way(int8Or16(17)) : Builtin.one_way(int8Or16(42))
where int8Or16 is overloaded with types `(Int8) -> Int8` and
`(Int16) -> Int16`. There are two one-way components (`int8Or16(17)`)
and (`int8Or16(42)`), each of which can produce a value of type `Int8`
or `Int16`. Those two components will be solved independently, and the
partial solutions for each will be fed into the component that
evaluates the ternary operator. There are four ways to attempt that
evaluation:
```
[Int8, Int8]
[Int8, Int16]
[Int16, Int8]
[Int16, Int16]
```
Of these, only two (partial) solutions can work at all, because the
types in the ternary operator need a common supertype:
[Int8, Int8]
[Int16, Int16]
Therefore, these are the partial solutions that will be considered the
results of the component containing the ternary expression. Note that
only one of them meets the final constraint (convertibility to
`Int8`), so the expression is well-formed.
To test this, introduce a new expression builtin `Builtin.one_way(x)` that
introduces a one-way expression constraint binding the result of the
expression 'x'. The builtin is meant to be used for testing purposes,
and the one-way constraint expression itself can be synthesized by the
type checker to introduce one-way constraints later on.
Part of rdar://problem/50150793.
When looking for the SyntaxNode corresponding to a type attribute (like
@escaping), ModelASTWalker would look for one whose range *started* at the type
attribute's source location. It never found one, though, because the
SyntaxNode's range included the @, while the type attribute's source location
pointed to the name *after* the @.
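A sketch of the mismatch, using `@escaping`:
```
// The SyntaxNode for the attribute spans "@escaping", starting at the '@',
// but the type attribute's recorded location points at "escaping" itself,
// so matching on "range starts at the attribute location" never succeeded.
func run(_ body: @escaping () -> Void) {
    body()
}
```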
Accessors logically belong to their storage and can be synthesized
on the fly, so removing them from the members list eliminates one
source of mutability (though not the only one; there are also
witnesses for derived conformances and implicit constructors).
Since a few ASTWalker implementations break in non-trivial ways when
the traversal is changed to visit accessors as children of the storage
rather than peers, I hacked up the ASTWalker to optionally preserve
the old traversal order for now. This is ugly and needs to be cleaned up,
but I want to avoid breaking _too_ much with this commit.
This fixes custom attribute syntax highlighting on parameters and functions
(where function builders can be applied). They weren't being walked in
the function position previously and were walked out of source order in the
parameter position.
It also fixes rename of the property wrapper and function builder type
names that can appear in custom attributes, as well as rename of property
wrapper constructors, which can appear after the type names, e.g.
`@Wrapper(initialValue: 10)`. The index now also records these constructor
occurrences, along with implicit occurrences whenever a constructor is
called via default value assignment, e.g. `@Wrapper var foo = 10`, so that
finding calls/references to the constructor includes these locations.
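A sketch of the code shapes involved, using a hypothetical `Wrapper` type (spelled against the current property-wrapper requirements):
```
@propertyWrapper
struct Wrapper {
    var wrappedValue: Int
    init(wrappedValue: Int) { self.wrappedValue = wrappedValue }
    init(initialValue: Int) { self.wrappedValue = initialValue }
}

struct Host {
    // Constructor reference written after the type name: rename and indexing
    // now cover both `Wrapper` and `init(initialValue:)` here.
    @Wrapper(initialValue: 10) var bar: Int

    // Implicit constructor call through the default value: the index records
    // this occurrence of the wrapper's initializer as well.
    @Wrapper var foo = 10
}
```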
Resolves rdar://problem/49036613
Resolves rdar://problem/50073641
The initializer associated with a lazy property should not be executed
directly, because it is subsumed by code synthesized into the
getter. Generalize the terminology here so we can re-use this path for
property delegate initialization.
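For reference, the behavior being preserved (sketch):
```
class Metrics {
    // This initializer expression is not executed during init(); it is
    // subsumed by the code synthesized into the getter and runs on first
    // access instead.
    lazy var report: String = {
        print("computing report")
        return "ready"
    }()
}

let m = Metrics()   // prints nothing
_ = m.report        // prints "computing report"
```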
To represent the abstracted interface of an opaque type, we need a generic signature that refines
the outer context generic signature with an additional generic parameter representing the underlying
type and its exposed constraints. Opaque types also need to be keyed by their originating decl, so
that we can treat values of the same opaque type as the same. When we check a FuncDecl with an
opaque type specified as its return type, create an OpaqueTypeDecl and associate it with the
originating decl. (A representation for *types* derived from the opaque decl will come next.)
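For example (sketch), a function like this now gets an associated OpaqueTypeDecl whose signature adds one generic parameter constrained to `Collection`, keyed by `makeValues()` as its originating decl:
```
// Every call site sees the same opaque type because the type is keyed by
// this originating declaration, not by the call.
func makeValues() -> some Collection {
    return [1, 2, 3]
}
```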
This is a defensive move to avoid duplicated work and guard against crashes
when a multi-expression closure body or TapExpr has not been type checked yet.
Fixes <rdar://problem/48852402>.
Instead of building ArgumentShuffleExprs, let's just build a TupleExpr,
with explicit representation of collected varargs and default
arguments.
This isn't quite as elegant as it should be, because when re-typechecking,
SanitizeExpr needs to restore the 'old' parameter list by stripping out
the nodes inserted by type checking. However, that hackery is all isolated
in one place and will go away soon.
Note that there's a minor change in the generated SIL. Caller default
arguments (#file, #line, etc) are no longer delayed and are instead
evaluated in their usual argument position. I don't believe this actually
results in an observable change in behavior, but if it turns out to be
a problem, we can pretty easily change it back to the old behavior with a
bit of extra work.
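A sketch of the kind of call affected (the observable behavior should be identical; only where the caller-side defaults are evaluated in the emitted SIL changes):
```
func log(_ message: String, file: StaticString = #file, line: UInt = #line) {
    print("\(file):\(line): \(message)")
}

// #file and #line are still captured at the call site; they are simply
// evaluated in their usual argument positions instead of being delayed.
log("hello")
```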
TupleShuffleExpr could not express the full range of tuple conversions that
were accepted by the constraint solver; in particular, while it could re-order
elements or introduce and eliminate labels, it could not convert the tuple
element types to their supertypes.
This was the source of the annoying "cannot express tuple conversion"
diagnostic.
Replace TupleShuffleExpr with DestructureTupleExpr, which evaluates a
source expression of tuple type and binds its elements to OpaqueValueExprs.
The DestructureTupleExpr's result expression can then produce an arbitrary
value written in terms of these OpaqueValueExprs, as long as each
OpaqueValueExpr is used exactly once.
This is sufficient to express conversions such as (Int, Float) => (Int?, Any),
as well as the various cases that were already supported, such as
(x: Int, y: Float) => (y: Float, x: Int).
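In source form, the conversions in question look like (sketch):
```
let pair: (Int, Float) = (1, 2.0)

// Element types converting to their supertypes, previously rejected with the
// "cannot express tuple conversion" diagnostic:
let widened: (Int?, Any) = pair

// Introducing and eliminating labels, which was already supported:
let labeled: (x: Int, y: Float) = pair
let unlabeled: (Int, Float) = labeled
```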
https://bugs.swift.org/browse/SR-2672, rdar://problem/12340004
Before extending TupleShuffleExpr to represent all tuple
conversions allowed by the constraint solver, remove the
parts of TupleShuffleExpr that are no longer needed, namely
support for default arguments, varargs, and scalar-to-tuple and
tuple-to-scalar conversions.
Right now we use TupleShuffleExpr for two completely different things:
- Tuple conversions, where elements can be re-ordered and labels can be
introduced/eliminated
- Complex argument lists, involving default arguments or varargs
The first case does not allow default arguments or varargs, and the
second case does not allow re-ordering or introduction/elimination
of labels. Furthermore, the first case has a representation limitation
that prevents us from expressing tuple conversions that change the
type of tuple elements.
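Roughly, the two shapes in source form (sketch; `draw` is illustrative only):
```
// 1. A tuple conversion: labels are introduced, no defaults or varargs.
let raw = (1, 2)
let point: (x: Int, y: Int) = raw

// 2. A complex argument list: a defaulted parameter plus varargs, with no
//    re-ordering or label changes.
func draw(title: String = "untitled", _ values: Int...) {
    print(title, values)
}
draw(3, 1, 4)
```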
For all these reasons, it is better if we use two separate Expr kinds
for these purposes. For now, just make an identical copy of
TupleShuffleExpr and call it ArgumentShuffleExpr. In CSApply, use
ArgumentShuffleExpr when forming the arguments to a call, and keep
using TupleShuffleExpr for tuple conversions. Each usage of
TupleShuffleExpr has been audited to see if it should instead look at
ArgumentShuffleExpr.
In subsequent commits I plan on redesigning TupleShuffleExpr to correctly
represent all tuple conversions without any unnecessary baggage.
Longer term, we actually want to change the representation of CallExpr
to directly store an argument list; then instead of a single child
expression that must be a ParenExpr, TupleExpr or ArgumentShuffleExpr,
all CallExprs will have a uniform representation and ArgumentShuffleExpr
will go away altogether. This should reduce memory usage and radically
simplify parts of SILGen.
We've been running doxygen with the autobrief option for a couple of
years now. This makes the \brief markers in our comments
redundant. Since they are a visual distraction and we don't want to
encourage more \brief markers in new code either, this patch removes
them all.
Patch produced by
for i in $(git grep -l '\\brief'); do perl -pi -e 's/\\brief //g' $i & done
`#assert` is a new static assertion statement that will let us write
tests for the new constant evaluation infrastructure that we are working
on. `#assert` works by lowering to a `Builtin.poundAssert` SIL
instruction. The constant evaluation infrastructure will look for these
SIL instructions, const-evaluate their conditions, and emit errors if
the conditions are non-constant or false.
This commit implements parsing, typechecking and SILGen for `#assert`.
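Sketch of the surface syntax (gated behind an experimental frontend flag):
```
// The condition constant-folds to true, so this compiles cleanly; if it
// const-evaluated to false, or weren't constant at all, the compiler would
// emit an error instead.
#assert(1 + 1 == 2)
```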
`\.self` is the final chosen syntax. Implement support for this syntax, and remove the stopgap builtin and `WritableKeyPath._identity` property that were in place before.
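For reference, the identity key path in its final spelling (sketch):
```
var numbers = [1, 2, 3]

// \.self is the identity key path; as a WritableKeyPath it reads or replaces
// the whole value.
let identity: WritableKeyPath<[Int], [Int]> = \.self
numbers[keyPath: identity] = [4, 5, 6]
print(numbers)   // [4, 5, 6]
```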
* [AST] Remove stored TypeLoc from TypedPattern
TypedPattern was only using this TypeLoc as a means to a TypeRepr, which
caused it to store the pattern type twice (through the superclass and through
the TypeLoc itself).
This also fixes a bug where deserializing a TypedPattern didn't store
the type correctly and generally cleans up TypedPattern initialization.
Resolves rdar://44144435
* Address review comments
Parsed declarations would create an untyped 'self' parameter;
synthesized, imported and deserialized declarations would get a
typed one.
In reality the type, if any, depends completely on the properties
of the function in question, so we can just lazily create the
'self' parameter when needed.
If the function already has a type, we give the 'self' parameter its
type right there; otherwise, we check whether a 'self' parameter was
already created when we compute the function's type, and set its type then.
I needed this for materializeForSet remission, but it makes inherited
variadic initializers work, too.
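A sketch of the inherited-variadic-initializer case that now works:
```
class Base {
    let values: [Int]
    init(_ values: Int...) { self.values = values }
}

// Derived declares no designated initializers, so it inherits Base's,
// including the variadic one.
class Derived: Base {}

let d = Derived(1, 2, 3)
print(d.values)   // [1, 2, 3]
```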
I tried to make this a reasonable starting point for a real language
feature. Here's what's still missing:
- syntax
- semantic restrictions to ensure that the expression isn't written in
invalid places or arbitrarily converted
- SILGen support for expansions that aren't the only variadic argument
rdar://16331406
For now, the accessors have been underscored as `_read` and `_modify`.
I'll prepare an evolution proposal for this feature which should allow
us to remove the underscores or, y'know, rename them to `purple` and
`lettuce`.
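As currently spelled, a property adopting the underscored accessors looks like this (sketch):
```
struct Buffer {
    private var storage: [Int] = []

    var elements: [Int] {
        _read { yield storage }
        _modify { yield &storage }
    }
}

var b = Buffer()
b.elements.append(42)   // mutates in place through _modify
```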
`_read` accessors do not make any effort yet to avoid copying the
value being yielded. I'll work on it in follow-up patches.
Opaque accesses to properties and subscripts defined with `_modify`
accessors will use an inefficient `materializeForSet` pattern that
materializes the value to a temporary instead of accessing it in-place.
That will be fixed by migrating to `modify` over `materializeForSet`,
which is next up after the `read` optimizations.
SIL ownership verification doesn't pass yet for the test cases here
because of a general fault in SILGen where borrows can outlive their
borrowed value due to being cleaned up on the general cleanup stack
when the borrowed value is cleaned up on the formal-access stack.
Michael, Andy, and I discussed various ways to fix this, but it seems
clear to me that it's not in any way specific to coroutine accesses.
rdar://35399664