There's still plenty more work to do here for
pattern diagnostics, including introducing a
bunch of new locator elements, and handling things
like argument list mismatches. This at least lets
us fall back to a generic mismatch diagnostic.
This is wrong because there's nowhere to put any
conversion that is introduced, meaning that we'll
likely crash in SILGen. Change the constraint to
equality, which matches what we do outside of the
constraint system.
rdar://107709341
Previously we would wait until CSApply, which
would trigger the type-checking of ExprPatterns in
`coercePatternToType`. This caused a number of
bugs, and hampered solver-based completion, which
does not run CSApply. Instead, form a conjunction
of all the ExprPatterns present, which preserves
some of the previous isolation behavior (though
does not provide complete isolation).
We can then modify `coercePatternToType` to accept
a closure, which allows the solver to take over
rewriting the ExprPatterns it has already solved.
This then sets the stage for the complete removal
of `coercePatternToType` and for doing all pattern
type-checking in the solver.
Instead of walking the single ASTNode from the
target, walk all AST nodes associated with the
target to find the completion expr. This is needed
to find the completion expr in a pattern for an
initialization target.
Just the $*T -> $*@moveOnly T variant for addresses. Unlike the object version,
this acts like a cast rather than something that provides semantics from the
frontend to the optimizer.
The reason I am using a different instruction for addresses and objects here
is that the object checker doesn't have to deal with things like initialization.
drop_deinit only exists in ownership SIL. Remove IRGen support.
A drop_deinit can only ever be destroyed or destructured.
A destructure of a struct-with-deinit requires a drop_deinit operand.
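As a rough source-level illustration (the lowering described in the
comment is my assumption, and the type is invented): `discard self` is
what introduces the drop_deinit of `self`, after which the fields can
be taken apart without the struct's deinit running:

    struct Token: ~Copyable {
      var id: Int
      deinit { print("destroying token \(id)") }

      consuming func take() -> Int {
        let value = id
        discard self   // assumed to lower to drop_deinit; deinit does not run
        return value
      }
    }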
The value `self` is mutable (i.e., var-bound) in
a `consuming` method. Since you're allowed to
reinitialize a var after consuming it, you were
also naturally allowed to reinitialize `self`
after `discard self`. But that capability was
not intended; after you discard `self`, you shouldn't
be reinitializing it, as that's probably a mistake.
This change makes it an error to reinitialize `self`
on any path reachable from a `discard self` statement.
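A minimal sketch of the newly rejected pattern (the type and the
diagnostic wording are illustrative):

    struct FileDescriptor: ~Copyable {
      var fd: Int32
      deinit { /* close(fd) */ }

      consuming func giveUp() {
        discard self                    // suppress the deinit
        self = FileDescriptor(fd: -1)   // now an error: reinitialization reachable from `discard self`
      }
    }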
rdar://106098163
Deallocate the dynamic allocas done for metadata/wtable packs. The
corresponding stackrestore calls are inserted on the dominance frontier and
then stack nesting is fixed up. That is achieved as follows:
Added a new IRGen pass, PackMetadataMarkerInserter, which
- determines whether there are any instructions that might allocate on-stack
  pack metadata
- if there aren't, makes no changes
- if there are, inserts alloc_pack_metadata just before instructions that
  could allocate pack metadata on the stack and dealloc_pack_metadata on the
  dominance frontier of those instructions
- fixes up stack nesting
During IRGen, the allocations done for metadata/wtable packs are
recorded, and IRGenSILFunction associates them with the instruction
being lowered, which must be the instruction immediately after some
alloc_pack_metadata instruction. Then, when visiting the
dealloc_pack_metadata instructions corresponding to that
alloc_pack_metadata, those packs are deallocated.
The new alloc_pack_metadata and dealloc_pack_metadata are inserted as
part of IRGen lowering. The former indicates that the next instruction
might result in on-stack pack metadata being emitted. The latter
indicates that this is the position at which metadata emitted on behalf
of its operand should be cleaned up.
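For context, a hedged example of the kind of Swift code whose lowering
can allocate pack metadata on the stack (the function is invented); it
is calls like this that the new markers bracket:

    func tuplify<each T>(_ values: repeat each T) -> (repeat each T) {
      (repeat each values)
    }

    // Calling with the pack {Int, String, Bool} may require that pack's
    // metadata to be materialized on the stack and later cleaned up.
    let result = tuplify(1, "two", true)
    print(result)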
The algorithm requires that there be no critical edges, but that doesn't
mean it requires ownership. Remove the assert to allow the utility to be
called from code where the caller has manually split edges.
In the fullness of time, no passes should introduce critical edges.
The Swift backtracer's frame pointer unwinder cannot work on Linux
without this change, because the compiler omits the frame pointer from
the function in libSwift_Backtracing that actually captures the stack.
rdar://110260855
As part of SE-390, you're required to do one of the following:
- write `consume self`
- pass `self` as an argument to a `consuming` parameter of a function
- write `discard self`
before the function ends, in a context that contains a
`discard self` somewhere. This prevents people from accidentally
invoking the deinit due to implicit destruction of `self` before
exiting the function.
rdar://106099027
Pattern matching as currently implemented is consuming, but that's not
necessarily what we want to be the default behavior when borrowing pattern
matching is implemented. When a binding of noncopyable type is pattern-matched,
require it to be annotated with the `consume` operator explicitly. That way,
when we introduce borrowing pattern matching later, we have the option to make
`switch x` do the right thing without subtly changing the behavior of existing
code. rdar://110073984
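A hedged example of the required spelling (the enum is invented):

    enum Job: ~Copyable {
      case run(String)
      case stop
    }

    func handle(_ job: consuming Job) {
      switch consume job {   // explicit `consume` now required for a noncopyable binding
      case .run(let command): print("running \(command)")
      case .stop:             print("stopping")
      }
    }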
Lazy member loading has been in use and the default for several years
now. However, the lazy loading was disabled for any type whose primary
definition was parsed even though some of its extensions could have
been deserialized, e.g., from a Clang module. Moreover, the non-lazy
path walked all of the extensions of such a type for all member name
lookup operations. Faced with a large number of extensions of the same
type (in my example, 6,000), this walk of the extension list could
dominate type-checking time.
Eliminate all effects of the `-disable-named-lazy-member-loading`
flag, and always use the "lazy" path, which effectively does no work
for parsed type definitions and extensions thereof. The example with
6,000 extensions of a single type goes from type checking in 6 seconds
down to type checking in 0.6 seconds, and name lookup completely
disappears from the profiling trace.
The deleted tests relied on the flag, which is now inert. They weren't
providing much value on their own nowadays, and it's better to have the
simpler (and more efficient) implementation of member name lookup be
the only one.