A PackExpansionType is the interface type of the explicit expansion of a
corresponding set of variadic generic parameters.
In most contexts, pack expansions are spelled as single-element tuples with a
single variadic component. The exception is function types, where they may
appear without parentheses to match the usual variadic declaration syntax.
```
func expand<T...>(_ xs: T...) -> (T...)
                        ~~~~     ~~~~~~
```
A pack expansion type comes equipped with a pattern type spelled before
the ellipsis: `T` in the examples above. This pattern type is the subject
of the expansion, which is tripped when its variadic generic parameter is
substituted by a `PackType`.
A pack type looks a lot like a tuple in the surface language, except there
is no way for the user to spell a pack. Pack types are created by the solver
when it encounters an apply of a variadic generic function, as in
```
func print<T...>(_ xs: T...) {}
// Creates a pack type <String, Int, String>
print("Macs say Hello in", 42, " different languages")
```
Pack types substituted into the variadic generic arguments of a
PackExpansionType "trip" the pack expansion and cause it to produce a
new pack type with the pack expansion pattern applied.
```
typealias Foo<T...> = (T?...)
Foo<Int, String, Int> // Forces expansion to (Int?, String?, Int?)
```
The GSB sometimes drops same-type requirements when the right-hand
side is smaller than the left. Work around this by changing the order
in which these requirements are added, which seems to work well
enough for my reduced test case.
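For intuition, here is a hedged sketch of the shape involved (hypothetical
code, not the actual test case from the rdar below): a same-type requirement
whose right-hand side is a shorter type parameter term than its left-hand
side.
```
protocol P {
  associatedtype A: P
}

// In T.A.A == T, the left-hand term is longer than the right-hand
// term; requirements of this shape are the kind whose handling
// depends on the order in which they are added.
func f<T: P>(_: T) where T.A.A == T {}
```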
Also, ensure that everything works correctly with the
RequirementMachine, which doesn't have the same underlying
problems with same-type requirement handling.
Fixes rdar://86431977.
AutoDiffAllocateSubcontext and AutoDiffProjectTopLevelSubcontext
return RawPointer. They cannot be ForwardingBorrows. This was
triggering an assert in ForwardingOperand's constructor.
During copy propagation (for which -enable-copy-propagation must still
be passed), also try to shrink borrow scopes by hoisting end_borrows
using the newly added ShrinkBorrowScope utility.
Allow end_borrow instructions to be hoisted over instructions that are
not deinit barriers for the value which is borrowed. Deinit barriers
include uses of the value, loads of memory, loads of weak references
that may be zeroed during deinit, and "synchronization points".
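As a rough source-level intuition (a hedged sketch; `C`, `use`, and `cache`
are hypothetical names, not from the patch), a weak load is a barrier because
ending the borrowed value's lifetime may run a deinit that causes the weak
reference to be zeroed:
```
class C {}
weak var cache: C?
func use(_ c: C) {}

func f() {
  let c = C()
  use(c)              // last use of the borrowed value
  // An end_borrow for c may hoist up to here, past instructions that
  // are not deinit barriers, but not above the weak load below, since
  // ending c's lifetime could zero `cache`.
  let cached = cache
  _ = cached
}
```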
rdar://79149830
This instruction is similar to a copy_addr except that it marks a move of an
address that has to be checked. In order to keep the memory lifetime verifier
happy, the semantics before the checker runs are that mark_unresolved_move_addr
is equivalent to copy_addr [init] (not copy_addr [take][init]).
This instruction arises when Mandatory Inlining converts builtin "move"
to a mark_unresolved_move_addr while inlining the function "_move" (the only
place said builtin is invoked).
The result is then run through a special checker (appearing later in this PR)
that either proves the mark_unresolved_move_addr can actually be a move, in
which case it converts it to copy_addr [take][init], or, if it cannot be a
move, emits an error and converts the instruction to a copy_addr [init]. After
this is done for all instructions, we loop back through and emit an error on
any mark_unresolved_move_addr that was not processed earlier, letting us know
that the check is complete.
NOTE: The move kills checker for addresses is going to run after Mandatory
Inlining, but before predictable memory opts and friends.
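For reference, a hedged sketch of Swift code that exercises this path (it
relies on the _move function introduced further down in this changelog);
since T is address-only here, the inlined builtin "move" lowers to a
mark_unresolved_move_addr:
```
func consume<T>(_ t: T) -> T {
  let local = t
  // After Mandatory Inlining, the builtin "move" inside _move becomes
  // a mark_unresolved_move_addr, which the checker either proves to be
  // a real move (rewriting it to copy_addr [take][init]) or diagnoses.
  let moved = _move(local)
  return moved
}
```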
This cleans up 90 instances of this warning and reduces the build spew
when building on Linux. This helps identify actual issues when
building which can get lost in the stream of warning messages. It also
helps restore the ability to build the compiler with gcc.
Required for UnsafeRawPointer.withMemoryRebound(to:).
%out_token = rebind_memory %0 : $Builtin.RawPointer to %in_token
%0 must be of $Builtin.RawPointer type
%in_token represents a cached set of bound types from a prior memory state.
%out_token is an opaque $Builtin.Word representing the previously bound
types for this memory region.
This instruction's semantics are identical to ``bind_memory``, except
that the types to which memory will be bound, and the extent of the
memory region is unknown at compile time. Instead, the bound-types are
represented by a token that was produced by a prior memory binding
operation. ``%in_token`` must be the result of ``bind_memory`` or
``rebind_memory``.
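At the source level, this presumably supports uses like the following (a
hedged sketch assuming the SE-0333 shape of
UnsafeRawPointer.withMemoryRebound(to:capacity:)):
```
func sum(_ raw: UnsafeRawPointer, count: Int) -> Int {
  // The temporary rebinding is what bind_memory / rebind_memory
  // implement: memory is rebound to UInt8 for the closure's duration,
  // then restored to its prior bound type via the saved token.
  raw.withMemoryRebound(to: UInt8.self, capacity: count) {
    (bytes: UnsafePointer<UInt8>) -> Int in
    var total = 0
    for i in 0..<count { total += Int(bytes[i]) }
    return total
  }
}
```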
Required for UnsafeRawPointer.withMemoryRebound(to:)
%token = bind_memory %0 : $Builtin.RawPointer, %1 : $Builtin.Word to $T
%0 must be of $Builtin.RawPointer type
%1 must be of $Builtin.Word type
%token is an opaque $Builtin.Word representing the previously bound types
for this memory region.
The functions in llvm-project `AttributeList` have been
renamed/refactored to help remove uses of `AttributeList::*Index`.
Update to use these new functions where possible. There's one use of
`AttrIndex` remaining as `replaceAttributeTypeAtIndex` still takes the
index and there is no `param` equivalent. We could add one locally, but
presumably that will be added eventually.
This change separates out the formation of the generic signature and
substitutions for a SIL substituted function type as a pre-pass
before doing the actual function type lowering. The only input we
really need to form this signature is the original abstraction pattern
that a type is being lowered against, and pre-computing it should make
the code less side-effecty and confusing. It also allows us to handle
generic nominal types in a more robust way; we transfer over all of
the nominal type requirements to the generalized generic signature,
then when recursively visiting the bindings, we same-type-constrain
the generic parameters used in those requirements to the newly-generalized
generic arguments. This ensures that the minimized signature preserves
any non-trivial requirements imposed by the nominal type, such as
conditional conformances on its type arguments, same-type constraints
among associated types, etc.
This approach does lead to less-than-optimal generalized generic
signatures, since nominal type generic arguments almost always end up
either same-type-bound to other generic arguments or fixed to
concrete types. It would be useful to do a minimization
pass on the final generic signature to eliminate these unnecessary
generic arguments, but that can be done in a follow-up PR.
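To illustrate the kind of nominal type requirement that must survive
generalization (a hedged example; `Pair` and `f` are hypothetical, not from
the patch):
```
// A nominal type imposing non-trivial requirements on its arguments.
struct Pair<A: Equatable, B: Collection> where B.Element == A {}

// When a type like (Pair<T, U>) -> () is lowered against an opaque
// abstraction pattern, the generalized signature must still carry
// T: Equatable, U: Collection, and U.Element == T on the generalized
// generic arguments rather than dropping them.
func f<T: Equatable, U: Collection>(_: Pair<T, U>) where U.Element == T {}
```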
When SILGenWitnessTable creates a decl ref for the witness of a derivative function requirement, it uses the requirement's derivative generic signature in the resulting witness decl ref. This is wrong because the witness may have a different derivative generic signature than the requirement, leading to a crash. This bug was never discovered because the GSB's dark magic made it "just work", until the Requirement Machine.
The fix is to store the matched witness derivative generic signature in `Witness` during type checking, and during witness table generation, use the witness's derivative generic signature to create the witness decl ref.
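A hedged sketch of the shape of the bug (hypothetical code, not the actual
test cases from the radars):
```
import _Differentiation

protocol P {
  @differentiable(reverse)
  func f(_ x: Float) -> Float
}

// The witness lives in a generic type, so its derivative generic
// signature (<T>) differs from the requirement's (empty); the witness
// table entry must be built with the witness's signature.
struct S<T>: P {
  @differentiable(reverse)
  func f(_ x: Float) -> Float { x }
}
```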
Resolves rdar://84716758, rdar://84213107 and rdar://84987079.
This is a signal to the move value kill analysis that this is a move that should
have diagnostics emitted for it. It is a temporary addition until we add
MoveOnly to the SIL type system.
It just trampolines to calling ValueBase::dump(). The reason I added this is
that sometimes one wants to do this in the debugger and gets confused about
whether one has a ValueBase or a SILValue, causing lldb to be unhappy. By
mirroring this API, one can do the same operation on either type and just
move on with one's life.
I also wrapped this in LLVM_ATTRIBUTE_DEPRECATED with a message saying it is
only for use in the debugger, so people do not call dump() in the codebase by
mistake.
Leaks checking is not thread safe, and e.g. lldb creates multiple SILModules on multiple threads, which would result in false alarms.
Ideally we would make it thread safe, e.g. by putting the instruction counters in the SILModule, but this would be a big effort and it's not worth doing. Leaks checking in the frontend's and SILOpt's SILModule (not including SILModules created for module interface building) is a good enough test.
rdar://84688015
I am purposely doing this in SILGen rather than at the type system level to
avoid having to add a bunch of boilerplate to the type system. Instead of
doing that, in SILGen I check for the isNoImplicitCopy bit on the ParamDecl
when we emit arguments. At that point, I set on the specific SILArgument
being emitted the bit that says it is no-implicit-copy. In terms of printing
at the SIL level, I just print it in front of the function argument type like
@owned, e.g.:
```
func myFunc(_ x: @_noImplicitCopy T) -> T {
    ...
}
```
becomes:
```
bb0(%0 : @noImplicitCopy @owned $T):
```
Some notes:
* Just to be explicit, I am making it so that no implicit copy parameters by
default are always passed at +1. The reason why I think this makes sense is
that this is the natural way of working with a move only value.
* As always, one cannot write the no-implicit-copy attribute without passing
  the flag -enable-experimental-move-only, so this is NFC.
rdar://83957088
The key thing is that the move checker will not consider the explicit copy
value to be a copy_value that can be rewritten, ensuring that any uses of the
result of the explicit copy_value (consuming or otherwise) are not checked.
Similar to the _move operator I recently introduced, this is a transparent
function so we can perform one level of specialization and thus at least be
generic over all concrete types.
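Presumably, by analogy with _move, usage looks like the following (a hedged
sketch; treat the exact name `_copy` and its shape as hypothetical rather
than confirmed by this changelog):
```
class Klass {}

func f(_ x: @_noImplicitCopy Klass) -> Klass {
  // The explicit copy is exempt from the move checker: uses of `y`,
  // consuming or otherwise, are not checked.
  let y = _copy(x)
  return y
}
```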
Replace the dynamic initialization of trivial globals with statically initialized globals, even in -Onone.
This is required to be able to use global variables in performance-annotated functions.
Also, it's a small performance improvement for -Onone.
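For example (a hedged sketch; @_noLocks is an underscored performance
annotation and is used here only for illustration):
```
// A trivial global with a constant initializer: it can now be emitted
// as a statically initialized global even at -Onone.
var maxRetries = 3

@_noLocks
func retryLimit() -> Int {
  // Reading a statically initialized global requires no lazy-init
  // lock or swift_once call, so it is allowed here.
  return maxRetries
}
```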
This patch introduces a new stdlib function called _move:
```Swift
@_alwaysEmitIntoClient
@_transparent
@_semantics("lifetimemanagement.move")
public func _move<T>(_ value: __owned T) -> T {
#if $ExperimentalMoveOnly
Builtin.move(value)
#else
value
#endif
}
```
It is a first attempt at creating a "move" function for Swift, albeit a
skeletal one, since we do not yet perform the "no use after move" analysis.
But this at least gets the skeleton into place so we can build the analysis
on top of it and churn the tree in a manageable way. Thus in its current
incarnation, all it does is take in an __owned +1 parameter and return it
after moving it through Builtin.move.
Given that we want to use an OSSA-based analysis for our "no use after move"
analysis and we do not have opaque values yet, we cannot support moving
generic values, since they are address only. This has stymied us in the past
from creating this function. With the implementation in this PR, via a bit
of cleverness, we are now able to support this as a generic function over
all concrete types.
The trick is that when we transparent-inline _move (to get the builtin), we
perform one level of specialization, causing the inlined Builtin.move to be
of a loadable type. If, after transparent inlining, builtin "move" ends up in
a context where it is still address only, we emit a diagnostic telling the
user that they applied move to a generic or existential and that this is not
yet supported.
The reason why we are taking this approach is that we wish to use this to
implement a new (as yet unwritten) diagnostic pass that verifies that _move
(even for non-trivial copyable values) ends the lifetime of the value. This will
ensure that one can write the following code to reliably end the lifetime of a
let binding in Swift:
```Swift
let x = Klass()
let _ = _move(x)
// hypotheticalUse(x)
```
Without the diagnostic pass, if one were to write another hypothetical use of x
after the _move, the compiler would copy x to at least hypotheticalUse(x),
meaning the lifetime of x would not end at the _move, a contradiction.
So to implement this diagnostic pass, we want to use the OSSA infrastructure,
and that only works on objects! So how do we square this circle? By taking
advantage of the mandatory SIL optimizer pipeline! Specifically, we take
advantage of the
following:
1. Mandatory Inlining and Predictable Dead Allocation Elimination run before any
of the move only diagnostic passes that we run.
2. Mandatory Inlining is able to specialize a callee a single level when it
   inlines code. One can take advantage of this, even at -Onone, to
   monomorphize code.
Then note that _move is such a simple function that Predictable Dead
Allocation Elimination is able to eliminate, without issue, the extra
alloc_stacks that appear in the caller after inlining. So (as the tests show)
for concrete types we get SIL that looks exactly as if we had just run a
move_value on that specific type as an object, since we promote away the
stores/loads in favor of object operations when we eliminate the allocation.
In order to prevent any issue with this being used in a context where multiple
specializations may occur, I made the inliner emit a diagnostic if it inlines
_move into a function that applies it to an address only value. The diagnostic
is emitted at the source location where the function call occurs so it is easy
to find, e.g.:
```
func addressOnlyMove<T>(t: T) -> T {
_move(t) // expected-error {{move() used on a generic or existential value}}
}
moveonly_builtin_generic_failure.swift:12:5: error: move() used on a generic or existential value
_move(t)
^
```
To eliminate any potential ABI impact, if someone calls _move in a way that
causes it to be used in a context where the transparent inliner will not inline
it, I taught IRGen that Builtin.move is equivalent to a take from src -> dst and
marked _move as always emit into client (AEIC). I also took advantage of the
feature flag I added in the previous commit in order to prevent any cond_fails
from exposing Builtin.move in the stdlib. If one does not pass in the flag
-enable-experimental-move-only then the function just returns the value without
calling Builtin.move, so we are safe.
rdar://83957028
Adds two new IRGen-level builtins (one for allocating, the other for deallocating), a stdlib shim function for enhanced stack-promotion heuristics, and the proposed public stdlib functions.
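The public functions presumably take the SE-0322 shape, e.g. (a hedged
sketch):
```
// Allocates scoped storage, promoted to the stack when the heuristics
// allow, and deallocates it when the closure returns.
let total = withUnsafeTemporaryAllocation(of: Int.self, capacity: 4) {
  (buffer: UnsafeMutableBufferPointer<Int>) -> Int in
  for i in buffer.indices { buffer[i] = i }
  return buffer.reduce(0, +)
}
```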