Similarly to how we've always handled parameter types, we
now recursively expand tuples in result types and separately
determine a result convention for each result.
The most important code-generation change here is that
indirect results are now returned separately from each
other and from any direct results. It is generally far
better, when receiving an indirect result, to receive it
as an independent result: the caller is then much more
likely to be able to emit the result directly into the
address it wants to initialize, rather than having to
receive it in temporary memory and then copy parts of it
into the target.
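As a hedged illustration (the types and function here are
hypothetical, and the lowering described in the comments is
schematic), a tuple result that mixes an address-only element
with a loadable one now expands into one result per element,
so the caller can supply its own destination address for the
indirect piece::

  // 'Big<T>' is assumed to be address-only in a generic context,
  // so it becomes an indirect result; the Int stays direct.
  struct Big<T> { var payload: T }

  func makeBoth<T>(_ seed: T) -> (Big<T>, Int) {
      return (Big(payload: seed), 1)
  }

  // Schematic lowered shape: the caller passes an address for the
  // Big<T> result and receives the Int as the direct result, so a
  // caller initializing storage of type Big<T> can hand over that
  // storage's address instead of copying out of a temporary tuple.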
The most important conceptual change here that clients and
producers of SIL must be aware of is the new distinction
between a SILFunctionType's *parameters* and its *argument
list*. The former is just the formal parameters, derived
purely from the parameter types of the original function;
indirect results are no longer in this list. The latter
includes the indirect result arguments; as always, all
the indirect results strictly precede the parameters.
Apply instructions and entry block arguments follow the
argument list, not the parameter list.
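To make the distinction concrete, here is a hedged sketch
(hypothetical function; the lowering in the comments is
schematic rather than exact SIL syntax)::

  struct Payload<T> { var value: T }   // assumed address-only for generic T

  func load<T>(_ index: Int) -> Payload<T> {
      fatalError("unimplemented")
  }

  // Schematic lowered shape of 'load':
  //   parameters:     (Int)                   -- just the formal parameters
  //   argument list:  (out Payload<T>, Int)   -- the indirect result
  //                                              precedes the parameters
  // Apply instructions and the entry block of 'load' carry two
  // arguments, following the argument list rather than the
  // one-element parameter list.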
A relatively minor change is that there can now be multiple
direct results, each with its own result convention.
This is a minor change because I've chosen to leave
return instructions as taking a single operand and
apply instructions as producing a single result; when
the type describes multiple results, they are implicitly
bound up in a tuple. It might make sense to split these
up and allow e.g. return instructions to take a list
of operands; however, it's not clear what to do on the
caller side, and this would be a major change that can
be separated out from this already over-large patch.
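For example, a hedged sketch (hypothetical function) of the
multiple-direct-result case::

  // Both results are loadable, so both are direct, each with its
  // own result convention.
  func minMax(_ values: [Int]) -> (Int, Int) {
      return (values.min() ?? 0, values.max() ?? 0)
  }

  // The lowered function type of 'minMax' now lists two direct
  // results, but a return instruction still takes a single tuple
  // operand, and an apply of 'minMax' still produces a single
  // tuple value for the caller to destructure.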
Unsurprisingly, the most invasive changes here are in
SILGen; this requires substantial reworking of both call
emission and reabstraction. It also proved important
to switch several SILGen operations over to work with
RValue instead of ManagedValue, since otherwise they
would be forced to spuriously "implode" buffers.
Take apart exploded one-element tuples and be more careful with
passing around tuple abstraction patterns.
Also, now we can remove the inputSubstType parameter from
emitOrigToSubstValue() and emitSubstToOrigValue(), making the
signatures of these functions nice and simple once again.
Fixes <rdar://problem/19506347> and <rdar://problem/22502450>.
Right now, re-abstraction thunks are set up to convert values
as follows, where L is type lowering:
- OrigToSubst: L(origType, substType) -> L(substType)
- SubstToOrig: L(substType) -> L(origType, substType)
This assumes there's no AST-level type conversion, because
when we visit a type in contravariant position, we flip the
direction of the transform, but we're still converting *to*
substType -- which now equals the type of the input, not
the type of the expected result!
This caused a couple of problems:
- VTable thunk generation had a bunch of special logic to
compute a substOverrideType and wrap the thunk result
in an optional, duplicating work done in the transform.
- Witness thunk generation similarly had to handle the case
of upcasting to a base class to call the witness, and
casting the result of materializeForSet back to the right
type for properties defined on the base.
Now the materializeForSet cast sequence is a bit longer:
we unpack the returned tuple, do a convert_function on
the function, then pack it again. Before, we would
unchecked_ref_cast the tuple, which is technically
incorrect since the tuple is not a reference, but IRGen
didn't seem to care...
To handle the conversions correctly, we add a third AST type
parameter to a transform, named inputType. Now, transforms
perform these conversions:
- OrigToSubst: L(origType, inputType) -> L(substType)
- SubstToOrig: L(inputType) -> L(origType, substType)
When we flip the direction of the transform while visiting
types in contravariant position, we also swap substType with
inputType.
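As one concrete instance, the vtable-thunk case above involves
exactly such an AST-level conversion. A hedged Swift sketch
(class names hypothetical)::

  class Base {
      func make() -> Base? { return nil }
  }

  class Derived: Base {
      // Covariant override: the thunk sitting in Base's vtable slot
      // must convert the Derived result to Base?. With the separate
      // inputType parameter, the transform itself can express this
      // conversion instead of the thunk special-casing it.
      override func make() -> Derived { return Derived() }
  }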
Note that this is similar to how bridging thunks work, for
the same reason -- bridging thunks convert between AST types.
This is mostly just a nice cleanup that fixes some obscure
corner cases for now, but this functionality will be used
in a subsequent patch.
Swift SVN r31486
The main thing that this patch does is work around a shortcoming
of SILGenApply, namely that in certain cases we emit self before
we know what the callee is. We work around this by emitting self
at +0, assuming that the callee takes self at +0, and setting a
flag. Once we know what the callee is, if the flag is set, we
emit an extra retain for self.
rdar://15729033
Swift SVN r27553
This is necessary for correctly dealing with non-standard
ownership conventions in secondary positions, and it should
also help with non-injective type imports (like BOOL/_Bool).
But right now we aren't doing much with it.
Swift SVN r26954
Begin the formal access for an inout argument immediately before
the call instead of during the formal evaluation of the argument.
This is the last major chunk of the semantic changes proposed
in the accessors document. It has two purposes, both related
to the fact that it shortens the duration of the formal access.
First, the change isolates later evaluations (as long as they
precede the call) from the formal access, preventing them from
spuriously seeing unspecified behavior. For example::
  foo(&array[0], bar(array))
Here the value passed to bar is a proper copy of 'array',
and if bar() decides to stash it aside, any modifications
to 'array[0]' made by foo() will not spontaneously appear
in the copy. (In contrast, if something caused a copy of
'array' during foo()'s execution, that copy would violate
our formal access rules and would therefore be allowed to
have an arbitrary value at index 0.)
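A runnable sketch of that guarantee (helper names are
hypothetical)::

  var stash: [Int] = []
  var array = [1, 2, 3]

  func bar(_ copy: [Int]) -> Int {
      stash = copy             // stashes the argument aside
      return copy.count
  }

  func foo(_ element: inout Int, _ count: Int) {
      element = 42             // mutates array[0] through the inout access
  }

  foo(&array[0], bar(array))
  // stash[0] == 1: bar() received a proper copy of 'array' because
  // the formal access to array[0] now begins only at the point of
  // the call, after all of the arguments have been evaluated.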
Second, when a mutating access uses a pinning addressor, the
change limits the amount of arbitrary code that falls between
the pin and unpin. For example::
  array[0] += countNodes(subtree)
Previously, we would begin the access to array[0] before the
call to countNodes(). To eliminate the pin and unpin, the
optimizer would have needed to prove that countNodes didn't
access the same array. With this change, the call is evaluated
first, and the access instead begins immediately before the call
to +=. Since that operator is easily inlined, it becomes
straightforward to eliminate the pin/unpin.
A number of other changes got bundled up with this in ways that
are hard to tease apart. In particular:
- RValueSource is now ArgumentSource and can now store LValues.
- It is now illegal to use emitRValue to emit an l-value.
- Call argument emission is now smart enough to emit tuple
shuffles itself, applying abstraction patterns in reverse
through the shuffle. It also evaluates varargs elements
directly into the array.
- AllowPlusZero has been split in two. AllowImmediatePlusZero
is useful when you are going to immediately consume the value;
this is good enough to avoid copies/retains when reading a 'var'.
AllowGuaranteedPlusZero is useful when you need a stronger
guarantee, e.g. when arbitrary code might intervene between
evaluation and use; it's still good enough to avoid copies
from a 'let'. The upshot is that we're now a lot smarter
about generally avoiding retains on lets, but we've also
gotten properly paranoid about calling non-mutating methods
on vars.
(Note that you can't necessarily avoid a copy when passing
something in a var to an @in_guaranteed parameter! You
first have to prove that nothing can assign to the var during
the call. That should be easy as long as the var hasn't
escaped, but that does need to be proven first, so we can't
do it in SILGen.)
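A hedged Swift-level sketch of that hazard (names are
hypothetical): arbitrary code running during the call can
reassign the var, so its storage cannot simply be borrowed
at +0 without proving that no such assignment happens::

  var greeting = "hello"

  func use<T>(_ value: T, then body: () -> Void) {
      body()
      print(value)   // must still see the value that was passed in
  }

  use(greeting) {
      greeting = "goodbye"   // reassigns the var during the call; unless
                             // this can be ruled out, a copy has to be
                             // passed rather than a +0 borrow of the
                             // var's storage
  }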
Swift SVN r24709