Just as we have always done for parameter types, we now recursively
expand tuples in result types and determine a result convention for each
result separately.
The most important code-generation change here is that indirect results
are now returned separately from each other and from any direct results.
When receiving an indirect result, it is generally far better to receive
it as an independent result: the caller is much more likely to be able to
receive the result directly in the address it wants to initialize, rather
than having to receive it in temporary memory and then copy parts of it
into the target.
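
For a concrete illustration, consider a hypothetical Swift-level example
(not taken from the patch); the comments describe the intent of the new
scheme rather than exact lowered output:

    protocol Payload {}

    // The existential element of the result tuple is address-only, so it
    // lowers to an indirect result; the Int element stays direct.
    func makePair(from p: Payload) -> (Payload, Int) {
        return (p, 1)
    }

    struct Container {
        var stored: Payload
        var total = 0

        mutating func fill(with p: Payload) {
            // With separate indirect results, the caller can hand the address
            // of `stored` straight to the callee instead of routing the whole
            // tuple result through a temporary and copying pieces out of it.
            (stored, total) = makePair(from: p)
        }
    }
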
The most important conceptual change here, and the one that clients and
producers of SIL must be aware of, is the new distinction between a
SILFunctionType's *parameters* and its *argument list*. The former is just
the formal parameters, derived purely from the parameter types of the
original function; indirect results are no longer in this list. The latter
also includes the indirect result arguments; as always, all the indirect
results strictly precede the parameters. Apply instructions and entry block
arguments follow the argument list, not the parameter list.
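
To make the distinction concrete, here is a toy Swift model (the real
representation lives on SILFunctionType's C++ interface, which this does
not try to mirror; the strings are placeholders for lowered types):

    struct LoweredSignature {
        var indirectResults: [String]  // lowered results needing an out-address
        var parameters: [String]       // the formal parameters only

        // The argument list: indirect results strictly precede the parameters.
        var argumentList: [String] { indirectResults + parameters }
    }

    let sig = LoweredSignature(indirectResults: ["@out Payload"],
                               parameters: ["Int", "Bool"])
    // sig.parameters   == ["Int", "Bool"]
    // sig.argumentList == ["@out Payload", "Int", "Bool"]
    // Apply instructions and entry block arguments follow sig.argumentList.
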
A relatively minor change is that there can now be multiple direct
results, each with its own result convention. This stays minor because
I've chosen to leave return instructions taking a single operand and apply
instructions producing a single result; when the type describes multiple
results, they are implicitly bound up in a tuple. It might make sense to
split these up and allow, e.g., return instructions to take a list of
operands; however, it's not clear what to do on the caller side, and that
would be a major change best separated out from this already over-large
patch.
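
As a sketch of what that means at the source level (hypothetical example,
not from the patch):

    // A function whose lowered type carries two direct results. The return
    // still takes a single operand and the call still produces a single value:
    // the results are implicitly bundled into a tuple and split apart again at
    // the use site (e.g. via tuple_extract in SIL).
    func divide(_ x: Int, by y: Int) -> (quotient: Int, remainder: Int) {
        return (x / y, x % y)        // one return operand: the tuple
    }

    let (q, r) = divide(7, by: 2)    // one call result, then destructured
    // q == 3, r == 1
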
Unsurprisingly, the most invasive changes here are in SILGen; this
required substantial reworking of both call emission and reabstraction. It
also proved important to switch several SILGen operations over to work
with RValue instead of ManagedValue, since otherwise they would be forced
to spuriously "implode" buffers.

Move the top level driver of the pairing analysis into ARCSequenceOpts and
have ARCSequenceOpts use ARCMatchingSetBuilder directly.
This patch is the first in a series that improves ARC compile-time
performance by ensuring that ARC visits the full CFG at most once.
Previously, when ARC was split into an analysis and a pass, the split in
the codebase occurred at the boundary between ARCSequenceOpts and
ARCPairingAnalysis. I used a callback to allow ARCSequenceOpts to inject
code into ARCPairingAnalysis.
Now that the analysis has been moved together with the pass, this split
unnecessarily complicates the code. More importantly, it creates obstacles
to reducing compile time by visiting the CFG only once. Specifically, we
need to visit the full CFG once to gather interesting instructions; when
performing the actual dataflow, we then only visit those interesting
instructions. This creates a problem: retains/releases can have
dependencies on each other, which means I need to be able to update where
the various "interesting instructions" are located after ARC moves them.
That "interesting instruction" information is stored at the pairing
analysis level, but the moving/removal of instructions is injected in via
the callback.
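
The shape of that caching problem, sketched in Swift with made-up names
and integers standing in for SIL blocks and instructions (the optimizer
itself is C++):

    final class InterestingInstCache {
        private var instsPerBlock: [Int: [Int]] = [:]

        // The single full-CFG visit that gathers interesting instructions.
        func scanOnce(blocks: [Int: [Int]], isInteresting: (Int) -> Bool) {
            for (block, insts) in blocks {
                instsPerBlock[block] = insts.filter(isInteresting)
            }
        }

        // The dataflow only looks at the cached instructions per block.
        func interesting(in block: Int) -> [Int] {
            return instsPerBlock[block] ?? []
        }

        // When the optimizer moves or removes a retain/release, the cache is
        // updated directly rather than re-visiting the CFG.
        func remove(_ inst: Int, from block: Int) {
            instsPerBlock[block]?.removeAll { $0 == inst }
        }

        func insert(_ inst: Int, into block: Int) {
            instsPerBlock[block, default: []].append(inst)
        }
    }
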
By moving the top level driver portion of ARCPairingAnalysis into
ARCSequenceOpts, we simplify the code by eliminating the dependency
injection callback, and we make it easier to manage the cached CFG state
in the face of the ARC optimizer moving/removing retains/releases.
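
A before/after sketch of the structural change, again in hypothetical
Swift with invented names (the actual code is C++):

    struct MatchingSet { var insts: [Int] }

    // Before: the analysis owned the driver loop and the pass injected its
    // transformation through a callback.
    func pairingAnalysisDriver(process: (MatchingSet) -> Void) {
        process(MatchingSet(insts: []))        // placeholder matching set
    }

    // After: the pass drives the matching-set builder directly and owns the
    // cached state it must keep valid while rewriting code.
    struct SequenceOptsPass {
        var interestingInsts: Set<Int> = []

        mutating func run(matchingSets: [MatchingSet]) {
            for set in matchingSets {
                // ... move/remove retains and releases in the set ...
                interestingInsts.subtract(set.insts)  // keep the cache valid
            }
        }
    }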