The effect of this tiny change is that local variables will be described
by llvm.dbg.values, which will get lowered into an accurate location list
instead of a stack slot that is valid for the entire scope of the variable.
This means the debugger can now accurately track the liveness of variables,
knowing exactly when they are initialized and when their values go away.
Function arguments are still kept in stack slots because (1) they are
already initialized at the function entry and (2) LLDB really needs self
to be available at all times for the expression evaluator.
This was made possible by recent advancements in LLVM such as the live
debug variables pass and various related bugfixes.
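As a rough sketch (not code from the patch; present-day SIL syntax and an
illustrative variable x), the difference in lowering for a local 'let x'
looks like this:

  // SIL emitted for the local variable:
  debug_value %x : $Int, let, name "x"

  // Before: IRGen copied %x into a shadow stack slot and described that
  // slot with llvm.dbg.declare, which is valid for the entire scope.
  // After: IRGen emits llvm.dbg.value for %x directly, and the backend
  // derives an accurate location list from it.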
<rdar://problem/15746520>
This is a hotfix for recent regressions in the LLDB testsuite caused
by lazy loading of metadata.
Long-term we will explore emitting DWARF expressions for accessing the
type metadata.
rdar://problem/24781494, SR-797
At the moment it is only possible to test the effects that SIL
optimization passes have on debug information by observing the output
of a full .swift -> LLVM IR compilation. This change enables us
to write targeted test cases for single SIL optimization passes.
The new syntax is as follows:
sil-scope-ref ::= 'scope' [0-9]+
sil-scope ::= 'sil_scope' [0-9]+ '{'
                sil-loc
                'parent' scope-parent
                ('inlined_at' sil-scope-ref)?
              '}'
scope-parent ::= sil-function-name ':' sil-type
scope-parent ::= sil-scope-ref
sil-loc ::= 'loc' string-literal ':' [0-9]+ ':' [0-9]+
Each instruction may have a debug location and a SIL scope reference
at the end. Debug locations consist of a filename, a line number, and
a column number. If the debug location is omitted, it defaults to the
location in the SIL source file. A SIL scope describes the position,
within the lexical scope structure of the original Swift source, of the
expression that a SIL instruction was generated from. SIL scopes also
hold inlining information.
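For example (the file name and scope numbers are made up for illustration,
and the parent reference is written as the grammar above specifies):

  sil_scope 1 { loc "foo.swift":4:2 parent @foo : $@convention(thin) () -> () }
  sil_scope 2 { loc "foo.swift":5:6 parent scope 1 }

  // An instruction inside @foo carrying a location and a scope reference:
  %1 = integer_literal $Builtin.Int64, 0, loc "foo.swift":5:10, scope 2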
<rdar://problem/22706994>
With this re-abstraction, a specialized function has the same calling convention as if it had been written with the specialized types in the first place.
In general this results in fewer alloc_stack instructions and loads/stores.
It can also eliminate some re-abstraction thunks, e.g. if a generic closure is used in a non-generic context.
In some (hopefully rare) cases it may be necessary to add re-abstraction thunks.
If a function has multiple indirect results, only the first is converted to a direct result. This is an open TODO.
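A hedged sketch of the resulting convention change (the function and type
names are made up; @in/@out mark indirect parameters and results):

  // Generic original: parameter and result are passed indirectly.
  sil @min : $@convention(thin) <T where T : Comparable> (@in T, @in T) -> @out T

  // Specialization for Int without re-abstraction: still indirect.
  sil @min_Int_old : $@convention(thin) (@in Int, @in Int) -> @out Int

  // With re-abstraction: the same calling convention as a function that
  // had been written against Int in the first place.
  sil @min_Int : $@convention(thin) (Int, Int) -> Int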
Associated witness tables are now projected from the witness tables for
their associations rather than passed separately.
This drastically reduces the number of physical arguments required
to invoke a generic function with a complex protocol hierarchy. It's
also an important step towards allowing recursive protocol
constraints. However, it may cause some performance problems in
generic code that we'll have to figure out ways to remediate.
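As a rough illustration (hypothetical protocols P and Q; not code from the
patch):

  // T.A : Q is an associated conformance of the T : P requirement.
  sil @f : $@convention(thin) <T where T : P, T.A : Q> (@in T) -> ()

  // Before: IRGen passed separate witness tables for T : P and T.A : Q
  // at every call site of @f.
  // After: only the T : P witness table is passed; the T.A : Q table is
  // projected from it where it is actually needed.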
There are still a few places in IRGen that rely on recursive eager
expansion of associated types and protocol witnesses. For example,
passing generic arguments requires us to map from a dependent type
back to an index into the all-dependent-types list in order to
find the right Substitution; that's something we'll need to fix
more generally. Specific to IRGen, there are still a few abstractions
like NecessaryBindings that use recursive expansion and are therefore
probably extremely expensive under this patch; I intend to fix those
up in follow-ups to the greatest extent possible.
There are also still a few things that could be made lazier about
type fulfillment; for example, we eagerly project the dynamic type
metadata of class parameters rather than waiting for the first place
we actually need to do so. We should be able to be lazier about
that, at least when the parameter is @guaranteed.
Technical notes follow. Most of the basic infrastructure I set up
for this over the last few months stood up, although there were
some unanticipated complexities:
The first is that the all-dependent-types list still does not
reliably contain all the dependent types in the minimized signature,
even with my last patch, because the primary type parameters aren't
necessarily representatives. It is, unfortunately, important to
give the witness marker to the primary type parameter because
otherwise substitution won't be able to replace that parameter at all.
There are better representations for all of that, but it's not
something I wanted to condition this patch on; therefore, we have to
do a significantly more expensive check in order to figure out a
dependent type's index in the all-dependent-types list.
The second is that the ability to add requirements to associated
types in protocol refinements means that we have to find the *right*
associatedtype declaration in order to find the associated witness
table. There seems to be relatively poor AST support for this
operation; maybe I just missed it.
The third complexity (so far) is that the association between an
archetype and its parent isn't particularly more important than
any other association it has. We need to be able to recover
witness tables linked with *all* of the associations that lead
to an archetype. This is, again, not particularly well-supported
by the AST, and we may run into problems here when we eliminate
recursive associated type expansion in signatures.
Finally, it's a known fault that this potentially leaves debug
info in a bit of a mess, since we won't have any information for
a type parameter unless we actually needed it somewhere.
Similarly to how we've always handled parameter types, we
now recursively expand tuples in result types and separately
determine a result convention for each result.
The most important code-generation change here is that
indirect results are now returned separately from each
other and from any direct results. It is generally far
better, when receiving an indirect result, to receive it
as an independent result; the caller is much more likely
to be able to receive the result directly in the address
it wants to initialize, rather than having to receive it
in temporary memory and then copy parts of it into the
target.
The most important conceptual change here that clients and
producers of SIL must be aware of is the new distinction
between a SILFunctionType's *parameters* and its *argument
list*. The former is just the formal parameters, derived
purely from the parameter types of the original function;
indirect results are no longer in this list. The latter
includes the indirect result arguments; as always, all
the indirect results strictly precede the parameters.
Apply instructions and entry block arguments follow the
argument list, not the parameter list.
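A small sketch of the distinction (the Big type and the names are made up;
@out marks an indirect result):

  // Parameter list: (Int).  Argument list: (@out Big, Int).
  sil @makeBig : $@convention(thin) (Int) -> (@out Big, Int) {
  // The entry block follows the argument list: the indirect result
  // address comes first, then the formal parameters.
  bb0(%0 : $*Big, %1 : $Int):
    ...
  }

  // The apply follows the argument list as well:
  //   %addr = alloc_stack $Big
  //   %r = apply %fn(%addr, %i) : $@convention(thin) (Int) -> (@out Big, Int)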
A relatively minor change is that there can now be multiple
direct results, each with its own result convention.
This is a minor change because I've chosen to leave
return instructions as taking a single operand and
apply instructions as producing a single result; when
the type describes multiple results, they are implicitly
bound up in a tuple. It might make sense to split these
up and allow e.g. return instructions to take a list
of operands; however, it's not clear what to do on the
caller side, and this would be a major change that can
be separated out from this already over-large patch.
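For example (a hypothetical function in present-day syntax), two direct
results are still returned as a single tuple operand:

  sil @pair : $@convention(thin) (Int, Bool) -> (Int, Bool) {
  bb0(%0 : $Int, %1 : $Bool):
    // Both direct results are implicitly bound up in a tuple.
    %2 = tuple (%0 : $Int, %1 : $Bool)
    return %2 : $(Int, Bool)
  }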
Unsurprisingly, the most invasive changes here are in
SILGen; this requires substantial reworking of both call
emission and reabstraction. It also proved important
to switch several SILGen operations over to work with
RValue instead of ManagedValue, since otherwise they
would be forced to spuriously "implode" buffers.
For long names this is easier to read, and in most cases the omitted information can be seen in the actual SIL code.
The old behavior can be restored with the option -Xllvm -sil-full-demangle.
This prevents the linker from trying to emit relative relocations to locally-defined public symbols into dynamic libraries, which gives ld.so heartache.
inlined-at chain.
The previous implementation was only correct for cases where the inliner
inlined bottom-up in the call graph, which happened to cover the majority
of all cases.
rdar://problem/24462475
Recent versions of LLDB can deal with line 0 locations much better, and
due to a subtle bug in the heuristic, instructions immediately following
the prologue could end up without debug locations, which can cause serious
problems for the LLVM inliner when constructing inline debug scope info.
<rdar://problem/24394944>
It looks like this only fails on stdlib+debuginfo builds after:
8a5ed4 Make var parameters an error for Swift 3
<rdar://problem/24428756> DebugInfo/basic.swift test fails after fixing var params