It's the same thing as for alloc_ref: the optional [tail_elems ...] attribute specifies the tail elements to allocate.
For details see docs/SIL.rst
This feature is needed so that we can allocate a ManagedBuffer with alloc_ref_dynamic.
The ManagedBuffer.create() function uses the dynamic self type to create the buffer instance.
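For illustration, the standard-library call that ends up on this path looks
roughly like this (the header and element types are just example values):

    // ManagedBuffer.create() allocates a single heap object whose header is
    // followed by tail-allocated storage for the elements. Because it uses
    // the dynamic self type, it lowers to alloc_ref_dynamic with the
    // [tail_elems ...] attribute.
    let buffer = ManagedBuffer<Int, UInt8>.create(minimumCapacity: 16) { buf in
        buf.capacity  // store the allocated capacity in the header
    }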
Previously, in IRGen's constant evaluator, there was a mismatch between
SIL's concept of a struct (padding is implicit) and LLVM's (padding is
explicit). The explicit padding fields need to be given a value in the IR.
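As a purely illustrative example, a struct like the following needs padding
in its LLVM layout:

    // On a 64-bit platform `flag` occupies 1 byte and `value` must be
    // 8-byte aligned, leaving 7 bytes of padding between them. SIL never
    // mentions those bytes, but the LLVM constant emitted for a statically
    // initialized value of this type has to spell them out explicitly.
    struct Flagged {
        var flag: Bool
        var value: Int64
    }

    let constantInstance = Flagged(flag: true, value: 42)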
This patch also moves the (currently small) list of constant evaluation
functions into their own file.
Fixes SR-716.
Even at -Onone, a single SILValue can back more than one local source variable.
This commit changes the list of variable values into a set.
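A tiny, hypothetical example of the situation: at -Onone both `a` and `b`
below can be backed by the same SILValue, so one value may have to describe
several source variables.

    func example(_ x: Int) -> Int {
        let a = x   // `a` and `b` can share the same underlying SILValue
        let b = a
        return b
    }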
<rdar://problem/28467349>
When resuming at the beginning of a basic block, revert to line number 0
instead of reusing the last location.
This avoids emitting illegal IR if there was no previous location and the
instruction being emitted is a function call.
rdar://problem/28237133
The new instructions are: ref_tail_addr, tail_addr and a new attribute [tail_elems] for alloc_ref.
For details see docs/SIL.rst
Since these new instructions are not generated yet, this is NFC.
The Objective-C partial application forwarder that one gets when
using, e.g., "super.foo" as a function value was doing a normal
objc_msgSend. Fix the miscompile by threading all of the information
about the Objective-C message send through the forwarder.
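A minimal sketch of the pattern involved (class and method names are made
up; requires the Objective-C runtime):

    import Foundation

    class Base: NSObject {
        @objc dynamic func foo() {}
    }

    final class Derived: Base {
        override func foo() {
            // Capturing the superclass implementation as a function value
            // goes through a partial application forwarder. That forwarder
            // must perform super dispatch; with a plain objc_msgSend it
            // would call the override again.
            let callBaseFoo: () -> Void = super.foo
            callBaseFoo()
        }
    }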
Fixes rdar://problem/28140758.
Make sure that blocks dominated by a variable's value also
contain a variable live range extension intrinsic.
This is working around a shortcoming of LLVM.
LiveDebugValues should be doing this but can't in general because it
currently only tracks register locations.
rdar://problem/27348117
These inline asm instructions are used to block branch folding when
optimizing, to ensure that we have unique trap locations (and associated
debug info) for each cond_fail.
We only need these markers when optimizing, and emitting them can block
FastISel, so emit them only when optimizing.
Strict aliasing only applies to memory operations that use strict
addresses. The optimizer needs to be aware of this flag. Uses of raw
addresses should not have their address substituted with a strict
address.
Also add Builtin.LoadRaw, which will be used by raw pointer loads.
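At the source level the distinction corresponds roughly to typed versus raw
pointers (an illustrative sketch; the builtin itself is not user-facing):

    func read(_ typed: UnsafePointer<Int64>, _ raw: UnsafeRawPointer) -> Int64 {
        // A typed pointer yields a strict address: accesses through it may
        // be assumed not to alias accesses of unrelated types.
        let a = typed.pointee
        // A raw load must not have its address rewritten to a strict one;
        // this is the kind of access that would go through Builtin.LoadRaw.
        let b = raw.load(fromByteOffset: 0, as: Int64.self)
        return a &+ b
    }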
It is not valid LLVM IR to have a call to an inlinable function without a
debug location inside a function with debug info, because that makes it
impossible to construct inlining information.
This patch adds an assertion and fixes up several places across IRGen where
such a situation could happen.
rdar://problem/26955467
For many local values we can avoid a shadow alloca by directly
describing them with a dbg.value. This also enables precise liveness
so variables don't show up in the debugger before they are
initialized. Unfortunately this also means that values will disappear
when they are no longer needed.
This patch inserts, into the blocks dominated by the llvm::Value being
described, an empty inline assembler expression that depends on that value.
This uses less stack space than full shadow copies *and* allows us to
track the liveness of the variable completely. It may cause values to
be spilled onto the stack, though.
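A small example of the user-visible effect:

    func render(_ id: Int) {
        // `title` is described by a dbg.value instead of a shadow stack
        // slot: it shows up in the debugger only once it is initialized...
        let title = "Item #\(id)"
        print(title)
        // ...and without the lifetime-extending inline asm described above,
        // it could already be unavailable by the time this line runs.
        print("done")
    }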
<rdar://problem/26627376>
We were recovering metadata from generic boxes by reading
the instantiated payload metadata from the box's metadata,
but this approach doesn't work for fixed-size boxes, whose
metadata does not store the payload metadata at all.
Instead, emit a capture descriptor with no metadata sources
and a single capture, using the lowered AST type appearing
in the alloc_box instruction that emitted the box.
Since box metadata is shared by all POD types of the same
size, and all single-retainable pointer payloads, the
AST type might not accurately reflect what is actually in
the box.
However, this type is *layout compatible* with the box
payload, at least enough to know where the retainable
pointers are, because after all IRGen uses this type to
synthesize the destructor.
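As a minimal sketch, a capture like the one below produces a box whose
runtime metadata says nothing about the payload type, so the descriptor now
records the lowered AST type (here, Int) taken from the alloc_box:

    func makeCounter() -> () -> Int {
        var count = 0      // mutable capture: stored in a heap box
        return {
            count += 1     // the box payload's type is recorded in the
            return count   // emitted capture descriptor, not in the box
        }                  // metadata itself
    }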
Fixes <rdar://problem/26314060>.
Now we can discern the types of values in heap boxes at runtime!
Closure reference captures are a common way of creating reference
cycles, so this provides some basic infrastructure for detecting those
someday.
A closure capture descriptor has the following:
- The number of captures.
- The number of sources of metadata reachable from the closure.
This is important for substituting generics at runtime since we
can't know precisely what will get captured until we observe a
closure.
- The number of types in the NecessaryBindings structure.
This is a holding tank in a closure for sources of metadata that
can't be gotten from the captured values themselves.
- The metadata source map: a list of pairs, one for each source of
  metadata needed to substitute the closure's generic arguments at runtime.
  Key: the typeref for the generic parameter as visible from the closure
  in the Swift source.
  Value: the metadata source, which describes how to crawl the heap from
  the closure to reach the metadata for that generic argument.
- A list of typerefs for the captured values themselves.
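As an illustration (a hypothetical closure, not taken from the patch), the
descriptor for the closure below would record one capture of type [T] and a
metadata source for T:

    func makeAppender<T>(_ initial: T) -> (T) -> [T] {
        var storage: [T] = [initial]   // generic capture of type [T]
        return { element in
            // Running this body needs T's metadata; the metadata source
            // map records how to recover it from the closure at runtime.
            storage.append(element)
            return storage
        }
    }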
Follow-up: IRGen tests for various capture scenarios, which will include
MetadataSource encoding tests.
rdar://problem/24989531
Properly lower reference-counting SIL instructions that carry the nonatomic
attribute into calls to the corresponding non-atomic reference-counting
runtime functions.
...even if the 'self' type is generic. Additionally, Objective-C generic
types cannot be used as a source of type metadata, because Objective-C
generics are erased at runtime by default. (This may need to change.)
With these two changes, we now pass type metadata explicitly when we need
to, and /don't/ try to pass it to Objective-C methods that would have
needed it if they were Swift methods.
In 2e3c0b6, code was added to emit unique trap blocks for each
cond_fail, in order to make post-mortem debugging simpler (e.g. stack
traces have correct line/column information for the trapping location,
and it's easy to trace back to the specific jump that reaches the trap).
We didn't, however, do anything to ensure that LLVM wouldn't merge these
back together again. This is an attempt to do exactly that, after seeing
BranchFolding in the code generator merging traps into a single block.
The idea here is to emit empty inline asm strings marked as
side-effecting, and taking a unique integer argument. These come before
the trap call, so they should block any valid attempt at merging the
blocks back together.
Ideally trap would take an argument which uniquely identifies it, but
that isn't possible today.
This solution is potentially brittle in that in theory LLVM could still
merge the trap/unreachable and then branch to those after the unique asm
calls. We cannot fix that by putting another asm call after the trap,
because LLVM's CFG simplification will delete code after a trap.
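For instance (illustrative Swift, not from the patch), each of the checked
operations below gets its own trap:

    func combine(_ a: Int, _ b: Int, _ c: Int) -> Int {
        // Each checked operation emits its own cond_fail. The unique
        // inline-asm markers keep branch folding from merging the two trap
        // blocks, so an overflow crash points at the exact expression.
        let sum = a + b        // overflow check #1
        let product = sum * c  // overflow check #2
        return product
    }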
rdar://problem/25216969
This occurred if a stack-promoted object with a devirtualized final release
was not actually allocated on the stack.
The ReleaseDevirtualizer now models the procedure of a final release more
accurately: it inserts a set_deallocating instruction and calls the
deallocator (instead of just the deinit).
This change also includes two peephole optimizations in IRGen and
LLVMStackPromotion which remove unused runtime calls when the stack-promoted
object really is allocated on the stack.
This fixes rdar://problem/25068118
The effect of this tiny change is that local variables will be described
by llvm.dbg.values, which will get lowered into an accurate location list
instead of a stack slot that is valid for the entire scope of the variable.
This means the debugger can now accurately track the liveness of variables
knowing exactly when they are initialized and when their values go away.
Function arguments are still kept in stack slots because (1) they are
already initialized at the function entry and (2) LLDB really needs self
to be available at all times for the expression evaluator.
This was made possible by recent advancements in LLVM such as the live
debug variables pass and various related bugfixes.
<rdar://problem/15746520>
This instruction creates a "virtual" address to represent a property with a
behavior that supports definite initialization. The instruction holds
references to the functions that perform the initialization and 'set' logic
for the property. It will be DI's job to rewrite assignments to this virtual
address into calls to the initializer or setter, based on the initialization
state of the property at the time of assignment.
This is a hotfix for recent regressions in the LLDB testsuite caused
by lazy loading of metadata.
Long-term we will explore emitting DWARF expressions for accessing the
type metadata.
rdar://problem/24781494, SR-797
for an alloc_stack, debug_value, and debug_value_addr disagreeing on the
type of the same variable.
For -Onone, this commit is NFC.
A testcase for generic specialization will follow as soon as SIL debug
info serialization efforts are complete.
<rdar://problem/24785336>