A follow-up PR adds a flag to control an inline namespace that allows
symbols in libDemangling to be distinguished between the runtime and
the compiler. These dependencies ensure that the flag is plumbed
through for targets that include Demangling headers but aren't already
covered by existing `target_link_libraries` calls.
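As a rough sketch of the mechanism (the flag and namespace names here are hypothetical, not the ones the PR uses), a macro-controlled inline namespace looks like this:

#if defined(SWIFT_DEMANGLE_USE_INLINE_NAMESPACE)  // hypothetical flag
#define DEMANGLE_NAMESPACE_BEGIN inline namespace __runtime {
#define DEMANGLE_NAMESPACE_END }
#else
#define DEMANGLE_NAMESPACE_BEGIN
#define DEMANGLE_NAMESPACE_END
#endif

namespace swift {
namespace Demangle {
DEMANGLE_NAMESPACE_BEGIN

// Hypothetical declaration; with the flag set, the symbol is emitted as
// swift::Demangle::__runtime::demangleSymbol, so runtime copies of the code
// can be told apart from the compiler's copies.
const char *demangleSymbol(const char *mangledName);

DEMANGLE_NAMESPACE_END
} // namespace Demangle
} // namespace swift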
When backward deploying to an OS that may not have these entry points, weak-link them so that they
can be used conditionally in availability contexts that check for them.
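A minimal sketch of that pattern, with a hypothetical entry-point name:

// Hypothetical runtime entry point. weak_import lets the symbol resolve to
// null when the running OS does not provide it.
extern "C" void swift_newRuntimeEntryPoint(void) __attribute__((weak_import));

static void callIfAvailable() {
  if (&swift_newRuntimeEntryPoint != nullptr) {
    // The OS provides the entry point; safe to call it.
    swift_newRuntimeEntryPoint();
  } else {
    // Back-deployed: fall back to behavior that doesn't need the entry point.
  }
}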
rdar://problem/50731151
I left in the old entry point that doesn't take an AAQueryInfo by having
it delegate to the AAQueryInfo entry point, passing a default-constructed
AAQueryInfo. This matches how people upstream have handled this.
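Roughly, the delegation looks like this (an illustrative sketch; the actual class and entry-point names in the pass may differ):

#include "llvm/Analysis/AliasAnalysis.h"
#include "llvm/Analysis/MemoryLocation.h"

using namespace llvm;

struct SwiftAAShim {
  // New entry point: takes an explicit AAQueryInfo.
  AliasResult alias(const MemoryLocation &A, const MemoryLocation &B,
                    AAQueryInfo &AAQI);

  // Old entry point, kept for compatibility: delegate to the new one with a
  // default-constructed AAQueryInfo.
  AliasResult alias(const MemoryLocation &A, const MemoryLocation &B) {
    AAQueryInfo AAQI;
    return alias(A, B, AAQI);
  }
};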
rdar://49450147
rdar://problem/48833545
From the LLVM Manual regarding tail/musttail: "Both markers imply that the callee does not access allocas from the caller."
Swift's LLVMARCContract just marks all the calls it creates as tail calls, without any analysis or check that we are actually allowed to do that. This created an interesting runtime crash that was a pain to debug - story time:
I traced a runtime crash back to Swift's LLVMARCContract, but could not grok why the transformation there was wrong: we replaced two consecutive _swift_bridgeObjectRelease(x) calls with a single _swift_bridgeObjectRelease_n(x, 2), which is a perfectly valid thing to do.
I noticed that the new call is marked as a tail call; disabling that portion of the pass "solved" the runtime crash, but I wanted to understand *why*:
This code worked:
pushq $2
popq %rsi
movq -168(%rbp), %rdi
callq _swift_bridgeObjectRelease_n
leaq -40(%rbp), %rsp
popq %rbx
popq %r12
popq %r13
popq %r14
popq %r15
popq %rbp
retq
While this version crashed further on during the run:
movq -168(%rbp), %rdi
leaq -40(%rbp), %rsp
popq %rbx
popq %r12
popq %r13
popq %r14
popq %r15
popq %rbp
jmp _swift_bridgeObjectRelease_n
As you can see, the call is the last thing we do before returning, so nothing appeared out of the ordinary at first…
Dumping the heap object at the release basic block looked perfectly fine: the ref count was 2 and all the fields looked valid.
However, by the time we reached the callee the value had been modified; dumping it showed it had changed somewhere along the way, which did not make any sense.
Setting up a memory watchpoint on the heap object and/or its reference count did not get us anywhere: the watchpoint triggered on unrelated code in the entry to the callee.
I then realized what's going on. Here's an amusing reproducer that you can check out in LLDB:
Experiment 1:
Set up a breakpoint at leaq -40(%rbp), %rsp
Dump the heap object - it looks good
Experiment 2:
Rerun the same test with a small modification:
Set up a breakpoint at popq %rbx (the instruction after the leaq; do not set a breakpoint at the leaq)
Dump the heap object - it looks bad!
So what is going on there? The SIL Optimizer changed an alloc_ref instruction into an alloc_ref [stack], which is a perfectly valid thing to do.
However, this means we allocated the heap object on the stack, and then tail-called into the Swift runtime with said object, after having already modified the stack pointer in the caller's epilogue.
So why does experiment 2 show garbage? We've updated the stack pointer, and it just so happens that the heap object's slot now lies beyond the red zone. When the breakpoint is hit (the OS passes control back to LLDB), anything is allowed to reuse the memory where the heap object used to reside.
Note: I then realized something even more concerning that we were lucky not to have hit so far. Not only did we fail to check whether we are allowed to mark a call as 'tail' in this situation (which could be considered a corner case, and would have been harmless had we not promoted the object from the heap to the stack), we marked *ALL* the call instructions created in this pass as tail calls, even when they are not the last thing that executes in the calling function!
Looking at the LLVMPasses/contract.ll test case, which is modified in this PR, we see some scary checks that are just wrong: they verify that a call in the middle of the function is marked 'tail', and then check the rest of the function in CHECK-NEXT lines, knowing full well that the new 'tail call' is not the last thing that should execute in the caller.
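A minimal sketch of the safer pattern (illustrative, not necessarily the exact fix): carry the tail marker over from the call being replaced instead of forcing it onto everything the pass creates.

#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Create the merged release_n call in place of an existing release call,
// propagating 'tail' only if the original call already carried it. 'tail'
// promises the callee never touches the caller's allocas, so the pass must
// not invent it.
static CallInst *createMergedRelease(CallInst *OldCall, FunctionCallee ReleaseN,
                                     Value *Obj, Value *Count) {
  CallInst *NewCall = CallInst::Create(ReleaseN, {Obj, Count}, "", OldCall);
  NewCall->setTailCall(OldCall->isTailCall());
  return NewCall;
}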
We used to represent these just as normal LLVM functions, e.g.:
declare objc_object* @objc_retain(objc_object*)
declare void @objc_release(objc_object*)
Recently, special ObjC intrinsics were added to LLVM. This patch updates these
small (old) passes to use the new intrinsics.
This turned out not to be too difficult since we never create these
instructions. We only analyze them, move them, and delete them.
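For illustration, the difference is roughly this (a sketch, not the passes' exact code): instead of declaring objc_retain as an ordinary function, ask LLVM for the corresponding intrinsic.

#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/Module.h"

using namespace llvm;

// Old style: look up (or insert) objc_retain as a plain function declaration.
static FunctionCallee getObjCRetainDecl(Module &M) {
  Type *I8Ptr = PointerType::getUnqual(Type::getInt8Ty(M.getContext()));
  return M.getOrInsertFunction("objc_retain", I8Ptr, I8Ptr);
}

// New style: use the llvm.objc.retain intrinsic that upstream now provides.
static Function *getObjCRetainIntrinsic(Module &M) {
  return Intrinsic::getDeclaration(&M, Intrinsic::objc_retain);
}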
rdar://47852297
DTrace has special linker support to detect and patch probe call sites.
Each call to a DTrace probe must resolve to a unique patchpoint.
rdar://45738058
This reverts commit 121f5b64be.
Sorry to revert this again. This commit makes some pretty big changes. After
messing with the merge-conflict created by this internally, I did not feel
comfortable landing this now. I talked with Saleem and he agreed with me that
this was the right thing to do.
The key thing here is that all of the underlying code is exactly the same. I
purposely did not debride anything. This is to ensure that I am not touching too
much and increasing the probability of weird errors occurring. Thus the
exact same code should be executed... just the routing changed.
These functions don't accept things like local variables or raw heap memory, although the names make it sound like they work on anything. When you try, they mistakenly identify such things as ObjC objects, call through to the equivalent objc_* function, and crash confusingly. This adds Object to the name of each one to make it clearer what they accept.
rdar://problem/37285743
In rare corner cases the pass merged two functions which contain incompatible call instructions.
See source comment in the change for details.
rdar://problem/43051718
Since we introduce the declaration for bridgeRetainN, its result type may be out
of sync with bridgeRetain's. This means that when we perform a RAUW of one for
the other, the types do not match and we get an LLVM error. Instead, just cast
bridgeRetainN's result to bridgeRetain's result type.
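A rough sketch of the idea (illustrative, not the pass's exact code): bitcast the new call's result to the old call's type before replacing uses.

#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Replace every use of the old bridgeRetain call with the new bridgeRetainN
// call, inserting a bitcast when the result types have gotten out of sync.
static void replaceWithCompatibleType(CallInst *OldCall, CallInst *NewCall) {
  Value *Replacement = NewCall;
  if (NewCall->getType() != OldCall->getType()) {
    IRBuilder<> Builder(OldCall);
    Replacement = Builder.CreateBitCast(NewCall, OldCall->getType());
  }
  OldCall->replaceAllUsesWith(Replacement);
  OldCall->eraseFromParent();
}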
rdar://40507281
Sometimes when running ARCContract on LLVM IR, certain required declarations will
have been deleted. This triggers an assert that makes it difficult to run
test cases through the pass.
Instead, if we cannot find the runtime function we are looking for by name,
recreate the named type as an opaque struct. Since we cannot find the function
by name, we can assume either that this is a runtime function that only our
passes create, or that the declarations were dead. Thus there is
no earlier data, so we can just create a new opaque struct type and use pointers
to that struct type. If we later need to merge this with another module that did
not delete that type definition, LLVM IR will set the type's body. Since we are
just using pointers to the type, there will be no codegen differences.
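Sketched out (with assumed names, not the exact pass code), the lookup-or-recreate step is roughly:

#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Module.h"

using namespace llvm;

// If the runtime function's declaration survived, reuse the pointee type of
// its first parameter; otherwise recreate the named type as an opaque struct
// and hand back a pointer to it. Callers only ever use pointers to the type,
// so an opaque body is sufficient.
static PointerType *getBridgeObjectPtrTy(Module &M) {
  if (Function *F = M.getFunction("swift_bridgeObjectRetain_n"))
    return cast<PointerType>(F->getFunctionType()->getParamType(0));
  StructType *Opaque = StructType::create(M.getContext(), "swift.bridge");
  return PointerType::getUnqual(Opaque);
}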
rdar://40491584
Upstream has renamed the DEBUG() macro to LLVM_DEBUG. This updates Swift
accordingly:
$ find . -name \*.cpp -print -exec sed -i "" -E "s/ DEBUG\(/ LLVM_DEBUG(/g" {} \;
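For example, a typical call site changes like this (illustrative snippet):

// Before: old macro name.
DEBUG(llvm::dbgs() << "Visiting: " << *Inst << "\n");

// After: renamed macro from llvm/Support/Debug.h.
LLVM_DEBUG(llvm::dbgs() << "Visiting: " << *Inst << "\n");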