We do this because we want move-only addresses to be checked by the
move-only address checker rather than the move-operator address checker.
rdar://102056097
A long while back we added a copy of UIApplicationMain to the UIKit
overlay. This was intended to fix the signature of UIApplicationMain by
forcing clients to choose a non-deprecated copy with revised nullability
annotations that were compatible with the (then) interface of the (then)
CommandLine stdlib type. Since then, we've gone in a completely different
direction with type-based entrypoints and @main, but the compatibility
shims remain. This code was assuming that it would always find one
declaration of UIApplicationMain in the clang module, but that's not
a sound assumption to make. In certain scenarios, we can wind up finding
the overlay's copy of UIApplicationMain as well. When the Swift function is
chosen, we reference a native-to-foreign thunk that we never actually
generate in the open-coded entry point, which results in a nasty linker
error. Filter the available candidates here, looking for the C copy of
UIApplicationMain.
rdar://101932338
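For context, a minimal sketch of the type-based, @main entry point that
superseded these shims (the AppDelegate here is illustrative, not part of
this change):
```
import UIKit

// With @main, the compiler synthesizes the entry point from the type's
// static main() (provided by UIKit for UIApplicationDelegate conformers),
// so user code never calls UIApplicationMain directly.
@main
final class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?
}
```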
The variants are produced by SILGen when opaque values are enabled.
They are necessary because otherwise SILGen would produce
address_to_pointer of values.
They will be lowered by AddressLowering.
When branching to the exit block, flatten an @out tuple return value
into its components, as is done for all other return values.
In the exit block, when constructing the @out tuple to return, visit the
tuple-type-tree of the return value to reconstruct the nested tuple
structure: @out tuple returns are not flattened, unlike regular tuple
returns.
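For context, a sketch of code that can produce an @out tuple return (the
names are illustrative): a function value used at a generic abstraction
level returns its result indirectly, so reabstracting a tuple-returning
closure yields a single @out tuple result whose nested structure must be
rebuilt in the epilog.
```
// `body` is seen at the abstraction level () -> T, whose result is returned
// indirectly. Passing a closure that concretely returns the nested tuple
// (Int, (String, Double)) therefore goes through a reabstraction thunk whose
// SIL signature has a single @out tuple result.
func applyAbstracted<T>(_ body: () -> T) -> T {
    return body()
}

let nested = applyAbstracted { (1, ("two", 3.0)) }
print(nested)
```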
These still get verified as part of SILGenModule's verification in its
destructor, but moving the check earlier catches errors sooner and makes it
easier to debug any SIL issues immediately in the debugger rather than having
to backtrack from SILGenModule's destructor.
Introduce `MacroExpansionExpr` and `MacroExpansionDecl` and plumb them through. Parse them in roughly the same way we parse `ObjectLiteralExpr`.
The syntax is gated under `-enable-experimental-feature Macros`.
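For illustration, a freestanding macro expansion of the kind
`MacroExpansionExpr` models might look like this (`#stringify` is a
placeholder macro assumed to be declared elsewhere; it is not part of this
change):
```
let x = 17, y = 25

// Parsed as a MacroExpansionExpr, much like #colorLiteral(...) is parsed as
// an ObjectLiteralExpr; requires -enable-experimental-feature Macros.
let (value, source) = #stringify(x + y)
```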
The problem here is that, given the following Swift:
```
public class C {}
@inline(never)
func updateC(_ p: AutoreleasingUnsafeMutablePointer<C>) -> () {
    p.pointee = C()
}

public func test() -> C {
    var cVar = C()
    updateC(&cVar)
    return cVar
}
```
the AutoreleasingUnsafeMutablePointer's LValue component would generate the following SIL as part of setting cVar after returning from updateC:
```
%18 = function_ref @$s11autorelease7updateCyySAyAA1CCGF : $@convention(thin) (AutoreleasingUnsafeMutablePointer<C>) -> () // user: %19
%19 = apply %18(%17) : $@convention(thin) (AutoreleasingUnsafeMutablePointer<C>) -> ()
%20 = load [trivial] %8 : $*@sil_unmanaged C // user: %21
%21 = unmanaged_to_ref %20 : $@sil_unmanaged C to $C // user: %22
%22 = copy_value %21 : $C // user: %23
assign %22 to %7 : $*C // id: %23
end_access %7 : $*C // id: %24
```
Once we are in Canonical SIL, we get the following SIL:
```
%18 = function_ref @$s11autorelease7updateCyySAyAA1CCGF : $@convention(thin) (AutoreleasingUnsafeMutablePointer<C>) -> () // user: %19
%19 = apply %18(%17) : $@convention(thin) (AutoreleasingUnsafeMutablePointer<C>) -> ()
%20 = load [trivial] %8 : $*@sil_unmanaged C // user: %21
%21 = unmanaged_to_ref %20 : $@sil_unmanaged C to $C // user: %22
%22 = copy_value %21 : $C // user: %23
store %22 to [assign] %7 : $*C // id: %23
end_access %7 : $*C // id: %24
```
which destroy hoisting then modifies by hoisting the destroy implied by the
store [assign] above the copy_value:
```
%18 = function_ref @$s11autorelease7updateCyySAyAA1CCGF : $@convention(thin) (AutoreleasingUnsafeMutablePointer<C>) -> () // user: %19
%19 = apply %18(%17) : $@convention(thin) (AutoreleasingUnsafeMutablePointer<C>) -> ()
destroy_addr %7 : $*C
%20 = load [trivial] %8 : $*@sil_unmanaged C // user: %21
%21 = unmanaged_to_ref %20 : $@sil_unmanaged C to $C // user: %22
%22 = copy_value %21 : $C // user: %23
store %22 to [init] %7 : $*C // id: %23
end_access %7 : $*C // id: %24
```
Given the appropriate Swift code, %21 may actually already be stored in %7
with a reference count of 1. In such a case, the destroy_addr would cause %21
to be deallocated before we can retain the value.
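As a hypothetical illustration of that scenario (reusing C from the example
above, not taken from the actual test case), the writeback can store back the
very object the destination already holds:
```
@inline(never)
func leaveUnchanged(_ p: AutoreleasingUnsafeMutablePointer<C>) {
    // Deliberately does not replace the pointee.
}

public func testAliasing() -> C {
    var cVar = C()          // cVar holds the only strong reference.
    leaveUnchanged(&cVar)   // The writeback stores cVar's current value back into
                            // cVar itself; destroying the old value before copying
                            // the "new" one would deallocate the object being stored.
    return cVar
}
```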
To fix this edge case, in the setter for AutoreleasingUnsafeMutablePointer we
introduce a mark_dependence from the copied value onto the memory. This
ensures that destroy hoisting does not hoist the destroy_addr past the
mark_dependence, which restores correctness. That is, we generate the
following SIL:
```
%18 = function_ref @$s11autorelease7updateCyySAyAA1CCGF : $@convention(thin) (AutoreleasingUnsafeMutablePointer<C>) -> () // user: %19
%19 = apply %18(%17) : $@convention(thin) (AutoreleasingUnsafeMutablePointer<C>) -> ()
%20 = load [trivial] %8 : $*@sil_unmanaged C // user: %21
%21 = unmanaged_to_ref %20 : $@sil_unmanaged C to $C // user: %22
%22 = copy_value %21 : $C // user: %23
%23 = mark_dependence %22 : $C on %7 : $*C
assign %23 to %7 : $*C // id: %23
end_access %7 : $*C // id: %24
```
For those unfamiliar, mark_dependence specifies that any destroy of the memory
in %7 cannot move before any use of %23.
rdar://99402398
Instead of creating and destroying a SILProfiler
per TopLevelCodeDecl, set up a single profiler
for the top-level entry point function, and visit
all the TopLevelCodeDecls when mapping regions.
Emit a call to a helper function that determines whether the symbols associated with the declaration referenced in the `#_hasSymbol(...)` condition are non-null at runtime. An upcoming change will emit a definition for this function during IRGen.
Resolves rdar://100130015
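A minimal usage sketch (the function name is hypothetical; in practice the
referenced declaration would come from a weakly linked module):
```
// `newUnicornAPI` stands in for a weakly linked declaration whose backing
// symbols may be absent at runtime; the compiler emits a call to the helper
// described above to perform the null check.
if #_hasSymbol(newUnicornAPI) {
    newUnicornAPI()
} else {
    // Fall back when the symbols are not present in the loaded libraries.
}
```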
For now this just wraps an ASTNode, but in the
future it will allow us to model counters
that cannot simply hang off ASTNodes, e.g.
error branch counters.
Remove a case where we know we don't have a
profiler, and avoid incrementing for a distributed
factory method. Otherwise such cases could pass
a null ASTNode, which will become an assertion
failure in a future commit.
While we're here, let's standardize on emitting
the profiler increment for function entry after
the prolog; if there is, e.g., an unreachable
parameter, the increment can be safely elided
anyway.