The variants are produced by SILGen when opaque values are enabled. They
are necessary because otherwise SILGen would have to produce
address_to_pointer instructions on values. They will subsequently be
lowered by the AddressLowering pass.
This is the start of the removal of the C++ implementation of libSyntax
in favor of the new Swift Parser and Swift Syntax libraries. Now that
the SwiftSyntaxParser library has been switched over to being a thin
wrapper around the Swift Parser, there is no longer any reason to
retain any libSyntax infrastructure in the Swift compiler.
As a first step, delete the infrastructure that builds
lib_InternalSwiftSyntaxParser and convert any scripts that mention
it to instead mention the static mirror libraries. The --swiftsyntax
build-script flag has been retained and will now just execute the
SwiftSyntax and Swift Parser builds with the just-built tools.
This is a dedicated instruction for incrementing a profiler counter;
it lowers to the `llvm.instrprof.increment` intrinsic. It replaces the
`builtin` instruction that was previously used, ensuring that its
arguments are statically known and therefore that SIL optimization
passes do not invalidate the instruction. This fixes some code
coverage cases under `-O`.
rdar://39146527
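For illustration, a minimal Swift sketch of the kind of code this
affects, assuming coverage instrumentation is enabled (e.g. via
-profile-generate and -profile-coverage-mapping); the comments describe
the lowering named above:

  func isEven(_ x: Int) -> Bool {
      // With coverage enabled, SILGen emits an increment_profiler_counter
      // for each counter region; each lowers to llvm.instrprof.increment.
      if x % 2 == 0 {
          return true   // counter for the 'then' region
      }
      return false      // counter for the region following the 'if'
  }

The exact placement of counters depends on the coverage mapping; this
only sketches the idea.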
By using the keyword instead of the function, we actually get a much simpler
implementation since we avoid all of the machinery of SILGenApply. Given that we
are going down that path, I am removing the old builtin implementation since it
is dead code.
The reason I am removing this now is that, in a subsequent commit, I want to
move all of the ownership checking passes to run /before/ mandatory inlining. I
originally placed the passes after mandatory inlining because the function
version of the move keyword was transparent and needed to be inlined before we
could process it. Since we use the keyword now, that is no longer an issue.
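A rough sketch of the keyword form (using the development-era _move
spelling; Klass and use are hypothetical):

  func test() {
      let x = Klass()
      // Keyword form: SILGen emits the move directly, with no
      // SILGenApply machinery and no transparent function to inline.
      let y = _move x
      use(y)
      // Any use of 'x' past this point is diagnosed by the move checker.
  }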
The new intrinsic, exposed via static functions on Task<T, Never> and
Task<T, Error> (rethrowing), begins an asynchronous context within a
synchronous caller's context. This is only available for use under the
task-to-thread concurrency model, and even then only under SPI.
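As a sketch of how such an entry point might look in use; the runInline
spelling is an assumption here, and the API is usable only under the
task-to-thread concurrency model:

  // Assumed SPI spelling; gated behind the task-to-thread model.
  func blockingCompute() -> Int {
      // Begins an asynchronous context from synchronous code, running
      // the body to completion on the calling thread.
      Task.runInline {
          await expensiveAsyncWork()   // hypothetical async function
      }
  }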
Required for UnsafeRawPointer.withMemoryRebound(to:).
  %out_token = rebind_memory %0 : $Builtin.RawPointer to %in_token

%0 must be of $Builtin.RawPointer type.
%in_token represents a cached set of bound types from a prior memory state.
%out_token is an opaque $Builtin.Word representing the previously bound
types for this memory region.
This instruction's semantics are identical to ``bind_memory``, except
that the types to which memory will be bound, and the extent of the
memory region, are unknown at compile time. Instead, the bound types are
represented by a token that was produced by a prior memory binding
operation. ``%in_token`` must be the result of ``bind_memory`` or
``rebind_memory``.
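At the language level this is the operation behind withMemoryRebound; a
sketch, with comments reflecting the intended lowering described above:

  var value: Int64 = 0x0102030405060708
  withUnsafeMutablePointer(to: &value) { intPtr in
      // On entry, the region is bound to UInt8 and the prior binding is
      // saved as a token; on exit, rebind_memory consumes that token to
      // restore the original Int64 binding.
      intPtr.withMemoryRebound(to: UInt8.self, capacity: 8) { bytePtr in
          _ = bytePtr.pointee   // read the lowest-addressed byte
      }
  }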
Required for UnsafeRawPointer.withMemoryRebound(to:).

  %token = bind_memory %0 : $Builtin.RawPointer, %1 : $Builtin.Word to $T

%0 must be of $Builtin.RawPointer type.
%1 must be of $Builtin.Word type.
%token is an opaque $Builtin.Word representing the previously bound types
for this memory region.
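For reference, a Swift-level sketch of the user-facing operation that
bind_memory backs:

  let raw = UnsafeMutableRawPointer.allocate(
      byteCount: MemoryLayout<Int>.stride,
      alignment: MemoryLayout<Int>.alignment)
  // bindMemory(to:capacity:) binds the raw region to Int; in SIL, the
  // bind_memory instruction now also yields a token recording the
  // previously bound types.
  let intPtr = raw.bindMemory(to: Int.self, capacity: 1)
  intPtr.pointee = 42
  raw.deallocate()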
The key point is that the move checker will not consider the explicit_copy_value
to be a copy_value that can be rewritten, ensuring that any uses of the result
of the explicit_copy_value (consuming or otherwise) are not checked.
Similar to the _move operator I recently introduced, this is a transparent
function, so we can perform one level of specialization and thus at least be
generic over all concrete types.
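A minimal sketch of the intended usage, assuming a development-era _copy
spelling (Klass, consumeValue, and useValue are hypothetical):

  func test(_ original: Klass) {
      // The explicit copy is exempt from move-checker rewriting, so
      // consuming the copy leaves 'original' untouched.
      let copied = _copy(original)
      consumeValue(copied)
      useValue(original)   // still valid
  }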