The following upstream commit removed the ScalableVecArgument:
https://reviews.llvm.org/D80107
I've removed that case from IntrinsicTypeDecoder::decodeImmediate().
The aforementioned patch also describes vector widths in terms of
ElementCounts rather than unsigned ints. I've updated an access to the
vector width to use the ElementCount representation via its Min member.
Add a private scratch context to the ASTContext and allow IntrinsicInfo sole access to it so it can allocate attributes into it. This removes the final dependency on the global context.
Delete all of the formalism and infrastructure around maintaining our own copy of the global context. The final frontier is the Builtins, which need to look up intrinsics in a given scratch context and convert them into the appropriate Swift annotations and types. As these utilities have wormed their way through the compiler, I have decided to leave this alone for now.
Define type signatures and SILGen for the following builtins:
```
/// Applies the {jvp|vjp} of `f` to `arg1`, ..., `argN`.
func applyDerivative_arityN_{jvp|vjp}(f, arg1, ..., argN) -> jvp/vjp return type
/// Applies the transpose of `f` to `arg`.
func applyTranspose_arityN(f, arg) -> transpose return type
/// Makes a differentiable function from the given `original`, `jvp`, and
/// `vjp` functions.
func differentiableFunction_arityN(original, jvp, vjp)
/// Makes a linear function from the given `original` and `transpose` functions.
func linearFunction_arityN(original, transpose)
```
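For concreteness, here is a rough sketch of what calling the arity-1 forms
might look like. The exact spellings below are extrapolated from the naming
scheme above rather than taken from the patch, and the Builtin module is only
visible under -parse-stdlib:
```swift
func square(_ x: Float) -> Float { x * x }
// jvp: returns the value plus a differential.
func squareJVP(_ x: Float) -> (Float, (Float) -> Float) {
  (x * x, { dx in 2 * x * dx })
}
// vjp: returns the value plus a pullback.
func squareVJP(_ x: Float) -> (Float, (Float) -> Float) {
  (x * x, { v in 2 * x * v })
}

// Bundle original, jvp, and vjp into a differentiable function value
// (hypothetical arity-1 spelling).
let diffSquare = Builtin.differentiableFunction_arity1(square, squareJVP, squareVJP)
// Apply the jvp through the builtin (hypothetical spelling).
let (y, differential) = Builtin.applyDerivative_arity1_jvp(diffSquare, 3)
```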
Add SILGen FileCheck tests for all builtins.
The signature is:
(BaseT, @inout @unowned(unsafe) T) -> @guaranteed T
The reason for the weird signature is that the Builtin infrastructure currently
does not handle results well. Also note that we are not actually performing a
call here; we SILGen the builtin directly so we can create a guaranteed result.
The intended semantics are that one passes in a base value that guarantees the
lifetime of the unowned(unsafe) value. The builtin then:
1. Borrows the base.
2. Loads the trivial unowned(unsafe) value and converts it to a guaranteed ref
after unsafely unwrapping the optional.
3. Uses mark dependence to tie the lifetimes of the guaranteed base to the
guaranteed ref.
I also updated my small UnsafeValue.swift test to make sure we get the codegen
we expect.
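To make the lifetime contract concrete, here is a rough plain-Swift analogy
using Unmanaged. This is not the builtin itself: unlike the builtin, the
analogy still pays a balanced retain/release for the managed result, but the
shape of the contract is the same:
```swift
// `base` stands in for the value whose borrow guarantees the lifetime
// of the reference held in `storage`.
func withGuaranteedRef<Base: AnyObject, T: AnyObject, R>(
  _ base: Base, _ storage: inout Unmanaged<T>, _ body: (T) -> R
) -> R {
  // Read the reference without consuming an unbalanced retain. The
  // builtin instead ties the loaded ref to the borrow of `base` via
  // mark_dependence, with no refcounting traffic at all.
  let ref = storage.takeUnretainedValue()
  return withExtendedLifetime(base) { body(ref) }
}
```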
The signature is:
(T, @inout @unowned(unsafe) Optional<T>) -> ()
The reason for the weird signature is that currently the Builtin infrastructure
does not handle results well.
The semantics of this builtin are that it enables one to store the first argument
into an unowned unsafe address without any reference counting operations. It
does this just by SILGening the relevant code. The optimizer chews through this
code well, so we get the expected behavior.
I also included a small proof of concept to validate that this builtin works as
expected.
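As a rough plain-Swift analogy of those semantics (again using Unmanaged
rather than the builtin itself):
```swift
// Store a reference into unowned(unsafe)-style storage without any
// ownership transfer; the caller is responsible for keeping `value`
// alive, matching the unowned(unsafe) contract.
func storeUnowned<T: AnyObject>(_ value: T, into storage: inout Unmanaged<T>?) {
  storage = Unmanaged.passUnretained(value) // no retain of `value`
}
```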
Since getSpecifier() now kicks off a request instead of always
returning what was previously set, we can't pass a ParamSpecifier
to the ParamDecl constructor anymore. Instead, callers either
call setSpecifier() if the ParamDecl is synthesized, or they
rely on the request, which can compute the specifier in three
specific cases:
- Ordinary parsed parameters get their specifier from the TypeRepr.
- The 'self' parameter's specifier is based on the self access kind.
- Accessor parameters are either the 'newValue' parameter of a
setter, or a cloned subscript parameter.
For closure parameters with inferred types, we still end up
calling setSpecifier() twice, once to set the initial default
value and a second time when applying the solution if we
inferred an 'inout' specifier. In practice this should not be a
big problem because expression type checking walks the AST in a
pre-determined order anyway.
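As a minimal illustration (not from this patch), the inferred-'inout' case
arises with closures whose parameters have no written type:
```swift
// `value` has no TypeRepr; its 'inout' specifier is only known once the
// contextual type (inout Int) -> Void is applied to the closure.
let increment: (inout Int) -> Void = { value in
  value += 1
}
var x = 1
increment(&x) // x == 2
```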
Structurally prevent a number of common anti-patterns involving generic
signatures by separating the interface into GenericSignature and the
implementation into GenericSignatureBase. In particular, this allows
the comparison operators to be deleted, which forces callers either to
canonicalize the signature or to compare pointers explicitly.
This removes it from the AST and largely replaces it with AnyObject
at the SIL and IRGen layers. Some notes:
- Reflection still uses the notion of "unknown object" to mean an
object with unknown refcounting. There's no real reason to make
this different from AnyObject (an existential containing a
single object with unknown refcounting), but this way nothing
changes for clients of Reflection, and it's consistent with how
native objects are represented.
- The value witness table and reflection descriptor for AnyObject
use the mangling "BO" instead of "yXl".
- The demangler and remangler continue to support "BO" because it's
still in use as a type encoding, even if it's not an AST-level
Type anymore.
- Type-based alias analysis for Builtin.UnknownObject was incorrect,
so it's a good thing we weren't using it.
- Same with enum layout. (This one assumed UnknownObject never
referred to an Objective-C tagged pointer. That certainly wasn't how
we were using it!)
This will ensure that if an expert user of this feature makes a mistake while
tweaking their code, they get an error up front, rather than shipping the
mistake and then having to investigate why it is happening.
This is not intended to be used by anyone except for expert stdlib users.
TLDR: This patch introduces a new kind of builtin, a "polymorphic builtin". One
calls it like any other builtin, e.g.:
```
Builtin.generic_add(x, y)
```
but it has a contract: it must be specialized to a concrete builtin by the time
we hit Lowered SIL. In this commit, I add support for the following generic
operations:
| Type            | Op        |
|-----------------|-----------|
| FloatOrVector   | FAdd      |
| FloatOrVector   | FDiv      |
| FloatOrVector   | FMul      |
| FloatOrVector   | FRem      |
| FloatOrVector   | FSub      |
| IntegerOrVector | AShr      |
| IntegerOrVector | Add       |
| IntegerOrVector | And       |
| IntegerOrVector | ExactSDiv |
| IntegerOrVector | ExactUDiv |
| IntegerOrVector | LShr      |
| IntegerOrVector | Mul       |
| IntegerOrVector | Or        |
| IntegerOrVector | SDiv      |
| IntegerOrVector | SRem      |
| IntegerOrVector | Shl       |
| IntegerOrVector | Sub       |
| IntegerOrVector | UDiv      |
| IntegerOrVector | Xor       |
| Integer         | URem      |
NOTE: I only implemented support for the builtins in SIL and in SILGen. I am
going to implement the optimizer parts of this in a separate series of commits.
DISCUSSION
----------
Today there are polymorphic instructions in LLVM IR. Yet at the
Swift and SIL level we instead represent these operations as Builtins whose
names are resolved by splatting the operand type into the name. For example,
adding two things in LLVM:
```
%2 = add i64 %0, %1
%2 = add <2 x i64> %0, %1
%2 = add <4 x i64> %0, %1
%2 = add <8 x i64> %0, %1
```
Each of the add operations is performed by the same polymorphic instruction. In
contrast, we splat out these Builtins in Swift today, i.e.:
```
let x, y: Builtin.Int32
Builtin.add_Int32(x, y)
let x, y: Builtin.Vec4xInt32
Builtin.add_Vec4xInt32(x, y)
...
```
In SIL, we translate these verbatim and then IRGen just lowers them to the
appropriate polymorphic instruction. Beyond being verbose, this prevents these
Builtins (which need static types) from being used in polymorphic contexts where
we can guarantee that a static type will eventually be provided.
In contrast, the polymorphic builtins introduced in this commit can be passed
any type, with the proviso that the expert user using this feature can guarantee
that before we reach Lowered SIL, the generic_add has been eliminated. This is
enforced by IRGen asserting if passed such a builtin and by the SILVerifier
checking that the underlying builtin is never called once the module is in
Lowered SIL.
In forthcoming commits, I am going to add two optimizations that give the stdlib
writer the tools needed to use this builtin:
1. I am going to add an optimization to constant propagation that specializes a
"generic_*" op to the concrete builtin for its argument type, if that type is
valid for the builtin (i.e. integer or vector).
2. I am going to teach the SILCloner how to specialize these as it inlines. This
ensures that when we transparent inline, we specialize the builtin automatically
and can then form SSA at -Onone using predictable memory access operations.
The main implication of these polymorphic builtins is that if an author is
not able to specialize the builtin, they need to ensure that after constant
propagation the generic builtin has been DCEed. The general rule is that the
-Onone optimizer will constant fold branches with constant integer operands, so
if one can gate the operation behind a bool of some sort, one can be guaranteed
that the unspecialized code will never be code-generated. I am considering
putting in some sort of diagnostic to ensure that the stdlib writer has a good
experience (e.g. getting an error instead of crashing the compiler).
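Concretely, the guard pattern might look something like the following sketch
(a hypothetical wrapper, not part of this commit; requires -parse-stdlib, and
the bool must fold to a constant after inlining for the branch to be deleted):
```swift
import Swift

// If `isSupported` folds to true at a concrete call site, inlining
// specializes generic_add to e.g. add_Int32; if it folds to false, the
// branch is deleted and the unspecialized builtin is DCEed before we
// reach Lowered SIL.
@_transparent
func addIfSupported<T>(_ lhs: T, _ rhs: T, _ isSupported: Bool) -> T {
  if isSupported {
    return Builtin.generic_add(lhs, rhs)
  }
  fatalError("generic_add is not supported for this type")
}
```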
Most of AST, Parse, and Sema deal with FileUnits regularly, but SIL
and IRGen certainly don't. Split FileUnit out into its own header to
cut down on recompilation times when something changes.
No functionality change.
This makes the constant evaluator support:
1. builtin "int_expect", which makes the evaluator work on more
integer operations such as left/right shifts (with traps) and
integer conversions.
2. builtin "_assertConfiguration", which enables the evaluator
to work with the debug stdlib.
3. builtin "ptrtoint", which enables the evaluator to track
StaticString.
4. the _assertionFailure API, which enables the evaluator to report
stdlib assertion failures encountered during constant evaluation.
Also, enable attaching auxiliary data with the enum "UnknownReason"
and use it to improve diagnostics for UnknownSymbolicValues,
which represent failures during constant evaluation.
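For context, these pieces serve the experimental #assert constant evaluator.
A sketch of the kind of expression that should now fold (illustrative only,
assuming -enable-experimental-static-assert):
```swift
// Trapping shifts route through stdlib checks that use int_expect, so
// the evaluator previously gave up here; now the whole expression folds,
// and failures are reported through the _assertionFailure path.
#assert((1 << 4) + 3 == 19)
```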
This memoizes the result, which is fine for almost all callers; the only
exception is open existential types, where each new open existential
now explicitly gets a unique generic environment, allocated by
calling GenericEnvironment::getIncomplete().