This ensures that the optimizer has a summary of where the ref_tail_addr will no
longer be used. This is important when analyzing the lifetime of the base of the
ref_tail_addr.
I also cleaned up the description of the specification for this in
OperandOwnership. All instructions that are "INTERIOR_POINTER_PROJECTIONS"
now have their own section/macro as a form of self-documentation.
Most of AST, Parse, and Sema deal with FileUnits regularly, but SIL
and IRGen certainly don't. Split FileUnit out into its own header to
cut down on recompilation times when something changes.
No functionality change.
We have to be source compatible to be able to parse old swiftinterface files where the old Builtin.condfail is used in inlinable functions.
rdar://problem/53176692
The SIL generation for this builtin also changes: instead of generating the cond_fail instruction up front, we let the optimizer generate it if the operand is a static string literal.
In the worst case, if the second operand is not a static string literal, the Builtin.condfail is lowered at the end of the optimization pipeline with a default message: "unknown program error".
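As a rough sketch of the intended shape of a caller (the exact signature of the two-operand Builtin.condfail is an assumption based on the description above, and direct Builtin access needs a -parse-stdlib build):

@inlinable
public func trapUnlessInBounds(_ error: Builtin.Int1) {
  // While "Index out of range" stays a static string literal, the optimizer
  // can emit the cond_fail with that message; otherwise the builtin is
  // lowered late with the default "unknown program error" message.
  Builtin.condfail(error, "Index out of range")
}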
A builtin of type String -> Builtin.RawPointer that, given a string
constructed from a literal, returns the address of that string literal in the
global string table of the compiled binary as a pointer.
For anything else, we can decompose the argument list on the spot.
Note that builtins that are implemented as EarlyEmitters now take
the argument list as a PreparedArguments instead of a single Expr.
Since the PreparedArguments can still be a scalar with an
ArgumentShuffleExpr, we have to jump through some hoops to turn
it into a list of argument Exprs. This will all go away soon.
Right now we use TupleShuffleExpr for two completely different things:
- Tuple conversions, where elements can be re-ordered and labels can be
introduced/eliminated
- Complex argument lists, involving default arguments or varargs
The first case does not allow default arguments or varargs, and the
second case does not allow re-ordering or introduction/elimination
of labels. Furthermore, the first case has a representation limitation
that prevents us from expressing tuple conversions that change the
type of tuple elements.
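In surface syntax, the two cases look roughly like this (illustrative snippets, not code from this patch):

// Tuple conversion: labels are introduced; no defaults or varargs involved.
let labeled: (x: Int, y: Int) = (1, 2)

// Complex argument list: varargs plus a defaulted argument; no re-ordering
// or label introduction/elimination.
func log(_ items: Int..., prefix: String = ">") {}
log(1, 2, 3)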
For all these reasons, it is better if we use two separate Expr kinds
for these purposes. For now, just make an identical copy of
TupleShuffleExpr and call it ArgumentShuffleExpr. In CSApply, use
ArgumentShuffleExpr when forming the arguments to a call, and keep
using TupleShuffleExpr for tuple conversions. Each usage of
TupleShuffleExpr has been audited to see if it should instead look at
ArgumentShuffleExpr.
In subsequent commits I plan on redesigning TupleShuffleExpr to correctly
represent all tuple conversions without any unnecessary baggage.
Longer term, we actually want to change the representation of CallExpr
to directly store an argument list; then instead of a single child
expression that must be a ParenExpr, TupleExpr or ArgumentShuffleExpr,
all CallExprs will have a uniform representation and ArgumentShuffleExpr
will go away altogether. This should reduce memory usage and radically
simplify parts of SILGen.
The ownership kind is Any for trivial types, or Owned otherwise, but
whether a type is trivial or not will soon depend on the resilience
expansion.
This means that a SILModule now uniques two SILUndefs per type instead
of one, and serialization uses two distinct sentinel IDs for this
purpose as well.
For now, the resilience expansion is not actually used here, so this
change is NFC, other than changing the module format.
The goal here is for the SILGen of these builtins to receive an owned
value so that it will perform an owned->owned conversion and therefore
produce a +1 result, as generally expected. Without this, SILGen will
perform a borrowed->borrowed conversion, and the copy of the result (if
it even happens) may happen after the argument's borrow scope has ended.
`\.self` is the final chosen syntax. Implement support for this syntax, and remove the stopgap builtin and `WritableKeyPath._identity` property that were in place before.
This is NFC for now, but I plan to build on this to (1) immediately
remove some unnecessary materialization and loads of the base value
and (2) allow clients to load a borrowed value.
Make sure the implementation can handle a key path with zero components by removing inappropriate assumptions that the list of components is always non-empty. Identity key paths also need some special behavior:
- Appending an identity key path should be an identity operation for the other operand
- Identity key paths have a `MemoryLayout.offset(of:)` of zero
- Identity key paths interop with KVC as key paths to `@"self"`
To be able to exercise and test this behavior, add a `Builtin.identityKeyPath()` function and `WritableKeyPath._identity` accessor in lieu of finalized syntax.
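Put together with the `\.self` spelling mentioned earlier, the listed behaviors amount to roughly the following (a hedged sketch; the exact key path class of `\.self` is assumed to be WritableKeyPath here):

struct Point { var x: Int, y: Int }

let xPath = \Point.x                                // WritableKeyPath<Point, Int>
let identity: WritableKeyPath<Point, Point> = \Point.self
let intIdentity: WritableKeyPath<Int, Int> = \.self

// Appending an identity key path is an identity operation for the other operand.
assert(xPath.appending(path: intIdentity) == xPath)

// Identity key paths report a zero offset.
assert(MemoryLayout<Point>.offset(of: \Point.self) == 0)

// Reading or writing through the identity key path touches the whole value.
var p = Point(x: 1, y: 2)
p[keyPath: identity] = Point(x: 3, y: 4)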
We’re not using this parameter, and don’t expect to do so in the future,
so remove it. Also fold away TypeBase::usesNativeReferenceCounting()
and irgen::getReferenceCountingForType(), both of which are trivial.
This flag supports promoting KeyPath access violations to an error in
Swift 4+, while building the standard library in Swift 3 mode. This is
only necessary as long as the standard library continues to build in
Swift 3 mode. Once the standard library build migrates, it can all be
ripped out.
<rdar://problem/40115738> [Exclusivity] Enforce Keypath access as an error, not a warning in 4.2.
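For context, the kind of key path access violation being enforced looks roughly like this (a hypothetical example, not from this change; under dynamic enforcement it is reported as a runtime exclusivity failure):

struct Counter { var value = 0 }
var counter = Counter()

func bump(_ c: inout Counter, via path: WritableKeyPath<Counter, Int>) {
  // `c` holds exclusive (modify) access to `counter` for the whole call, so
  // this key path write into the same variable is an overlapping access.
  counter[keyPath: path] += 1
}

bump(&counter, via: \.value)   // key path access violation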
Use begin_unpaired_access [no_nested_conflict] for
Builtin.performInstantaneousReadAccess. This can't be optimized away
and is the proper marker to use when the access scope is unknown.
Drop the requirement that
_semantics("optimize.sil.preserve_exclusivity") be @inline(never). We
actually want these inlined into user code. Verify that the
@_semantics functions are not inlined or otherwise tampered with prior
to serialization.
Make *no* change to propagate @inline(__always) into LLVM. This no longer has
any relationship to this PR and can be investigated separately.
Since the functions produce pointers with tightly-scoped lifetimes, there's no formal reason these have to work only on `inout` things. Now that arguments can be +0, we can even do this without copying values that we already have at +0.
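As a user-level analogue of what a tightly-scoped pointer to a non-`inout`, +0 value looks like (this uses the standard library's withUnsafeBytes(of:), not the functions changed here):

struct Header { var tag: UInt8; var length: UInt32 }
let header = Header(tag: 1, length: 128)

// The pointer's lifetime is confined to the closure, so a read-only borrow
// of `header` is enough; no copy and no `inout` access is required.
let byteCount = withUnsafeBytes(of: header) { raw in
  raw.count   // == MemoryLayout<Header>.size
}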
NOTE:
1. This cannot happen today with the +1 runtime.
2. This is tested by an updated builtins.swift test for +0 that I am going to
commit in a little bit.
3. We immediately forward the value when we put it into memory.
rdar://34222540
Otherwise, we break SILGen tests when they are updated for +0. This breakage
occurs since we use the original cleanup from args[0] instead of forwarding that
cleanup and creating a new cleanup on the result.
rdar://34222540
This eliminates a case where we were not moving a cleanup when forwarding and
instead just jammed the original cleanup and the new value into one managed
value.
NFC, but helps the ownership verifier.
rdar://34222540
* Reduce array abstraction on Apple platforms when dealing with literals
Part of the ongoing quest to reduce Swift array literal abstraction
penalties: make the SIL optimizer able to eliminate bridging overhead
when dealing with array literals.
Introduce a new classify_bridge_object SIL instruction to handle the
logic of extracting platform-specific bits from a Builtin.BridgeObject
value that indicate whether it contains an ObjC tagged-pointer object
or a normal ObjC object. This allows the SIL optimizer to eliminate
these checks, which allows constant folding a ton of code. On the example
added to test/SILOptimizer/static_arrays.swift, this results in 4x
less SIL code, and also leads to a lot more commonality between Linux
and Apple platform codegen when passing an array literal.
This also introduces a couple of SIL combines for patterns that occur
in the array literal passing case.
These will be used for unit-testing the Type::join functionality in the
type checker. The result of the join is replaced during constraint
generation with the actual type.
There is currently no checking for whether the arguments can be used to
statically compute the value, so bad things will likely happen if
e.g. they are type variables. Once more of the basic functionality of
Type::join is working I'll make this a bit more bullet-proof in that
regard.
They include:
// Compute the join of T and U and return the metatype of that type.
Builtin.type_join<T, U, V>(_: T.Type, _: U.Type) -> V.Type
// Compute the join of &T and U and return the metatype of that type.
Builtin.type_join_inout<T, U, V>(_: inout T, _: U.Type) -> V.Type
// Compute the join of T.Type and U.Type and return that type.
Builtin.type_join_meta<T, U, V>(_: T.Type, _: U.Type) -> V.Type
I've added a couple of simple tests to start off, based on what currently
works (i.e., doesn't cause an assert, crash, etc.).
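A hedged sketch of what such a test might look like (the class and helper names here are made up; exercising Builtin directly requires a -parse-stdlib build):

class Base {}
class D1: Base {}
class D2: Base {}

// Forces both arguments to have the same static type.
func expectEqualType<T>(_: T.Type, _: T.Type) {}

// Constraint generation replaces the join result with the computed type,
// so the join of D1 and D2 should come out as Base.
expectEqualType(Builtin.type_join(D1.self, D2.self), Base.self)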
This is already an RValue invariant that used to be enforced upon RValue
construction. We put in a hack to work around a bug where that was not occurring
and changed RValue constructors to instead load stored objects when they needed
to. But the problem is that since then we have added more constructors that
provide other ways to create such an invalid RValue.
I added verification to many parts of RValue and exposed an additional verify
method that we can eventually invoke at the end of emitRValue() to verify our
invariants. This will give me the confidence to make that assumption in other
parts of SILGen without worry.
I also performed a small amount of cleanup of RValue construction.
rdar://33358110
SILBuilder now tracks data dependencies between instructions
that open existentials and uses of the opened type, so
SILGen's mechanism for this is no longer needed.
In particular, this simplifies ArchetypeCalleeBuilder.
With the introduction of special decl names, `Identifier getName()` on
`ValueDecl` will be removed and pushed down to nominal declarations
whose name is guaranteed not to be special. Prepare for this by calling
`DeclBaseName getBaseName()` instead where appropriate.