Made it clear that one code path was to be followed if
-enable-experimental-lexical-lifetimes was passed and an entirely
different one was to be followed otherwise.
In 79bc4cba31, when
-enable-experimental-lexical-lifetimes was passed, all arguments of
non-trivial type were given borrow scopes. That is incorrect because
address types should not receive borrow scopes. Here, that is fixed by
only introducing lexical borrow scopes for arguments whose type is not an
address type.
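A minimal sketch of the guard this implies (the argument and type queries
are existing SIL APIs; introduceLexicalBorrowScope is a hypothetical
stand-in for the actual scope introduction):

#include "swift/SIL/SILFunction.h"

using namespace swift;

// Hypothetical helper standing in for the actual scope introduction.
void introduceLexicalBorrowScope(SILArgument *arg);

void addLexicalBorrowScopes(SILFunction *function) {
  for (SILArgument *arg : function->getArguments()) {
    // Address types must not receive borrow scopes.
    if (arg->getType().isAddress())
      continue;
    // Trivial values carry no ownership, so they get no scope either.
    if (arg->getType().isTrivial(*function))
      continue;
    introduceLexicalBorrowScope(arg);
  }
}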
In 79bc4cba31, when
-enable-experimental-lexical-lifetimes was passed, every argument was
marked to receive an end_borrow, although not every argument was borrowed
(that part was correct). Fix the mistake by only marking the arguments which are
actually borrowed as such.
Previously, the SILInliner had a few lines of special-case code to
handle adding end_borrow instructions to the throw block corresponding
to a try_apply instruction. Because there's a mechanism for tracking
all places where these end_borrows need to be inserted, just use that
mechanism rather than having a special case for the throw block.
This is a new instruction that can be used by SILGen to perform a semantic move
between two entities that are considered separate variables at the AST
level. I am going to use it to implement an experimental borrow checker.
This PR contains the following:
1. I define move_value, setup parsing, printing, serializing, deserializing,
cloning, and filled in all of the visitors as appropriate.
2. I added createMoveValue and emitMoveValueOperation SILBuilder
APIs. createMoveValue always creates a move and asserts if it is passed a
trivial type. emitMoveValueOperation, in contrast, will short-circuit if
passed a trivial value and just return the trivial value. (A sketch
contrasting the two appears below, after this list.)
3. I added IRGen tests to show that we can push this through the entire system.
This is all just scaffolding for the instruction to live in SIL land; as of
this PR it doesn't actually do anything.
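A rough sketch contrasting the two builder entry points from (2), assuming
a SILBuilder b; the exact signatures may differ slightly:

#include "swift/SIL/SILBuilder.h"

using namespace swift;

// emitMoveValueOperation short-circuits for trivial values and returns
// the operand unchanged; otherwise it emits a move_value.
SILValue moveIfNonTrivial(SILBuilder &b, SILLocation loc, SILValue v) {
  return b.emitMoveValueOperation(loc, v);
}

// createMoveValue unconditionally builds a move_value and asserts if
// handed a trivial value.
MoveValueInst *moveAlways(SILBuilder &b, SILLocation loc, SILValue v) {
  return b.createMoveValue(loc, v);
}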
Track in-use iterators and update them both when instructions are
deleted and when they are added.
Safe iteration in the presence of arbitrary changes now looks like
this:
for (SILInstruction *inst : deleter.updatingRange(&bb)) {
modify(inst);
}
Fix innumerable latent bugs with iterator invalidation and callback invocation.
Remove dead code earlier and chip away at the redundant copies the compiler
generates.
I have recently been running into the issue that many of these APIs perform
the deletion operation themselves, merely notifying the caller that an
instruction is going to be deleted, instead of allowing the caller to specify
how the instruction is deleted. This causes interesting semantic issues (see
the loop in deleteInstruction that I simplified) and breaks composition, since
many parts of the optimizer use InstModCallbacks for this purpose.
To fix this, I added a notify-will-be-deleted hook to InstModCallbacks. As
with the rest of the callbacks, if the notification is not set, we do not call
any code, implying that we should have good, predictable performance in loops
since we will always skip the function call.
I also changed InstModCallbacks::deleteInst() to notify before deleting, so we
have safe behavior by default. All previous use sites of this API do not care
about being notified, and the only new use sites of this API are in
InstructionDeleter, which performs special notification behavior (it notifies
for certain sets of instructions it is going to delete before it deletes any
of them). To accommodate this, I added a bool to deleteInst to control the
behavior, defaulting to notifying. This should ensure that all other use sites
still compose correctly.
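A simplified model of the resulting shape (names and details differ from the
real InstModCallbacks; this is just to illustrate the composition):

#include "swift/SIL/SILInstruction.h"
#include <functional>

// Simplified model, not the real API: an optional hook that is only
// invoked when set, so unset callbacks cost nothing in hot loops.
struct Callbacks {
  std::function<void(swift::SILInstruction *)> notifyWillBeDeleted;

  // notify defaults to true so the default behavior is safe; callers
  // like InstructionDeleter that have already notified for a whole set
  // of instructions can pass false to avoid double notification.
  void deleteInst(swift::SILInstruction *inst, bool notify = true) {
    if (notify && notifyWillBeDeleted)
      notifyWillBeDeleted(inst);
    inst->eraseFromParent();
  }
};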
Instead, put the archetype->instruction map into SILModule.
SILOpenedArchetypesTracker tried to maintain and reconstruct the mapping locally, e.g. during a use of SILBuilder.
Having a "global" map in SILModule makes the whole logic _much_ simpler.
I'm wondering why we didn't do this in the first place.
This requires that opened archetypes must be unique in a module - which makes sense. This was the case anyway, except for keypath accessors (which I fixed in the previous commit) and in some sil test files.
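Conceptually, the result looks something like this (hypothetical names; the
real map is a member of SILModule):

#include "swift/SIL/SILValue.h"
#include "llvm/ADT/DenseMap.h"

// Hypothetical sketch: one module-wide map from each opened archetype
// to its unique defining value. Any client, e.g. a SILBuilder, can
// query it directly instead of locally reconstructing the mapping.
struct OpenedArchetypeDefs {
  llvm::DenseMap<swift::CanArchetypeType, swift::SILValue> defs;

  swift::SILValue lookup(swift::CanArchetypeType archetype) const {
    return defs.lookup(archetype);
  }
};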
Through various means, it is possible for a synchronous actor-isolated
function to escape to another concurrency domain and be called from
outside the actor. The problem existed previously, but has become far
easier to trigger now that `@escaping` closures and local functions
can be actor-isolated.
Introduce runtime detection of such data races, where a synchronous
actor-isolated function ends up being called from the wrong executor.
Do this by emitting an executor check in actor-isolated synchronous
functions, where we query the executor in thread-local storage and
ensure that it is what we expect. If it isn't, the runtime complains.
The runtime's complaints can be controlled with the environment
variable `SWIFT_UNEXPECTED_EXECUTOR_LOG_LEVEL`:
0 - disable checking
1 - warn when a data race is detected
2 - error and abort when a data race is detected
At an implementation level, this introduces a new concurrency runtime
entry point `_checkExpectedExecutor` that checks the given executor
(on which the function should always have been called) against the
executor on which it is actually called (which lives in thread-local
storage). There
is a special carve-out here for `@MainActor` code, where we check
against the OS's notion of "main thread" as well, so that `@MainActor`
code can be called via (e.g.) the Dispatch library's
`DispatchQueue.main.async`.
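In spirit, the check behaves like the following sketch (heavily simplified;
the real entry point, executor representation, and TLS access in the Swift
runtime differ):

#include <cstdio>
#include <cstdlib>

// Simplified stand-ins for the runtime's executor machinery.
struct ExecutorRef { void *identity; };

static thread_local void *currentExecutorIdentity = nullptr;

static bool isMainExecutor(ExecutorRef) { return false; } // stub
static bool isOSMainThread() { return false; }            // stub

void checkExpectedExecutor(ExecutorRef expected, unsigned logLevel) {
  if (logLevel == 0)
    return; // 0: checking disabled
  if (currentExecutorIdentity == expected.identity)
    return; // running on the expected executor
  // Carve-out: @MainActor code may arrive via DispatchQueue.main.async,
  // so also accept being on the OS's notion of the main thread.
  if (isMainExecutor(expected) && isOSMainThread())
    return;
  fprintf(stderr, "warning: actor-isolated function called on the "
                  "wrong executor\n"); // 1: warn
  if (logLevel >= 2)
    abort(); // 2: error and abort
}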
The new SIL instruction `extract_executor` performs the lowering of an
actor down to its executor, which is implicit in the `hop_to_executor`
instruction. Extend the LowerHopToExecutor pass to perform said
lowering.
My goal was to reduce the size of SILLocation. It now consists only of a storage union, which is basically a pointer, and a bitfield containing the Kind, StorageKind and flags. By far, most locations are only single pointers to an AST node. For the few cases where more data needs to be stored, this data is allocated separately, with the SILModule's bump pointer allocator.
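The layout is roughly of this shape (illustrative only; the real field names
and widths differ):

#include <cstdint>

// Illustrative sketch of the compact layout: one pointer-sized payload
// plus a small bitfield for Kind, StorageKind, and flags.
class CompactLocation {
  union Storage {
    void *astNode;      // the common case: a single AST node pointer
    void *extendedData; // rare case: extra data, bump-pointer
                        // allocated by the SILModule
  } storage;
  uint8_t kind : 4;
  uint8_t storageKind : 2; // which union member is active
  uint8_t flags : 2;
};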
While working on this, I couldn't resist doing a major refactoring to simplify the code:
* removed unused stuff
* The term "DebugLoc" was used for 3 completely different things:
- for `struct SILLocation::DebugLoc` -> renamed it to `FilePosition`
- for `hasDebugLoc()`/`getDebugSourceLoc()` -> renamed it to `hasASTNodeForDebugging()`/`getSourceLocForDebugging()`
- for `class SILDebugLocation` -> kept it as it is (though, `SILScopedLocation` would be a better name, IMO)
* made SILLocation more "functional", i.e. replaced some setters with corresponding constructors
* replaced the hand-written bitfield `KindData` with C bitfields
* updated and improved comments
This makes it easier to understand conceptually why a ValueOwnershipKind with
Any ownership is invalid, and it also allowed me to explicitly document the
lattice that relates ownership constraints and value ownership kinds.
This instruction ensures that all instructions which need to run on the specified executor actually run on that executor.
For details see the description in SIL.rst.
I think unconditional branches should be free, period. They will
mostly be removed during LLVM code gen. However, fixing this requires
significant adjustments to inlining heuristics to avoid microbenchmark
regressions at -Osize. So, instead I am just making this less
sensitive to critical edges for the sake of pipeline stability.
`get_async_continuation[_addr]` begins a suspend operation by accessing the continuation value that can resume
the task, which can then be used in a callback or event handler before executing `await_async_continuation` to
suspend the task.
bind_memory has no actual code size cost, and this is the only way to
allow rebinding memory within critical standard library
code like SmallString without regressing performance.
Today unchecked_bitwise_cast returns a value with ObjCUnowned ownership. This
is important since the instruction can truncate memory, meaning we want to
treat its result as a new object that must be copied before use.
This means that unchecked_bitwise_cast does not give OSSA a purely forwarding,
layout-compatibility-assuming cast. That role is filled by
unchecked_value_cast.
The ``base_addr_for_offset`` instruction creates a base address for offset calculations.
The result can be used by address projections, like ``struct_element_addr``, which themselves return the offset of the projected fields.
IR generation simply creates a null pointer for ``base_addr_for_offset``.
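The lowering is analogous to the classic offsetof idiom, where field offsets
are computed relative to a null base (shown here only as an analogy, not the
actual IRGen code):

#include <cstddef>
#include <cstdio>

struct S { int a; double b; };

int main() {
  // offsetof is conventionally implemented by projecting a field from
  // a null base; base_addr_for_offset lowers the same way, with IRGen
  // emitting a null pointer for projections like struct_element_addr
  // to offset from.
  printf("offset of b: %zu\n", offsetof(S, b)); // typically 8
  return 0;
}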
* a new [immutable] attribute on ref_element_addr and ref_tail_addr
* new instructions: begin_cow_mutation and end_cow_mutation
These new instructions are intended to be used for the stdlib's COW containers, e.g. Array.
They allow more aggressive optimizations, especially for Array.
Add `linear_function` and `linear_function_extract` instructions.
`linear_function` creates a `@differentiable(linear)` function-typed value from
an original function operand and a transpose function operand (optional).
`linear_function_extract` extracts either the original or transpose function
value from a `@differentiable(linear)` function.
Resolves TF-1142 and TF-1143.
Add `differentiable_function` and `differentiable_function_extract`
instructions.
`differentiable_function` creates a `@differentiable` function-typed
value from an original function operand and derivative function operands
(optional).
`differentiable_function_extract` extracts either the original or
derivative function value from a `@differentiable` function.
The differentiation transform canonicalizes `differentiable_function`
instructions, filling in derivative function operands if missing.
Resolves TF-1139 and TF-1140.
The `differentiability_witness_function` instruction looks up a
differentiability witness function (JVP, VJP, or transpose) for a referenced
function via SIL differentiability witnesses.
Add round-trip parsing/serialization and IRGen tests.
Notes:
- Differentiability witnesses for linear functions require more support.
`differentiability_witness_function [transpose]` instructions do not yet
have IRGen.
- Nothing upstream currently generates `differentiability_witness_function`
  instructions. The differentiation transform does generate them, but it
  hasn't been upstreamed yet.
Resolves TF-1141.
I found this to be really useful outside of the inliner, since it is exactly
what I needed to ensure that borrowed values used by a begin_apply have the
end_apply/abort_apply as uses. I am adding that in a forthcoming commit.
NFC.
https://forums.swift.org/t/improving-the-representation-of-polymorphic-interfaces-in-sil-with-substituted-function-types/29711
This prepares SIL to be able to more accurately preserve the calling convention of
polymorphic generic interfaces by letting the type system represent "substituted function types".
We add a couple of fields to SILFunctionType to support this:
- A substitution map, accessed by `getSubstitutions()`, which maps the generic signature
of the function to its concrete implementation. This will allow, for instance, a protocol
witness for a requirement of type `<Self: P> (Self, ...) -> ...` for a concrete conforming
type `Foo` to express its type as `<Self: P> (Self, ...) -> ... for <Foo>`, preserving the relation
to the protocol interface without relying on the pile of hacks that is the `witness_method`
protocol.
- A bool for whether the generic signature of the function is "implied" by the substitutions.
If true, the generic signature isn't really part of the calling convention of the function.
This will allow closure types to distinguish a closure being passed to a generic function, like
`<T, U> in (*T, *U) -> T for <Int, String>`, from the concrete type `(*Int, *String) -> Int`,
which will make it easier for us to differentiate the representation of those as types, for
instance by giving them different pointer authentication discriminators to harden arm64e
code.
This patch is currently NFC; it just introduces the new APIs and takes a first pass at updating
code to use them. Much more work will need to be done once we start exercising these new
fields.
This does bifurcate some existing APIs:
- SILFunctionType now has two accessors to get its generic signature.
`getSubstGenericSignature` gets the generic signature that is used to apply its
substitution map, if any. `getInvocationGenericSignature` gets the generic signature
used to invoke the function at apply sites. These differ if the generic signature is
implied.
- SILParameterInfo and SILResultInfo values carry the unsubstituted types of the parameters
and results of the function. They now have two APIs to get that type. `getInterfaceType`
returns the unsubstituted type of the generic interface, and
`getArgumentType`/`getReturnValueType` produce the substituted type that is used at
apply sites.
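A sketch of choosing between the bifurcated accessors (signatures
abbreviated; the real accessors may take additional context):

#include "swift/AST/Types.h"
#include "swift/SIL/SILModule.h"

using namespace swift;

void inspect(CanSILFunctionType fnType, SILParameterInfo param,
             SILModule &M) {
  // The signature the substitution map applies over, if any.
  auto substSig = fnType->getSubstGenericSignature();

  // The signature used to invoke the function at apply sites; this
  // differs from the above when the generic signature is implied.
  auto invokeSig = fnType->getInvocationGenericSignature();

  // The unsubstituted type from the generic interface...
  CanType ifaceTy = param.getInterfaceType();
  // ...versus the substituted type used at apply sites.
  CanType argTy = param.getArgumentType(M, fnType);

  (void)substSig; (void)invokeSig; (void)ifaceTy; (void)argTy;
}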
The XXOptUtils.h convention is already established and parallels
the SIL/XXUtils convention.
New:
- InstOptUtils.h
- CFGOptUtils.h
- BasicBlockOptUtils.h
- ValueLifetime.h
Removed:
- Local.h
- Two conflicting CFG.h files
This reorganization is helpful before I introduce more
utilities for block cloning similar to SinkAddressProjections.
Move the control flow utilities out of Local.h, which was an
unreadable, unprincipled mess. Rename it to InstOptUtils.h, and
confine it to small APIs for working with individual instructions.
These are the optimizer's additions to /SIL/InstUtils.h.
Rename CFG.h to CFGOptUtils.h and remove the one in /Analysis. Now
there is only SIL/CFG.h, resolving the naming conflict within the
swift project (this has always been a problem for source tools). Limit
this header to low-level APIs for working with branches and CFG edges.
Add BasicBlockOptUtils.h for block level transforms (it makes me sad
that I can't use BBOptUtils.h, but SIL already has
BasicBlockUtils.h). These are larger APIs for cloning or removing
whole blocks.
This provides a single instruction that converts an unmanaged value to a ref
and then strong_retains it. I expanded the definition of UNCHECKED_REF_STORAGE
to include these copy-like instructions. This instruction is valid in all SIL.
The reason why I am adding this instruction is that currently, when we emit an
access to an unowned (unsafe) ivar, we use an unmanaged_to_ref and a strong
retain. To the optimizer, this can look like a strong retain that can
potentially be optimized away. By combining the two into a single new
instruction, we avoid this potential problem, since the pattern match no
longer fires.
With the advent of dynamic_function_ref, the actual callee of such a ref
may vary. Optimizations should not assume they know the contents of a
function referenced by dynamic_function_ref. Introduce
getReferencedFunctionOrNull, which returns null for such function
refs, and getInitialReferencedFunction, which returns the initially
referenced function.
Use as appropriate.
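For example, an optimization that needs to inspect the callee body should now
look roughly like this (sketch; header location approximate):

#include "swift/SIL/ApplySite.h"

using namespace swift;

// Tolerate a null result, since a dynamic_function_ref's actual callee
// may vary at run time.
bool tryOptimizeCall(ApplySite apply) {
  SILFunction *callee = apply.getReferencedFunctionOrNull();
  if (!callee)
    return false; // dynamically replaceable: don't assume the body
  // ... analyze or transform based on the known callee ...
  return true;
}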
rdar://50959798
The recursivelyDeleteTriviallyDeadInstructions utility takes a
callback to be called for every deleted instruction. However, it
wasn't passing this callback to eraseFromParentWithDebugInsts. The
callback was used to update an iterator in some cases, so not calling
it resulted in iterator invalidation.
Doing this also cleans up both APIs:
recursivelyDeleteTriviallyDeadInstructions and eraseFromParentWithDebugInsts.
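A typical use, where the callback keeps the caller's iterator valid across
deletions (sketch; the utility's exact signature is simplified):

#include "swift/SIL/SILBasicBlock.h"
#include "swift/SILOptimizer/Utils/InstOptUtils.h"

using namespace swift;

void processBlock(SILBasicBlock &bb) {
  for (auto ii = bb.begin(); ii != bb.end();) {
    SILInstruction *inst = &*ii++;
    recursivelyDeleteTriviallyDeadInstructions(
        inst, /*force=*/false, [&](SILInstruction *deleted) {
          // Advance the cursor if it points at an instruction that is
          // about to be deleted, so iteration stays valid.
          if (ii != bb.end() && &*ii == deleted)
            ++ii;
        });
  }
}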
The ownership kind is Any for trivial types, or Owned otherwise, but
whether a type is trivial or not will soon depend on the resilience
expansion.
This means that a SILModule now uniques two SILUndefs per type instead
of one, and serialization uses two distinct sentinel IDs for this
purpose as well.
For now, the resilience expansion is not actually used here, so this
change is NFC, other than changing the module format.
Besides fixing the compiler crash, this change also improves the stack-nesting correction mechanisms in the inliners:
* Instead of trying to correct the nesting after each inlining of a callee, correct the nesting once when inlining is finished for a caller function.
This fixes a potential compile time problem, because StackNesting iterates over the whole function.
In the worst case, this can lead to quadratic behavior when many begin_apply instructions with overlapping stack locations are inlined.
* Because we are doing it only once for a caller, we can remove the complex logic for checking if it is necessary.
We can just do it unconditionally in case any coroutine gets inlined.
The inliners iterate over all instructions of a function anyway, so this does not increase the computational complexity (StackNesting is roughly linear in the number of instructions).
rdar://problem/47615442