When creating a tuple, the type needs to be specified explicitly; otherwise, if the original tuple has labels, the result causes a type-mismatch verification error.
rdar://152588539
Prevent sinking of stores if there are instructions other than `load` which may read from memory.
Previously, this kind of memory dependency was ignored.
Fixes SIL verifier crashes or, in the worst case, miscompiles.
rdar://150205299
Currently we don't support hoisting ownership instructions.
But the check missed stores of Optional.none, because such a value has no ownership even if the optional type is not trivial.
Fixes a SIL verifier crash.
https://github.com/swiftlang/swift/issues/79491
LICM in OSSA is enabled only for instructions involving trivial values.
begin_access/end_access is eligible for LICM in OSSA. However, we need to
collect applies to check for conflicts.
rdar://143835241
Use RegularLocation::getAutoGeneratedLocation() while hoisting a load.
Previously the source location of the preheader's terminator was used;
this need not be a RegularLocationKind, which triggered verification errors.
Although I don't plan to bring over new assertions wholesale
into the current qualification branch, it's entirely possible
that various minor changes in main will use the new assertions;
having this basic support in the release branch will simplify that.
(This is why I'm adding the includes as a separate pass from
rewriting the individual assertions.)
When computing an access path, if a struct with unreferenceable storage
is encountered, bail out. Otherwise, an attempt will be made to
construct such a struct via the struct instruction, which isn't possible.
rdar://127013278
For years, optimizer engineers have been hitting a common bug caused by passes
assuming all SILValues have a parent function, only to be surprised by SILUndef.
SILUndef shows up rarely enough that the problem usually only surfaces late in
testing. This patch eliminates the problem by uniquing SILUndef at the
function level instead of the module level. With that change it makes sense for
SILUndef to have a parent function, so we can define an API to get its parent
function and rule out this class of bug.
rdar://123484595
This is phase-1 of switching from llvm::Optional to std::optional in the
next rebranch. llvm::Optional was removed from upstream LLVM, so we need
to migrate off rather soon. On Darwin, std::optional and llvm::Optional
have the same layout, so we don't need to be as concerned about ABI
beyond the name mangling. `llvm::Optional` is only returned from one
function:
```
getStandardTypeSubst(StringRef TypeName,
bool allowConcurrencyManglings);
```
It's the return value, so it should not impact the mangling of the
function, and the layout is the same as `std::optional`, so it should be
mostly okay. This function doesn't appear to have users, and the ABI was
already broken two years ago for concurrency and no one seemed to notice,
so this should be "okay".
I'm doing the migration incrementally so that folks working on main can
cherry-pick back to the release/5.9 branch. Once 5.9 is done and locked
away, then we can go through and finish the replacement. Since `None`
and `Optional` show up in contexts where they are not `llvm::None` and
`llvm::Optional`, I'm preparing the work now by going through and
removing the namespace unwrapping and making the `llvm` namespace
explicit. This should make it fairly mechanical to go through and
replace llvm::Optional with std::optional, and llvm::None with
std::nullopt. It's also a change that can be brought onto the
release/5.9 branch with minimal impact. This should be an NFC change.
`getValue` -> `value`
`getValueOr` -> `value_or`
`hasValue` -> `has_value`
`map` -> `transform`
The old API will be deprecated in the rebranch.
To avoid merge conflicts, use the new API already in the main branch.
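As a rough illustration (my own example, not from the patch), the renames map directly onto the std::optional spellings:
```
#include <iostream>
#include <optional>

int main() {
  std::optional<int> count = 42;

  if (count.has_value())                   // was: hasValue()
    std::cout << count.value() << "\n";    // was: getValue()

  std::cout << count.value_or(0) << "\n";  // was: getValueOr(0)

  // std::optional::transform is C++23; llvm::Optional offered the same
  // operation under the name map().
  auto doubled = count.transform([](int c) { return c * 2; });  // was: map(...)
  std::cout << doubled.value_or(-1) << "\n";
  return 0;
}
```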
rdar://102362022
Andy already created the new API some time ago but didn't go through and update
the old occurrences. I did that in this PR and then deprecated the old API. The
tree is clean, so I could just remove it, but I decided to be nicer to
downstream people by deprecating it first.
This seems to compile correctly in release builds. But it does go
through an llvm_unreachable path, so it really isn't safe to leave unfixed.
When the accessPath has an offset, propagate it through the recursive
calls. This bug may have been introduced when the offset was moved outside of
the sequence of path indices: the code rebuilds a path from the indices
without adding back the offset.
LICM asserts during projectLoadValue when it needs to rematerialize a
loaded value within the loop using projections and the loop-invariant
address is an index_addr.
Basically:
  %a = index_addr %4 : $*Wrapper, %1 : $Builtin.Word
  store %_ to %a : $*Wrapper
  br loop
loop:
  %f = struct_element_addr %a
  %v = load %f : $*Value
  %s = struct $Wrapper (%v : $Value)
  store %s to %a : $*Wrapper
The store inside the loop gets deleted, and the load gets hoisted out of
the loop, but it then loads $Wrapper instead of $Value.
Fixes rdar://92191909 (LICM assertion: isSubObjectProjection(), MemAccessUtils.h, line 1069)
AccessPathWithBase::compute can return a valid access path with an unidentified base.
In such cases, we cannot LICM stores, because there is no base address to check for invariance.
Instead of caching alias results globally for the module, make AliasAnalysis a FunctionAnalysisBase which caches the alias results per function.
Why?
* So far the result caches could only grow. They were reset when they reached a certain size. This was not ideal. Now, they are invalidated whenever the function changes.
* It was not possible to actually invalidate an alias analysis result. This is required, for example in TempRValueOpt and TempLValueOpt (so far it was done manually with invalidateInstruction).
* Type-based alias analysis results were also cached for the whole module, while they actually depend on the function, because they depend on the function's resilience expansion. This was a potential bug.
I also added a new PassManager API to directly get a function-based analysis:
getAnalysis(SILFunction *f)
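A minimal sketch of how a function pass might pick this up (the pass and its surrounding boilerplate are made up for illustration; only the per-function getAnalysis overload is from this change):
```
// Hypothetical SILFunctionTransform using the new per-function accessor.
class SomeAliasingPass : public SILFunctionTransform {
  void run() override {
    SILFunction *f = getFunction();
    // The results are cached per function and invalidated whenever the
    // function changes.
    auto *aliasAnalysis = PM->getAnalysis<AliasAnalysis>(f);
    // ... query aliasAnalysis for instructions of f ...
  }
};
```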
The second change of this commit is the removal of the instruction-index indirection for the cache keys. Now the cache keys directly work on instruction pointers instead of instruction indices. This reduces the number of hash table lookups for a cache lookup from 3 to 1.
This indirection was needed to avoid dangling instruction pointers in the cache keys. But this is not needed anymore, because of the new delayed instruction deletion mechanism.
Generalize the AccessUseDefChainCloner in MemAccessUtils. It was
always meant to work this way; it just needed a client.
Add a new API AccessUseDefChainCloner::canCloneUseDefChain().
Add a bailout for begin_borrow and mark_dependence. Those
projections may appear on an access path, but they can't be
individually cloned without compensating.
Delete InteriorPointerAddressRebaseUseDefChainCloner.
Add a check in OwnershipRAUWHelper for canCloneUseDefChain.
Add test cases for begin_borrow and mark_dependence.
For combined load-store hoisting, split loads that contain the
loop-stored value into a single load from the same address as the
loop-stores, and a set of loads disjoint from the loop-stores. The
single load will be hoisted while sinking the stores to the same
address. The disjoint loads will be hoisted normally in a subsequent
iteration on the same loop.
loop:
  load %outer
  store %inner1
exit:

Will be split into:

loop:
  load %inner1
  load %inner2
  store %inner1
exit:

Then, combined load/store hoisting will produce:

load %inner1
loop:
  load %inner2
exit:
store %inner1
The LICM algorithm was not robust with respect to address projections
because it identified a projected address by its SILValue. This should
never be done! Use AccessPath instead.
Fixes regressions caused by rdar://66791257 (Print statement provokes
"Can't unsafeBitCast between types of different sizes" when
optimizations enabled)
Add AccessedStorage::compute and computeInScope to mirror AccessPath.
Allow recovering the begin_access for Nested storage.
Add AccessedStorage.visitRoots().
Things that have come up recently but are somewhat blocked on this:
- Moving AccessMarkerElimination down in the pipeline
- SemanticARCOpts correctness and improvements
- AliasAnalysis improvements
- LICM performance regressions
- RLE/DSE improvements
Begin to formalize the model for valid memory access in SIL. Ignoring
ownership, every access is a def-use chain in three parts:
object root -> formal access base -> memory operation address
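As a plain C++ analogy (not SIL), just to make the three parts concrete:
```
struct S { int x; };
struct C { S prop; };

int read(C *object) {
  // object           ~ object root
  // &object->prop    ~ formal access base (the accessed property)
  // &object->prop.x  ~ memory operation address (a subobject of the base)
  return object->prop.x;
}
```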
AccessPath abstracts over this path and standardizes the identity of a
memory access throughout the optimizer. This abstraction is the basis
for a new AccessPathVerification.
With that verification, we now have all the properties we need for the
type of analysis required for exclusivity enforcement, but now
generalized for any memory analysis. This is suitable for an extremely
lightweight analysis with no side data structures. We currently have a
massive amount of ad-hoc memory analysis throughout SIL, which is
incredibly unmaintainable, bug-prone, and not performance-robust. We
can begin taking advantage of this verifiably complete model to solve
that problem.
The properties this gives us are:
Access analysis must be complete over memory operations: every memory
operation needs a recognizable valid access. An access can be
unidentified only to the extent that it is rooted in some non-address
type and we can prove that it is at least *not* part of an access to a
nominal class or global property. Pointer provenance is also required
for future IRGen-level bitfield optimizations.
Access analysis must be complete over address users: for an identified
object root all memory accesses including subobjects must be
discoverable.
Access analysis must be symmetric: use-def and def-use analysis must
be consistent.
AccessPath is merely a wrapper around the existing accessed-storage
utilities and IndexTrieNode. Existing passes already very successfully
use this approach, but in an ad-hoc way. With a general utility we
can:
- update passes to use this approach to identify memory access,
reducing the space and time complexity of those algorithms.
- implement an inexpensive on-the-fly, debug mode address lifetime analysis
- implement a lightweight debug mode alias analysis
- ultimately improve the power, efficiency, and maintainability of
full alias analysis
- make our type-based alias analysis sensitive to the access path
It is legal for the optimizer to consider code after a loop always
reachable, but when a loop has no exits, or when the loop's exits are
dominated by a conditional statement, we should not consider
conditional statements within the loop as dominating all possible
execution paths through the loop. At least not when there is at least
one path through the loop that contains a "synchronization point",
such as a function that may contain a memory barrier, perform I/O, or
exit the program.
Sadly, we still don't model synchronization points in the optimizer,
so we need to conservatively assume all loops have a synchronization
point and avoid hoisting conditional traps that may never be executed.
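A contrived illustration of the hazard (my own C++ sketch; waitForEvent stands in for any synchronization point):
```
// If waitForEvent() blocks forever or terminates the process, the trap below
// is never reached. Hoisting the conditional trap above the loop would
// introduce a crash that the original program never exhibits.
extern void waitForEvent();  // assumed: may perform I/O, block, or exit

void spin(bool flag) {
  while (true) {             // loop with no exit
    waitForEvent();          // synchronization point
    if (flag)
      __builtin_trap();      // must not be hoisted out of the loop
  }
}
```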
Fixes rdar://66791257 (Print statement provokes "Can't unsafeBitCast
between types of different sizes" when optimizations enabled)
Originated in 2014.
Specifically:
1. I made methods and variables camelCase.
2. I expanded out variable names (e.g. bb -> block, predBB -> predBlocks, U -> wrappedUse).
3. I changed typedef -> using.
4. I changed a few C-style for loops into for-each loops using llvm::enumerate (see the sketch below).
NOTE: I left the parts needed for syncing to LLVM in the old style since LLVM
needs these to exist for CRTP to work correctly for the SILSSAUpdater.
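As a rough sketch of items 3 and 4 (the container and helper names are made up):
```
// 3. typedef -> using
// typedef llvm::SmallVector<SILBasicBlock *, 8> BlockVector;   // old
using BlockVector = llvm::SmallVector<SILBasicBlock *, 8>;      // new

// 4. C-style index loop -> range-based loop with llvm::enumerate
// for (unsigned i = 0, e = blocks.size(); i != e; ++i)
//   visit(i, blocks[i]);
for (auto p : llvm::enumerate(blocks))
  visit(p.index(), p.value());
```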
For use outside access enforcement passes.
Add isUniquelyIdentifiedAfterEnforcement.
Rename functions for clarity and generality.
Rename isUniquelyIdentifiedOrClass to isFormalAccessBase.
Rename findAccessedStorage to identifyFormalAccess.
Rename findAccessedStorageNonNested to findAccessedStorage.
Part of generalizing the utility for use outside the access
enforcement passes.
Even if a store does not dominate the loop exits, it makes sense to move it out of the loop if the pre-header also has a store to the same memory location.
When this is done, dead-store elimination can then most likely remove the store in the pre-header.
This loop optimization hoists and sinks a group of loads and stores to
the same address.
Consider this SIL...
PRELOOP:
  %stackAddr = alloc_stack $Index
  %outerAddr1 = struct_element_addr %stackAddr : $*Index, #Index.value
  %innerAddr1 = struct_element_addr %outerAddr1 : $*Int, #Int._value
  %outerAddr2 = struct_element_addr %stackAddr : $*Index, #Index.value
  %innerAddr2 = struct_element_addr %outerAddr2 : $*Int, #Int._value
LOOP:
  %_ = load %innerAddr2 : $*Builtin.Int64
  store %_ to %outerAddr2 : $*Int
  %_ = load %innerAddr1 : $*Builtin.Int64
There are two bugs:
1) LICM miscompiles code during combined load/store hoisting and sinking.
When the loop contains an aliasing load from a different projection
value, the optimization sinks the store but never replaces the
load. At runtime, the load reads a stale value.
FIX: isOnlyLoadedAndStored needs to check for other load instructions
before hoisting/sinking a seemingly unrelated set of
loads/stores. Checking side effect instructions is insufficient. The
same bug could happen with stores, which also do not produce side
effects.
Fixes <rdar://61246061> LICM miscompile:
Combined load/store hoisting/sinking with aliases
2) The LICM algorithm is not robust with respect to address projection
because it identifies a projected address by its SILValue. This
should never be done! It is trivial to represent a projection path
using an IndexTrieNode (there is also an abstraction called
"ProjectionPath", but it should _never_ actually be stored by an
analysis because of the time and space complexity of doing so).
The second bug is not necessary to fix for correctness, so will be
fixed in a follow-up commit.
Global initializers are executed only once.
Therefore it's possible to hoist such an initializer call to a loop pre-header, provided there are no conflicting side effects in the loop before the call.
Also, the call must post-dominate the loop pre-header. Otherwise it would be executed speculatively.
and eliminate dead code. This is meant to be a replacement for the utility:
recursivelyDeleteTriviallyDeadInstructions. The new utility performs more aggressive
dead-code elimination for ownership SIL.
This patch also migrates most non-force-delete uses of
recursivelyDeleteTriviallyDeadInstructions to the new utility.
and migrates one force-delete use of recursivelyDeleteTriviallyDeadInstructions
(in IRGenPrepare) to use the new utility.
This is a combination of load hoisting and store sinking, e.g.
preheader:
  br header_block
header_block:
  %x = load %not_aliased_addr
  // use %x and define %y
  store %y to %not_aliased_addr
  ...
exit_block:

is transformed to:

preheader:
  %x = load %not_aliased_addr
  br header_block
header_block:
  // use %x and define %y
  ...
exit_block:
  store %y to %not_aliased_addr