To verify whether a function may read from an indirect argument, don't use AliasAnalysis.
Instead, use the CalleeCache to get the list of callees of an apply instruction.
Then use a simple callback into the Swift `Function` to check whether a callee has any relevant memory effects set.
This avoids a dependency from SIL to the Optimizer.
It also fixes a linker error when building some unit tests in debug mode.
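For illustration, a rough C++ sketch of the lookup; CalleeCache and CalleeList follow the wording above, but `mayReadFromArgument` and the exact signatures are hypothetical:
```
// Sketch only: check whether any possible callee of `apply` may read
// from the indirect argument at `argIdx`, using the callee cache instead
// of AliasAnalysis.
static bool mayReadIndirectArgument(FullApplySite apply, unsigned argIdx,
                                    CalleeCache *calleeCache) {
  CalleeList callees = calleeCache->getCalleeList(apply);
  // If not every possible callee is known, conservatively assume a read.
  if (!callees.allCalleesVisible())
    return true;
  for (SILFunction *callee : callees) {
    // Hypothetical callback: does the callee's memory effects set include
    // a read of this argument?
    if (callee->mayReadFromArgument(argIdx))
      return true;
  }
  return false;
}
```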
`llvm::SmallSetVector` changed semantics
(https://reviews.llvm.org/D152497), resulting in build failures in Swift.
The old semantics allowed usage of types that did not have an
`operator==`, because `SmallDenseSet` uses `DenseMapInfo<T>::isEqual` to
determine equality. The new implementation switched to using
`std::find`, which internally uses `operator==`. This container is used
fairly frequently with `swift::Type`, which intentionally deletes
`operator==` because it is not the canonical type and therefore cannot be
compared in normal circumstances.
This patch adds a new type alias in the `swift` namespace that provides
the old semantic behavior of `SmallSetVector`. I've also gone through
and replaced usages of `llvm::SmallSetVector` with
`swift::SmallSetVector` in places where we're storing a type that
doesn't implement, or explicitly deletes, `operator==`. The changes to
`llvm::SmallSetVector` should improve compile-time performance, so I
left `llvm::SmallSetVector` in place where possible.
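A minimal sketch of such an alias, assuming LLVM's SetVector, SmallVector, and DenseSet headers; the exact header and spelling in the Swift tree may differ:
```
#include "llvm/ADT/DenseSet.h"
#include "llvm/ADT/SetVector.h"
#include "llvm/ADT/SmallVector.h"

namespace swift {

// Backing the SetVector with a SmallDenseSet restores the old behavior:
// membership is decided via DenseMapInfo<T>::isEqual, so the element type
// does not need an operator==.
template <typename T, unsigned N>
using SmallSetVector =
    llvm::SetVector<T, llvm::SmallVector<T, N>, llvm::SmallDenseSet<T, N>>;

} // end namespace swift
```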
Reformat everything now that we have explicit `llvm::` namespace qualifiers. I've
separated this from the main commit to help manage merge conflicts and
to make the mega-patch a bit easier to read.
This is phase 1 of switching from `llvm::Optional` to `std::optional` in the
next rebranch. `llvm::Optional` was removed from upstream LLVM, so we need
to migrate off it rather soon. On Darwin, `std::optional` and `llvm::Optional`
have the same layout, so we don't need to be as concerned about ABI
beyond the name mangling. `llvm::Optional` is returned from only one
function:
```
getStandardTypeSubst(StringRef TypeName,
bool allowConcurrencyManglings);
```
Since the type appears only in the return value, it should not impact the
mangling of the function, and the layout is the same as `std::optional`, so it
should be mostly okay. This function doesn't appear to have any users, and its
ABI was already broken two years ago for concurrency without anyone noticing,
so this should be "okay".
I'm doing the migration incrementally so that folks working on main can
cherry-pick back to the release/5.9 branch. Once 5.9 is done and locked
away, we can go through and finish the replacement. Since `None`
and `Optional` show up in contexts where they are not `llvm::None` and
`llvm::Optional`, I'm preparing the work now by removing the namespace
unwrapping and making the `llvm::` namespace explicit. This should make it
fairly mechanical to replace `llvm::Optional` with `std::optional`, and
`llvm::None` with `std::nullopt`. It's also a change that can be brought onto
release/5.9 with minimal impact. This should be an NFC change.
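Purely as an illustration of the two phases (the function below is made up, not code from the tree):
```
#include "llvm/ADT/Optional.h"
#include "llvm/ADT/StringRef.h"
#include <optional>

// Phase 1 (this change): spell out the llvm:: qualification instead of
// relying on unwrapped `Optional` / `None`.
llvm::Optional<llvm::StringRef> findPrefix(llvm::StringRef name) {
  if (name.empty())
    return llvm::None;        // was: return None;
  return name.take_front(1);
}

// Phase 2 (after the rebranch): a purely mechanical rename.
std::optional<llvm::StringRef> findPrefixStd(llvm::StringRef name) {
  if (name.empty())
    return std::nullopt;      // llvm::None -> std::nullopt
  return name.take_front(1);
}
```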
Optimizations can rely on alias analysis to know that an in-argument (or part of it) is not actually read.
We have to do the same in the verifier: if alias analysis says that an in-argument is not read, the memory location does not need to be initialized.
Fixes a false verifier error.
rdar://106806899
copy_value is marked as having no side effects.
Teach LoopRotate to avoid moving it to the preheader and instead duplicate it while rotating the loop.
`getValue` -> `value`
`getValueOr` -> `value_or`
`hasValue` -> `has_value`
`map` -> `transform`
The old API will be deprecated in the rebranch.
To avoid merge conflicts later, adopt the new API on the main branch now.
rdar://102362022
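For illustration (not code from the tree), the renames on an `llvm::Optional<int>`, with the old spellings kept in comments:
```
#include "llvm/ADT/Optional.h"

// The new spellings match std::optional's API.
int demo(llvm::Optional<int> maybe) {
  if (!maybe.has_value())                     // was: maybe.hasValue()
    return 0;
  int v = maybe.value();                      // was: maybe.getValue()
  llvm::Optional<int> doubled =
      maybe.transform([](int x) { return x * 2; });  // was: maybe.map(...)
  return doubled.value_or(v);                 // was: doubled.getValueOr(v)
}
```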
Andy already created the new API some time ago but didn't go through and update
the old occurrences. I did that in this PR and then deprecated the old API. The
tree is clean, so I could just remove it, but I decided to be nicer to
downstream users by deprecating it first.
This seems to compile correctly in release builds, but it does go
through an llvm_unreachable path, so it really isn't safe to leave unfixed.
When the access path has an offset, propagate it through the recursive
calls. The bug may have been introduced when the offset was moved outside of the
sequence of path indices: the code rebuilds a path from the indices
without adding back the offset.
LICM asserts during projectLoadValue when it needs to rematerialize a
loaded value within the loop using projections and the loop-invariant
address is an index_addr.
Basically:
```
%a = index_addr %4 : $*Wrapper, %1 : $Builtin.Word
store %_ to %a : $*Wrapper
br loop
loop:
  %f = struct_element_addr %a
  %v = load %f : $*Value
  %s = struct $Wrapper (%v : $Value)
  store %s to %a : $*Wrapper
```
The store inside the loop is deleted, and the load is hoisted out of the
loop, but it now loads $Wrapper instead of $Value.
Fixes rdar://92191909 (LICM assertion: isSubObjectProjection(), MemAccessUtils.h, line 1069)
A guaranteed value produced by a begin_borrow can't both be used as
an operand of an ownership-forwarding instruction and be reborrowed by
being used as a phi argument.
Avoid that by stopping rotation when encountering a header block
containing an ownership-forwarding instruction whose forwarded ownership
kind is guaranteed; such a rotation may result in using both the
original guaranteed value and the resulting guaranteed value as phi
arguments.
AccessPathWithBase::compute can return a valid access path with an unidentified base.
In such cases, we cannot LICM stores, because there is no base address to check for invariance.
Fix innumerable latent bugs with iterator invalidation and callback invocation.
Remove dead code earlier and chip away at the redundant copies the compiler generates.
Instead of caching alias results globally for the module, make AliasAnalysis a FunctionAnalysisBase which caches the alias results per function.
Why?
* So far the result caches could only grow. They were reset when they reached a certain size. This was not ideal. Now, they are invalidated whenever the function changes.
* It was not possible to actually invalidate an alias analysis result. This is required, for example, in TempRValueOpt and TempLValueOpt (so far this was done manually with invalidateInstruction).
* Type-based alias analysis results were also cached for the whole module, while they actually depend on the function, because they depend on the function's resilience expansion. This was a potential bug.
I also added a new PassManager API to directly get a function-based analysis:
`getAnalysis(SILFunction *f)`
The second change of this commit is the removal of the instruction-index indirection for the cache keys. Now the cache keys directly work on instruction pointers instead of instruction indices. This reduces the number of hash table lookups for a cache lookup from 3 to 1.
This indirection was needed to avoid dangling instruction pointers in the cache keys. But this is not needed anymore, because of the new delayed instruction deletion mechanism.
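A hedged usage sketch from inside a function pass; the `getAnalysis(SILFunction *f)` call follows the commit, while the surrounding lines (getFunction, getPassManager) are illustrative boilerplate:
```
// Inside a SILFunctionTransform-style pass (boilerplate omitted):
SILFunction *f = getFunction();
// The alias results are now cached per function and invalidated when the
// function changes, rather than accumulating in a module-wide cache.
AliasAnalysis *aa = getPassManager()->getAnalysis<AliasAnalysis>(f);
```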
This is a quick fix for a stack overflow in the case of very large functions.
TODO: Ideally this algorithm would be implemented as an iterative worklist algorithm.
rdar://77563057
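As a generic illustration (not the compiler's actual code), the TODO refers to this kind of rewrite: replace recursion with an explicit worklist so the call stack stays bounded even for very large functions:
```
#include <vector>

struct Node {
  std::vector<Node *> successors;
  bool visited = false;
};

// Visit all nodes reachable from `root` without recursion, so very large
// graphs cannot overflow the call stack.
void visitAll(Node *root) {
  std::vector<Node *> worklist{root};
  while (!worklist.empty()) {
    Node *n = worklist.back();
    worklist.pop_back();
    if (n->visited)
      continue;
    n->visited = true;
    // Process n here, then push successors instead of recursing into them.
    for (Node *succ : n->successors)
      worklist.push_back(succ);
  }
}
```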
Generalize the AccessUseDefChainCloner in MemAccessUtils. It was
always meant to work this way; it just needed a client.
Add a new API, AccessUseDefChainCloner::canCloneUseDefChain().
Add a bailout for begin_borrow and mark_dependence. Those
projections may appear on an access path, but they can't be
individually cloned without compensation.
Delete InteriorPointerAddressRebaseUseDefChainCloner.
Add a check in OwnershipRAUWHelper for canCloneUseDefChain.
Add test cases for begin_borrow and mark_dependence.
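A hedged sketch of the check in OwnershipRAUWHelper; canCloneUseDefChain is the API named above, but whether it is called as a member of AccessUseDefChainCloner or through a convenience wrapper is glossed over, and the surrounding lines are illustrative:
```
// Before rewriting uses, refuse to proceed when the access use-def chain
// above the new address cannot be cloned: a begin_borrow or mark_dependence
// on the path would require compensation code.
if (!canCloneUseDefChain(newAddress))
  return false; // bail out instead of producing an invalid clone
```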
Saves a bunch of compile time when compiling the stdlib. Specifically, when
compiling the iOS 32-bit stdlib with sil-verify-all, this shaves off ~20% of the
compile time.