Commit Graph

1033 Commits

Author SHA1 Message Date
Michael Gottesman
38e2a3bfcf [inst-simplify] Eliminate address_to_pointer/pointer_to_address identity transforms using new interior pointer handling utility.
Btw, for those curious, the reason why this optimization is important is that
the stdlib often performs reinterpret casts using an address_to_pointer
followed by a pointer_to_address to perform the cast. So we need to be able to
reliably eliminate these. Since the underlying base object's liveness is
already what provides liveness to the underlying address, our approach of
treating this as a pattern that requires lifetime extension makes sense: we are
adding information to the IR by canonicalizing these escapes into normal
address operations.

NOTE: In this commit, we only handle the identity transform. In a subsequent
commit, I am going to hit the non-identity transform that is handled by
SILCombine.

Also, I added some more general OSSA RAUW tests for this case to
ossa_rauw_tests, exercising this functionality in a more exhaustive way than
would be appropriate in sil_combine_ossa.sil (since I am stress testing this
specific piece of code).
2021-01-17 20:08:24 -08:00
Michael Gottesman
73ba521e56 [ownership] Add a new API to OwnershipFixupContext::replaceAllAddressUsesFixingInteriorPointerOwnership.
In OSSA, we enforce that addresses derived from interior pointer instructions
are scoped within a borrow scope. This means that it is invalid to use such an
address outside of its parent borrow scope, and as a result one cannot simply
RAUW an address value with a dominating address value, since the latter may be
invalid at the former's uses. I foresee that I am going to have to solve this
problem repeatedly, so I decided to write this API to handle the vast majority
of cases.

The way this API works is that it:

1. Computes an access path with base for the new value. If we do not get both a
base value and a valid access path, we bail.

2. Then we check if our base value is the result of an interior pointer
instruction. If it isn't, we are immediately done and can RAUW without further
delay.

3. If we do have an interior pointer instruction, we see if the immediate
guaranteed value we projected from has a single borrow introducer value. If not,
we bail. I think this is reasonable since with time, all guaranteed values will
always only have a single borrow introducing value (once struct, tuple,
destructure_struct, destructure_tuple become reborrows).

4. Then we gather up all inner uses of our access path. If for some reason that
fails, we bail.

5. Then we see if all of those uses are within our borrow scope. If so, we can
RAUW without any further worry.

6. Otherwise, we perform a copy+borrow of our interior pointer's operand value
at the interior pointer, create a copy of the interior pointer instruction upon
this new borrow and then RAUW oldValue with that instead. By construction all
uses of oldValue will be within this new interior pointer scope.
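
A minimal sketch of this decision procedure in C++, written over hypothetical
stand-in flags (RAUWQuery, classify are illustrative names) rather than the
real SILValue/AccessPath types; it only models the control flow of steps 1-6:

```
// Hypothetical, simplified inputs; the real utility works over
// SILValue/AccessPathWithBase, not plain flags.
struct RAUWQuery {
  bool hasValidAccessPathWithBase; // step 1
  bool baseIsInteriorPointer;      // step 2
  bool hasSingleBorrowIntroducer;  // step 3
  bool gatheredAllAddressUses;     // step 4
  bool allUsesWithinBorrowScope;   // step 5
};

enum class RAUWStrategy {
  Bail,               // cannot rewrite safely
  PlainRAUW,          // direct replacement is safe (step 2 or step 5)
  CopyBorrowAndClone, // step 6: copy+borrow the base, clone the projection
};

// Mirrors steps 1-6 above.
RAUWStrategy classify(const RAUWQuery &q) {
  if (!q.hasValidAccessPathWithBase)
    return RAUWStrategy::Bail;             // step 1
  if (!q.baseIsInteriorPointer)
    return RAUWStrategy::PlainRAUW;        // step 2
  if (!q.hasSingleBorrowIntroducer)
    return RAUWStrategy::Bail;             // step 3
  if (!q.gatheredAllAddressUses)
    return RAUWStrategy::Bail;             // step 4
  if (q.allUsesWithinBorrowScope)
    return RAUWStrategy::PlainRAUW;        // step 5
  return RAUWStrategy::CopyBorrowAndClone; // step 6
}
```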
2021-01-17 20:08:24 -08:00
Michael Gottesman
fe4c345d0d [ownership] Make OwnershipFixupContext a dumb context struct and instead put its RAUW functionality on OwnershipRAUWHelper.
The reason why I am doing this is that I am building up more utilities that
pass around this context struct but do not want it for RAUWing purposes. So it
makes sense for the RAUW functionality to live on a helper
(OwnershipRAUWHelper) that composes with this state.

Just a refactor, should be NFC.
2021-01-17 20:08:24 -08:00
Michael Gottesman
aa38be6d98 [inst-simplify] Hide simplifyInstruction in favor of using simplifyAndReplaceAllSimplifiedUsesAndErase.
Currently, callers throughout the code base perform simplifyInstruction and
then replaceAllSimplifiedUsesAndErase(...). This is a bad pattern since:

1. simplifyInstruction assumes its result will be passed to
   replaceAllSimplifiedUsesAndErase. So by leaving these as separate steps, we
   allow users to make this mistake pointlessly.

2. In a subsequent commit I am going to implement a utility that lifetime
   extends interior pointer bases when replacing an address with an interior
   pointer derived address. To do this efficiently, I want to reuse state
   computed during simplifyInstruction in the actual RAUW; if the two
   operations are split, that is difficult without extending the API. So by
   removing the split, I can enable the transform and eliminate mistakes at
   the same time.
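
As a sketch of why fusing the two steps closes the misuse window, here is a toy
C++ model; Value, Instruction, and simplify are stand-ins, not the actual
compiler API:

```
#include <optional>

struct Value { int id; };
struct Instruction { Value result; };

// Hypothetical folder: returns a replacement only if `inst` simplifies.
std::optional<Value> simplify(const Instruction &inst) {
  (void)inst;
  return std::nullopt; // placeholder: this toy model never folds
}

// Fused entry point: the simplified value never escapes to the caller
// un-RAUWed, so "simplify but forget to replace" cannot be written.
bool simplifyAndReplaceAllSimplifiedUsesAndErase(Instruction &inst) {
  if (std::optional<Value> replacement = simplify(inst)) {
    // ...replace all uses of inst.result with *replacement,
    // then erase inst via the deletion callback...
    return true;
  }
  return false;
}
```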
2021-01-17 20:08:24 -08:00
Andrew Trick
1ba3066950 CanonicalOSSALifetime: Add support for overlapping access scopes.
Access scopes for enforcing exclusivity are currently the only
exception to our ability to canonicalize OSSA lifetime purely based on
the SSA value's known uses. This is because access scopes have
semantics relative to object deinitializers.

In general, deinitializers are asynchronous with respect to code that
is unrelated to the object's uses. Ignoring exclusivity, the optimizer
may always destroy objects as early as it wants, as long as the object
won't be used again. The optimizer may also extend the lifetime
(although in the future this lifetime extension should be limited by
"synchronization points").

The optimizer's freedom is however limited by exclusivity
enforcement. Optimization may never introduce new exclusivity
violations. Destroying an object within an access scope is an
exclusivity violation if the deinitializer accesses the same variable.

To handle this, OSSA canonicalization must detect access scopes that
overlap with the end of the pruned extended lifetime. Essentially:

    %def
    begin_access // access scope unrelated to def
    use %def     // pruned liveness ends here
    end_access
    destroy %def

Support for access scopes composes cleanly with the existing algorithm
without adding significant cost in the usual case. Overlapping access
scopes are unusual. A single CFG walk within the original extended
lifetime is normally sufficient. Only the blocks that are not already
LiveOut in the pruned liveness need to be visited. During this walk,
local overlapping accesses are detected by scanning for end_access
instructions after the last use point. Global overlapping accesses are
detected by checking NonLocalAccessBlockAnalysis. This avoids scanning
instructions in the common case. NonLocalAccessBlockAnalysis is a
trivial analysis that caches the rare occurrence of nonlocal access
scopes. The analysis itself is a single linear scan over the
instruction stream. This analysis can be preserved across most
transformations, and I expect it to be used to speed up other
optimizations related to access markers.

When an overlapping access is detected, pruned liveness is simply
extended to include the end_access as a new use point. Extending the
lifetime is iterative, but with each iteration, blocks that are now
marked LiveOut no longer need to be visited. Furthermore, interleaved
access scopes are not expected to happen in practice.
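
A toy, single-block illustration of the local case in C++, using strings in
place of instructions; the actual implementation works over PrunedLiveness and
SIL basic blocks:

```
#include <cstddef>
#include <string>
#include <vector>

// Find the insertion index for the destroy: after the last use, but pushed
// past any end_access whose scope is still open at the last use point (such
// end_access instructions are treated as new use points).
std::size_t destroyInsertionIndex(const std::vector<std::string> &block,
                                  std::size_t lastUseIndex) {
  int openScopes = 0;
  for (std::size_t i = 0; i <= lastUseIndex; ++i) {
    if (block[i] == "begin_access") ++openScopes;
    else if (block[i] == "end_access") --openScopes;
  }
  std::size_t insertAt = lastUseIndex + 1;
  for (std::size_t i = insertAt; i < block.size() && openScopes > 0; ++i) {
    if (block[i] == "end_access") { --openScopes; insertAt = i + 1; }
  }
  return insertAt;
}
```

On the example above, the destroy of %def is pushed from just after `use %def`
to just after `end_access`.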
2021-01-15 19:48:33 -08:00
Michael Gottesman
1b14b5bcee [sil] Move swift::replaceAllUsesAndErase from Analysis/SimplifyInstruction -> Utils/InstOptUtils
This is a low-level API already being used in multiple places besides
InstSimplify (e.g. Utils/OwnershipOptUtils), so it makes sense to move it into
InstOptUtils.
2021-01-13 10:43:41 -08:00
Michael Gottesman
ec50d03d76 [ownership] Rename OwnershipFixupContext::replaceAllUsesAndErase{FixingOwnership,}.
The class name is already OwnershipFixupContext... why do we need to include
FixingOwnership in its helpers' names? It's redundant.
2021-01-13 10:43:41 -08:00
Erik Eckstein
f7296ed903 SIL: fix bugs in the MemBehavior cache invalidation mechanism.
When a non-value instruction (e.g. a destroy_addr) was deleted, the corresponding cache entry in MemoryBehaviorCache was not invalidated.
If a new instruction was allocated at the same memory location, the old - and invalid - cache entry was re-used.

This bug triggered a SIL memory lifetime failure in TempRValueElimination.

https://bugs.swift.org/browse/SR-13985
rdar://problem/72614608

This change also fixes another problem (which I found by inspection): Individual result values of MultipleValueInstructions were not invalidated correctly.
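
A sketch of the invariant this fix restores, with stand-in types rather than
the actual MemoryBehaviorCache: a cache keyed by instruction pointer must be
purged on the deletion notification, or a new instruction allocated at the
same address inherits the stale entry:

```
#include <unordered_map>

struct SILInstructionStub {};
enum class MemBehavior { None, MayRead, MayWrite };

struct MemoryBehaviorCache {
  std::unordered_map<const SILInstructionStub *, MemBehavior> entries;

  // Must run for *every* deleted instruction, including non-value
  // instructions like destroy_addr; otherwise a later allocation that
  // reuses the memory location aliases the old key.
  void handleDeleteNotification(const SILInstructionStub *inst) {
    entries.erase(inst);
  }
};
```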
2021-01-06 11:29:57 +01:00
Michael Gottesman
2f6ffae4b0 [sil-inst-opt] Change InstModCallbacks to specify a setUseValue callback instead of RAUW callbacks.
This allows me to improve quality, correctness, and ease of use in a couple of ways:

1. I reimplemented InstModCallbacks' RAUW and RAUW/erase helpers on top of
   setUseValue/deleteInst. Beyond letting the caller specify fewer things, we
   gain an orthogonality that prevents bugs like overriding RAUW/erase but not
   overriding erase, or having the erase used in RAUW/erase act differently
   from the erase used for deleteInst.

2. There were a bunch of places using InstModCallbacks that were also setting
   uses directly, without InstModCallbacks being able to perform it (since it
   only supported RAUW). This is an anti-pattern and could cause subtle bugs
   by leaving state in the caller not updated appropriately.
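
A sketch of the layering in point 1, with stand-in types (the real
InstModCallbacks in the SILOptimizer utilities has a richer interface):
RAUW/erase is derived from the two primitives, so overriding a primitive
consistently updates every helper:

```
#include <functional>
#include <vector>

struct Value;
struct Use { Value *value; };
struct Inst {
  Value *result;
  std::vector<Use *> uses; // uses of this instruction's result
};

struct InstModCallbacks {
  std::function<void(Use *, Value *)> setUseValue =
      [](Use *use, Value *newValue) { use->value = newValue; };
  std::function<void(Inst *)> deleteInst =
      [](Inst *inst) { delete inst; };

  // Derived helper: there is no separate RAUW callback to keep in sync.
  void replaceAllUsesAndErase(Inst *inst, Value *newValue) {
    for (Use *use : inst->uses)
      setUseValue(use, newValue);
    deleteInst(inst);
  }
};
```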
2021-01-04 16:47:13 -08:00
Michael Gottesman
0de00d1ce4 [sil-inst-opt] Improve performance of InstModCallbacks by eliminating indirect call along default callback path.
Specifically, before this PR, if a caller did not customize a specific callback
of InstModCallbacks, we would store a static default std::function into
InstModCallbacks. This means we would always make an indirect jump. That is
unfortunate since this code is often called in loops.

In this PR, I eliminate this problem by:

1. I made all of the actual callback std::functions in InstModCallbacks private
   and gave them a "Func" suffix (e.g.: deleteInst -> deleteInstFunc).

2. I created public methods with the old callback names that actually call the
   callbacks. This ensures that, as long as we are not escaping callbacks from
   InstModCallbacks, this PR does not require any source changes, since we are
   only changing a call of a std::function field into a call of a method.

3. I changed all of the places that were escaping InstModCallbacks' callbacks
   to take an InstModCallbacks instead. We shouldn't be doing that anyway.

4. I changed the default value of each callback in InstModCallbacks to be
   nullptr and changed the public helper methods to check whether a callback
   is null. If the callback is not null, it is called; otherwise the method
   falls back to an inline default implementation of the operation.

Altogether this means that the cost of a plain InstModCallbacks is reduced,
and one pays the indirect-call price only as one customizes it further, which
scales better.

P.S. As a little extra, I added a madeChange field to InstModCallbacks. Now
that we have the helpers calling the callbacks, I can easily insert
instrumentation like this, allowing users to pass in an InstModCallbacks and
see whether anything was RAUWed without needing to specify a callback.
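
A sketch of the resulting pattern with stand-in types: a private,
possibly-null std::function plus a public method that branches to an inline
default, so the indirect call is paid only when the callback is customized:

```
#include <functional>
#include <utility>

struct Inst { /* stand-in for SILInstruction */ };

class InstModCallbacks {
  // Private storage with the "Func" suffix; null means "not customized".
  std::function<void(Inst *)> deleteInstFunc;
  bool madeChange = false;

public:
  InstModCallbacks &onDelete(std::function<void(Inst *)> f) {
    deleteInstFunc = std::move(f);
    return *this;
  }

  // Public method keeps the old callback name, so call sites do not change;
  // the default path is a direct, inlinable call.
  void deleteInst(Inst *inst) {
    madeChange = true;
    if (deleteInstFunc)
      deleteInstFunc(inst); // indirect call only when customized
    else
      delete inst;          // inline default implementation
  }

  bool anythingChanged() const { return madeChange; }
};
```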
2021-01-04 12:51:55 -08:00
Meghana Gupta
db24e3e94c Fix crash in EpilogueArcAnalysis
EpilogueARCState for unreachable blocks can be non-existent. Fix the
EpilogueARCContext::getState method and its users.
2020-12-17 13:39:22 -08:00
Meghana Gupta
81107e4235 Improve handling of copy_value and destroy_value in MemoryBehaviorVisitor (#35011)

- Also, compute use points for destroy_value
- Clean up explicit checks for refcount instructions in RLE
2020-12-14 22:44:58 -08:00
Richard Wei
8d8614058b [AutoDiff] NFC: Replace 'SILAutoDiffIndices' with 'AutoDiffConfig'. (#35079)
Resolve rdar://71678394 / SR-13889.
2020-12-14 14:32:40 -08:00
Michael Gottesman
259d2bb182 [ownership] Commit a generic replaceAllUsesAndEraseFixingOwnership api and enable SimplifyInstruction on OSSA.
This is a generic API that when ownership is enabled allows one to replace all
uses of a value with a value with a differing ownership by transforming/lifetime
extending as appropriate.

This API supports all pairings of ownership /except/ replacing a value that has
OwnershipKind::None with a value that does not have OwnershipKind::None. This
is a more complex optimization that we do not support today. As a result, we
include on our state struct a helper routine that callers can use to know
whether the two values they want to process can be handled by the algorithm.

My motivation is to use this to update InstSimplify and SILCombiner in a less
bug-prone way, rather than just turning stuff off.

Noting that this transformation inserts ownership instructions, I have made sure
to test this API in two ways:

1. With Mandatory Combiner alone (to make sure it works period).

2. With Mandatory Combiner + Semantic ARC Opts to make sure that we can
   eliminate the extra ownership instructions it inserts.

As one can see from the tests, the optimizer today is able to handle all of
these transforms except one conditional case where I need to eliminate a dead
phi arg. I have a separate branch that handles that today, but I have exposed
unsafe behavior in ClosureLifetimeFixup that I need to fix before I can land
it. I don't want that to stop this PR, since I think the current low-level ARC
optimizer may be able to help me here, as this is a simple transform it does
all of the time.
2020-12-09 11:53:56 -08:00
swift-ci
540ecacaa5 Merge pull request #34908 from atrick/comment-escape 2020-12-01 11:01:35 -08:00
Andrew Trick
4396ebad19 Update a comment in EscapeAnalysis.
To reflect the most recent compile time fix.
2020-12-01 08:28:20 -08:00
Erik Eckstein
f14916832f EscapeAnalysis: fix a quadratic behavior in ConnectionGraph::getNode
Fixes a compile-time problem. The singly linked list of merge targets in connection graph nodes can become very long.
Update the final merge target in the map, so that it has to be traversed only once for a given SILValue.
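
The fix is analogous to path compression in a union-find structure. A sketch
over a stand-in node type (the actual change updates the SILValue-to-node map
entry rather than rewriting the nodes, but the effect is the same: each chain
is walked only once):

```
struct CGNode {
  CGNode *mergeTarget = nullptr; // null means this node is the final target
};

CGNode *getFinalMergeTarget(CGNode *node) {
  CGNode *target = node;
  while (target->mergeTarget)
    target = target->mergeTarget; // walk the chain once
  // Compress: redirect every visited node straight to the final target.
  for (CGNode *n = node; n != target;) {
    CGNode *next = n->mergeTarget;
    n->mergeTarget = target;
    n = next;
  }
  return target;
}
```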

rdar://problem/71602804
2020-11-24 15:47:46 +01:00
Ben Barham
b07c06e839 [Serialization] Fix crashes when allowing compiler errors in modules 2020-11-18 12:35:21 +10:00
Michael Gottesman
c3681a6eef [membehavior] Fix base memory behavior/releasing of Load/Store for ownership.
In OSSA, load and store can have side effects, and stores can release. Specifically:

Memory Behavior
---------------

* Load: unqualified, trivial, and take have only a read side effect, but copy
  retains, so it has side effects.

* Store: unqualified, trivial, and init may only write, but assign releases, so
  it may have side effects.

Release Behavior
----------------

* Load: No changes.

* Store: May release if store has assign as an ownership qualifier.
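
The matrix above, restated as a C++ sketch over stand-in enums (the real logic
lives in the MemBehavior visitor and also accounts for aliasing):

```
enum class LoadQualifier { Unqualified, Trivial, Take, Copy };
enum class StoreQualifier { Unqualified, Trivial, Init, Assign };
enum class MemBehavior { MayRead, MayWrite, MayHaveSideEffects };

MemBehavior loadBehavior(LoadQualifier q) {
  // load [copy] retains the loaded value, so it is not a pure read.
  return q == LoadQualifier::Copy ? MemBehavior::MayHaveSideEffects
                                  : MemBehavior::MayRead;
}

MemBehavior storeBehavior(StoreQualifier q) {
  // store [assign] releases the old value, so it may have side effects
  // (and may release); the other qualifiers only write.
  return q == StoreQualifier::Assign ? MemBehavior::MayHaveSideEffects
                                     : MemBehavior::MayWrite;
}
```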
2020-11-11 15:32:32 -08:00
Meghana Gupta
483321c360 Enable ArrayElementValuePropagation on ownership SIL 2020-11-04 11:54:47 -08:00
Andrew Trick
5be168a382 Merge pull request #34430 from atrick/check-nested-semantics
Add NestedSemanticFunctionCheck warning diagnostic
2020-10-28 08:32:22 -07:00
Erik Eckstein
e8e613bd6a RCIdentity: fix another case of not-RC-identity-preserving casts.
When casting from existentials to classes - and vice versa - it can happen that a cast does not preserve RC identity (because of potential bridging).
This also affects mayRelease() of such cast instructions.
For details see the comments in SILDynamicCastInst::isRCIdentityPreserving().

This change also includes some refactoring: I centralized the logic in SILDynamicCastInst::isRCIdentityPreserving().

rdar://problem/70454804
2020-10-28 08:10:41 +01:00
Andrew Trick
40afb68daf Remove internal references to "*_no_escaping_closure" semantics.
This @_semantics string is buried in the compiler but it is completely
undocumented and has no in-tree uses.
2020-10-26 17:02:33 -07:00
Andrew Trick
3128eae3f0 Add NestedSemanticFunctionCheck diagnostic
to check for improperly nested '@_semantics' functions.

Add a missing @_semantics("array.init") in ArraySlice found by the
diagnostic.

Distinguish between array.init and array.init.empty.

Categorize the types of semantic functions by how they affect the
inliner and pass pipeline, and centralize this logic in
PerformanceInlinerUtils. The ultimate goal is to prevent inlining of
"Fundamental" @_semantics calls and @_effects calls until the late
pipeline where we can safely discard semantics. However, that requires
significant pipeline changes.

In the meantime, this change prevents the situation from getting worse
and makes the intention clear. However, it has no significant effect
on the pass pipeline and inliner.
2020-10-26 17:02:33 -07:00
Erik Eckstein
c6be347d3a SILCombine: fix metatype simplification
Don't reuse argument metadata if it's an indirect argument.
Fixes a verifier crash.

https://bugs.swift.org/browse/SR-13731
rdar://problem/70338666
2020-10-21 10:10:59 +02:00
Andrew Trick
fd8f723f60 Merge pull request #34126 from atrick/add-accesspath
Add an AccessPath abstraction and formalize memory access
2020-10-19 10:17:27 -07:00
Andrew Trick
6f2cda1390 Add AccessUseVisitor and cleanup related APIs.
Add AccessedStorage::compute and computeInScope to mirror AccessPath.

Allow recovering the begin_access for Nested storage.

Add AccessedStorage::visitRoots().
2020-10-16 15:00:10 -07:00
Andrew Trick
cc0aa2f8b8 Add an AccessPath abstraction and formalize memory access
Things that have come up recently but are somewhat blocked on this:

- Moving AccessMarkerElimination down in the pipeline
- SemanticARCOpts correctness and improvements
- AliasAnalysis improvements
- LICM performance regressions
- RLE/DSE improvements

Begin to formalize the model for valid memory access in SIL. Ignoring
ownership, every access is a def-use chain in three parts:

object root -> formal access base -> memory operation address

AccessPath abstracts over this path and standardizes the identity of a
memory access throughout the optimizer. This abstraction is the basis
for a new AccessPathVerification.

With that verification, we now have all the properties we need for the type
of analysis required for exclusivity enforcement, but now generalized for any
memory analysis. This is suitable for an extremely lightweight analysis with
no side data structures. We currently have a massive amount of ad-hoc memory
analysis throughout SIL, which is incredibly unmaintainable, bug-prone, and
not performance-robust. We can begin taking advantage of this verifiably
complete model to solve that problem.

The properties this gives us are:

Access analysis must be complete over memory operations: every memory
operation needs a recognizable valid access. An access can be
unidentified only to the extent that it is rooted in some non-address
type and we can prove that it is at least *not* part of an access to a
nominal class or global property. Pointer provenance is also required
for future IRGen-level bitfield optimizations.

Access analysis must be complete over address users: for an identified
object root all memory accesses including subobjects must be
discoverable.

Access analysis must be symmetric: use-def and def-use analysis must
be consistent.

AccessPath is merely a wrapper around the existing accessed-storage
utilities and IndexTrieNode. Existing passes already use this approach
very successfully, but in an ad-hoc way. With a general utility we can:

- update passes to use this approach to identify memory access,
  reducing the space and time complexity of those algorithms.

- implement an inexpensive on-the-fly, debug mode address lifetime analysis

- implement a lightweight debug mode alias analysis

- ultimately improve the power, efficiency, and maintainability of
  full alias analysis

- make our type-based alias analysis sensitive to the access path
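
A sketch of the three-part model as plain C++ data, with stand-in types (the
real AccessPath wraps AccessedStorage plus an IndexTrieNode of projections,
and its containment check also compares storage):

```
#include <cstddef>
#include <vector>

enum class StorageKind { Box, Stack, Global, Class, Tail, Argument,
                         Unidentified };

// "object root -> formal access base": which object/variable owns the memory.
struct AccessedStorageStub {
  StorageKind kind = StorageKind::Unidentified;
};

// "formal access base -> memory operation address": which subobject.
struct AccessPathStub {
  AccessedStorageStub storage;
  std::vector<int> projectionIndices; // struct/tuple/tail element indices

  // One access contains another if its projection list is a prefix of the
  // other's (storage comparison omitted in this sketch).
  bool contains(const AccessPathStub &other) const {
    if (projectionIndices.size() > other.projectionIndices.size())
      return false;
    for (std::size_t i = 0; i < projectionIndices.size(); ++i)
      if (projectionIndices[i] != other.projectionIndices[i])
        return false;
    return true;
  }
};
```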
2020-10-16 15:00:10 -07:00
Andrew Trick
85ff15acd3 Add indexTrieRoot to the SILModule to share across Analyses.
...and avoid reallocation.

This is immediately necessary for LICM, in addition to its current
uses. I suspect this could be used by many passes that work with
addresses. RLE/DSE should absolutely migrate to it.
2020-10-16 15:00:09 -07:00
Erik Eckstein
007351223e MemBehavior: handle begin_access when checking for apply side-effects. 2020-10-16 16:32:48 +02:00
Erik Eckstein
b0c9e69b6f SideEffectAnalysis: don't assume that arguments with trivial type cannot be pointers.
Even values of trivial type can contain a Builtin.RawPointer, which can be used to read from or write to memory.

To compensate for the removed check, enable the escape-analysis check in MemBehavior (as it was before).

This fixes a recently introduced miscompile.
rdar://problem/70220876
2020-10-13 11:32:47 +02:00
Erik Eckstein
77f1647195 RCIdentityAnalysis: some refactoring to improve clarity.
NFC.
2020-10-12 17:28:30 +02:00
Erik Eckstein
d4a6bd39b6 SILOptimizer: improve MemBehavior for apply instructions.
1. Do a better alias analysis for "function-local" objects, like alloc_stack and inout parameters
2. Fully support try_apply and begin/end/abort_apply

So far we fully relied on escape analysis. But escape analysis has some shortcomings with SIL address types.
Therefore, handle two common cases, alloc_stack and inout parameters, with alias analysis.
This gives better results.
The biggest change here is a quick check whether the address escapes via an address_to_pointer instruction.
2020-10-09 20:54:58 +02:00
Erik Eckstein
aced5c74df SILOptimizer: Remove InspectionMode from MemBehaviorVisitor
The InspectionMode was never set to anything other than "IgnoreRetains".
2020-10-09 20:54:58 +02:00
eeckstein
9c9f3f53b3 Merge pull request #34183 from eeckstein/fix-rcidentity
SILOptimizer: fix ARC code motion problem for metatype casts.
2020-10-06 09:34:39 +02:00
Erik Eckstein
f3d668e362 SILOptimizer: fix ARC code motion problem for metatype casts.
RCIdentityAnalysis must not look through casts which cast from a trivial type, like a metatype, to something retainable, like AnyObject.
On some platforms such casts dynamically allocate a ref-counted box for the metatype.
If the RCRoot of such an AnyObject were a trivial type, ARC optimizations would get confused and might eliminate a retain of such an object completely.

rdar://problem/69900051
2020-10-05 22:18:19 +02:00
Meghana Gupta
f4bbafb392 Revert "[PassManager] Update PassManager's function worklist for newly added SILFunctions" 2020-10-05 10:23:31 -07:00
Andrew Trick
5ae231eaab Rename getFieldNo() to getFieldIndex().
Do I really need to justify this?
2020-09-24 22:44:13 -07:00
Michael Gottesman
646fcf6678 Merge pull request #33754 from gottesmm/pr-d1a9d022618a039a807ff68c0d46d898e8fe5578
[opt-remark] When looking for debug_value users, look modulo RC Identity preserving users.
2020-09-04 11:04:26 -07:00
Dan Zheng
5c3cb36a7c Merge pull request #33611 from dan-zheng/autodiff-fix-tangent-value-category
[AutoDiff] Fix PullbackCloner tangent value category mismatch issues.
2020-09-03 11:51:56 -07:00
Michael Gottesman
2daf1d2050 [opt-remark] When looking for debug_value users, look modulo RC Identity preserving users.
A key concept in late ARC optimization is "RC Identity". In short, a result of
an instruction is rc-identical to an operand of the instruction if one can
safely move a retain (release) from before the instruction on the result to one
after on the operand without changing the program semantics. This creates a
simple model where one can work on equivalence classes of rc-identical values
(using a dominating definition generally as the representative) and thus
optimize/pair retain, release.

When preparing for late ARC optimization, the optimizer will normalize aggregate
ARC operations (retain_value, release_value) into singular strong_retain,
strong_release operations on leaf types of the aggregate that are
non-trivial. As an example, a retain_value on a KlassPair would be canonicalized
into two strong_retains, one for the lhs and one for the rhs. When this is done,
the optimizer generally just creates new struct_extract at the point where the
retain is. In such a case, we may have that the debug_value for the underlying
type is actually on a reformed aggregate whose underlying parts we are
retaining:

```
bb0(%0 : $Builtin.NativeObject):
  strong_retain %0
  %1 = struct $Array(%0 : $Builtin.NativeObject, ...)
  debug_value %1 : $Array, ...
```

By looking through RC identical uses, we can handle a large subset of these
cases without much effort: ones where there is a single owning pointer, like
Array. To handle more complex cases, we would have to calculate an inverse
access path needed to get back to our value and somehow deal with all of the
complexity therein (I am sure we can do it; I just haven't thought through all
of the details).

The only interesting behavior that this results in is that when we emit
diagnostics, we just use the rc-identical transitive use debug_value's name
without a projection path. This is because the source location associated with
that debug_value is with a separate value that is rc-identical to the actual
value that we visited during our opt-remark traversal up the def-use
graph. Consider the following example below, noting the comments that show in
the SIL itself what I attempted to explain above.

```
struct KlassPair {
  var lhs: Klass
  var rhs: Klass
}

struct StateWithOwningPointer {
  var state: TrivialState
  var owningPtr: Klass
}

sil @theFunction : $@convention(thin) () -> () {
bb0:
  %0 = apply %getKlassPair() : $@convention(thin) () -> @owned KlassPair
  // This debug_value's name can be combined...
  debug_value %0 : $KlassPair, name "myPair"
  // ... with the access path from the struct_extract here...
  %1 = struct_extract %0 : $KlassPair, #KlassPair.lhs
  // ... to emit a nice diagnostic that 'myPair.lhs' is being retained.
  strong_retain %1 : $Klass

  // In contrast in the case below, we rely on looking through rc-identity uses
  // to find the debug_value. In this case, the source info associated with the
  // debug_value (%3) is no longer associated with the underlying access path we
  // have been tracking upwards (%1 is in our access path list). Instead, we
  // know that the debug_value is rc-identical to whatever value we were
  // originally tracking up (%1) and thus the correct identifier to use is the
  // direct name of the identifier alone (without access path) since that source
  // identifier must be some value in the source that by itself is rc-identical
  // to whatever is being manipulated. Thus if we were to emit the access path
  // here for an rc-identical use, we would get "myAdditionalState.owningPtr",
  // which is misleading since ArrayWrapperWithMoreState does not have a field
  // named 'owningPtr'; its subfield array does. That is fine, since
  // rc-identity means a retain_value on the value carrying the debug_value
  // is equivalent to one on the access-path value we found by walking up the
  // def-use graph from our strong_retain's operand.
  %0a = apply %getStateWithOwningPointer() : $@convention(thin) () -> @owned StateWithOwningPointer
  %1 = struct_extract %0a : $StateWithOwningPointer, #StateWithOwningPointer.owningPtr
  strong_retain %1 : $Klass
  %2 = struct $Array(%0 : $Builtin.NativeObject, ...)
  %3 = struct $ArrayWrapperWithMoreState(%2 : $Array, %moreState : MoreState)
  debug_value %3 : $ArrayWrapperWithMoreState, name "myAdditionalState"
}
```
2020-09-01 23:25:59 -07:00
Meghana Gupta
b1db77adff Fix ConnectionGraph verification for calls to no return functions. (#33655)
Functions that do not have a return, and instead end with 'unreachable'
due to NoReturnFolding will not have a ReturnNode in the connection
graph.

A caller calling a no return function then may not have a CGNode
corresponding to the call's result.

Fix the ConnectionGraph's verifier so we don't assert in such cases.
2020-08-27 20:42:23 -07:00
Meghana Gupta
b1c0bd3096 Minor cleanup in ARCSequenceOpts (#33578)
* Remove NewInsts from ARCSequenceOpts

* Remove more instances of InsertPts

* Address comments from #33504

* Make bottom-up loop traversal simpler. Use better APIs

* Update LoopRegion printer with more info
2020-08-24 21:21:11 -07:00
Dan Zheng
8102d91941 [AutoDiff] Make DifferentiableActivityInfo::dump print an end marker.
End markers are useful for FileCheck.
2020-08-24 10:10:57 -07:00
Andrew Trick
6ecbeefd50 Add AccessedStorage::Tail access kind and remove more checks.
Distinguish ref_tail_addr storage from the other storage classes.

We didn't have this originally because we don't expect a begin_access
to directly operate on tail storage. It could occur after inlining, at
least with static access markers. More importantly, it helps distinguish
regular formal accesses from other unidentified accesses, so we probably
should have always had this.

At any rate, it's particularly important when AccessedStorage is
generalized to arbitrary memory access.

The immediate motivation is to add an AccessPath utility, which will
need to distinguish tail storage.

In the process, rewrite AccessedStorage::isDistinct. This could have a
large positive impact on exclusivity performance.
2020-07-20 16:42:53 -07:00
Andrew Trick
5826e75b00 Generalize the MemAccessUtils API.
For use outside access enforcement passes.

Add isUniquelyIdentifiedAfterEnforcement.

Rename functions for clarity and generality.

Rename isUniquelyIdentifiedOrClass to isFormalAccessBase.

Rename findAccessedStorage to identifyFormalAccess.

Rename findAccessedStorageNonNested to findAccessedStorage.

Part of generalizing the utility for use outside the access
enforcement passes.
2020-07-17 10:13:20 -07:00
Andrew Trick
3ff11aa683 Teach EscapeAnalysis to ignore end_access markers.
The presence of end_access markers was causing object properties to
appear to escape globally. That's too conservative.
2020-07-11 01:24:35 -07:00
Erik Eckstein
c82f78b218 AliasAnalysis: handle copy_addr in MemBehaviorVisitor.
Only let copy_addr have side effects if the source or destination really aliases with the address in question.
2020-06-22 13:47:31 +02:00
Erik Eckstein
547d527258 AliasAnalysis: consider a take from an address as a write
Currently we only have load [take] in OSSA, which needed to be changed.
(copy_addr is not handled in MemBehavior at all, yet)

Even if the memory is physically not modified, conceptually it is "destroyed" when the value is taken.
Optimizations like TempRValueOpt rely on this behavior when they check for may-writes.

This fixes a MemoryLifetime failure in TempRValueOpt.
2020-06-22 13:47:31 +02:00
Erik Eckstein
c7c2c139af AliasAnalysis: better handling of init_enum_data_addr and init_existential_addr.
Look "through" those instructions when trying to find the underlying objects.
2020-06-22 13:47:31 +02:00