Commit Graph

2053 Commits

Erik Eckstein
301ab4e112 LoopRotate: remove the replace-arg-with-struct peephole optimization
This does not belong in loop-rotate and does not work with OSSA.
This peephole is covered by other optimizations.
2024-12-12 08:35:48 +01:00
Erik Eckstein
3e35df0983 Simplification: run begin_borrow simplification in SILCombine 2024-12-11 12:32:34 +01:00
Erik Eckstein
6b38f2aab4 Optimizer: simplify load_borrow
* Remove dead `load_borrow` instructions (replaces the old peephole optimization in SILCombine)
* If the `load_borrow` is followed by a `copy_value`, combine both into a `load [copy]`
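A hypothetical before/after sketch of the second simplification, in the style of the other examples here (value names invented):
```
  %1 = load_borrow %0
  %2 = copy_value %1
  // uses of %2
  end_borrow %1
```
->
```
  %2 = load [copy] %0
  // uses of %2
```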
2024-12-11 12:32:33 +01:00
eeckstein
81c65758e3 Merge pull request #78059 from eeckstein/destroy-hoisting
Optimizer: add a new destroy-hoisting optimization
2024-12-11 06:18:05 +01:00
Erik Eckstein
5be781a9a0 Optimizer: add a new destroy-hoisting optimization
It hoists `destroy_value` instructions without shrinking an object's lifetime.
This is done if it can be proved that another copy of a value (either in an SSA value or in memory) keeps the referenced object(s) alive until the original position of the `destroy_value`.
```
  %1 = copy_value %0
  ...
  last_use_of %0
  // other instructions
  destroy_value %0       // %1 is still alive here
```
->
```
  %1 = copy_value %0
  ...
  last_use_of %0
  destroy_value %0
  // other instructions
```

The benefit of this optimization is that it can enable copy-propagation by moving destroys above deinit barriers and access scopes.
2024-12-10 16:28:11 +01:00
Erik Eckstein
dd78dc722b Optimizer: add an optimization to remove copy_value of a borrowed value.
It removes a `copy_value` where the source is a guaranteed value, if possible:

```
  %1 = copy_value %0   // %0 = a guaranteed value
  // uses of %1
  destroy_value %1     // borrow scope of %0 is still valid here
```
->
```
  // uses of %0
```

This optimization is very similar to the LoadCopyToBorrow optimization.
Therefore I merged both optimizations into a single file and renamed it to "CopyToBorrowOptimization".
2024-12-09 20:01:07 +01:00
nate-chandler
5637eaf3ca Merge pull request #77968 from nate-chandler/rdar139842132
[OSSACanonicalizeOwned] Record traversed defs and don't traverse copies of guaranteed values.
2024-12-06 07:00:20 -08:00
Nate Chandler
eebe9ac20a [NFC] OSSACanonicalizeOwned: Renamed found defs.
The field is no longer a worklist, just a list of discovered defs.
2024-12-05 08:29:52 -08:00
Nate Chandler
498294efa2 [NFC] OSSACanOwned: Record defs in SmallVector.
In preparation for only recording the defs once, replace the
GraphNodeWorklist of defs with a SetVector.  Preserve the current
visitation order by creating a worklist of indices to be visited.
2024-12-05 08:24:46 -08:00
Nate Chandler
77bc114e54 [NFC] OSSACanonicalizeOwned: Record def kinds.
Add a type which distinguishes among the types of defs that are pushed
onto the "def-use worklist".  Note that it's not possible to rely on the
kind of value because the root may itself be a copy_value.  For now, the
distinction is discarded as soon as the def is visited.
2024-12-05 07:31:19 -08:00
nate-chandler
0d76250033 Merge pull request #77908 from nate-chandler/rdar139840307
[BarrierAccessScopes] Handle end_access instructions' barrierness introduced during run.
2024-12-04 15:29:02 -08:00
Michael Gottesman
87495c6b83 Merge pull request #77900 from gottesmm/rdar127477211
[region-isolation] Perform checking of non-Sendable results using rbi rather than Sema.
2024-12-03 22:08:49 -08:00
eeckstein
9e7fa1a023 Merge pull request #77918 from eeckstein/remove-dead-code
ArraySemantics: remove some unused code
2024-12-03 21:30:39 +01:00
Erik Eckstein
f166f4b4df ArraySemantics: remove some unused code
The code is not used anymore because the ArrayElementPropagation pass was removed: https://github.com/swiftlang/swift/pull/77806
2024-12-03 11:45:54 +01:00
Nate Chandler
f79def4cee [BarrierAccessScopes] Handle found gen locality.
As the utility runs, new gens may become local: as access scopes are
determined to contain deinit barriers, their `end_access` instructions
become kills; if such an `end_access` occurs in the same block above an
initially-non-local gen, that gen is now local.

Previously, it was asserted that no kill would be encountered when visiting
the block backwards from an initially-non-local gen. Iteration would also
_stop_ at the discovered kill, if any. As described above, the assertion was
incorrect.

Stopping at the discovered kill was also incorrect. It's necessary to
continue walking the block after finding such a new kill because of the
book-keeping the utility does to track which access scopes contain barriers.
Concretely, there are two cases:
(1) The block may contain another `end_access`, and above it a deinit barrier,
which must result in that second scope becoming a barrier as well (sketched
below).
(2) Some of the block's predecessors may be in the region; all the access
scopes which are open at the beginning of this block must be unioned into the
set of scopes open at each predecessor's end, and more such access scopes may
be discovered above the just-visited `end_access`.

Here, both the assertion failure and the early bailout are fixed by
walking from the indicated initially-non-local gen backwards over the
entire block, regardless of whether a kill was encountered.  If a kill
is encountered, it is asserted that the kill is an `end_access` to
account for the case described above.
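A rough SIL sketch of case (1), shown in program order with the `begin_access` instructions, types, and the gen itself elided (all names hypothetical):
```
  %r = apply %mayDeinit() : $@convention(thin) () -> ()
                      // deinit barrier inside the second scope
  end_access %a2      // second scope: must also be recorded as a barrier
  end_access %a1      // the newly-discovered kill; the walk must continue above it
  ...                 // the backward walk starts at an initially-non-local gen below
```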

rdar://139840307
2024-12-02 15:36:00 -08:00
Kuba Mracek
6f4ae28520 [ASTMangler] Pass ASTContext to all instantiations of ASTMangler 2024-12-02 15:01:04 -08:00
Nate Chandler
fa126d6d4c [NFC] BarrierAccessScopes: Renamed function. 2024-12-02 14:05:40 -08:00
Nate Chandler
5c5f06e871 [Gardening] BarrierAccessScopes: Corrected comment. 2024-12-02 14:05:40 -08:00
Michael Gottesman
cff835e061 [region-isolation] Perform checking of non-Sendable results using rbi rather than Sema.
In terms of the test suite, the only difference is that we now allow non-Sendable
types to be returned from nonisolated functions. This is safe due to the rules
of rbi. We do still error when returning non-Sendable values across isolation
boundaries, though.

The reason that I am doing this now is that I am implementing a prototype that
allows nonisolated functions to inherit isolation from their caller. This
would have required me to implement support both in Sema (for results) and in
SIL (for arguments). Rather than implement results in Sema, I just finished the
work of transitioning the result checking out of Sema and into SIL. The actual
prototype will land in a subsequent change.

rdar://127477211
2024-12-02 16:54:12 -05:00
Doug Gregor
a93e8fd006 Merge pull request #77752 from DougGregor/perf-diag-check-throws
[Performance diagnostics] Enable checking of throw instructions
2024-12-01 22:54:07 -08:00
Erik Eckstein
63f6a2f30d Optimizer: remove the ArrayElementPropagation optimization
Propagating array element values is done by load-simplification and redundant-load-elimination.
So ArrayElementPropagation is not needed anymore.

ArrayElementPropagation also replaced `Array.append(contentsOf:)` with individual `Array.append` calls.
This optimization is removed as well, because its benefit is questionable anyway:
in most cases it resulted in a code size increase.
2024-11-28 10:35:40 +01:00
Erik Eckstein
6a0b7d1f8c ObjectOutliner: create outlined arrays as let variables
This will allow load-simplification to replace a load of such an array.
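A hypothetical sketch of what this enables (global name invented): because the outlined global is a `let`, its contents are immutable, so a later load from it can be folded:
```
  %1 = global_addr @outlinedLetArray
  %2 = load %1    // load-simplification can now replace this load
                  // with the statically initialized array value
```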
2024-11-28 09:40:12 +01:00
Doug Gregor
1d3332d471 Remove the now-unused NonErrorHandlingBlocks 2024-11-21 16:06:45 -08:00
Michael Gottesman
e6b4e0f9f1 Merge pull request #77709 from gottesmm/pr-6feaf0c91a7d95d75b36d32cc91a32150d992162
[region-isolation] Some initial NFCI refactoring commits before adding experimental support for inheriting isolation to nonisolated functions
2024-11-19 22:22:50 -08:00
Michael Gottesman
d33f819038 [region-isolation] Move freeform logging on the specific error we are emitting into a method on the error itself.
I am doing this since I discovered that we are not printing certain errors as
early as we used to (due to the refactoring I did here), which makes it harder
to see the errors that we are emitting while processing individual instructions
and before we run the actual dataflow.

A nice side-effect of this is that it will make it easy to dump the error in the
debugger rather than having to wait until the point in the code where the normal
logging takes place.
2024-11-19 12:48:30 -08:00
Erik Eckstein
99ef6f727d Optimizer: replace unchecked_enum_data simplification in SILCombine with the corresponding instruction simplification from SwiftCompilerSources
The optimization in SILCombine had a bug (which is already fixed in the instruction simplification).
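The message doesn't spell out the pattern; the classic form of this simplification, sketched here as an assumption, forwards the payload of a statically known enum case:
```
  %1 = enum $E, #E.someCase!enumelt, %0
  %2 = unchecked_enum_data %1, #E.someCase!enumelt
  // uses of %2
```
->
```
  // uses of %0
```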
2024-11-14 09:18:29 +01:00
Erik Eckstein
51e3e5ed80 Optimizer: rename BorrowArgumentsUpdater -> GuaranteedPhiUpdater
NFC
2024-11-12 09:26:59 +01:00
Erik Eckstein
8462459f07 Optimizer: re-compute the re-borrow flags of phi arguments in updateAllBorrowArguments and updateBorrowArguments 2024-11-12 09:26:59 +01:00
Erik Eckstein
6b8c6a3c3b SIL: rename updateBorrowedFrom to updateBorrowArguments
NFC
2024-11-12 09:26:58 +01:00
Michael Gottesman
b5ce28fc57 [region-isolation] Cache getUnderlyingTrackedValue.
TLDR: Was looking at some performance traces and saw that we need to cache the
result of this operation.

----

Specifically, I noticed that we were spending a lot of time computing this
operation. When I looked at the code I saw that we already had a cache along the
relevant code paths... but the cache was from equivalence class representative
-> state. Before we hit that cache, we were performing the work to map the value
to the equivalence class representative... so the relevant lookup from
value -> state (which goes through the equivalence class representative) was
not just a hash table lookup. This change makes it cheaper by reducing it to
two cache lookups.

It may be possible to make this cheaper by redoing the actual mapping of
information so that we can go straight from value to state. I think it would be
slightly different since we would probably need to represent the state in a
separate array and map with indices... which is really just a more efficient
hash table. We could also use malloc/etc. but let's not even talk about that.

rdar://139520959
2024-11-11 11:43:07 -08:00
Arnold Schwaighofer
34c417d9ff Merge pull request #77379 from aschwaighofer/enable_aggressive_reg2mem
Enable heuristic that tries to keep large values in memory
2024-11-06 11:36:33 -08:00
Arnold Schwaighofer
dc3c19164a PMO: Don't block pmo for large types - rather only block expansion of tuples 2024-11-04 17:06:24 -08:00
Michael Gottesman
32b4de60a9 Rename transfer -> send.
Accomplished using clangd's rename functionality.
2024-11-04 15:17:51 -08:00
Arnold Schwaighofer
787c996394 LargeTypesReg2Mem: Add a new heuristic that tries harder to keep large
values on the stack

This heuristic can be enabled by passing -Xfrontend
-enable-aggressive-reg2mem.

rdar://123916109
2024-10-31 13:22:06 -07:00
Michael Gottesman
2b6b98d767 Merge pull request #77238 from gottesmm/region_isolation_refactoring
[region-isolation] Refactor code so that I can more easily add additional error kinds
2024-10-25 22:13:43 -07:00
Michael Gottesman
067dbadfef [region-isolation] Add a print command that emits errors of the form "*-isolated code" or "code in the current task"
This makes it so that one does not need to deal with the differences in text
between the task-isolated case and the actor-isolated case. This is done by
handling that entire part of the message in one method rather than having the
caller do the work.
2024-10-25 16:57:55 -07:00
Michael Gottesman
e49ef778f1 [region-isolation] Rename RequireInOutSendingAtFunctionExit -> InOutSendingAtFunctionExit.
I am going to be doing more types of checks for such inout sending types, so it
makes sense to rename it to have a more general name.
2024-10-25 16:57:54 -07:00
Erik Eckstein
ed67e36ce5 bridging: reduce #ifdef USED_IN_CPP_SOURCE in bridging headers
Especially avoid any constructors in `#ifdef USED_IN_CPP_SOURCE` blocks, because this breaks Windows ARM64.
2024-10-25 09:47:56 +02:00
Michael Gottesman
2e403b9a7e [region-isolation] Remove dead code path.
This also has the nice effect of making the subsequent refactoring I am going to
do simpler.
2024-10-24 15:11:50 -07:00
Michael Gottesman
f23ad55acb [region-isolation] Eliminate CRTP for errors and just pass through an error struct instead.
This is going to let me pass the error struct straight through to the
diagnostic rather than using the CRTP and constructing an info object per CRTP
type.
Currently, to make it easier to refactor, I changed the code in
TransferNonSendable to just take in the new error and call the current CRTP
routines. In the next commit, I am going to refactor TransferNonSendable.cpp
itself. This just makes it easier to test that I did not break anything.
2024-10-24 15:00:43 -07:00
Erik Eckstein
b8026d74e6 Revert "Revert "Optimizer: improve the load-copy-to-borrow optimization and implement it in swift""
This reverts commit 0666c446ec.
2024-10-22 08:40:18 +02:00
Erik Eckstein
0666c446ec Revert "Optimizer: improve the load-copy-to-borrow optimization and implement it in swift"
This reverts commit eed8645610.
2024-10-18 10:36:06 +02:00
Erik Eckstein
709dfc2d21 MandatoryPerformanceOptimization: don't allow non-inlinable functions to be inlined
Also refactor canInline.

Fixes a compiler crash.
rdar://137544788
2024-10-15 12:19:50 +02:00
Erik Eckstein
e0533e6125 SIL: add an API to replace all entries of a VTable
* add `ModulePassContext.replaceVTableEntries()`
* add `ModulePassContext.notifyFunctionTablesChanged()`
2024-10-14 14:43:11 +02:00
Erik Eckstein
eed8645610 Optimizer: improve the load-copy-to-borrow optimization and implement it in swift
The optimization replaces a `load [copy]` with a `load_borrow` if possible.

```
  %1 = load [copy] %0
  // no writes to %0
  destroy_value %1
```
->
```
  %1 = load_borrow %0
  // no writes to %0
  end_borrow %1
```

The new implementation uses alias-analysis (instead of a simple def-use walk), which is much more powerful.

rdar://115315849
2024-10-11 09:41:37 +02:00
Erik Eckstein
52deb58251 Optimizer: add the FunctionPassContext.completeLifetime(of: Value) utility
Implemented by bridging the OSSALifetimeCompletion utility
2024-10-11 09:41:37 +02:00
Erik Eckstein
c97502374b Optimizer: add constant folding of classify_bridge_object
Constant fold `classify_bridge_object` to `(false, false)` if the operand is known to be a Swift class.
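Sketched as SIL, assuming the result is a pair of `Builtin.Int1` flags, which the folded `(false, false)` suggests (names invented):
```
  %1 = classify_bridge_object %0
  // %0 is known to be a native Swift class reference
```
->
```
  %f = integer_literal $Builtin.Int1, 0
  %1 = tuple (%f, %f)
```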
2024-10-08 16:24:46 +02:00
Erik Eckstein
c05234e677 MandatoryPerformanceOptimizations: specialize witness_method instructions
In Embedded Swift, witness method lookup is done from specialized witness tables.
For this to work, the type of witness_method must be specialized as well.
Otherwise the method call would be done with the wrong parameter conventions (indirect instead of direct).
2024-10-07 09:00:31 +02:00
Erik Eckstein
f7aaf5874e SwiftCompilerSources: add Context.getSpecializedConformance 2024-10-07 08:49:56 +02:00
Michael Gottesman
f985b0ee03 [thunk-lowering] Add a pass that performs lowering of ThunkInsts.
Right now it just handles the "identity" case so we can validate the
functionality.
2024-10-02 14:15:49 -07:00