Commit Graph

41 Commits

Author SHA1 Message Date
Andrew Trick
d9dd93560d Support mark_dependence_addr in SIL passes. 2025-03-25 23:02:45 -07:00
Arnold Schwaighofer
dc3c19164a PMO: Don't block pmo for large types - rather only block expansion of tuples 2024-11-04 17:06:24 -08:00
Andrew Trick
19c1617059 PredictableMemOpts: handle mark_dependence source promotion 2024-10-15 16:48:36 -07:00
Andrew Trick
2e9daa444d PredictableMemOpts: handle MarkDependence base uses. 2024-10-15 11:18:43 -07:00
Andrew Trick
a635e8a292 [NFC] Refactor PredictableMemoryOptimization
In preparation for adding mark_dependence support.
Required to support addressors (unsafeAddress) in places
other than UnsafePointer.pointee.
2024-09-25 18:18:59 -07:00
Akira Hatanaka
42bc49d3fe Add a new parameter convention @in_cxx for non-trivial C++ classes that are passed indirectly and destructed by the caller (#73019)
This corresponds to the parameter-passing convention of the Itanium C++
ABI, in which the argument is passed indirectly and possibly modified,
but not destroyed, by the callee.

@in_cxx is handled the same way as @in in callers and @in_guaranteed in
callees. OwnershipModelEliminator emits the call to destroy_addr that is
needed to destroy the argument in the caller.
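As an illustrative sketch (the function and type names below are hypothetical, not taken from this commit), a declaration using the new convention and the caller-side cleanup might look like:

  // Hypothetical SIL sketch of the @in_cxx convention: the argument is
  // passed indirectly; the callee may modify it but must not destroy it.
  sil @takesCxxType : $@convention(thin) (@in_cxx NonTrivialCxxType) -> ()

  // After OwnershipModelEliminator runs, the caller destroys the
  // argument slot itself following the apply:
  //   %r = apply %fn(%addr) : $@convention(thin) (@in_cxx NonTrivialCxxType) -> ()
  //   destroy_addr %addr : $*NonTrivialCxxType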

rdar://122707697
2024-06-27 09:44:04 -07:00
Tim Kientzle
1d961ba22d Add #include "swift/Basic/Assertions.h" to a lot of source files
Although I don't plan to bring over new assertions wholesale
into the current qualification branch, it's entirely possible
that various minor changes in main will use the new assertions;
having this basic support in the release branch will simplify that.
(This is why I'm adding the includes as a separate pass from
rewriting the individual assertions.)
2024-06-05 19:37:30 -07:00
Meghana Gupta
f3b225a395 Fix computation of argument index in the presence of indirect error results
Fixes rdar://124108894
2024-03-07 02:02:33 -08:00
Holly Borla
dce70f373f [SILGen] Emit MaterializePackExprs.
The subexpression of a MaterializePackExpr (which is always a tuple value
currently) is emitted while preparing to emit a pack expansion expr, and its
elements are projected from within the dynamic pack loop. This means that a
materialized pack is only evaluated once, rather than being evaluated on
every iteration over the pack elements.
2023-03-09 21:44:03 -08:00
swift-ci
0b49ac49d7 Merge remote-tracking branch 'origin/main' into rebranch 2023-01-29 10:33:27 -08:00
John McCall
d25a8aec8b Add explicit lowering for value packs and pack expansions.
- SILPackType carries whether the elements are stored directly
  in the pack, which we're not currently using in the lowering,
  but it's probably something we'll want in the final ABI.
  Having this also makes it clear that we're doing the right
  thing with substitution and element lowering.  I also toyed
  with making this a scalar type, which made it necessary in
  various places, although eventually I pulled back to the
  design where we always use packs as addresses.

- Pack boundaries are a core ABI concept, so the lowering has
  to wrap parameter pack expansions up as packs.  There are huge
  unimplemented holes here where the abstraction pattern will
  need to tell us how many elements to gather into the pack,
  but a naive approach is good enough to get things off the
  ground.

- Pack conventions are related to the existing parameter and
  result conventions, but they're different on enough grounds
  that they deserve to be separated.
2023-01-29 03:29:06 -05:00
swift-ci
328e716489 Merge remote-tracking branch 'origin/main' into rebranch 2022-12-12 07:33:19 -08:00
Nate Chandler
8d8577e5b0 [SIL] Removed Indirect_In_Constant convention.
It is no different from @in.

Continue to parse @in_constant in textual and serialized SIL, but just as
an alias for @in.
2022-12-09 21:54:00 -08:00
Erik Eckstein
54fe1304a1 replace LLVM_NODISCARD -> [[nodiscard]]
This is possible because we are now compiling with the C++17 standard.
2022-11-04 20:44:18 +01:00
Meghana Gupta
4e2d41a300 Fix PMO to not scalarize empty tuple (#59382)
Currently PMO tries to scalarize the empty tuple, and ends up deleting the
store of an empty tuple. This causes a cascade of problems.
Mem2Reg can end up seeing loads without stores of empty tuple type,
and creates undef while optimizing the alloca away.
This is bad; we don't want to create undefs unnecessarily in SIL.
Some optimizations can query the function or the block on the undef, leading to a nullptr
and a compiler crash.
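A minimal sketch of the failure mode (illustrative SIL, not taken from the commit):

  %a = alloc_stack $()              // empty-tuple allocation
  %t = tuple ()
  store %t to %a : $*()             // PMO scalarizes the empty tuple and
                                    // deletes this store outright
  %v = load %a : $*()               // Mem2Reg then sees a load with no
                                    // store and rewrites %v as undef
  dealloc_stack %a : $*()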

Fixes rdar://94829482
2022-06-13 12:05:19 -07:00
Michael Gottesman
6a982de8ad [pmo] Teach pmo to ignore debug instruction uses.
Just noticed this while trying to figure out why a change I was trying to make
in capture promotion wasn't optimizing loads away.
2021-01-31 16:54:16 -08:00
Arnold Schwaighofer
8aaa7b4dc1 SILOptimizer: Pipe through TypeExpansionContext 2019-11-11 14:21:52 -08:00
Slava Pestov
d434188157 SIL: Refactor TypeConverter to not require a SILModule
This mostly requires changing various entry points to pass around a
TypeConverter instead of a SILModule. I've left behind entry points
that take a SILModule for a few methods like SILType::subst() to
avoid creating even more churn.
2019-09-06 21:50:15 -04:00
Saleem Abdulrasool
731c31f9a5 MSVC: litter the code with llvm_unreachable (NFC)
Add `llvm_unreachable` to mark covered switches which MSVC does not
analyze correctly and believes that there exists a path through the
function without a return value.
2019-06-01 19:02:46 -07:00
Slava Pestov
8915f96e3e SIL: Replace SILType::isTrivial(SILModule) with isTrivial(SILFunction) 2019-03-12 01:16:04 -04:00
Michael Gottesman
a310f23b8a [ownership] Add support for load_borrow in predictable mem opt.
This reduces the diff in between -Onone output when stripping before/after
serialization.

We support load_borrow by translating it to the load [copy] case. Specifically,
for +1, we normally perform the following transform.

  store %1 to [init] %0
  ...
  %2 = load [copy] %0
  ...
  use(%2)
  ...
  destroy_value %2

=>

  %1a = copy_value %1
  store %1 to [init] %0
  ...
  use(%1a)
  ...
  destroy_value %1a

We analogously can optimize load_borrow by replacing the load with a
begin_borrow:

  store %1 to [init] %0
  ...
  %2 = load_borrow %0
  ...
  use(%2)
  ...
  end_borrow %2

=>

  %1a = copy_value %1
  store %1 to [init] %0
  ...
  %2 = begin_borrow %1a
  ...
  use(%2)
  ...
  end_borrow %2
  destroy_value %1a

The store from outside a loop being used by a load_borrow inside a loop is a
similar transformation to the +0 version, except that we use a begin_borrow
inside the loop instead of a copy_value (making it even more efficient).
2019-02-11 00:54:28 -08:00
Michael Gottesman
628d761798 [pmo] Teach the use collector how to handle store [assign].
I also cleaned up the code there to make it explicit how each ownership
qualifier maps to PMOUseKind.
2019-01-22 01:15:43 -08:00
swift-ci
efba836699 Merge pull request #22013 from gottesmm/pr-651a3baecca3c91ecdb06d2ef012dbe8fb0d13b0 2019-01-20 13:01:00 -08:00
Michael Gottesman
8e85d60b84 [pmo] A copy_addr of a trivial type should be treated as an InitOrAssign of the dest rather than like a non-trivial type.
PMO uses InitOrAssign for trivially typed things and Init/Assign for non-trivial
things, so I think this was an oversight from a long time ago. There is actually
no /real/ effect on the code today since after exploding the copy_addr, the
store will still be used to produce the right available value and since for
stores, init/assign/initorassign all result in allocations being removed. Though
once I change assign to not allow for allocation removal (the proper way to
model this), without this change certain trivial allocations will no longer be
removed, harming perf as seen via the benchmarking run on the bots in #21918.
2019-01-20 11:52:24 -08:00
Michael Gottesman
9004e87500 [pmo] Remove untested code around load_weak, store_weak, load_unowned, store_unowned.
I am removing these for the following reasons:

* PMO does not have any tests for these code paths. (1).

* PMO does not try to promote these loads (it explicitly pattern matches load,
  copy_addr) or get available values from these (it explicitly pattern matches
  store or explodes a copy_addr to get the copy_addr's stores). This means that
  removing this code will not affect our constant propagation diagnostics. So,
  removing this untested code path at worst could cause us to no longer
  eliminate some dead objects that we otherwise would be able to eliminate at
  -Onone (low-priority). (2).

----

(1). I believe that the lack of PMO tests is due to this being a vestigial
     remnant of DI code in PMO. My suspicion arises since:

     * The code was added when the two passes were both sharing the same use
       collector and auxiliary data structures. Since then I have changed DI/PMO
       to each have their own copies.

     * DI has a bunch of tests that verify behavior around these instructions.

(2). I expect the number of actually removed allocations that are no longer
     removed should be small since we do not promote loads from such allocations
     and PMO will not eliminate an allocation that has any loads.
2019-01-20 11:37:02 -08:00
Michael Gottesman
9e25cc54fd [pmo] Eliminate PMOUseKind::PartialStore.
PartialStore is a PMOUseKind that is a vestigial remnant of Definite Init in the
PMO source. This can be seen by noting that in Definite Init, PartialStore is
how Definite Init diagnoses partially initialized values and errors. In contrast
in PMO the semantics of PartialStore are:

1. It can only be produced if we have a raw store use or a copy_addr.
2. We allow for the use to provide an available value just like if it was an
assign or an init.
3. We ignore it for the purposes of removing store-only allocations since, by
itself without ownership, stores (and stores from an exploded copy_addr) do not
affect ownership in any way.

Rather than keeping this around, in this commit I remove it since it doesn't
provide any additional value over [init] or [assign]. Functionally there should be
no change.
2019-01-17 18:33:38 -08:00
Michael Gottesman
3d562e59d0 [pmo] Use SILBuilder::emitDestructureOperation to destructure values instead of emitting our own tuple_extracts.
This will cause this code to automagically work correctly in OSSA code since,
instead of emitting tuple_extracts, we will emit a destructure operation
without any code change.

Since this entrypoint will emit tuple_extracts in non-ossa code, this is a NFC
patch.
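Roughly, where the old code emitted one tuple_extract per element (illustrative sketch, not taken from the patch):

  %0 = tuple_extract %t : $(A, B), 0
  %1 = tuple_extract %t : $(A, B), 1

SILBuilder::emitDestructureOperation instead emits, in OSSA:

  (%0, %1) = destructure_tuple %t : $(A, B)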
2019-01-15 10:41:56 -08:00
Michael Gottesman
5e31124054 [pmo] Update the memory use collector for ownership.
This is technically an NFC commit. The changes are:

1. When we scalarize loads/stores, we need to not just use unqualified
loads/stores. Instead we need to use the createTrivial{Load,Store}Or APIs. In
OSSA mode, this will propagate through the original ownership qualifier if the
sub-type is non-trivial, but if the sub-type is trivial it will change the
qualifier to trivial. Today when the pass runs without ownership nothing is
changed since I am passing in the "supports unqualified" flag to the
createTrivial{Load,Store}Or API so that we just create an unqualified memop if
we are passed an unqualified memop. Once we fully move PMO to ownership, this
flag will be removed and we will assert.

2. The container walker is taught about copy_value, destroy_value. Specifically,
we teach the walker how to recursively look through copy_values during the walk
and to treat a destroy_value of the box like a strong_release,
release_value. Since destroy_value, copy_value only exist in [ossa] today, this
is also NFC.
2019-01-15 09:34:45 -08:00
Michael Gottesman
be475827db [pmo] Move handling of releases: ElementUseCollector::{collectFrom,collectContainerUses}()
Since:

1. We only handle alloc_stack, alloc_box in predictable memopts.
2. alloc_stack cannot be released.

We know that the release collecting in collectFrom can just be done in
collectContainerUses() [which only processes alloc_box].

This also let me simplify some code and add a defensive check in case for
some reason we are passed a release_value on the box. NOTE: I verified that
previously this did not result in a bug since we would consider the
release_value to be an escape of the underlying value even though we didn't
handle it in collectFrom. But the proper way to handle release_value is like
strong_release, so I added code to do that as well.
2019-01-05 23:47:07 -08:00
Michael Gottesman
ecacc7541f [pmo] Now that we are doing far less in these methods, inline them.
These increased the amount of code to read in the file and are really not
necessary.
2019-01-04 14:30:16 -08:00
Michael Gottesman
7b7ccdcca0 [pmo] Eliminate more dead code. 2019-01-04 13:19:43 -08:00
Michael Gottesman
7175e1790a [pmo] Eliminate dead flat namespace tuple numbering from PMOMemoryUseCollector.
TLDR: This does not eliminate the struct/tuple flat namespace from Predictable
Mem Opts, just the tuple-specific flat namespace code from PMOMemoryUseCollector
that we were computing and then throwing away. I explain below in more detail.

First note that this is cruft from when def-init and pmo were one pass. What we
were doing here was maintaining a flattened tuple namespace while we were
collecting uses in PMOMemoryUseCollector. We never actually used it for
anything since we recomputed this information, including information about
structs, in PMO itself! So this information was truly completely dead.

This commit removes that and related logic, and from a maintenance standpoint
makes PMOMemoryUseCollector a simple visitor that doesn't have any real special
logic in it beyond the tuple scalarization.
2019-01-04 13:19:43 -08:00
Michael Gottesman
ef99325427 [pmo] Debride more code that is still in PMO but was only used by DI. 2019-01-04 11:29:43 -08:00
Michael Gottesman
b70f6f8171 [pmo] Eliminate dynamically dead code paths.
Specifically, we are putting dealloc_stack, destroy_box into the Releases array
in PMOMemoryUseCollector only to ignore them in the only place that we use the
Releases array in PredictableMemOpts.
2019-01-03 08:58:20 -08:00
Michael Gottesman
0d962b237f [pmo] Change MemoryInst to be an AllocationInst since it will always be so.
Just a part of a series of small cleanups I found over the break in PMO that I
am landing in preparation for landing patches that fix PMO for ownership.
2019-01-02 11:07:13 -08:00
Michael Gottesman
9620bedf7a [di] Rename: DIMemoryUseCollector{Ownership,}.{cpp,h}
This was done early on during the split of predictable mem opts from DI. This
has been done for a long time, so eliminate the "Ownership" basename suffix.
2018-12-30 16:11:56 -08:00
Jordan Rose
cefb0b62ba Replace old DEBUG macro with new LLVM_DEBUG
...using a sed command provided by Vedant:

$ find . -name \*.cpp -print -exec sed -i "" -E "s/ DEBUG\(/ LLVM_DEBUG(/g" {} \;
2018-07-20 14:37:26 -07:00
David Zarzycki
03b7eae9ed [SILOptimizer] NFC: Adopt reference storage type meta-programming macros 2018-06-30 06:44:33 -04:00
Michael Gottesman
b9f69cb0ea [pmo] Eliminate incomplete support for promoting enums.
This was never implemented correctly way back in 2013-2014. It was originally
added, I believe, so we could do DI checks, but the promotion part was never added.

Given that DI is now completely split from PMO, we can just turn this off and if
necessary add it back on master "properly".

rdar://41161408
2018-06-26 18:49:08 -07:00
Devin Coughlin
c50ca98ac6 [SIL] Factor out logic for detecting sanitizer instrumentation. NFC.
Factor out common logic for detecting sanitizer instrumentation and put it in
SIL/InstructionUtils.
2018-06-10 16:44:19 -07:00
Michael Gottesman
db30959a9d [pred-memopt] Replace remaining occurrences of prefix DI with PMO prefix.
DI and predictable mem opts have been split for a long time and
their subroutines aren't going to be joined in the future... so replace the DI
prefixes in the pred-mem-opts impl with PMO and rename DIMemoryUseCollector =>
PMOUseCollector.

Been sitting on this for a long time... just happy to get it in.
2018-05-22 12:52:43 -07:00