Commit Graph

11247 Commits

Author SHA1 Message Date
Doug Gregor
2e27ff8734 Merge pull request #82814 from DougGregor/region-inference-dynamic-isolated-conformances
[SE-0470] Extend region analysis to account for isolated conformances
2025-07-06 09:24:17 -07:00
Doug Gregor
62770ff2d7 Extend region analysis to account for isolated conformances
When a conformance becomes part of a value, and that conformance could
potentially be isolated, the value cannot leave that particular
isolation domain. For example, if we perform a runtime lookup for a
conformance to P as part of a dynamic cast `as? any P`, the
conformance to P used in the cast could be isolated. Therefore, it is
not safe to transfer the resulting value to another concurrency domain.

Model this in region analysis by considering whether instructions that
add conformances could end up introducing isolated conformances. In
such cases, merge the regions with either the isolation of the
conformance itself (if known) or with the region of the task (making
them task-isolated). This prevents such values from being sent.

Note that `@concurrent` functions, which never dynamically execute on
an actor, cannot pick up isolated conformances.
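
A hedged sketch of the pattern being modeled (the types and names are
illustrative, not taken from the patch):

```swift
protocol P {}

@MainActor final class Model {}

// An isolated conformance (SE-0470): usable only on the main actor.
extension Model: @MainActor P {}

func useElsewhere(_ p: sending any P) async {}

@MainActor
func lookUp(_ value: Any) async {
  // The conformance found by this dynamic cast could be the isolated one
  // above, so the result is merged into the actor's region and can no
  // longer be sent to another concurrency domain.
  if let p = value as? any P {
    await useElsewhere(p)   // now diagnosed
  }
}
```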

Fixes issue #82550 / rdar://154437489
2025-07-05 07:13:02 -07:00
nate-chandler
6c87a0c2e7 Merge pull request #82736 from nate-chandler/rdar154499140
[SemanticARCOpts] Always heed subpass flags.
2025-07-04 11:15:18 -07:00
nate-chandler
4c1dbd74c7 Merge pull request #82784 from nate-chandler/rdar153693915
[Mem2Reg] Don't promote proj(unchecked_addr_cast).
2025-07-04 02:02:16 -07:00
Nate Chandler
41877ae7b7 [Mem2Reg] Don't promote proj(unchecked_addr_cast).
In OSSA, the result of an `unchecked_bitwise_cast` must immediately be
copied or `unchecked_bitwise_cast`'d again.  In particular, it is not
permitted to borrow it.  For example, the result can't be borrowed for
the purpose of performing additional projections (`struct_extract`,
`tuple_extract`) on the borrowed value.  Consequently, we cannot promote
an address if such a promotion would result in such a pattern.  That
means we can't promote an address `%addr` which is used like
`struct_element_addr(unchecked_addr_cast(%addr))` or
`tuple_element_addr(unchecked_addr_cast(%addr))`.  We can still promote
`unchecked_addr_cast(unchecked_addr_cast(%addr))`.

In ownership-lowered SIL, this restriction doesn't apply and we can still promote
addresses with such projections.

rdar://153693915
2025-07-03 14:52:23 -07:00
Michael Gottesman
677b897ff6 Merge pull request #82555 from gottesmm/pr-b4de2f5d58d852cd3e9e606101e1b0ff42ce6092
[nonisolated-nonsending] Make the AST not consider nonisolated(nonsending) to be an actor isolation crossing point.
2025-07-03 05:22:57 -07:00
eeckstein
010291fe65 Merge pull request #82728 from eeckstein/fix-allocbox-to-stack
MandatoryAllocBoxToStack: also handle new specialized functions and re-enable the pass
2025-07-03 06:31:26 +02:00
Michael Gottesman
010fa39f31 [rbi] Use interned StringRefs for diagnostics instead of SmallString<64>.
This makes the code easier to write and also prevents lifetime issues where a
diagnostic outlives the SmallString due to diagnostic transactions.
2025-07-02 16:50:24 -07:00
nate-chandler
d2f4e84889 Merge pull request #82696 from nate-chandler/rdar154652254
[CSE] Fix combine of type_value.
2025-07-02 15:42:46 -07:00
Michael Gottesman
14634b6847 [rbi] Begin tracking if a function argument is from a nonisolated(nonsending) parameter and adjust printing as appropriate.
Specifically, in terms of printing: if NonisolatedNonsendingByDefault is enabled,
we print things as nonisolated/task-isolated and @concurrent/@concurrent
task-isolated. If the feature is disabled, we print things as
nonisolated(nonsending)/nonisolated(nonsending) task-isolated and
nonisolated/task-isolated. This ensures that, in the default case, diagnostics do
not change and we always print things to match the expected meaning of
nonisolated depending on the mode.

I also updated the tests as appropriate, added some more tests, and added
information about this to the SendNonSendable education notes.
2025-07-02 13:03:13 -07:00
Michael Gottesman
4433ab8d81 [rbi] Thread a SILFunction through the print routines so we can access LangOpts.Features and change how we print based on NonisolatedNonsendingByDefault.
We do not actually use this information yet though... This is just to ease
review.
2025-07-02 12:13:52 -07:00
Michael Gottesman
4ce4fc4f95 [rbi] Wrap uses of ActorIsolation::printForDiagnostics with our own SILIsolationInfo::printActorIsolationForDiagnostics.
I am doing this so that I can change how we emit the diagnostics just for
SendNonSendable depending on if NonisolatedNonsendingByDefault is enabled
without touching the rest of the compiler.

This does not change any of the actual output, though.
2025-07-02 12:13:50 -07:00
Michael Gottesman
c12c99fb73 [nonisolated-nonsending] Make the AST not consider nonisolated(nonsending) to be an actor isolation crossing point.
We were effectively working around this previously at the SIL level. This caused
us not to obey the semantics of the actual evolution proposal. As an example of
this, in the following, x should not be considered main actor isolated:

```swift
nonisolated(nonsending) func useValue<T>(_ t: T) async {}
@MainActor func test() async {
  let x = NS()
  await useValue(x)
  print(x)
}
```

We should just consider this to be a merge; since useValue does not have any
MainActor-isolated parameters, x should not be main-actor isolated and we should
not emit an error here.

I also fixed a separate issue where we were allowing parameters of
nonisolated(nonsending) functions to be passed to @concurrent functions. We
cannot allow this, since the nonisolated(nonsending) parameters /could/ be actor
isolated and we have lost that static information at this point. Given that we
do have the actual dynamic actor isolation information, we could dynamically
allow the parameters to be passed... but that is speculative and definitely
outside the scope of this patch.
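
A hedged sketch of the now-rejected pattern (the names here are illustrative, not
taken from the patch):

```swift
// Illustrative only: NS stands in for any non-Sendable type.
class NS {}

// @concurrent functions never run on an actor, so they must not receive
// values that might be actor-isolated.
@concurrent func useConcurrently<T>(_ t: T) async {}

nonisolated(nonsending) func forward(_ x: NS) async {
  // `x` may dynamically be actor-isolated, so passing it here is rejected.
  await useConcurrently(x)
}
```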

rdar://154139237
2025-07-02 12:13:29 -07:00
Nate Chandler
cf2ebf41d8 [SemanticARCOpts] Always heed subpass flags.
To get equivalent test coverage in asserts and noasserts builds, enable
the code that triggers executing only specified subpasses in noasserts
builds.

rdar://154499140
2025-07-02 11:58:42 -07:00
Erik Eckstein
35edadca6c Revert "Optimizer: revert to legacy alloc-box-to-stack optimization"
This reverts commit 18499e2bbd.
2025-07-02 19:15:26 +02:00
Nate Chandler
429de09f87 [CSE] Fix combine of type_value.
The instruction has no operands but has a param type.  Hash and compare both
the type and the param type.
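
A hedged illustration (assuming SE-0452 integer generic parameters; the names are
hypothetical): the two reads below lower to operand-less `type_value` instructions
that differ only in their param type, so CSE must keep them distinct.

```swift
struct Pair<let M: Int, let N: Int> {
  func counts() -> (Int, Int) {
    // Each read of M and N lowers to a `type_value` instruction.
    return (M, N)
  }
}
```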

rdar://154652254
2025-07-01 14:26:21 -07:00
eeckstein
83d7dbb3f5 Merge pull request #82668 from eeckstein/revert-abts
Optimizer: revert to legacy alloc-box-to-stack optimization
2025-07-01 17:49:42 +02:00
eeckstein
0bf490996f Merge pull request #82677 from eeckstein/fix-mandatory-perf-opt
MandatoryPerformanceOptimizations: don't de-virtualize a generic class method call to specialized method
2025-07-01 17:49:04 +02:00
Erik Eckstein
50b50af70f MoveOnlyWrappedTypeEliminator: handle EndCOWMutationAddr
Fixes a compiler crash.
rdar://154416511
2025-07-01 11:40:26 +02:00
Erik Eckstein
606f7693d4 MandatoryPerformanceOptimizations: don't de-virtualize a generic class method call to specialized method
De-virtualizing in that state results in wrong argument/return calling conventions.
The method call must first be specialized; only then can it be de-virtualized.
Usually this happens in that order anyway, because the `class_method` instruction is located before the `apply`.
But when inlining functions, the order (in the worklist) can be the other way around.
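
A hedged sketch of the kind of call involved (the names are illustrative):

```swift
// A generic class-method call; in SIL this is a `class_method` plus an `apply`.
class Box<T> {
  var value: T
  init(_ value: T) { self.value = value }
  func get() -> T { value }
}

func readInt(_ box: Box<Int>) -> Int {
  // The call must be specialized for Box<Int> before it can be
  // de-virtualized to the specialized method.
  return box.get()
}
```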

Fixes a compiler crash.
rdar://154631438
2025-07-01 09:44:47 +02:00
Erik Eckstein
18499e2bbd Optimizer: revert to legacy alloc-box-to-stack optimization
The new implementation causes some problems
rdar://154686063, rdar://154713388
2025-07-01 07:02:36 +02:00
nate-chandler
991eb45432 Merge pull request #82557 from nate-chandler/rdar154407327
[DestroyAddrHoisting] Don't fold into read access.
2025-06-27 15:36:22 -07:00
eeckstein
ea6ed2fdec Merge pull request #82537 from eeckstein/fix-closure-specializer
ClosureSpecializer: don't specialize captures of stack-allocated Objective-C blocks
2025-06-27 06:46:48 +02:00
Nate Chandler
2e2dee7a41 [DestroyAddrHoisting] Don't fold into read access.
Narrowly fix a long-standing bug where destroy_addrs would be hoisted
into read access scopes, leaving the scope as a read despite the fact
that it modifies memory.  This is a problem for utilities which expect
access scopes to provide correct summaries.  Do this by refusing to fold
into access scopes which are marked `[read]`.

In a follow-up, we can reenable this folding, promoting each access
scope to a modify.  Doing so requires checking that there are no access
scopes which overlap with any of the access scopes which would be
promoted to modify.

rdar://154407327
2025-06-26 17:40:03 -07:00
Erik Eckstein
1a009f5b0d ClosureSpecializer: don't specialize captures of stack-allocated Objective-C blocks
Bail if the closure captures an Objective-C block which might _not_ be copied onto the heap, i.e. one optimized by SimplifyCopyBlock.
We can't specialize such captures because the optimization inserts retains+releases for captured arguments.
That's not possible for stack-allocated blocks.

Fixes a miscompile.
rdar://154241245
2025-06-26 19:17:37 +02:00
Erik Eckstein
45dc83bcd8 SemanticARCOpts: don't ignore dead-end blocks in the liverange analysis
Ignoring them can result in wrong ARC optimizations in dead-end blocks.
Fixes a SIL verification error.

rdar://154356277
2025-06-26 16:06:32 +02:00
stzn
660263cf2c [region-isolation] Fix crash due to missing visitVectorBaseAddrInst case
Credit to https://github.com/stzn for initial work on the patch.

rdar://151401230
2025-06-23 11:11:59 -07:00
Andrew Trick
7a29d9d8b6 Fix MoveOnlyObjectCheckerPImpl::check() for mark_dependence.
Handle the presence of mark_dependence instructions after a begin_apply.

Fixes a compiler crash:
"copy of noncopyable typed value. This is a compiler bug. ..."
2025-06-22 17:39:13 -07:00
Andrew Trick
c41715ce8c Fix MoveOnlyObjectCheckerPImpl::check() changed flag
Extract the special pattern matching logic that is otherwise unrelated to the
check() function. This makes it obvious that the implementation was failing to
set the 'changed' flag whenever needed.
2025-06-22 16:38:06 -07:00
nate-chandler
17c1fbd5f4 Merge pull request #82313 from jamieQ/copy-prop-ossa-dead-ends-build-time-fix
[SILOptimizer]: slow OSSA lifetime canonicalization mitigation
2025-06-21 08:50:58 -07:00
Erik Eckstein
6714a72256 Optimizer: re-implement and improve the AllocBoxToStack pass
This pass replaces `alloc_box` with `alloc_stack` if the box is not escaping.
The original implementation had some limitations. It could not handle cases of local functions which are called multiple times or even recursively, e.g.

```
public func foo() -> Int {
  var i = 1

  func localFunction() { i += 1 }

  localFunction()
  localFunction()
  return i
}

```

The new implementation (written in Swift) fixes this problem with a new algorithm.
It's not only more powerful, but also simpler: the new pass has less than half the lines of code of the old one.

The pass is invoked in the mandatory pipeline and later in the optimizer pipeline.
The new implementation provides a module pass for the mandatory pipeline (whereas the "regular" pass is a function pass).
This is required because the mandatory pass needs to remove the originals of specialized closures, which cannot be done from a function pass.
In the old implementation this was done with a hack: adding a semantic attribute and deleting the function later in the pipeline.

I still kept the sources of the old pass to be able to bootstrap the compiler without a host compiler.

rdar://142756547
2025-06-20 08:15:04 +02:00
Erik Eckstein
28dd6f7064 Optimizer: improve the SpecializationCloner
* add `cloneFunctionBody` without an `entryBlockArguments` argument
* remove the `swift::ClosureSpecializationCloner` from the bridging code and replace it with a more general `SpecializationCloner`
2025-06-20 08:15:02 +02:00
Erik Eckstein
d8e4e501f6 Optimizer: add some SIL modification APIs to Context
* `insertFunctionArgument`
* `BeginAccessInst.set(accessKind:)`
* `erase(function:)`
2025-06-20 08:15:01 +02:00
Erik Eckstein
de28cf04cc Optimizer: add Context.mangle(withBoxToStackPromotedArguments)
2025-06-20 08:15:01 +02:00
Erik Eckstein
bc7024edfe Optimizer: fix ModulePassContext.mangle(withDeadArguments:)
It mangled the wrong argument indices.
2025-06-20 08:15:00 +02:00
Erik Eckstein
4212c611e5 Optimizer: add Context.createSpecializedFunctionDeclaration
Originally this was a "private" utility for the ClosureSpecialization pass.
Now, make it a general utility which can be used for all kinds of function specializations.
2025-06-20 08:15:00 +02:00
Jamie
1f3f830fc7 [SILOptimizer]: slow OSSA lifetime canonicalization mitigation
OSSA lifetime canonicalization can take a very long time in certain
cases in which there are large basic blocks. To mitigate this, add logic
to skip walking the liveness boundary (for extending liveness to dead
ends) when there aren't any dead ends in the function.

Updates `DeadEndBlocks` with a new `isEmpty` method and a cache to
determine whether there are any dead-end blocks in a given function.
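
A minimal sketch of the caching pattern (hypothetical names; the real utility lives in the optimizer sources):

```swift
// Cache whether a function has any dead-end blocks so the expensive
// liveness-boundary walk can be skipped when the answer is "none".
final class DeadEndBlocksInfo {
  private var cachedIsEmpty: Bool?
  private let findDeadEndBlocks: () -> [Int]   // assumed helper returning block IDs

  init(findDeadEndBlocks: @escaping () -> [Int]) {
    self.findDeadEndBlocks = findDeadEndBlocks
  }

  /// True if the function has no dead-end blocks; computed once and cached.
  var isEmpty: Bool {
    if let cached = cachedIsEmpty { return cached }
    let result = findDeadEndBlocks().isEmpty
    cachedIsEmpty = result
    return result
  }
}

func canonicalizeLifetimes(deadEnds: DeadEndBlocksInfo) {
  // Skip the boundary walk entirely when there are no dead ends.
  guard !deadEnds.isEmpty else { return }
  // ... walk the liveness boundary and extend liveness to dead-end blocks ...
}
```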
2025-06-18 17:52:14 -05:00
Meghana Gupta
e42b564800 Merge pull request #82033 from meg-gupta/fixinlinercrash
Fix an inliner crash when inlining begin_apply with scoped lifetime dependence
2025-06-12 10:53:19 -07:00
Stephen Canon
9259c3eec4 Add new interleave and deinterleave builtins (#81689)
Ideally we'd be able to use the LLVM interleave2 and deinterleave2
intrinsics instead of adding these, but deinterleave currently isn't
available from Swift, and even if you hack that in, the codegen from
LLVM is worse than what shufflevector produces for both x86 and ARM. So
in the medium term we'll use these builtins, and hope to remove them in
favor of [de]interleave2 at some future point.
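
A hedged sketch of the intended semantics on SIMD4 vectors (illustrative Swift, not the builtin API):

```swift
// Interleave two vectors element-by-element into a (low, high) pair,
// and deinterleave to recover the original vectors.
func interleave2(_ a: SIMD4<Float>, _ b: SIMD4<Float>) -> (low: SIMD4<Float>, high: SIMD4<Float>) {
  let low  = SIMD4<Float>(a[0], b[0], a[1], b[1])
  let high = SIMD4<Float>(a[2], b[2], a[3], b[3])
  return (low, high)
}

func deinterleave2(_ low: SIMD4<Float>, _ high: SIMD4<Float>) -> (a: SIMD4<Float>, b: SIMD4<Float>) {
  let a = SIMD4<Float>(low[0], low[2], high[0], high[2])
  let b = SIMD4<Float>(low[1], low[3], high[1], high[3])
  return (a, b)
}
```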
2025-06-12 12:01:53 -04:00
Meghana Gupta
8396a6d8c0 Fix an inliner crash when inlining begin_apply with scoped lifetime dependence
LifetimeDependenceInsertion inserts a mark_dependence on the token result of a begin_apply
when it yields a lifetime-dependent value. When such a begin_apply gets inlined,
the inliner can crash because of the remaining uses of the token result.

Fix this by inserting mark_dependence on parameter operands that are lifetime dependence sources
and deleting the mark_dependence on token results in the inliner.

Fixes rdar://151568816
2025-06-12 01:35:36 -07:00
eeckstein
116a453556 Merge pull request #82177 from eeckstein/fix-lha
LowerHopToActor: insert a borrow scope for an `Optional<Actor>.none` value
2025-06-12 06:16:45 +02:00
Michael Gottesman
e95b8570d6 Merge pull request #82149 from gottesmm/pr-4cb0aff1ceb16d75edcadd4de2ed612b97924226
Change send-never-sendable of isolated partial applies to use SIL level info instead of AST info.
2025-06-11 13:52:43 -07:00
Erik Eckstein
5489627310 LowerHopToActor: insert a borrow scope for an Optional<Actor>.none value
An optional-none value has "none" ownership, but a borrow scope is still needed.
Fixes a SIL verifier crash.
rdar://153066034
2025-06-11 13:35:21 +02:00
Michael Gottesman
f31236931b Change send-never-sendable of isolated partial applies to use SIL level info instead of AST info.
The reason I am doing this is that we have gotten reports about certain test
cases where we emit errors about self being captured in isolated closures but
the SourceLoc is invalid. This happened because the decl returned by
getIsolationCrossing did not have a SourceLoc, since self was being used
implicitly.

In this commit I fix that issue by using SIL-level information instead of
AST-level information. This guarantees that we get an appropriate SourceLoc. As
an additional benefit, this fixed some existing errors where, due to some sort
of bug in the AST, some of the error messages said a value was nonisolated when
it was actually actor-isolated.

rdar://151955519
2025-06-10 08:09:32 -07:00
Andrew Trick
16fffd1704 Fix MoveOnlyWrappedTypeEliminator handling of store_borrow.
Defer visiting an instruction until its operands have been visited. Otherwise,
this pass will crash during ownership verification with invalid operand
ownership.

Fixes rdar://152879038 ([moveonly] MoveOnlyWrappedTypeEliminator ownership
verifier crashes on @_addressableSelf)
2025-06-09 19:45:09 -07:00
Arnold Schwaighofer
11942d70bd Merge pull request #81995 from aschwaighofer/fix_looprotate_densemap_subscript_bug
LoopRotate: Fix a by-reference map bug under reallocation
2025-06-06 12:56:10 -07:00
eeckstein
a33ff9879a Merge pull request #81969 from eeckstein/temp-lvalue-elimination
Optimizer: improve TempLValueOpt
2025-06-06 15:30:29 +02:00
Doug Gregor
67b55566fe Merge pull request #82005 from DougGregor/generalize-checked-cast-options
[SIL] Generalize CastingIsolatedConformances to CheckedCastInstOptions
2025-06-05 08:43:09 -07:00
Erik Eckstein
2b9b2d243c Optimizer: improve TempLValueOpt
* re-implement the pass in Swift
* support alloc_stack live ranges which span multiple basic blocks
* support `load`-`store` pairs copying from the alloc_stack (in addition to `copy_addr`)

These improvements help to reduce temporary stack allocations, especially for InlineArrays.
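
A hedged example of code that benefits (assuming the SE-0453 InlineArray type; not taken from the patch):

```swift
// Passing an InlineArray by value can introduce a temporary stack allocation
// plus a copy in SIL; the improved pass is better at eliminating such copies.
func sum(_ values: InlineArray<4, Int>) -> Int {
  var total = 0
  for i in 0..<values.count {
    total += values[i]
  }
  return total
}

func makeAndSum() -> Int {
  let values: InlineArray<4, Int> = [1, 2, 3, 4]
  return sum(values)
}
```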

rdar://151606382
2025-06-05 06:45:18 +02:00
Doug Gregor
bc4cf1236b [SIL] Generalize CastingIsolatedConformances to CheckedCastInstOptions
We are going to need to add more flags to the various checked cast
instructions. Generalize the CastingIsolatedConformances bit in all of
these SIL instructions to an "options" struct that's easier to extend.
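
A minimal sketch of the general pattern (hypothetical names, not the actual SIL data structures):

```swift
// Replacing a single flag with an options struct keeps call sites stable
// as new flags are added.
struct CheckedCastOptions {
  var prohibitsIsolatedConformances = false
  // Future flags can be added here without changing every call site.
}

func emitCheckedCast(options: CheckedCastOptions = CheckedCastOptions()) {
  if options.prohibitsIsolatedConformances {
    // ... reject isolated conformances found during the cast ...
  }
}
```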

Precursor to rdar://152335805.
2025-06-04 17:12:28 -07:00