Commit Graph

176 Commits

Author SHA1 Message Date
Michael Gottesman
114789707c [moveOnly] Implement a semi-generic _move function that can be used on non-generic, non-existential values.
This patch introduces a new stdlib function called _move:

```Swift
  @_alwaysEmitIntoClient
  @_transparent
  @_semantics("lifetimemanagement.move")
  public func _move<T>(_ value: __owned T) -> T {
  #if $ExperimentalMoveOnly
    Builtin.move(value)
  #else
    value
  #endif
  }
```

This is a first attempt at creating a "move" function for Swift, albeit a
skeletal one since we do not yet perform the "no use after move" analysis. But
it at least gets the skeleton into place so we can build the analysis on top of
it and churn the tree in a manageable way. Thus, in its current incarnation, all
it does is take an __owned +1 parameter and return it after moving it through
Builtin.move.

Given that we want to use an OSSA-based analysis for "no use after move" and we
do not yet have opaque values, we cannot support moving generic values since
they are address only. This has stymied us in the past from creating this
function. With a bit of cleverness, the implementation in this PR is able to
support _move as a generic function over all concrete types.

The trick is that when we transparent-inline _move (to get the builtin), we
perform one level of specialization, causing the inlined Builtin.move to be of a
loadable type. If, after transparent inlining, Builtin.move still ends up in a
context where its operand is address only, we emit a diagnostic telling the user
that they applied _move to a generic or existential value and that this is not
yet supported.
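
As an illustration of the concrete case this enables (a hedged sketch; `concreteMove` is an illustrative name, not part of the patch):

```Swift
final class Klass {}

func concreteMove(_ k: __owned Klass) -> Klass {
    // Mandatory inlining specializes the transparent _move for Klass, so the
    // inlined Builtin.move sees a loadable type and no diagnostic is emitted.
    return _move(k)
}
```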

The reason why we are taking this approach is that we wish to use this to
implement a new (as yet unwritten) diagnostic pass that verifies that _move
(even for non-trivial copyable values) ends the lifetime of the value. This will
ensure that one can write the following code to reliably end the lifetime of a
let binding in Swift:

```Swift
  let x = Klass()
  let _ = _move(x)
  // hypotheticalUse(x)
```

Without the diagnostic pass, if one were to write a hypothetical use of x after
the _move, the compiler would copy x for hypotheticalUse(x), meaning the
lifetime of x would not end at the _move, a contradiction.

To implement this diagnostic pass, we want to use the OSSA infrastructure, but
that only works on objects. So how do we square this circle? By taking advantage
of the mandatory SIL optimizer pipeline. Specifically, we take advantage of the
following:

1. Mandatory Inlining and Predictable Dead Allocation Elimination run before any
   of the move-only diagnostic passes.

2. Mandatory Inlining can specialize a callee one level when it inlines code.
   One can take advantage of this, even at -Onone, to monomorphize code.

and then note that _move is such a simple function that Predictable Dead
Allocation Elimination can, without issue, eliminate the extra alloc_stacks that
appear in the caller after inlining. So (as the tests show) we get SIL that, for
concrete types, looks exactly as if we had run a move_value on that specific
type as an object, since we promote away the loads/stores in favor of object
operations when we eliminate the allocation.

To prevent any issue with this being used in a context where multiple
specializations may occur, I made the inliner emit a diagnostic if it inlines
_move into a function that applies it to an address-only value. The diagnostic
is emitted at the source location of the function call, so it is easy to find,
e.g.:

```
func addressOnlyMove<T>(t: T) -> T {
    _move(t) // expected-error {{move() used on a generic or existential value}}
}

moveonly_builtin_generic_failure.swift:12:5: error: move() used on a generic or existential value
    _move(t)
    ^
```

To eliminate any potential ABI impact in case someone calls _move in a context
where the transparent inliner will not inline it, I taught IRGen that
Builtin.move is equivalent to a take from src -> dst and marked _move as
always-emit-into-client (AEIC). I also took advantage of the feature flag I
added in the previous commit to prevent any cond_fails from exposing
Builtin.move in the stdlib: if one does not pass the flag
-enable-experimental-move-only, the function just returns the value without
calling Builtin.move, so we are safe.

rdar://83957028
2021-10-27 19:36:49 -07:00
Nate Chandler
5d85ad393c [SILOpt] Add utility to find guaranteed roots.
The new utility looks through ownership forwarding instructions to find
the original values with guaranteed ownership.
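A conceptual sketch of the idea in Swift (the actual utility lives in the SIL optimizer and its API differs; `Ownership`, `Value`, and `findGuaranteedRoots` are illustrative names):

```Swift
enum Ownership { case owned, guaranteed, none }

final class Value {
    let ownership: Ownership
    let forwardedOperands: [Value]   // non-empty when this value merely forwards ownership
    init(ownership: Ownership, forwardedOperands: [Value] = []) {
        self.ownership = ownership
        self.forwardedOperands = forwardedOperands
    }
}

// Walk through ownership-forwarding values (struct/tuple extracts, casts, ...)
// back to the values that introduce the guaranteed lifetime themselves.
func findGuaranteedRoots(of value: Value) -> [Value] {
    guard case .guaranteed = value.ownership else { return [] }
    if value.forwardedOperands.isEmpty { return [value] }
    return value.forwardedOperands.flatMap { findGuaranteedRoots(of: $0) }
}
```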
2021-10-27 10:44:18 -07:00
Jonathan Grynspan
f1bf7badba [SE-0322] Temporary uninitialized buffers
Adds two new IRGen-level builtins (one for allocating, the other for deallocating), a stdlib shim function for enhanced stack-promotion heuristics, and the proposed public stdlib functions.
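For context, the public functions proposed by SE-0322 are the `withUnsafeTemporaryAllocation` family; a minimal usage sketch (values are illustrative):

```Swift
// Allocate space for 16 integers that the compiler may promote to the stack;
// the buffer is only valid inside the closure and is deallocated on exit.
let total = withUnsafeTemporaryAllocation(of: Int.self, capacity: 16) { buffer -> Int in
    buffer.initialize(repeating: 0)            // the memory starts uninitialized
    for i in buffer.indices { buffer[i] = i }
    return buffer.reduce(0, +)
}
print(total)  // 120
```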
2021-10-25 11:20:10 -04:00
Erik Eckstein
7849f09e52 SIL: remove the unused alloc_value_buffer, project_value_buffer and dealloc_value_buffer instructions.
Those instructions were used for the materializeForSet implementation, which was replaced by modify coroutines.
2021-10-07 07:41:54 +02:00
Andrew Trick
47791631c0 Change findOwnershipReferenceRoot to stop at phis.
This API should not handle phis because a phi argument's ownership
is independent of the phi itself. The client assumes that the
returned root is in the same lifetime/scope as the access.
2021-10-06 09:18:14 -07:00
Andrew Trick
e9e3ce7ea5 Add findOwnershipReferenceAggregate().
Simple convenience on top of findOwnershipReferenceRoot to handle
guaranteed values forwarded through struct/tuple extract.

Note: eventually this should be handled by a ReferenceRoot abstraction
that stores the component path.
2021-10-06 09:18:14 -07:00
Andrew Trick
20ee76e07a Add AccessBase::hasLocalOwnershipLifetime()
Replaces AccessStorage::isGuaranteedForFunction().

OSSA utilities need to determine whether two addresses with the same
AccessStorage are directly substitutable without "fixing-up" the OSSA
lifetime. Checking whether the access base is within a local borrow
scope is the first step.
2021-10-06 09:18:14 -07:00
Andrew Trick
150208395e Implement AccessBase::compute(SILValue sourceAddress)
Missing implementation from the previous commit that introduced
AccessBase. Now we can directly ask for an AccessBase value.
2021-10-06 09:18:14 -07:00
Andrew Trick
e85228491d Rename AccessedStorage to AccessStorage
to be consistent with AccessPath and AccessBase.

Otherwise, the arbitrary name difference adds constant friction.
2021-09-21 23:18:24 -07:00
Andrew Trick
354b5c4e17 Add AccessBase abstraction as OSSA helper
Split AccessedStorage functionality in two pieces. This doesn't add
any new logic, it just allows utilities to make queries on the access
base. This is important for OSSA where we often need to find the
borrow scope or ownership root that contains an access.
2021-09-21 19:54:24 -07:00
Andrew Trick
c1e6096e44 Fix findOwnershipReferenceRoot to stop at BeginBorrow.
Useful for finding the borrow scope enclosing an address.
2021-09-21 19:54:24 -07:00
Andrew Trick
19cff2e6eb Add AccessedStorage::isGuaranteedForFunction
to quickly bypass all liveness checking in many cases.
2021-09-21 09:43:05 -07:00
Meghana Gupta
4f2d4d5635 Fix LoadBorrowImmutabilityAnalysis in cases it raises false errors on destroy_value
We should only flag an error if there is a destroy_value on the first owned root of the access path.
2021-09-09 17:05:58 -07:00
Min-Yih Hsu
343d842394 [SIL][DebugInfo] PATCH 3/3: Deprecate debug_value_addr SIL instruction
This patch removes all references to the DebugValueAddrInst class and
the debug_value_addr instruction in textual SIL files.
2021-08-31 12:01:04 -07:00
Min-Yih Hsu
e1023bc323 [DebugInfo] PATCH 2/3: Duplicate logic regarding debug_value_addr
This patch replaces all in-memory DebugValueAddrInst objects with
DebugValueInst + op_deref, and duplicates the logic that handles
DebugValueAddrInst for the latter. All related checks in the tests
have been updated as well.

Note that this patch neither removes the DebugValueAddrInst class nor
removes the `debug_value_addr` syntax in the test inputs.
2021-08-31 11:57:56 -07:00
Konrad `ktoso` Malawski
ac6bee45db [Distributed] SIL invocation of initialize remote done 2021-08-12 14:09:01 +09:00
Andrew Trick
1f64871b31 MemAccessUtils: unify Box/Class/Tail storage for consistency and usability
It was originally convenient for exclusivity optimization to treat
boxes specially. We wanted to know that the 'Box' kind was always
uniquely identified. But that's not really important. And now that
AccessedStorage is being used more generally, the inconsistency is
problematic.

A consistent model is also much easier to understand and explain.

This also makes the implementation of the utility simpler and more powerful.

Functional changes:

isRCIdentical will look through mark_dependence and mark_uninitialized.

findReferenceRoot is used consistently everywhere increasing analysis precision.
2021-08-09 11:43:40 -07:00
Andrew Trick
9984b81de7 MemAccessUtils cleanup: rename hasIdenticalBase
to hasIdenticalStorage.

Be precise in preparation for unifying and clarifying the access base model.
2021-08-07 15:26:46 -07:00
Meghana Gupta
64095a6d6b Fix use of MemAccessUtils in LICM (#38784)
AccessPathWithBase::compute can return a valid access path with an unidentified base.
In such cases, we cannot LICM stores, because there is no base address to check for invariance.
2021-08-06 18:34:55 -07:00
Andrew Trick
c323c6298b Fix an assert in collectUses when handling unknown index offsets.
AccessPathDefUseTraversal accumulates the offsets that it has seen on
the def-use walk while visiting index_addr and casts.

This is quite tricky because either the AccessPath being matched or
the def-use chain could contain unknown offsets. We need to correctly
classify all cases as either an exact, inner, or overlapping match.

For SIL like this

 %91 = integer_literal $Builtin.Int64, 0
 %113 = builtin "truncOrBitCast_Int64_Word"(%91 : $Builtin.Int64) : $Builtin.Word
 %115 = index_addr %114 : $*MyStruct, %113 : $Builtin.Word
 %119 = struct_element_addr %115 : $*MyStruct, #MyStruct.klass

We have Path: (@Unknown,#0)

There are two issues

(1) When AccessPathDefUseTraversal::checkAndUpdateOffset visits the
struct_element_addr, it fails to pop the unknown offset from the
expected path but continues trying to handle the struct_element_addr
and hits an assert. This PR fixes this.

(2) If the builtin "truncOrBitCast_Int64_Word" comes from a literal, we
should not treat it as an unknown offset (given that the literal is
within the size of a word). Added a test for this case; we should be able
to identify the access path accurately there.
2021-08-04 17:38:51 -07:00
Joe Groff
439edbce1f Handle multiple awaits and suspend-on-exit for async let tasks.
Change the code generation patterns for `async let` bindings to use an ABI based on the following
functions:

- `swift_asyncLet_begin`, which starts an `async let` child task, but which additionally
  now associates the `async let` with a caller-owned buffer to receive the result of the task.
  This is intended to allow the task to emplace its result in caller-owned memory, allowing the
  child task to be deallocated after completion without invalidating the result buffer.
- `swift_asyncLet_get[_throwing]`, which replaces `swift_asyncLet_wait[_throwing]`. Instead of
  returning a copy of the value, this entry point concerns itself with populating the local buffer.
  If the buffer hasn't been populated, then it awaits completion of the task and emplaces the
  result in the buffer; otherwise, it simply returns. The caller can then read the result out of
  its owned memory. These entry points are intended to be used before every read from the
  `async let` binding, after which point the local buffer is guaranteed to contain an initialized
  value.
- `swift_asyncLet_finish`, which replaces `swift_asyncLet_end`. Unlike `_end`, this variant
  is async and will suspend the parent task after cancelling the child to ensure it finishes
  before cleaning up. The local buffer will also be deinitialized if necessary. This is intended
  to be used on exit from an `async let` scope, to handle cleaning up the local buffer if necessary
  as well as cancelling, awaiting, and deallocating the child task.
- `swift_asyncLet_consume[_throwing]`, which combines `get` and `finish`. This will await completion
  of the task, leaving the result value in the result buffer (or propagating the error, if it
  throws), while destroying and deallocating the child task. This is intended as an optimization
  for reading `async let` variables that are read exactly once by their parent task.

To avoid an epoch break with existing swiftinterfaces and ABI clients, the old builtins and entry
points are kept intact for now, but SILGen now only generates code using the new interface.

This new interface fixes several issues with the old async let codegen, including use-after-free
crashes if the `async let` was never awaited, and the inability to read from an `async let` variable
more than once.
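
A source-level sketch of the second fix (illustrative names); each read of the binding goes through `swift_asyncLet_get` and copies out of the caller-owned buffer:

```Swift
func fetchScore() async -> Int { 42 }

func report() async {
    async let score = fetchScore()
    // Reading the binding more than once is supported: the first await
    // populates the local result buffer, the second simply reads from it.
    let first = await score
    let second = await score
    print(first, second)
}
```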

rdar://77855176
2021-07-22 10:19:31 -07:00
Doug Gregor
e7e922ea77 Introduce a createAsyncTaskInGroup SIL builtin.
Rather than using group task options constructed from the Swift parts
of the _Concurrency library and passed through `createAsyncTask`'s
options, introduce a separate builtin that always takes a group. Move
the responsibility for creating the options structure into IRGen, so
we don't need to expose the TaskGroupTaskOptionRecord type in Swift.
2021-06-24 07:53:18 -07:00
Doug Gregor
76959b1d4f Remove CreateAsyncTaskGroupFuture and swift_task_create_group_future.
We've moved everything over to `CreateAsyncTask` now.
2021-06-24 07:53:18 -07:00
Doug Gregor
a61adace85 Remove CreateAsyncTaskFuture and swift_task_create_future.
We no longer need these entry points.
2021-06-24 07:53:18 -07:00
Doug Gregor
c7edfa3ba9 Centralize non-group task creation on swift_task_create[_f].
Introduce a builtin `createAsyncTask` that maps to `swift_task_create`,
and use that for the non-group task creation operations based on the
task-creation flags. `swift_task_create` and the thin function version
`swift_task_create_f` go through the dynamically-replaceable
`swift_task_create_common`, where all of the task creation logic is
present.

While here, move copying of task locals and the initial scheduling of
the task into `swift_task_create_common`, each enabled by a separate flag.
2021-06-24 07:53:17 -07:00
Erik Eckstein
24799e1526 SIL: defer instruction deletion to the end of a pass run.
When an instruction is "deleted" from the SIL, it is put into the SILModule::scheduledForDeletion list.
The instructions in this list are eventually deleted for real in SILModule::flushDeletedInsts(), which is called by the pass manager after each pass run.
In other words: instruction deletion is deferred to the end of a pass.

This avoids dangling instruction pointers within the run of a pass and in analysis caches.
Note that the analysis invalidation mechanism ensures that analysis caches are invalidated before flushDeletedInsts().
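A conceptual sketch of the scheme (the real implementation is C++ in SILModule; `Instruction`, `Module`, and `erase` are illustrative stand-ins):

```Swift
final class Instruction {
    func removeFromParent() { /* unlink from the basic block's instruction list */ }
}

final class Module {
    private var scheduledForDeletion: [Instruction] = []

    // "Deleting" an instruction only unlinks and queues it, so pointers held by
    // the running pass and by analysis caches never dangle mid-pass.
    func erase(_ inst: Instruction) {
        inst.removeFromParent()
        scheduledForDeletion.append(inst)
    }

    // Called by the pass manager after each pass run, once analyses have been
    // invalidated, to free the queued instructions for real.
    func flushDeletedInsts() {
        scheduledForDeletion.removeAll()
    }
}
```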
2021-05-26 21:57:54 +02:00
Alejandro Alonso
444c35cf8d Merge pull request #36953 from Azoy/isType-isDeclType
[NFC] Introduce isDecl and getDeclType
2021-04-20 15:33:00 -04:00
Azoy
9ed732f0ab Introduce isDecl and getDeclType
fix enum logic issue

fix tests

guard against null types
2021-04-20 02:22:16 -04:00
Konrad `ktoso` Malawski
d3c5ebc9b7 [AsyncLet] reimplemented with new ABI and builtins 2021-04-19 10:06:23 +09:00
Konrad `ktoso` Malawski
ba615029c7 [Concurrency] Store child record when async let child task spawned 2021-04-19 10:06:23 +09:00
Doug Gregor
e77a27e8ed [Concurrency] Introduce runtime detection of data races.
Through various means, it is possible for a synchronous actor-isolated
function to escape to another concurrency domain and be called from
outside the actor. The problem existed previously, but has become far
easier to trigger now that `@escaping` closures and local functions
can be actor-isolated.

Introduce runtime detection of such data races, where a synchronous
actor-isolated function ends up being called from the wrong executor.
Do this by emitting an executor check in actor-isolated synchronous
functions, where we query the executor in thread-local storage and
ensure that it is what we expect. If it isn't, the runtime complains.
The runtime's complaints can be controlled with the environment
variable `SWIFT_UNEXPECTED_EXECUTOR_LOG_LEVEL`:

  0 - disable checking
  1 - warn when a data race is detected
  2 - error and abort when a data race is detected

At an implementation level, this introduces a new concurrency runtime
entry point `_checkExpectedExecutor` that checks the given executor
(on which the function should always have been called) against the
executor on which it is called (which is in thread-local storage). There
is a special carve-out here for `@MainActor` code, where we check
against the OS's notion of "main thread" as well, so that `@MainActor`
code can be called via (e.g.) the Dispatch library's
`DispatchQueue.main.async`.

The new SIL instruction `extract_executor` performs the lowering of an
actor down to its executor, which is implicit in the `hop_to_executor`
instruction. Extend the LowerHopToExecutor pass to perform said
lowering.
2021-04-12 15:19:51 -07:00
Erik Eckstein
09755659a1 SIL: remove the sub-classes of MultipleValueInstructionResult
They are not really needed: they don't contain any stored properties, and for isa checks we can check the parent instruction.
2021-04-11 19:14:34 +02:00
John McCall
efeb818161 Clean up the TaskGroup ABI:
- stop storing the parent task in the TaskGroup at the .swift level
- make sure that swift_taskGroup_isCancelled is implied by the parent
  task being cancelled
- make the TaskGroup structs frozen
- make the withTaskGroup functions inlinable
- remove swift_taskGroup_create
- teach IRGen to allocate memory for the task group
- don't deallocate the task group in swift_taskGroup_destroy

To achieve the allocation change, introduce paired create/destroy builtins.

Furthermore, remove the _swiftRetain and _swiftRelease functions and
several calls to them.  Replace them with uses of the appropriate builtins.
I should probably change the builtins to return retained, since they're
working with a managed type, but I'll do that in a separate commit.
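
For context, the user-facing API whose implementation this reshapes is `withTaskGroup`; a minimal usage sketch (using the current spelling of the group API, which has been renamed since this commit):

```Swift
func sumOfSquares(_ values: [Int]) async -> Int {
    await withTaskGroup(of: Int.self, returning: Int.self) { group in
        for value in values {
            group.addTask { value * value }   // child task per element
        }
        var total = 0
        for await square in group {           // collect results as they finish
            total += square
        }
        return total
    }
}
```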
2021-04-09 03:06:31 -04:00
John McCall
57c0c787db Make Builtin.getCurrentExecutor conservatively not readnone.
Fixes a miscompile where the hop_to_executor optimizer would
remove hop_to_executor before a getCurrentExecutor.
2021-04-08 12:57:11 -04:00
Michael Gottesman
059421e26c [mem-access-utils] Allow for getStackInitAllocStackUse to recognize an always take from an unconditional_checked_cast_addr as a destroy_addr 2021-03-29 12:14:14 -07:00
Michael Gottesman
6bb92533f6 [mem-access-utils] Allow for getStackInitAllocStackUse to recognize an always take from a checked_cast_addr_br as a destroy_addr.
Since it is an always take, we know that the original value will always be
invalidated by the checked_cast_addr_br.

This also lets me use this to recognize simple cases of checked casts in
opt-remark-gen.
2021-03-29 12:10:37 -07:00
Michael Gottesman
f6e71d4433 [mem-access-utils] Refactor isSingleInitAllocStack into getSingleInitAllocStackUse and rewrite the former in terms of the latter.
This enabled me to expand the API to return the underlying single init use. I
want this information in OptRemarkGen.
2021-03-29 12:10:37 -07:00
John McCall
98711fd628 Revise the continuation ABI.
The immediate desire is to minimize the set of ABI dependencies
on the layout of an ExecutorRef.  In addition to that, however,
I wanted to generally reduce the code size impact of an unsafe
continuation since it now requires accessing thread-local state,
and I wanted resumption to not have to create unnecessary type
metadata for the value type just to do the initialization.

Therefore, I've introduced a swift_continuation_init function
which handles the default initialization of a continuation
and returns a reference to the current task.  I've also moved
the initialization of the normal continuation result into the
caller (out of the runtime), and I've moved the resumption-side
cmpxchg into the runtime (and prior to the task being enqueued).
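
For context, this is the surface API backed by the continuation ABI; a minimal sketch bridging a callback to async (illustrative names):

```Swift
func loadValue(completion: @escaping (Int) -> Void) {
    completion(7)
}

func loadValue() async -> Int {
    await withUnsafeContinuation { continuation in
        loadValue { value in
            // Resumption stores the result and enqueues the task; per this
            // change, the cmpxchg now happens inside the runtime.
            continuation.resume(returning: value)
        }
    }
}
```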
2021-03-28 12:58:16 -04:00
Joe Groff
d9798c0868 Concurrency: Redo non-_f variants of swift_task_create to accept closures as is.
In their previous form, the non-`_f` variants of these entry points were unused, and IRGen
lowered the `createAsyncTask` builtins to use the `_f` variants with a large amount of caller-side
codegen to manually unpack closure values. Amid all this, it also failed to make anyone responsible
for releasing the closure context after the task completed, causing every task creation to leak.
Redo the `swift_task_create_*` entry points to accept the two words of an async closure value
directly, and unpack the closure to get its invocation entry point and initial context size
inside the runtime. (Also get rid of the non-future `swift_task_create` variant, since it's unused
and it's subtly different in a lot of hairy ways from the future forms. Better to add it later
when it's needed than to have a broken unexercised version now.)
2021-03-08 16:54:19 -08:00
Konrad `ktoso` Malawski
a226259d84 [Concurrency] TaskGroup moves out of AsyncTask, non escaping body 2021-02-22 13:26:27 +09:00
Erik Eckstein
f12ac4c11b SIL: add two utility functions to AccessPath 2021-02-04 07:53:32 +01:00
Robert Widmann
129f6eb90f [NFC] Silence -Wsuggest-override Warnings 2021-02-02 09:22:18 -08:00
Andrew Trick
c6eb9e89f5 [OSSA] Fix AccessPathDefUseTraversal::visitUser; ignore end_borrow.
Ignore end_borrow as a user of the access path. I don't think there's
any value in viewing it as a use of the access path because we really
care about address reads and writes in this context, not object
lifetime extension. Treating it as an access path use clutters the analysis,
and I'm afraid it will inhibit optimization.
2021-01-20 11:37:44 -08:00
John McCall
d874479290 Add builtins to initialize and destroy a default-actor member.
It would be more abstractly correct if this got DI support so
that we destroy the member if the constructor terminates
abnormally, but we can get to that later.
2020-12-10 19:18:53 -05:00
Richard Wei
de2dbe57ed [AutoDiff] Bump-pointer allocate pullback structs in loops. (#34886)
In derivatives of loops, no longer allocate boxes for indirect case payloads. Instead, use a custom pullback context in the runtime which contains a bump-pointer allocator.

When a function contains a differentiated loop, the closure context is a `Builtin.NativeObject`, which contains a `swift::AutoDiffLinearMapContext` and a tail-allocated top-level linear map struct (which represents the linear map struct that was previously directly partial-applied into the pullback). In branching trace enums, the payloads of previously indirect cases will be allocated by `swift::AutoDiffLinearMapContext::allocate` and stored as a `Builtin.RawPointer`.
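
A minimal sketch of the kind of code this affects, assuming a toolchain with differentiable programming enabled (`scaledSum` is an illustrative name):

```Swift
import _Differentiation

@differentiable(reverse)
func scaledSum(_ x: Float) -> Float {
    var result: Float = 0
    for _ in 0..<3 {        // the differentiated loop produces a branching trace
        result += x * x     // whose pullback state is now bump-pointer allocated
    }
    return result
}

let g = gradient(at: Float(2), of: scaledSum)   // d/dx of 3·x² is 6x = 12
```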
2020-11-30 15:49:38 -08:00
Arnold Schwaighofer
b49a882ea1 Add some SIL properties for get/await_async_continuation 2020-11-19 12:38:51 -08:00
Erik Eckstein
71af79a46d SIL: handle hop_to_executor instruction in MemAccessUtils 2020-11-17 16:01:36 +01:00
Andrew Trick
c4abccb405 Merge pull request #34742 from atrick/fix-unchecked-ownership
Specify the unchecked_ownership_conversion instruction
2020-11-16 21:13:11 -08:00
Doug Gregor
069dfad638 [Concurrency] Add Builtin.createAsyncTaskFuture.
This new builtin allows the creation of a "future" task, which calls
down to swift_task_create_future to actually form the task.
2020-11-15 22:37:13 -08:00
Andrew Trick
5329d92374 Clarify a comment in MemAccessUtils. 2020-11-13 16:51:32 -08:00