Downgrade to a warning until the next language mode. This is
necessary since we previously missed coercing macro arguments to
parameter types, resulting in cases where closure arguments weren't
being treated as `async` when they should have been.
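As a rough illustration of the coercion at issue (a minimal sketch using an ordinary call in place of a macro expansion, since the macro itself is not shown here):
```swift
func run(_ body: () async -> Void) async {
  await body()
}

func caller() async {
  // On its own, this closure literal would be inferred as synchronous; it is
  // the coercion to the parameter type `() async -> Void` that makes it be
  // treated as `async`.
  await run { print("hello") }
}
```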
rdar://149328745
This memory is part of the conformance cache concurrent hash map, so
when we clear the conformance cache, record each of the allocated
pointers within the concurrent map's free list. This way, it'll be
freed with the rest of the concurrent map when it's safe to do so.
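A minimal sketch of the deferred-free pattern, with hypothetical names (the real implementation lives in the runtime's concurrent hash map, not in Swift):
```swift
final class ConformanceCacheSketch {
  private var entries: [UnsafeMutableRawPointer] = []
  private var freeList: [UnsafeMutableRawPointer] = []

  func clear() {
    // Concurrent readers may still hold these pointers, so record them in
    // the free list instead of freeing them immediately.
    freeList.append(contentsOf: entries)
    entries.removeAll()
  }

  func deallocateFreeList() {
    // Called only once no reader can still observe the memory, alongside
    // the rest of the concurrent map's deferred deallocations.
    for pointer in freeList { pointer.deallocate() }
    freeList.removeAll()
  }
}
```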
(cherry picked from commit 885f829a63)
Don't bind references to storage to use (new ABI) coroutine accessors
unless they're guaranteed to be available. For example, when building
against a resilient module that has coroutine accessors, they can only
be used if the deployment target is >= the version of Swift that
includes the feature.
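For illustration, the shape of the situation, assuming the experimental `read`/`modify` accessor spelling (hypothetical module contents):
```swift
// In a resilient module built with coroutine accessors:
public struct Box {
  private var _value = 0
  public var value: Int {
    read { yield _value }       // new-ABI coroutine accessor
    modify { yield &_value }
  }
}

// A client may bind references to `Box.value` through these accessors only
// when its deployment target guarantees a runtime that includes the feature.
```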
rdar://148783895
Several callers of `AbstractStorageDecl::getAccessStrategy` only cared
about whether the access would be via physical storage. Before
adding more arguments to `getAccessStrategy` for which such callers
would have to pass a sentinel value, add a convenience method for this.
It has been decided to split the attribute into `@concurrent` and
`nonisolated(nonsending)`. Adjusting the diagnostics to accept both
spellings makes the transition easier.
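For reference, a minimal sketch of the two spellings after the split:
```swift
// Runs on the caller's actor rather than hopping to the global executor.
nonisolated(nonsending) func load() async -> Int { 1 }

// Always switches to the concurrent (global) executor.
@concurrent func compute() async -> Int { 2 }
```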
* [SUA][IRGen] Add stub for swift_coroFrameAlloc that weakly links against the runtime function
This commit modifies IRGen to emit a stub function `__swift_coroFrameAllocStub` instead of a
direct call to the newly introduced runtime function `swift_coroFrameAlloc`. The stub checks
whether the runtime has the symbol `swift_coroFrameAlloc` and dispatches to it if it exists,
falling back to `malloc` otherwise. This makes it possible to back-deploy the feature to
older OS targets.
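The stub's dispatch logic is roughly the following, sketched here in Swift with `dlsym` purely for illustration; the emitted stub is IR that null-checks the weakly linked symbol, and the runtime function's signature is simplified here:
```swift
import Darwin

// Assumed, simplified signature for illustration only.
typealias CoroFrameAlloc = @convention(c) (Int) -> UnsafeMutableRawPointer?

func coroFrameAllocStub(_ size: Int) -> UnsafeMutableRawPointer? {
  // If the OS's Swift runtime exports swift_coroFrameAlloc, dispatch to it...
  if let symbol = dlsym(dlopen(nil, RTLD_NOW), "swift_coroFrameAlloc") {
    return unsafeBitCast(symbol, to: CoroFrameAlloc.self)(size)
  }
  // ...otherwise fall back to plain malloc on older OS targets.
  return malloc(size)
}
```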
rdar://145239850
(cherry picked from commit 5e2f20b2d8)
Specifically, we only do this if the base is a `let`, or if it is a `var` that is
not captured by reference.
rdar://149019222
(cherry picked from commit 23b6937cbc)
I am doing this so that I can mark requires as being on a mutable non-Sendable
base accessed from a Sendable value.
I also took this as an opportunity to shrink PartitionOp from 40 bytes down to
24 bytes.
(cherry picked from commit a045c9880a)
Previously, when we saw any Sendable type and attempted to look up an underlying
tracked value, we just bailed. This caused an issue in situations like the
following where we need to emit an error:
```swift
func test() {
  var x = 5
  Task.detached { x += 1 }
  print(x)
}
```
The problem with the above example is that, despite the value in x being Sendable,
'x' actually lives in a non-Sendable box. We are passing that non-Sendable box into
the detached task by reference, causing a race against the read from the
non-Sendable box later in the function. In SE-0414, this is explicitly banned in
the section called "Accessing Sendable fields of non-Sendable types after weak
transferring". In this example, the box is the non-Sendable type and the value
stored in the box is the Sendable field.
To properly represent this, we need to change how the underlying object part of
our layering returns underlying objects and vends TrackableValues to the actual
analysis for addresses. NOTE: We leave the current behavior alone for SIL
objects.
By doing this, in situations like the above, despite having a Sendable value (the
integer), we are able to ensure that we require that the non-Sendable box
containing the integer is not used after we have sent it into the other Task,
despite us not actually using the box directly.
Below I describe the representation change in more detail and walk through the
various cases. In this commit, I only change the representation and do not
actually use the new base information. I do that in the next commit to make this
change easier for others to read and review. I made sure that change was NFC by
leaving RegionAnalysis.cpp:727 returning an optional.none if the value found was
a Sendable value.
----
We modify the representation so that, instead of returning just a single
TrackableValue, we return a pair of tracked values: one for the base and one
for the "value". We return this pair in what is labeled a
"TrackableValueLookupResult":
```c++
struct TrackableValueLookupResult {
  TrackableValue value;

  std::optional<TrackableValue> base;

  TrackableValueLookupResult(TrackableValue value)
      : value(value), base() {}

  TrackableValueLookupResult(TrackableValue value, TrackableValue base)
      : value(value), base(base) {}
};
```
In the case where we are accessing a projection path out of a non-Sendable type
whose fields are all non-Sendable, we do not do anything differently than we did
previously. We just walk up from use->def until we find the access path base
which we use as the representative of the leaf of the chain and return
TrackableValueLookupResult(access path base).
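For example, a hypothetical sketch of this case:
```swift
class Klass {}  // non-Sendable
struct Pair {   // non-Sendable aggregate with only non-Sendable fields
  var first: Klass
  var second: Klass
}

func case1(_ pair: inout Pair) {
  // Walking up from use->def on `pair.first` reaches the access path base
  // `pair`, which becomes the representative value; no separate base is
  // returned.
  _ = pair.first
}
```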
In the case where we are accessing a Sendable leaf type projected from a
non-Sendable base, we store the leaf type as our value and return the actual
non-Sendable base in TrackableValueLookupResult. Importantly, this ensures that
even though our Sendable value will be ignored by the rest of the analysis, the
rest of the analysis will ensure that our base is required if our base is a var
that had been escaped into a closure by reference.
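Sketched concretely with hypothetical types:
```swift
class Log {}      // non-Sendable
struct Counter {  // non-Sendable, because it contains `log`
  var count: Int
  var log: Log
}

func case2(_ counter: inout Counter) {
  // `counter.count` is a Sendable leaf projected from the non-Sendable base
  // `counter`: the leaf becomes `value` and `counter` becomes `base`, so the
  // analysis can still require `counter` even though the leaf is ignored.
  _ = counter.count
}
```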
In the case where we are accessing a non-Sendable leaf type projected from a
Sendable type (which may itself have been projected out of additional Sendable
types or a non-Sendable type), we make the last type on the projection path
before the Sendable type the value for the leaf. We return
the eventual access path base as our underlying value base. The logic here is
that since we are dealing with access paths, our access path can only consist of
projections into a recursive value type (e.g. struct/tuple/enum... never a
class). The minute that we hit a pointer or a class, we will no longer be along
the access path since we will be traversing a non-contiguous piece of
memory (consider a class vs the class's storage) and the traversal from use->def
will stop. Thus, we know that there are only two ways that such a value type can
be Sendable while containing a non-Sendable field:
1. The struct can be @unchecked Sendable. In such a case, we want to treat the
leaf field as part of its own disconnected region.
2. The struct can be global actor isolated. In such a case, we want to treat the
leaf field as part of the global actor's region rather than whatever region it
would otherwise belong to.
The reason why we return the eventual access path base as our tracked value base
is that we want to ensure that if the var had been escaped by reference, we can
require that the var not be sent, since we are going to access state from the var
in order to get the global actor guarded struct out of which we are going to
extract our non-Sendable leaf value.
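Concretely, the two sub-cases might look like this (hypothetical types):
```swift
final class Resource {}  // non-Sendable

// 1. An @unchecked Sendable aggregate: the non-Sendable leaf is treated as
//    its own disconnected region.
struct Unchecked: @unchecked Sendable {
  var resource: Resource
}

// 2. A global actor isolated aggregate (implicitly Sendable): the
//    non-Sendable leaf is treated as part of the global actor's region.
@MainActor
struct Guarded {
  var resource: Resource
}
```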
(cherry picked from commit c846c2279e)
Specifically,
1. UseDefChainVisitor::actorIsolation is dead. I removed it to prevent any
confusion about whether it actually computed isolation. I also removed it from
UnderlyingTrackedValue since that was the only place where we were using it. I
left UnderlyingTrackedValue in place in case I need to add additional state to
it in the future.
2. Now that UseDefChainVisitor is only used to find the base of a value (while
not looking through Sendable addresses), I renamed it to
AddressBaseComputingVisitor.
3. I renamed UseDefChainVisitor::isMerge to isProjectedFromAggregate. That is
actually what we use it for. I also added a comment explaining what it is used
for.
(cherry picked from commit 98984a2678)
The IsolatedConformances feature graduates to a normal, supported feature.
Remove all of the experimental-feature flags on test cases and such.
The InferIsolatedConformances feature moves to an upcoming feature for
Swift 7. This should become an adoptable feature, with migration adding
"nonisolated" where needed.
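A hedged sketch of the adoption pattern, assuming the SE-0470 spelling for suppressing an inferred isolated conformance:
```swift
// With InferIsolatedConformances, the conformance of a @MainActor type is
// inferred to be main-actor isolated; writing `nonisolated` preserves the
// old meaning.
@MainActor
struct Model: nonisolated Equatable {
  var id: Int

  // The witness itself must be nonisolated to satisfy the conformance.
  nonisolated static func == (lhs: Model, rhs: Model) -> Bool {
    lhs.id == rhs.id
  }
}
```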
(cherry picked from commit 3380331e7e)
This is going to need a proper implementation in the requirement
machine. For the moment, provide a slightly-less-broken implementation
but leave a test case where we incorrectly accept racy code.
(cherry picked from commit 92774e0a3c)
It seems that during refactorings of child cancellation we somehow missed
also cancelling the group itself. We apparently did not have good test
coverage of addTaskUnlessCancelled, and thus this slipped through.
This adds a regression test for addTaskUnlessCancelled and fixes how we
handle the cancellation effect in TaskStatus.
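A minimal sketch of the kind of regression test described (not the actual test added):
```swift
func testAddTaskUnlessCancelledOnCancelledGroup() async {
  await withTaskGroup(of: Void.self) { group in
    group.cancelAll()
    // On an already-cancelled group, the child must not be added or run.
    let added = group.addTaskUnlessCancelled {
      assertionFailure("child must not run in a cancelled group")
    }
    assert(added == false)
  }
}
```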
resolves #80789
resolves rdar://149177600
Because the size of `TaskAllocator` is not a multiple of the machine word
size on 64-bit platforms, I think we end up with padding before the
`TaskLocal::Storage` following it, which makes the `PrivateStorage`
structure larger than the calculation in `ABI/Task.h`.
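A small illustration of the effect with stand-in types (hypothetical layout, not the real runtime structures):
```swift
struct AllocatorLike {  // size 13: not a multiple of the 8-byte machine word
  var a: UInt64
  var b: UInt32
  var c: UInt8
}

struct StorageLike {  // 8-byte aligned
  var pointer: UnsafeRawPointer?
}

struct PrivateStorageLike {
  var allocator: AllocatorLike  // occupies bytes 0..<13
  var storage: StorageLike      // padded up to offset 16
}

// Prints 24, not the 13 + 8 = 21 that a naive byte count would predict.
print(MemoryLayout<PrivateStorageLike>.size)
```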
rdar://149067144