In the context of coroutine/yielding accessors, "unwind" means
that the second half of the coroutine (the code _after_ the `yield`
statement) does not run if an exception is thrown during the access.
Unwinding was the default behavior for the legacy `_read`/`_modify`
coroutine accessors.
For the new `yielding borrow`/`yielding mutate` accessors, unwinding
was optional behind a separate feature flag.
But the accepted version of SE-0474 dictates that unwinding is always
_disabled_ for the new yielding accessors. That is, the new yielding
accessors always run to completion, regardless of whether an exception
is thrown within the access scope. This was deemed essential so that
authors of data structures can guarantee consistency.
This PR permanently disables unwinding behavior for the new accessors.
The feature flag still exists but has no effect.
A handful of tests that verified the unwinding behavior have been
edited to ensure that unwinding does _not_ happen even when the feature
flag is specified.
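To illustrate the consistency concern, here is a minimal sketch (the type and property names are hypothetical) using the legacy `_modify` spelling, which compiles with current toolchains; the new `yielding mutate` spelling has the same shape. The second half of the coroutine restores an invariant, which is why running to completion matters:

```swift
struct SortedBag {
    private var elements: [Int] = []

    // Legacy `_modify` spelling shown; SE-0474's `yielding mutate`
    // has the same shape but is guaranteed to run to completion.
    var contents: [Int] {
        _read { yield elements }
        _modify {
            yield &elements
            // Second half of the coroutine: restore the invariant.
            // With unwinding disabled, this runs even if the caller
            // throws during the access.
            elements.sort()
        }
    }
}

var bag = SortedBag()
bag.contents = [3, 1, 2]
print(bag.contents)  // [1, 2, 3] -- invariant restored after the access
```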
This verifies that `yielding borrow` and `yielding mutate` accessors run
the code both before and after the `yield` in each of the following cases:
* Simple access of a struct
* As above, but with an exception thrown during the access
* As above, but through a protocol existential
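The throwing case can be sketched as follows, again with the legacy `_read`/`_modify` spellings (names are hypothetical); the log records which halves of each coroutine ran:

```swift
var log: [String] = []

struct Box {
    private var storage = 0
    // Legacy spellings shown; the new `yielding borrow`/`yielding mutate`
    // accessors have the same shape but never unwind.
    var value: Int {
        _read {
            log.append("before borrow")
            yield storage
            log.append("after borrow")
        }
        _modify {
            log.append("before mutate")
            yield &storage
            log.append("after mutate")
        }
    }
}

enum Failure: Error { case boom }
func bumpAndThrow(_ x: inout Int) throws {
    x += 1  // mutates the yielded storage in place before throwing
    throw Failure.boom
}

var b = Box()
b.value = 10                               // both halves of `_modify` run
do { try bumpAndThrow(&b.value) } catch {} // legacy: second half unwinds
print(b.value, log)
```

Note that the in-place mutation persists even when the caller throws, because the access yields the storage directly. With the legacy spelling, the final "after mutate" is skipped on the throwing access (unwinding); the new yielding accessors would record it, since they always run to completion.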
Make sure an `end_borrow` is emitted on the `dereference_borrow` so that
SILGenCleanup recognizes this pattern and lowers it to a `return_borrow`
as it does for returned `load_borrow`s. Add support to the Swift implementation
of `BorrowingInstruction` for `DereferenceBorrow`.
This builtin only needs to be supported as the return value of a borrow accessor,
so special-case its handling as part of a storage expression in SILGenLValue.
Borrow's layout is trivial loadable when the layout of the referent is known (even
if the referent is itself nontrivial or address-only). If the referent's layout is
abstracted or resilient, then Borrow's layout is trivial address-only.
Every time DarwinSDKInfo reads a new key out of SDKSettings, a boatload of test SDKSettings files needs to be updated across several repositories, forks, and branches. Carefully updating those with real values, so that the tests properly regression-test older SDKs, is tedious but important. Being careless invites the scenario where DarwinSDKInfo starts reading a new key out of SDKSettings and assumes it's always available everywhere, when in reality it was only added a few releases ago and will break with older SDKs. If the test SDKSettings files continue to be updated ad hoc, it will be all too easy to copy/paste a default value everywhere, and then clients will see incorrect behavior with real SDKs, or even compiler crashes if the key is read unconditionally.

Preemptively add all of the possibly compiler-relevant keys from the real SDKs to the test SDKSettings files so that the test files are an accurate representation and shouldn't need to be touched in the future. Where the test SDKSettings have intentionally doctored data, add a Comments key explaining what is changed from the real SDK, and alter the SDK name with a tag indicating the change.
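For instance, a doctored test SDKSettings file might carry a tag in its name and a Comments key explaining the change (a sketch only; the specific key values here are invented, not taken from a real SDK):

```json
{
    "CanonicalName": "macosx15.0.test-no-version-map",
    "DisplayName": "macOS 15.0",
    "Version": "15.0",
    "Comments": "VersionMap removed relative to the real macOS 15.0 SDK to exercise the compiler's fallback path."
}
```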
rdar://168700857
These two new invariants eliminate corner cases that caused bugs when optimizations didn't handle them.
They will also significantly simplify lifetime completion.
The implementation basically consists of these changes:
* add a flag in SILFunction which tells optimizations whether they need to take care of infinite loops
* add a utility to break infinite loops
* let all optimizations remove unreachable blocks and break infinite loops if necessary
* add verification to check the new SIL invariants
The new `breakInfiniteLoops` utility breaks infinite loops in the control flow by inserting an "artificial" loop exit to a new dead-end block with an `unreachable`.
It inserts a `cond_br` with a `builtin "infinite_loop_true_condition"`:
```
bb0:
br bb1
bb1:
br bb1 // back-edge branch
```
->
```
bb0:
br bb1
bb1:
%1 = builtin "infinite_loop_true_condition"() // always true, but the compiler doesn't know
cond_br %1, bb2, bb3
bb2: // new back-edge block
br bb1
bb3: // new dead-end block
unreachable
```
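The same transformation can be sketched on a toy CFG (all names here are hypothetical; the real utility operates on SIL and guards the new edge with the opaque always-true builtin shown above). For the sketch, a block "has an exit" once it can reach a block marked as an exit, and the new dead-end block is marked as one to model the `unreachable` edge out of the loop:

```swift
// A block is just its successor list plus an exit flag.
struct Block { var successors: [Int]; var isExit: Bool }

// Backwards reachability: which blocks can reach some exit block?
func blocksReachingExit(_ cfg: [Block]) -> Set<Int> {
    var preds = Array(repeating: [Int](), count: cfg.count)
    for (i, b) in cfg.enumerated() {
        for s in b.successors { preds[s].append(i) }
    }
    var reaching = Set(cfg.indices.filter { cfg[$0].isExit })
    var worklist = Array(reaching)
    while let b = worklist.popLast() {
        for p in preds[b] where reaching.insert(p).inserted {
            worklist.append(p)
        }
    }
    return reaching
}

// Give every infinite loop an artificial exit edge to a new
// dead-end block, then recompute until every block can get out.
func breakInfiniteLoops(_ cfg: inout [Block]) {
    while true {
        let reaching = blocksReachingExit(cfg)
        guard let victim = cfg.indices.last(where: { !reaching.contains($0) })
        else { return }
        let deadEnd = cfg.count
        cfg.append(Block(successors: [], isExit: true))  // models `unreachable`
        cfg[victim].successors.append(deadEnd)           // artificial loop exit
    }
}

// Demo: bb0 -> bb1, bb1 -> bb1 is an infinite loop.
var cfg = [
    Block(successors: [1], isExit: false),  // bb0: br bb1
    Block(successors: [1], isExit: false),  // bb1: br bb1
]
breakInfiniteLoops(&cfg)
print(cfg.map(\.successors))  // [[1], [1, 2], []]
```

Picking the loop's back edge (rather than any unreachable-from-exit block) is what the real utility does; the sketch only guarantees the resulting invariant, namely that every block can reach an exit.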
We want to rely on the presence of `yielding borrow` with the introduction
of the CoroutineAccessor feature. Emit it so that, when the deployment target
is the same as (or higher than) the release that introduced the CoroutineAccessor
feature, we can rely on the presence of `yielding borrow` (assuming that the
source code was recompiled with a toolchain that was current at the time the
feature was introduced).
The reason I am doing this is that I want to be careful and make sure that we
can distinguish between `weak var`/`weak let` var decls and real captures.
In the caller, we do not actually represent the capture's var decl information
in a meaningful way since the actual var decl usage is only in the closure.
After inlining, we get that var decl information from the debug_value of the
argument. So there isn't any reason not to do it and it will simplify the other
work I am doing.
We've had several bugs lately where SILGen produces a diagnostic,
but the resulting invalid SIL causes crashes further down in the
mandatory pipeline. To prevent this from happening, just stop after
SILGen if diagnostics were emitted, even if lazy type checking is
disabled, because the same rationale applies in either case.
We currently appear to miscompile the erased-isolation existential codegen
when the isolated capture is unowned. To patch this, add new assertions that
only supported types are used when creating the instruction in the builder.
To help reduce cognitive burden for future Swift compiler engineers,
let's use the same terminology for coroutine accessors in SIL dumps
as we use in the surface language and inside the compiler.
This really just changes two lines in SILPrinter.cpp and updates
a lot of tests. I've also copied one test to preserve the old syntax
to make sure that SIL parsing still accepts it. That should hopefully
prevent unfortunate round-tripping issues while these changes settle.