For now, the accessors have been underscored as `_read` and `_modify`.
I'll prepare an evolution proposal for this feature which should allow
us to remove the underscores or, y'know, rename them to `purple` and
`lettuce`.
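For context, here's a minimal sketch of what the underscored spelling
looks like at the source level (the type and names here are
illustrative):

```swift
struct IntBuffer {
    private var storage: [Int] = [1, 2, 3]

    subscript(index: Int) -> Int {
        // Coroutine accessor that yields the element for reading.
        _read {
            yield storage[index]
        }
        // Coroutine accessor that yields the element for in-place mutation.
        _modify {
            yield &storage[index]
        }
    }
}

var buffer = IntBuffer()
buffer[0] += 1
```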
`_read` accessors do not make any effort yet to avoid copying the
value being yielded. I'll work on it in follow-up patches.
Opaque accesses to properties and subscripts defined with `_modify`
accessors will use an inefficient `materializeForSet` pattern that
materializes the value to a temporary instead of accessing it in-place.
That will be fixed by migrating to `modify` over `materializeForSet`,
which is next up after the `read` optimizations.
SIL ownership verification doesn't pass yet for the test cases here
because of a general fault in SILGen: a borrow can outlive its
borrowed value when the borrow is cleaned up on the general cleanup
stack but the borrowed value is cleaned up on the formal-access stack.
Michael, Andy, and I discussed various ways to fix this, but it seems
clear to me that it's not in any way specific to coroutine accesses.
rdar://35399664
There are two general constructor forms here:
- One took the number of parameter lists, to be filled in later.
Now, this takes a boolean indicating if there is an implicit
'self'.
- The other one took the actual parameter lists and filled them
in right away. This now takes a separate 'self' ParamDecl and
ParameterList.
Instead of storing the number of parameter lists, an
AbstractFunctionDecl now only needs to store whether or not it has a
'self'.
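To illustrate at the source level (a sketch; the type and names are
made up), a method conceptually has the implicit 'self' plus its
declared parameter list, which is why a single bit and a ParamDecl
cover every case:

```swift
struct Counter {
    var value = 0

    // Conceptually there are two parameter lists here: the implicit
    // 'self' and the declared (by:) list.
    mutating func increment(by amount: Int) {
        value += amount   // shorthand for: self.value += amount
    }
}
```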
I've updated most places that construct AbstractFunctionDecls to
properly use these new forms. In the ClangImporter, there is
more code that remains to be untangled, so we continue to build
multiple ParameterLists and unpack them into a ParamDecl and
ParameterList at the last minute.
Introduce a new request kind to capture the computation of the set of
overridden declarations of a given declaration, eliminating the
stateful “setOverriddenDecls()” calls from the type checker.
The other side of #17404. Since we don't want to generate key path
metadata up front for properties and subscripts with no withheld
implementation details, the client should instead generate a key path
component for such a declaration based on its public interface.
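As a concrete, illustrative example (the type is made up) of forming a
key path component purely from a declaration's public interface:

```swift
public struct Point {
    public var x: Double
    public var y: Double
    public init(x: Double, y: Double) { self.x = x; self.y = y }
}

// Formed entirely from Point's public interface; in the common case
// this matches what the defining module itself would have produced.
let xPath = \Point.x
let p = Point(x: 1.0, y: 2.0)
print(p[keyPath: xPath])   // 1.0
```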
This was originally added to support "derived" top-level declarations,
which in practice was just '==' implementations for enums. Now that
those are members, we don't have the notion of derived top-level
declarations anymore, nor is there anything that would normally be a
cross-reference but that we want to force to be serialized
directly.
No functionality change (because this was unused).
Several kinds of declarations can override other declarations, but the
computation and storage for these “overridden” declarations was scattered
across at least three different places, with different resolution paths. Pull them
all together into two bits of LazySemanticInfo in ValueDecl (“have we computed
overrides?” and “are there any overrides?”), with a side table for the
actual list of overrides.
One side effect here is that the AST can now represent multiple overridden
declarations, although only associated type declarations track this
information.
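For example (a sketch; the protocol names are made up), an associated
type in a derived protocol can override same-named associated types
from more than one inherited protocol:

```swift
protocol Container { associatedtype Element }
protocol Enumerable { associatedtype Element }

// 'Element' here overrides both Container.Element and
// Enumerable.Element, which is why the AST may need to record more
// than one overridden declaration.
protocol OrderedContainer: Container, Enumerable {
    associatedtype Element: Equatable
}
```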
Start using LazyResolver::resolveOverriddenDecl() more consistently, unifying
it with the separate path we had for associated type overrides. All of this
is staging for a move to the request-evaluator for overridden declaration
computation.
The storage kind has been replaced with three separate "impl kinds",
one for each of the basic access kinds (read, write, and read/write).
This makes it far easier to mix and match implementations of different
accessors, and to handle subtleties like implementing both a setter
and an independent read/write operation.
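As a sketch of the sort of combination this enables (using the current
underscored spelling; the type and names are illustrative), a property
can pair a plain setter with an independent coroutine for in-place
read/write access:

```swift
struct Wrapper {
    private var storage: [Int] = []

    var values: [Int] {
        get { storage }
        set { storage = newValue }    // plain write path
        _modify { yield &storage }    // independent read/write path
    }
}
```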
AccessStrategy has become a bit more explicit about how exactly the
access should be implemented. For example, the accessor-based kinds
now carry the exact accessor intended to be used. Also, I've shifted
responsibilities slightly between AccessStrategy and AccessSemantics
so that AccessSemantics::Ordinary can be used everywhere except for the
sorts of semantic bypasses that accessor synthesis wants. This requires
knowing the correct DC of the access when computing the access strategy;
the upshot is that SILGenFunction now needs a DC.
Accessor synthesis has been reworked so that only the declarations are
built immediately; body synthesis can be safely delayed out of the main
decl-checking path. This caused a large number of ramifications,
especially for lazy properties, and greatly inflated the size of this
patch. That is... really regrettable. The impetus for changing this
was necessity: I needed to rework accessor synthesis to end its reliance
on distinctions like Stored vs. StoredWithTrivialAccessors, and those
fixes were exposing serious re-entrancy problems, and fixing that... well.
Breaking the fixes apart at this point would be a serious endeavor.
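For reference, lazy properties are the canonical case here: their
backing storage and accessors are entirely compiler-synthesized, so
they are especially sensitive to when accessor bodies get built (the
example is illustrative):

```swift
struct SquareCache {
    // The compiler synthesizes the backing storage and accessors for
    // 'lazy'; with this patch only the accessor declarations are built
    // up front, and their bodies are synthesized later, outside the
    // main decl-checking path.
    lazy var squares: [Int] = (0..<1_000).map { $0 * $0 }
}

var cache = SquareCache()
print(cache.squares[10])   // 100
```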
We sometimes serialized the “isObjC” state, and sometimes not. When we did serialize it,
we rarely used the bit for anything. Serialize it for all declarations where it makes
sense, and consistently call setIsObjC() with the deserialized value.
This is enough to let the test case in rdar://problem/40899824 pass,
and any callers of this function already need to be able to handle a
nullptr result. There's a lot more work to do in this area, but it's
nice to get the simple things working again.
Now that @inlinable is a supported feature, we need to handle cases
where a function is inlinable but it references some type that imports
differently in different Swift versions. To start, handle the case
where a SIL function's type is now invalid and therefore the entire
function can't be imported. This doesn't open up anything interesting
yet, but it's a start.
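For reference, a minimal sketch of the shape of the problem (the names
here are illustrative): an inlinable body is serialized into the
swiftmodule, so a client compiled with a different Swift version must
be able to re-import every type that body references.

```swift
// If any type this body references imported differently in the
// client's Swift version, the serialized body could no longer be
// deserialized as-is.
@inlinable
public func midpoint(of range: Range<Int>) -> Int {
    range.lowerBound + range.count / 2
}
```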
Part of rdar://problem/40899824
Signature optimization is slightly different from (most) other thunks, in that
it takes an existing function and turns it into a thunk, rather than
creating a thunk that calls an existing function. These symbols can be public,
etc., and so need to be handled a bit differently from other kinds of thunks.
If, for whatever reason, a type used in an extension's generic
requirements is missing, just drop the whole extension. This isn't
wonderful recovery, but in practice nothing should be able to use the
extension anyway, since the type in question is missing.
...Okay, that's not quite true; there could, for example, be inlinable
code that references one of these methods. However, that (1) isn't
worse than the behavior for any other inlinable code (which doesn't
yet attempt to recover from missing declarations), and (2) is still a
strict improvement over the current situation, where we will eagerly
abort the compiler trying to load the extension in the first place.
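As an illustrative sketch (the 'Inches' type is hypothetical), the
situation is an extension whose generic requirement names a type from
another module:

```swift
// Hypothetical: imagine 'Inches' comes from another module and, in
// some Swift version, can no longer be imported.
public struct Inches { public var value: Double }

// If the type named in the generic requirement is missing, the
// deserializer now drops this whole extension rather than aborting,
// even though inlinable callers of 'totalLength' would still break.
extension Array where Element == Inches {
    @inlinable public var totalLength: Double {
        reduce(0.0) { $0 + $1.value }
    }
}
```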
rdar://problem/40956460
Previously: "module compiled with Swift 4.1 cannot be imported in Swift
4.1.50" (i.e. following the -swift-version flag)
Now: "module compiled with Swift 4.1 cannot be imported by the Swift
4.2 compiler"
I'm pretty sure this is what I intended to do all along, and I just
messed it up when I originally implemented it. This is especially
important when working with downloadable toolchains, which would say
"module compiled with Swift 4.2 cannot be imported in Swift 4.1.50",
which is not really the problem at all. Now it'll fall back to the
more generic "module file was created by an older version of the
compiler" error.
Client code can make a best effort at emitting a key path that
references a property through its publicly exposed API, which in the
common case will match what the defining module would produce as the
canonical key path component for that declaration. We can reduce the
code size impact of these descriptors by not emitting them when there
is no hidden or possibly-resiliently-changed-in-the-past information
about a storage declaration; instead, the property descriptor symbol
references a sentinel value telling client key paths to use their own
definition of the key path component.
The meaning of this bit changed way back in 3639343c53, but I forgot
to update this fast path. Because this code falls back to the slow
path when things don't work out, there wasn't a correctness issue
here, but it might have been doing more work than necessary.
Introduce a command-line option to visualize the complete set of
request dependencies evaluated by a particular compile action. This
exposes existing visualization facilities to the (-frontend) command
line.
Cross-references are identified by their containing module, with the
assumption that two modules will never have the same name. However, an
overlay has the same name as its underlying Clang module, which means
that there can be two declarations with the same name, the same type,
and the same module name. This is the underlying cause of the
'UIEdgeInsetsZero' problem, but it also affects the CloudKit overlay.
By tracking a bit that just says "this came from Clang", we're able
to resolve otherwise ambiguous cross-references.
(Why didn't we do it this way all along? Because if a declaration
moves from Clang to Swift or vice versa, that would break the
cross-reference. But that's only interesting if the swiftmodule format
is meant to be persistent across changing dependencies, and it looks
like we're moving away from that anyway. It's also a little weird for
SerializedModuleLoader to have special cases for Clang, but this isn't
the first.)
Note that I'm not reverting the UIEdgeInsetsZero workaround here; the
end state will have that coming just from UIKit as originally
described.
rdar://problem/40839486
llvm::Expected/llvm::Error require that the error is not just checked
but explicitly handled. Since we're currently recovering as if nothing
happened, we need to use llvm::consumeError to throw the error info
away.
rdar://problem/40738521
This allows us to filter them out in cases that would otherwise be
ambiguous. The particular prompting situation looks a lot like the
test case: a protocol, plus a forward-declared class with the same
name. (Normally we ignore forward-declared classes, but SourceKit's
module interface generation feature makes dummy ClassDecls for them
instead.)
https://bugs.swift.org/browse/SR-4851
Wire up the request-evaluator with an instance in ASTContext, and
introduce two request kinds: one to retrieve the superclass of a class
declaration, and one to compute the type of an entry in the
inheritance clause.
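As a rough analogy only (written in Swift here; the real evaluator is
the compiler's C++ API, and these request names are made up), a
request names a computation and the evaluator memoizes its result:

```swift
enum TypeCheckRequest: Hashable {
    case superclass(ofClass: String)
    case inheritedType(ofDecl: String, index: Int)
}

final class Evaluator {
    private var cache: [TypeCheckRequest: String] = [:]

    // Return a cached answer if the request was evaluated before;
    // otherwise compute, cache, and return it.
    func evaluate(_ request: TypeCheckRequest,
                  compute: () -> String) -> String {
        if let cached = cache[request] { return cached }
        let result = compute()
        cache[request] = result
        return result
    }
}

let evaluator = Evaluator()
let superclass = evaluator.evaluate(.superclass(ofClass: "Bicycle")) { "Vehicle" }
print(superclass)   // Vehicle
```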
Teach ClassDecl::getSuperclass() to go through the request-evaluator,
centralizing the logic to compute and extract the superclass
type.
Fixes the crasher from rdar://problem/26498438.
Introduce some metaprogramming of accessors and generally prepare
for storing less-structured accessor lists.
NFC except for a change to the serialization format.
This mirrors how a bridging header is processed when compiling source
files: before any of the imports. This is important for LLDB to
recreate the source environment as closely as possible to how the
compiler does it; in the test case being added involving a non-modular
header file, failure to do so resulted in a deserialization
cross-reference crash.
Note that Serialization still sorts imports, which normal resolution
of imports in source does not do. So we're still not consistent. But
this is less important than handling textual includes (bridging
headers) before modular imports.
rdar://problem/40471329
This can only happen in one case today: a module imports a bridging
header, but the header on disk has disappeared, and now we need to
fall back to the (often inadequate) version that's stored inside the
swiftmodule file. Even if the module fails to load, the bridging
header has already been imported, and so anything else that happens
might still emit diagnostics and need that text to be alive, which
means we need to keep the buffer alive too.
Trade a tiny bit of speed in the happy path for a more sensible
result, and make sure to preserve the information about what was being
looked up when a cross-reference turns out to be ambiguous. No
intended functionality change.