Reformatting everything now that we have `llvm` namespaces. I've
separated this from the main commit to help manage merge conflicts and
to make the mega-patch a bit easier to read.
This is phase 1 of switching from llvm::Optional to std::optional in the
next rebranch. llvm::Optional was removed from upstream LLVM, so we need
to migrate off it rather soon. On Darwin, std::optional and llvm::Optional
have the same layout, so we don't need to be as concerned about ABI
beyond name mangling. `llvm::Optional` is only returned from one
function:
```
getStandardTypeSubst(StringRef TypeName,
                     bool allowConcurrencyManglings);
```
It's the return value, so it should not impact the mangling of the
function, and the layout is the same as `std::optional`, so it should be
mostly okay. This function doesn't appear to have any users, and the ABI
was already broken two years ago for concurrency without anyone seeming
to notice, so this should be "okay".
I'm doing the migration incrementally so that folks working on main can
cherry-pick back to the release/5.9 branch. Once 5.9 is done and locked
away, we can go through and finish the replacement. Since `None` and
`Optional` show up in contexts where they are not `llvm::None` and
`llvm::Optional`, I'm preparing the work now by removing the namespace
unwrapping and making the `llvm` namespace explicit. This should make it
fairly mechanical to replace llvm::Optional with std::optional, and
llvm::None with std::nullopt. It's also a change that can be brought onto
the release/5.9 branch with minimal impact. This should be an NFC change.
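As a rough illustration of the phase-1 change, here is a minimal sketch;
the `firstNonEmpty` helper is invented purely for illustration and assumes
the LLVM ADT headers are still available on this branch:
```
#include "llvm/ADT/Optional.h"
#include "llvm/ADT/StringRef.h"

// Phase 1: spell the namespace out instead of relying on a `using`
// declaration, so the later replacement is a mechanical substitution.
llvm::Optional<llvm::StringRef> firstNonEmpty(llvm::StringRef a,
                                              llvm::StringRef b) {
  if (!a.empty())
    return a;
  if (!b.empty())
    return b;
  return llvm::None;
}

// Phase 2 (after the rebranch) then becomes a textual substitution:
//   llvm::Optional -> std::optional
//   llvm::None     -> std::nullopt
```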
`getValue` -> `value`
`getValueOr` -> `value_or`
`hasValue` -> `has_value`
`map` -> `transform`
The old API will be deprecated in the rebranch.
To avoid merge conflicts, use the new API already in the main branch.
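A hedged before/after example of the accessor renames; the `preferredWidth`
helper and `main` are hypothetical and only there to make the snippet
self-contained:
```
#include "llvm/ADT/Optional.h"

// Hypothetical helper, purely for illustration.
static llvm::Optional<unsigned> preferredWidth(bool hasTerminal) {
  if (hasTerminal)
    return 100u;
  return llvm::None;
}

int main() {
  llvm::Optional<unsigned> width = preferredWidth(false);
  // Old spelling (to be deprecated)   New spelling (already on main)
  //   width.hasValue()                  width.has_value()
  //   width.getValue()                  width.value()
  //   width.getValueOr(80)              width.value_or(80)
  //   width.map(f)                      width.transform(f)
  unsigned columns = width.value_or(80);
  return columns == 80 ? 0 : 1;
}
```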
rdar://102362022
These libraries formed a strongly connected component in the CMake build graph. The weakest link I could find was from IDE to FrontendTool and Frontend, which was necessitated by the `CompileInstance` class (https://github.com/apple/swift/pull/40645). I moved a few files out of IDE into a new IDETools library to break the cycle.
Store whether a result is async in the `ContextFreeCodeCompletionResult` and determine whether an async method is used in a sync context when promoting the context free result to a contextual result.
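A minimal sketch of the split, with invented names (`IsAsync`,
`contextualize`); it only illustrates that the async bit is stored in the
cacheable, context-free result while the sync/async mismatch is computed
when the result is contextualized:
```
// Cacheable, context-free part: records that the declaration itself is async.
struct ContextFreeResultSketch {
  bool IsAsync;
};

// Contextual part: computed when promoting the context-free result.
struct ContextualResultSketch {
  bool NotRecommendedAsyncInSyncContext;
};

ContextualResultSketch contextualize(const ContextFreeResultSketch &cf,
                                     bool completionHappensInAsyncContext) {
  return {cf.IsAsync && !completionHappensInAsyncContext};
}
```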
rdar://78317170
I think that preferring identical over convertible makes sense in e.g. C++, where we have implicit user-defined type conversions, but since we don’t have them in Swift, the distinction doesn’t make too much sense: if we have a `func foo(x: Int?)`, we don’t really want to prioritize variables of type `Int?` over `Int`. Similarly, if we have `func foo(x: View)`, we don’t want to prioritize a variable of type `View` over e.g. `Text`.
rdar://91349364
Computing the type relation for every item in the code completion cache is way too expensive (~4x slowdown for global completion that imports `SwiftUI`). Instead, compute a type’s supertypes (protocol conformances and superclasses) once and write their USRs to the cache. To compute a type relation, we can then check whether the contextual type is in the completion item’s supertypes.
This reduces the overhead of computing the type relations (again, for global completion that imports `SwiftUI`) to ~6%, measured in instructions executed.
Technically, we might miss some conversions like
- retroactive conformances inside another module (because we can’t cache them if that other module isn’t imported)
- complex generic conversions (just too complicated to model using USRs)
Because of this, we never report an `unrelated` type relation for global items but always default to `unknown`.
But I believe this change covers the most common cases and is a good tradeoff between accuracy and performance.
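A minimal sketch of the lookup side of this scheme, with invented names
(`CachedItem`, `typeRelationFor`); the real cache format and USR
computation differ:
```
#include <set>
#include <string>

enum class TypeRelation { Unknown, Unrelated, Convertible, Identical };

struct CachedItem {
  std::string ResultTypeUSR;
  // Protocol conformances and superclasses, computed once when the item
  // is written to the cache.
  std::set<std::string> SupertypeUSRs;
};

TypeRelation typeRelationFor(const CachedItem &item,
                             const std::string &contextualTypeUSR) {
  if (item.ResultTypeUSR == contextualTypeUSR)
    return TypeRelation::Identical;
  if (item.SupertypeUSRs.count(contextualTypeUSR))
    return TypeRelation::Convertible;
  // Retroactive conformances and complex generic conversions are not
  // modeled, so global items never report Unrelated, only Unknown.
  return TypeRelation::Unknown;
}
```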
rdar://83846531
The filter name of a completion item is always used. Also, cached items
are used multiple times for filtering. So precomputing and caching the
filter name improves performance.
rdar://84036006
Convert 'ContextFreeCodeCompletionResult' constructor overloads to
'create()' factory methods. This makes the interface consistent with
'CodeCompletionString'. NFC
[CodeCompletion] Make ExpectedTypeContext a class with explicit getters/setters
This simplifies debugging because you can break when the possible types are set and you can also search for references to `setPossibleType` to figure out where the expected types are being set.
This makes the distinction between cacheable and non-cacheable properties cleaner and allows us to more easily compute contextual information (like type relations) for cached items later.
Previously, when creating a `SourceKit::CodeCompletion::Completion`, we needed to copy all fields from the underlying `SwiftResult` (aka `swift::ide::CodeCompletionResult`). The arena in which the `SwiftResult` was allocated still needed to be kept alive for the references stored in the `SwiftResult`.
To avoid this unnecessary copy, make `SourceKit::CodeCompletion::Completion` store a reference to the underlying `SwiftResult`.
Previously, the code completion methods just returned an `ArrayRef` that pointed into the result sink that contained the results, but no effort was made to actually keep that result sink alive, e.g. when transforming results in `transformAndForwardResults`.
Instead, return the `CodeCompletionResultSink` from the code completion methods now and adopt that sink from the inner results created in `transformAndForwardResults`.
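A hedged sketch of the lifetime issue (simplified types, invented names):
returning only a view into the sink leaves it dangling once the sink goes
out of scope, while returning the sink itself keeps the storage alive:
```
#include <string>
#include <vector>

struct ResultSinkSketch {
  std::vector<std::string> Results; // owns the storage
};

// Problematic shape: the returned reference dangles as soon as the local
// sink is destroyed.
// const std::vector<std::string> &completeDangling() {
//   ResultSinkSketch sink;
//   sink.Results.push_back("foo()");
//   return sink.Results; // dangling reference
// }

// Safer shape: hand the whole sink back so the caller owns the storage
// for as long as it needs the results.
ResultSinkSketch completeOwning() {
  ResultSinkSketch sink;
  sink.Results.push_back("foo()");
  return sink;
}
```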
Now that arguments are marked up with whether they have a default or
not, clients may not need the extra call (the one that has no default
arguments). Add an option to allow omitting this item.
Resolves rdar://85526214.
Essentially, just wire up cancellation tokens and cancellation flags for `CompletionInstance` and make sure to return `CancellableResult::cancelled()` when cancellation is detected.
rdar://83391488
We had some code paths left in `performOperation` that neither returned an error nor called the callback with results. Return an error in these cases and adjust the tests to correctly match the error.
This refactors a bunch of code-completion methods around `performOperation` to return their results via a callback only, instead of the current mixed approach of indicating failure via a return value, returning an error string via an inout parameter, and returning success results via a callback. The new guarantee is that the callback is always called exactly once on every control flow path.
Other than support for passing the (currently unused) cancelled state through the different instances, there should be no functionality change.
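A minimal sketch of the callback-only shape (simplified, invented names);
the point is just that every path delivers either results or an error
through the single callback, exactly once:
```
#include <functional>
#include <string>
#include <vector>

struct OutcomeSketch {
  std::vector<std::string> Results;
  std::string Error; // non-empty means failure
};

void performOperationSketch(
    bool canParseInputs,
    const std::function<void(const OutcomeSketch &)> &callback) {
  if (!canParseInputs) {
    // Previously this path returned silently; now it reports an error.
    callback({{}, "unable to parse inputs"});
    return;
  }
  callback({{"foo()", "bar()"}, ""});
}
```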
"add inits to toplevel" and "call pattern heuristics" are only used in
code completion. Move them from LangOptions to CodeCompletionContext so
that they don't affect compiler arguments.
To describe fine-grained priorities.
Introduce 'CodeCompletionFlair', a set of more descriptive flags for
prioritizing completion items. It aims to replace
'SemanticContextKind::ExpressionSpecific', which was a "catch-all"
prioritization flag.
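A rough sketch of the flag-set idea (the bit names below are illustrative
only; the real flair bits differ):
```
#include <cstdint>

// Each bit describes one concrete reason a completion might be boosted or
// penalized, instead of a single catch-all "expression specific" marker.
enum class FlairBitSketch : uint8_t {
  CommonKeywordAtCurrentPosition = 1 << 0,
  RareKeywordAtCurrentPosition = 1 << 1,
  ExpressionAtNonScriptTopLevel = 1 << 2,
};

struct FlairSketch {
  uint8_t Bits = 0;
  void add(FlairBitSketch b) { Bits |= static_cast<uint8_t>(b); }
  bool contains(FlairBitSketch b) const {
    return Bits & static_cast<uint8_t>(b);
  }
};
```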
For example, non-Darwin platforms probably don't want
`#colorLiteral(red:green:blue:alpha:)` and `#imageLiteral(named:)`.
Add a completion option to include them, which is "on" by default.
rdar://75620636
```
struct Foo {
  init(_ arg1: String, arg2: Int) {}
  init(label: Int) {}
}
func test(strVal: String) {
  _ = Foo(<HERE>)
}
```
In this case, 'strVal' was prioritized because it can be used as an
argument for 'init(_:arg2:)'. However, argument labels are almost always
preferable, and if the user actually wants 'strVal', they can type a few
characters to get it to the top. So we should always prioritize call
argument patterns.
rdar://77188260
Consolidate ThrowsKeyword, RethrowsKeyword, and AsyncKeyword to
EffectsSpecifierKeyword.
Abolish 'key.throwsoffset' and 'key.throwslength' as they aren't used.
Rather than replacing the code completion file
on the `CompilerInstance` whenever we do a cached
top-level completion, let's set a new main module
instead.
This allows us to properly update the
`LoadedModules` map, and allows the retrieval of
the code completion file to be turned into a
request.
When completing a single argument for a trailing closure, pre-expand the
closure expression syntax instead of using a placeholder. It's not valid
to pass a non-closure anyway.
rdar://62189182