This request indexes the given file into the given index store path, using the
given index unit output path. All other options are derived from the
index-store-related compiler flags.
This will allow IDEs like Xcode to index the file directly inside sourcekitd
and potentially reuse an already built AST.
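For illustration, such a request might be shaped roughly like the sketch below.
The request name and the index-store keys are assumptions based on the
description above, not a verbatim copy of the schema:
```
{
  key.request: source.request.index_to_store (request name assumed)
  key.sourcefile: <file name>
  key.index_store_path: <index store path> (key name assumed)
  key.index_unit_output_path: <index unit output path> (key name assumed)
  key.compilerargs: [...]
}
```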
Reformatting everything now that we have `llvm` namespaces. I've
separated this from the main commit to help manage merge conflicts and
to make the mega-patch a bit easier to read.
This is phase 1 of switching from llvm::Optional to std::optional in the
next rebranch. llvm::Optional was removed from upstream LLVM, so we need
to migrate off rather soon. On Darwin, std::optional and llvm::Optional
have the same layout, so we don't need to be as concerned about ABI
beyond the name mangling. `llvm::Optional` is only returned from one
function:
```
getStandardTypeSubst(StringRef TypeName,
                     bool allowConcurrencyManglings);
```
It's the return value, so it should not impact the mangling of the
function, and the layout is the same as `std::optional`, so it should be
mostly okay. This function doesn't appear to have users, and the ABI was
already broken two years ago for concurrency and no one seemed to notice,
so this should be "okay".
I'm doing the migration incrementally so that folks working on main can
cherry-pick back to the release/5.9 branch. Once 5.9 is done and locked
away, we can go through and finish the replacement. Since `None`
and `Optional` show up in contexts where they are not `llvm::None` and
`llvm::Optional`, I'm preparing the work now by going through and
removing the namespace unwrapping and making the `llvm` namespace
explicit. This should make it fairly mechanical to go through and
replace llvm::Optional with std::optional, and llvm::None with
std::nullopt. It's also a change that can be brought onto the
release/5.9 branch with minimal impact. This should be an NFC change.
Update the cursor requests to also pass in their primary file. Snapshots
should be compared using this file, not the input buffer name. This
fixes AST re-use when the AST is usable with snapshots.
Resolves rdar://110344363.
Expand macros in the specified source file syntactically (without any
module imports or type checking).
The request would look like:
```
{
  key.compilerargs: [...]
  key.sourcefile: <file name>
  key.sourcetext: <source text> (optional)
  key.expansions: [<expansion specifier>...]
}
```
`key.compilerargs` are used for getting plugins search paths. If
`key.sourcetext` is not specified, it's loaded from the file system.
Each `<expansion specifier>` is
```
{
  key.offset: <offset>
  key.modulename: <plugin module name>
  key.typename: <macro typename>
  key.macro_roles: [<macro role UID>...]
}
```
Clients have to provide the module and type names because resolving them
is a semantic operation.
The response is a `CategorizedEdits`, just like the (semantic) "ExpandMacro"
refactoring, but without `key.buffer_name`. Nested expansions are not
supported at this point.
Introduce a new key `generated_buffers`, which
stores an array of generated buffers. These
include the buffer text, as well as its original
location and any parent buffers.
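For illustration, an entry in that array might look roughly like the
sketch below; all sub-key names are assumptions chosen to mirror the
description above rather than the actual schema:
```
key.generated_buffers: [
  {
    key.buffer_text: <contents of the generated buffer> (key name assumed)
    key.original_location: <location the buffer was generated from> (key name assumed)
    key.parent_buffers: [<parent buffer name>...] (key name assumed)
  }
  ...
]
```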
While here, also fix rdar://107281079 so that
the filename fallback logic is only applied to
the pretty-printed Decl case. We ought to remove
this fallback once the editor can handle it, though.
rdar://107281079
rdar://107952288
Update requests to handle being passed a separate `key.primary_file`
which specifies the file to use for building the AST. `key.sourcefile`
is then the file to find in `SourceManager`, which could be a generated
buffer.
Resolves rdar://106863186.
This hooks up the cursor info infrastructure to be able to pass through multiple, ambiguous results. There are still minor issues that cause solver-based cursor info to not actually report the ambiguous results but those will be fixed in a follow-up PR.
This adds a new `primary_file` key, which defaults to `sourcefile`. For
nested expansions, `primary_file` should be set to the containing file
and `sourcefile` to the name of the macro expansion buffer.
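As a sketch, a cursor info request into a nested macro expansion buffer
could then look roughly like this; apart from `key.primary_file` and
`key.sourcefile`, the keys shown are just the usual cursor info keys:
```
{
  key.request: source.request.cursorinfo
  key.primary_file: <file containing the macro expansion>
  key.sourcefile: <name of the macro expansion buffer>
  key.offset: <offset within the expansion buffer>
  key.compilerargs: [...]
}
```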
`source.request.activeregions` is a new request that reports all
`#if`, `#else` and `#elseif` decls in the response with an `is_active` flag.
This is mainly useful for a client to know which branches are active,
but for completeness' sake, the active decls are also reported.
Note: it only reports the positions of the decls, not the entire ranges.
It's up to clients to map this to the syntactic structure of a tree.
Reporting the ranges of the decls could be confusing in the case
of nested `#if`s, where the ranges can overlap.
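A response might be shaped roughly like the sketch below; the result key
names are illustrative assumptions, only the request semantics and the
`is_active` flag come from the description above:
```
{
  key.results: [ (key name assumed)
    {
      key.offset: <position of the `#if`, `#elseif` or `#else` decl>
      key.is_active: <0 or 1> (key name assumed)
    }
    ...
  ]
}
```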
Add two new fields to refactoring edits:
- A file path if the edit corresponds to a buffer other than the
original file
- A buffer name when the edit is actually the source of a generated buffer
Macro expansions need the former, as a macro could expand to member
attributes, which may e.g. add accessors to each member. The attribute
itself is inside the expansion, but the edit is to the member in the
original source.
The latter will later allow clients to send requests with these names to
allow semantic functionality inside synthesized buffers.
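As an illustration, an individual edit could then carry the two new
fields roughly like this; apart from `key.buffer_name`, the key spellings
are assumptions modeled on the existing refactoring edit shape:
```
{
  key.line: <line>
  key.column: <column>
  key.endline: <end line>
  key.endcolumn: <end column>
  key.text: <replacement text>
  key.file: <path, if the edit targets a buffer other than the original file> (key name assumed)
  key.buffer_name: <name of the generated buffer the edit's text corresponds to>
}
```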
Running the SourceKit stress tester with verification of solver-based cursor info returned quite a few differences, but in all of them the old AST-based implementation was actually incorrect. So, instead of verifying the results, deliver the results from solver-based cursor info and only fall back to AST-based cursor info if the solver-based implementation returned no results.
rdar://103369449
This implements cursor info resolving for a few expression types using the constraint system. This allows us to detect ambiguous results – we cannot deliver them yet but that will be done in a follow-up PR.
The main problem that prevented us from reusing the ASTContext was that we weren’t remapping the `LocToResolve` in the temporary buffer that only contains the re-parsed function back to the original buffer. Thus `NodeFinder` couldn’t find the node that we want to get cursor info for.
Getting AST reuse to work for top-level items is harder because it currently heavily relies on the `HasCodeCompletion` state being set on the parser result. I’ll try that in a follow-up PR.
rdar://103251263
This allows us to mark expected deviations between the AST-based and the solver-based implementation in the stress tester as XFails without breaking actual clients.
We always verify if a cursor info request is issued through `sourcekitd-test`.
'sourcekitd' itself should not call 'swift::' functions. It should just
deserialize the request, execute the logic in LangSupport, then serialize
the result.
We will be using the string serialized results to verify that solver-based cursor info results match the old implementation. This is necessary because cursor info results on their own contain stack references that cannot be stored.
`getValue` -> `value`
`getValueOr` -> `value_or`
`hasValue` -> `has_value`
`map` -> `transform`
The old API will be deprecated in the rebranch.
To avoid merge conflicts, use the new API already in the main branch.
rdar://102362022
IDE/Refactoring had dependencies on libswiftIndex, but libswiftIndex
also depends on libswiftIDE (SourceEntityWalker, etc.).
To break the libswiftIndex <-> libswiftIDE dependency cycle, move the
refactoring-related files to a new library, 'libswiftRefactoring'.
rdar://101692282
#58786 (rdar://93030932) was failing because the `swift-frontend` invocations passed a `swiftExecutablePath` to `Invocation.parseArgs`. This caused the `ClangImporter` instance to point to a `clang` binary next to the `swift-frontend` executable while SourceKit used PATH to find `clang`. The clang executable next to `swift-frontend` doesn’t actually exist because `clang` lives in `llvm-linux-aarch64/bin` and `swift-frontend` lives in `swift-linux-aarch64/bin`.
So some checks for a minimum clang version failed for the normal build (because the executable doesn't actually exist) while they passed during the SourceKit build (which used `clang` from `PATH`). This in turn caused `outline-atomics` to be enabled in the SourceKit clang compiler arguments but not in the clang compiler arguments for a normal build, and thus resulted in two separate module cache directories (the module directory hash includes the enabled features).
To fix this issue, also set the swift executable path for compiler invocations created from SourceKit.
Fixes #58786 (rdar://93030932)
Essentially, just wire up cancellation tokens and cancellation flags for `CompletionInstance` and make sure to return `CancellableResult::cancelled()` when cancellation is detected.
rdar://83391488
Previously, `SwiftASTManager` and `SlowRequestSimulator` maintained their own list of in-progress cancellation tokens. With code completion cancellation coming up, there would need to be yet another place to track in-progress requests, so let’s centralize it.
While at it, also support cancelling requests before they are scheduled, eliminating the need for a `sleep` in a test case.
The current implementation leaks tiny amounts of memory if a request is cancelled after it finishes. I think this is fine because it is a pretty niche case and the leaked memory is pretty small (a `std::map` entry pointing to a `std::function` + `bool`). Alternatively, we could require the client to always dispose of the cancellation token manually.
Mark implicit declarations with a `synthesized` field, since while we
still want them to have a cursor info result (e.g. for their USR to find
possible overrides), it's likely that clients will want to treat them
differently to results that have a real location in source.
An alternative could be to remove the location entirely, but:
- the module name would also have to be removed to prevent wasted
lookups in the generated interface
- having the location could still be useful as the location that
caused the synthesized declaration (though at the moment only
synthesized initializers have a location)
Resolves rdar://84990655
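For example, a cursor info result for an implicit declaration might then
look roughly like this; the spelling `key.is_synthesized` is an
assumption, only the existence of a "synthesized" marker comes from the
description above:
```
{
  ... usual cursor info keys (kind, name, USR, location) ...
  key.is_synthesized: 1 (key name assumed)
}
```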
This refactors a bunch of code-completion methods around `performOperation` to return their results via a callback only, instead of the current mixed approach of indicating failure via a return value, returning an error string as an inout parameter, and delivering success results via a callback. The new guarantee should be that the callback is always called exactly once on every control flow path.
Other than support for passing the (currently unused) cancelled state through the different instances, there should be no functionality change.
Currently, SourceKit always implicitly sends diagnostics to the client after every edit. This doesn't match our new cancellation story because diagnostics retrieval is not a request and thus can't be cancelled.
Instead, diagnostics retrieval should be a standalone request, which returns a cancellation token like every other request and which can thus also be cancelled as such.
The intended transition path here is to change all open and edit requests to be syntactic only. When diagnostics are needed, the new diagnostics request should be used.
rdar://83391522
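A standalone diagnostics request would then look roughly like the sketch
below; the request name is an assumption based on the description, the
remaining keys are the usual ones:
```
{
  key.request: source.request.diagnostics (request name assumed)
  key.sourcefile: <file name>
  key.compilerargs: [...]
}
```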
The key changes here are
- To keep track of cancellation tokens for all `ScheduledConsumer`s in `SwiftASTManager`
- Generate unique request handles for all incoming requests (`create_request_handle`), use these request handles as cancellation tokens, and return them from the `sourcekitd_send_request` methods
- Implement cancellation with `sourcekitd_cancel_request` as the entry point and `SwiftASTManager::cancelASTConsumer` as the termination point
Everything else is just plumbing the cancellation token through the various abstraction layers.
rdar://83391505
- Add VariableTypeCollector
This new SourceEntityWalker collects types from variable declarations.
- Add SwiftLangSupport::collectVariableTypes
- Implement CollectVariableType request
- Provide information about explicit types in CollectVarType
- Fix HasExplicitType in VariableTypeArray
- Fix typo
- Implement ranged CollectVariableTypes requests
- Use offset/length params for CollectVariableType in sourcekitd-test
- Address a bunch of PR suggestions
- Remove CanonicalType from VariableTypeCollector
This turned out not to be needed (for now).
- Improve doc comment on VariableTypeCollector::getTypeOffsets
- Remove unused CanonicalTy variable
- Remove out-of-date comment
- Run clang-format on the CollectVariableType implementation
- Fix some minor style issues
- Use emplace_back while collecting variable infos
- Pass CollectVariableType range to VariableTypeCollector
- Use capitalized variable names in VariableTypeArray
...as recommended by the LLVM coding standards
- Use PrintOptions for type printing in VariableTypeCollector
- Return void for collectVariableType
This seems to be cleaner stylistically.
- Avoid visiting subranges of invalid range in VariableTypeCollector
- Use std::string for type buffer in VariableTypeCollectorASTConsumer
- Use plural for PrintedType in VariableTypeArray
- Remove unused fields in VariableTypeArrayBuilder
- Add suggested doc comments to VariableTypeArray
- Remove unused VariableTypeInfo.TypeLength
- Fix typo of ostream in VariableTypeCollectorASTConsumer
- Fix typo
- Document Offset and Length semantics in collectVariableTypes (an example request is sketched after this list)
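Tying the commits above together, a ranged CollectVariableType request
might look roughly like the sketch below; the request name is an
assumption, and the offset/length keys follow the semantics documented in
the last commit:
```
{
  key.request: source.request.variable.type (request name assumed)
  key.sourcefile: <file name>
  key.offset: <start of the range to collect variable types in>
  key.length: <length of that range>
  key.compilerargs: [...]
}
```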
Have SourceKit return locations for symbols outside of the current
module as well. Callsites of location and comment information should
explicitly disable retrieving serialized information where performance
is a concern.
Resolves rdar://75582627
.swiftsourceinfo files contain the serialized location for declarations.
Use this when outputting locations in cursor info so that clients need
not perform an extra index lookup for external modules.
We were only keeping track of `RawSyntax` node IDs to incrementally transfer a syntax tree via JSON. However, AFAICT the incremental JSON transfer option has been superseded by `SyntaxParseActions`, which are more efficient.
So, let’s clean up and remove the `RawSyntax` node ID and JSON incremental transfer option.
In places that still need a notion of `RawSyntax` identity (like determining the reused syntax regions), use the `RawSyntax`’s pointer instead of the manually created ID.
In `incr_transfer_round_trip.py` always use the code path that uses the `SyntaxParseActions` and remove the transitional code that was still using the incremental JSON transfer but was never called.