- Add `swift.play` CodeLens support as an experimental feature while [swift play](https://github.com/apple/swift-play-experimental/) is still experimental
- Add #Playground macro visitor to parse the macro expansions
- File must `import Playgrounds` to record the macro expansion
- The `swift-play` binary must exist in the toolchain
- `TextDocumentPlayground` records the id and, optionally, the label so that they match the detail you get from
```
$ swift play --list
Building for debugging...
Found 1 Playground
* Fibonacci/Fibonacci.swift:23 "Fibonacci"
```
- Add LSP extension documentation for designing pending `workspace/playground` request
- Add new parsing test cases
- Update CMake files
Issue: #2339, #2343
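For reference, a file the scanner would match might look like the sketch below. This assumes the `#Playground` macro syntax from swift-play-experimental (an optional string label plus a trailing closure); the exact macro signature is defined by that experimental package, and the file and function names are illustrative.

```swift
// Fibonacci/Fibonacci.swift
import Playgrounds  // required for the macro expansion to be recorded

func fibonacci(_ n: Int) -> Int {
  n < 2 ? n : fibonacci(n - 1) + fibonacci(n - 2)
}

// The optional label ("Fibonacci") is what `swift play --list` reports;
// an unnamed #Playground is identified by its line and column instead.
#Playground("Fibonacci") {
  print(fibonacci(10))
}
```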
Add column to unnamed label
Update Sources/SwiftLanguageService/SwiftCodeLensScanner.swift
Co-authored-by: Alex Hoppen <alex@alexhoppen.de>
Update Sources/SwiftLanguageService/SwiftPlaygroundsScanner.swift
Co-authored-by: Alex Hoppen <alex@alexhoppen.de>
Update Sources/SwiftLanguageService/SwiftPlaygroundsScanner.swift
Co-authored-by: Alex Hoppen <alex@alexhoppen.de>
Update Tests/SourceKitLSPTests/CodeLensTests.swift
Co-authored-by: Alex Hoppen <alex@alexhoppen.de>
Address review comments
Fix test failures
Fix more review comments
Update for swift-tools-core
I am making the first change because a separate PR in swiftlang fixes a bug that caused certain captured parameters to be incorrectly treated as sending parameters. This allowed parameters to be sent from one isolation domain to another when they should not have been.
The specific problem can be seen with the following Swift code:
```swift
actor B {
  init(callback: @escaping @Sendable () -> Void) async {}
}

actor A {
  private func poke() {}

  func schedule() async {
    _ = await B(
      callback: { [weak self] in  // closure 1
        Task.detached {  // closure 2
          await self?.poke()
        }
      })
  }
}
```
When we capture the weak self from closure 1 in closure 2, we are not actually
capturing self directly. Instead we are capturing the var box which contains the
weak self. The box (unlike self) is non-Sendable. Since closure 2 is not
guaranteed to be called at most once, the compiler must semantically assume that
the closure can be invoked multiple times, meaning that it cannot allow self to
be used in `Task.detached`. The fix is to perform an inner `[weak self]`
capture, as follows:
```swift
actor A {
  private func poke() {}

  func schedule() async {
    _ = await B(
      callback: { [weak self] in  // closure 1
        Task.detached { [weak self] in  // closure 2
          await self?.poke()
        }
      })
  }
}
```
This works because when we form the second weak self binding, we perform a load
from the outer weak self, giving us an `Optional<A>`. We then store that
optional value back into a new weak box. Since `Optional<A>` is Sendable, we
know that the two non-Sendable weak var boxes are completely unrelated, so we
can safely send the new var box into `Task.detached`.
The second `[weak self]` is just something I noticed later in the function; it
simply makes the detached closure safer.
Introduce `LSPRequest`, `LSPNotification`, `BSPRequest`, and
`BSPNotification`, and change the protocol conformance of each concrete type.
This is beneficial for implementing LSP- or BSP-only message handlers.
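The split can be sketched as follows. This is a minimal illustration of the idea, not the actual SourceKit-LSP type hierarchy: the marker protocol names match the commit, but the base protocols and concrete types here are assumptions.

```swift
// Assumed base protocols, standing in for the real message-type protocols.
protocol RequestType {}
protocol NotificationType {}

// The new marker protocols partition messages by transport.
protocol LSPRequest: RequestType {}
protocol LSPNotification: NotificationType {}
protocol BSPRequest: RequestType {}
protocol BSPNotification: NotificationType {}

// Concrete types now conform to the transport-specific protocol.
struct HoverRequest: LSPRequest {}
struct BuildTargetsRequest: BSPRequest {}

// A handler can now be constrained to LSP-only messages; passing a
// `BuildTargetsRequest` here would be a compile-time error.
func handle<R: LSPRequest>(_ request: R) -> String {
  "handled \(type(of: request))"
}
```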
When launching sourcekit-lsp without any command-line arguments, we would set `backgroundIndexing = false` in the options. Unless the user overrides this somehow, background indexing is disabled.
This is not an issue in VS Code, because it explicitly enables background indexing in the initialization request, but in all other editors background indexing was likely disabled by default.
Simply remove that line since `backgroundIndexing` defaults to `true` by now anyway.
The idea is pretty simple: When `MemberImportVisibility` is enabled, we know that imports can only affect the current source file. So, we can try to remove every single `import` declaration in the file, check whether a new error occurred, and if not, we can safely remove it.
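The loop above can be sketched as follows. The `diagnosticCount` oracle stands in for re-running diagnostics on the edited file; the function and parameter names are illustrative, not the actual SourceKit-LSP API.

```swift
// For each `import` line, drop it and re-check diagnostics; if no new
// error appears, the import was unused and can be removed.
func unusedImports(
  in lines: [String],
  diagnosticCount: ([String]) -> Int
) -> [Int] {
  let baseline = diagnosticCount(lines)
  var removable: [Int] = []
  for (index, line) in lines.enumerated()
  where line.hasPrefix("import ") {
    // Try the file without this import.
    var candidate = lines
    candidate.remove(at: index)
    // No new error: the import is safe to remove.
    if diagnosticCount(candidate) <= baseline {
      removable.append(index)
    }
  }
  return removable
}
```

This relies on `MemberImportVisibility` guaranteeing that removing an import can only affect the current file, so a clean re-check is a sufficient proof of safety.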
The modulo operator binds tighter than `??`, so the computation here was essentially `handle?.numericValue ?? (0 % 100)`, equivalent to `handle?.numericValue ?? 0`. This means that we didn’t actually perform the modulo operation on the numeric value, which means that we would exceed the maximum number of `os_log_t` objects after some time.
rdar://162891887
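The precedence issue can be reproduced in isolation. In Swift, `%` has multiplication precedence while `??` has nil-coalescing precedence, which is lower, so the modulus binds to the fallback literal instead of the whole expression. `numericValue` below stands in for `handle?.numericValue` with an illustrative value.

```swift
// An illustrative value above the modulus.
let numericValue: Int? = 250

// `%` binds tighter than `??`, so this parses as `numericValue ?? (0 % 100)`
// and the value is never reduced.
let buggy = numericValue ?? 0 % 100

// Parenthesizing applies the modulo to the value itself, keeping it in 0..<100.
let fixed = (numericValue ?? 0) % 100
```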
This allows us to easily get rid of some `@_inheritActorContext`. The others seem to be a little more tricky, and I haven’t spent much time trying to figure out how to remove the attribute from those.
Rename `LineTable.replace(utf8Offset:length:with)` to `tryReplace`
and bail if the provided range is out of bounds of the buffer. This
ensures we match the behavior of SourceKit when handling an
`editor.replacetext` request.
rdar://161268691
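The renamed API can be sketched as below. This is a minimal stand-in, assuming a byte-array backing store; the real `LineTable` storage and signature details differ.

```swift
struct LineTable {
  var content: [UInt8]

  /// Returns `false` instead of trapping when the range is out of bounds,
  /// matching SourceKit's behavior for `editor.replacetext`.
  mutating func tryReplace(utf8Offset: Int, length: Int, with bytes: [UInt8]) -> Bool {
    guard utf8Offset >= 0, length >= 0,
          utf8Offset + length <= content.count else {
      return false  // bail out rather than crash on a bad edit
    }
    content.replaceSubrange(utf8Offset..<(utf8Offset + length), with: bytes)
    return true
  }
}
```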
We previously waited for the initialization response from the build server during the creation of a `Workspace` so that we could create a `SemanticIndexManager` with the index store path etc. that was returned by the `build/initialize` response. This caused all functionality (including syntactic) of SourceKit-LSP to be blocked until the build server was initialized.
Change the computation of the `SemanticIndexManager` and related types to happen in the background so that we can provide functionality that doesn’t rely on the build server immediately.
Fixes #2304
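The shape of the change can be sketched as follows. This is a simplified illustration, not the real `Workspace` or `SemanticIndexManager` API: build-server initialization is kicked off in a `Task` at creation time, syntactic features never await it, and semantic features await it on first use.

```swift
final class Workspace {
  // Background initialization result (e.g. the index store path from
  // the `build/initialize` response, represented here as a String).
  private let semanticIndexManagerTask: Task<String, Never>

  init(initializeBuildServer: @escaping @Sendable () async -> String) {
    // Kick off build-server initialization without blocking the caller.
    semanticIndexManagerTask = Task { await initializeBuildServer() }
  }

  /// Syntactic functionality is available immediately.
  func syntacticHighlighting() -> String { "tokens" }

  /// Semantic functionality awaits the background initialization on first use.
  func semanticIndexStorePath() async -> String {
    await semanticIndexManagerTask.value
  }
}
```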
Absolute search paths were being ignored without logging, which made the
behavior somewhat difficult to diagnose. Log when they're skipped.
Also remove a duplicate options merging block - both
`createWorkspaceWithInferredBuildServer` and `findImplicitWorkspace`
(the only callers of `createWorkspace`) already merge in the workspace
options.
When `DYLD_(FRAMEWORK|LIBRARY)_PATH` is set, `dlopen` will first check
if the basename of the provided path is within any of its search paths.
Thus it's possible that a single library is loaded across toolchains,
rather than a separate one for each, as we expect. The paths should be
equal in this case, since the client plugin is loaded based on the path
of `sourcekitd.framework` (and we should only have one for the same
reason). Allow this case and just avoid re-initializing.
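The "allow this case" logic can be sketched as follows. This is an illustrative stand-in, not the actual loader code: the handle returned by `dlopen` is used to detect that two paths resolved to the same loaded image, and one-time initialization is skipped for a handle we have already seen.

```swift
final class PluginLoader {
  private var initializedHandles: Set<UnsafeMutableRawPointer> = []
  private(set) var initializationCount = 0

  /// `open` stands in for `dlopen`; with DYLD_(FRAMEWORK|LIBRARY)_PATH set,
  /// two different toolchain paths can yield the same handle.
  func load(path: String, open: (String) -> UnsafeMutableRawPointer?) {
    guard let handle = open(path) else { return }
    // Same image already loaded: allow it, but don't re-initialize.
    if initializedHandles.contains(handle) { return }
    initializedHandles.insert(handle)
    initializationCount += 1  // one-time plugin setup would happen here
  }
}
```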
Until we have better measurements that would motivate a different batching strategy, copying the driver’s batch size seems like the most reasonable thing to do.