Instead of producing a warning for each use of an unsafe entity,
collect all of the uses of unsafe constructs within a given function
and batch them together in a single diagnostic at the function level.
That diagnostic tells you what you can do (add `@unsafe` or
`@safe(unchecked)`, depending on whether all of the unsafe uses were in
the definition), along with notes identifying every unsafe use within
that declaration. The new diagnostic renderer collects these notes
together nicely in a single snippet, so the result is easier to reason
about.
Here's an example from the embedded runtime that previously would have
been 6 separate warnings, each with 1-2 notes:
```
swift/stdlib/public/core/EmbeddedRuntime.swift:397:13: warning: global function 'swift_retainCount' involves unsafe code; use '@safe(unchecked)' to assert that the code is memory-safe
395 |
396 | @_cdecl("swift_retainCount")
397 | public func swift_retainCount(object: Builtin.RawPointer) -> Int {
| `- warning: global function 'swift_retainCount' involves unsafe code; use '@safe(unchecked)' to assert that the code is memory-safe
398 | if !isValidPointerForNativeRetain(object: object) { return 0 }
399 | let o = UnsafeMutablePointer<HeapObject>(object)
| | `- note: call to unsafe initializer 'init(_:)'
| `- note: reference to unsafe generic struct 'UnsafeMutablePointer'
400 | let refcount = refcountPointer(for: o)
| | `- note: reference to let 'o' involves unsafe type 'UnsafeMutablePointer<HeapObject>'
| `- note: call to global function 'refcountPointer(for:)' involves unsafe type 'UnsafeMutablePointer<Int>'
401 | return loadAcquire(refcount) & HeapObject.refcountMask
| | `- note: reference to let 'refcount' involves unsafe type 'UnsafeMutablePointer<Int>'
| `- note: call to global function 'loadAcquire' involves unsafe type 'UnsafeMutablePointer<Int>'
402 | }
403 |
```
Note that we have lost a little bit of information, because we no
longer produce "unsafe declaration was here" notes pointing back at
things like `UnsafeMutablePointer` or `refcountPointer(for:)`. However,
strict memory safety tends to be noisy to turn on, so it's worth
losing a little bit of easily-recovered information to gain some
brevity.
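For example, following the advice in the warning above, this particular function could assert its own memory safety. This is the snippet from the diagnostic with only the suggested attribute added (it is embedded-stdlib code, so it won't compile on its own):

```swift
@safe(unchecked)  // asserts that the unsafe pointer operations below are memory-safe
@_cdecl("swift_retainCount")
public func swift_retainCount(object: Builtin.RawPointer) -> Int {
  if !isValidPointerForNativeRetain(object: object) { return 0 }
  let o = UnsafeMutablePointer<HeapObject>(object)
  let refcount = refcountPointer(for: o)
  return loadAcquire(refcount) & HeapObject.refcountMask
}
```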
Sema now type-checks the alternate ABI-providing decls inside of @abi attributes.
Making this work (particularly, making redeclaration checking work) required making name lookup aware of ABI decls. Name lookup now evaluates both API-providing and ABI-providing declarations. In most cases, it will filter ABI-only decls out unless a specific flag is passed, in which case it will filter API-only decls out instead. Calls that simply retrieve a list of declarations, like `IterableDeclContext::getMembers()` and friends, typically only return API-providing decls; you have to reach the ABI-providing ones through their API-providing counterparts.
As part of that work, I have also added some basic compiler interfaces for working with the API-providing and ABI-providing variants. `ABIRole` encodes whether a declaration provides only API, only ABI, or both, and `ABIRoleInfo` combines that with a pointer to the counterpart providing the other role (for a declaration that provides both, that’ll just be a pointer to `this`).
Decl checking of behavior specific to @abi will come in a future commit.
Note that this probably doesn’t properly exercise some of the new code (`ASTScope::lookupEnclosingABIAttributeScope()`, for instance); I expect that to happen only once we can rename types using an @abi attribute, since that will create distinguishable behavior differences when resolving TypeReprs in other @abi attributes.
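For reference, a declaration using the attribute has roughly this shape. This is a hypothetical sketch: the names are invented, and the exact spelling of the experimental attribute may differ.

```swift
// The decl inside @abi(...) provides only the ABI (mangled name, calling
// convention); the declaration it is attached to provides the API that
// source code sees. Sema now type-checks the ABI-providing decl, and name
// lookup filters it out of ordinary (API) lookups unless asked otherwise.
@abi(func oldSpelling(_ value: Int) -> Int)
public func newSpelling(value: Int) -> Int {
  value + 1
}
```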
Update the logic selecting the most restrictive import for a given
reference to account for @_exported imports from the local module. We
should always prioritize @_exported imports from the local module over
more restrictive imports in the same file. We prefer an import from the
same file only if it is also public, as that is more useful for
diagnostics and it's generally recommended to declare dependencies
locally.
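A hypothetical illustration of the configuration in question (module and type names invented; assumes access-level imports are available):

```swift
// Exports.swift: a file in the module being compiled, re-exporting
// the dependency as part of this module's own interface.
@_exported import Dependency

// Client.swift: another file in the same module, with a more
// restrictive import of the same dependency.
internal import Dependency

// A reference here should be attributed to the local @_exported import
// above rather than to the internal import in this file, unless the
// same-file import is itself public.
public func makeWidget() -> Dependency.Widget { Dependency.Widget() }
```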
Also update the test that was meant to check this configuration to apply
two different variations: one for a module-local @_exported import and
one relying on the underlying Clang module.
rdar://140924031
Rather than exposing an `addFile` member on
ModuleDecl, have the `create` members take a
lambda that populates the files for the module.
Once module construction has finished, the files
are immutable.
Previously, they were being parsed as top-level code, which would cause
errors because there are no definitions. Introduce a new
GeneratedSourceInfo kind to mark the purpose of these buffers so the
parser can handle them appropriately.
* Make `ExportedSourceFile` hold an arbitrary `Syntax` node as its root
* Move `ExportedSourceFileRequest::evaluate()` to `ParseRequests.cpp`
* Pass the decl context and `GeneratedSourceInfo::Kind` to
  `swift_ASTGen_parseSourceFile()` to customize the parsing
* Move round-trip checking into `ExportedSourceFileRequest::evaluate()`
* Split `parseSourceFileViaASTGen` completely from the C++ parsing logic
  (in `ParseSourceFileRequest::evaluate()`)
* Remove the 'ParserDiagnostics' experimental feature, now that the
  ParserASTGen mode includes the swift-syntax parser diagnostics
In #58965, lookup for custom derivatives in non-primary source files was
introduced. It required triggering delayed member parsing of nominal types in
a file if the file was compiled with differentiable programming enabled.
This patch introduces `CustomDerivativesRequest` to address the issue:
we now parse delayed members only if the tokens `@` and `derivative` appear
together inside a skipped nominal type body (similar to how member operators
are handled).
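For example, this is the kind of registration the request has to find in a non-primary file; the body of `Model` would otherwise remain skipped, and it is only parsed because `@` and `derivative` appear together inside it (a sketch assuming differentiable programming is enabled):

```swift
import _Differentiation

struct Model {
  func activation(_ x: Float) -> Float { max(0, x) }

  // The adjacent `@` and `derivative` tokens inside this (otherwise
  // skipped) type body are what trigger delayed member parsing.
  @derivative(of: activation)
  func activationDerivative(_ x: Float)
      -> (value: Float, pullback: (Float) -> Float) {
    (activation(x), { v in x > 0 ? v : 0 })
  }
}
```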
Resolves #60102
On Windows, we run into the following situation when running SourceKit-LSP tests:
- The SDK is located at `S:\Program Files\Swift\Platforms\Windows.platform\Developer\SDKs\Windows.sdk` with `S:` being a substitution drive
- We find `Swift.swiftmodule` at `S:\Program Files\Swift\Platforms\Windows.platform\Developer\SDKs\Windows.sdk\usr\lib\swift\windows\Swift.swiftmodule`
- Now, to check if `Swift.swiftmodule` is a system module, we take the realpath of the SDK, which resolves the substitution drive and results in something like `C:\Users\alex\src\Program Files\Swift\Platforms\Windows.platform\Developer\SDKs\Windows.sdk`
- Since we don’t take the realpath of `Swift.swiftmodule`, we will assume that it’s not in the SDK, because the SDK’s path is on `C:` while `Swift.swiftmodule` lives on `S:`
To fix this, we also need to check if a module’s real path is inside the SDK.
Fixes swiftlang/sourcekit-lsp#1770
rdar://138210224
Add a function to handle all kinds of macro dependencies in the scanner,
including the macro definitions in the module interface that the module's
clients need to use. The change involves:
* Encoding the macro definitions inside the binary module
* Resolving macro modules in the dependency scanner, including those
  declared inside the dependency modules
* Propagating the macros defined by direct dependencies, to track all of
  the potentially available modules inside a module compilation
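For context, the macro definitions in question look roughly like this in a library (and in its module interface); the scanner now has to resolve the named implementation module so that the library's clients can use the macro (names below are hypothetical):

```swift
// The dependency scanner must resolve "LibMacroImpls" (the compiler
// plugin module implementing the macro) and propagate it so that any
// client importing this library can expand #stringify.
@freestanding(expression)
public macro stringify<T>(_ value: T) -> (T, String) =
    #externalMacro(module: "LibMacroImpls", type: "StringifyMacro")
```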
ModuleDecl kept track of all of the source files in the module so that it
could find the source file containing a given location, which relied on a
sorted array of all of these source files. SourceManager has its own
similar data structure for a similar query, mapping locations to buffer
IDs.
Replace ModuleDecl's data structure with a use of the SourceManager's
version, combined with the mapping from buffer IDs to source files.
Now that every source file has a buffer ID, introduce the reverse mapping
so clients can find the source file(s) in their module that reference
that buffer ID.
We're using a small custom frontend tool to generate indexstore data for `.swiftinterface` files in the SDKs. We do this by treating the `.swiftinterface` file as the input of an interface compilation, but this exits early because it treats it as a `SourceFile` instead of an external `LoadedFile`. This happens even if we call `setIsSystemModule(true)` unless we skip setting the SDK path, but that causes other problems. It seems harmless to check for `SourceFile`s as well, so that a tool processing an SDK interface as a direct input still gets the right state.
The "buffer ID" in a SourceFile, which is used to find the source file's
contents in the SourceManager, has always been optional. However, the
effectively every SourceFile actually does have a buffer ID, and the
vast majority of accesses to this information dereference the optional
without checking.
Update the handful of call sites that provided `nullopt` as the buffer
ID to provide a proper buffer instead. These were mostly unit tests
and testing programs, with a few places that passed a never-empty
optional through to the SourceFile constructor.
Then, remove optionality from the representation and accessors. It is
now the case that every SourceFile has a buffer ID, simplifying a bunch
of code.
When onlyIfImported is true, we should return the public module name
only when the public-facing module is already imported. Replace the
call to getModuleByIdentifier with getLoadedModule to avoid triggering
the loading of that module.
Also fix the test where the CHECK line ended up matching itself in
the diagnostics output.
Change how we pick the one import to point to in diagnostics about a
referenced decl. This mostly affects the warning about superfluously
public imports. That warning encourages the developer to delete imports,
so let's make sure we push them towards deleting the right ones.
The order was previously not well defined, except that we always picked
one of the most public imports.
We now prioritize imports in this order:
1. The most public import. (This preserves the current behavior in
   type-checking of access levels on imports.)
2. The import of the public version of the module defining the decl,
determined via export_as or -public-module-name.
3. The import of the module defining the decl.
4. The first import in the sources bringing the decl via reexports.
5. Any other import, usually via an @_exported import in a different file.
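For instance (hypothetical modules, where `Core` defines `Thing` and `CoreUI` re-exports `Core`; assumes access-level imports are available), when two imports are equally public the diagnostic now points at the import of the defining module:

```swift
public import CoreUI   // re-exports Core
public import Core     // preferred by rule 3: the module defining 'Thing'

public func makeThing() -> Thing { Thing() }
```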
rdar://135357155
When looking up the decl context of a type, ASTDemangler has to take
into account that there are multiple different modules where that type
could've come from. This is due to two facts:
- Thanks to the `-module-abi-name` flag, multiple modules can share
the same ABI name (which is the module name that is usually used when
mangling a type).
- In some situations mangling can use the module's real name, for
example, when mangling for the debugger, or for USRs coupled with
@_originallyDefinedIn.
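A sketch of how two distinct modules can end up behind one mangled module name (the invocations below are illustrative, not taken from a real build):

```swift
// Built as, e.g.:
//   swiftc -module-name CoreCompat -module-abi-name Core ...
//   swiftc -module-name Core       -module-abi-name Core ...
//
// Types in either module mangle with the ABI name "Core", so when the
// ASTDemangler looks up Session's decl context it may need to consider
// both modules. Manglings for the debugger or for USRs can instead use
// the real module name ("CoreCompat"), which is the second case above.
public struct Session {}
```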
rdar://134095412
Modules defined within the SDK are considered
non-user modules; extend this to any module found
within the parent platform directory, if there is
one. This ensures we include modules such as
XCTest and Testing.
rdar://131854240
This makes sure that Swift respects `-Xcc -stdlib=libc++` flags.
Clang already has existing logic to discover the system-wide libc++ installation on Linux. We rely on that logic here.
Importing a Swift module that was built with a different C++ stdlib is not supported and emits an error.
The Cxx module can be imported when compiling with any C++ stdlib. The synthesized conformances, e.g. to CxxRandomAccessCollection, also work. However, CxxStdlib currently cannot be imported when compiling with libc++, since on Linux it refers to symbols from libstdc++, which have different mangled names in libc++.
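A minimal sketch of the supported configuration on Linux (the invocation is illustrative of the description above):

```swift
// Built with something like:
//   swiftc main.swift -cxx-interoperability-mode=default -Xcc -stdlib=libc++
import Cxx          // fine with either C++ standard library
// import CxxStdlib // currently not importable when compiling with libc++ on Linux

print("Hello from Swift with libc++")
```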
rdar://118357548 / https://github.com/swiftlang/swift/issues/69825
When emitting fix-its for missing imports, include an access level when the
module has been imported with an access level in other source files. For now,
the suggested access level will always be `internal`, even when uses of
members in the file would actually require `public` or `package` visibility. In
order to suggest the correct access level, name lookup will need to be
refactored to repair references to inaccessible declarations, instead of
leaving error nodes in the AST. In anticipation of that refactoring of name
lookup, missing import diagnostics are now delayed until type checking a source
file is finished so that a consistent access level can be suggested for each
import fix-it for a given module.
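To illustrate (hypothetical modules, assuming the `MemberImportVisibility` feature is enabled), the fix-it now reuses the access level the module is imported with elsewhere in the target, even when the use site would need more:

```swift
// FileA.swift: NetworkKit is imported with an explicit access level here.
internal import NetworkKit

// FileB.swift: refers to a NetworkKit member without importing it, so a
// missing-import diagnostic is emitted once type checking of the file is
// finished.
public func makeConnection() -> Connection {   // 'Connection' comes from NetworkKit
  fatalError("unimplemented")
}
// fix-it: insert 'internal import NetworkKit' in this file. For now it is
// always 'internal', even though this public signature would really need
// 'public import NetworkKit'.
```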
Partially resolves rdar://126637855.
In anticipation of adding a new kind of missing import record to `SourceFile`,
clarify the purpose of the existing "missing imports" record with more specific
naming and documentation.
The SwiftIfConfig library provides APIs for evaluating and extracting
the active #if regions in source code. Use its "configured regions" API
along with the ASTContext-backed build configuration to reimplement the
extraction of active/inactive regions from the source.
This approach has the benefit of being effectively stateless: where the
existing solution relies on the C++ parser recording all of the `#if`
clauses it sees as it is parsing (and then might have to sort them later),
this version does a scan of source to collect the list without requiring
any other state. The newer implementation is also conceptually cleaner,
and can be shared with other clients that have their own take on the
build configuration.
The primary client of this information is the SourceKit request that
identifies "inactive" regions within the source file, which IDEs can
use to grey out inactive code within the current build configuration.
There is also some profiling information that uses it. Those clients
should be unaffected by this under-the-hood change.
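For example, with a build configuration that doesn't define `DEBUG`, the configured-regions scan classifies the two clauses below, and the SourceKit request reports the inactive one so the IDE can grey it out:

```swift
#if DEBUG
// Inactive for this build configuration: reported as an inactive region.
func log(_ message: String) { print(message) }
#else
// Active region: compiled as normal.
func log(_ message: String) {}
#endif
```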
For the moment, I'm leaving the old code path in place for compiler
builds that don't have swift-syntax. This should be considered
temporary, and that code should be removed in favor of request'ifying
this function and removing the incrementally-built state entirely.
In anticipation of reusing minimum access level information for diagnostics
related to the `MemberImportVisibility` feature, refactor the way the type
checker tracks the modules which must be imported publicly. Recording minimum
access levels is no longer restricted to modules that are already imported in a
source file since `MemberImportVisibility` diagnostics will need this
information when emitting fix-its for modules that are not already imported.
Unblocks rdar://126637855.
The issue with recursion here is that if there are enough modules
involved, this function will blow the process stack, particularly
in the case where the `FileUnit`s are not `SourceFile`s, since in
that instance a `SmallVector` gets allocated on the stack for each
level of the recursion.
rdar://130527640
The operation that finds the best import for a given declaration was
treating an overlay module as being distinct from its underlying
module, even though they both have the same name and are imported
together. Teach it to treat those modules as equivalent, so we
correctly identify the right import declaration for something that
comes from the underlying module.
Fixes rdar://129401319.