Only return macros that are valid in their current position, e.g. an
attached macro is not valid on a nominal type.
Also return freestanding expression macros in code block item position
and handle the new freestanding code item macros.
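A minimal illustration of the intent (the macro names are hypothetical and assume matching declarations in an imported module):

```swift
@AddCompletionHandler        // attached macro: a valid suggestion on a function…
func fetch() async -> String { "" }

// …but not on a nominal, so completion should not offer it here.
struct Item {}

func demo() {
    #logCallSite             // freestanding expression macro: valid in
}                            // code block item position
```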
Resolves rdar://105563583.
Rather than using `ModuleDecl::isSystemModule()` to determine whether a
module is a non-user module, check whether the module is defined adjacent
to the compiler or is part of the SDK.
If no SDK path was given, then `isSystemModule` is still used as a
fallback.
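A rough sketch of the resulting check, using hypothetical names that do not match the actual compiler implementation:

```swift
// Hypothetical helper: a module is a non-user module if it is defined next
// to the compiler or inside the SDK; without an SDK path, fall back to the
// isSystemModule bit.
func isNonUserModule(modulePath: String, compilerLibPath: String,
                     sdkPath: String?, isSystemModule: Bool) -> Bool {
    if modulePath.hasPrefix(compilerLibPath) { return true }
    if let sdkPath { return modulePath.hasPrefix(sdkPath) }
    return isSystemModule
}
```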
Resolves rdar://89253201.
Although macro declarations don't appear in the Swift source code that
uses them, macros still operate as declarations within the language.
Rework `Macro` as `MacroDecl`, a generic value declaration, which
appropriately models its place in the language.
The vast majority of this change is in extending all of the various
switches on declaration kinds to account for macros.
Introduce `MacroExpansionExpr` and `MacroExpansionDecl` and plumb them
through. Parse them in roughly the same way we parse `ObjectLiteralExpr`.
The syntax is gated under `-enable-experimental-feature Macros`.
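For reference, the surface syntax being parsed (illustrative; it assumes the named macros are declared in an imported module and the flag above is passed):

```swift
// Expression position → MacroExpansionExpr
let (value, source) = #stringify(2 + 3)

// Code block / declaration position → MacroExpansionDecl
#declareHelpers
```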
When a function has an async alternative, the async alternative should be
preferred when completing in an async context. Thus, the synchronous method
should be marked as not recommended if the current context can handle async
methods.
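For example (illustrative code):

```swift
// The completion-handler variant has an async alternative…
func load(completion: @escaping (String) -> Void) { completion("done") }
func load() async -> String { "done" }

func caller() async {
    // …so when completing here, `load()` should be preferred and
    // `load(completion:)` marked as not recommended.
    let result = await load()
    _ = result
}
```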
rdar://88354910
Store whether a result is async in the `ContextFreeCodeCompletionResult` and determine whether an async method is used in a sync context when promoting the context free result to a contextual result.
rdar://78317170
I think that preferring identical over convertible makes sense in e.g. C++,
where we have implicit user-defined type conversions, but since we don't have
them in Swift, the distinction doesn't make much sense: if we have a
`func foo(x: Int?)`, we don't really want to prioritize variables of type
`Int?` over `Int`. Similarly, if we have a `func foo(x: View)`, we don't want
to prioritize a variable of type `View` over e.g. `Text`.
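Concretely (illustrative code):

```swift
func foo(x: Int?) {}

let optionalValue: Int? = 1
let plainValue: Int = 1

// When completing the argument below, `optionalValue` (identical type) and
// `plainValue` (convertible type) now get the same type relation instead of
// `optionalValue` being ranked higher.
foo(x: optionalValue)
foo(x: plainValue)
```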
rdar://91349364
Computing the type relation for every item in the code completion cache is
way too expensive (~4x slowdown for global completion that imports `SwiftUI`).
Instead, compute a type's supertypes (protocol conformances and superclasses)
once and write their USRs to the cache. To compute a type relation we can then
check if the contextual type is in the completion item's supertypes (see the
sketch below).
This reduces the overhead of computing the type relations (again, for global
completion that imports `SwiftUI`) to ~6%, measured by instructions executed.
Technically, we might miss some conversions like
- retroactive conformances inside another module (because we can’t cache them if that other module isn’t imported)
- complex generic conversions (just too complicated to model using USRs)
Because of this, we never report an `unrelated` type relation for global items but always default to `unknown`.
But I believe this change covers the most common cases and is a good tradeoff between accuracy and performance.
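A minimal sketch of the idea, with hypothetical names that do not match the actual cache format:

```swift
// Hypothetical shape of a cached global completion item.
struct CachedItem {
    let name: String
    let resultTypeUSR: String
    let supertypeUSRs: Set<String>   // USRs of superclasses and protocol conformances
}

enum TypeRelation { case identical, convertible, unknown }

// The contextual type's USR is checked against the cached supertype USRs;
// global items never report `unrelated`, only `unknown`.
func relation(of item: CachedItem, toContextualTypeUSR usr: String) -> TypeRelation {
    if item.resultTypeUSR == usr { return .identical }
    if item.supertypeUSRs.contains(usr) { return .convertible }
    return .unknown
}
```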
rdar://83846531