We want to be able to adopt
https://github.com/swiftlang/swift/pull/82225 in the stdlib without
breaking people building locally with older toolchains, so let's add
a feature flag.
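As a rough sketch of the intent (the feature name below is a
placeholder; the real one is whatever this change defines), the stdlib
can then gate its adoption like this:

```swift
// Placeholder feature name; the actual flag is introduced by this change.
#if hasFeature(NewCompilerBehavior)
// Adopt the new behavior from #82225 when building with a new enough compiler.
#else
// Keep the old spelling so older toolchains can still build the stdlib.
#endif
```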
The concrete nesting limit, which defaults to 30, catches
things like A == G<A>. However, with something like
A == (A, A), you end up with an exponential problem size
before you hit the limit.
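For illustration, here is roughly what such requirements look like in
source (a sketch; both are invalid recursive same-type requirements
that the requirement machine has to reject):

```swift
struct G<T> {}

protocol P {
  // Caught by the existing nesting limit: the substituted type only
  // grows one level deeper per step.
  associatedtype A where A == G<A>

  // Each substitution doubles the size of the concrete type, so the
  // problem blows up exponentially before the nesting limit of 30 is hit.
  associatedtype B where B == (B, B)
}
```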
Add two new limits.
The first limits the total size of the concrete type, counting
all leaves; it defaults to 4000 and can be set with the
-requirement-machine-max-concrete-size= frontend flag.
The second caps the number of type differences, which avoids an
assertion in addTypeDifference() that can be hit if that counter
overflows before any other limit is breached. This also defaults to
4000 and can be set with the
-requirement-machine-max-type-differences= frontend flag.
Package the flag into a `performanceHacksEnabled()` method on
`ConstraintSystem` and start using it to wrap all of the hacks
in the constraint generator and the solver.
The implementation of the ObjC -retain method saved the object in a local variable and returned that after calling swift_retain, which forced it to create a stack frame. swift_retain returns the object being retained, so we can take advantage of that and have the compiler emit a tail call to it instead.
Reflection metadata lookup failures are notoriously difficult to debug
because there is no error handling in TypeLowering outside of
compile-time #ifdef'd fprintf(stderr) calls. The nicest thing to do
would be to adopt llvm::Expected<>, but TypeLowering is also included
in the standard library, which only has access to a tiny subset of the
LLVM Support library. This patch adds a place to store a pointer to
the first encountered error, which can then be converted to an
llvm::Error at the API level.
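A conceptual sketch of that pattern, written in Swift rather than the
actual C++ TypeLowering code (all names here are made up):

```swift
enum LoweringError: Error {
  case failure(String)
}

struct LoweringContext {
  // Only the first failure is remembered; later failures are usually
  // cascading noise from the same root cause.
  private(set) var firstError: String?

  mutating func recordError(_ message: String) {
    if firstError == nil { firstError = message }
  }

  // At the API boundary, the stashed error is turned into a real error value.
  func finish<T>(_ result: T?) throws -> T {
    if let firstError { throw LoweringError.failure(firstError) }
    guard let result else { throw LoweringError.failure("unknown failure") }
    return result
  }
}
```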
Initially, the compiler rejected building dependencies that contained OS
versions in an invalid range. However, this turned out to be quite
disruptive, so instead allow it and request that these versions be
implicitly bumped to what `llvm::Triple::getCanonicalVersionForOS`
computes.
resolves: rdar://153205856
This patch makes sure we don't get warnings in strict memory safety
mode when using shared references. Those types are reference counted,
so we are unlikely to run into lifetime errors.
rdar://151039766
Currently, when we jump-to-definition for decls that are macro-expanded
from Clang-imported decls (e.g., safe overloads generated by
@_SwiftifyImport), setLocationInfo() emits a bogus location pointing to
a generated buffer, leading the IDE to try to jump to a file that does
not exist.
The root cause here is that setLocationInfo() calls getOriginalRange()
(earlier, getOriginalLocation()), which was not written to account for
such cases where a macro is generated from another generated buffer
whose kind is 'AttributeFromClang'.
This patch fixes setLocationInfo() with some refactoring:
- getOriginalRange() is inlined into setLocationInfo(), so that the
generated buffer-handling logic is localized to that function. This
includes how it handles buffers generated for ReplacedFunctionBody.
- getOriginalLocation() is used in a couple of other places that only
care about macros expanded from the same buffer (so other generated
buffers are not relevant). This "macro-chasing" logic is simplified
and moved from ModuleDecl::getOriginalRange() to a free-standing
function, getMacroUnexpandedRange() (there is no reason for it to be
a method of ModuleDecl).
- GeneratedSourceInfo now carries an extra ClangNode field, which is
populated by getClangSwiftAttrSourceFile() when constructing
a generated buffer for an 'AttributeFromClang'. This could probably
be union'ed with one or more of the other fields in the future.
rdar://151020332
Calling setupLLVMOptimizationRemarks overwrites the MainRemarkStreamer
in the LLVM context. This prevents LLVM from serializing the remark meta
information for the already emitted SIL remarks into the object file.
Without the meta information, bitstream remarks don't work correctly.
Instead, emit SIL remarks and LLVM remarks to the same RemarkSerializer,
and keep the file stream alive until after CodeGen.
Remarks emitted by LLVM use the mangled function name. Be consistent
with LLVM when serializing SIL remarks, so external tools can properly
filter for all remarks attributed to a certain function.
Ideally we'd be able to use the LLVM interleave2 and deinterleave2
intrinsics instead of adding these, but deinterleave currently isn't
available from Swift, and even if you hack that in, the codegen from
LLVM is worse than what shufflevector produces on both x86 and ARM. So
in the medium term we'll use these builtins and hope to remove them in
favor of [de]interleave2 at some future point.
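To make the semantics concrete, here is a sketch of 2-way interleaving
written with plain stdlib SIMD types (the new builtins themselves are
compiler-internal and not spelled out here):

```swift
func interleave2(_ a: SIMD4<Float>, _ b: SIMD4<Float>) -> SIMD8<Float> {
  var result = SIMD8<Float>()
  for i in 0..<4 {
    result[2 * i]     = a[i]   // even lanes come from the first vector
    result[2 * i + 1] = b[i]   // odd lanes come from the second vector
  }
  return result
}

// interleave2(SIMD4<Float>(0, 1, 2, 3), SIMD4<Float>(4, 5, 6, 7))
//   == SIMD8<Float>(0, 4, 1, 5, 2, 6, 3, 7)
```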
This was used a long time ago for a scanner design in which the client could specify that some modules *will be* present at a given location, even though they are not yet present during the scan. We have long since determined that, for soundness, the scanner must have all modules available to it at the time of the scan. This code has been stale for a couple of years, and it is time to simplify things a bit by deleting it.
Adds an access control field for each identified imported module. When multiple imports of the same module are found, this keeps track of the most "open" access specifier.
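A hedged illustration of the rule (assuming a toolchain that supports
access-level imports from SE-0409; the files below are made up):

```swift
// FileA.swift
internal import Foundation

// FileB.swift
public import Foundation

// The scanner sees two imports of Foundation at different access levels and
// records the most "open" one, public, for the Foundation dependency.
```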
Escaping solver-allocated types into a nested allocation arena is
problematic since we can, e.g., lazily compute the `ContextSubMap` for a
`NominalOrBoundGenericNominalType`, which is then destroyed when we
exit the nested arena. Ensure we don't pass any types with type
variables or placeholders to `typesSatisfyConstraint`.
rdar://152763265
LifetimeDependenceInsertion inserts a mark_dependence on the token result of a
begin_apply when it yields a lifetime-dependent value. When such a begin_apply
gets inlined, the inliner can crash because of the remaining uses of the token
result.
Fix this by inserting the mark_dependence on the parameter operands that are
lifetime dependence sources and deleting the mark_dependence on token results
in the inliner.
Fixes rdar://151568816
Begin accepting the attribute in the form of `@cdecl(cName)`, using an
identifier instead of a string.
For ease of landing this change, we still accept the string form. We
should stop accepting it before making this feature available in
production.
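A sketch of the new spelling (assuming the experimental @cdecl feature
is enabled, e.g. via -enable-experimental-feature CDecl; the exported
C name below is just an example):

```swift
// New form: the C name is written as an identifier.
@cdecl(mylib_add)
func add(_ a: CInt, _ b: CInt) -> CInt {
  a + b
}

// Old form, still accepted for now but slated for removal:
// @cdecl("mylib_add")
```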