Let's not perform $T? -> $T for closure result types, to avoid having
to re-discover solutions that differ only in the location of optional
injection.
The pattern with such type variables is:
```
$T_body <conv/subtype> $T_result <conv/subtype> $T_contextual_result
```
When `$T_contextual_result` is `Optional<$U>`, the optional injection
can happen either from `$T_body` or from `$T_result` (if the `return`
expression is non-optional); if we allowed both, the solver would
find two solutions that differ only in the location of the optional
injection.
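For illustration, a minimal hypothetical closure with an optional contextual result type (not from the original patch):
```
// Hypothetical example: the contextual result type is `Int?`, while the
// `return` expression produces a non-optional `Int`. Optional injection
// could wrap the value either at the body type or at the closure result
// type, yielding two solutions that differ only in where the wrap happens.
let makeValue: () -> Int? = {
    return 42
}
```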
Prioritize `build{Block, Expression, ...}` and any chained
members that are connected to individual builder elements,
e.g. `ForEach(...) { ... }.padding(...)`. Once `ForEach`
is resolved, `padding` should be prioritized because its
requirements can help prune the solution space before the
body is checked.
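A hypothetical SwiftUI-style sketch of that shape (the `Row` and `ItemList` types are made up for illustration):
```
import Foundation
import SwiftUI

struct Row: Identifiable {
    let id = UUID()
    let title: String
}

struct ItemList: View {
    let items: [Row]

    var body: some View {
        ForEach(items) { item in
            Text(item.title)
        }
        // Once `ForEach` is resolved, prioritizing `padding` lets its
        // requirements prune candidate overloads before the trailing
        // closure body is type-checked.
        .padding(10)
    }
}
```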
Allow CGFloat -> Double widening conversions between
candidate argument types and parameter types. This makes
sure that Double is always preferred over CGFloat
when matching literals and ranking supported disjunction
choices. The narrowing conversion (Double -> CGFloat) should
be delayed as long as possible.
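A hedged illustration of the kind of ranking this enables (the `area` overloads are hypothetical):
```
import CoreGraphics

// With the widening conversion allowed during candidate matching, the
// literal argument ranks the Double overload above the CGFloat one, and
// the narrowing Double -> CGFloat conversion is delayed.
func area(_ side: Double) -> Double { side * side }
func area(_ side: CGFloat) -> CGFloat { side * side }

let a = area(2.5)
```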
If a disjunction represents a member reference that has no arguments
applied, let's score it as `1` to indicate that it should be prioritized.
This helps in situations like `a.b + 1`, where resolving the `a.b` member
chain helps establish context for `+`.
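A minimal sketch of the `a.b + 1` situation (the types are hypothetical):
```
// Scoring the argument-less member disjunction for `a.b` as `1` resolves
// the member chain first, which gives `+` a concrete `Int` operand to rank
// its overloads against.
struct A { var b: Int }
let a = A(b: 2)
let x = a.b + 1
```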
If some of the requirements of a generic overload reference other
generic parameters, the optimizer won't be able to satisfy them
because it only has candidates for one (the current) parameter. In
cases like that, let's fall back to a lightweight protocol conformance
check instead of skipping the overload choice altogether.
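A hypothetical overload with that shape, where a same-type requirement ties two generic parameters together:
```
// `S1.Element == S2.Element` references both generic parameters, so it
// cannot be checked with candidates for just one of them; the lightweight
// conformance check `S1: Sequence` still can be performed.
func merge<S1: Sequence, S2: Sequence>(_ a: S1, _ b: S2) -> [S1.Element]
    where S1.Element == S2.Element {
    Array(a) + Array(b)
}
```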
Some disjunctions, e.g. explicit coercions, compositions of restrictions,
and implicit CGFloat initializers, have favored choices set independently
of the optimization algorithm; `selectDisjunction` should account for
such situations.
When operators are chained, it's possible that we don't know anything
about the parameter(s) but the result type is known from the context;
we should use that information.
This algorithm attempts to ensure that the solver always picks the disjunction
it knows the most about, given the previously deduced type information.
For example, in chains of operators like `let _: (Double) -> Void = { 1 * 2 + $0 - 5 }`,
the solver is going to start from `2 + $0` because `$0` is known to be `Double`,
then proceed to `1 * ...`, and only after that to `... - 5`.
The algorithm is pretty simple (a code sketch follows the list):
- Collect "candidate" types for each argument:
  - If the argument is bound, the set consists of just that one type
  - Otherwise:
    - Collect all the possible bindings
    - Add the default literal type (if any)
- Collect "candidate" types for the result
- For each disjunction in the current scope:
  - Compute a favoring score for each viable* overload choice:
    - Compute a score for each parameter:
      - Match parameter flags to argument flags
      - Match the parameter type to the set of candidate argument types:
        - If it's an exact match:
          - Concrete type: score = 1.0
          - Literal default: score = 0.3
        - The highest-scored candidate type wins
      - If none of the candidates match and they are all non-literal,
        remove the overload choice from consideration
    - Average the score by dividing it by the number of parameters
      to avoid disfavoring disjunctions with fewer arguments
    - Match the result type to the set of result candidates; add 1 to the
      score if one of the candidate types matches exactly
  - The best choice score becomes the disjunction's score
- Compute disjunction scores for all of the disjunctions in scope
- Pick the disjunction with the best overall score and favor the choices
  with the best local candidate scores (if some candidates have equal scores)

* Viable overloads include:
  - non-disfavored
  - non-disabled
  - available
  - non-generic (with a current exception for SIMD)
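A minimal, self-contained sketch of the per-choice scoring described above, using hypothetical stand-in types (`Candidate`, `Parameter`, `Choice`) rather than the solver's real data structures; flag matching is omitted for brevity:
```
struct Candidate { var type: String; var isLiteralDefault: Bool }
struct Parameter { var type: String }
struct Choice { var parameters: [Parameter]; var resultType: String }

func score(_ choice: Choice,
           argumentCandidates: [[Candidate]],
           resultCandidates: [Candidate]) -> Double? {
    var total = 0.0
    for (param, candidates) in zip(choice.parameters, argumentCandidates) {
        // Exact concrete match scores 1.0, a literal's default type 0.3;
        // the highest-scored candidate wins.
        let matches = candidates.compactMap { c -> Double? in
            c.type == param.type ? (c.isLiteralDefault ? 0.3 : 1.0) : nil
        }
        if let best = matches.max() {
            total += best
        } else if candidates.allSatisfy({ !$0.isLiteralDefault }) {
            // No candidate matches and they are all non-literal:
            // remove this overload choice from consideration.
            return nil
        }
    }
    // Average over the parameter count so choices with fewer arguments
    // aren't implicitly disfavored.
    total /= Double(max(choice.parameters.count, 1))
    // An exact result-type match adds 1 to the score.
    if resultCandidates.contains(where: { $0.type == choice.resultType }) {
        total += 1
    }
    return total
}
```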
If a (trailing) closure is determined to be an extraneous argument
for one of the overload choices, it needs to be marked as a hole as
eagerly as possible and prevented from being resolved, because
otherwise it's going to be disconnected from the rest of the
constraint system and resolution might not be able to find all of
the referenced variables. This could result either in crashes
or in superfluous diagnostics.
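A hypothetical example of the shape that exercises this: while the solver explores the single-parameter overload, the trailing closure is extraneous, yet the variables it references still need to be resolvable:
```
// For the first `render` overload the trailing closure is an extra
// argument; marking it as a hole eagerly keeps `message` connected to the
// constraint system while that choice is explored, instead of leading to
// crashes or superfluous diagnostics.
func render(_ value: Int) {}
func render(_ value: Int, _ completion: () -> Void) {}

let message = "done"
render(1) { print(message) }
```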
Resolves: rdar://141012049
This makes sure that the compiler does not emit `-enable-experimental-cxx-interop`/`-cxx-interoperability-mode` flags in `.swiftinterface` files. Those flags were breaking explicit module builds. The module can still be rebuilt from its textual interface if C++ interop was enabled in the current compilation.
rdar://140203932
Darwin defines memcmp with optional pointers. Update SwiftShims to
declare it with the same type to avoid deserialization failures where we
pick up one declaration over the other and the types no longer match.
rdar://140596571
As the utility runs, new gens may become local: as access scopes are
determined to contain deinit barriers, their `end_access` instructions
become kills; if such an `end_access` occurs in the same block above an
initially-non-local gen, that gen is now local.
Previously, it was asserted that no kill would be encountered when
visiting the block backwards from an initially-non-local gen. Iteration
would also _stop_ at the discovered kill, if any. As described above,
the assertion was incorrect.
Stopping at the discovered kill was also incorrect. It's necessary to
continue walking the block after finding such a new kill because of the
book-keeping the utility does to track which access scopes contain barriers.
Concretely, there are two cases:
(1) The block may contain another `end_access` and, above it, a deinit
barrier, which must result in that second scope becoming a deinit barrier.
(2) Some of its predecessors may be in the region; all the access scopes
which are open at the beginning of this block must be unioned into the set
of scopes open at each predecessor's end, and more such access scopes
may be discovered above the just-visited `end_access`.
Here, both the assertion failure and the early bailout are fixed by
walking from the indicated initially-non-local gen backwards over the
entire block, regardless of whether a kill was encountered. If a kill
is encountered, it is asserted that the kill is an `end_access` to
account for the case described above.
rdar://139840307
Type annotations for instruction operands are omitted, e.g.
```
%3 = struct $S(%1, %2)
```
Operand types are redundant anyway and were only used for sanity checking in the SIL parser.
But: operand types _are_ printed if the definition of the operand value was not printed yet.
This happens:
* if the block with the definition appears after the block where the operand's instruction is located
* if a block or instruction is printed in isolation, e.g. in a debugger
The old behavior can be restored with `-Xllvm -sil-print-types`.
This option is added to many existing test files which check for operand types in their check-lines.
Situations like:
```
let _: Double = <<CGFloat>>
<var/property of type Double> = <<CGFloat>>
```
Used to be supported due to an incorrect fix added in
diagnostic mode. Lower impact here means that the right-hand
side of the assignment is allowed to maintain CGFloat
until the very end, which minimizes the number of conversions
used and keeps literals as Double when possible.
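A hypothetical illustration of the intended behavior (assuming CoreGraphics for CGFloat):
```
import CoreGraphics

func currentWidth() -> CGFloat { 4 }

// The right-hand side keeps its CGFloat type until the very end; a single
// implicit CGFloat -> Double conversion happens at the assignment itself.
let width: Double = currentWidth() * currentWidth()
```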
Resolves: rdar://139675914
The _StringProcessing module provides a generic, collection-based
`contains` method that performs poorly for ranges and closed ranges.
This addresses the primary issue by providing concrete overloads
for Range and ClosedRange which match the expected performance for
these operations.
This change also fixes an issue with the existing range overlap tests.
The generated `(Closed)Range.overlap` tests were ignoring the "other"
range type when generating ranges for testing, so all overlap tests
were only run against ranges of the same type. This change makes sure
heterogeneous combinations are tested as well.
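A hedged usage sketch; the exact signature of the new `contains` overloads is an assumption based on the description above, while `overlaps(_:)` is pre-existing API:
```
let outer = 0..<1_000
let inner = 250..<750

// Assumed shape of the new concrete overload: containment reduces to bound
// comparisons instead of a generic, element-by-element collection search.
let containsInner = outer.contains(inner)

// Existing heterogeneous overlap check: Range against ClosedRange.
let overlapping = outer.overlaps(250...750)
```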
Add @PointerBounds macro
@PointerBounds is a macro intended to be applied by ClangImporter when
importing functions with pointer parameters from C headers. By
leveraging C attributes we can get insight into bounds, escapability, and
(eventually) lifetimes of pointers, allowing us to map them to safe(r)
and more ergonomic types than UnsafePointer.
This initial macro implementation supports CountedBy and SizedBy, but
not yet EndedBy. It can generate function overloads with and without an
explicit count parameter, as well as with UnsafeBufferPointer or Span
(if marked nonescaping), and any of their combinations. It supports
nullable/optional pointers, and both mutable and immutable pointers.
It supports arbitrary count expressions. These are passed to the macro
as a string literal since any parameters referred to in the count
expression will not have been declared yet when parsing the macro.
It does not support indirect pointers or inout parameters. It supports
functions with return values, but returned pointers cannot be bounds
checked yet.
Bounds-checked pointers must be of type Unsafe[Mutable]Pointer[?]<T>
or Unsafe[Mutable]RawPointer[?]. Count expressions must be of a type
that conforms to the BinaryInteger protocol and has an initializer with
signature "init(exactly: Int) -> T?" (or be of type Int).
rdar://137628612
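A rough usage sketch based on the description above; the exact spelling of the macro arguments and of the generated overload are assumptions, not the macro's confirmed interface:
```
// Hypothetical annotated import: `ptr` is bounded by `len`.
@PointerBounds(.countedBy(pointer: 1, count: "len"))
func fill(_ ptr: UnsafeMutablePointer<CInt>?, _ len: CInt) {
    ptr?.update(repeating: 0, count: Int(len))
}

// Assumed generated overload (illustrative only): the explicit count is
// dropped and the pointer becomes a bounds-checked buffer.
// func fill(_ ptr: UnsafeMutableBufferPointer<CInt>?)
```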
---------
Co-authored-by: Doug Gregor <dgregor@apple.com>
When we replay a solution, we must record changes in the trail, so fix the
logic to do that. This fixes the first assertion failure with this test case.
The test case also exposed a second issue. We synthesize a CustomAttr in
applySolutionToClosurePropertyWrappers() with a type returned by simplifyType().
Eventually, CustomAttrNominalRequest::evaluate() looks at this type, and passes
it to directReferencesForType(). Unfortunately, this entry point does not
understand type aliases whose underlying type is a type parameter.
However, directReferencesForType() is the wrong thing to use here, and we
can just call getAnyNominal() instead.
Fixes rdar://139237781.
`filterDisjunction` should ignore the choices that are already
disabled while attempting to optimize disjunctions related to
dynamic member lookup.
Resolves: rdar://139314763