Data can encapsulate its own subsequence type by storing a range over the underlying storage in the structural type for Data. This avoids the API explosion in which every API that takes Data would need an overload taking a slice of Data. It does come at a small conceptual cost: any index-based iteration must account for the startIndex and endIndex of the Data (previously an implicit requirement of conforming to Collection). Moreover, this avoids O(n) copies of Data when it is never mutated while parsing subsequences; so more than an API improvement, this also offers a more efficient code path for applications.
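
A minimal sketch of the intended behavior (the byte values are illustrative):

    import Foundation

    let data = Data([0x10, 0x11, 0x12, 0x13, 0x14])

    // Slicing shares storage with `data` rather than copying, and the
    // slice keeps the parent's indices.
    let slice = data[2..<4]

    // Index-based iteration must use startIndex/endIndex...
    for i in slice.startIndex..<slice.endIndex {
        print(slice[i]) // bytes 0x12, then 0x13
    }

    // ...because iterating 0..<slice.count would start at index 0,
    // which lies outside the slice's bounds and would trap.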
Swift 3 unintentionally allowed collection casts from, e.g.,
Set<AnyHashable> to Set<NSObject>, when in fact the object
representation of the AnyHashable might not be an NSObject. Fix up our
tests and overlays that ran afoul of this rule.
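
For illustration only (this snippet is not one of the affected tests), a cast of the shape Swift 3 accepted:

    struct Token: Hashable { let id: Int }

    let set: Set<AnyHashable> = [AnyHashable(Token(id: 1))]

    // Swift 3 unintentionally allowed a cast like the following, even
    // though the boxed representation of a Token is not guaranteed to
    // be an NSObject:
    //
    //     let objects = set as! Set<NSObject>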
This reverts commit 231fb5e054.
This test was failing the last time we updated the stable branches
of LLVM/Clang, so it was disabled. No one remembers what the
problem was, so try re-enabling it to see if anything breaks.
In the constraint solver, we've traditionally modeled nested types via
a "type member" constraint of the form
$T1 = $T0.NameOfTypeMember
and treated $T1 as a type variable. While the solver did generally try
to avoid attempting bindings for $T1 (it would wait until $T0 was
bound, which solves the constraint), on occasion we would get weird
behavior because the solver did try to bind the type
variable.
With this commit, model nested types via DependentMemberType, the same
way we handle (e.g.) the nested type of a generic type parameter. This
solution maintains more information (e.g., we know specifically which
associated type we're referring to), fits in better with the type
system (we know how to deal with dependent members throughout the type
checker, AST, and so on), and is easier to reason about.
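
As an illustrative example (not code from this commit), a declaration whose type checking involves a nested type:

    // Type-checking the body requires the solver to reason about the
    // nested type C.Element. Previously this introduced a fresh type
    // variable $T1 constrained by $T1 = $T0.Element; with this change
    // the result is represented directly as the dependent member type
    // $T0.Element, carrying the specific associated type it refers to.
    func firstElement<C: Collection>(_ c: C) -> C.Element? {
        return c.first
    }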
This change is a performance optimization for the type checker for a
few reasons. First, it significantly reduces the number of type
variables we need to deal with (we create half as many type variables while
type checking the standard library), and the solver scales poorly with
the number of type variables because it visits all of the
as-yet-unbound type variables at each solving step. Second, it
eliminates a number of redundant by-name lookups in cases where we
already know which associated type we want.
Overall, this change provides a 25% speedup when type-checking the
standard library.
The test was verifying that two independently constructed
empty Cocoa sets somehow had the same identity. Not sure why this used
to hold, but it's definitely not something we guarantee.
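
An illustrative sketch of the kind of check involved (the exact mechanism in the removed test may have differed):

    import Foundation

    // Bridge two independently constructed empty NSSets into Swift,
    // then compare the identity of their backing objects.
    let a = NSSet() as! Set<NSObject>
    let b = NSSet() as! Set<NSObject>
    let sameIdentity = (a as NSSet) === (b as NSSet)
    // Nothing guarantees `sameIdentity` is true.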
Some of these are kinda dubious, but I think this would be better
addressed as part of eager bridging, which will invalidate the concept
most of these tests are checking for.