See the comment at the top of ConcreteContraction.cpp for a detailed explanation.
This can be turned off with the -disable-requirement-machine-concrete-contraction
flag, mostly meant for testing. A few tests now run with concrete contraction
both enabled and disabled, to exercise code paths which are otherwise trivially
avoided by it.
Fixes rdar://problem/88135912.
In the "next" branch, set_size has been made protected. Instead, users
can use resize_for_overwrite, optionally followed by truncate, to
handle the same use cases more safely.
Add a few includes of Optional.h and Hashing.h. These files are failing
to build in the "next" branch due to changes in LLVM's own includes, but
it's general goodness to include them on main as well.
When calling a generic function with an argument of existential type,
implicitly "open" the existential type into a concrete archetype, which
can then be bound to the generic type. This extends the implicit
opening that is performed when accessing a member of an existential
type from the "self" parameter to all parameters. For example:
    func unsafeFirst<C: Collection>(_ c: C) -> C.Element { c.first! }

    func g(c: any Collection) {
      unsafeFirst(c) // currently an error
                     // with this change, succeeds and produces an 'Any'
    }
This avoids many common sources of errors of the form
protocol 'P' as a type cannot conform to the protocol itself
which come from calling generic functions with an existential, and
allows another way "out" if one has an existential and needs to treat
it generically.
This feature is behind a frontend flag
`-enable-experimental-opened-existential-types`.
Restructure how compatibility feature checks are printed in module
interfaces, and make `@_unsafeInheritExecutor` a suppressible feature.
Some language features are required in order to parse a
declaration correctly, but some can safely be ignored.
For the latter, we'd like the module interface to simply
contain the declaration twice, once with the feature and
once without. Some basic support for that was already
added for the SpecializeAttributeWithAvailability feature,
but it didn't interact correctly with required features
that might be checked in the same `#if` clause (it simply
introduced an `#else`), and it wasn't really set up to
allow multiple features to be handled this way. There
were also a few other places that weren't updated to
handle this, presumably because they never coincided
with a `@_specialize` attribute.
Introduce the concept of a suppressible feature, which is
anything the ASTPrinter can suppress by modifying the
current PrintOptions. Restructure the
printing of compatibility checks so that we can print
the body multiple times with different settings.
Print required feature checks in an outer `#if...#endif`,
then, if there are suppressible features, perform a separate
`#if...#else...#endif` within it. If there are multiple
suppressible features, check for the most recent first,
on the assumption that it will imply the rest; then
perform subsequent checks with an `#elseif` clause.
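As a rough sketch of the shape this produces in a printed interface
(the feature names and the declaration below are made up purely for
illustration):

    #if compiler(>=5.7) && $SomeRequiredFeature  // required features: outer check
    #if $NewerSuppressibleFeature                // most recent suppressible feature first
    @_newerAttribute @_olderAttribute
    public func doWork() async
    #elseif $OlderSuppressibleFeature
    @_olderAttribute
    public func doWork() async
    #else                                        // oldest tools: the decl without either attribute
    public func doWork() async
    #endif
    #endif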
This should be a far more solid foundation on which to
build compatibility checks in the future.
`@_unsafeInheritExecutor` needs to be suppressible
because it's been added to some rather important
existing APIs. Simply suppressing the entire decl will
effectively block old tools from using a new SDK to
build many existing projects (if they've adopted
`async`). Dropping the attribute changes the semantics
of these functions, but only if the compiler implements
the SE-0338 scheduling change; that only applies to a very
narrow window of main-branch development builds of the
tools, none of which were officially released.
I wanted a bit vector that I could use as a compact sorted
set of enum values: an inline-allocated, fixed-size array
of bits supporting efficient and convenient set operations
and iteration.
The C++ standard library offers std::bitset, but the API
is far from ideal for this purpose. It's positioned as
an abstract bit-vector rather than as a set. To use it
as a set, you have to turn your values into indices, which
for enums means explicitly casting them all in the caller.
There's also no iteration operation, so to find the
elements of the set, you have to iterate over all possible
indices, test whether they're in the set, and (if so)
cast the current index back to the enum. Not only is that
much more awkward than normal iteration, but it's also
substantially less efficient than what you can get by
counting trailing zeroes in a mask.
LLVM and Swift offer a number of other bit vectors, but
they're all dynamically allocated because they're meant
to track arbitrary sets. That's not a non-starter for my
use case, which is in textual serialization and so rather
slow anyway, but it's also not very hard to whip together
the optimal data structure here.
I have committed the cardinal sin of C++ data structure
design and provided the operations as ordinary methods
instead of operators.
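In that spirit, a rough sketch of the idea (the actual type is a C++
utility in the compiler; the Swift below, with made-up names, is only
an illustration of the design):

    enum Flavor: Int, CaseIterable {
      case sweet, salty, sour, bitter
    }

    // Inline, fixed-size storage: one word of bits is enough for this
    // small enum, and there is no heap allocation.
    struct EnumBitSet {
      private var bits: UInt64 = 0

      mutating func insert(_ value: Flavor) {
        bits |= 1 << value.rawValue
      }

      func contains(_ value: Flavor) -> Bool {
        bits & (1 << value.rawValue) != 0
      }

      // Convenient set operations.
      mutating func formUnion(_ other: EnumBitSet) { bits |= other.bits }
      mutating func formIntersection(_ other: EnumBitSet) { bits &= other.bits }

      // Iterate by peeling off the lowest set bit each time, using a
      // trailing-zero count, instead of testing every possible index.
      func forEach(_ body: (Flavor) -> Void) {
        var remaining = bits
        while remaining != 0 {
          body(Flavor(rawValue: remaining.trailingZeroBitCount)!)
          remaining &= remaining - 1   // clear the lowest set bit
        }
      }
    }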
SE-0338 changed the execution of non-actor async functions
so that they always hop to the generic executor, but some
functions need a way to suppress this so that they inherit
the caller's executor.
The right way to implement this is to have the caller pass
down the target executor in some reliable way and then
switch to it in all the appropriate places in the callee.
We might reasonably be able to build this on top of isolated
parameters, using some sort of default argument, or we might
need a wholly novel mechanism.
But those things are all ABI-breaking absent some sort of
guarantee about switching that we probably don't want to make,
and unfortunately we have functions in the library which we
need to export that need to inherit executors. So in the
short term, we need some unsafe way of getting back to the
previous behavior.
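For illustration, the attribute is applied directly to an async
function; the function below is made up (the real adopters are
existing library APIs):

    // Hypothetical adopter. Without the attribute, SE-0338 makes a
    // non-actor async function hop to the generic executor on entry;
    // with it, the body keeps running on the caller's executor.
    @_unsafeInheritExecutor
    public func withCallerExecutor<T>(
      _ body: () async throws -> T
    ) async rethrows -> T {
      try await body()
    }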
- Rename StepLimit to MaxRuleCount, DepthLimit to MaxRuleLength
- Rename command line flags to -requirement-machine-max-rule-{count,length}=
- Check limits outside of PropertyMap::buildPropertyMap()
- Simplify the logic in RequirementMachine::computeCompletion()
A scoped-down version of #39307. Implement extension of bound generic
types. The important bit here is in TypeCheckGeneric, where we now use
the underlying type of a typealias and its associated nominal type decl
when we're generating substitutions for the extended type.
Put this behind a new experimental flag
-enable-experimental-bound-generic-extensions
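For illustration, a hypothetical example of the kind of extension this
is meant to allow:

    typealias StringDictionary<Value> = Dictionary<String, Value>

    // Extending through a typealias whose underlying type is a bound
    // generic type; inside the extension, Key is fixed to String.
    extension StringDictionary {
      var sortedKeys: [String] { keys.sorted() }
    }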
Resolves SR-4875
Resolves rdar://17434633