Since the functions produce pointers with tightly scoped lifetimes, there's no formal reason they have to work only on `inout` values. Now that arguments can be passed at +0, we can even do this without copying values that we already have at +0.
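A hedged sketch of what this permits, assuming the stdlib's
withUnsafePointer(to:) entry point (illustrative, not the full set of
functions touched by this change):

let answer = 42  // immutable; previously only `inout` values worked
withUnsafePointer(to: answer) { p in
  // The pointer is valid only for the duration of the closure, so
  // `answer` can be borrowed at +0 rather than copied.
  print(p.pointee)
}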
On many-core machines, the files that take the longest to compile are
build bottlenecks. With this change, we work around clang's scaling
problems when compiling large llvm::StringSwitch expressions, and bring
the "touch tools/swift/include/swift/AST/* ; time ninja swift" time
down from 1m43.24s to 40.26s.
I've also benchmarked rebuilding Swift.o after this change; the
difference between before and after seems to be deep in the noise.
This has three principal advantages:
- It gives some additional type safety when working
with known accessors.
- It makes it significantly easier to test whether a declaration
is an accessor, and encourages a common idiom for doing so (see
the sketch after this list).
- It saves a small amount of memory in both FuncDecl and its
serialized form.
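For context, a hedged Swift-source illustration of the declarations in
question (the change itself is to the compiler's representation of
them, not to this surface syntax):

struct Box {
  private var storage = 0
  var value: Int {
    get { return storage }      // an accessor declaration
    set { storage = newValue }  // another accessor declaration
  }
}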
* Reduce array abstraction on Apple platforms when dealing with literals
Part of the ongoing quest to reduce swift array literal abstraction
penalties: make the SIL optimizer able to eliminate bridging overhead
when dealing with array literals.
Introduce a new classify_bridge_object SIL instruction to handle the
logic of extracting platform-specific bits from a Builtin.BridgeObject
value that indicate whether it contains an ObjC tagged-pointer object
or a normal ObjC object. This allows the SIL optimizer to eliminate
these checks, which in turn allows constant folding a ton of code. On
the example added to test/SILOptimizer/static_arrays.swift, this
results in 4x less SIL code, and also leads to a lot more commonality
between Linux and Apple platform codegen when passing an array literal.
This also introduces a couple of SIL combines for patterns that occur
in the array literal passing case.
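A hedged illustration of the kind of code this targets (the function
name is made up for the example):

// The backing storage for a constant literal like this can now be
// folded to a statically-initialized global instead of being rebuilt
// through per-element bridging checks on every call.
func takeArray(_ strings: [String]) {}
takeArray(["a", "b", "c"])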
If we want it to be a Swift function, we'll have to thunk in the
runtime when using a system implementation like dispatch_once_f,
since the function pointer ABIs could be different depending on
the target. Dealing with that, or avoiding it on a per-target basis,
is more complexity than a micro-optimization of the slow path of
this builtin could possibly be worth.
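A hedged illustration of the ABI distinction at issue: a
@convention(c) function value is a bare C function pointer, whereas a
native Swift function value is not, so only the former can be handed
to a C API like dispatch_once_f without a thunk.

// Carries a context pointer; a C API would need a runtime thunk.
let swiftFn: () -> Void = { print("one-time init") }

// A bare C function pointer; usable by C APIs directly, no thunk.
let cFn: @convention(c) () -> Void = { print("one-time init") }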
Now that the GenericSignatureBuilder is no longer sensitive to the input
module, stop uniquing the canonical GSBs based on that module. The main
win here is when deserializing a generic environment: we would end up
creating a canonical GSB in the module we deserialized and another
canonical GSB in the module in which it is used.
Implement a module-agnostic conformance lookup operation within the GSB
itself, so it does not need to be supplied by the code constructing the
generic signature builder. This brings the generic signature builder
(closer to) being module-agnostic.
Once we compute a generic signature from a generic signature builder,
all queries involving that generic signature will go through a separate
(canonicalized) builder, and the original builder can no longer be used.
The canonicalization process then creates a new, effectively identical
generic signature builder. How silly.
Once we’ve computed the signature of a generic signature builder, “register”
it with the ASTContext, allowing us to move the existing generic signature
builder into place as the canonical generic signature builder. The builder
requires minimal patching but is otherwise fully usable.
Thanks to Slava Pestov for the idea!
Funnel all places where we create a generic signature builder to compute
the generic signature through a single entry point in the GSB
(`computeGenericSignature()`), and make `finalize` and `getGenericSignature`
private so no new uses crop up.
Tighten up the signature of `computeGenericSignature()` so it only works on
GSB rvalues, and ensure that all clients consider the GSB dead after that
point by clearing out the internal representation of the GSB.
These will be used for unit-testing the Type::join functionality in the
type checker. The result of the join is replaced during constraint
generation with the actual type.
There is currently no checking of whether the arguments can be used to
statically compute the value, so bad things will likely happen if,
e.g., they are type variables. Once more of the basic functionality of
Type::join is working, I'll make this a bit more bullet-proof in that
regard.
They include:
// Compute the join of T and U and return the metatype of that type.
Builtin.type_join<T, U, V>(_: T.Type, _: U.Type) -> V.Type
// Compute the join of &T and U and return the metatype of that type.
Builtin.type_join_inout<T, U, V>(_: inout T, _: U.Type) -> V.Type
// Compute the join of T.Type and U.Type and return that type.
Builtin.type_join_meta<T, U, V>(_: T.Type, _: U.Type) -> V.Type
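A hedged usage sketch (this assumes a file built with -parse-stdlib
plus an explicit `import Swift`; the helper and class names are
illustrative, not taken from the actual tests):

import Swift

class Base {}
class D1: Base {}
class D2: Base {}

// V is inferred from context; during constraint generation the call
// is replaced with the actual joined type, presumably Base here.
func expectEqualType<T>(_: T.Type, _: T.Type) {}
expectEqualType(Builtin.type_join(D1.self, D2.self), Base.self)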
I've added a couple of simple tests to start off, based on what
currently works (i.e., doesn't cause an assert, crash, etc.).
"Accessibility" has a different meaning for app developers, so we've
already deliberately excised it from our diagnostics in favor of terms
like "access control" and "access level". Do the same in the compiler
now that we aren't constantly pulling things into the release branch.
This commit changes the 'Accessibility' enum to be named 'AccessLevel'.
I don't have reduced test cases. The original test cases
were a series of frontend invocations in -parse-stdlib
mode.
The original bugs seem to have been fixed, but while verifying
that, I found a few places where we weren't properly checking
for null decls in the ASTContext.
Probably not too useful to check this in, but I don't see it
causing any harm, either.
In anticipation of future attributes, and perhaps the ability to
declare lvalues with specifiers other than 'let' and 'var', expand
the "isLet" bit into a more general "specifier" field.
The GenericSignatureBuilder requires `finalize()` to be called before
a generic signature can be retrieved with `getGenericSignature()`.
Most of the former isn't strictly needed unless you want a generic
signature, and the latter is potentially expensive.
`computeGenericSignature()` combines the two operations, since they
are conceptually related. Update most of the callers of the former
two functions to use `computeGenericSignature()`.
Having such a builtin makes it easier for the optimizer to reason about what is actually happening.
Later, I plan to add optimizations for pieces of code dominated by such a check.