Besides the general goal of removing inlinable functions, this reduces code size and also improves performance for several benchmarks.
The performance problem was that inlining top-level String API functions (like String.count) into client code meant they eventually called non-inlinable internal String functions.
This is much slower than making a single call into the library at the top-level API boundary; inside the library, all the internal String functions can be specialized and inlined.
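A minimal sketch of the pattern, with hypothetical names (MyString, _countImpl) standing in for the actual stdlib declarations:

```swift
public struct MyString {
  var _storage: [UInt8] = []

  // Internal, non-inlinable helper; opaque when called from client modules.
  func _countImpl() -> Int {
    return _storage.count
  }

  // Before: marking this @inlinable copied its body into client code, where
  // the call to _countImpl() could not be specialized or inlined:
  //
  //   @inlinable
  //   public var count: Int { return _countImpl() }
  //
  // After: clients make one call across the API boundary, and inside the
  // library _countImpl() can be specialized and inlined into count.
  public var count: Int {
    return _countImpl()
  }
}
```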
rdar://problem/39921548
Previously, swiftImageInspectionShared generated one specific library at
`lib/libswiftImageInspectionShared.a` for only the main arch and sdk.
Generic cross compilation, and the various changes to the build system needed to get cross compilation working, will require swiftImageInspectionShared to generate libraries in the proper subdirectory. Change the outputs to agree with paths such as `lib/swift/linux/x86_64`.
Non-tagged NSStrings carry identity separate from their
value. Continue to bridge them lazily, even if they could fit in small
form, to respect this and avoid potential information loss.
Temporarily disable the invariant that all native strings that could be
small, are.
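For illustration only (assuming an Objective-C runtime is available); whether a particular NSString instance is tagged is a runtime detail, so this just shows the bridging call site the change affects:

```swift
import Foundation

let ns = NSString(string: "a short string")
let bridged = ns as String
// For a non-tagged NSString, the bridge stays lazy even when the contents
// would fit in the small-string form: the String keeps a reference to the
// original NSString instead of copying, preserving its identity.
print(bridged == "a short string")  // true
```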
This hoists the retain out of the casting runtime into Swift code and, along a
few paths in the runtime, allows us to eliminate a dynamic retain/release.
rdar://38196046
rdar://38771331
This is truly a consuming operation, as can be seen from the fact that we would
always need to retain the argument here. That makes the guaranteed -> owned
transformation less effective. Instead, represent it as taking a +1 argument so
that the retain happens outside the builtin instead of inside the builtin.
This also allows me to remove an extra copy from dynamicCastValueToNSError.
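Roughly, in user-facing Swift terms (the commit itself changes a builtin's convention, not these spellings; `consuming` is the modern surface syntax for an owned, +1 parameter):

```swift
final class Payload {}

// +0 ("guaranteed"): the caller keeps ownership for the duration of the call;
// the callee must retain the value itself to keep it alive afterwards.
func takesBorrowed(_ p: Payload) {}

// +1 ("owned"): ownership transfers at the call site, so the retain (if one is
// needed) happens outside the callee, which may consume the value directly.
func takesOwned(_ p: consuming Payload) {}
```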
rdar://38771331
When Set/Dictionary is nested in another Set, the boundaries of the nested collections weren’t correctly delineated in commutative hashing.
For example, these Sets all hashed the same:
[[1, 2], [3, 4]]
[[1, 3], [2, 4]]
[[1, 4], [2, 3]]
Hash collisions could thus be systematically generated.
To fix this, remove collection-level support for one-shot hashing and revert to the previous method of generating hash values. (Set is still able to support one-shot hashing for its members, though.)
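The collision pattern can be reproduced directly; before the fix, all three of these nested sets fed the same multiset of components into the hasher:

```swift
let a: Set<Set<Int>> = [[1, 2], [3, 4]]
let b: Set<Set<Int>> = [[1, 3], [2, 4]]
let c: Set<Set<Int>> = [[1, 4], [2, 3]]
// With the collection boundaries properly delineated again, these no longer
// collide systematically (any remaining equality is an ordinary collision).
print(a.hashValue, b.hashValue, c.hashValue)
```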
The new _rawHashValue(seed:) requirement allows stdlib types to specialize their hashing when they’re hashed on their own (i.e., not as a component of some composite type).
This makes it possible to get rid of discriminator/terminator values and to eliminate most of Hasher’s resiliency overhead, leading to a measurable speedup, especially for tiny keys.
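A conceptual sketch of what the specialization enables; the real requirement is an underscored member of Hashable whose exact signature isn't reproduced here, so the standalone entry point below is hypothetical:

```swift
struct TinyKey: Hashable {
  var value: UInt8

  // Used when TinyKey is hashed as a component of a composite key.
  func hash(into hasher: inout Hasher) {
    hasher.combine(value)
  }

  // Hypothetical stand-in for the "hashed on its own" fast path: the type can
  // produce a finished hash value directly from the seed, without the
  // discriminator/terminator bookkeeping of the general feeding protocol.
  func standaloneHash(seed: Int) -> Int {
    var hasher = Hasher()
    hasher.combine(seed)
    hasher.combine(value)
    return hasher.finalize()
  }
}
```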
The @inlinable attribute was left off by accident; adding it turned out to be a measurable performance boost for Character hashing.
Also add @effects(releasenone), for good measure.
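A hypothetical declaration showing both annotations (in source, the effects attribute is spelled with an underscore); the actual Character hashing entry point differs:

```swift
// @inlinable exposes the body across module boundaries; @_effects(releasenone)
// promises the optimizer that the call releases no objects.
@inlinable
@_effects(releasenone)
public func _hashSmallRepresentation(_ bits: UInt64) -> Int {
  var hasher = Hasher()
  hasher.combine(bits)
  return hasher.finalize()
}
```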
StringStorage.create is the primary means of allocating storage for a
string, so drop inlinability to allow for future evolution.
StringStorage also exposes some .appendInPlace methods, which we
currently need to keep inlinable for benchmark performance. We'd
really like to drop inlinability for these for evolution purposes
(e.g. imagine a future version that adjusts nul-termination or changes
in coordination with create). These are flagged with:
` // TODO(inlinability): @usableFromInline - P3`
Where "P3" reflects urgency on a scale from 1 (stop the presses) to 5
(whatevs).
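A sketch of the intended split, with hypothetical names standing in for StringStorage's real (internal) declarations:

```swift
public final class _ExampleStorage {
  public var bytes: [UInt8]
  init(capacity: Int) { bytes = []; bytes.reserveCapacity(capacity) }

  // Primary allocation entry point: deliberately not @inlinable, so a future
  // version can change allocation details (e.g. nul-termination) freely.
  public static func create(capacity: Int) -> _ExampleStorage {
    return _ExampleStorage(capacity: capacity)
  }

  // Kept @inlinable for benchmark performance, for now.
  // TODO(inlinability): @usableFromInline - P3
  @inlinable
  public func appendInPlace(_ byte: UInt8) {
    bytes.append(byte)
  }
}
```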
Dropping many of Array's @inlinable annotations caused some
performance regressions. Restore them temporarily while we figure out
how to better annotate these decls.
Aggressively remove all `@inlinable` from any function that's
`@inline(never)` to see the impact.
`@inlinable @inline(never)` is a potential code smell. While it might
expose some optimization and specialization opportunities to the
optimizer, it's most commonly a sign that more thought is needed.
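The combination in question, on a hypothetical declaration:

```swift
// The body is serialized into the module for cross-module specialization
// (@inlinable), yet is never actually inlined (@inline(never)). Occasionally
// that trade-off is deliberate; more often it deserves a rethink.
@inlinable
@inline(never)
public func slowPathCopy(_ values: [Int]) -> [Int] {
  return Array(values)
}
```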
This removes the default implementation of hash(into:), and replaces it with automatic synthesis built into the compiler. Hashable can now be implemented by defining either hashValue or hash(into:) -- the compiler supplies the missing half automatically, in all cases.
To determine which hash(into:) implementation to generate, the synthesizer resolves hashValue -- if it finds a synthesized definition for it, then the generated hash(into:) body implements hashing from scratch, feeding components into the hasher. Otherwise, the body implements hash(into:) in terms of hashValue.
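Both directions of the synthesis, illustrated with hypothetical types:

```swift
// hashValue is synthesized here, so the compiler also generates a
// hash(into:) that feeds the stored properties straight into the hasher.
struct Point: Hashable {
  var x: Int
  var y: Int
}

// hashValue is defined manually here, so the synthesized hash(into:) is
// generated in terms of that custom hashValue instead.
struct LegacyKey: Hashable {
  var id: Int
  var hashValue: Int { return id }
  static func == (lhs: LegacyKey, rhs: LegacyKey) -> Bool {
    return lhs.id == rhs.id
  }
}
```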