In the absence of conditional conformances, they are now extra types
with different constraints, plus extra overloads of
stride(from:to:by:) and stride(from:through:by:).
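For reference, the two stride families differ only in whether the end value is included; a minimal sketch:

```swift
// stride(from:to:by:) excludes the end value; stride(from:through:by:) includes it.
let halfOpen = Array(stride(from: 0.0, to: 1.0, by: 0.25))
let closed = Array(stride(from: 0, through: 10, by: 5))
print(halfOpen)  // [0.0, 0.25, 0.5, 0.75]
print(closed)    // [0, 5, 10]
```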
When we pre-scan the components of a key path pattern to determine its runtime type and instance size, we would short-circuit upon seeing an optional-chaining component, since that makes a key path definitely read-only. However, the same loop also accumulates the size of the instance we're supposed to allocate, so bailing out early left the size under-counted and corrupted the allocation. Leave out the short-circuit, fixing SR-6096 | rdar://problem/34889333 .
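To illustrate the read-only behavior, a sketch with hypothetical types:

```swift
struct Address { var street: String }
struct Person { var address: Address? }

// An optional-chaining component makes the key path read-only:
// the inferred type is KeyPath, not WritableKeyPath.
let kp: KeyPath<Person, String?> = \Person.address?.street
let p = Person(address: Address(street: "Main"))
let street = p[keyPath: kp]
print(street as Any)  // Optional("Main")
```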
Extensions on ImplicitlyUnwrappedOptional are not actually accessible
because we force the optional (and thus can only access things on the
type it is wrapping).
Remove these from the stdlib in order to pave the way toward fully
implementing SE-0054.
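A minimal sketch of the forcing behavior, assuming a `String!` value:

```swift
// Member lookup on an IUO value forces the optional first, so the member
// is resolved on the wrapped type (String), never on the optional itself.
let name: String! = "swift"
let upper = name.uppercased()
print(upper)  // "SWIFT"
```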
- Use SWIFT_RUNTIME_EXPORT instead of SWIFT_RT_ENTRY_VISIBILITY for exposed functions
- Use `_swift_` prefixes on the names of exposed functions
- Make the global counters and per-object counters cache thread-safe by using locks
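A sketch of the locking approach for the counters, using `NSLock` and `Dispatch` as stand-ins for the runtime's own primitives:

```swift
import Foundation

// Hypothetical illustration: guard counter updates with a lock so concurrent
// increments from multiple threads cannot race.
final class CounterStore {
    private var counts: [String: Int] = [:]
    private let lock = NSLock()

    func increment(_ name: String) {
        lock.lock()
        defer { lock.unlock() }
        counts[name, default: 0] += 1
    }

    func value(of name: String) -> Int {
        lock.lock()
        defer { lock.unlock() }
        return counts[name, default: 0]
    }
}

let store = CounterStore()
DispatchQueue.concurrentPerform(iterations: 1000) { _ in
    store.increment("swift_retain")
}
print(store.value(of: "swift_retain"))  // 1000
```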
When building the stdlib for Windows x86_64, we would see the following error:
swift/stdlib/public/core/RuntimeFunctionCounters.swift:95:19: error: '(UnsafeRawPointer, Int64) -> Void' is not representable in Objective-C, so it cannot be used with '@convention(c)'
@convention(c) (_ object: UnsafeRawPointer, _ functionId: Int64) -> Void
^
This is caused by `Int64` not having a C-type mapping on Windows
x86_64: there, `CLong` is mapped to `Int32` and `CLongLong` is mapped
to `Int`. As a result, `Int64` fails to be reverse-mapped to a C type,
causing the FFI construction failure.
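One way to keep such a callback type C-representable on every platform is to spell it with the C typealiases; a sketch (the names here are hypothetical, and the actual fix in the stdlib may differ):

```swift
// CLongLong always has a C counterpart: it maps to Int64 on LP64 platforms
// and to Int on Windows x86_64 (LLP64), unlike a bare Int64 there.
typealias FunctionCounterCallback =
    @convention(c) (UnsafeRawPointer, CLongLong) -> Void

var lastFunctionId: CLongLong = 0

// A @convention(c) closure must not capture context; writing to a global is fine.
let callback: FunctionCounterCallback = { _, functionId in
    lastFunctionId = functionId
}

var dummy = 0
withUnsafePointer(to: &dummy) { p in
    callback(UnsafeRawPointer(p), 42)
}
print(lastFunctionId)  // 42
```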
Move the bit masks from Metadata.h to SwiftShims' HeapObject.h. This
exposes the bit masks to the stdlib, so that the stdlib doesn't have
to maintain its own per-platform magic numbers. This also improves
readability for BridgeObject, whose magic numbers are mostly derived
from Swift's ABI.
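A sketch of the kind of derivation this enables, with hypothetical mask values (the real masks live in the shims header and vary per platform):

```swift
// Hypothetical spare-bits mask; clearing it recovers the pointer payload,
// in the spirit of how BridgeObject separates its flag bits from the pointer.
let spareBitsMask: UInt64 = 0xF000_0000_0000_0007
let payloadMask = ~spareBitsMask

let taggedValue: UInt64 = 0x0000_7FFF_DEAD_BEE0 | 0x1  // pointer plus a flag bit
let pointerBits = taggedValue & payloadMask
print(String(pointerBits, radix: 16))  // 7fffdeadbee0
```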
makeUnique hoisting can create a situation where we hit an assert in
_reserveCapacityAssumingUniqueBuffer that the buffer is not unique if we use
_makeMutableAndUniqueOrPinned to replace
_makeUniqueAndReserveCapacityIfNotUnique.
It is actually fine to do the replacement, because
_reserveCapacityAssumingUniqueBuffer will make the buffer unique if the
capacity is zero, so adjust the assert accordingly.
do {
  if (someCond) {
    // array.append(...)
    _makeUniqueAndReserveCapacityIfNotUnique(&array)
    _reserveCapacityAssumingUniqueBuffer(&array)
    _appendElementAssumeUniqueAndCapacity(&array, …)
  } else {
    // array[i] = …
    _makeMutableAndUniqueOrPinned(&array)
    addr = _getElementAddress(&array)
    store 1, addr
  }
} while();
to:
_makeMutableAndUniqueOrPinned(&array) // does not replace empty arrays.
do {
  if (someCond) {
    // array.append(...)
    _reserveCapacityAssumingUniqueBuffer(&array) // hit the assert.
    _appendElementAssumeUniqueAndCapacity(&array, …)
  } else {
    // array[i] = …
    addr = _getElementAddress(&array)
    store 1, addr
  }
} while();
Tested by the performance tests when the stdlib is built with assertions.
rdar://34149935
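Swift-level code of roughly this shape (a hypothetical reduction, not the original test) exercises both paths inside one loop:

```swift
var array: [Int] = []
for i in 0..<4 {
    if i % 2 == 0 {
        array.append(i)   // lowers to the reserve/append entry points
    } else {
        array[0] = i      // lowers to the make-mutable/element-address path
    }
}
print(array)  // [3, 2]
```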