(instead of an unused field that we'd rather not initialize)
constexpr can't be 100% enforced in a template, so instead it gets
silently dropped if the instantiated function doesn't fulfill all the
requirements of being constexpr. In this case, that was a constructor
not explicitly initializing all fields, even the one we marked
unavailable. This meant we were using a non-constexpr constructor to
instantiate a global, which semantically requires static
initialization.
(The actual initialization is still optimized away at the LLVM
level. But the Clang frontend doesn't know that.)
Note that the warning will still fire unless you update your Clang
build; I just today cherry-picked the change that handles unnamed
bitfields correctly.
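For readers unfamiliar with the rule involved, here is a minimal, hypothetical illustration (not the actual runtime code) of how constexpr on a template is silently dropped for an instantiation that can't satisfy the constant-expression requirements:

```cpp
struct NonLiteral { NonLiteral() {} };  // not usable in constant expressions

template <class T>
struct Box {
  T value;
  // On a template, constexpr is only a request: if an instantiation can't
  // satisfy the constexpr requirements, the keyword is silently ignored
  // rather than diagnosed.
  constexpr Box() : value() {}
};

constexpr Box<int> ok;   // int is a literal type, so the constructor stays constexpr
Box<NonLiteral> global;  // here the constructor is silently non-constexpr, so this
                         // global needs a runtime initializer instead of constant
                         // initialization
```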
Move bit masks from Metadata.h to SwiftShims' HeapObject.h. This
exposes the bit masks to the stdlib, so that the stdlib doesn't have
to have its own magic numbers per-platform. This also enhances
readability for BridgeObject, whose magic numbers are mostly derived
from Swift's ABI.
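As a hypothetical sketch of the pattern (the names and values below are illustrative, not the real ABI constants), the masks live once in a plain C header that both the C++ runtime and the stdlib, via SwiftShims, can see:

```cpp
#include <stdint.h>

#ifdef __LP64__
static const uint64_t ExampleObjCTaggedPointerBits = 0x1ULL;                 // illustrative value
static const uint64_t ExampleBridgeObjectSpareBits = 0x7F00000000000006ULL;  // illustrative value
#else
static const uint32_t ExampleObjCTaggedPointerBits = 0x0U;                   // illustrative value
static const uint32_t ExampleBridgeObjectSpareBits = 0x00000003U;            // illustrative value
#endif
```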
This is a blanket pass replacing use of `__LP64__` with
`__POINTER_WIDTH__ == 64`. The latter is more expressive and is also
LLP64-clean. This change is needed to enable support for Windows x86_64,
which is an LLP64 environment.
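A small sketch, assuming Clang's predefined macros, of why the spelling matters on Windows:

```cpp
// Windows x86_64 is LLP64: 'long' stays 32-bit, so __LP64__ is not defined,
// even though pointers are 64 bits wide.
#if defined(__LP64__)
// taken only on LP64 targets (64-bit Linux/Darwin); skipped on 64-bit Windows
#endif

#if __POINTER_WIDTH__ == 64
// taken on every 64-bit target, LP64 and LLP64 alike
typedef unsigned long long StoredPointer;  // hypothetical example type
#else
typedef unsigned int StoredPointer;
#endif
```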
* Mutations of slices of Data should preserve relative indexing as well as CoW semantics of slices
* Ensure hashes of ranges are uniform to the expected hash for Data
* Correct a few mistakes in the slice mutation tests
* Update sequence initializations to avoid directly calling mutableCopy, which prevents slice offset mismatches
* Avoid invalid index slices in creating mirrors
* Restore the original Data description
* Resetting a slice region should expand the slice to the maximum of the region (not an out-of-bounds index of the backing buffer)
* Remove stray comment and use a stack buffer for sequence appending
* Return false when allocations fail in _resizeConditionalAllocationBuffer (not yet in use)
* Enumeration of regions of a slice should be limited to the slice range in the case of custom backing (e.g. dispatch_data_t)
* Adjust assertion warnings for Data indexes that are negative
Rename the explicit `pthread` to `thread` to indicate that this is not
`pthread` specifically. This allows us to implement the TLS support for
Windows using Fibers.
In Xcode project builds where LLVM/Clang are built differently than
Swift (e.g., RelWithDebInfo LLVM/Clang + Debug Swift, a Very Useful
Combination), the LLVM headers symlink pointed to nowhere. Follow the
same logic used for the Clang headers to deal with this case.
Similarly to Clang, the flag enables coverage instrumentation and links
`libLLVMFuzzer.a` into the produced binary.
Additionally, this change adjusts the driver logic to enable the
concurrent use of multiple sanitizers.
On 32-bit platforms there are only 7 bits reserved for the unowned retain count,
which makes overflow a likely scenario. Implement overflow into the side table.
rdar://33495003
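A minimal sketch of the overflow idea, assuming a 7-bit inline field as described above (this is not the actual RefCount implementation):

```cpp
#include <cstdint>

// Hypothetical numbers matching the description: a 7-bit inline unowned count.
constexpr uint32_t kInlineUnownedBits = 7;
constexpr uint32_t kInlineUnownedMax  = (1u << kInlineUnownedBits) - 1;

struct SideTableEntry {            // stand-in for the real side table entry
  uint64_t unownedCount;
};

struct InlineCounts {
  uint32_t unowned : kInlineUnownedBits;
  uint32_t usesSideTable : 1;
  SideTableEntry *side;

  InlineCounts() : unowned(0), usesSideTable(0), side(nullptr) {}

  void incrementUnowned() {
    if (!usesSideTable && unowned < kInlineUnownedMax) {
      ++unowned;                   // common fast path stays inline
      return;
    }
    if (!usesSideTable) {          // first overflow: spill the count out
      side = new SideTableEntry{unowned};
      usesSideTable = 1;
    }
    ++side->unownedCount;          // slow path counts in the side table
  }
};
```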
To get the full benefit of dyld3 on Darwin platforms, pointer relocations need to be pointer-aligned, which unfortunately requires growing some key path data structures a little bit. This does tidy up some code that had to hack around our lack of unaligned load/store operations on UnsafeRawPointer, at least. While we're here, we can also simplify the identification strategy for reabstracted stored properties; we only need the property index to identify them, not the absolute offset. rdar://problem/32318829
* IRGen: EmptyBoxType's representation cannot be nil because of a conflict with the extra-inhabitant assumption in indirect enums
We map nil to the .None case of Optional. Use a singleton object instead.
SR-5148
rdar://32618580
We need to use the ivar offset variables in this case, since the Swift field offset vector doesn't pick up the adjusted offsets from the ObjC runtime. Fixes SR-5036 | rdar://problem/32488871.
Adds in Linux platform support for our pthread TLS. Replace usage of
PTHREAD_KEYS_MAX with a sentinel value, as it's tricky to define
cross-platform and was only lightly used inside sanity checks.
Introduce shims for using UBreakIterators from ICU. Also introduce
shims for using thread local storage via pthreads.
We will be relying on ICU and UBreakIterators for grapheme
breaking. But, UBreakIterators are very expensive to create,
especially for the way we do grapheme breaking, which is relatively
stateless. Thus, we will stash one or more into thread-local storage
and reset them as needed.
Note: Currently, pthread_key_t is hard-coded for a single platform
(Darwin), but I have a static_assert alongside directions on how to
adapt it to any future platform whose key type differs.
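A rough sketch of the stash-and-reuse pattern and the kind of static_assert the note describes, assuming pthreads and Darwin's unsigned long key type (names are illustrative):

```cpp
#include <pthread.h>
#include <type_traits>

// Make the hard-coded platform assumption explicit so a port to a platform
// with a different pthread_key_t fails at compile time instead of silently
// misbehaving. (unsigned long matches Darwin; adjust for other platforms.)
static_assert(std::is_same<pthread_key_t, unsigned long>::value,
              "update the TLS shims for this platform's pthread_key_t");

struct CachedBreakState { /* would wrap a UBreakIterator */ };

static pthread_key_t cachedStateKey;
static pthread_once_t cachedStateOnce = PTHREAD_ONCE_INIT;

static void destroyCachedState(void *value) {
  delete static_cast<CachedBreakState *>(value);  // runs at thread exit
}

static void createCachedStateKey() {
  pthread_key_create(&cachedStateKey, destroyCachedState);
}

// The expensive object is created once per thread, stashed in TLS, and
// reset/reused on later calls instead of being recreated each time.
CachedBreakState *getCachedState() {
  pthread_once(&cachedStateOnce, createCachedStateKey);
  void *state = pthread_getspecific(cachedStateKey);
  if (!state) {
    state = new CachedBreakState();  // placeholder for the real ubrk_open(...)
    pthread_setspecific(cachedStateKey, state);
  }
  return static_cast<CachedBreakState *>(state);
}
```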
A property imported from Objective-C, or marked in Swift with the `dynamic` keyword, doesn't have a vtable slot, so can't be identified that way. Use the ObjC selector as the unique identifier to ascribe equality to such components. Fixes rdar://problem/31768669. (While we're here, throw some more execution tests and a changelog note in.)
* [Foundation] Refactor the backing of IndexPath to favor stack allocations
The previous implementation of IndexPath would cause a malloc of the underlying array buffer upon bridging from Objective-C. This impacts graphical APIs (such as UICollectionView or AppKit equivalents) when calling delegation patterns. Since IndexPath itself can be a tagged pointer and is most often just a pair of elements, it can be represented as an enum of common cases. Those common cases of empty, single, or pair can be represented respectively as no associated value, a single Int, and a tuple of Ints. These cases will be exclusively stack allocations, which is markedly faster than the allocating code path. IndexPaths that have a count greater than 2 will still fall into the array storage case. As an added performance benefit, accessing count and subscripting is now faster by approximately 30% due to more tightly coupled inlining potential under whole-module optimization. Accessing count is also faster since it has better cache-line efficiency (lesson learned: the branch predictor is more optimized than pointer-indirection chasing).
Benchmarks were performed on x86_64; arm and arm64 results are still pending but should be applicable across the board.
Resolves the following issues:
https://bugs.swift.org/browse/SR-3655
https://bugs.swift.org/browse/SR-2769
Resolves the following radars:
rdar://problem/28207534
rdar://problem/28209456
* [Foundation] remove temp IndexPath hashing that required bridging to ref types
* [Foundation] IndexPath does not guarantee hashing to be the same as objc
HeapObject contains an InlineRefCounts. The Swift shim definition of
InlineRefCounts used uint64_t, which has the correct size but the incorrect
alignment on armv7k. This caused the Swift stdlib and the Swift runtime
to disagree about object layouts.
rdar://31664545
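A sketch of the kind of layout drift involved: the 32-bit header places the 8-byte refcount field according to its alignment, so a shim whose field has a different natural alignment than the runtime's type on a given target silently describes a different struct. (The HeapObject name in the commented check is the runtime type; the illustrative struct and guard below are not the actual stdlib code.)

```cpp
#include <cstdint>

// The shim spelled the refcount field as a plain uint64_t:
struct ShimHeapObject {
  void *metadata;
  uint64_t refCounts;   // natural alignment of uint64_t: 4 on armv7, 8 on armv7k
};

// On a 32-bit target, struct layout follows member alignment: the refcount
// field lands at offset 4 (12-byte header) when its alignment is 4, but at
// offset 8 (16-byte header) when it is 8. If the runtime's own InlineRefCounts
// type doesn't share uint64_t's alignment on a given target, the stdlib and
// runtime disagree about where the field lives.
//
// An illustrative guard against this kind of drift (not the actual stdlib check):
// static_assert(sizeof(ShimHeapObject) == sizeof(HeapObject) &&
//               alignof(ShimHeapObject) == alignof(HeapObject),
//               "shim and runtime disagree about HeapObject layout");
```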
I had written the code here optimistically hoping #7837 would land in time for me to merge, but that didn't happen, so adjust some things to match the current 12-byte object header size on 32-bit, and introduce some ABI constants for the expected 32- and 64-bit object header sizes we can assert against so that we have some robustness when it eventually changes again. Implements rdar://problem/31768303.
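A small sketch of the assert-against-ABI-constants pattern described here (constant names are hypothetical; the 12-byte figure is from the text above, the 64-bit figure is assumed):

```cpp
#include <cstddef>

// Expected object header sizes per pointer width, kept in one place so that
// key path layout code can assert against them instead of hard-coding numbers.
constexpr size_t ExpectedHeapObjectSize32 = 12;   // from the commit text
constexpr size_t ExpectedHeapObjectSize64 = 16;   // assumed for illustration

constexpr size_t ExpectedHeapObjectSize =
    sizeof(void *) == 8 ? ExpectedHeapObjectSize64 : ExpectedHeapObjectSize32;

// Wherever the header size is baked into a layout decision:
// static_assert(sizeof(HeapObject) == ExpectedHeapObjectSize,
//               "object header size changed; revisit key path layout");
```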