Ensure that the local definitions of the stdlib functions are marked for
export. Previously we would mark the runtime functions as dllimport on
Windows.
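Roughly, the issue is that the runtime's own definitions picked up the client-side attribute. A minimal sketch of the intended split, with hypothetical macro and function names rather than the runtime's actual spellings:

    // BUILDING_THE_RUNTIME is a hypothetical define used only for this sketch.
    #if defined(_WIN32)
    #  if defined(BUILDING_THE_RUNTIME)
    #    define RUNTIME_EXPORT __declspec(dllexport)   // local definitions: export
    #  else
    #    define RUNTIME_EXPORT __declspec(dllimport)   // external clients: import
    #  endif
    #else
    #  define RUNTIME_EXPORT __attribute__((__visibility__("default")))
    #endif

    RUNTIME_EXPORT void swift_exampleRuntimeEntryPoint(void);

The export attribute has to be selected when compiling the runtime's own definitions; only client declarations should see dllimport.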
32-bit has a 7-bit inline unowned refcount, then 31 bits in the side table. Overflowing the inline count in deinit on an object that didn't already have a side table would crash, because the code assumed that creating a side table in deinit was not allowed.
(64-bit has 31 bits inline and in the side table. Overflowing the inline count immediately overflows the side table as well, so there's no change in behavior there.)
rdar://problem/33765960
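A rough sketch of the widths described above; the field names and surrounding layout are illustrative, not the runtime's actual representation:

    #include <cstdint>

    // 32-bit targets: a narrow inline unowned count that spills into a side table.
    struct InlineRefCountBits32 {
      std::uint32_t unownedCount : 7;   // overflowing this requires a side table,
                                        // even if the overflow happens in deinit
      std::uint32_t otherBits    : 25;  // strong count, flags, etc. (not detailed here)
    };

    struct SideTableRefCountBits32 {
      std::uint32_t unownedCount : 31;  // the 31-bit count once a side table exists
      std::uint32_t flagBit      : 1;
    };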
This new format more efficiently represents existing information, while
more accurately encoding important information about nested generic
contexts with same-type and layout constraints that need to be evaluated
at runtime. It's also designed with an eye to forward- and
backward-compatible expansion for ABI stability with future Swift
versions.
clang is miscompiling some swiftcall functions on armv7s.
Stop using swiftcall in some places until it is fixed.
Reverts c5bf2ec (#13299).
rdar://35973477
* Check for overflow in incrementWeak().
This mirrors what is currently done for unowned reference counts, where overflowing the side table field produces a fatal error. Without this, the count silently wrapped from 2^31-1 to 0, which then caused breakage when the balancing releases happened (possibly including use-after-free bugs). A sketch of this check appears after the list below.
* Fix the implementation of RefCounts::getWeakCount().
The previous implementation was only appropriate for heap objects, not side tables, so the reported weak count was always 0 or 1. This change specializes the implementation for the two cases and returns the correct count for side tables.
* Test large weak retain counts.
This tests the largest allowed weak retain count, as well as the overflow check when that count is exceeded.
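A hedged sketch of the check described in the first item, assuming a 31-bit side-table field; the real code lives in the runtime's refcounting implementation and differs in detail:

    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>

    static std::uint32_t incrementWeakChecked(std::uint32_t weakCount) {
      const std::uint32_t maxWeakCount = (1u << 31) - 1;  // 31-bit side-table field
      if (weakCount >= maxWeakCount) {
        // Fail loudly instead of silently wrapping 2^31-1 back to 0 and breaking
        // the balancing releases later.
        std::fprintf(stderr, "Fatal error: weak reference count overflow\n");
        std::abort();
      }
      return weakCount + 1;
    }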
The prior refactoring to add the protocol conformance descriptor into
witness tables offset the # of requirements stored in the witness
table by 1. Address this oddity in the metadata by ensuring that the #
of requirements (and # of mandatory requirements) in the protocol
descriptor is accurate.
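An illustrative picture of the off-by-one, with made-up type names; the real witness table and descriptor layouts differ:

    // The conformance descriptor occupies a slot ahead of the requirement witnesses.
    struct IllustrativeWitnessTable {
      const void *conformanceDescriptor;
      const void *witnesses[1];   // requirement witnesses start here
    };

    // The requirement counts recorded in the protocol descriptor must count only
    // the witnesses; also counting the descriptor slot shifts every lookup by one.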
Swift class metadata has a bit to distinguish it from non-Swift Objective-C
classes. The stable ABI will use a different bit so that stable Swift and
pre-stable Swift can be distinguished from each other.
No bits are actually changed yet. Enabling the new bit needs to wait for
other coordination such as libobjc.
rdar://35767811
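The idea, sketched with illustrative bit values; the actual bits and the field they live in are coordinated with libobjc and are not changed by this commit:

    #include <cstdint>

    // Hypothetical markers in a pointer-sized class field; values are illustrative.
    enum : std::uintptr_t {
      IsSwiftPreStableABI = 0x1,   // classes emitted by pre-stable Swift
      IsSwiftStableABI    = 0x2,   // classes emitted by ABI-stable Swift (future)
    };

    static bool isAnySwiftClass(std::uintptr_t classDataBits) {
      return (classDataBits & (IsSwiftPreStableABI | IsSwiftStableABI)) != 0;
    }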
Don't emit placeholders for field offsets and vtable entries,
since they were always null. Instead, calculate the final size
of class metadata at runtime using the size of the superclass
metadata and the number of immediate members, and only copy
this prefix from the template to the instantiated metadata,
zero-filling the rest.
For this to work with non-generic resilient classes and
non-generic subclasses of generic classes, we need a new
runtime entry point to relocate non-generic class metadata,
calculating its size at runtime using the same strategy.
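A sketch of that calculation and copy, assuming each immediate member occupies one pointer-sized slot; the names are illustrative, not the real runtime entry points:

    #include <cstddef>
    #include <cstdlib>
    #include <cstring>

    static std::size_t instantiatedClassMetadataSize(std::size_t superclassMetadataSize,
                                                     std::size_t numImmediateMembers) {
      // Superclass metadata plus one word per immediate member
      // (field offsets, vtable entries, generic arguments).
      return superclassMetadataSize + numImmediateMembers * sizeof(void *);
    }

    static void *relocateClassMetadata(const void *pattern, std::size_t patternSize,
                                       std::size_t totalSize) {
      char *metadata = static_cast<char *>(std::malloc(totalSize));
      std::memcpy(metadata, pattern, patternSize);                     // copy the template prefix
      std::memset(metadata + patternSize, 0, totalSize - patternSize); // zero-fill the rest
      return metadata;
    }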
When allocating metadata for a generic class we would copy
any prefix matter from the superclass metadata, if the
superclass metadata's address point was greater than our
address point.
While we may use prefix matter for resilient metadata
in the future, I don't believe just copying bytes like
this will prove useful.
When building Swift on macOS with long tests enabled, the build would fail
with undefined references to these symbols. They were expected to be
provided by swiftCore, against which we link; however, because the symbols
are marked as internal, they were dead-stripped before the dylib was
constructed, leaving the references unresolved. Expose the additional
internal interfaces for now so that we can build the tests.
* [runtime] Clean up symbols in error machinery.
* [runtime] Clean up symbols in Foundation overlay.
* [runtime] Clean up symbols in collections and hashing.
* [runtime] Remove symbol controls from the Linux definition of swift_allocError.
* [tests] Add more stub functions for tests that link directly to the runtime.
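The cleanup above is essentially about which runtime symbols remain visible outside libswiftCore. A minimal sketch of the distinction, with hypothetical names:

    // Hidden: internal to the dylib and eligible for dead stripping, so tests or
    // overlays linking against the runtime cannot resolve it.
    __attribute__((__visibility__("hidden")))
    void swift_illustrativeInternalHelper() {}

    // Default: exported from the dylib and resolvable by external clients.
    __attribute__((__visibility__("default")))
    void swift_illustrativeExportedHelper() {}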
This requires the witness table accessor function to gain two new parameters: a
pointer to an array of witness tables and their count. These are then passed down
to the instantiation function which reads them out of the array and writes them
into the newly-allocated witness table.
We use the count to assert that the number of conditional witness tables passed
in is what the protocol conformance expects, which is especially useful while
the feature is still experimental: it is a compiler/runtime bug if an incorrect
number is passed.
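A sketch of the shape of that change, with made-up types standing in for the runtime's real conformance and witness table structures:

    #include <cassert>
    #include <cstddef>
    #include <cstring>
    #include <vector>

    struct IllustrativeConformance {
      std::size_t numRequirements;             // ordinary witness slots
      std::size_t numConditionalRequirements;  // extra tables expected from the caller
    };

    // The accessor now also receives the conditional witness tables and their count,
    // and instantiation copies them into the newly allocated table.
    std::vector<const void *>
    instantiateWitnessTable(const IllustrativeConformance &conformance,
                            const void *const *conditionalTables,
                            std::size_t numConditionalTables) {
      // A mismatch here is a compiler/runtime bug, so assert loudly.
      assert(numConditionalTables == conformance.numConditionalRequirements);
      std::vector<const void *> table(numConditionalTables + conformance.numRequirements);
      if (numConditionalTables > 0)
        std::memcpy(table.data(), conditionalTables,
                    numConditionalTables * sizeof(const void *));
      return table;
    }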
Alter the value metadata layout to use an absolute pointer for the
nominal type descriptor rather than an offset relative to the complete
type metadata. Although this is slightly less efficient in terms of load
time, it is more portable across environments. For example, PE/COFF does
not provide a cross-section relative-offset relocation, and other
platform ports are unable to provide a 64-bit relative-offset encoding.
Given that the value witness table reference in the value metadata is
currently an absolute pointer, this page is most likely going to be
dirtied by the loader.
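The trade-off, sketched with illustrative types; offsets here are relative to the field that holds them:

    #include <cstdint>

    struct NominalTypeDescriptor;   // stand-in for the real descriptor type

    struct ValueMetadataWithAbsolutePointer {
      const NominalTypeDescriptor *Description;  // needs a load-time relocation,
                                                 // so the loader dirties this page
    };

    struct ValueMetadataWithRelativeOffset {
      std::int32_t DescriptionOffset;            // no load-time fixup, but requires a
                                                 // cross-section relative relocation
      const NominalTypeDescriptor *description() const {
        auto base = reinterpret_cast<std::intptr_t>(&DescriptionOffset);
        return reinterpret_cast<const NominalTypeDescriptor *>(base + DescriptionOffset);
      }
    };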
StructMetadata is now two words, now that the parent pointer is gone.
We need to add our own storage for the generic arguments, instead
of smashing the location where the parent pointer used to be.
My previous fix was incorrect because it would set the template
size to two words (thus we no longer read past the end of the
template) but in turn we would write past the end of the
instantiated metadata.
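A sketch of the corrected sizing, assuming one word per generic argument; the two-word template figure comes from the message above, everything else is illustrative:

    #include <cstddef>

    static std::size_t instantiatedStructMetadataSize(std::size_t numGenericArguments) {
      const std::size_t templateWords = 2;   // StructMetadata proper, parent pointer gone
      // The generic arguments get their own trailing storage instead of reusing the
      // slot where the parent pointer used to live.
      return (templateWords + numGenericArguments) * sizeof(void *);
    }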
We no longer need this for anything, so remove it from metadata
altogether. This simplifies logic for emitting type metadata and
makes type metadata smaller.
We still pass the parent metadata pointer to type constructors;
removing that is a separate change.
On architectures where the calling convention uses the same register for
arguments and return values, this allows the argument register to stay
live across the calls.
We use LLVM's 'returned' attribute on the parameter to facilitate this.
We used to perform this optimization via an optimization pass. This was ripped
out some time ago around commit 955e4ed652.
By using LLVM's 'returned' attribute on swift_*retain, we get the same
optimization from the LLVM backend.
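A conceptual sketch in C++ of what 'returned' buys; the names here are illustrative, and the attribute itself is attached at the LLVM IR level:

    struct HeapObject;   // opaque stand-in for the runtime's heap object type

    // The real swift_retain bumps the reference count; what matters here is that it
    // returns its argument, and IRGen marks the parameter 'returned', roughly:
    //   declare %swift.refcounted* @swift_retain(%swift.refcounted* returned)
    static HeapObject *swift_retain_sketch(HeapObject *object) {
      return object;
    }

    static HeapObject *useAfterRetain(HeapObject *object) {
      // Knowing the return value is the argument, the backend can keep the value in
      // the shared argument/return register across the call instead of saving and
      // restoring it around the call.
      object = swift_retain_sketch(object);
      return object;
    }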