* IRGen: EmptyBoxType's representation cannot be nil because it conflicts with the extra-inhabitant assumption in indirect enums
We map nil to the .None case of Optional, so use a singleton object instead.
SR-5148
rdar://32618580
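A minimal sketch of the singleton approach, with hypothetical names and layout
(the real runtime object and entry point may differ):

    // Hypothetical stand-in for the runtime's heap-object header.
    struct HeapObject { void *metadata; long refcount; };

    // Instead of representing an empty box as nil (which collides with
    // Optional's extra-inhabitant encoding in indirect enums), every
    // request for an empty box returns the same immortal object.
    HeapObject *allocEmptyBoxSketch() {
      static HeapObject emptyBox = { nullptr, /*immortal*/ 1 };
      return &emptyBox;
    }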
* IRGen: Change c-o-w existential implementation functions
* initializeBufferWith(Copy|Take)OfBuffer value witness implementation for cow existentials
Implement and use the initializeBufferWith(Copy|Take)OfBuffer value witnesses for
copy-on-write existentials.
Previously we used a free-standing function, but the overhead of doing so was
noticeable (~20-30%) on micro benchmarks.
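A rough sketch of what the copy witness amounts to for a copy-on-write
existential (types and names are hypothetical; the real witnesses are emitted
by IRGen):

    struct HeapObject { void *metadata; long refcount; };
    struct ExistentialBufferSketch { HeapObject *box; };

    // Sketch only: a real retain is atomic.
    HeapObject *retainSketch(HeapObject *o) { ++o->refcount; return o; }

    // For a c-o-w existential the buffer just holds a reference to a shared
    // box, so "copy" is a retain plus a pointer store; no per-type copy runs.
    void initializeBufferWithCopyOfBufferSketch(ExistentialBufferSketch *dest,
                                                ExistentialBufferSketch *src) {
      dest->box = retainSketch(src->box);
    }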
* IRGen: Use common getCopyOutOfLineBoxPointerFunction
* Add a runtime function to conditionally make a box unique
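A sketch of what "conditionally make unique" means (hypothetical box layout;
the real function also handles generic layouts and alignment):

    struct BoxSketch { long refcount; int value; };

    // Returns a box that is safe to mutate: the original if it is uniquely
    // referenced, otherwise a freshly allocated copy (copy-on-write).
    BoxSketch *makeBoxUniqueSketch(BoxSketch *box) {
      if (box->refcount == 1)
        return box;                         // already unique: mutate in place
      BoxSketch *copy = new BoxSketch{1, box->value};  // slow path: copy out
      --box->refcount;                      // drop our reference (sketch only;
                                            // a real release is atomic)
      return copy;
    }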
* Fix compilation of HeapObject.cpp on i386
* Fix IRGen test case
* Fix test case for i386
The high half of the address space is used by the kernel on these architectures, and it would be useful for us to be able to pack small values into places where the ABI otherwise requires a refcountable pointer, such as closures and perhaps refcounted existentials.
Use the generic type lowering algorithm described in
"docs/CallingConvention.rst#physical-lowering" to map from IRGen's explosion
type to the type expected by the ABI.
Change IRGen to use the Swift calling convention (swiftcc) for native Swift
functions.
Use the 'swiftself' attribute on self parameters and on closure contexts.
Use the 'swifterror' attribute for Swift error parameters.
Change functions in the runtime that are called as native Swift functions to use
the Swift calling convention.
rdar://19978563
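For reference, the same conventions can be spelled at the Clang level (an
illustrative declaration only; IRGen attaches the LLVM swiftcc, swiftself and
swifterror attributes directly):

    struct SwiftError;

    // 'swiftcall' selects swiftcc; 'swift_context' lowers to LLVM's
    // swiftself; 'swift_error_result' lowers to swifterror.
    __attribute__((swiftcall))
    long divideSketch(long numerator, long denominator,
                      void *ctx __attribute__((swift_context)),
                      SwiftError **error __attribute__((swift_error_result)));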
IIRC we never had any evidence that the performance impact of a
separate allocator here was actually measurable, and it does come
at a significant fragmentation cost because every single cache
allocates at least a page of memory. Sharing that with the system
allocator makes more sense, even if these allocations are typically
permanent.
This also means that standard memory-debugging tools will actually
find problems with out-of-bounds accesses to metadata.
Use C++11's std::this_thread::yield to abstract away platform-specific
differences in yielding execution. This makes the code more portable to
non-POSIX environments (e.g. Windows). NFC.
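The pattern, as a minimal sketch (hypothetical helper; the runtime's actual
spin sites differ):

    #include <atomic>
    #include <thread>

    // Portable spin-wait: give up the rest of the time slice instead of
    // calling a platform-specific primitive such as sched_yield().
    static void waitUntilSetSketch(std::atomic<bool> &flag) {
      while (!flag.load(std::memory_order_acquire))
        std::this_thread::yield();
    }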
This patch is for libswiftCore.lib, linking against the Visual Studio 2015 libraries. Clang with the option -fms-extensions is used to build this port.
This is the approved subpatch of a large patch.
Part 1: Generic SIL boxes always have instantiated metadata with kind
HeapGenericLocalVariable, which includes a metadata pointer for the
boxed type.
Part 2, after this, is to provide some kind of outgoing pointer map for
fixed heap boxes, whose metadata may be shared among different but
destructor-compatible types.
rdar://problem/26240419
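A layout sketch for the Part 1 metadata (field names approximate; see the
runtime's box metadata definitions for the real layout):

    struct Metadata;

    // Metadata for a generic heap box: kind HeapGenericLocalVariable plus a
    // pointer to the boxed type, so destruction can proceed generically.
    struct GenericBoxHeapMetadataSketch {
      unsigned long kind;         // = HeapGenericLocalVariable
      unsigned offset;            // offset of the boxed value inside the box
      const Metadata *boxedType;  // metadata pointer for the boxed type
    };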
This is a purely mechanical change replacing the attributes with their reserved
spellings. Compilers must not error when they encounter a reserved spelling
for an attribute that they do not support.
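An illustration of why the reserved spelling matters (hypothetical example):

    // A user macro like this would break the plain attribute spelling:
    #define noinline do_not_inline

    // The reserved (double-underscore) spelling cannot collide with user
    // macros, and a compiler that lacks the attribute must not hard-error:
    __attribute__((__noinline__)) void slowPathSketch();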
It has been fairly easy to cause the runtime to crash on multithreaded read-read access to weak references (e.g. https://bugs.swift.org/browse/SR-192). Although weak references are value types, they can get elevated to the heap in multiple ways, such as when captured by a closure or when used as a property in a class instance. In such cases, race conditions involving weak references could cause the runtime to perform multiple decrement operations on the unowned reference count for a single increment; this eventually causes early deallocation, leading to use-after-free, modify-after-free and double-free errors.
This commit changes the weak reference operations to use a spinlock rather than assuming thread-exclusive access, when appropriate.
With this change, the crasher discussed in SR-192 no longer encounters crashes due to modify-after-free or double-free errors.
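A condensed sketch of the locking scheme (hypothetical layout; the real
implementation packs the lock into the weak reference itself):

    #include <atomic>
    #include <thread>

    struct WeakReferenceSketch {
      std::atomic<bool> locked{false};
      void *object = nullptr;

      // Serialize all readers and writers of the reference with a tiny
      // spinlock rather than assuming thread-exclusive access.
      void *loadStrongSketch() {
        while (locked.exchange(true, std::memory_order_acquire))
          std::this_thread::yield();       // spin until we own the lock
        void *result = object;             // safe: no racing count updates
        locked.store(false, std::memory_order_release);
        return result;
      }
    };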
It's to be used by code produced by the ReleaseDevirtualizer.
As the function is only used for non-escaping objects, the deallocating bit is set non-atomically.
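A sketch of the non-atomic fast path (hypothetical layout and names):

    struct HeapObjectSketch { unsigned long refcountAndFlags; };
    const unsigned long DeallocatingBitSketch = 1;

    // The object is known not to escape the current thread, so a plain
    // read-modify-write suffices; no atomic RMW (lock prefix) is needed.
    void setDeallocatingNonAtomicSketch(HeapObjectSketch *object) {
      object->refcountAndFlags |= DeallocatingBitSketch;
    }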
Teach swift_deallocPartialClassInstance how to deal with classes that
have pure Objective-C classes in their hierarchy. In such cases, we
need to make sure a few things happen:
1) We deallocate via objc_release rather than
swift_deallocClassInstance.
2) We only attempt to find and execute ivar destroyers for
Swift-defined classes in the hierarchy.
3) When we hit the most-derived pure Objective-C class, make sure that we
only execute the dealloc of that class and not any of the subclasses
(which would end up trying to destroy ivars again).
Fixes rdar://problem/25023544.
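The three points above, condensed into a sketch (helper types and names are
hypothetical; swift_deallocPartialClassInstance holds the real logic):

    struct Object;
    struct ClassDescSketch {
      ClassDescSketch *superclass;
      bool isSwiftClass;
      void (*destroyIVars)(Object *);  // ivar destroyer, if any
      void (*dealloc)(Object *);       // exactly this class's dealloc
    };

    ClassDescSketch *getClassSketch(Object *obj);  // hypothetical accessor

    void deallocPartialSketch(Object *obj, ClassDescSketch *mostDerivedObjC) {
      // (2) Run ivar destroyers only for Swift-defined classes.
      for (ClassDescSketch *c = getClassSketch(obj); c != mostDerivedObjC;
           c = c->superclass)
        if (c->isSwiftClass && c->destroyIVars)
          c->destroyIVars(obj);
      // (3) Invoke only the most-derived pure Objective-C dealloc, never a
      // subclass's, so ivars are not destroyed twice; (1) the object is
      // ultimately freed via objc_release, not swift_deallocClassInstance.
      mostDerivedObjC->dealloc(obj);
    }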
This makes sure that runtime functions use proper calling conventions, get the required visibility, etc.
We annotate the runtime functions that are invoked most often from Swift code.
- Almost all variants of retain/release functions are annotated to use the new calling convention.
- Some popular non-reference counting functions like swift_getGenericMetadata or swift_dynamicCast are annotated as well.
The set of runtime functions annotated to use the new calling convention should exactly match the definitions in RuntimeFunctions.def!
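The shape of an annotated entry point, approximately (the macros here are
illustrative stand-ins; RuntimeFunctions.def is the authoritative list):

    #define SWIFT_CC_SKETCH __attribute__((swiftcall))
    #define SWIFT_EXPORT_SKETCH __attribute__((__visibility__("default")))

    struct HeapObject;

    // Signature abbreviated; the point is that the calling convention and
    // visibility are stated on the declaration itself.
    SWIFT_EXPORT_SKETCH SWIFT_CC_SKETCH
    HeapObject *swift_retain(HeapObject *object);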
and MetadataCache and fix a re-entrancy bug in metadata
instantiation.
The re-entrancy bug is that we were holding the instantiation
lock of a metadata cache while instantiating metadata. Doing
so prevents us from creating a different instantiation if
it's needed by the outer instantiation. This is already
possible, but it's much more likely in a patch I'm working on
to only store the minimal metadata for generic parameters
in generic types.
The same bug could also show up as a deadlock between threads,
so a recursive lock would not be a good fix. Instead, we add
a condition variable to the metadata cache. When fetching
metadata, we look for a node in the concurrent map, eagerly
creating an empty one if none currently exists. If lookup
finds an empty node, we wait on the condition variable for
the node to become populated. If lookup succeeds in creating
an empty node, we instantiate the metadata, grab the lock,
populate the node, and notify the condition variable.
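Condensed into standard C++, the protocol looks roughly like this (a sketch
only: std::map plus one mutex stand in for the lock-free ConcurrentMap):

    #include <condition_variable>
    #include <map>
    #include <mutex>

    struct MetadataCacheSketch {
      std::mutex lock;
      std::condition_variable filled;
      std::map<int, void *> entries;  // value == nullptr means "empty node"

      void *getOrInstantiate(int key, void *(*instantiate)(int)) {
        std::unique_lock<std::mutex> guard(lock);
        auto inserted = entries.emplace(key, nullptr);
        if (!inserted.second) {
          // Another thread owns instantiation: wait for the node to fill.
          filled.wait(guard, [&] { return entries[key] != nullptr; });
          return entries[key];
        }
        // We created the empty node, so we instantiate -- outside the lock,
        // so that recursive instantiations can re-enter without deadlock.
        guard.unlock();
        void *metadata = instantiate(key);
        guard.lock();
        entries[key] = metadata;
        filled.notify_all();
        return metadata;
      }
    };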
Safely creating an empty node without any metadata present
requires us to move the key data into the map entry. That,
plus a few other invariant shifts, makes it sensible to
give the user of ConcurrentMap more control over the
allocation of map nodes and the layout of keys. That, in
turn, allows us to change the contract so that keys can be
more complex than just a hash code. Instead of incrementing
hash codes and re-performing the lookup, we just insist
that lookup keys be totally ordered.
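A sketch of the totally-ordered key contract (hypothetical key type):

    #include <cstring>

    struct KeySketch {
      const char *data;
      unsigned size;

      // A three-way comparison drives lookup directly; there is no
      // hash-collision re-probing.
      int compare(const KeySketch &other) const {
        if (size != other.size)
          return size < other.size ? -1 : 1;   // order by length first
        return std::memcmp(data, other.data, size);
      }
    };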
For now, I've kept the uniform use of hash codes as a
component of the key for MetadataCaches. However, hash
codes aren't really profitable for small keys, and we should
probably use direct comparisons instead.
We should also switch the safer metadata caches (i.e. the
ones that don't involve calling an arbitrary instantiation
function, like MetatypeMetadataCache) over to directly use
ConcurrentMap.
LLDB's requirement that we maintain a linked list of metadata
cache instantiations with a known layout means we can't yet
remove the CacheEntry's redundant copy of the generic
arguments.
Deallocation of partially-initialized instances of @objc classes
isn't supported yet, but let's avoid a regression compared to
Swift 2.1 behavior by fixing the one case that used to work,
where the early return occurs after all stored properties have
been initialized.
Fixes SR-704.
...and explicitly mark symbols we export, either for use by executables or for runtime-stdlib interaction. Until the stdlib supports resilience we have to allow programs to link to these SPI symbols.
When I originally added this, I did not understand well enough how dtrace
worked. It turns out we do not need any of this runtime instrumentation; we
can just instrument the calls dynamically.
This commit rips out all of the static calls and replaces the old
runtime_statistics dtrace file with a new one that does the dynamic
instrumentation for you. To use it, run:
sudo dtrace -s ./swift/utils/runtime_statistics.d -c "$CMD"
The statistics are currently focused on dynamic retain/release counts.