Introduce a new runtime entry point,
`swift_objc_swift3ImplicitObjCEntrypoint`, which is called from any
Objective-C method that was generated due to `@objc` inference rules
that were removed by SE-0160. Aside from giving users a central place to
set a breakpoint to catch when this occurs, this entry point provides
logging that can be enabled by setting the environment variable
SWIFT_DEBUG_IMPLICIT_OBJC_ENTRYPOINT:
SWIFT_DEBUG_IMPLICIT_OBJC_ENTRYPOINT=0 (default): do not log
SWIFT_DEBUG_IMPLICIT_OBJC_ENTRYPOINT=1: log calls to these entry points
SWIFT_DEBUG_IMPLICIT_OBJC_ENTRYPOINT=2: log calls with a backtrace
SWIFT_DEBUG_IMPLICIT_OBJC_ENTRYPOINT=3: log calls with a backtrace and abort the process
The log messages look something like:
***Swift runtime: entrypoint -[t.MyClass foo] generated by implicit @objc inference is deprecated and will be removed in Swift 4
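For illustration, a hedged sketch (class and method names are hypothetical) of code that, compiled in Swift 3 compatibility mode, routes through such a generated entry point:

    import Foundation

    // Under the inference rules removed by SE-0160, foo() was implicitly
    // @objc because MyClass inherits from NSObject.
    class MyClass: NSObject {
        func foo() {}  // no explicit @objc
    }

    // Sending the message dynamically goes through the generated thunk,
    // which now calls swift_objc_swift3ImplicitObjCEntrypoint first.
    let object = MyClass()
    _ = object.perform(Selector("foo"))

Running with SWIFT_DEBUG_IMPLICIT_OBJC_ENTRYPOINT=2 set in the environment would then log the deprecation message together with a backtrace.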
* IRGen: Change copy-on-write existential implementation functions
* initializeBufferWith(Copy|Take)OfBuffer value witness implementations for copy-on-write existentials
Implement and use the initializeBufferWith(Copy|Take)OfBuffer value witnesses
for copy-on-write existentials.
Previously we used a free-standing function, but its overhead was noticeable
(~20-30%) on microbenchmarks.
* IRGen: Use common getCopyOutOfLineBoxPointerFunction
* Add a runtime function to conditionally make a box unique (see the sketch after this list)
* Fix compilation of HeapObject.cpp on i386
* Fix IRGen test case
* Fix test case for i386
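As a rough analogy, here is a minimal Swift-level sketch of the copy-on-write pattern that conditionally making a box unique implements; this illustrates the technique, not the runtime's actual C++ code, and the type names are hypothetical:

    // Copy the heap box only when it is shared, then mutate in place.
    // The runtime function performs the analogous uniqueness check on an
    // existential's heap-allocated buffer.
    final class Box {
        var value: Int
        init(_ value: Int) { self.value = value }
    }

    struct CowValue {
        private var box = Box(0)
        var value: Int {
            get { return box.value }
            set {
                if !isKnownUniquelyReferenced(&box) {
                    box = Box(box.value)  // shared: make a unique copy first
                }
                box.value = newValue      // now safe to write in place
            }
        }
    }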
Adds the runtime implementation for copy-on-write existentials. This support is
enabled if SWIFT_RUNTIME_ENABLE_COW_EXISTENTIALS is defined. The focus is on
correctness, not performance, for now.
Don't use the allocate/deallocate/projectBuffer witnesses for globals in
copy-on-write existential mode.
Use the SWIFT_RUNTIME_ENABLE_COW_EXISTENTIALS configuration to set the default
for SILOptions.
This includes an IRGen fix to use the right projection in
emitMetatypeOfOpaqueExistential if SWIFT_RUNTIME_ENABLE_COW_EXISTENTIALS is set.
Use unknownRetain instead of native retain in dynamicCastToExistential.
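For context, a hedged illustration of the case this affects: a value too large for an opaque existential's three-word inline buffer is boxed on the heap, and that box is what the copy-on-write scheme shares across copies of the existential (type names below are hypothetical):

    protocol Shape { var area: Double { get } }

    // Four stored words, so a Shape existential holding a Rect cannot use
    // the inline buffer and boxes the value on the heap instead.
    struct Rect: Shape {
        var x, y, width, height: Double
        var area: Double { return width * height }
    }

    let shape: Shape = Rect(x: 0, y: 0, width: 2, height: 3)
    let shared: Shape = shape  // copies the box pointer, not the payload
    print(shared.area)         // reads project the boxed value without copying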
The demangler was previously part of swiftBasic. It does not depend on LLVM
(except for some header-only utilities like StringRef), so putting it into its
own library ensures that no LLVM code is linked into clients that use the
demangler library.
This change also contains other refactoring, such as moving demangler code into
different files. This makes it easier to remove the old demangler from the
runtime library when we switch to the new symbol mangling.
Also in this commit: remove some unused API functions from the demangler
Context.
Fixes rdar://problem/30503344
...and lower it in IRGen to a call to __tsan_write1 in compiler-rt. This is
preparatory work for a later patch that will add an experimental option to
treat Swift inout accesses as TSan writes.
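A hedged Swift-level illustration (function and variable names are hypothetical) of the kind of access the experimental option would report:

    // Each write through the inout parameter is the sort of access that the
    // planned option would instrument as a TSan write.
    func increment(_ counter: inout Int) {
        counter += 1
    }

    var total = 0
    increment(&total)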
Use the generic type lowering algorithm described in
"docs/CallingConvention.rst#physical-lowering" to map from IRGen's explosion
type to the type expected by the ABI.
Change IRGen to use the Swift calling convention (swiftcc) for native Swift
functions.
Use the 'swiftself' attribute on self parameters and closure contexts.
Use the 'swifterror' attribute for Swift error parameters.
Change runtime functions that are called as native Swift functions to use the
Swift calling convention.
rdar://19978563
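As a sketch of what this affects at the source level (hypothetical declarations; the attributes apply to the lowered LLVM IR, not to Swift syntax):

    class Counter {
        var value = 0
        // Under swiftcc, 'self' is passed in the dedicated swiftself register.
        func bump() { value += 1 }
    }

    enum ParseError: Error { case malformed }

    // A throwing function carries its error in the swifterror parameter, so
    // callers can test for an error cheaply instead of unwinding.
    func parse(_ text: String) throws -> Int {
        guard let number = Int(text) else { throw ParseError.malformed }
        return number
    }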
This seems to more than fix a performance regression that we
detected on a metadata-allocation microbenchmark.
A few months ago, I improved the metadata cache representation
and changed the metadata allocation scheme to primarily use malloc.
Previously, we'd been using malloc in the concurrent tree data
structure but a per-cache slab allocator for the metadata itself.
At the time, I was concerned about the overhead of per-cache
allocators, since many metadata patterns see only a small number
of instantiations. That's still an important factor, so in the
new scheme we're using a global allocator; but instead of using
malloc for individual allocations, we're using a slab allocator,
which should have better peak, single-thread performance, at the
cost of not easily supporting deallocation. Deallocation is
only used for metadata when there's contention on the cache, and
specifically only when there's contention for the same key, so
leaking a little isn't the worst thing in the world.
The initial slab is a 64K globally-allocated buffer.
Successive slabs are 16K and allocated with malloc.
rdar://28189496
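For a sense of what exercises this path, each distinct generic instantiation below asks the runtime for a metadata record, which the new scheme carves out of the shared slabs rather than allocating with individual malloc calls (the Pair type is hypothetical):

    struct Pair<T, U> {
        var first: T
        var second: U
    }

    _ = Pair(first: 1, second: "one")   // instantiates Pair<Int, String> metadata
    _ = Pair(first: true, second: 2.0)  // instantiates Pair<Bool, Double> metadata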