Commit Graph

140 Commits

Author SHA1 Message Date
Arnold Schwaighofer
ca63326e1b Delete unused existential value witnesses from the old existential implementation

And remove the SWIFT_RUNTIME_ENABLE_COW_EXISTENTIALS flag.
2017-06-02 14:34:41 -07:00
Joe Groff
439d5d6e41 Runtime: swift_getExistentialTypeMetadata should trust the compiler's ordering of protocols in compositions.
The compiler pre-canonicalizes protocol composition types by minimizing constraints and sorting the remaining protocols by module + name, which ought to be globally stable within a program (assuming there aren't multiple modules with the same name, in which case we'll have bigger problems…). The compiler also statically lays out existential types according to its conception of the canonical composition ordering, so the runtime's own attempt to form a stable ordering produced inconsistencies between runtime and compile-time layout, causing crashes like SR-4477.
2017-05-09 13:32:02 -07:00
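As a rough illustration of the canonicalization this commit relies on, here is a minimal C++ sketch; ProtocolDescriptor and canonicalizeComposition are hypothetical stand-ins, not the runtime's types. The point is that a composition's protocols are sorted by module name, then protocol name, so the order is stable program-wide and the runtime can trust it.

```cpp
// Hypothetical stand-ins for the compiler-side canonicalization: sort the
// protocols of a composition by module name, then protocol name, so the
// resulting order is stable across the whole program.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct ProtocolDescriptor {   // illustrative, not the runtime's type
  std::string moduleName;
  std::string name;
};

static void canonicalizeComposition(std::vector<ProtocolDescriptor> &protos) {
  std::sort(protos.begin(), protos.end(),
            [](const ProtocolDescriptor &a, const ProtocolDescriptor &b) {
              if (a.moduleName != b.moduleName)
                return a.moduleName < b.moduleName;
              return a.name < b.name;
            });
}

int main() {
  std::vector<ProtocolDescriptor> protos = {
      {"Swift", "Hashable"}, {"Foundation", "NSCoding"}, {"Swift", "Equatable"}};
  canonicalizeComposition(protos);  // done once by the compiler; the runtime
  for (auto &p : protos)            // can then trust the incoming order
    std::cout << p.moduleName << "." << p.name << "\n";
}
```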
Erik Eckstein
c4002a9398 Use the old mangling for generic ObjC runtime names, which are generated at runtime.
To be backward compatible with existing archives created by NSKeyedArchiver for generic classes.
2017-05-03 10:52:54 -07:00
Slava Pestov
b5721e8d8e AST: Remove AnyObject protocol 2017-05-02 19:45:00 -07:00
Slava Pestov
58f2f35313 Runtime: Add superclass constraint to existential type metadata 2017-04-25 01:32:44 -07:00
John McCall
7a4c761426 Merge pull request #8821 from rjmccall/dynamic-enforcement-vol-1
Basic dynamic enforcement of exclusivity
2017-04-18 13:57:54 -04:00
John McCall
2c40b39f26 Move runtime functions for casting into their own header. 2017-04-17 17:16:13 -04:00
Arnold Schwaighofer
1d2e46bbec Runtime: Add support for Builtin.Int512
rdar://31540879
2017-04-17 09:58:42 -07:00
Slava Pestov
a5a40c7fc7 Runtime/IRGen: Preliminary plumbing for subclass existentials 2017-04-13 21:29:57 -07:00
Arnold Schwaighofer
67e3d27fd9 Copy-on-write existential performance work (#8369)
* IRGen: Change c-o-w existential implementation functions

* initializeBufferWith(Copy|Take)OfBuffer value witness implementation for COW existentials

Implement and use the initializeBufferWith(Copy|Take)OfBuffer value witnesses for
copy-on-write existentials.

Previously we used a free-standing function, but its overhead was noticeable
(~20-30%) on micro-benchmarks.

* IRGen: Use common getCopyOutOfLineBoxPointerFunction

* Add a runtime function to conditionally make a box unique

* Fix compilation of HeapObject.cpp on i386

* Fix IRGen test case

* Fix test case for i386
2017-03-27 20:51:02 -07:00
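A rough sketch of why a dedicated initializeBufferWithCopyOfBuffer witness pays off for copy-on-write existentials: copying a buffer only retains the shared box and copies the pointer, never the boxed value. Box, Buffer, and retain below are illustrative stand-ins, not the actual runtime types.

```cpp
// Illustrative stand-ins, not runtime types: with a copy-on-write
// representation, copying a buffer only retains the shared box and copies
// the pointer; the boxed value itself is never touched.
#include <atomic>
#include <cstdio>

struct Box {
  std::atomic<unsigned> refCount{1};
  // ... boxed value storage would follow this header ...
};

struct Buffer { Box *box; };   // out-of-line existential representation

static Box *retain(Box *b) {
  b->refCount.fetch_add(1, std::memory_order_relaxed);
  return b;
}

// The copy witness reduces to a retain plus a pointer copy.
static void initializeBufferWithCopyOfBuffer(Buffer *dest, Buffer *src) {
  dest->box = retain(src->box);
}

int main() {
  Box shared;
  Buffer a{&shared}, b{nullptr};
  initializeBufferWithCopyOfBuffer(&b, &a);
  std::printf("refCount = %u\n", shared.refCount.load());  // prints 2
}
```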
Arnold Schwaighofer
709226258c Runtime: Allow taking out of inline opaque existentials
Only values stored in the out-of-line boxed existential representation can be
shared.
2017-03-21 11:37:23 -07:00
Arnold Schwaighofer
d5cbb0bd62 Runtime changes for the copy-on-write existential implementation
Adds the runtime implementation for copy-on-write existentials. This support is
enabled if SWIFT_RUNTIME_ENABLE_COW_EXISTENTIALS is defined. The focus is on
correctness, not yet on performance.

Don't use the allocate/deallocate/projectBuffer witnesses for globals in
copy-on-write existential mode.

Use SWIFT_RUNTIME_ENABLE_COW_EXISTENTIALS configuration to set the default for
SILOptions.

This includes an IRGen fix to use the right projection in
emitMetatypeOfOpaqueExistential if SWIFT_RUNTIME_ENABLE_COW_EXISTENTIALS is set.

Use unknownRetain instead of native retain in dynamicCastToExistential.
2017-03-15 14:54:55 -07:00
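The "conditionally make a box unique" step from the commit above, sketched under the same simplified Box type: before mutating, check whether the box is uniquely referenced and clone it only when it is shared. isUniquelyReferenced, cloneBox, and makeBoxUnique are hypothetical names, not the runtime's entry points.

```cpp
// Hypothetical names throughout: before mutating a boxed value, make the
// box unique by cloning it only when it is shared with other references.
#include <atomic>

struct Box {
  std::atomic<unsigned> refCount{1};
  int value = 0;   // stand-in for the boxed payload
};

static bool isUniquelyReferenced(Box *b) {
  return b->refCount.load(std::memory_order_acquire) == 1;
}

static Box *cloneBox(Box *b) {
  Box *copy = new Box();
  copy->value = b->value;
  // Drop our reference to the original; it stays alive for the other owners.
  b->refCount.fetch_sub(1, std::memory_order_acq_rel);
  return copy;
}

// Returns a box that is safe to mutate through this reference.
static Box *makeBoxUnique(Box *b) {
  return isUniquelyReferenced(b) ? b : cloneBox(b);
}

int main() {
  Box *a = new Box();
  a->refCount.fetch_add(1);        // simulate a second reference
  Box *mine = makeBoxUnique(a);    // shared, so this clones
  mine->value = 42;                // mutation no longer affects 'a'
}
```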
Erik Eckstein
5e80555c9b demangler: put the demangler into a separate library
Previously it was part of swiftBasic.

The demangler library does not depend on LLVM (except for some header-only utilities like StringRef). Putting it into its own library ensures that no LLVM code is linked into clients that use the demangler.

This change also contains other refactoring, like moving demangler code into different files. This makes it easier to remove the old demangler from the runtime library when we switch to the new symbol mangling.

Also in this commit: remove some unused API functions from the demangler Context.

fixes rdar://problem/30503344
2017-03-09 13:42:43 -08:00
practicalswift
99ceb78c7d Merge pull request #7723 from practicalswift/gardening-20170223
[gardening] Shell fixes. Consistent headers. a-vs-an typos. Python fixes. Unused variables and methods.
2017-02-27 14:05:05 +01:00
Erik Eckstein
7d7dc5aaac Demangler: Use a bump-pointer allocator for node allocation.
This makes the demangler about 10 times faster.
It also changes the lifetimes of nodes. Previously nodes were reference-counted.
Now the returned demangled node tree is owned by the Demangler class, and its lifetime ends with the lifetime of the Demangler.

Therefore the old (and already deprecated) global functions demangleSymbolAsNode and demangleTypeAsNode are no longer available.

Another change is that demangling for reflection now supports only the new
mangling (which should be no problem, because we generate only new mangled
names for reflection).
2017-02-24 19:04:13 -08:00
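A minimal bump-pointer allocator in the spirit of this change: nodes are carved sequentially out of slabs and all freed together when the allocator dies, which is what ties node lifetime to the Demangler's. Slab size and the assumption that every node fits in one slab are illustrative, not taken from the actual code.

```cpp
// A minimal bump-pointer allocator: allocation is a pointer bump, and all
// nodes are freed at once when the allocator is destroyed.
// Assumes every allocation fits in one slab (demangler nodes are small).
#include <cstddef>
#include <cstdlib>
#include <vector>

class BumpAllocator {
  static constexpr size_t SlabSize = 4096;   // illustrative size
  std::vector<char *> slabs;
  char *cur = nullptr;
  size_t remaining = 0;

public:
  void *allocate(size_t size) {
    // Round up so every bump keeps the pointer maximally aligned.
    constexpr size_t align = alignof(std::max_align_t);
    size = (size + align - 1) & ~(align - 1);
    if (size > remaining) {
      cur = static_cast<char *>(std::malloc(SlabSize));
      slabs.push_back(cur);
      remaining = SlabSize;
    }
    void *result = cur;
    cur += size;
    remaining -= size;
    return result;
  }

  // No per-node deallocation: everything dies with the allocator.
  ~BumpAllocator() {
    for (char *s : slabs)
      std::free(s);
  }
};

int main() {
  BumpAllocator alloc;               // e.g. owned by a Demangler
  int *node = static_cast<int *>(alloc.allocate(sizeof(int)));
  *node = 1;
}  // all nodes freed here at once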
practicalswift
f062a84185 [gardening] Add end-of-namespace comment 2017-02-24 09:38:00 +01:00
Erik Eckstein
2d127e4192 Reinstate "Use the new mangling for reflection."
It also uses the new mangling for type names in metadata (except for top-level non-generic classes).
lldb now supports the new mangled metadata type names.

This reinstates commit 21ba292943.
2017-02-15 09:47:22 -08:00
John McCall
038303b1b1 Switch MetadataCache to use a global slab allocator.
This seems to more than fix a performance regression that we
detected on a metadata-allocation microbenchmark.

A few months ago, I improved the metadata cache representation
and changed the metadata allocation scheme to primarily use malloc.
Previously, we'd been using malloc in the concurrent tree data
structure but a per-cache slab allocator for the metadata itself.
At the time, I was concerned about the overhead of per-cache
allocators, since many metadata patterns see only a small number
of instantiations.  That's still an important factor, so in the
new scheme we're using a global allocator; but instead of using
malloc for individual allocations, we're using a slab allocator,
which should have better peak, single-thread performance, at the
cost of not easily supporting deallocation.  Deallocation is
only used for metadata when there's contention on the cache, and
specifically only when there's contention for the same key, so
leaking a little isn't the worst thing in the world.

The initial slab is a 64K globally-allocated buffer.
Successive slabs are 16K and allocated with malloc.

rdar://28189496
2017-02-14 11:10:44 -05:00
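Distilling the scheme described above into a sketch; names, locking, and error handling are simplified, and the real cache coordinates with its concurrent data structure. The shape matches the commit text: a global 64K initial slab, then 16K malloc'd slabs, and no general deallocation.

```cpp
// Simplified sketch: a global 64K initial slab, then 16K malloc'd slabs,
// with no general deallocation.
#include <cstddef>
#include <cstdlib>
#include <mutex>

namespace {
constexpr size_t InitialSlabSize = 64 * 1024;
constexpr size_t SlabSize = 16 * 1024;

alignas(std::max_align_t) char InitialSlab[InitialSlabSize];  // globally allocated

std::mutex AllocLock;
char *Cur = InitialSlab;
size_t Remaining = InitialSlabSize;
} // end anonymous namespace

void *metadataAllocate(size_t size) {
  constexpr size_t align = alignof(std::max_align_t);
  size = (size + align - 1) & ~(align - 1);
  std::lock_guard<std::mutex> guard(AllocLock);
  if (size > Remaining) {            // abandon the tail, start a 16K slab
    Cur = static_cast<char *>(std::malloc(SlabSize));
    Remaining = SlabSize;
  }
  void *result = Cur;
  Cur += size;
  Remaining -= size;
  return result;                     // never freed; small leaks are tolerated
}

int main() {
  void *entry = metadataAllocate(128);
  (void)entry;
}
```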
Hugh Bellamy
d457742a8d Change getKeyIntValueForDump to use intptr_t rather than long 2017-02-12 09:46:36 +07:00
Erik Eckstein
254f36aba5 Revert "Use the new mangling for reflection."
This needs some changes in lldb.
Disabled for now until lldb supports the new mangling.

This reverts commit 21ba292943.
2017-02-08 09:01:51 -08:00
Erik Eckstein
21ba292943 Use the new mangling for reflection.
For this we link the new re-mangler, instead of the old one, into the Swift runtime library.
We also link the new demangler into the Swift runtime library.

It also switches to the new mangling for class names of generic swift classes in the metadata.
Note that for non-generic classes we still have to use the old mangling, because the ObjC runtime in the OS depends on it (it demangles the class names).
Names of generic classes are not handled by the ObjC runtime anyway, so there should be no problem changing the mangling for those.
The reason for this change is that it avoids linking the old re-mangler into the runtime library.
2017-02-07 08:36:21 -08:00
Hugh Bellamy
d030ae4c94 Cleanup uses of SWIFT_RT_ENTRY_VISIBILITY (#7103) 2017-01-31 15:53:14 -08:00
John McCall
2b25701a93 Revert "Switch MetadataCache to use a global slab allocator."
This reverts commit ccbe5fcf73.
2017-01-29 00:17:30 -05:00
John McCall
ccbe5fcf73 Switch MetadataCache to use a global slab allocator.
This seems to more than fix a performance regression that we
detected on a metadata-allocation microbenchmark.

A few months ago, I improved the metadata cache representation
and changed the metadata allocation scheme to primarily use malloc.
Previously, we'd been using malloc in the concurrent tree data
structure but a per-cache slab allocator for the metadata itself.
At the time, I was concerned about the overhead of per-cache
allocators, since many metadata patterns see only a small number
of instantiations.  That's still an important factor, so in the
new scheme we're using a global allocator; but instead of using
malloc for individual allocations, we're using a slab allocator,
which should have better peak, single-thread performance, at the
cost of not easily supporting deallocation.  Deallocation is
only used for metadata when there's contention on the cache, and
specifically only when there's contention for the same key, so
leaking a little isn't the worst thing in the world.

The initial slab is a 64K globally-allocated buffer.
Successive slabs are 16K and allocated with malloc.

rdar://28189496
2017-01-28 02:37:22 -05:00
Hugh Bellamy
818099ecbe Rename swift_unreachable to swift_runtime_unreachable 2017-01-26 15:31:34 +00:00
Hugh Bellamy
a5a4880075 Remove extern "C" uses of SWIFT_RT_ENTRY_VISIBILITY 2017-01-22 18:32:17 +00:00
Hugh Bellamy
63cf2d561e Remove extern "C" from uses of SWIFT_RUNTIME_EXPORT 2017-01-22 18:32:17 +00:00
practicalswift
30a88d38e6 [gardening] Fix recently introduced typos 2017-01-06 21:16:02 +01:00
practicalswift
6d1ae2a39c [gardening] 2016 → 2017 2017-01-06 16:41:22 +01:00
Hugh Bellamy
7b66b579b1 Add various unreachable annotations to the runtime 2016-12-19 15:54:50 +00:00
Hugh Bellamy
61c83ab5eb Change _MSC_VER conditions to _WIN32 in runtime 2016-12-19 15:54:49 +00:00
practicalswift
9d0b2abfc2 [gardening] Normalize end-of-namespace comments 2016-12-17 22:29:07 +01:00
practicalswift
38be6125e5 [gardening] C++ gardening: Terminate namespaces, fix argument names, ...
Changes:
* Terminate all namespaces with the correct closing comment.
* Make sure argument names in comments match the corresponding parameter name.
* Remove redundant get() calls on smart pointers.
* Prefer using "override" or "final" instead of "virtual". Remove "virtual" where appropriate.
2016-12-17 00:32:42 +01:00
Erik Eckstein
9f8b68ae11 Mangling: use macros instead of hard-coded swift symbol names.
This makes it easier to switch between the old and new mangling scheme.
2016-12-02 15:55:30 -08:00
practicalswift
797b80765f [gardening] Use the correct base URL (https://swift.org) in references to the Swift website
Remove all references to the old non-TLS enabled base URL (http://swift.org)
2016-11-20 17:36:03 +01:00
Arnold Schwaighofer
d3ef6f70f2 Runtime: Make swift_getObjCClassMetadata resilient against weakly linked classes
rdar://28203571
2016-09-28 14:45:39 -07:00
Erik Eckstein
c710b04dbf IRGen: Let the stride of a type be at least one, even for zero-sized types like the empty tuple.
This affects the computed stride for fixed-size types in IRGen as well as the stored stride in value witness tables.
The reason is to let comparison and difference operations work for pointers to zero-sized types.
(Currently this is achieved by using Builtin.strideof_nonzero in MemoryLayout.stride, but that requires a std::max(1, stride) operation after loading the stride.)
2016-09-14 13:24:19 -07:00
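The stride rule in miniature, as a hedged sketch rather than the actual IRGen code: stride is the size rounded up to the alignment, clamped to at least 1 so pointer arithmetic and element counting stay well-defined for zero-sized types.

```cpp
// stride = size rounded up to alignment, clamped to at least 1.
#include <algorithm>
#include <cassert>
#include <cstddef>

size_t strideOf(size_t size, size_t alignment) {
  size_t stride = (size + alignment - 1) & ~(alignment - 1);
  return std::max<size_t>(1, stride);
}

int main() {
  assert(strideOf(0, 1) == 1);   // empty tuple: size 0, stride 1
  assert(strideOf(5, 4) == 8);   // size rounded up to the alignment
}
```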
John McCall
4b22a29a7c Change MetadataCache to just use malloc/free.
IIRC we never had any evidence that the performance impact of a
separate allocator here was actually measurable, and it does come
at a significant fragmentation cost because every single cache
allocates at least a page of memory.  Sharing that with the system
allocator makes more sense, even if these allocations are typically
permanent.

This also means that standard memory-debugging tools will actually
find problems with out-of-bounds accesses to metadata.
2016-09-01 14:19:14 -07:00
John McCall
81b27c210b Permit ConcurrentMap to be templated over an allocator and move MetadataCache's allocator into it.

The major functional change here is that MetadataCache will now use
the slab allocator for tree nodes, but I also switched the Hashable
conformances cache to use ConcurrentMap directly instead of a
Lazy<ConcurrentMap<>>.
2016-09-01 14:09:43 -07:00
swift-ci
65f6d9dda5 Merge pull request #4582 from rjmccall/metadata-caching-redux 2016-08-31 19:39:30 -07:00
John McCall
17d72f558b Simplify and optimize the structural type metadata caches to use ConcurrentMap directly.
Previously, these were all using MetadataCache.  MetadataCache is a
more heavyweight structure which acquires a lock before building the
metadata.  This is appropriate if building the metadata is very
expensive or might have semantic side-effects which cannot be rolled
back.  It's also useful when there's a risk of re-entrance, since it
can diagnose such things instead of simply dead-locking or infinitely
recursing.  However, it's necessary for structural cases like tuple
and function types, and instead we can just use ConcurrentMap, which
does a compare-and-swap to publish the constructed metadata and
potentially destroys it if another thread successfully won the race.

This is an optimization which we could not previously attempt.

As part of this, fix tuple metadata uniquing to consider the label
string correctly.  This exposes a bug where the runtime demangling
of tuple metadata nodes doesn't preserve labels; fix this as well.
2016-08-31 18:53:54 -07:00
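A bare-bones version of the compare-and-swap publication described here; Metadata and the single-entry cache are placeholders, since the real ConcurrentMap keys on the type's components. Construction happens outside any lock, the result is published with a compare-and-swap, and the loser of a race destroys its own copy.

```cpp
// Build without a lock, publish with compare-and-swap, destroy on a lost race.
#include <atomic>

struct Metadata { /* ... */ };

std::atomic<Metadata *> CachedEntry{nullptr};

Metadata *getOrCreateMetadata() {
  if (Metadata *existing = CachedEntry.load(std::memory_order_acquire))
    return existing;

  Metadata *fresh = new Metadata();  // cheap, side-effect-free construction
  Metadata *expected = nullptr;
  if (CachedEntry.compare_exchange_strong(expected, fresh,
                                          std::memory_order_acq_rel,
                                          std::memory_order_acquire))
    return fresh;                    // we won: our copy is published

  delete fresh;                      // we lost: use the winner's copy
  return expected;
}

int main() {
  Metadata *m = getOrCreateMetadata();
  (void)m;
}
```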
John McCall
99c52e89ad Remove the linked list through metadata caches, since LLDB is no longer using it even in fallback code.
2016-08-31 18:53:54 -07:00
swift-ci
ab59308c54 Merge pull request #4535 from bryongloden/patch-1 2016-08-31 12:08:05 -07:00
Greg Parker
e0656515b9 Revert "Metadata cache optimizations" 2016-08-29 19:03:40 -07:00
Bryon Gloden, CISSP®
b07bc61b48 uninitialized variable: savedSize
[stdlib/public/runtime/Metadata.cpp:2560]: (error) Uninitialized variable: savedSize
2016-08-28 16:47:12 -04:00
John McCall
b7bd752946 Simplify and optimize the structural type metadata caches to use ConcurrentMap directly.
Previously, these were all using MetadataCache.  MetadataCache is a
more heavyweight structure which acquires a lock before building the
metadata.  This is appropriate if building the metadata is very
expensive or might have semantic side-effects which cannot be rolled
back.  It's also useful when there's a risk of re-entrance, since it
can diagnose such things instead of simply dead-locking or infinitely
recursing.  However, it's necessary for structural cases like tuple
and function types, and instead we can just use ConcurrentMap, which
does a compare-and-swap to publish the constructed metadata and
potentially destroys it if another thread successfully won the race.

This is an optimization which we could not previously attempt.

As part of this, fix tuple metadata uniquing to consider the label
string correctly.  This exposes a bug where the runtime demangling
of tuple metadata nodes doesn't preserve labels; fix this as well.
2016-08-25 22:27:17 -07:00
John McCall
f2080cb1a2 Remove the linked list through metadata caches, since LLDB is no longer using it even in fallback code.
2016-08-25 20:11:56 -07:00
Joe Groff
4f65ef92c2 Runtime: Don't clobber the compiler-emitted layout of empty fields.
Otherwise, we try to dirty constant memory for classes emitted as constant-layout by the compiler. Fixes rdar://problem/27951346.
2016-08-22 13:37:13 -07:00
Doug Gregor
823c24b355 [SE-0112] Rename ErrorProtocol to Error.
This is bullet (5) of the proposed solution in SE-0112, and the last
major piece to be implemented.
2016-07-12 10:53:52 -07:00
John McCall
8d1ea1c001 Fix foreign type metadata creation to not call the initializer under the lock.

Unfortunately, unit-testing this is gratuitously difficult
because of C++ problems.  (Fixed in C++17, apparently.)  Will
be tested by a future patch which creates a nested foreign
type.
2016-07-08 15:21:34 -07:00
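A schematic of the fix, with illustrative types rather than the runtime's: the initializer runs before the cache lock is taken, so it may re-enter metadata creation (e.g. for a nested foreign type) without deadlocking; only the cache lookup and insertion happen under the lock.

```cpp
// Illustrative types, not the runtime's: run the (possibly re-entrant)
// initializer outside the lock; lock only around the cache operations.
#include <map>
#include <mutex>
#include <string>

struct ForeignMetadata { bool initialized = false; };

std::mutex CacheLock;
std::map<std::string, ForeignMetadata *> Cache;

ForeignMetadata *getForeignMetadata(const std::string &name,
                                    void (*initializer)(ForeignMetadata *)) {
  {
    std::lock_guard<std::mutex> guard(CacheLock);
    auto it = Cache.find(name);
    if (it != Cache.end())
      return it->second;
  }

  auto *candidate = new ForeignMetadata();
  initializer(candidate);            // NOT under the lock: may recurse safely

  std::lock_guard<std::mutex> guard(CacheLock);
  auto [it, inserted] = Cache.emplace(name, candidate);
  if (!inserted)
    delete candidate;                // another thread beat us; keep its entry
  return it->second;
}

int main() {
  ForeignMetadata *m = getForeignMetadata(
      "Example", [](ForeignMetadata *fm) { fm->initialized = true; });
  (void)m;
}
```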