Commit Graph

872 Commits

Author SHA1 Message Date
Hugh Bellamy
bddebc58b8 Don't define SWIFT_USE_SWIFTCALL if there is no such attribute 2017-02-20 10:16:30 +07:00
Hugh Bellamy
a570f6b87b Merge pull request #7405 from hughbe/visual-studio-bug
Work around Visual Studio bug inferring type of auto
2017-02-18 09:26:40 +07:00
Arnold Schwaighofer
6c35202565 Conditionalize usage of llvm::CallingConv::Swift on SWIFT_USE_SWIFTCALL macro 2017-02-14 12:17:57 -08:00
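Not part of the commits themselves, but a minimal sketch of how the two changes above might fit together, assuming the macro is defined only when the host compiler reports the `swiftcall` attribute; the helper `getRuntimeCC` is hypothetical.

```cpp
// Sketch only: SWIFT_USE_SWIFTCALL is the macro named above; the
// surrounding helper is illustrative, not the repository's code.
#include "llvm/IR/CallingConv.h"

#ifndef __has_attribute
#define __has_attribute(x) 0   // non-Clang hosts don't provide __has_attribute
#endif

#if __has_attribute(swiftcall)
#define SWIFT_USE_SWIFTCALL 1  // only defined when the attribute exists
#endif

static llvm::CallingConv::ID getRuntimeCC() {
#ifdef SWIFT_USE_SWIFTCALL
  return llvm::CallingConv::Swift;  // swiftcc
#else
  return llvm::CallingConv::C;      // fall back to the C convention
#endif
}
```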
Arnold Schwaighofer
39fa2f0228 Use the swift calling convention for swift functions
Use the generic type lowering algorithm described in
"docs/CallingConvention.rst#physical-lowering" to map from IRGen's explosion
type to the type expected by the ABI.

Change IRGen to use the swift calling convention (swiftcc) for native swift
functions.

Use the 'swiftself' attribute on self parameters and closure contexts.

Use the 'swifterror' attribute for swift error parameters.

Change functions in the runtime that are called as native swift functions to use
the swift calling convention.

rdar://19978563
2017-02-14 12:17:57 -08:00
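As a rough illustration of the conventions named above at the source level, assuming Clang's `swiftcall`, `swift_context`, and `swift_error_result` attributes (the runtime wraps these in macros; the declaration below is hypothetical):

```cpp
// Sketch: a runtime entry point declared with the Swift calling convention.
// 'swift_context' corresponds to LLVM's 'swiftself' and 'swift_error_result'
// to 'swifterror'; the function and parameter names are illustrative.
struct SwiftError;

extern "C" __attribute__((swiftcall))
void swift_exampleEntry(void *argument,
                        __attribute__((swift_context)) void *context,
                        __attribute__((swift_error_result)) SwiftError **error);
```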
John McCall
038303b1b1 Switch MetadataCache to use a global slab allocator.
This seems to more than fix a performance regression that we
detected on a metadata-allocation microbenchmark.

A few months ago, I improved the metadata cache representation
and changed the metadata allocation scheme to primarily use malloc.
Previously, we'd been using malloc in the concurrent tree data
structure but a per-cache slab allocator for the metadata itself.
At the time, I was concerned about the overhead of per-cache
allocators, since many metadata patterns see only a small number
of instantiations.  That's still an important factor, so in the
new scheme we're using a global allocator; but instead of using
malloc for individual allocations, we're using a slab allocator,
which should have better peak, single-thread performance, at the
cost of not easily supporting deallocation.  Deallocation is
only used for metadata when there's contention on the cache, and
specifically only when there's contention for the same key, so
leaking a little isn't the worst thing in the world.

The initial slab is a 64K globally-allocated buffer.
Successive slabs are 16K and allocated with malloc.

rdar://28189496
2017-02-14 11:10:44 -05:00
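A minimal, single-threaded sketch of the scheme described above (a 64K static initial slab, then 16K malloc'd slabs, no per-allocation deallocation); all names are illustrative, and the real allocator must also handle concurrency and allocation failure:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>

namespace {
constexpr std::size_t InitialSlabSize = 64 * 1024; // globally-allocated buffer
constexpr std::size_t SlabSize = 16 * 1024;        // successive malloc'd slabs

alignas(std::max_align_t) char InitialSlab[InitialSlabSize];
char *Cursor = InitialSlab;
char *End = InitialSlab + InitialSlabSize;

void *slabAllocate(std::size_t size, std::size_t align) {
  auto addr = reinterpret_cast<std::uintptr_t>(Cursor);
  auto aligned = (addr + align - 1) & ~static_cast<std::uintptr_t>(align - 1);
  char *p = reinterpret_cast<char *>(aligned);

  if (p > End || static_cast<std::size_t>(End - p) < size) {
    // Start a fresh 16K slab; old slabs are never freed individually,
    // which is acceptable because metadata is effectively permanent.
    p = static_cast<char *>(std::malloc(SlabSize));
    End = p + SlabSize;
  }
  Cursor = p + size;
  return p;
}
} // namespace
```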
Hugh Bellamy
d457742a8d Change getKeyIntValueForDump to use intptr_t rather than long 2017-02-12 09:46:36 +07:00
Hugh Bellamy
a94f65cb5e Include llvm/Support/Compiler.h wherever we use __has_attribute 2017-02-12 09:30:22 +07:00
Hugh Bellamy
e9a6cbfeff Fix misc Visual Studio squiggles 2017-02-12 09:30:02 +07:00
John McCall
2b25701a93 Revert "Switch MetadataCache to use a global slab allocator."
This reverts commit ccbe5fcf73.
2017-01-29 00:17:30 -05:00
John McCall
ccbe5fcf73 Switch MetadataCache to use a global slab allocator.
This seems to more than fix a performance regression that we
detected on a metadata-allocation microbenchmark.

A few months ago, I improved the metadata cache representation
and changed the metadata allocation scheme to primarily use malloc.
Previously, we'd been using malloc in the concurrent tree data
structure but a per-cache slab allocator for the metadata itself.
At the time, I was concerned about the overhead of per-cache
allocators, since many metadata patterns see only a small number
of instantiations.  That's still an important factor, so in the
new scheme we're using a global allocator; but instead of using
malloc for individual allocations, we're using a slab allocator,
which should have better peak, single-thread performance, at the
cost of not easily supporting deallocation.  Deallocation is
only used for metadata when there's contention on the cache, and
specifically only when there's contention for the same key, so
leaking a little isn't the worst thing in the world.

The initial slab is a 64K globally-allocated buffer.
Successive slabs are 16K and allocated with malloc.

rdar://28189496
2017-01-28 02:37:22 -05:00
Hugh Bellamy
818099ecbe Rename swift_unreachable to swift_runtime_unreachable 2017-01-26 15:31:34 +00:00
Hugh Bellamy
5a59971b95 Move Unreachable.h from include/Basic to include/Runtime 2017-01-26 15:31:33 +00:00
Hugh Bellamy
a5a4880075 Remove extern "C" uses of SWIFT_RT_ENTRY_VISIBILITY 2017-01-22 18:32:17 +00:00
Hugh Bellamy
05a50fd978 Remove extern "C" from uses of SWIFT_RUNTIME_STDLIB_INTERFACE 2017-01-22 18:32:17 +00:00
Hugh Bellamy
63cf2d561e Remove extern "C" from uses of SWIFT_RUNTIME_EXPORT 2017-01-22 18:32:17 +00:00
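The likely shape of the change, with an assumed macro expansion for illustration only: once the export macro carries the C-linkage specifier itself, call sites no longer spell `extern "C"` separately.

```cpp
// Assumed expansion, not the repository's definition.
#if defined(__cplusplus)
#define SWIFT_EXTERN_C extern "C"
#else
#define SWIFT_EXTERN_C
#endif

// Visibility handling differs per platform (e.g. __declspec on Windows);
// shown here for an ELF/Mach-O style toolchain only.
#define SWIFT_RUNTIME_EXPORT SWIFT_EXTERN_C __attribute__((visibility("default")))

// Before:  extern "C" SWIFT_RUNTIME_EXPORT void swift_someEntryPoint(void *);
// After (hypothetical declaration):
SWIFT_RUNTIME_EXPORT void swift_someEntryPoint(void *object);
```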
Slava Pestov
230b50263e Merge pull request #6940 from hughbe/crash-client-hidden
Remove CRASH_REPORTER_CLIENT_HIDDEN in favour of equivalent LLVM macro
2017-01-20 20:28:44 -08:00
Hugh Bellamy
4e55214c18 Remove CRASH_REPORTER_CLIENT_HIDDEN in favour of equivalent LLVM macro 2017-01-20 13:48:56 +00:00
Hugh Bellamy
9b06a72f44 Use LLVM_BUILTIN_TRAP in swift::crash instead of our own solution 2017-01-20 13:26:53 +00:00
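`LLVM_BUILTIN_TRAP` from llvm/Support/Compiler.h expands to the compiler's trap intrinsic where one exists. A hedged sketch of a crash helper built on it (the body is illustrative, not the runtime's):

```cpp
#include "llvm/Support/Compiler.h"
#include <cstdio>

// Print a message and stop via the portable trap macro instead of a
// hand-rolled per-compiler intrinsic.
[[noreturn]] static void crash(const char *message) {
  std::fputs(message, stderr);
  LLVM_BUILTIN_TRAP;
}
```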
Slava Pestov
7b5cf4ab7d Merge pull request #3841 from tinysun212/pr-swiftc-cygwin-2
[swiftc] Fixed for Cygwin
2017-01-19 20:17:47 -08:00
Han Sangjin
b8dd577693 [swiftc] Fixed for Cygwin
Fixes for the differences between Cygwin and the other Windows variants
(MSVC, Itanium, MinGW).

- The platform name used for locating the standard libraries is changed
  from "windows" to "cygwin".

- DLL storage class annotations (DllExport/DllImport) are not required
  for Cygwin and MinGW; linking works without them in these environments.

- Cygwin should use the large memory model by default. (This may change
  if someone ports to 32-bit.)

- Cygwin and MinGW should use the autolink feature the same way Linux
  does, due to a limitation of the linker.
2017-01-19 05:48:24 +09:00
Hugh Bellamy
42a7ff6caa Fix reinterpret cast warning of unsigned to void* in Metadata.h (#6814) 2017-01-14 12:33:21 -08:00
Slava Pestov
7561d5c0f4 Merge pull request #6682 from hughbe/msvc-user-label-prefix
Fix undefined __USER_LABEL_PREFIX compiling swift/{LLVMPasses|IRGen} with MSVC
2017-01-11 15:35:14 -08:00
Hugh Bellamy
9b38b0a9d5 Fix MSVC errors compiling GenMeta.cpp 2017-01-09 22:16:19 +00:00
Hugh Bellamy
cdb73ae193 Fix undefined __USER_LABEL_PREFIX compiling swift/LLVMPasses 2017-01-09 21:17:18 +00:00
practicalswift
6d1ae2a39c [gardening] 2016 → 2017 2017-01-06 16:41:22 +01:00
Saleem Abdulrasool
75533d9dac Merge pull request #6385 from hughbe/runtime
Fix building the runtime from Windows
2016-12-21 10:02:22 -08:00
Joe Groff
2e0cb9c0a2 Runtime: getDynamicType can't drill into existentials when producing a concrete metatype.
For a value of an opaque generic type `<T> x: T`, the language currently defines both `type(of: x)` and `T.self` as producing a value of type `T.Type`. Substituting an existential type via `T == P` gives `P.Protocol`, so when `x` is an existential, the `type(of:)` operation on `x` can only give the concrete protocol metatype in this case. The optimizer understood this rule, but the runtime did not, causing SR-3304.
2016-12-19 11:03:52 -08:00
Hugh Bellamy
7b66b579b1 Add various unreachable annotations to the runtime 2016-12-19 15:54:50 +00:00
Hugh Bellamy
61c83ab5eb Change _MSC_VER conditions to _WIN32 in runtime 2016-12-19 15:54:49 +00:00
Greg Parker
361d026080 [runtime] Use memory_order_consume hack on 32-bit arm architectures. (#6259) 2016-12-14 15:42:21 -08:00
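For context, a generic sketch rather than the runtime's code: a consume-ordered load orders reads that are data-dependent on the loaded pointer after the store that published it, which on ARM avoids the explicit barrier an acquire load would need.

```cpp
#include <atomic>

struct Metadata { int kind; /* ... */ };

std::atomic<Metadata *> Published{nullptr};

// Reader: anything reached *through* the returned pointer is ordered
// after the writer's release store, by virtue of the data dependency.
int readKind() {
  Metadata *m = Published.load(std::memory_order_consume);
  return m ? m->kind : -1;
}
```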
Erik Eckstein
9f8b68ae11 Mangling: use macros instead of hard-coded swift symbol names.
This makes it easier to switch between the old and new mangling scheme.
2016-12-02 15:55:30 -08:00
practicalswift
797b80765f [gardening] Use the correct base URL (https://swift.org) in references to the Swift website
Remove all references to the old non-TLS enabled base URL (http://swift.org)
2016-11-20 17:36:03 +01:00
Roman Levenstein
4e8147c791 Merge pull request #5299 from swiftix/wip-rename-rt-prefix
[swift-runtime] Rename rt_swift_* to swift_rt_*. NFC
2016-10-14 17:03:08 -07:00
Roman Levenstein
56d55dec2b [swift-runtime] Rename rt_swift_* to swift_rt_*. NFC
Swift uses rt_swift_* functions to call the Swift runtime without using dyld's stubs. These functions are renamed to swift_rt_* to reduce namespace pollution.

rdar://28706212
2016-10-11 09:49:06 -07:00
gregomni
b634aedb3c Emit calls to objc_allocWithZone(cls) when allocating obj-c objects. 2016-10-07 12:17:53 -07:00
John McCall
81b27c210b Permit ConcurrentMap to be templated over an allocator and move
MetadataCache's allocator into it.

The major functional change here is that MetadataCache will now use
the slab allocator for tree nodes, but I also switched the Hashable
conformances cache to use ConcurrentMap directly instead of a
Lazy<ConcurrentMap<>>.
2016-09-01 14:09:43 -07:00
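A hedged sketch of what "templated over an allocator" could look like; the interface is an assumption, not the runtime's actual API:

```cpp
#include <cstddef>

// The map allocates its tree nodes through a pluggable allocator type,
// so MetadataCache can hand in its slab allocator while other clients
// keep using malloc. Names are illustrative.
template <class EntryTy, class AllocatorTy>
class ConcurrentMapSketch {
  AllocatorTy Allocator;

  void *allocateNode(std::size_t bytes) { return Allocator.allocate(bytes); }
  void deallocateNode(void *ptr, std::size_t bytes) {
    Allocator.deallocate(ptr, bytes); // may be a no-op for a slab allocator
  }
  // ... tree insertion/lookup elided ...
};
```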
Greg Parker
fcd5b927dc Merge pull request #4581 from gparker42/fix-SwiftRuntimeTests
[stdlib] Fix assertion failures in SwiftRuntimeTests (aka unittests/runtime).
2016-08-31 20:31:05 -07:00
swift-ci
65f6d9dda5 Merge pull request #4582 from rjmccall/metadata-caching-redux 2016-08-31 19:39:30 -07:00
Greg Parker
5817ca7381 [stdlib] Fix assertion failures in SwiftRuntimeTests (aka unittests/runtime). 2016-08-31 18:55:31 -07:00
John McCall
17d72f558b Simplify and optimize the structural type metadata caches to use ConcurrentMap directly.
Previously, these were all using MetadataCache.  MetadataCache is a
more heavyweight structure which acquires a lock before building the
metadata.  This is appropriate if building the metadata is very
expensive or might have semantic side-effects which cannot be rolled
back.  It's also useful when there's a risk of re-entrance, since it
can diagnose such things instead of simply dead-locking or infinitely
recursing.  However, it's unnecessary for structural cases like tuple
and function types, so instead we can just use ConcurrentMap, which
does a compare-and-swap to publish the constructed metadata and
potentially destroys it if another thread successfully won the race.

This is an optimization which we could not previously attempt.

As part of this, fix tuple metadata uniquing to consider the label
string correctly.  This exposes a bug where the runtime demangling
of tuple metadata nodes doesn't preserve labels; fix this as well.
2016-08-31 18:53:54 -07:00
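The publish-by-compare-and-swap pattern described above, reduced to a single-slot sketch (the real ConcurrentMap is a concurrent tree; types and names are illustrative):

```cpp
#include <atomic>

struct TupleTypeMetadata { /* ... */ };

TupleTypeMetadata *getOrPublish(std::atomic<TupleTypeMetadata *> &slot) {
  if (TupleTypeMetadata *existing = slot.load(std::memory_order_acquire))
    return existing;

  // Build the metadata speculatively, without taking a lock.
  auto *fresh = new TupleTypeMetadata();

  TupleTypeMetadata *expected = nullptr;
  if (slot.compare_exchange_strong(expected, fresh,
                                   std::memory_order_release,
                                   std::memory_order_acquire))
    return fresh;   // we won the race and published our copy

  delete fresh;     // another thread won; discard ours and use theirs
  return expected;
}
```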
Dmitri Gribenko
dfb4f56e55 runtime: rename _TMps8Hashable to HashableProtocolDescriptor 2016-08-31 09:53:39 -07:00
Greg Parker
e0656515b9 Revert "Metadata cache optimizations" 2016-08-29 19:03:40 -07:00
John McCall
b7bd752946 Simplify and optimize the structural type metadata caches to use ConcurrentMap directly.
Previously, these were all using MetadataCache.  MetadataCache is a
more heavyweight structure which acquires a lock before building the
metadata.  This is appropriate if building the metadata is very
expensive or might have semantic side-effects which cannot be rolled
back.  It's also useful when there's a risk of re-entrance, since it
can diagnose such things instead of simply dead-locking or infinitely
recursing.  However, it's unnecessary for structural cases like tuple
and function types, so instead we can just use ConcurrentMap, which
does a compare-and-swap to publish the constructed metadata and
potentially destroys it if another thread successfully won the race.

This is an optimization which we could not previously attempt.

As part of this, fix tuple metadata uniquing to consider the label
string correctly.  This exposes a bug where the runtime demangling
of tuple metadata nodes doesn't preserve labels; fix this as well.
2016-08-25 22:27:17 -07:00
David Farler
f0609a612b Make a temporary existential when reflecting weak properties
When getting a mirror child that is a class existential, there
may be witness tables for the protocol composition to copy. Don't
just take the address of a class instance pointer from the stack -
make a temporary existential-like buffer before calling into the Mirror
constructor.

This now correctly covers reflecting weak optional class types, and weak
optional class existential types, along with fixing a stack buffer
overflow reported by the Address Sanitizer (thanks, ASan!).

Tests were also updated to check for the validity of the child's data.

rdar://problem/27348445
2016-08-19 15:30:35 -07:00
David Farler
a420279a70 Merge pull request #4060 from bitjammer/weak-reference-in-mirror-existential
Always, no, never... forget to check your references
2016-08-16 12:47:27 -07:00
David Farler
302fe96049 Fix some Linux inconsistencies with swift_*weak runtime functions
Some of these return void * on platforms with Objective-C interop,
but drop the return value on those without it, such as Linux.
2016-08-16 11:44:05 -07:00
Joe Groff
f998c624e6 Runtime: Add a convenient -[_SwiftValue _swiftTypeName] method for debugging.
This makes it a bit easier to diagnose unexpected boxing problems in the debugger, by allowing `po [value _swiftTypeName]` to work, instead of forcing users to know how to call `swift_getTypeName` from lldb themselves.
2016-08-15 13:53:59 -07:00
Slava Pestov
c98ce0c770 IRGen: Fix Dispatch overlay for non-optimized builds
I apologize in advance to @jrose-apple, who is not a fan
of this fix ;-)

In unoptimized builds, the convenience initializers on
DispatchQueue allocate and immediately deallocate an
instance of OS_dispatch_queue prior to calling the
C function that returns the "real" instance.

This is because we don't have a way to write user-defined
factory initializers yet; convenience initializers still
have an 'initializing' entry point that takes an existing
instance, which we have no choice but to throw away.

Unfortunately, when we perform the fake allocation, we
look up class metadata by calling the wrong Swift runtime
function, causing a crash when we send +allocWithZone:.

Fix this so that the metadata is accessed via a lookup
from the Objective-C runtime, instead of making a totally
fake 'foreign metadata' object -- it looks like there was
code for this already, it just wasn't used in all cases.

While getting metadata for a runtime-only class should be
rare, this feels like a real bug fix, to me.

Second, we would ultimately free the fake object by sending
-release, however OS_dispatch_queue has an override of
-dealloc which doesn't like to be called with a completely
uninitialized instance.

Here, I'm going to drop all pretense of sanity. The patch
just changes IRGen to lower the dealloc_partial_ref instruction
as a call to the object_dispose() Objective-C runtime function
when the class in question is a runtime-only class. This
frees the object without running -dealloc, which *happens*
to work for OS_dispatch_queue.

Fixes <rdar://problem/27226313>.
2016-08-13 01:51:45 -07:00
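For reference, the Objective-C runtime call that the new lowering targets: `object_dispose()` frees an instance without sending -dealloc (the wrapper below is hypothetical).

```cpp
#include <objc/runtime.h>

// Free a partially-initialized, runtime-only class instance without
// running -dealloc, mirroring the dealloc_partial_ref lowering above.
static void disposePartialInstance(id object) {
  object_dispose(object);
}
```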
John McCall
a6e1e87585 Add implicit conversions and casts from T:Hashable <-> AnyHashable.
rdar://27615802
2016-08-04 23:13:27 -07:00
Dmitri Gribenko
4e7cc0d938 SwiftShims: remove C++ code and unprefixed names from RuntimeShims.h 2016-08-01 16:53:11 -07:00