When cross-compiling for Android ARM, it is possible that the system
linker does not support the target. To cross-compile the target
runtime, we therefore need a way to point the build at a linker that
does support it. If one is not specified, fall back to the current
behaviour of using the system linker.
This makes the demangler about 10 times faster.
It also changes the lifetimes of nodes. Previously nodes were reference-counted.
Now the returned demangled node tree is owned by the Demangler class, and its lifetime ends with the lifetime of the Demangler.
Therefore the old (and already deprecated) global functions demangleSymbolAsNode and demangleTypeAsNode are no longer available.
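A minimal sketch of this ownership model (class and member names are illustrative, not the real API): the Demangler owns every node it creates, so a returned tree must not outlive its Demangler.

    #include <memory>
    #include <string>
    #include <vector>

    // Illustrative stand-ins for the real types.
    struct Node {
      std::string text;
      std::vector<Node *> children; // non-owning; all nodes die with the Demangler
    };

    class Demangler {
      std::vector<std::unique_ptr<Node>> ownedNodes; // owns every node it hands out

    public:
      Node *createNode(std::string text) {
        ownedNodes.push_back(std::make_unique<Node>());
        ownedNodes.back()->text = std::move(text);
        return ownedNodes.back().get();
      }

      // Returns a tree whose lifetime is bounded by this Demangler.
      Node *demangleSymbol(const std::string &mangled) {
        return createNode(mangled); // real parsing elided
      }
    };

Under this model, keeping a Node* around after its Demangler is destroyed is a use-after-free; callers that need the tree for longer must keep the Demangler alive or copy the data out.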
Another change is that the demangling for reflection now only supports the new mangling (which should be no problem because
we are generating only new mangled names for reflection).
The high half of the address space is used by the kernel on these architectures, and it would be useful for us to be able to pack small values into places where the ABI otherwise requires a refcountable pointer, such as closures and maybe refcounted existentials.
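As an illustrative sketch of the idea (the constant, struct, and encoding below are hypothetical, not the actual ABI): if user-space pointers on such a target never have the top bit set, that bit can discriminate a small inline payload from a refcountable pointer.

    #include <cassert>
    #include <cstdint>

    // Hypothetical encoding: on a 64-bit target whose kernel owns the
    // high half of the address space, valid user-space pointers never
    // have the top bit set, so it is free for tagging.
    constexpr std::uintptr_t kInlineBit = std::uintptr_t(1) << 63;

    struct ContextWord {
      std::uintptr_t bits;

      static ContextWord fromPointer(void *p) {
        assert((reinterpret_cast<std::uintptr_t>(p) & kInlineBit) == 0);
        return {reinterpret_cast<std::uintptr_t>(p)};
      }
      static ContextWord fromSmallValue(std::uintptr_t v) {
        return {v | kInlineBit}; // small value packed where a pointer would go
      }
      bool holdsInlineValue() const { return (bits & kInlineBit) != 0; }
      void *pointer() const { return reinterpret_cast<void *>(bits); }
      std::uintptr_t smallValue() const { return bits & ~kInlineBit; }
    };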
These changes caused a number of issues:
1. No debug info is emitted when a release-with-debug-info compiler is built.
2. OS X deployment target specification is broken.
3. Swift options were broken, without any attempt at recreating that
functionality. The specific option in question is --force-optimized-typechecker.
Such refactorings should be done in a fashion that does not break existing
users and use cases.
This reverts commit e6ce2ff388.
This reverts commit e8645f3750.
This reverts commit 89b038ea7e.
This reverts commit 497cac64d9.
This reverts commit 953ad094da.
This reverts commit e096d1c033.
rdar://30549345
It also uses the new mangling for type names in metadata (except for top-level non-generic classes).
lldb now has support for the new mangled metadata type names.
This reinstates commit 21ba292943.
This patch splits add_swift_library into two functions: one which handles
the simple case of adding a library that is part of the compiler being
built, and a second which handles the more complicated case of "target"
libraries, which may need to be built for one or more targets.
The new add_swift_library is built on llvm_add_library, which re-uses
LLVM's CMake modules. In adapting to LLVM's modules, some of
add_swift_library's named parameters have been removed or renamed:
LINK_LIBRARIES has changed to LINK_LIBS, and LLVM_LINK_COMPONENTS to
LINK_COMPONENTS.
This patch also cleans up libswiftBasic's handling of the UUID library and
headers, and how it interfaces with gyb sources.
add_swift_library also no longer has the FILE_DEPENDS parameter, which
doesn't matter because llvm_add_library's DEPENDS parameter has the same
behavior.
Use the generic type lowering algorithm described in
"docs/CallingConvention.rst#physical-lowering" to map from IRGen's explosion
type to the type expected by the ABI.
Change IRGen to use the Swift calling convention (swiftcc) for native Swift
functions.
Use the 'swiftself' attribute on self parameters and closure contexts.
Use the 'swifterror' attribute for Swift error parameters.
Change functions in the runtime that are called as native Swift functions to use
the Swift calling convention.
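At the LLVM level this amounts to setting a calling convention and parameter attributes. The helper below is a hypothetical sketch (the parameter indices are illustrative), but llvm::CallingConv::Swift and the SwiftSelf/SwiftError attributes are real LLVM constructs:

    #include "llvm/IR/Attributes.h"
    #include "llvm/IR/CallingConv.h"
    #include "llvm/IR/Function.h"

    // Hypothetical helper: mark a function as using the Swift calling
    // convention, with a 'swiftself' context parameter and a
    // 'swifterror' parameter at the given indices.
    static void applySwiftConventions(llvm::Function *fn, unsigned selfIdx,
                                      unsigned errorIdx) {
      fn->setCallingConv(llvm::CallingConv::Swift);
      fn->addParamAttr(selfIdx, llvm::Attribute::SwiftSelf);
      fn->addParamAttr(errorIdx, llvm::Attribute::SwiftError);
    }

Call sites must use the matching convention (e.g. via CallInst::setCallingConv) for the attributes to line up.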
rdar://19978563
This seems to more than fix a performance regression that we
detected on a metadata-allocation microbenchmark.
A few months ago, I improved the metadata cache representation
and changed the metadata allocation scheme to primarily use malloc.
Previously, we'd been using malloc in the concurrent tree data
structure but a per-cache slab allocator for the metadata itself.
At the time, I was concerned about the overhead of per-cache
allocators, since many metadata patterns see only a small number
of instantiations. That's still an important factor, so in the
new scheme we're using a global allocator; but instead of using
malloc for individual allocations, we're using a slab allocator,
which should have better peak, single-thread performance, at the
cost of not easily supporting deallocation. Deallocation is
only used for metadata when there's contention on the cache, and
specifically only when there's contention for the same key, so
leaking a little isn't the worst thing in the world.
The initial slab is a 64K globally-allocated buffer.
Successive slabs are 16K and allocated with malloc.
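A minimal, single-threaded sketch of that scheme (the slab sizes come from the description above; the class name and the absence of locking are simplifications of the real, concurrent allocator):

    #include <cstddef>
    #include <cstdint>
    #include <cstdlib>

    class MetadataSlabAllocator {
      static constexpr std::size_t InitialSlabSize = 64 * 1024; // 64K global buffer
      static constexpr std::size_t SlabSize = 16 * 1024;        // later slabs via malloc

      static char initialSlab[InitialSlabSize];
      char *cur = initialSlab;
      char *end = initialSlab + InitialSlabSize;

    public:
      // Bump-allocate; there is no per-allocation free(), matching the
      // "leaking a little is acceptable" trade-off described above.
      void *allocate(std::size_t size, std::size_t align) {
        auto p = (reinterpret_cast<std::uintptr_t>(cur) + align - 1) &
                 ~(static_cast<std::uintptr_t>(align) - 1);
        if (p + size > reinterpret_cast<std::uintptr_t>(end)) {
          // Slab exhausted: grab a fresh 16K slab (or larger for an
          // oversized request) and abandon the old slab's tail.
          std::size_t slabSize = size > SlabSize ? size : SlabSize;
          char *slab = static_cast<char *>(std::malloc(slabSize));
          p = reinterpret_cast<std::uintptr_t>(slab);
          end = slab + slabSize;
        }
        cur = reinterpret_cast<char *>(p + size);
        return reinterpret_cast<void *>(p);
      }
    };

    char MetadataSlabAllocator::initialSlab[MetadataSlabAllocator::InitialSlabSize];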
rdar://28189496
For this we are linking the new re-mangler instead of the old one into the Swift runtime library.
We are also linking the new demangler into the Swift runtime library.
It also switches to the new mangling for class names of generic Swift classes in the metadata.
Note that for non-generic classes we still have to use the old mangling, because the ObjC runtime in the OS depends on it (it demangles the class names).
But names of generic classes are not handled by the ObjC runtime anyway, so there should be no problem with changing the mangling for those.
The reason for this change is that it avoids linking the old re-mangler into the runtime library.
The old method of constructing a mangled class name no longer works with the new mangling scheme.
Also, by using the re-mangler, _typeByName now works with class names containing non-ASCII characters.
This breaks the build on Exherbo, which uses target-triple-prefixed
linker names. Furthermore, the default name for the linker on Unix-like
systems is `ld`. This can be a symlink to a specific implementation,
usually named ld.<name> (e.g. `ld.bfd` or `ld.gold`). Use the CMake
variable for the linker rather than hardcoding the name.
- Create separate swift_begin.o/swift_end.o for lib/swift and
  lib/swift_static. The static swift_begin.o does not call
  swift_addNewDSOImage() at startup (see the sketch after this list).
- Update ToolChains.cpp to use the correct swift_begin.o/swift_end.o
files for the `-static-stdlib` and `-static-executable` options.
- Rename swift::addNewDSOImage() to swift_addNewDSOImage() and
  export it using SWIFT_RUNTIME_EXPORT.
- Move ELF specific parts of ImageInspection.h into
ImageInspectionELF.h.
- For ELF targets, keep track of shared objects as they are
dynamically loaded so that section data can be added to
the protocol conformance and type metadata caches after
initialisation (rdar://problem/19045112).
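A hedged sketch of the load-time registration (everything here except the swift_addNewDSOImage name is illustrative): the dynamic swift_begin.o carries a static constructor that hands an address from the image's Swift sections to the runtime, while the static variant omits the constructor.

    #include <cstdio>

    // Illustrative stand-in for the runtime entry point named above;
    // the real function records the image's section data for the
    // protocol conformance and type metadata caches.
    extern "C" void swift_addNewDSOImage(const void *imageSectionData) {
      std::printf("registered image sections at %p\n", imageSectionData);
    }

    // Hypothetical marker standing in for a linker-provided symbol at
    // the start of the image's Swift metadata sections.
    static const char swiftSectionsBegin[1] = {0};

    namespace {
    // A constructor like this in the dynamic swift_begin.o runs when
    // the shared object is loaded; the lib/swift_static variant of
    // swift_begin.o leaves it out.
    __attribute__((constructor)) void registerSwiftImage() {
      swift_addNewDSOImage(swiftSectionsBegin);
    }
    } // namespace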