Previously, it was part of swiftBasic.
The demangler library does not depend on LLVM (except for some header-only utilities like StringRef). Putting it into its own library ensures that no LLVM code gets linked into clients that use the demangler library.
This change also contains other refactoring, like moving demangler code into different files. This makes it easier to remove the old demangler from the runtime library when we switch to the new symbol mangling.
Also in this commit: remove some unused API functions from the demangler Context.
fixes rdar://problem/30503344
Use the generic type lowering algorithm described in
"docs/CallingConvention.rst#physical-lowering" to map from IRGen's explosion
type to the type expected by the ABI.
Change IRGen to use the Swift calling convention (swiftcc) for native Swift
functions.
Use the 'swiftself' attribute for self parameters and closure contexts.
Use the 'swifterror' attribute for Swift error parameters.
Change functions in the runtime that are called as native Swift functions to use
the Swift calling convention.
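As a hedged sketch of what this looks like at the clang level (the function and type names here are made up, not the runtime's actual declarations):

    // Compile with clang on a target that supports the Swift CC.
    struct SwiftError; // stand-in for the runtime's error box type

    // 'swiftcall' selects swiftcc; 'swift_context' marks the context/self
    // parameter; 'swift_error_result' corresponds to the 'swifterror'
    // slot and is declared right after the context parameter.
    extern "C" __attribute__((swiftcall)) long
    example_runtime_entry(long value,
                          void *context __attribute__((swift_context)),
                          SwiftError **error __attribute__((swift_error_result)));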
rdar://19978563
Changes (see the sketch after this list):
* Terminate all namespaces with the correct closing comment.
* Make sure argument names in comments match the corresponding parameter name.
* Remove redundant get() calls on smart pointers.
* Prefer "override" or "final" over "virtual"; remove redundant "virtual" where appropriate.
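For illustration only (toy names), the conventions above applied to a small example:

    #include <memory>

    namespace demo {

    class Base {
    public:
      virtual ~Base() = default;
      /// \param flags matches the parameter name below.
      virtual void configure(unsigned flags) { (void)flags; }
    };

    class Derived final : public Base {
    public:
      // 'override' instead of repeating 'virtual'.
      void configure(unsigned flags) override { Base::configure(flags); }
    };

    inline void use(const std::unique_ptr<Base> &base) {
      base->configure(0); // no redundant base.get()->configure(...)
    }

    } // namespace demo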
- Revert the use of SWIFT_RUNTIME_EXPORT in ImageInspectionELF.cpp and
fix the unit tests by explicitly adding the file to the list
- Revert the change of section data names
MetadataCache's allocator into it.
The major functional change here is that MetadataCache will now use
the slab allocator for tree nodes, but I also switched the Hashable
conformances cache to use ConcurrentMap directly instead of a
Lazy<ConcurrentMap<>>.
If the Swift error wrapped in a _SwiftNativeNSError box conforms to
Hashable, the box now uses Swift's Hashable conformance.
Part of rdar://problem/27574348.
- added read/write lock support (see the sketch after this list)
- added non-fatal error support to allow use of a mutex in the fatal-error reporting path
- isolated the pthread implementation into its own header/cpp file pair
- expanded the unit tests to cover the new code and to better exercise the existing mutex
- removed a layer of complexity that added no real value
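A minimal sketch, assuming a plain pthread backing (names are illustrative, not the actual Mutex API):

    #include <pthread.h>

    class ReadWriteLock {
      pthread_rwlock_t lock;

    public:
      ReadWriteLock() { (void)pthread_rwlock_init(&lock, nullptr); }
      ~ReadWriteLock() { (void)pthread_rwlock_destroy(&lock); }

      void readLock() { (void)pthread_rwlock_rdlock(&lock); }
      void writeLock() { (void)pthread_rwlock_wrlock(&lock); }
      void unlock() { (void)pthread_rwlock_unlock(&lock); }

      // Non-fatal variants: report failure instead of aborting, so the
      // fatal-error reporting path can still make progress.
      bool tryReadLock() { return pthread_rwlock_tryrdlock(&lock) == 0; }
      bool tryWriteLock() { return pthread_rwlock_trywrlock(&lock) == 0; }
    };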
If the thread starts slowly enough, it will succeed in taking the lock,
leading to a deadlock. We saw this test hang on some of our bots
inside pthread's lock acquire.
initialization in-place on demand. Initialize parent metadata
references correctly on struct and enum metadata.
Also includes several minor improvements related to relative
pointers that I was using before deciding to simply switch the
parent reference to an absolute reference to get better access
patterns.
Includes a fix since the earlier commit to make enum metadata
writable if it has an unfilled payload size. This didn't show
up on Darwin because "constant" is currently unenforced there for
global data containing relocations.
This patch requires an associated LLDB change which is being
submitted in parallel.
and MetadataCache and fix a re-entrancy bug in metadata
instantiation.
The re-entrancy bug is that we were holding the instantiation
lock of a metadata cache while instantiating metadata. Doing
so prevents us from creating a different instantiation if
it's needed by the outer instantiation. This is already
possible, but it becomes much more likely with a patch I'm working on
to store only the minimal metadata for generic parameters
in generic types.
The same bug could also show up as a deadlock between threads,
so a recursive lock would not be a good fix. Instead, we add
a condition variable to the metadata cache. When fetching
metadata, we look for a node in the concurrent map, eagerly
creating an empty one if none currently exists. If lookup
finds an empty node, we wait on the condition variable for
the node to become populated. If lookup succeeds in creating
an empty node, we instantiate the metadata, grab the lock,
populate the node, and notify the condition variable.
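A much-simplified sketch of that protocol, standing in a plain std::map and a single mutex for the concurrent map (all names are hypothetical):

    #include <condition_variable>
    #include <map>
    #include <mutex>

    struct Metadata;
    Metadata *instantiate(int key); // stand-in for the instantiation function

    std::mutex cacheMutex;
    std::condition_variable cacheFilled;
    std::map<int, Metadata *> cache; // nullptr marks an empty node

    Metadata *getMetadata(int key) {
      std::unique_lock<std::mutex> guard(cacheMutex);
      auto insertion = cache.emplace(key, nullptr);
      auto node = insertion.first;
      if (!insertion.second) {
        // Someone else created the node; wait for it to be populated.
        cacheFilled.wait(guard, [&] { return node->second != nullptr; });
        return node->second;
      }
      // We created the empty node: drop the lock while instantiating so
      // re-entrant requests for other metadata can proceed.
      guard.unlock();
      Metadata *metadata = instantiate(key);
      guard.lock();
      node->second = metadata;
      cacheFilled.notify_all();
      return metadata;
    }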
Safely creating an empty node without any metadata present
requires us to move the key data into the map entry. That,
plus a few other invariant shifts, makes it sensible to
give the user of ConcurrentMap more control over the
allocation of map nodes and the layout of keys. That, in
turn, allows us to change the contract so that keys can be
more complex than just a hash code. Instead of incrementing
hash codes and re-performing the lookup, we just insist
that lookup keys be totally ordered.
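For example, a key type (hypothetical) only needs a comparison that induces a total order:

    #include <cstring>

    struct LookupKey {
      const char *data;
      unsigned length;

      // Returns <0, 0, or >0: order first by length, then by contents.
      int compare(const LookupKey &other) const {
        if (length != other.length)
          return length < other.length ? -1 : 1;
        return std::memcmp(data, other.data, length);
      }
    };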
For now, I've kept the uniform use of hash codes as a
component of the key for MetadataCaches. However, hash
codes aren't really profitable for small keys, and we should
probably use direct comparisons instead.
We should also switch the safer metadata caches (i.e. the
ones that don't involve calling an arbitrary instantiation
function, like MetatypeMetadataCache) over to directly use
ConcurrentMap.
LLDB's requirement that we maintain a linked list of metadata
cache instantiations with a known layout means we can't yet
remove the CacheEntry's redundant copy of the generic
arguments.
I am adding this test mainly to check that the code that removed the sentinel
value, replacing it with a nullable pointer and some logic for initializing
the pointer, works.
...and explicitly mark symbols we export, either for use by executables or for runtime-stdlib interaction. Until the stdlib supports resilience, we have to allow programs to link against these SPI symbols.
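The pattern, roughly; the macro name here is illustrative, not the runtime's actual spelling:

    // Built with -fvisibility=hidden, so nothing is exported by default.
    #if defined(__GNUC__)
    #define DEMO_EXPORT __attribute__((__visibility__("default")))
    #else
    #define DEMO_EXPORT
    #endif

    // Explicitly exported for executables or runtime-stdlib interaction.
    extern "C" DEMO_EXPORT void demo_exportedEntryPoint(void);

    // Hidden by default; not linkable from outside.
    extern "C" void demo_internalHelper(void);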
This is the first patch in a series that will allow new protocol
requirements to be added resiliently, with the runtime filling in
default implementations in witness tables.
First, this adds a new flag to the protocol descriptor indicating
that the protocol is resilient. In this case, there are two
additional fields, MinimumWitnessTableSizeInWords and
DefaultWitnessTableSizeInWords, followed by tail-allocated
default witnesses.
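Sketched as a struct (field widths and names are approximate, not the exact runtime layout):

    #include <cstdint>

    struct DefaultWitness; // stand-in for one default witness entry

    struct ProtocolDescriptorSketch {
      uint32_t Flags; // now includes an "is resilient" bit

      // Only present when the resilient bit is set:
      uint16_t MinimumWitnessTableSizeInWords;
      uint16_t DefaultWitnessTableSizeInWords;

      // ...followed by DefaultWitnessTableSizeInWords tail-allocated
      // default witnesses:
      DefaultWitness *getDefaultWitnesses() {
        return reinterpret_cast<DefaultWitness *>(this + 1);
      }
    };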
The swift_getGenericWitnessTable() entry point now fills in the
default witnesses from the protocol if the given witness table
template is smaller than the expected witness table size.
This also changes the layout of instantiated witness tables to move
the address point to the end of private data. Previously the private
data came after the requirements, but this meant that adding new
requirements would require sliding the private data at runtime and
accessing it indirectly. It is much simpler to access it from
negative offsets instead.
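A simplified sketch of that instantiation logic, with hypothetical names and word-sized entries; the defaults are assumed to cover the requirements beyond the minimum:

    #include <cstddef>
    #include <cstring>

    void **instantiateWitnessTable(void *const *templateTable,
                                   size_t templateWords,  // what the template provides
                                   size_t expectedWords,  // what the protocol now requires
                                   size_t minimumWords,   // requirements without defaults
                                   void *const *defaults, // tail-allocated default witnesses
                                   size_t privateWords) {
      // Private data lives at negative offsets from the address point,
      // so growing the requirements never slides it.
      void **storage = new void *[privateWords + expectedWords]();
      void **table = storage + privateWords; // the address point

      std::memcpy(table, templateTable, templateWords * sizeof(void *));

      // Fill in the requirements the template predates from the defaults.
      for (size_t i = templateWords; i < expectedWords; ++i)
        table[i] = defaults[i - minimumWords];

      return table;
    }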
I updated IRGen to emit the new metadata, but currently all protocols
are flagged as not resilient, and default witnesses are not emitted;
this will come in a subsequent patch once some more plumbing is
in place.
To avoid generating GOT entries for references to protocols defined
in the current module, I had to add some hacks to the existing hack
for this. I'll hopefully clean this up in a principled manner later.