We do this by traversing our sorted lists in the same way one would when
merging two such sets: keep one iterator per list and always advance the
iterator whose value is less than the other's. If the two iterators ever
point at equal values, we must have a non-empty intersection.
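Roughly, the check looks like the following sketch (hypothetical helper
name; the real code runs over the sorted pointer sets):

template <typename Iter>
bool hasNonEmptyIntersection(Iter LHSStart, Iter LHSEnd,
                             Iter RHSStart, Iter RHSEnd) {
  while (LHSStart != LHSEnd && RHSStart != RHSEnd) {
    // The two iterators point at equal values: shared element found.
    if (*LHSStart == *RHSStart)
      return true;
    // Otherwise advance whichever iterator has the smaller value.
    if (*LHSStart < *RHSStart)
      ++LHSStart;
    else
      ++RHSStart;
  }
  // One list was exhausted without ever matching: the sets are disjoint.
  return false;
}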
A unittest that exercises very basic functionality is provided as well.
and MetadataCache and fix a re-entrancy bug in metadata
instantiation.
The re-entrancy bug is that we were holding the instantiation
lock of a metadata cache while instantiating metadata. Doing
so prevents us from creating a different instantiation if
it's needed by the outer instantiation. Such recursive instantiation is
already possible, but it becomes much more likely with a patch I'm working
on that stores only the minimal metadata for generic parameters in generic
types.
The same bug could also show up as a deadlock between threads,
so a recursive lock would not be a good fix. Instead, we add
a condition variable to the metadata cache. When fetching
metadata, we look for a node in the concurrent map, eagerly
creating an empty one if none currently exists. If lookup
finds an empty node, we wait on the condition variable for
the node to become populated. If lookup succeeds in creating
an empty node, we instantiate the metadata, grab the lock,
populate the node, and notify the condition variable.
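A simplified sketch of that fetch path, using standard-library primitives
in place of the runtime's ConcurrentMap and cache types (all names here
are illustrative):

#include <condition_variable>
#include <cstdint>
#include <map>
#include <mutex>

struct Metadata {};
using Key = uint64_t;

static std::mutex CacheLock;
static std::condition_variable CacheCV;
// A nullptr value is an "empty node": created but not yet populated.
static std::map<Key, const Metadata *> Cache;

const Metadata *instantiateMetadata(Key key);  // may recursively fetch

const Metadata *fetchMetadata(Key key) {
  {
    std::unique_lock<std::mutex> lock(CacheLock);
    auto [it, created] = Cache.try_emplace(key, nullptr);
    if (!created) {
      // Someone else owns instantiation; wait until the node is populated.
      CacheCV.wait(lock, [&] { return it->second != nullptr; });
      return it->second;
    }
  }
  // We created the empty node, so we instantiate *without* holding the
  // lock; a recursive fetch triggered by instantiation can proceed.
  const Metadata *metadata = instantiateMetadata(key);
  {
    std::lock_guard<std::mutex> lock(CacheLock);
    Cache[key] = metadata;
  }
  CacheCV.notify_all();
  return metadata;
}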
Safely creating an empty node without any metadata present
requires us to move the key data into the map entry. That,
plus a few other invariant shifts, makes it sensible to
give the user of ConcurrentMap more control over the
allocation of map nodes and the layout of keys. That, in
turn, allows us to change the contract so that keys can be
more complex than just a hash code. Instead of incrementing
hash codes and re-performing the lookup, we just insist
that lookup keys be totally ordered.
For now, I've kept the uniform use of hash codes as a
component of the key for MetadataCaches. However, hash
codes aren't really profitable for small keys, and we should
probably use direct comparisons instead.
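For illustration, a totally ordered key that still uses the hash code as
its leading component might look roughly like this (hypothetical shape,
not the actual MetadataCache key type):

#include <cstddef>
#include <cstring>

struct CacheKey {
  size_t Hash;              // leading component, cheap to compare
  const void *const *Args;  // key data, owned by the map entry
  unsigned NumArgs;

  // Total order: hash first, then length, then the raw key bytes.
  int compare(const CacheKey &other) const {
    if (Hash != other.Hash)
      return Hash < other.Hash ? -1 : 1;
    if (NumArgs != other.NumArgs)
      return NumArgs < other.NumArgs ? -1 : 1;
    return std::memcmp(Args, other.Args, NumArgs * sizeof(void *));
  }
};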
We should also switch the safer metadata caches (i.e. the
ones that don't involve calling an arbitrary instantiation
function, like MetatypeMetadataCache) over to directly use
ConcurrentMap.
LLDB's requirement that we maintain a linked list of metadata
cache instantiations with a known layout means we can't yet
remove the CacheEntry's redundant copy of the generic
arguments.
This is an immutable data structure with the following properties:
1. All of the sets are sorted and can be iterated over.
2. It takes in a bump ptr allocator and uses that allocator for all
allocations.
3. All concatenation operations involve only one bump ptr allocation.
4. Since we are only storing pointers, the data structure does not need any
destructors to be invoked for cleanup. The BumpPtrAllocator's memory just
needs to be freed.
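As an illustration of property 3, a single-allocation concatenation (merge)
over sorted, uniqued pointer arrays could be sketched like this (not the
in-tree interface):

#include "llvm/Support/Allocator.h"
#include <algorithm>
#include <functional>

struct ImmutablePointerSetSketch {
  const void **Data;
  unsigned Size;  // Data[0..Size) is sorted and uniqued
};

ImmutablePointerSetSketch concat(const ImmutablePointerSetSketch &LHS,
                                 const ImmutablePointerSetSketch &RHS,
                                 llvm::BumpPtrAllocator &Allocator) {
  // Property 3: exactly one bump-pointer allocation, sized for the worst
  // case (the two sets are disjoint).
  const void **Mem = Allocator.Allocate<const void *>(LHS.Size + RHS.Size);
  // A standard sorted merge that drops duplicates keeps the result sorted
  // and iterable in order (property 1).
  const void **End =
      std::set_union(LHS.Data, LHS.Data + LHS.Size, RHS.Data,
                     RHS.Data + RHS.Size, Mem, std::less<const void *>());
  // Property 4: nothing here needs a destructor; freeing the allocator
  // reclaims everything.
  return {Mem, unsigned(End - Mem)};
}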
I am going to use this to improve the compile time performance of ARC.
I am adding this test mainly to check the code that removed the sentinel
value and replaced it with a nullable pointer plus some logic for
initializing that pointer.
...and explicitly mark symbols we export, either for use by executables or for runtime-stdlib interaction. Until the stdlib supports resilience, we have to allow programs to link to these SPI symbols.
This is the first patch in a series that will allow new protocol
requirements to be added resiliently, with the runtime filling in
default implementations in witness tables.
First, this adds a new flag to the protocol descriptor indicating
that the protocol is resilient. In this case, there are two
additional fields, MinimumWitnessTableSizeInWords and
DefaultWitnessTableSizeInWords, followed by tail-allocated
default witnesses.
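Schematically, the extra resilient-protocol fields look something like this
(illustrative field widths and names, not the exact runtime declarations):

#include <cstdint>

// Present only when the protocol descriptor's flags mark it resilient.
struct ResilientWitnessTableInfo {
  // Number of witness table slots every conformance must provide.
  uint32_t MinimumWitnessTableSizeInWords;
  // Number of slots covered by the tail-allocated default witnesses.
  uint32_t DefaultWitnessTableSizeInWords;
  // Followed in memory by the default witnesses themselves:
  //   void *DefaultWitnesses[DefaultWitnessTableSizeInWords];
};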
The swift_getGenericWitnessTable() entry point now fills in the
default witnesses from the protocol if the given witness table
template is smaller than the expected witness table size.
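The fill-in step amounts to something like the following (a simplified
sketch; argument names and indexing are illustrative, and the real entry
point also handles instantiation, caching, and private data):

#include <cstddef>

void fillInDefaultWitnesses(void **instantiatedTable,
                            size_t templateSizeInWords,
                            size_t expectedSizeInWords,
                            void *const *defaultWitnesses) {
  // The conformance was emitted against an older, smaller protocol:
  // slots it did not provide are taken from the protocol's defaults.
  for (size_t i = templateSizeInWords; i < expectedSizeInWords; ++i)
    instantiatedTable[i] = defaultWitnesses[i - templateSizeInWords];
}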
This also changes the layout of instantiated witness tables to move
the address point to the end of private data. Previously the private
data came after the requirements, but this meant that adding new
requirements would require sliding the private data at runtime and
accessing it indirectly. It is much simpler to access it from
negative offsets instead.
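In other words, with the address point at the end of the private data,
access looks roughly like this (illustrative helpers):

// The returned witness table pointer is the address point: requirements
// sit at non-negative offsets, private data at negative offsets, so
// growing either side never requires sliding the other.
static inline void *&witnessSlot(void **addressPoint, unsigned i) {
  return addressPoint[i];
}
static inline void *&privateDataSlot(void **addressPoint, unsigned i) {
  return addressPoint[-1 - static_cast<int>(i)];
}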
I updated IRGen to emit the new metadata, but currently all protocols
are flagged as not resilient, and default witnesses are not emitted;
this will come in a subsequent patch once some more plumbing is
in place.
To avoid generating GOT entries for references to protocols defined
in the current module, I had to add some hacks to the existing hack
for this. I'll hopefully clean this up in a principled manner later.
The Swift parser splits tokens in a few cases, but swift::tokenize(...) does
not know about that. In order to reconstruct the token stream as the parser
saw it, we need to collect the tokens the parser decided to split and use
this information in swift::tokenize(...).
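Conceptually, the bookkeeping is tiny; a hypothetical sketch (not the
actual interfaces added by this change):

#include <vector>

// A token the parser split, e.g. '>>' lexed as one token but consumed
// as two '>' tokens while parsing nested generic arguments.
struct SplitTokenInfo {
  unsigned Offset;  // start of the original token in the buffer
  unsigned Length;  // length of the leading piece the parser peeled off
};

// Filled in by the parser as it splits tokens; swift::tokenize(...) can
// then re-split any token whose start matches a recorded offset.
std::vector<SplitTokenInfo> SplitTokens;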
Introduce a new "swift" build configuration that guards declarations
and statements with a language version - if the current language version
of the compiler is at least that version, the block will parse as normal.
For inactive blocks, the code will not be parsed and no diagnostics will
be emitted there.
Example:
#if swift(>=2.2)
print("Active")
#else
this code will not parse or emit diagnostics
#endif
https://github.com/apple/swift-evolution/blob/master/proposals/0020-if-swift-version.md
rdar://problem/19823607
The big differences here are that:
1. We no longer use the 4096 trick.
2. Now we store all indices inline, so no mallocing is required and the
value is trivially copyable. We allow much larger indices to be stored
inline, which makes having an unrepresentable index a much smaller issue.
For instance, on a 32 bit platform, in NewProjection, we are able to
represent an index of up to (1 << 26) - 1, which should be more than
enough to handle any interesting case.
3. We can now have up to 7 ptr cases and many more index cases (with each extra
bit needed to represent the index cases lowering the representable range of
indices).
The whole data structure is much simpler and easier to understand as a
bonus. A high level description of the ADT is as follows:
1. A PointerIntEnum for which bits [0, (num_tagged_bits(T*)-1)] are not all
set to 1 represents an enum with a pointer case. This means that one can have
at most ((1 << num_tagged_bits(T*)) - 1) enum cases associated with
pointers.
2. A PointerIntEnum for which bits [0, (num_tagged_bits(T*)-1)] are all set
is either an invalid PointerIntEnum or an index.
3. A PointerIntEnum with all bits set is an invalid PointerIntEnum.
4. A PointerIntEnum for which bits [0, (num_tagged_bits(T*)-1)] are all set
but for which the upper bits are not all set is an index enum. The case bits
for the index PointerIntEnum are stored in bits [num_tagged_bits(T*),
num_tagged_bits(T*) + num_index_case_bits - 1]. Then the actual index is
stored in the remaining top bits. For the case in which this is currently
used in Swift, we use 3 index bits, meaning that on a 32 bit system we have
26 bits for representing indices, so we can represent indices up to
67_108_862. Any index larger than that will result in an invalid
PointerIntEnum. On 64 bit we have many more bits than that.
By using this representation, we can make PointerIntEnum a true value type
that is trivially constructible and destructible without needing to malloc
memory.
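To make the layout concrete, here is a sketch of how an index case could be
packed and unpacked under that scheme, assuming 3 tagged pointer bits and 3
index-case bits (illustrative only, not the actual PointerIntEnum code):

#include <cstdint>

constexpr unsigned NumTaggedBits = 3;     // num_tagged_bits(T*)
constexpr unsigned NumIndexCaseBits = 3;
constexpr uintptr_t TaggedMask = (uintptr_t(1) << NumTaggedBits) - 1;  // 0b111

// caseNumber is the index kind's ordinal (0 for the first index case);
// the low tagged bits are all set to mark "not a pointer".
constexpr uintptr_t encodeIndexCase(uintptr_t caseNumber, uintptr_t index) {
  return TaggedMask | (caseNumber << NumTaggedBits) |
         (index << (NumTaggedBits + NumIndexCaseBits));
}

constexpr uintptr_t decodeIndex(uintptr_t storage) {
  return storage >> (NumTaggedBits + NumIndexCaseBits);
}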
In order for all of this to work, the user needs to define an enum with the
appropriate case structure, one that lets the data structure determine which
cases are pointers and which are indices. For instance, the one used by
Projection in Swift is:
enum class NewProjectionKind : unsigned {
  // PointerProjectionKinds
  Upcast = 0,
  RefCast = 1,
  BitwiseCast = 2,
  FirstPointerKind = Upcast,
  LastPointerKind = BitwiseCast,

  // This needs to be set to ((1 << num_tagged_bits(T*)) - 1). It
  // represents the first NonPointerKind.
  FirstIndexKind = 7,

  // Index Projection Kinds
  Struct = PointerIntEnumIndexKindValue<0, EnumTy>::value,
  Tuple = PointerIntEnumIndexKindValue<1, EnumTy>::value,
  Index = PointerIntEnumIndexKindValue<2, EnumTy>::value,
  Class = PointerIntEnumIndexKindValue<3, EnumTy>::value,
  Enum = PointerIntEnumIndexKindValue<4, EnumTy>::value,
  LastIndexKind = Enum,
};