Reformatting everything now that we have explicit `llvm` namespaces. I've
separated this from the main commit to help manage merge conflicts and to
make the mega-patch a bit easier to read.
This is phase 1 of switching from llvm::Optional to std::optional in the
next rebranch. llvm::Optional was removed from upstream LLVM, so we need
to migrate off it rather soon. On Darwin, std::optional and llvm::Optional
have the same layout, so beyond the name mangling we don't need to be
especially concerned about ABI. `llvm::Optional` is returned from only one
function:
```
getStandardTypeSubst(StringRef TypeName,
                     bool allowConcurrencyManglings);
```
It's the return value, so it should not impact the mangling of the
function, and the layout is the same as `std::optional`'s, so it should be
mostly okay. This function doesn't appear to have users, and its ABI was
already broken two years ago for concurrency without anyone seeming to
notice, so this should be "okay".
I'm doing the migration incrementally so that folks working on main can
cherry-pick back to the release/5.9 branch. Once 5.9 is done and locked
away, we can go through and finish the replacement. Since `None` and
`Optional` show up in contexts where they are not `llvm::None` and
`llvm::Optional`, I'm preparing the work now by removing the namespace
unwrapping and making the `llvm` namespace explicit. This should make it
fairly mechanical to replace llvm::Optional with std::optional and
llvm::None with std::nullopt. It's also a change that can be brought onto
the release/5.9 branch with minimal impact. This should be an NFC change.
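To illustrate how mechanical the eventual substitution should be, here is a
hypothetical before/after sketch (the function is made up; only the
spellings of Optional/None change):
```
#include "llvm/ADT/Optional.h"
#include "llvm/ADT/StringRef.h"
#include <optional>

// Phase 1 (this change): the llvm namespace is spelled out explicitly
// instead of relying on `using namespace llvm` to unwrap it.
llvm::Optional<unsigned> findIndexBefore(llvm::StringRef Name) {
  if (Name.empty())
    return llvm::None;
  return 0;
}

// Phase 2 (after the 5.9 branch): a purely textual substitution,
// llvm::Optional -> std::optional and llvm::None -> std::nullopt.
std::optional<unsigned> findIndexAfter(llvm::StringRef Name) {
  if (Name.empty())
    return std::nullopt;
  return 0;
}
```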
Its storage vector is intended to be of some type like
`std::vector<std::pair<Key, Optional<Value>>>`, i.e., some collection of
pairs whose `second` is an `Optional<Value>`. So when constructing a
default instance of that pair, just construct the Optional in its None
state.
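A minimal sketch of that construction, with stand-in Key/Value types (the
names here are illustrative, not the real ones):
```
#include "llvm/ADT/Optional.h"
#include <utility>
#include <vector>

struct Key {};
struct Value {};

using StoragePair = std::pair<Key, llvm::Optional<Value>>;

// Default-constructing the pair leaves `second` in the None state, so no
// Value needs to exist yet.
std::vector<StoragePair> storage;
void appendDefault() { storage.emplace_back(); }
```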
For those unfamiliar, this map is a vector of pairs that we stable sort by
the key when we "freeze" it, so that we can run map operations upon the
keys. This allows one to accumulate into the multi-map and then, once one
has finished accumulating, perform these map operations.
While doing some work in the move checker, I thought I would need the
ability to incrementally update a multi-map by appending more entries. Due
to the design, supporting this is as simple as unfreezing the map; while
unfrozen, one can no longer run map operations without hitting an assert.
When one freezes again, the stable sort will put the new entries in the
appropriate places within the already sorted initial part of the array.
Turns out I didn't need this, but it seemed useful, so I am upstreaming it.
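A hypothetical usage sketch; the exact method names (`setFrozen`,
`unfreeze`) are assumptions based on the description above rather than a
verified API:
```
#include "swift/Basic/FrozenMultiMap.h"

void example(swift::FrozenMultiMap<int, int> &map) {
  map.insert(1, 10);
  map.insert(2, 20);
  map.setFrozen();   // Stable sort by key; map operations are now legal.

  // ... run queries, e.g. map.find(1) ...

  map.unfreeze();    // Back to the mutable phase.
  map.insert(1, 11); // Append more entries.
  // Querying here would hit an assert: the map is not frozen.

  map.setFrozen();   // Re-freezing slots the new entry after the old one.
  // map.find(1) now yields [10, 11], in insertion order.
}
```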
`getValue` -> `value`
`getValueOr` -> `value_or`
`hasValue` -> `has_value`
`map` -> `transform`
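In code, the renames look like this (a minimal illustration, not a snippet
from the patch):
```
#include "llvm/ADT/Optional.h"

int renamedAPIs(llvm::Optional<int> opt) {
  if (opt.has_value())               // was: opt.hasValue()
    return opt.value() + 1;          // was: opt.getValue() + 1
  auto doubled =
      opt.transform([](int v) { return v * 2; }); // was: opt.map(...)
  return doubled.value_or(0);        // was: doubled.getValueOr(0)
}
```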
The old API will be deprecated in the rebranch.
To avoid merge conflicts, use the new API already in the main branch.
rdar://102362022
Beyond allowing us to emit better errors, this will allow me to (in a
subsequent commit) count the number of uses that are "outside" of the
linear lifetime. I can then compare that against a passed-in set of
"non-consuming uses". If the count of uses "outside" of the linear
lifetime equals the count of the passed-in set of "non-consuming uses",
then I know that /all/ non-consuming uses that I am testing against are
not coincident with the linear lifetime, meaning that they cannot affect
(in a local, direct sense) the linear lifetime.
I am going to use that information to determine when it is safe to convert
an inout from a load [copy] to a load_borrow in the face of local
mutations. I can pass the set of local mutations as non-consuming uses to
a linear lifetime consisting of the load [copy]/destroy_values and thus
prove that no writes occur in between the load [copy] and destroy_value,
meaning it is safe to convert them to borrow form.
NOTE: The aforementioned optimization is an extension of an optimization already
in tree that just bails if we have any writes to an inout locally, which is so
unfortunate.
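As a much-simplified sketch of the counting argument (the types here are
hypothetical stand-ins; the real code works over SIL uses):
```
#include <vector>

struct Use {}; // Stand-in for a SIL operand/use.

// Hypothetical summary produced by the linear lifetime check: which of
// the passed-in uses it found to be "outside" the lifetime.
struct LinearLifetimeSummary {
  std::vector<const Use *> usesOutsideLifetime;
};

// If every tested non-consuming use landed outside the lifetime, then none
// of them can (in a local, direct sense) affect the lifetime, so the
// load [copy]/destroy_value pair can be converted to borrow form.
bool allNonConsumingUsesOutside(
    const LinearLifetimeSummary &summary,
    const std::vector<const Use *> &nonConsumingUses) {
  // usesOutsideLifetime is a subset of nonConsumingUses, so equal sizes
  // mean the subset is the whole set.
  return summary.usesOutsideLifetime.size() == nonConsumingUses.size();
}
```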
I implemented this in a similar way to how blotting is implemented in a
blot map vector (see the sketch after this list):
1. I changed this to store (Key, Optional<Value>) pairs.
2. I made it so that once frozen, we can "erase" things from the multimap
by setting the Optional<Value> of each of the key's entries to None.
3. I changed the range we vend to be an OptionalTransformRange instead of
just a TransformRange, so we skip all keys with .none values, meaning that
a user will get the nice behavior that getRange() still works after
erasing.
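Here is a self-contained sketch of the blotting idea (not the actual
FrozenMultiMap code; the names are illustrative):
```
#include "llvm/ADT/Optional.h"
#include <utility>
#include <vector>

// Erase by "blotting out" values in place instead of removing elements, so
// the sorted layout and any outstanding indices stay stable.
template <typename Key, typename Value>
struct BlottedStorage {
  std::vector<std::pair<Key, llvm::Optional<Value>>> pairs;

  void erase(const Key &key) {
    for (auto &p : pairs)
      if (p.first == key)
        p.second = llvm::None; // Blot, don't remove.
  }

  template <typename Fn>
  void forEachLive(Fn fn) {
    for (auto &p : pairs)
      if (p.second)             // Skip blotted entries, playing the role
        fn(p.first, *p.second); // of the OptionalTransformRange.
  }
};
```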
One interesting thing to note is that one /cannot/ erase elements when
initializing the frozen multi-map since we haven't sorted it yet. At first this
seems weird, but it actually fits with the use case of this data structure:
building up state by processing IR in a readonly way and then later working with
it in a worklist like way (and perhaps checking for unhandled cases at the end
of processing).
An additional nice thing is that I was able to ensure that the actual
exposed API did not change in terms of how one uses it; I just changed the
underlying iterators/etc.
This is the simplest initial version that I can commit. The hope is that
this will help to bring this up in a nice way.
I am going to handle the multiple phi node and load [copy] case later to reduce
code churn.
<rdar://problem/56720436>
I have been using this in a bunch of places in the compiler and rather than
implement it by hand over and over (and maybe messing up), this commit just
commits a correct implementation.
This data structure is a map backed by a vector like data structure. It has two
phases:
1. An insertion phase when the map is mutable and one inserts (key, value)
pairs into the map. These are just appended onto the storage array.
2. A frozen phase when the map is immutable and one can now perform map
queries on the multimap.
The map transitions from the mutable, thawed phase to the immutable,
frozen phase by performing a stable_sort of its internal storage by only
the key. Since this is a stable_sort, we know that the relative insertion
order of values is preserved if their keys are equal. Thus the sorting
will have created contiguous regions in the array of values, all mapped to
the same key, that are in insertion order. Thus by finding the lower_bound
for a given key, we are guaranteed to get the first element in that
contiguous range. We can then do a forward search to find the end of the
region, allowing us to then return an ArrayRef to these internal values.
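Here is a condensed, self-contained sketch of that scheme (illustrative
only; the real implementation returns an ArrayRef over the value halves):
```
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

template <typename Key, typename Value>
class MiniFrozenMultiMap {
  std::vector<std::pair<Key, Value>> storage;
  bool frozen = false;

public:
  void insert(const Key &key, const Value &value) {
    assert(!frozen && "can only insert while thawed");
    storage.emplace_back(key, value);
  }

  void setFrozen() {
    // Stable sort by key only; equal keys keep their insertion order.
    std::stable_sort(storage.begin(), storage.end(),
                     [](const auto &lhs, const auto &rhs) {
                       return lhs.first < rhs.first;
                     });
    frozen = true;
  }

  using const_iterator =
      typename std::vector<std::pair<Key, Value>>::const_iterator;

  // Returns the contiguous run of entries for the key, in insertion order.
  std::pair<const_iterator, const_iterator> find(const Key &key) const {
    assert(frozen && "can only query while frozen");
    // lower_bound lands on the first entry of the contiguous region...
    auto start = std::lower_bound(
        storage.begin(), storage.end(), key,
        [](const std::pair<Key, Value> &elt, const Key &k) {
          return elt.first < k;
        });
    // ...and a forward scan finds the end of the region.
    auto end = start;
    while (end != storage.end() && end->first == key)
      ++end;
    return {start, end};
  }
};
```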
The reason why I keep finding myself using this is that this map enables
one to map a key to an array of values without needing to store small
vectors in a map or use separately heap-allocated memory: all (key, value)
pairs are stored inline (potentially in a single SmallVector, given that
one is using SmallFrozenMultiMap).