We were using a std::function, which allocated memory. Unfortunately, C++ lambdas
do not have a nameable type, so I had to make the findOrAdd function templated on
the type of the callback. This is not a big deal since the class is already
templated and there is a single call site.
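A minimal sketch of the shape of this change, assuming a hypothetical MetadataCache
backed by a mutex-protected std::unordered_map (the real runtime cache is different);
the interesting part is only the templated findOrAdd signature, which lets the lambda
be passed without a std::function wrapper:

    #include <mutex>
    #include <unordered_map>
    #include <utility>

    template <typename KeyTy, typename ValueTy>
    class MetadataCache {
      std::unordered_map<KeyTy, ValueTy> Entries;
      std::mutex Lock;

    public:
      // CreateFn is deduced from the lambda's own (unnameable) type at the
      // single call site, so no std::function wrapper, and therefore no
      // heap allocation for the closure, is needed.
      template <typename CreateFn>
      ValueTy &findOrAdd(const KeyTy &Key, CreateFn &&Create) {
        std::lock_guard<std::mutex> Guard(Lock);
        auto It = Entries.find(Key);
        if (It != Entries.end())
          return It->second;
        // Invoke the callback only on a cache miss.
        auto Inserted = Entries.emplace(Key, std::forward<CreateFn>(Create)());
        return Inserted.first->second;
      }
    };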
Swift SVN r23919
Before this change, ~35% of the time in getGenericMetadata was spent hashing the inputs. I measured the following speedups:
SwiftStructuresQueue 1.11x
Havlak 1.11x
CaptureProp 1.11x
SwiftStructuresStack 1.12x
Life 1.15x
NestedLoop 1.16x
RangeAssignment 1.24x
Swift SVN r23708
Making the node type of the concurrent map a concurrent list gets rid of the
problem of the map itself allocating nodes, which may have side effects.
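A rough sketch of what an append-only concurrent list can look like, assuming a
simple CAS-based push in which the caller allocates the node, so the list itself
never allocates; the names are illustrative, not the actual runtime types:

    #include <atomic>

    template <typename EntryTy>
    struct ConcurrentListNode {
      EntryTy Payload;
      ConcurrentListNode *Next = nullptr;
    };

    template <typename EntryTy>
    class ConcurrentList {
      std::atomic<ConcurrentListNode<EntryTy> *> Head{nullptr};

    public:
      // Link a caller-allocated node onto the front of the list. Nodes are
      // never freed or reused behind a reader's back.
      void push(ConcurrentListNode<EntryTy> *Node) {
        ConcurrentListNode<EntryTy> *OldHead =
            Head.load(std::memory_order_relaxed);
        do {
          Node->Next = OldHead;
        } while (!Head.compare_exchange_weak(OldHead, Node,
                                             std::memory_order_release,
                                             std::memory_order_relaxed));
      }

      ConcurrentListNode<EntryTy> *front() const {
        return Head.load(std::memory_order_acquire);
      }
    };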
Swift SVN r23443
This commit also reduces the size of the metadata cache by allocating the
map dynamically.
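A hedged sketch of the lazy-allocation idea, assuming a mutex-protected
std::unordered_map as the backing store (the field and type names are assumptions):
a cache that is never queried pays only for a null pointer, and the map is allocated
on first use.

    #include <atomic>
    #include <mutex>
    #include <unordered_map>

    template <typename KeyTy, typename ValueTy>
    class LazyMetadataCache {
      using MapTy = std::unordered_map<KeyTy, ValueTy>;
      std::atomic<MapTy *> Map{nullptr};
      std::mutex Lock;

      // Allocate the backing map on first use; until then the cache object
      // only holds a null pointer.
      MapTy &getOrCreateMap() {
        if (MapTy *Existing = Map.load(std::memory_order_acquire))
          return *Existing;
        std::lock_guard<std::mutex> Guard(Lock);
        if (MapTy *Existing = Map.load(std::memory_order_relaxed))
          return *Existing;
        MapTy *Fresh = new MapTy();
        Map.store(Fresh, std::memory_order_release);
        return *Fresh;
      }

    public:
      ValueTy *lookup(const KeyTy &Key) {
        MapTy &M = getOrCreateMap();
        std::lock_guard<std::mutex> Guard(Lock);
        auto It = M.find(Key);
        return It == M.end() ? nullptr : &It->second;
      }
    };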
There are no regressions and many perf wins:
NestedLoop 1.96x
Life 1.92x
CaptureProp 1.84x
Ary 1.67x
Ary2 1.60x
ArraySubscript 1.57x
ArrayLiteral 1.54x
StdlibSort 1.52x
Havlak 1.51x
TwoSum 1.49x
DollarFilter 1.37x
PrimeNum 1.35x
NBody 1.30x
RangeAssignment 1.29x
DollarFunction 1.28x
Walsh 1.27x
Memset 1.27x
Histogram 1.27x
RIPEMD 1.27x
Swift SVN r23358
This commit removes the locks from the family of getXXXXMetadata APIs on the
fast path, which does not create new metadata. We still need the lock for the
cache-miss case, because constructing metadata has side effects and we must
not construct two copies of the same metadata at the same time.
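A simplified sketch of the fast-path/slow-path split, reduced to a single cache
slot with a double-checked pattern (the real caches are keyed maps, and the names
here are hypothetical): readers take no lock at all, and only a cache miss falls
back to the locked construction path, which re-checks before constructing so the
metadata is built at most once.

    #include <atomic>
    #include <mutex>

    struct Metadata { /* hypothetical stand-in for a metadata record */ };

    namespace {
    std::atomic<Metadata *> CachedMetadata{nullptr};
    std::mutex MetadataLock;

    // Stand-in for the expensive, side-effecting construction step.
    Metadata *constructMetadata() { return new Metadata(); }
    } // namespace

    // Fast path: a single atomic load, no lock.
    // Slow path: take the lock, re-check, and construct at most once.
    Metadata *getCachedMetadata() {
      if (Metadata *Existing = CachedMetadata.load(std::memory_order_acquire))
        return Existing;

      std::lock_guard<std::mutex> Guard(MetadataLock);
      if (Metadata *Existing = CachedMetadata.load(std::memory_order_relaxed))
        return Existing;

      Metadata *Fresh = constructMetadata();
      CachedMetadata.store(Fresh, std::memory_order_release);
      return Fresh;
    }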
I am seeing a 25%-30% performance boost on most workloads built with -Onone.
Swift SVN r23353