Mask the setup overhead from copying the existential array in `Existential.Array.Mutating` by increasing the workload (5x). This keeps the copying overhead below 5%.
Remove the misguided attempt to solve this problem with the `grabArray` method; there is no way to avoid this overhead, because every sample has to start from a fresh copy.
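A minimal sketch of the measured loop, assuming a hypothetical `Existential` protocol and `existentialArray` fixture in place of the real declarations from ExistentialPerformance.swift:

```swift
// Hypothetical stand-ins for the protocol and fixture used by the benchmark.
protocol Existential {
  mutating func mutateIt() -> Bool
}

var existentialArray: [Existential]!  // prepared by the benchmark's setUpFunction

func run_ExistentialArrayMutating(_ n: Int) {
  // Every sample must start from a fresh copy, so the first mutation
  // always pays for one COW copy of the existential array.
  var array = existentialArray!
  // 5x workload: the fixed cost of that copy is diluted to under 5%
  // of the total measured time.
  for _ in 0 ..< n * 5 {
    for i in 0 ..< array.count {
      _ = array[i].mutateIt()
    }
  }
}
```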
The technique from the preceding commit can be used to fully determine the tested type variant in the `setUpFunction` and to use non-generic `runFunction`s for all the benchmarks.
This eliminates 202 lines of boilerplate.
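A minimal sketch of that technique, with illustrative type and function names rather than the exact ones from the file: the concrete type is chosen once in a type-specific `setUpFunction`, so the run function needs no generic parameter.

```swift
// Hypothetical stand-ins for the existential protocol and two conformers.
protocol Existential {
  func doIt() -> Bool
}
struct IntBox: Existential {
  var x = 0
  func doIt() -> Bool { return x == 0 }
}
final class Ref {}
struct RefBox: Existential {
  let ref = Ref()
  func doIt() -> Bool { return true }
}

var existential: Existential!  // the variant under test, set before measurement

// Type-specific setup functions fully determine the tested variant.
func setUpIntBox() { existential = IntBox() }
func setUpRefBox() { existential = RefBox() }

// A single non-generic run function serves every variant, replacing the
// per-type generic `runFunction` wrappers.
func run_ExistentialMethod(_ n: Int) {
  for _ in 0 ..< n {
    _ = existential.doIt()
  }
}
```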
Benchmarks from the Array group (except `init`) no longer use the `withType` parameter of the `run_` function. The type-specific variation is handled in the `BenchmarkInfo`, since the `existentialArray` with the appropriate type is created by the `setUpFunction`. This means we can reuse the same non-generic `run_` function for the whole group and eliminate all the specialized `runFunction` variants.
This eliminates 144 lines of boilerplate.
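For the Array group, the registration might look roughly like this; a sketch assuming the suite's `BenchmarkInfo(name:runFunction:tags:setUpFunction:)` initializer from TestsUtils, with illustrative type names, benchmark names, and tag values:

```swift
import TestsUtils  // assumed benchmark harness module providing BenchmarkInfo

// Hypothetical stand-ins for the existential protocol and its conformers.
protocol Existential { func doIt() -> Bool }
struct ValBox: Existential { var x = 0; func doIt() -> Bool { return x == 0 } }
final class Ref {}
struct RefBox: Existential { let r = Ref(); func doIt() -> Bool { return true } }

var existentialArray: [Existential]!

// The setUpFunction creates `existentialArray` with the appropriate
// concrete element type, so the run function stays non-generic.
func setUpArrayVal() { existentialArray = Array(repeating: ValBox() as Existential, count: 128) }
func setUpArrayRef() { existentialArray = Array(repeating: RefBox() as Existential, count: 128) }

// One run function for the whole group; no `withType` parameter needed.
func run_ExistentialArrayMethod(_ n: Int) {
  let array = existentialArray!
  for _ in 0 ..< n {
    for e in array { _ = e.doIt() }
  }
}

// Two variants of the same benchmark differ only in their setUpFunction.
public let benchmarks: [BenchmarkInfo] = [
  BenchmarkInfo(name: "Existential.Array.method.Val",   // illustrative names
                runFunction: run_ExistentialArrayMethod,
                tags: [.api, .Array],
                setUpFunction: setUpArrayVal),
  BenchmarkInfo(name: "Existential.Array.method.Ref",
                runFunction: run_ExistentialArrayMethod,
                tags: [.api, .Array],
                setUpFunction: setUpArrayRef),
]
```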
The Existential.Array group had setup overhead caused by the initialization of the existential array. Since this appears to be quite substantial and depends on the existential type, it makes sense to add an `Array.init` benchmark group that measures it explicitly.
The array setup is extracted into 9 type-specific functions. The setup inside the `run_` functions now grabs the array prepared in the `setUpFunction`, excluding the initialization from the measurement.
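A sketch of the split, again with illustrative names: the `Array.init` benchmark measures construction directly, while the other Array benchmarks grab the array that a type-specific setup function prepared.

```swift
// Hypothetical stand-ins; `setUpExistentialArrayVal` plays the role of one
// of the 9 type-specific setup functions described above.
protocol Existential { func doIt() -> Bool }
struct ValBox: Existential { var x = 0; func doIt() -> Bool { return x == 0 } }

var existentialArray: [Existential]!

// Builds the array before the measurement starts.
func setUpExistentialArrayVal() {
  existentialArray = Array(repeating: ValBox() as Existential, count: 128)
}

// Existential.Array.init: measures exactly the initialization cost that
// previously showed up as setup overhead in the other Array benchmarks.
func run_ExistentialArrayInit(_ n: Int) {
  for _ in 0 ..< n {
    let array: [Existential] = Array(repeating: ValBox(), count: 128)
    _ = array.count  // use the result so the work is not trivially discarded
  }
}

// The other Array benchmarks grab the prepared array at the top of the
// run function, keeping the initialization out of the measurement.
func run_ExistentialArrayMethod(_ n: Int) {
  let array = existentialArray!
  for _ in 0 ..< n {
    for e in array { _ = e.doIt() }
  }
}
```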
This helped remove the setup overhead from most cases, but it appears that the mutating tests still have measurable overhead because they perform a COW copy. I tried to work around this by transferring ownership of the pre-initialized array with the `grabArray` function, but nilling the IUO was crashing at runtime. This should be fixed later.
Renamed tests according to the naming convention proposed in PR #20334.
They now fit the 40-character limit and are structured into groups and variants.
Also extracted the tags into 2 constants to simplify the `[BenchmarkInfo]` expression.
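The tag constants might look like this; a sketch assuming TestsUtils, with stub functions and illustrative constant names, tag values, and benchmark names following the group/variant shape described above:

```swift
import TestsUtils  // assumed benchmark harness module

// Illustrative stubs standing in for the real setup and run functions.
func setUpRef1() {}
func setUpArrayRef1() {}
func run_Method(_ n: Int) {}
func run_ArrayMethod(_ n: Int) {}

// The two shared tag constants (tag values are illustrative).
let t: [BenchmarkCategory] = [.api]
let ta: [BenchmarkCategory] = [.api, .Array]

public let benchmarks: [BenchmarkInfo] = [
  // Group-and-variant names that fit the 40-character limit:
  BenchmarkInfo(name: "Existential.method.1x.Ref1",
                runFunction: run_Method, tags: t, setUpFunction: setUpRef1),
  BenchmarkInfo(name: "Existential.Array.method.Ref1",
                runFunction: run_ArrayMethod, tags: ta, setUpFunction: setUpArrayRef1),
]
```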
Painstakingly reverse-engineered the boilerplate generator that produces ExistentialPerformance.swift (modulo a few whitespace differences), to ease maintenance during the upcoming refactoring.
Consider a class `C` with distinct fields `A` and `B`, and consider accessing `C.A` and `C.B` inside a loop.
LICM will not hoist the exclusivity checks out of the loop, because `isDistinctFrom(C.A, C.B)` returns false.
This is because the helper function bails out when `isUniquelyIdentified` returns false (which is the case for class-kind accesses).
The same applies to all other potential access enforcement optimizations.
This PR resolves that.
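A Swift-level illustration of the pattern (the class, field, and function names are hypothetical; the `isDistinctFrom`/`isUniquelyIdentified` behavior is as described above):

```swift
// With exclusivity checking enabled, each access to `c.a` and `c.b` below
// is guarded by a dynamic begin_access/end_access pair.
final class C {
  var a = 0  // field 'A'
  var b = 0  // field 'B'
}

func accumulate(_ c: C, _ n: Int) -> Int {
  var result = 0
  for _ in 0 ..< n {
    c.a += 1         // access to C.A
    result += c.b    // access to C.B
  }
  return result
}
// The two accesses are to distinct stored properties of the same class
// instance and can never conflict, so LICM could hoist their exclusivity
// checks out of the loop. Before this change the distinctness query bailed
// out for class-kind (non-uniquely-identified) accesses, leaving the checks
// inside the loop.
```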