Strict aliasing only applies to memory operations that use strict
addresses. The optimizer needs to be aware of this flag: uses of raw
addresses must not have their address substituted with a strict
address.
Also add Builtin.LoadRaw, which will be used by raw pointer loads.
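A hedged, user-level sketch (readCount is a made-up helper, not from this
commit): a load through UnsafeRawPointer is the kind of raw, untyped access
this is about, in contrast to a strict, typed access through
UnsafeMutablePointer<T>.

  // Hypothetical illustration: an untyped load through a raw pointer.
  // At the SIL level this should remain a raw (non-strict) load; the
  // optimizer must not rewrite it to go through a strict address.
  func readCount(_ raw: UnsafeRawPointer) -> Int {
    return raw.load(as: Int.self)   // raw, untyped load
  }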
This is safe because the closure is not allowed to capture the array,
according to the documentation of 'withUnsafeMutableBufferPointer', and the
current implementation makes sure that any such capture would observe an
empty array by swapping self with an empty array.
Users will get "almost guaranteed" stack promotion for small arrays by writing
something like:
func testStackAllocation(p: Proto) {
  var a = [p, p, p]
  a.withUnsafeMutableBufferPointer {
    let array = $0
    work(array)
  }
}
It is "almost guaranteed" because we need to statically be able to tell the size
required for the array (no unspecialized generics) and the total buffer size
must not exceed 1K.
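For contrast, a hedged sketch of a case that would not qualify: in an
unspecialized generic context the size of the element type is not known
statically, so the required buffer size cannot be computed at compile time.

  // Hypothetical counter-example: no stack promotion is expected here,
  // because the size of T (and therefore of the array buffer) is unknown
  // until the function is specialized.
  func testNoStackAllocation<T>(p: T) -> Int {
    var a = [p, p, p]
    return a.withUnsafeMutableBufferPointer { $0.count }
  }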
We were giving special handling to ApplyInst when attempting to use
getMemoryBehavior(). This commit changes the special handling to work on all
full apply sites instead of just ApplyInst. Additionally, we now look through
partial applies and thin-to-thick function conversions.
I also added a dumper called BasicInstructionPropertyDumper that just dumps the
results of SILInstruction::get{Memory,Releasing}Behavior() for all instructions
in order to verify this behavior.
Currently the array.get_element calls return the element as an indirect
result. The generic specializer will be changed so that the element can be
returned as a direct result.
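A hedged Swift-level illustration (element(of:at:) is a made-up helper): in
the unspecialized generic function the element's layout is unknown, so the
value comes back indirectly (in memory); in a specialization for a concrete
type such as Int it can be returned directly.

  // Hypothetical example: unspecialized, T is returned indirectly;
  // specialized for Int, the element can be returned as a direct result.
  func element<T>(of a: [T], at i: Int) -> T {
    return a[i]
  }

  let x: Int = element(of: [1, 2, 3], at: 1)   // specialized for Int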
SILValue.h/.cpp just define the SIL base classes; referring to specific
instructions there is a (small) layering violation. Also, I want to keep
SILValue small so that it is really just a type alias for ValueBase*.
NFC.
As there are no instructions left which produce multiple result values, this
is an NFC regarding the generated SIL and generated code.
Although this commit is large, most changes are straightforward adaptations to
the changes in the ValueBase and SILValue classes.
Use project_box to get to the address value. SILGen now generates a
project_box for each alloc_box, and IRGen re-uses the address value from the
alloc_box if the operand of the project_box is an alloc_box. This keeps the
generated code the same as before.
Other than that, most changes in this (quite large) commit are
straightforward.
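For reference, a hedged Swift-level example of where these instructions show
up (makeCounter is made up): a mutable local captured by an escaping closure
is heap-boxed, which SILGen emits as an alloc_box followed by a project_box
that yields the address used by the loads and stores.

  // Hypothetical example: 'count' is captured and mutated by an escaping
  // closure, so it lives in a box: alloc_box creates the box, project_box
  // produces the address of the boxed value.
  func makeCounter() -> () -> Int {
    var count = 0
    return {
      count += 1
      return count
    }
  }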
The code in question was the following:
  auto *RetainArray = dyn_cast_or_null<StrongRetainInst>(getInstBefore(Call));
  if (!RetainArray && MayHaveBridgedObjectElementType)
    return false;
  auto *ReleaseArray = dyn_cast_or_null<StrongReleaseInst>(getInstAfter(Call));
  if (!ReleaseArray && MayHaveBridgedObjectElementType)
    return false;
  if (ReleaseArray &&
      ReleaseArray->getOperand() != RetainArray->getOperand())
    return false;
The last if does not check that RetainArray is non-null before dereferencing
it, even though it is possible for it to be nullptr at that point.
Found by clang static analyzer.
Having a separate address and container value returned from alloc_stack is not really needed in SIL.
Even if they differ, we have both addresses available during IRGen, because a
dealloc_stack is always dominated by the corresponding alloc_stack in the same
function.
Although this commit is quite large, most changes are trivial. The largest
non-trivial change is in IRGenSIL.
This commit is an NFC regarding the generated code. Even the generated SIL is
the same (except for the removed #0, #1 and @local_storage).
(libraries now)
It has been generally agreed that we need to do this reorg, and now
seems like the perfect time. Some major pass reorganization is in the
works.
This does not have to be the final word on the matter. The consensus
among those working on the code is that it's much better than what we
had and a better starting point for future bike shedding.
Note that the previous organization was designed to allow separate
analysis and optimization libraries. It turns out this is an
artificial distinction and not an important goal.