The main changes are:
*) Rewrite everything in Swift. So far, only parts of the memory-behavior analysis were implemented in Swift; now everything is done in Swift and lives in `AliasAnalysis.swift`. This is a big code simplification.
*) Support many more instructions in the memory-behavior analysis, especially OSSA instructions such as `begin_borrow`, `end_borrow`, `store_borrow` and `load_borrow`. The computation of `end_borrow` effects is now much more precise, and `partial_apply` is handled more precisely as well.
*) Simplify and reduce type-based alias analysis (TBAA). The complexity of the old TBAA stems from the days when the language and SIL didn't have strict aliasing and exclusivity rules (e.g. for inout arguments). Now TBAA is only needed for code using unsafe pointers. The new TBAA handles this, and nothing more. Note that TBAA for classes is already done in `AccessBase.isDistinct`.
*) Handle aliasing in `begin_access [modify]` scopes. We already supported truly immutable scopes like `begin_access [read]` or `ref_element_addr [immutable]`. For `begin_access [modify]` scopes, exclusivity guarantees that there are no other reads or writes of the accessed address within the scope (see the SIL sketch after this list).
*) Don't cache memory-behavior results. It turned out that the hit/miss ratio was pretty bad (about 1:7), and a cache lookup took about as long as recomputing the memory behavior.
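To illustrate the `begin_access [modify]` point above, here is a minimal, hypothetical SIL sketch (function and value names are made up; sil_stage and imports omitted). Because exclusivity rules out any other access to the address covered by the scope, the load through the unrelated pointer-derived address can be assumed not to touch the accessed memory:

```sil
sil @modify_scope : $@convention(thin) (@inout Int, Builtin.RawPointer) -> () {
bb0(%0 : $*Int, %1 : $Builtin.RawPointer):
  %access = begin_access [modify] [static] %0 : $*Int
  %other = pointer_to_address %1 : $Builtin.RawPointer to [strict] $*Int
  // Within the [modify] scope, no other read or write of the accessed
  // address is allowed, so this load is assumed not to alias %access.
  %v = load %other : $*Int
  store %v to %access : $*Int
  end_access %access : $*Int
  %r = tuple ()
  return %r : $()
}
```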
The result of a store_borrow can "escape" to other access bases, like a "pointer" access base. In that case the store-borrow access base is _not_ distinct from the other base.
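A hypothetical sketch of that situation (class and function names are made up): the address produced by `store_borrow` is converted to a raw pointer and returned, so an access base later reconstructed from that pointer is not distinct from the store-borrow access base.

```sil
class C {}

sil [ossa] @store_borrow_escapes : $@convention(thin) (@guaranteed C) -> Builtin.RawPointer {
bb0(%0 : @guaranteed $C):
  %stack = alloc_stack $C
  %borrowed = store_borrow %0 to %stack : $*C
  // The borrowed address escapes as a raw pointer; a "pointer" access base
  // derived from it elsewhere may alias the store_borrow access base.
  %raw = address_to_pointer %borrowed : $*C to $Builtin.RawPointer
  end_borrow %borrowed : $*C
  dealloc_stack %stack : $*C
  return %raw : $Builtin.RawPointer
}
```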
Create a Predecessor abstraction for readability. This eliminates frequent occurrences of inscrutable getInt() and getPointer() calls.
Create an escapesInsideFunction() API that only uses mapped
values. This prevents bugs where a content node's value would be
mistakenly used to check for escapes.
As a structural SIL property, we disallow address-type block
arguments. Supporting them would prevent reliable reasoning about the underlying storage of a memory access. This reasoning is important for
diagnosing violations of memory access rules and supporting future
optimizations such as bitfield packing. Address-type block arguments
also create unnecessary complexity for SIL optimization passes that
need to reason about memory aliasing.
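A hypothetical sketch of the now-disallowed pattern (names are made up). Entry-block address arguments for indirect parameters remain legal; the restriction applies to non-entry blocks such as bb3 below, where the merged address obscures the underlying storage:

```sil
sil @merge_addresses : $@convention(thin) (@inout Int, @inout Int, Builtin.Int1) -> () {
bb0(%a : $*Int, %b : $*Int, %flag : $Builtin.Int1):
  cond_br %flag, bb1, bb2
bb1:
  br bb3(%a : $*Int)
bb2:
  br bb3(%b : $*Int)
bb3(%addr : $*Int):           // invalid: address-type argument on a non-entry block
  %v = load %addr : $*Int     // the access base of %addr can no longer be identified
  store %v to %a : $*Int
  %r = tuple ()
  return %r : $()
}
```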
This must be enforced in raw SIL for diagnosing exclusive memory
access. The optimizer currently breaks the invariant in three places:
1. Normal Simplify CFG during conditional branch simplification
(sneaky jump threading).
2. Simplify CFG via Jump Threading.
3. Loop Rotation.
This was done some time ago to make it easier to diff large amounts of SIL
output. The problem is that it makes it difficult to know the *true* in-memory order of the predecessor list, which can lead to surprises when working with SIL and when creating test cases.
I believe some time after that point we added the notion of "sorted" SIL, i.e. SIL output that does not guarantee any relation to the actual in-memory representation of the SIL and is meant to ease diffing. This fits nicely with the true intention of this sort of sorting.
Thus this commit puts the sorting of PredIDs and UserIDs behind that flag.
Strict aliasing only applies to memory operations that use strict
addresses. The optimizer needs to be aware of this flag. Uses of raw
addresses should not have their address substituted with a strict
address.
Also add Builtin.LoadRaw which will be used by raw pointer loads.
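As a hedged illustration of the strict/raw distinction (the function is made up, and the lowering of Builtin.LoadRaw itself is not shown here): only the `[strict]` form of `pointer_to_address` binds the memory to a type, so only accesses through it participate in strict aliasing, and the raw form must not be rewritten to use a strict address.

```sil
sil @strict_vs_raw : $@convention(thin) (Builtin.RawPointer) -> () {
bb0(%p : $Builtin.RawPointer):
  // Strict address: the pointed-to memory is assumed to be bound to Int,
  // so type-based aliasing rules apply to accesses through it.
  %strict = pointer_to_address %p : $Builtin.RawPointer to [strict] $*Int
  %v = load %strict : $*Int
  // Raw address: no type-based aliasing assumptions may be made, and the
  // optimizer must not substitute a strict address for it.
  %raw = pointer_to_address %p : $Builtin.RawPointer to $*Int
  %w = load %raw : $*Int
  %r = tuple ()
  return %r : $()
}
```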
This reapplies commit 255c52de9f.
Original commit message:
Serialize debug scope and location info in the SIL assembler language.
At the moment it is only possible to test the effects that SIL
optimization passes have on debug information by observing the
effects of a full .swift -> LLVM IR compilation. This change enables us to write targeted test cases for single SIL optimization passes.
The new syntax is as follows:
sil-scope-ref ::= 'scope' [0-9]+
sil-scope ::= 'sil_scope' [0-9]+ '{'
                sil-loc
                'parent' scope-parent
                ('inlined_at' sil-scope-ref)?
              '}'
scope-parent ::= sil-function-name ':' sil-type
scope-parent ::= sil-scope-ref
sil-loc ::= 'loc' string-literal ':' [0-9]+ ':' [0-9]+
Each instruction may have a debug location and a SIL scope reference
at the end. Debug locations consist of a filename, a line number, and
a column number. If the debug location is omitted, it defaults to the
location in the SIL source file. SIL scopes describe the position, within the lexical scope structure, of the Swift expression from which a SIL instruction was originally generated. SIL scopes also hold inlining information.
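For illustration, a small hypothetical example of the new syntax (file name, numbers and function are made up); scope 2 is nested inside scope 1, and each instruction carries its location and scope reference at the end:

```sil
sil_scope 1 { loc "foo.swift":1:6 parent @foo : $@convention(thin) () -> () }
sil_scope 2 { loc "foo.swift":2:8 parent 1 }

sil @foo : $@convention(thin) () -> () {
bb0:
  %0 = tuple (), loc "foo.swift":2:10, scope 2
  return %0 : $(), loc "foo.swift":3:1, scope 1
}
```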
<rdar://problem/22706994>
to disambiguate index_address with the same base but different indices.
But the indices here have to be constant. This is a limitation/design choice
made in the projection code.
In order to handle non-constant indices, we need an analysis to compute the index
difference.
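A hypothetical sketch (names are made up) of the constant-index case that can be disambiguated, using the `index_addr` instruction: the two element addresses share the same base but have distinct constant indices, so they are provably disjoint.

```sil
sil @distinct_indices : $@convention(thin) (Builtin.RawPointer, Int) -> () {
bb0(%p : $Builtin.RawPointer, %v : $Int):
  %base = pointer_to_address %p : $Builtin.RawPointer to [strict] $*Int
  %c0 = integer_literal $Builtin.Word, 0
  %c1 = integer_literal $Builtin.Word, 1
  %e0 = index_addr %base : $*Int, %c0 : $Builtin.Word   // base + 0
  %e1 = index_addr %base : $*Int, %c1 : $Builtin.Word   // base + 1, disjoint from %e0
  store %v to %e0 : $*Int
  store %v to %e1 : $*Int
  %r = tuple ()
  return %r : $()
}
```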
rdar://22484392
As there are no instructions left which produce multiple result values, this is an NFC regarding the generated SIL and generated code.
Although this commit is large, most changes are straightforward adaptations to the changes in the ValueBase and SILValue classes.
Having a separate address and container value returned from alloc_stack is not really needed in SIL.
Even if they differ, we have both addresses available during IRGen, because a dealloc_stack is always dominated by the corresponding alloc_stack in the same function.
Although this commit is quite large, most changes are trivial. The largest non-trivial change is in IRGenSIL.
This commit is an NFC regarding the generated code. Even the generated SIL is the same (except for the removed #0, #1 and @local_storage).
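A minimal sketch of the resulting single-value form of alloc_stack (the function name is made up): the one result is the address, and the same value is later passed to dealloc_stack.

```sil
sil @use_stack_slot : $@convention(thin) (Int) -> () {
bb0(%0 : $Int):
  %slot = alloc_stack $Int          // single result: the address, of type $*Int
  store %0 to %slot : $*Int
  dealloc_stack %slot : $*Int       // the same value is used for deallocation
  %r = tuple ()
  return %r : $()
}
```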