In Swift 6 language mode, keypath instructions are created as existentials, and the optimizer needs to look through `open_existential_ref` instructions to recognize a keypath.
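A rough sketch of the pattern involved (simplified SIL; the existential type shown is an assumption for illustration, not exact syntax):
```
%kp     = keypath $any KeyPath<Foo, Int> & Sendable, ...
%opened = open_existential_ref %kp
%result = apply %projection(%root, %opened)
```
The optimizer must trace `%opened` back to the `keypath` literal `%kp`.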
rdar://150173106
Type annotations for instruction operands are omitted, e.g.
```
%3 = struct $S(%1, %2)
```
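For comparison, the old printing spelled out each operand's type (assuming `Int` fields here for illustration):
```
%3 = struct $S (%1 : $Int, %2 : $Int)
```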
Operand types are redundant anyway and were only used for sanity checking in the SIL parser.
But: operand types _are_ printed if the definition of the operand value was not printed yet.
This happens:
* if the block containing the definition appears after the block containing the use
* if a block or instruction is printed in isolation, e.g. in a debugger
The old behavior can be restored with `-Xllvm -sil-print-types`.
This option is added to many existing test files whose CHECK lines match operand types.
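For example, to get the old-style output from the compiler (the source file name is illustrative):
```
swiftc -emit-sil -Xllvm -sil-print-types MyFile.swift
```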
The main changes are:
*) Rewrite everything in Swift. So far, parts of the memory-behavior analysis were already implemented in Swift; now everything is done in Swift and lives in `AliasAnalysis.swift`. This is a big code simplification.
*) Support many more instructions in the memory-behavior analysis - especially OSSA instructions like `begin_borrow`, `end_borrow`, `store_borrow` and `load_borrow`. The computation of `end_borrow` effects is now much more precise, and `partial_apply` is also handled more precisely.
*) Simplify and reduce type-based alias analysis (TBAA). The complexity of the old TBAA dates from the days when the language and SIL didn't have strict aliasing and exclusivity rules (e.g. for inout arguments). Now TBAA is only needed for code using unsafe pointers, and the new TBAA handles exactly that - and not more (see the sketch after this list). Note that TBAA for classes is already done in `AccessBase.isDistinct`.
*) Handle aliasing in `begin_access [modify]` scopes. We already supported truly immutable scopes like `begin_access [read]` or `ref_element_addr [immutable]`. For `begin_access [modify]` we know that there are no other reads or writes of the accessed address within the scope.
*) Don't cache memory-behavior results. It turned out that the hit-to-miss ratio was pretty bad (~ 1:7), and a cache lookup took about as long as recomputing the memory behavior.
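As a sketch of where TBAA still applies (a hypothetical example, not taken from the change): accesses through typed unsafe pointers of unrelated types may be assumed not to alias under Swift's strict aliasing rules for bound memory.
```
func update(_ p: UnsafeMutablePointer<Int>, _ q: UnsafeMutablePointer<Double>) -> Int {
    p.pointee = 1
    q.pointee = 2.0
    // Memory bound to Int and memory bound to Double cannot legally
    // overlap, so this load is unaffected by the store through q.
    return p.pointee
}
```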
Distinguish `ref_tail_addr` storage from the other storage classes.
We didn't have this originally because we don't expect a `begin_access`
to directly operate on tail storage. It could occur after inlining, at
least with static access markers. More importantly, it helps distinguish
regular formal accesses from other unidentified accesses, so we probably
should have always had this.
At any rate, it's particularly important when AccessedStorage is
generalized to arbitrary memory access.
The immediate motivation is to add an AccessPath utility, which will
need to distinguish tail storage.
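For reference, tail storage arises from tail-allocated class instances, e.g. via `ManagedBuffer` (a hypothetical illustration; in SIL the elements are addressed with `ref_tail_addr` rather than `ref_element_addr`):
```
// Header = element count; the elements live in tail-allocated memory
// directly behind the class instance.
final class IntBuffer: ManagedBuffer<Int, Int> {
    var first: Int {
        // assumes at least one element has been initialized
        withUnsafeMutablePointerToElements { $0.pointee }
    }
}
```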
In the process, rewrite AccessedStorage::isDistinct. This could have a
large positive impact on exclusivity performance.
I was inconsistently providing initialized or uninitialized memory
to the callback when projecting a settable address, depending on
component type. We should always provide an uninitialized address.
When a computed property returns a generic, the accessor's function
type may involve a type parameter that needs to be resolved using
the key path instruction's substitution map.
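A minimal example of the situation (hypothetical; the property and types are for illustration):
```
struct Box<T> {
    private var items: [T] = []
    var first: T? { items.first }   // computed property of generic type
}

// The getter's function type involves T; the keypath instruction's
// substitution map resolves T := Int here.
let kp: KeyPath<Box<Int>, Int?> = \Box<Int>.first
```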
We have an optimization in SILCombiner that "inlines" the use of compile-time constant key paths by performing the property access directly instead of calling a runtime function (leading to huge performance gains, e.g. for heavy use of `@dynamicMemberLookup`). However, this optimization previously only supported key paths which solely access stored properties, so key paths involving computed properties, optional chaining, etc. still had to call a runtime function. This commit generalizes the optimization to support all types of key paths.
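A sketch of the kind of code that benefits (a hypothetical wrapper, not from the patch):
```
@dynamicMemberLookup
struct Lens<Value> {
    var value: Value
    subscript<T>(dynamicMember keyPath: WritableKeyPath<Value, T>) -> T {
        get { value[keyPath: keyPath] }
        set { value[keyPath: keyPath] = newValue }
    }
}

struct Point { var x = 0.0, y = 0.0 }
var lens = Lens(value: Point())
// `lens.x` passes the constant key path \Point.x to the subscript; after
// inlining, the optimizer can replace the runtime key path application
// with a direct access to `value.x`.
lens.x = 1.0
```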
If the keypath argument of a keypath access function is a keypath literal instruction, generate the projection inline and remove the call to the access function.
For example, it replaces (simplified SIL):
```
%kp = keypath ... stored_property #Foo.bar
apply %keypath_runtime_function(%root_object, %kp, %addr)
```
with:
```
%addr = struct_element_addr %root_object, #Foo.bar
load/store %addr
```
Currently this only handles stored property patterns.
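For instance, such a key path comes from source like (hypothetical code):
```
struct Foo { var bar: Int }

let kp = \Foo.bar        // constant key path over a stored property
var foo = Foo(bar: 1)
foo[keyPath: kp] = 2     // the projection can be generated inline
```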
rdar://problem/36244734