- code simplification critical for comprehension
- substantially reduces the overhead of AccessedStorage comparison
- as a side effect improves precision of analysis in some cases
AccessedStorage is meant to be an immutable value type that identifies
a storage location with minimal representation. It is used in many global
interprocedural data structures.
The RefElementAddress instruction that it was derived from does not
contribute to the uniqueness of the storage location. It doesn't
belong here. It was being used to create a ProjectionPath, which is an
extremely inefficient way to compare access paths.
Just delete all the code related to that extra field.
Cleaning up in preparation for making changes that improve
compile-time issues in AccessEnforcementOpts.
This is a simple but important enum with a handful of cases. The cases
need to be easily referenced from the header. Don't define them in a
separate .def. Remove the visitor boilerplate because it doesn't serve
any purpose.
This enum is meant to be used with covered switches. The enum cases do
not have their own types, so there's no performance reason to use a
Visitor pattern.
It should not be possible to add a case to this enum without carefully
considering the impact on the encoding of this class and the impact on
each and every one of the uses. We always want covered switches at the
use sites.
This is a major improvement in readability and usability both in the
definition of the class and in the one place where a visitor was used.
This adds a mostly flow-insensitive analysis that runs before the
dominator-based transformations. The analysis is simple and efficient
because it only needs to track data flow of currently in-scope
accesses. The original dominator tree walk remains simple, but it now
checks the flow-insensitive analysis information to determine general
correctness. This is now correct in the presence of all kinds of nested
static and dynamic accesses, call sites, coroutines, etc.
This is a better compromise than:
(a) disabling the pass and taking a major performance loss.
(b) converting the pass itself to a full-fledged data-flow-driven
optimization, which could produce better results because it would also
remove accesses when nesting is involved, but would be much more
expensive and complicated, and there's no indication that it's useful.
The new approach is also simpler than adding more complexity to
independently handle each of many issues (the first of which is sketched
in SIL after this list):
- Nested reads followed by a modify without a false conflict.
- Reads nested within a function call without a false conflict.
- Conflicts nested within a function call without dropping enforcement.
- Accesses within a generalized accessor.
- Conservative treatment of invalid storage locations.
- Conservative treatment of unknown apply callee.
- General analysis invalidation.
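A rough SIL sketch of the first issue above (hypothetical names and types;
real cases typically arise after inlining):

%outer = begin_access [read] [dynamic] %adr : $*Int64
%inner = begin_access [read] [dynamic] %adr : $*Int64  // nested read of the same storage: not a conflict
end_access %inner : $*Int64
end_access %outer : $*Int64
%mod = begin_access [modify] [dynamic] %adr : $*Int64  // later modify, outside both read scopes
end_access %mod : $*Int64

With the in-scope tracking, the nested read does not register a false
conflict against the later modify.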
Some of these issues also needed to be considered in the
LoopDominatingAccess sub-pass. Rather than fix that sub-pass, I just
integrated it into the main pass. This is simpler, more efficient,
and handles nested loops without creating more redundant
accesses. It is also generalized to:
- Hoist non-uniquely identified accesses.
- Avoid unnecessarily promoting accesses inside the loop.
With this approach we can remove the scary warnings and caveats in the
comments.
While doing this I also took the opportunity to eliminate quadratic
behavior, make the domtree walk non-recursive, and eliminate cutoff
thresholds.
Note that simple nested dynamic reads to identical storage could very
easily be removed via separate logic, but it does not fit with the
dominator-based algorithm. For example, during the analysis phase, we
could simply mark the "fully nested" read scopes, then convert them to
[static] right after the analysis, removing them from the result
map. I didn't do this because I don't know if it happens in practice.
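In SIL terms, that conversion would look like this (hypothetical values;
again, not implemented here):

%outer = begin_access [read] [dynamic] %adr : $*Int64
%inner = begin_access [read] [dynamic] %adr : $*Int64  // fully nested read of identical storage
end_access %inner : $*Int64
end_access %outer : $*Int64

with the inner access becoming

%inner = begin_access [read] [static] %adr : $*Int64   // the outer dynamic check already covers it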
This simplifies some boilerplate (in particular, some SIL verifier
logic) and fixes a couple of bugs related to always-loadable reference
storage types.
This pattern is normally folded away:
%ga = global_addr @gvar : $*Int64
%ptr = address_to_pointer %ga : $*Int64 to $Builtin.RawPointer
%adr = pointer_to_address %ptr : $Builtin.RawPointer to [strict] $*Int64
%access = begin_access [read] [dynamic] [no_nested_conflict] %adr : $*Int64
However, now that we handle address type phi arguments in the SIL
verifier, we could see this pattern. [In the long term, when
address-type phis are universally prohibited, all of this stuff
becomes irrelevant.]
Fixes <rdar://47555992> [Source Compat] AudioKit: SIL verification
failed: Unknown formal access pattern: storage.
Fixes infinite recursion in findAccessedStorage. This routine was not
designed to be recursive, but recursion was recently added as a hack
for address-type phis. Naturally, phis can be cyclic, so this was not
a correct workaround.
This is only a problem with enforce-exclusivity=checked, with which
findAccessedStorage is invoked after arbitrary optimization passes that
introduce address-type phis. Ultimately, address phis will be
disallowed in SIL, but some optimizer passes must be fixed first.
Fixes <rdar://problem/47059671> swiftlang-1001.0.31.11 root: swiftc crashes
This is the first in a sequence of patches that implement various optimizations
to transform load [copy] into load_borrow.
The optimization works by looking for a load [copy] that:
1. Only has destroy_value as consuming users. This implies that we do not need
to pass off the in-memory value at +1 and that we can use a +0 value.
2. Is loading from a memory location that is never written to or only written to
after all uses of the load [copy].
and then RAUWs the load [copy] with a load_borrow and converts the
destroy_value to end_borrow.
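Roughly, the transformation looks like this (hypothetical SIL; %adr must not
be written to during the borrow scope):

%v = load [copy] %adr : $*C
...                            // only nonconsuming uses of %v
destroy_value %v : $C

becomes

%v = load_borrow %adr : $*C
...                            // the same uses, now at +0
end_borrow %v : $C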
NOTE: I also added a .def file for AccessedStorage so we can do visitors over the
kinds. The reason I want to do this is to ensure that people update these
optimizations if we add new storage kinds.
The SIL verifier was asserting while attempting to prove that all
formal accesses are well-formed. This is necessary because an
unrecognized access could lead to invalid whole-module optimization.
We would like to eliminate address-phis from SIL, but there are still
optimizer passes that produce them. For now, the best we can do is
hope to recover the original base of each phi value and prove they are
actually the same value. They should be because the optimizer
produced the phi through block cloning.
Fixes <rdar://problem/46114512> SIL verification failed: Unknown
formal access pattern: storage
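For example, block cloning can produce a pattern like this (hypothetical SIL):

bb1:
  %a1 = global_addr @gvar : $*Int64
  br bb3(%a1 : $*Int64)
bb2:
  %a2 = global_addr @gvar : $*Int64
  br bb3(%a2 : $*Int64)
bb3(%adr : $*Int64):
  %access = begin_access [read] [dynamic] %adr : $*Int64

Both incoming values resolve to the same global storage, so the verifier can
accept the access instead of reporting an unknown formal access pattern.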
This includes:
1) Not crashing in AccessEnforcementOpts in case we have an Unidentified storage access - rdar://problem/45956642 and rdar://problem/45956777
2) Actually supporting finding the storage locations if we do begin_access <block argument>
3) Test case for said support
This silences the instances of the warning from Visual Studio about not all
codepaths returning a value. This makes the output more readable and less
likely to lose useful warnings. NFC.
I changed all of the places that used end_borrow_argument to use end_borrow.
NOTE: I discovered in the process of this patch that we are not verifying
guaranteed block arguments completely. I disabled the tests here that show this
bad behavior and am going to re-enable them with more tests in a separate PR.
This has not been a problem since SILGen does not emit any such arguments as
guaranteed today. But once I do the SILGenPattern work this will change.
rdar://33440767
Consider a class ‘C’ with distinct fields ‘A’ and ‘B’,
and consider that we are accessing C.A and C.B inside a loop.
LICM will not hoist the exclusivity checks out of the loop because isDistinctFrom(C.A, C.B) returns false.
This is because the helper function bails if isUniquelyIdentified returns false (which is the case for class storage).
Same with all other potential access enforcement optimizations.
This PR resolves that.
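In SIL, the loop body contains something like this (hypothetical, simplified):

%a = ref_element_addr %c : $C, #C.A
%accessA = begin_access [modify] [dynamic] %a : $*Int64
...
end_access %accessA : $*Int64
%b = ref_element_addr %c : $C, #C.B
%accessB = begin_access [modify] [dynamic] %b : $*Int64
...
end_access %accessB : $*Int64

With this change, accesses to distinct fields of the same class instance are
recognized as distinct, so the dynamic checks can be hoisted out of the loop.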
Add support to static diagnostics for tracking noescape closures through block
arguments.
Improve the noescape closure SIL verification logic to match. Clean up
noescape closure handling in both diagnostics and SIL verification to be
more robust; they must perfectly match each other.
Fixes <rdar://problem/42560459> [Exclusivity] Failure to statically diagnose a
conflict when passing conditional noescape closures.
Initially reported in [SR-8266] Compiler crash when checking exclusivity of
inout alias.
Example:
struct S {
  var x: Int

  mutating func takeNoescapeClosure(_ f: ()->()) { f() }

  mutating func testNoescapePartialApplyPhiUse(z : Bool) {
    func f1() {
      x = 1 // expected-note {{conflicting access is here}}
    }
    func f2() {
      x = 1 // expected-note {{conflicting access is here}}
    }
    takeNoescapeClosure(z ? f1 : f2)
    // expected-error@-1 2 {{overlapping accesses to 'self', but modification requires exclusive access; consider copying to a local variable}}
  }
}
For now, the accessors have been underscored as `_read` and `_modify`.
I'll prepare an evolution proposal for this feature which should allow
us to remove the underscores or, y'know, rename them to `purple` and
`lettuce`.
`_read` accessors do not make any effort yet to avoid copying the
value being yielded. I'll work on it in follow-up patches.
Opaque accesses to properties and subscripts defined with `_modify`
accessors will use an inefficient `materializeForSet` pattern that
materializes the value to a temporary instead of accessing it in-place.
That will be fixed by migrating to `modify` over `materializeForSet`,
which is next up after the `read` optimizations.
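For reference, an opaque in-place access through a `_modify` coroutine
roughly lowers to a begin_apply/end_apply pair instead of materializing a
temporary (hypothetical SIL; @modify_x, $S, and the types are made up):

%accessor = function_ref @modify_x : $@yield_once @convention(method) (@inout S) -> @yields @inout Int
(%adr, %token) = begin_apply %accessor(%self) : $@yield_once @convention(method) (@inout S) -> @yields @inout Int
// mutate the yielded address in place
end_apply %token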
SIL ownership verification doesn't pass yet for the test cases here
because of a general fault in SILGen where borrows can outlive their
borrowed value due to being cleaned up on the general cleanup stack
when the borrowed value is cleaned up on the formal-access stack.
Michael, Andy, and I discussed various ways to fix this, but it seems
clear to me that it's not in any way specific to coroutine accesses.
rdar://35399664
Narrow the list of address producers that verification can "peek through"
to avoid triggering an assertion, added in the previous commit, under
-enable-verify-exclusivity.
My previous attempt to strengthen this verification ignored the fact that a
projection or addressor that returns UnsafePointer can have nested access after
being passed as an @inout argument. After inlining, this caused the verifier to
assert.
Instead, handle access to an UnsafePointer as a valid but unidentified access. I
was trying to avoid this because an UnsafePointer could refer to a global or
class property, which we can never consider "Unidentified". However, this is
reasonably safe because it can only result from a _nested_ access. If a global
or class property is being addressed, it must already have its own dynamic
access within the addressor, and that access will be exposed to the
optimizer. If a non-public KeyPath is being addressed, a keypath instruction
must already exist somewhere in the module which is exposed to the optimizer.
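The kind of pattern involved looks roughly like this (hypothetical SIL,
after inlining a call that received the addressor's UnsafePointer):

%raw = struct_extract %ptr : $UnsafePointer<Int64>, #UnsafePointer._rawValue
%adr = pointer_to_address %raw : $Builtin.RawPointer to [strict] $*Int64
%access = begin_access [modify] [dynamic] %adr : $*Int64

The verifier now classifies this as a valid but Unidentified access rather
than asserting.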
Fixes <rdar://problem/41660554> Swift CI (macOS release master, OSS): SIL verification failed: Unknown formal access pattern: storage.
* Teach findAccessedStorage about global addressors.
AccessedStorage now properly represents access to global variables, even if they
haven't been fully optimized down to global_addr instructions.
This is essential for optimizing dynamic exclusivity checks. As a
verified SIL property, all access to globals and class properties
needs to be identifiable.
* Add stronger SILVerifier support for formal access.
Ensure that all formal access follows recognizable patterns
at all points in the SIL pipeline.
This is important for running access enforcement optimization late in the pipeline.
Use AccessedStorageAnalysis to find access markers with no nested conflicts.
This optimization analyzes the scope of each access to determine
whether it contains a potentially conflicting access. If not, then it
can be demoted to an instantaneous check, which still catches
conflicts on any enclosing outer scope.
This removes up to half of the runtime calls associated with
exclusivity checking.
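Concretely, an access whose scope contains no potentially conflicting access
(hypothetical SIL):

%access = begin_access [modify] [dynamic] %adr : $*Int64
...                                  // no potentially conflicting access in here
end_access %access : $*Int64

is demoted to

%access = begin_access [modify] [dynamic] [no_nested_conflict] %adr : $*Int64
...
end_access %access : $*Int64

so the runtime check becomes instantaneous and no longer needs to track an
open scope, while still catching conflicts with any enclosing outer scope.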
A mandatory pass will clean it up and replace it with a copy_block and
is_escaping/cond_fail/release combination on the %closure in follow-up
patches.
The instruction marks the dependence of a block on a closure that is
used as a 'withoutActuallyEscaping' sentinel.
rdar://39682865
An interprocedural analysis pass that summarizes the dynamically
enforced formal accesses within a function. These summaries will be
used by a new AccessEnforcementOpts pass to locally fold access scopes
and remove dynamic checks based on whole module analysis.
Added new files, MemAccessUtils.h and MemAccessUtils.cpp, to house the
new utility. Three functions previously in InstructionUtils.h moved
here:
- findAccessedAddressBase
- isPossibleFormalAccessBase
- visitAccessedAddress
Rather than working with SILValues, these routines now work with the
AccessedStorage abstraction. This allows enforcement logic and SIL
pattern recognition to be shared across diagnostics and
optimization. (It's very important for this to be consistent).
The new AccessedStorage utility is a superset of the class that was
local to DiagnoseStaticExclusivity. It exposes the full set of all
recognized kinds of storage. It also represents function arguments as
an index rather than a SILValue. This allows an analysis pass to
compare/merge AccessedStorage results from multiple callee functions,
or more naturally propagate from callee to caller context.