This disables a bunch of passes when ownership is enabled. It will allow me to
keep transparent functions in ossa and have them skip most of the performance
pipeline, untouched by passes that have not been updated for ownership.
This is important so that, in -Onone code, we can import transparent functions
and inline them into other ossa functions (you can't inline from ossa => non-ossa).
This will let me strip ownership from non-transparent functions at the beginning
of the perf pipeline. Then, after we serialize, I will run OME on the transparent
functions. Otherwise we cannot perform mandatory inlining successfully, since
we cannot inline ossa functions into non-ossa functions.
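For illustration (a hypothetical function, not from this patch), this is the
kind of callee affected:

    // @_transparent requests mandatory inlining, so this function's SIL
    // must stay in ossa form until it has been inlined into its (ossa)
    // callers, even at -Onone.
    @_transparent
    public func unwrapOrZero(_ x: Int?) -> Int {
        return x ?? 0
    }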
The problematic scenario is a function that receives an @in_guaranteed and an @inout parameter where one is a copy of the other value, for example when appending a container to itself.
In this case the optimization removed the copy_addr, resulting in the same stack location being passed to both parameters.
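A minimal Swift trigger for this shape (assuming the container is an Array;
the commit only says "a container"):

    // Appending a container to itself: the callee sees the array through
    // an @inout parameter (self) and an @in_guaranteed parameter (the
    // appended sequence). With the copy_addr removed, both can end up
    // referring to the same stack location.
    var numbers = [1, 2, 3]
    numbers.append(contentsOf: numbers)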
rdar://problem/47632890
This is needed since we are going to start eliminating ownership after SIL is
serialized. So we need to make sure that we strip those as well.
In terms of compile time, this shouldn't have much of an effect since we already
skip functions that already have ownership stripped.
Improves SwiftSyntax release build speed by 4x.
Limit the size of the sets tracked by the interprocedural
AccessedStorageAnalysis. There is a lot of leeway in this limit since
"normal" code doesn't come close to hitting it and SwiftSyntax compile
time isn't noticeably affected until 10x this limit.
This change also avoids reanalyzing function bodies once results have
bottomed out and avoids copying sets in the common case.
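A minimal sketch of the bounding idea, with hypothetical names and an
arbitrary limit (the real analysis and its limit live in
AccessedStorageAnalysis):

    // Once the tracked set exceeds the limit, degrade to a single
    // conservative "unknown" summary instead of growing without bound.
    struct StorageSummary {
        static let limit = 8  // hypothetical; the real limit has lots of leeway
        private(set) var storage: Set<String> = []
        private(set) var isConservative = false

        mutating func record(_ location: String) {
            guard !isConservative else { return }
            storage.insert(location)
            if storage.count > StorageSummary.limit {
                storage.removeAll()   // stop tracking individual locations
                isConservative = true // treat every location as possibly accessed
            }
        }
    }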
Fixes <rdar://problem/46905624> Release build time regression, a lot of time spent on AccessEnforcementOpts::run().
This is just a band-aid. The fundamental problem is really:
<rdar://problem/47195282> Recomputing BottomUpIPAnalysis takes most of
the SwiftSyntax compile time.
SwiftSyntax compile time is still at least an order of magnitude
longer than it should be.
This is not an actual bug, since, given where OME is in the pipeline today, it
will only process ossa functions. But philosophically this is the right thing to do.
We've been running doxygen with the autobrief option for a couple of
years now. This makes the \brief markers in our comments
redundant. Since they are a visual distraction and we don't want to
encourage more \brief markers in new code either, this patch removes
them all.
Patch produced by:

    for i in $(git grep -l '\\brief'); do perl -pi -e 's/\\brief //g' $i & done
* Remove apparently obsolete builtin functions.
- Remove s_to_u_checked_conversion and u_to_s_checked_conversion functions from builtin AST parsing, SIL/IR generation and from SIL optimisations.
* Remove apparently obsolete builtin functions - unit tests.
- Remove unit tests for SIL transformations relating to s_to_u_checked_conversion and u_to_s_checked_conversion builtin functions.
This prepares us to verify, when ownership verification is enabled, that only
enums and trivial values can have any ownership. I am doing this in
preparation for eliminating ValueOwnershipKind::Trivial.
rdar://46294760
A recent SILCloner rewrite removed a special-case hack for single-basic-block
callee functions:
    commit c6865c0dff
    Merge: 76e6c4157e 9e440d13a6
    Author: Andrew Trick <atrick@apple.com>
    Date:   Thu Oct 11 14:23:32 2018

        Merge pull request #19786 from atrick/silcloner-cleanup

        SILCloner and SILInliner rewrite.
Instead, the new inliner simply merges trivial unconditional branches
after inlining the return block. This way, the CFG is always in
canonical state after inlining. This is more robust, and avoids
interfering with subsequent SIL passes when non-single-block callees
are inlined.
The problem is that inlining a series of calls within a large block
could result in interleaved block splitting and merging operations,
which is quadratic in the block size. This showed up when inlining the
tens of thousands of array subscript calls emitted for a large array
initialization.
The first half of the fix is to simply defer block merging until all
calls are inlined. We can't expect SimplifyCFG to run immediately
after inlining, nor would we want to do that, *especially* for
mandatory inlining. This fix instead exposes block merging as a
trivial utility.
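A sketch of the deferred-merge idea with stand-in types (none of these names
are the compiler's):

    // Inlining splits the caller block at each call site; merging the
    // resulting trivial unconditional branches immediately re-scans the
    // large block every time, which is quadratic. Deferring the merges
    // until all calls are inlined keeps the work linear.
    struct BasicBlock { var name: String }

    func inlineCall(_ callee: String) -> BasicBlock {
        // Splitting at the call site yields a new "return block".
        return BasicBlock(name: "\(callee).return")
    }

    func mergeWithSinglePredecessor(_ block: BasicBlock) {
        // Trivial utility: fold an unconditional branch into its predecessor.
        print("merged \(block.name)")
    }

    let calls = ["subscript0", "subscript1", "subscript2"]
    var mergeCandidates: [BasicBlock] = []
    for callee in calls {
        mergeCandidates.append(inlineCall(callee))  // defer the merge
    }
    for block in mergeCandidates {
        mergeWithSinglePredecessor(block)           // one pass at the end
    }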
Note: by eliminating some unconditional branches, this change could
reduce the number of debug locations emitted. This does not
fundamentally change any debug information guarantee, and I was unable
to observe any behavior difference in the debugger.
Currently, if a caller has more than 400 blocks, the inliner bails out of
finding inlinable targets. This is incorrect behavior for inline-always
functions: in such cases we should continue inlining inline-always functions
and skip only the functions that are not inline-always.
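For illustration (a hypothetical function), this is the kind of callee that
must still be considered even in a huge caller:

    // @inline(__always) marks a function as inline-always; the inliner
    // should keep inlining such callees even when the caller exceeds the
    // 400-block threshold, while skipping ordinary candidates.
    @inline(__always)
    func twice(_ x: Int) -> Int {
        return x &+ x
    }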
rdar://45976860
This includes:
1) Not crashing in AccessEnforcementOpts in case we have an Unidentified storage access - rdar://problem/45956642 and rdar://problem/45956777
2) Actually finding the storage location when the source of a begin_access is a block argument
3) Test case for said support
This is a performance hack: inlining a coroutine can promote heap
allocations of the frame to stack allocations, which is valuable
out of proportion to the normal weight. There are surely more
principled ways of getting this effect, though.
`#assert` is a new static assertion statement that will let us write
tests for the new constant evaluation infrastructure that we are working
on. `#assert` works by lowering to a `Builtin.poundAssert` SIL
instruction. The constant evaluation infrastructure will look for these
SIL instructions, const-evaluate their conditions, and emit errors if
the conditions are non-constant or false.
This commit implements parsing, typechecking and SILGen for `#assert`.
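Example usage (at the time, this was gated behind an experimental frontend
flag):

    // The condition must const-evaluate to true, or the compiler emits an error.
    #assert(1 + 1 == 2)       // OK: folds to true
    // #assert(1 + 1 == 3)    // error: condition evaluates to false
    // #assert(runtimeValue)  // error: condition is not a constant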
In the included test case, the optimization was sinking
releases past is_escaping_closure.
Rewrite the isBarrier logic to be conservative and define the
mayCheckRefCount property in SIL/InstructionUtils. Properties that may
need to be updated when SIL changes belong there.
Note that it is particularly bad behavior if the presence of access
markers in the code causes miscompiles unrelated to access enforcement.
Fixes <rdar://problem/45846920> TestFoundation, TestProcess, closure
argument passed as @noescape to Objective-C has escaped.
General case:

    <loop preheader>
    A = ref_element_addr
    <loop>
    begin_access A [dynamic] [no_nested_conflict]

Adding an empty begin_access A in the preheader would allow us to turn the loop's access into [static].
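A Swift shape that produces this pattern (hypothetical example, not from the
commit):

    // Each loop iteration performs a dynamic exclusivity check on c.value.
    // A dominating begin_access in the preheader lets the in-loop access
    // become [static].
    final class Counter {
        var value = 0
    }

    func bump(_ c: Counter, times n: Int) {
        for _ in 0..<n {
            c.value += 1  // begin_access [dynamic] [no_nested_conflict] per iteration
        }
    }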
General case:

    begin_access A
    ...
    strong_release / release_value / destroy
    end_access

The release instruction can be sunk below the end_access instruction.
This extends the lifetime of the released value but might allow us to
mark the access scope as [no_nested_conflict].
Rewrite the SILCloners used in SimplifyCFG. For convenience, there are
now simply a BasicBlockCloner and a SILFunctionCloner. It's pretty
obvious what they do, and they are almost impossible to use incorrectly.
This is worthwhile on its own just to make the usage clear, but the
real reason is that after this cleanup, it will be possible to remove
many extraneous calls to global critical edge splitting related to
cloning.
General case:

    begin_access A (may or may not have no_nested_conflict)
    load/store
    end_access
    apply // may have a scoped access that conflicts with A
    begin_access A [no_nested_conflict]
    load/store
    end_access A

The second access scope does not need to be emitted.
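A Swift shape that can produce this pattern (hypothetical example):

    // Two short accesses to the same storage, separated by a call that may
    // itself contain a conflicting scoped access. Per the commit, the
    // second scope need not be emitted.
    var global = 0

    func touch() { global += 1 }

    func f() {
        global += 1   // first begin_access/end_access on global
        touch()       // apply that may have a conflicting scoped access
        global += 1   // second access: its scope is redundant
    }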
NOTE: KeyPath accesses must be identified at the top-level, non-inlinable stdlib entry point.
As such, the stdlib entry point is annotated with a new @_semantics attribute that is equivalent to @inline(never).