without a valid SILDebugScope. An assertion in IRGenSIL prevents future
optimizations from regressing in this regard.
Introducing SILBuilderWithScope and SILBuilderWithPostProcess to ease the
transition.
This patch is large, but mostly mechanical.
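As a rough illustration of the pattern (stand-in types below, not the actual
SILBuilder interface), the idea is that a builder created at an insertion
point inherits that instruction's debug scope, so nothing it emits ends up
without one:

    #include <cassert>

    // Stand-ins for the real SIL classes.
    struct SILDebugScope {};
    struct SILInstruction {
      const SILDebugScope *scope = nullptr;
    };

    // Mimics the intent of SILBuilderWithScope: every instruction created
    // through it inherits the debug scope of the insertion-point instruction.
    class BuilderWithScope {
      const SILDebugScope *scope;
    public:
      explicit BuilderWithScope(const SILInstruction &insertionPoint)
          : scope(insertionPoint.scope) {
        assert(scope && "insertion point must carry a valid debug scope");
      }

      SILInstruction create() const {
        SILInstruction inst;
        inst.scope = scope; // never emit an instruction without a scope
        return inst;
      }
    };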
<rdar://problem/18494573> Swift: Debugger is not stopping at the set breakpoint
Swift SVN r22978
Really we only want to cast between equally sized types. SIL does not have a
concept of types of the same size, so this is a temporary workaround for
actual code in the standard library that casts a pointer to a UTF16 type to a
pointer to a UTF8 type.
rdar://18118602
Swift SVN r21470
We use a DenseMap to map an address to a list of reaching StoreInsts,
one for each predecessor. The map is passed as a function argument instead of
being a member of LSBBForwarder, since it is only used during one invocation
of optimizing a basic block.
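Sketched below with stand-in instruction types (only the DenseMap/SmallVector
shapes are real LLVM ADTs); the map is built by the driver and threaded
through the per-block routine by reference:

    #include "llvm/ADT/DenseMap.h"
    #include "llvm/ADT/SmallVector.h"

    // Stand-ins for the real SIL classes.
    struct ValueBase {};   // the stored-to address
    struct StoreInst {};   // a store reaching the current block

    // For each address, one reaching store per predecessor.
    using ReachingStores =
        llvm::DenseMap<ValueBase *, llvm::SmallVector<StoreInst *, 4>>;

    // The map is a function argument rather than a member of LSBBForwarder,
    // because it only lives for one invocation of optimizing a basic block.
    static bool optimizeBasicBlock(ReachingStores &reachingStores) {
      bool changed = false;
      for (auto &entry : reachingStores) {
        // If all predecessors reach this address with equivalent stores,
        // loads of the address can be forwarded (details omitted).
        (void)entry;
      }
      return changed;
    }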
We can now remove all loads in rdar://17613168 and enable dead object
elimination. There is one issue, though: one of the loads is loop invariant
and is hoisted out of the loop by LICM, so we need to run LoadStoreOpt and
DeadObjectElim after LICM to actually remove the dead object.
Performance for rdar://17613168 (with added passes after LICM):
-Ounchecked: 13.8s to 0.199s
-O: 15.4s to 2s
Swift SVN r20922
To prepare for adding BBArguments that merge in multiple stores, we separate
out the feasibility check so that feasibility can be verified before any
BBArguments are created.
Swift SVN r20904
There is no need to clear the state before merging in the states of
the predecessors. In the case of a self loop we need the basic block's
previous state; otherwise the state is initialized to the state of the
first predecessor.
Swift SVN r20766
When BB is its own predecessor, we can't do the following:

    state[BB] = state[first pred of BB]
    state[BB].merge(state[other predecessors])
      --> here we are merging in the updated state[BB]

Instead we should do:

    state[BB].merge(state[predecessors other than BB])
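A rough C++ sketch of that merge loop (stand-in state and block types, not
the actual LSBBForwarder code):

    #include <algorithm>
    #include <map>
    #include <set>
    #include <vector>

    using Block = int;

    struct BlockState {
      std::set<int> availableStores; // stand-in for the tracked stores
      void mergeWith(const BlockState &other) {
        // Intersect: a store stays available only if the merged-in
        // predecessor also provides it.
        std::set<int> merged;
        for (int s : availableStores)
          if (other.availableStores.count(s))
            merged.insert(s);
        availableStores = merged;
      }
    };

    void mergePredecessorStates(Block BB, const std::vector<Block> &preds,
                                std::map<Block, BlockState> &state) {
      // With a self loop, keep BB's previous state as the starting point and
      // merge only the other predecessors into it; otherwise seed the state
      // from the first predecessor and merge in the rest.
      bool seeded = std::find(preds.begin(), preds.end(), BB) != preds.end();
      for (Block pred : preds) {
        if (pred == BB)
          continue; // never merge the self edge back into itself
        if (!seeded) {
          state[BB] = state[pred];
          seeded = true;
        } else {
          state[BB].mergeWith(state[pred]);
        }
      }
    }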
Swift SVN r20114
This is not run by default; it must be enabled with the flag -Xllvm -enable-global-load-store-opts.
Also, to make sure dead store elimination remains correct across multiple
basic blocks, we use the post-dominator tree to determine whether the dead
store is post-dominated by the store that makes it dead.
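Roughly, the legality check looks like the sketch below (stand-in types and a
caller-supplied post-dominance query; the real pass asks the post-dominator
tree):

    #include <functional>

    // Stand-ins for the real SIL classes.
    struct SILBasicBlock {};
    struct SILInstruction { SILBasicBlock *parent = nullptr; };

    // postDominates(A, B) answers whether block A post-dominates block B.
    bool isRemovableDeadStore(
        SILInstruction *deadCandidate, SILInstruction *overwritingStore,
        const std::function<bool(SILBasicBlock *, SILBasicBlock *)>
            &postDominates) {
      // The earlier store is dead only if every path from it to the function
      // exit passes through the store that overwrites the same location,
      // i.e. the overwriting store post-dominates it.
      return postDominates(overwritingStore->parent, deadCandidate->parent);
    }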
With this pass enabled, we see a 3.5% decrease in overall time on the
precommit benchmarks, and the following tests increase in speed by > 5%:
2Sum: 8.9%
Rectangles: 7.35%
Ackermann: 6.43%
StringBuilder: 6.16%
EditDistance: 5.71%
StringWalk: 5.58%
That means that 30% of our benchmarks increased in speed by > 5%. Many of the
other benchmarks increased in speed significantly but not as dramatically.
The only benchmark that regressed is SmallPt, which I am looking into.
rdar://17680758
Swift SVN r20009
The code does the right thing, but the assert should be checking that
the load we're about to replace has the same type as the value we're
replacing it with.
This was exposed by changes to inline generic code.
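The intended check, sketched with stand-in types (the real code compares SIL
types):

    #include <cassert>

    struct SILType {
      int id = 0;
      bool operator==(const SILType &other) const { return id == other.id; }
    };
    struct SILValue { SILType type; };
    struct LoadInst { SILType type; };

    void replaceLoad(LoadInst &load, const SILValue &forwarded) {
      // Compare the load being replaced against the value replacing it,
      // not against anything else.
      assert(load.type == forwarded.type &&
             "forwarded value must have the type of the load it replaces");
      // ... replace all uses of the load and erase it ...
    }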
Swift SVN r19921
This commit enables support in the optimizer for promoting the following
unchecked_addr_cast kinds to object bit casts:
1. (Trivial => Trivial) yields a trivial bit cast.
2. (Non-Trivial => Trivial) yields a trivial bit cast.
3. (Non-Trivial => Non-Trivial) yields a ref bit cast.
We do not promote conversions from trivial to non-trivial types, since a
trivial bit cast must have a trivial output, and if we allowed ref bit casts
between the two we would break the rule that a ref bit cast does not change
the reference semantics of its input and output types. Technically, we could
lower trivial => trivial as a ref cast and simplify it later, but that is
currently unnecessary.
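A sketch of the classification (hypothetical enum and helper names; the real
optimizer inspects the SIL types of the cast):

    #include <optional>

    enum class CastKind { TrivialBitCast, RefBitCast };

    // Decide which object-level cast an unchecked_addr_cast may be promoted
    // to, based on whether the source and destination object types are
    // trivial. Trivial => Non-Trivial is rejected: a trivial bit cast must
    // produce a trivial value, and a ref bit cast must not change the
    // reference semantics of its input and output types.
    std::optional<CastKind> promotedCastKind(bool srcIsTrivial,
                                             bool dstIsTrivial) {
      if (srcIsTrivial && dstIsTrivial)
        return CastKind::TrivialBitCast;   // (Trivial => Trivial)
      if (!srcIsTrivial && dstIsTrivial)
        return CastKind::TrivialBitCast;   // (Non-Trivial => Trivial)
      if (!srcIsTrivial && !dstIsTrivial)
        return CastKind::RefBitCast;       // (Non-Trivial => Non-Trivial)
      return std::nullopt;                 // (Trivial => Non-Trivial)
    }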
<rdar://problem/17373087>
Swift SVN r19784
This is tested by an assertion in IRGen. After Beta3, this code is going
to go away and be replaced by just always promoting the cast. Then the
IRGen assertion will be replaced by propagating undef. The assertion in
the stdlib will still fire in that case, since the assertion is based on
the types, not the given value, implying that we will not lose any
correctness.
Swift SVN r19272
Both have the same form: (Address, Value, Load). For store->load forwarding I
pass in (SI->getDest(), SI->getSrc(), LI), and for load->load forwarding I
pass in (OldLI->getOperand(), OldLI, LI).
This also lets our load deduplication support forwarding from casts for free.
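In outline (hypothetical helper name; only the (Address, Value, Load) shape
is from the patch), both call sites funnel into a single routine:

    // Stand-ins for the SIL classes; a load is itself usable as a value.
    struct SILValue {};
    struct LoadInst : SILValue { SILValue *operand = nullptr; };
    struct StoreInst { SILValue *dest = nullptr; SILValue *src = nullptr; };

    // One forwarding routine for both cases, taking (Address, Value, Load):
    // if `address` is what `load` reads from, replace the load with `value`.
    bool tryForward(SILValue *address, SILValue *value, LoadInst *load) {
      (void)address; (void)value; (void)load;
      return false; // placeholder; real matching and RAUW omitted
    }

    // store->load forwarding passes (SI->getDest(), SI->getSrc(), LI).
    bool forwardStoreToLoad(StoreInst *SI, LoadInst *LI) {
      return tryForward(SI->dest, SI->src, LI);
    }

    // load->load forwarding passes (OldLI->getOperand(), OldLI, LI).
    bool forwardLoadToLoad(LoadInst *OldLI, LoadInst *LI) {
      return tryForward(OldLI->operand, OldLI, LI);
    }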
Swift SVN r19133
I also refactored findExtractPathBetweenValues so that it can be used for both
partial load duplication and forwarding stores to partial loads.
Swift SVN r19132
A first element field of a nominal type is either the first element of a
struct or the first payload of an enum. We currently allow the stdlib to
rappel into struct hierarchies using reinterpretCast. This patch teaches
the optimizer how to rewrite such unchecked_addr_casts into
unchecked_enum_data_addr and struct_element_addr instructions. Then
Mem2Reg and Load Store Forwarding will remove the allocation generated
by such uses of reinterpretCast.
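A sketch of the rewrite decision (hypothetical enums and helper; the real
pass builds the SIL projection instructions in place of the cast):

    #include <optional>

    enum class NominalKind { Struct, Enum, Other };
    enum class Projection { StructElementAddr, UncheckedEnumDataAddr };

    // An unchecked_addr_cast from the address of a nominal type to the
    // address of its first element field becomes a projection; anything
    // else is left alone.
    std::optional<Projection> projectionForFirstField(NominalKind kind) {
      switch (kind) {
      case NominalKind::Struct:
        return Projection::StructElementAddr;     // first stored element
      case NominalKind::Enum:
        return Projection::UncheckedEnumDataAddr; // first payload
      case NominalKind::Other:
        return std::nullopt;
      }
      return std::nullopt;
    }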
<rdar://problem/16703656>
Swift SVN r18977
We add a callback function to recursivelyDeleteTriviallyDeadInstructions.
When a Load instruction is deleted, we erase it from Loads.
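Roughly, with a simplified signature and stand-in types (the real utility is
the SIL helper named above):

    #include <algorithm>
    #include <functional>
    #include <vector>

    struct SILInstruction { bool isLoad = false; };

    // Simplified stand-in: delete an instruction, invoking the callback on
    // everything erased (the real helper also recurses into operands that
    // become trivially dead).
    void deleteTriviallyDead(
        SILInstruction *inst,
        const std::function<void(SILInstruction *)> &onDelete) {
      onDelete(inst);
      delete inst;
    }

    void eraseDeadLoad(std::vector<SILInstruction *> &Loads,
                       SILInstruction *dead) {
      // Keep the Loads list consistent: whenever a load is deleted, erase it
      // from Loads so later iterations never see a dangling pointer.
      deleteTriviallyDead(dead, [&Loads](SILInstruction *deleted) {
        if (deleted->isLoad)
          Loads.erase(std::remove(Loads.begin(), Loads.end(), deleted),
                      Loads.end());
      });
    }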
rdar://16815627
Swift SVN r17558