Merge pull request #84811 from rjmccall/verify-dead-end-edges

Strengthen the SIL verifier's rules for edges into dead-end regions
John McCall
2025-10-11 18:16:56 -04:00
committed by GitHub
11 changed files with 1592 additions and 154 deletions

View File

@@ -780,6 +780,217 @@ _lexical_ in order to specify this property for all contributing lifetimes.
For details see [Variable Lifetimes](Ownership.md#variable-lifetimes) in the
Ownership document.
# Dominance
## Value and instruction dominance
Whenever an instruction uses a [value](#values-and-operands) as an
operand, the definition of the value must dominate the instruction.
This is a common concept across all SSA-like representations. SIL
uses a standard definition of dominance, modified slightly to account
for SIL's use of basic block arguments rather than phi instructions:
- The value `undef` always dominates an instruction.
- An instruction result `R` dominates an instruction `I` if the
instruction that defines `R` dominates `I`.
- An argument of a basic block `B` dominates an instruction `I` if all
initial paths passing through `I` must also pass through the start
of `B`.
An instruction `D` dominates another instruction `I` if they are
different instructions and all initial paths passing through `I`
must also pass through `D`.
See [below](#definition-of-a-path) for the formal definition of an
initial path.
## Basic block dominance
A basic block `B1` dominates a basic block `B2` if they are different
blocks and if all initial paths passing through the start of `B2` must
also pass through the start of `B1`.
This relationship between blocks can be thought of as creating a
directed acyclic graph of basic blocks, called the *dominance tree*.
The dominance tree is not directly represented in SIL; it is just
an emergent property of the dominance requirement on SIL functions.
## Joint post-dominance
Certain instructions are required to have a *joint post-dominance*
relationship with certain other instructions. Informally, this means
that all terminating paths through the first instruction must
eventually pass through one of the others. This is common for
instructions that define a scope in the SIL function, such as
`alloc_stack` and `begin_access`.
The dominating instruction is called the *scope instruction*,
and the post-dominating instructions are called the *scope-ending
instructions*. The specific joint post-dominance requirement
defines the set of instructions that count as scope-ending
instructions for the scope instruction.
For example, an `alloc_stack` instruction must be jointly
post-dominated by the set of `dealloc_stack` instructions
whose operand is the result of the `alloc_stack`. The
`alloc_stack` is the scope instruction, and the `dealloc_stack`s
are the scope-ending instructions.
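Schematically (eliding types and other details), that relationship
looks like this:
```
bb0(%cond : $Builtin.Int1):
%a = alloc_stack $Builtin.Int64
cond_br %cond, bb1, bb2
bb1:
dealloc_stack %a
br bb3
bb2:
dealloc_stack %a
br bb3
bb3:
return %cond
```
Every terminating path from the `alloc_stack` passes through exactly
one of the two `dealloc_stack`s.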
The *scope* of a joint post-dominance relationship is the set
of all points in the function following the scope instruction
but prior to a scope-ending instruction. Making this precisely
defined is part of the point of the joint post-dominance rules.
A formal definition is given later.
In SIL, if an instruction acts as a scope instruction, it always
has exactly one set of scope-ending instructions associated
with it, and so it forms exactly one scope. People will therefore
often talk about, e.g., the scope of an `alloc_stack`, meaning
the scope between it and its `dealloc_stack`s. Furthermore,
there are no instructions in SIL which act as scope-ending
instructions for multiple scopes.
A scope instruction `I` is jointly post-dominated by its
scope-ending instructions if:
- All initial paths that pass through a scope-ending instruction
of `I` must also pass through `I`. (This is just the normal
dominance rule, and it is typically already required by the
definition of the joint post-dominance relationship. For example,
a `dealloc_stack` must be dominated by its associated
`alloc_stack` because it uses its result as an operand.)
- All initial paths that pass through `I` twice must also pass
through a scope-ending instruction of `I` in between.
- All initial paths that pass through a scope-ending instruction
of `I` twice must also pass through `I` in between.
- All terminating initial paths that pass through `I` must also
pass through a scope-ending instruction of `I`.
In other words, all paths must strictly alternate between `I`
and its scope-ending instructions, starting with `I` and (if
the path exits) ending with a scope-ending instruction.
Note that a scope-ending instruction does not need to appear on
a path following a scope instruction if the path doesn't exit
the function. In fact, a function needn't include any scope-ending
instructions for a particular scope instruction if all paths from
that point are non-terminating, such as by ending in `unreachable`
or containing an infinite loop.
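For example, the following fragment is well-formed even though the
`alloc_stack` has no scope-ending instruction at all, because no path
from it ever exits the function (schematic):
```
bb0:
%a = alloc_stack $Builtin.Int64
br bb1
bb1:
br bb1
```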
A scope instruction `I` is *coherently* jointly post-dominated
by its scope-ending instructions if there is no point in the
function for which it is possible to construct two paths, both
ending in that point, which differ by whether they most recently
passed through `I` or one of its scope-ending instructions.
This is always true for points from which it is possible to
construct a terminating path, but it can be false for dead-end
points.
Several important joint post-dominance requirements in SIL
do not require coherence, including the stack-allocation rule.
Non-coherence allows optimizations to be more aggressive
across control flow that enters dead-end regions. Note that
control flow internal to a dead-end region is not special
in this way, so SIL analyses must not simply check
whether a destination block is dead-end.
The *scope* defined by a joint post-dominance relationship for a
scope instruction `I` is the set of points in the function for
which:
- there exists an initial path that ends at that point and
passes through `I`, but
- there does not exist a simple initial path that ends at
that point and passes through a scope-ending instruction
of `I`.
In the absence of coherence, this second rule conservatively
shrinks the scope to the set of points that cannot possibly
have passed through a scope-ending instruction.
For a coherent joint post-dominance relationship, this
definition simplifies to the set of points for which there
exists an initial path that ends at that point and passes
through `I`, but which does not pass through a scope-ending
instruction of `I`.
Note that the point before a scope-ending instruction is always
within the scope.
## Definition of a path
A *point* in a SIL function is the moment before an instruction.
Every basic block has an entry point, which is the point before
its first instruction. The entry point of the entry block is also
called the entry point of the function.
A path through a SIL function is a path (in the usual graph-theory
sense) in the underlying directed graph of points, in which:
- every point in the SIL function is a vertex in the graph,
- each non-terminator instruction creates an edge from the point
before it to the point after it, and
- each terminator instruction creates edges from the point before
the terminator to the initial point of each of its successor blocks.
A path is said to pass through an instruction if it includes
any of the edges created by that instruction. A path is said to
pass through the start of a basic block if it visits the entry
point of that block.
An *initial path* is a path which begins at the entry point of the
function. A *terminating path* is a path which ends at the point
before an exiting instruction, such as `return` or `throw`.
Note that the dominance rules generally require only an initial path,
not a terminating path. A path that simply stops in the middle of a
block still counts for dominance. Among other things, this ensures that
dominance holds in blocks that are part of an infinite loop.
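For example (schematic), `%x` dominates its use below even though no
terminating path through the loop exists:
```
bb0:
%x = integer_literal $Builtin.Int32, 0
br bb1
bb1:
%y = builtin "add_Int32"(%x, %x) : $Builtin.Int32
br bb1
```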
A *dead-end point* is a point which cannot be included on any
terminating path. A *dead-end block* is a block for which the
entry point is a dead-end point. A *dead-end region* is a
strongly-connected component of the CFG containing only dead-end
blocks.
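For example, in the following schematic CFG, every point in bb2, bb3,
and bb4 is a dead-end point; bb2 and bb3 form one dead-end region, and
bb4 by itself forms another:
```
bb0(%cond : $Builtin.Int1):
cond_br %cond, bb1, bb2
bb1:
return %cond
bb2:
br bb3
bb3:
cond_br %cond, bb2, bb4
bb4:
unreachable
```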
Note also that paths consider successors without regard to the
nature of the terminator. Paths that are provably impossible because
of value relationships still count for dominance. For example,
consider the following function:
```
bb0(%cond : $Builtin.Int1):
cond_br %cond, bb1, bb2
bb1:
%value = integer_literal $Builtin.Int32, 0
br bb3
bb2:
br bb3
bb3:
cond_br %cond, bb4, bb5
bb4:
%twice_value = builtin "add_Int32"(%value, %value) : $Builtin.Int32
br bb6
bb5:
br bb6
bb6:
return %cond
```
Dynamically, it is impossible to reach the `builtin` instruction
without passing through the definition of `%value`: to reach
the `builtin`, `%cond` must be `true`, and so the first `cond_br`
must have branched to `bb1`. This is not taken into consideration
by dominance, and so this function is ill-formed.
# Debug Information
Each instruction may have a debug location and a SIL scope reference at
@@ -1364,48 +1575,39 @@ stack deallocation instructions. It can even be paired with no
instructions at all; by the rules below, this can only happen in
non-terminating functions.
- All stack allocation instructions must be jointly post-dominated
by stack deallocation instructions paired with them.
- At every point in a SIL function, there is an ordered list of stack
allocation instructions called the *active allocations list*.
- The active allocations list is empty at the start of the entry block
of the function, and it must be empty again whenever an instruction
that exits the function is reached, like `return` or `throw`. In
other words, all stack allocations must be deallocated prior to
exiting the function.
- The active allocations list for the point following a stack
allocation instruction is defined to be the result of adding that
instruction to the end of the active allocations list for the point
preceding the instruction.
- The active allocations list for the point following a stack
deallocation instruction is defined to be the result of removing the
instruction from the end of the active allocations list for the
point preceding the instruction. The active allocations list for the
preceding point is required to be non-empty, and the last
instruction in it must be paired with the deallocation instruction.
In other words, all stack allocations must be deallocated in
last-in, first-out order, aka stack order.
- The active allocations list for the point following any other
instruction is defined to be the same as the active allocations list
for the point preceding the instruction.
- The active allocations list must always be the same on both sides of
a control flow edge. This implies both that all successors of a block
must start with the same list and that all predecessors of a block
must end with the same list. In other words, the set of active stack
allocations must be the same at a given place in the function no
matter how it was reached.
Note that these rules implicitly prevent stack allocations from leaking
or being double-freed, and that they prevent an allocation instruction
from being reached again while it is still active.
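For example, in this schematic block, the active allocations list
evolves as shown in the comments:
```
bb0:
%a = alloc_stack $Builtin.Int64 // list: [%a]
%b = alloc_stack $Builtin.Int64 // list: [%a, %b]
dealloc_stack %b                // list: [%a]
dealloc_stack %a                // list: []
%result = tuple ()
return %result : $()            // list is empty at exit, as required
```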
The control-flow rule forbids certain patterns that would theoretically
be useful, such as conditionally performing an allocation around an
@@ -1414,6 +1616,12 @@ to use, however, as it is illegal to locally abstract over addresses,
and therefore a conditional allocation cannot be used in the
intermediate operation anyway.
The stack discipline rules do not require coherent joint post-dominance.
This means that different control-flow paths entering a dead-end region
may disagree about the state of the stack. In such a region, the stack
discipline rules permit further allocation, but nothing that was not
allocated within the region can be deallocated.
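For example, the following schematic function is legal even though the
two edges into the dead-end block bb3 disagree about whether `%a` is
still allocated. Within bb3, `%a` can no longer be deallocated, but a
new allocation made inside the region could be:
```
bb0(%cond : $Builtin.Int1):
%a = alloc_stack $Builtin.Int64
cond_br %cond, bb1, bb2
bb1:
dealloc_stack %a
br bb3
bb2:
br bb3
bb3:
unreachable
```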
# Structural type matching for pack indices
In order to catch type errors in applying pack indices, SIL requires the

View File

@@ -793,6 +793,82 @@ auto transform(const std::optional<OptionalElement> &value,
}
return std::nullopt;
}
/// A little wrapper that either wraps a `T &&` or a `const T &`.
/// It allows you to defer the optimal decision about how to
/// forward the value to runtime.
template <class T>
class maybe_movable_ref {
/// Actually a T&& if movable is true.
const T &ref;
bool movable;
public:
// The maybe_movable_ref wrapper itself is, basically, either an
// r-value reference or an l-value reference. It is therefore
// move-only so that code working with it has to properly
// forward it around.
maybe_movable_ref(maybe_movable_ref &&other) = default;
maybe_movable_ref &operator=(maybe_movable_ref &&other) = default;
maybe_movable_ref(const maybe_movable_ref &other) = delete;
maybe_movable_ref &operator=(const maybe_movable_ref &other) = delete;
/// Allow the wrapper to be statically constructed from an r-value
/// reference in the movable state.
maybe_movable_ref(T &&ref) : ref(ref), movable(true) {}
/// Allow the wrapper to be statically constructed from a
/// const l-value reference in the non-movable state.
maybe_movable_ref(const T &ref) : ref(ref), movable(false) {}
/// Don't allow the wrapper to be statically constructed from
/// a non-const l-value reference without passing a flag
/// dynamically.
maybe_movable_ref(T &ref) = delete;
/// The fully-general constructor.
maybe_movable_ref(T &ref, bool movable) : ref(ref), movable(movable) {}
/// Check dynamically whether the reference is movable.
bool isMovable() const {
return movable;
}
/// Construct a T from the wrapped reference.
T construct() && {
if (isMovable()) {
return T(move());
} else {
return T(ref);
}
}
/// Get access to the value, conservatively returning a const
/// reference.
const T &get() const {
return ref;
}
/// Get access to the value, dynamically asserting that it is movable.
T &get_mutable() const {
assert(isMovable());
return const_cast<T&>(ref);
}
/// Return an r-value reference to the value, dynamically asserting
/// that it is movable.
T &&move() {
assert(isMovable());
return static_cast<T&&>(const_cast<T&>(ref));
}
};
template <class T>
maybe_movable_ref<T> move_if(T &ref, bool movable) {
return maybe_movable_ref<T>(ref, movable);
}
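// A minimal usage sketch (illustrative; `takeOrCopy` is a hypothetical
// caller, not part of this header):
//
//   std::string takeOrCopy(std::string &source, bool mayTake) {
//     // Defer the move-vs-copy decision to runtime.
//     maybe_movable_ref<std::string> ref = move_if(source, mayTake);
//     // construct() moves from `source` if movable, copies otherwise.
//     return std::move(ref).construct();
//   }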
} // end namespace swift
#endif // SWIFT_BASIC_STLEXTRAS_H

View File

@@ -14,6 +14,7 @@
#define SWIFT_SIL_BASICBLOCKUTILS_H
#include "swift/SIL/BasicBlockBits.h"
#include "swift/SIL/BasicBlockData.h"
#include "swift/SIL/BasicBlockDatastructures.h"
#include "swift/SIL/SILValue.h"
#include "llvm/ADT/SetVector.h"
@@ -125,6 +126,213 @@ protected:
void propagateNewlyReachableBlocks(unsigned startIdx);
};
/// A utility for detecting edges that enter a dead-end region.
///
/// A dead-end region is a strongly-connected component of the CFG
/// consisting solely of dead-end blocks (i.e. from which it is not
/// possible to reach a function exit). The strongly-connected
/// components of a CFG form a DAG: once control flow from the entry
/// block has entered an SCC, it cannot return to an earlier SCC
/// (because then by definition they would have to be the same SCC).
///
/// Note that the interior edges of a dead-end region do not *enter*
/// the region. Only edges from an earlier SCC count as edges into
/// the region.
///
/// For example, in this CFG:
///
/// /-> bb1 -> bb2 -> return
/// bb0
/// \-> bb3 -> bb4 -> bb5 -> unreachable
/// ^ |
/// \------/
///
/// The edge from bb0 to bb3 enters a new dead-end region, as does
/// the edge from bb4 to bb5. The edge from bb4 to bb3 does not
/// enter a new region because it is an internal edge of its region.
///
/// Edges that enter dead-end regions are special in SIL because certain
/// joint post-dominance rules are relaxed for them. For example, the
/// stack need not be consistent on different edges into a dead-end
/// region.
class DeadEndEdges {
enum : unsigned {
/// A region data value which represents that a block is unreachable
/// from the entry block.
UnreachableRegionData = 0,
/// A region data value which represents that a block is reachable
/// from the entry block but not in a dead-end region.
NonDeadEndRegionData = 1,
/// A value that must be added to a region index when storing it in
/// a region data.
///
/// This should be the smallest number such that
/// (IndexOffset << IndexShift)
/// is always greater than all of the special region-data values
/// above.
IndexOffset = 1,
/// A mask which can be applied to a region to say that it contains
/// a cycle. This slightly optimizes the check in isDeadEndEdge for
/// the common case where regions do not have cycles.
HasCycleMask = 0x1,
/// The amount to shift the region index by when storing it in a
/// region data.
///
/// This should be the smallest number such that an arbitrary value
/// left-shifted by it will not have any of the mask bits set.
IndexShift = 1,
};
/// An integer representing what we know about the SCC partition that
/// a particular block is in. All blocks in the same region store the
/// same value to make comparisons faster.
///
/// Either:
/// - UnreachableRegionData, representing a block that cannot be
/// reached from the entry block;
/// - NonDeadEndRegionData, representing a block that can be reached
/// from the entry block but is not in a dead-end region; or
/// - an encoded region index, representing a block that is in a
/// dead-end region.
///
/// A region index is a unique value in 0..<numDeadEndRegions,
/// selected for a specific dead-end SCC. It is encoded by adding
/// IndexOffset, left-shifting by IndexShift, and then or'ing
/// in any appropriate summary bits like HasCycleMask.
///
/// If regionDataForBlock isn't initialized, the function contains
/// no dead-end blocks.
std::optional<BasicBlockData<unsigned>> regionDataForBlock;
/// The total number of dead-end regions in the function.
unsigned numDeadEndRegions;
static constexpr bool isDeadEndRegion(unsigned regionData) {
return regionData >= (IndexOffset << IndexShift);
}
static unsigned getIndexFromRegionData(unsigned regionData) {
assert(isDeadEndRegion(regionData));
return (regionData >> IndexShift) - IndexOffset;
}
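// For example (illustrative): region index 3 in a region containing a
// cycle is encoded as ((3 + IndexOffset) << IndexShift) | HasCycleMask
// = (4 << 1) | 1 = 9, and decodes as (9 >> 1) - IndexOffset = 3.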
public:
/// Perform the analysis on the given function. An existing
/// DeadEndBlocks analysis can be passed in to avoid needing to
/// compute it anew.
explicit DeadEndEdges(SILFunction *F,
DeadEndBlocks *deadEndBlocks = nullptr);
/// Return the number of dead-end regions in the function.
unsigned getNumDeadEndRegions() const {
return numDeadEndRegions;
}
/// Does the given CFG edge enter a new dead-end region?
///
/// If so, return the index of the dead-end region it enters.
std::optional<unsigned>
entersDeadEndRegion(SILBasicBlock *srcBB, SILBasicBlock *dstBB) const {
// If we didn't initialize regionDataForBlock, there are no dead-end
// edges at all.
if (!regionDataForBlock)
return std::nullopt;
auto dstRegionData = (*regionDataForBlock)[dstBB];
// If the destination block is not in a dead-end region, this is
// not a dead-end edge.
if (!isDeadEndRegion(dstRegionData)) return std::nullopt;
unsigned dstRegionIndex = getIndexFromRegionData(dstRegionData);
// If the destination block is in a region with no cycles, every edge
// to it is a dead-end edge; no need to look up the source block's
// region.
if (!(dstRegionData & HasCycleMask)) return dstRegionIndex;
// Otherwise, it's a dead-end edge if the source block is in a
// different region. (That region may or may not itself be a
// dead-end region.)
auto srcRegionData = (*regionDataForBlock)[srcBB];
if (srcRegionData != dstRegionData) {
return dstRegionIndex;
} else {
return std::nullopt;
}
}
/// A helper class for tracking visits to edges into dead-end regions.
///
/// The client is assumed to be doing a walk of the function which will
/// naturally visit each edge exactly once. This set allows the client
/// to track when they've processed every edge to a particular dead-end
/// region and can therefore safely enter it.
///
/// The set does not count edges from unreachable blocks by default. This
/// matches the normal expectation that the client is doing a CFG search
/// and won't try to visit edges from unreachable blocks. If you are
// walking the function in some other way, e.g. by iterating the blocks,
/// you must pass `true` for `includeUnreachableEdges`.
class VisitingSet {
const DeadEndEdges &edges;
/// Stores the remaining number of edges for each dead-end region
/// in the function.
SmallVector<unsigned> remainingEdgesForRegion;
friend class DeadEndEdges;
explicit VisitingSet(const DeadEndEdges &parent,
bool includeUnreachableEdges);
public:
/// Record that a dead-end edge to the given block was visited.
///
/// Returns true if this was the last dead-end edge to the region
/// containing the block.
///
/// Do not call this multiple times for the same edge. Do not
/// call this for an unreachable edge if you did not create the
/// set including unreachable edges.
bool visitEdgeTo(SILBasicBlock *destBB) {
assert(edges.regionDataForBlock &&
"visiting dead-end edge in function that has none");
auto destRegionData = (*edges.regionDataForBlock)[destBB];
assert(isDeadEndRegion(destRegionData) &&
"destination block is not in a dead-end region");
auto destRegionIndex = getIndexFromRegionData(destRegionData);
assert(remainingEdgesForRegion[destRegionIndex] > 0 &&
"no remaining dead-end edges for region; visited "
"multiple times?");
auto numRemaining = --remainingEdgesForRegion[destRegionIndex];
return numRemaining == 0;
}
/// Return true if all of the edges have been visited.
bool visitedAllEdges() const {
for (auto count : remainingEdgesForRegion) {
if (count) return false;
}
return true;
}
};
/// Create a visiting set which can be used to track visits to the
/// edges into the dead-end regions.
///
/// By default, the set does not include edges from unreachable blocks.
VisitingSet createVisitingSet(bool includeUnreachableEdges = false) const {
return VisitingSet(*this, includeUnreachableEdges);
}
};
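// A usage sketch (illustrative): walk every CFG edge once and detect
// when the last remaining edge into each dead-end region is visited.
// The walk below iterates all blocks, including unreachable ones, so
// it must include unreachable edges in the visiting set.
//
//   DeadEndEdges edges(F);
//   auto set = edges.createVisitingSet(/*includeUnreachableEdges*/ true);
//   for (auto &bb : *F)
//     for (auto *succ : bb.getSuccessorBlocks())
//       if (edges.entersDeadEndRegion(&bb, succ) && set.visitEdgeTo(succ)) {
//         // All edges into succ's dead-end region have now been seen.
//       }
//   assert(set.visitedAllEdges());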
/// Compute joint-postdominating set for \p dominatingBlock and \p
/// dominatedBlockSet found by walking up the CFG from the latter to the
/// former.

View File

@@ -46,6 +46,7 @@
#include "llvm/ADT/MapVector.h"
#include "llvm/ADT/PointerIntPair.h"
#include "llvm/ADT/SetVector.h"
#include "llvm/ADT/STLFunctionalExtras.h"
#include "llvm/ADT/ilist.h"
#include "llvm/ProfileData/InstrProfReader.h"
#include "llvm/Support/Allocator.h"
@@ -1144,7 +1145,7 @@ inline llvm::raw_ostream &operator<<(llvm::raw_ostream &OS, const SILModule &M){
void verificationFailure(const Twine &complaint,
const SILInstruction *atInstruction,
const SILArgument *atArgument,
llvm::function_ref<void(llvm::raw_ostream &OS)> extraContext);
inline bool SILOptions::supportsLexicalLifetimes(const SILModule &mod) const {
switch (mod.getStage()) {

View File

@@ -25,6 +25,7 @@
#include "swift/SIL/TerminatorUtils.h"
#include "swift/SIL/Test.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SCCIterator.h"
using namespace swift;
@@ -458,6 +459,187 @@ static FunctionTest HasAnyDeadEndBlocksTest(
});
} // end namespace swift::test
//===----------------------------------------------------------------------===//
// DeadEndEdges
//===----------------------------------------------------------------------===//
DeadEndEdges::DeadEndEdges(SILFunction *F,
DeadEndBlocks *existingDeadEndBlocks) {
// Hilariously, C++ does not permit these to be written in
// the class definition.
static_assert(!isDeadEndRegion(UnreachableRegionData), "");
static_assert(!isDeadEndRegion(NonDeadEndRegionData), "");
std::optional<DeadEndBlocks> localDeadEndBlocks;
if (!existingDeadEndBlocks) {
localDeadEndBlocks.emplace(F);
}
DeadEndBlocks &deadEndBlocks =
(existingDeadEndBlocks ? *existingDeadEndBlocks : *localDeadEndBlocks);
// If there are no dead-end blocks, exit immediately, leaving
// regionDataForBlock empty.
if (deadEndBlocks.isEmpty()) {
numDeadEndRegions = 0;
return;
}
// Initialize regionDataForBlock to consider all blocks to be unreachable.
regionDataForBlock.emplace(F, [](SILBasicBlock *bb) {
return UnreachableRegionData;
});
unsigned nextDeadRegionIndex = 0;
// Iterate the strongly connected components of the CFG.
//
// We might be able to specialize the SCC algorithm to both (1) detect
// dead-end SCCs directly and (2) maybe even eagerly count the dead-end
// edges into each block. But that would require rewriting the algorithm,
// and this doesn't seem problematic.
//
// Note that only reachable blocks can be found by SCC iteration.
// So we're implicitly leaving regionDataForBlock filled with
// UnreachableRegionData for any unreachable blocks.
for (auto sccIt = scc_begin(F); !sccIt.isAtEnd(); ++sccIt) {
const auto &scc = *sccIt;
// If this SCC is not dead-end, just record that all of its blocks
// are reachable. We can check any block in the SCC for this: they
// can all reach each other by definition, so if any of them can
// reach an exit, they all can.
auto repBB = scc[0];
if (!deadEndBlocks.isDeadEnd(repBB)) {
for (auto *block : scc) {
(*regionDataForBlock)[block] = NonDeadEndRegionData;
}
continue;
}
// Allocate a new region index.
unsigned regionIndex = nextDeadRegionIndex++;
// Encode the region data.
unsigned regionData = (regionIndex + IndexOffset) << IndexShift;
if (sccIt.hasCycle()) {
regionData |= HasCycleMask;
}
assert(getIndexFromRegionData(regionData) == regionIndex);
// Assign the encoded region data to every block in the region.
for (auto *block : scc) {
(*regionDataForBlock)[block] = regionData;
}
}
// The entry block can technically be a dead-end region if there are
// no reachable exits in the function, but we don't care about tracking
// it because we only care about edges *into* regions, and the entry
// region never has in-edges. So avoid the weird corner case of a region
// with no in-edges *and* save some space in VisitingSets by removing it.
auto &entryRegionData = (*regionDataForBlock).entry().data;
if (isDeadEndRegion(entryRegionData)) {
// SCC iteration is in reverse topological order, so the entry block
// is always the last region. It's also always the only block in
// its region, so it's really easy to retroactively erase from the
// records.
assert(getIndexFromRegionData(entryRegionData) == nextDeadRegionIndex - 1);
nextDeadRegionIndex--;
entryRegionData = NonDeadEndRegionData;
}
numDeadEndRegions = nextDeadRegionIndex;
}
DeadEndEdges::VisitingSet::VisitingSet(const DeadEndEdges &edges,
bool includeUnreachableEdges)
: edges(edges) {
// Skip all of this if there are no dead-end regions.
if (edges.numDeadEndRegions == 0)
return;
// Initialize all of the totals to 0.
remainingEdgesForRegion.resize(edges.numDeadEndRegions, 0);
// Simultaneously iterate the blocks of the function and the
// region data we have for each block.
for (auto blockAndData : *edges.regionDataForBlock) {
SILBasicBlock *bb = &blockAndData.block;
unsigned regionData = blockAndData.data;
// Ignore blocks in non-dead-end regions.
if (!isDeadEndRegion(regionData))
continue;
// Count the edges to the block that begin in other regions.
// But ignore edges from unreachable blocks unless requested.
unsigned numDeadEndEdgesToBlock = 0;
for (SILBasicBlock *pred : bb->getPredecessorBlocks()) {
auto predRegionData = (*edges.regionDataForBlock)[pred];
if (predRegionData != regionData &&
(predRegionData != UnreachableRegionData ||
includeUnreachableEdges)) {
numDeadEndEdgesToBlock++;
}
}
// Add that to the total for the block's region.
auto regionIndex = getIndexFromRegionData(regionData);
remainingEdgesForRegion[regionIndex] += numDeadEndEdgesToBlock;
}
#ifndef NDEBUG
// We should have found at least one edge for every dead-end
// region, since they're all supposed to be reachable from the
// entry block.
for (auto count : remainingEdgesForRegion) {
assert(count && "didn't find any edges to region?");
}
#endif
}
namespace swift::test {
// Arguments:
// - none
// Prints:
// - a bunch of lines like
// %bb3 -> %bb6 (region 2; more edges remain)
// %bb5 -> %bb6 (region 2; last edge)
// - either "Visited all edges" or "Did not visit all edges"
static FunctionTest DeadEndEdgesTest("dead_end_edges", [](auto &function,
auto &arguments,
auto &test) {
DeadEndEdges edges(&function);
auto visitingSet = edges.createVisitingSet(/*includeUnreachable*/ true);
auto &out = llvm::outs();
for (auto &srcBB : function) {
for (auto *dstBB : srcBB.getSuccessorBlocks()) {
if (auto regionIndex = edges.entersDeadEndRegion(&srcBB, dstBB)) {
srcBB.printID(out, false);
out << " -> ";
dstBB->printID(out, false);
out << " (region " << *regionIndex << "; ";
if (visitingSet.visitEdgeTo(dstBB)) {
out << "last edge)";
} else {
out << "more edges remain)";
}
out << "\n";
}
}
}
if (visitingSet.visitedAllEdges()) {
out << "visited all edges\n";
} else {
out << "did not visit all edges\n";
}
});
} // end namespace swift::test
//===----------------------------------------------------------------------===//
// Post Dominance Set Completion Utilities
//===----------------------------------------------------------------------===//

View File

@@ -98,7 +98,9 @@ extern llvm::cl::opt<bool> SILPrintDebugInfo;
void swift::verificationFailure(const Twine &complaint,
const SILInstruction *atInstruction,
const SILArgument *atArgument,
llvm::function_ref<void(llvm::raw_ostream &out)> extraContext) {
llvm::raw_ostream &out = llvm::dbgs();
const SILFunction *f = nullptr;
StringRef funcName = "?";
if (atInstruction) {
@@ -109,32 +111,32 @@ void swift::verificationFailure(const Twine &complaint,
funcName = f->getName();
}
if (ContinueOnFailure) {
llvm::dbgs() << "Begin Error in function " << funcName << "\n";
out << "Begin Error in function " << funcName << "\n";
}
llvm::dbgs() << "SIL verification failed: " << complaint << "\n";
out << "SIL verification failed: " << complaint << "\n";
if (extraContext)
extraContext();
extraContext(out);
if (atInstruction) {
llvm::dbgs() << "Verifying instruction:\n";
atInstruction->printInContext(llvm::dbgs());
out << "Verifying instruction:\n";
atInstruction->printInContext(out);
} else if (atArgument) {
llvm::dbgs() << "Verifying argument:\n";
atArgument->printInContext(llvm::dbgs());
out << "Verifying argument:\n";
atArgument->printInContext(out);
}
if (ContinueOnFailure) {
llvm::dbgs() << "End Error in function " << funcName << "\n";
out << "End Error in function " << funcName << "\n";
return;
}
if (f) {
llvm::dbgs() << "In function:\n";
f->print(llvm::dbgs());
out << "In function:\n";
f->print(out);
if (DumpModuleOnFailure) {
// Don't do this by default because modules can be _very_ large.
llvm::dbgs() << "In module:\n";
f->getModule().print(llvm::dbgs());
out << "In module:\n";
f->getModule().print(out);
}
}
@@ -976,7 +978,8 @@ public:
}
void _require(bool condition, const Twine &complaint,
llvm::function_ref<void(llvm::raw_ostream &)> extraContext = nullptr) {
if (condition) return;
verificationFailure(complaint, CurInstruction, CurArgument, extraContext);
@@ -1108,13 +1111,17 @@ public:
/// Assert that two types are equal.
void requireSameType(Type type1, Type type2, const Twine &complaint) {
_require(type1->isEqual(type2), complaint,
[&](llvm::raw_ostream &out) {
out << " " << type1 << "\n " << type2 << '\n';
});
}
/// Assert that two types are equal.
void requireSameType(SILType type1, SILType type2, const Twine &complaint) {
_require(type1 == type2, complaint,
[&](llvm::raw_ostream &out) {
out << " " << type1 << "\n " << type2 << '\n';
});
}
SynthesisContext getSynthesisContext() {
@@ -1145,32 +1152,22 @@ public:
CanSILFunctionType type2,
const Twine &what,
SILFunction &inFunction) {
// If we didn't have a failure, return.
auto Result = type1->isABICompatibleWith(type2, inFunction);
if (Result.isCompatible())
return;
if (!Result.hasPayload()) {
_require(false, what, [&](llvm::raw_ostream &out) {
out << " " << Result.getMessage().data() << '\n'
<< " " << type1 << "\n " << type2 << '\n';
});
} else {
_require(false, what, [&](llvm::raw_ostream &out) {
out << " " << Result.getMessage().data()
<< ".\nParameter: " << Result.getPayload()
<< "\n " << type1 << "\n " << type2 << '\n';
});
}
}
@@ -1196,7 +1193,7 @@ public:
template <class T>
T *requireValueKind(SILValue value, const Twine &what) {
auto match = dyn_cast<T>(value);
_require(match != nullptr, what, [=](llvm::raw_ostream &out) { out << value; });
return match;
}
@@ -1893,14 +1890,15 @@ public:
if (subs.getGenericSignature().getCanonicalSignature() !=
fnTy->getInvocationGenericSignature().getCanonicalSignature()) {
llvm::dbgs() << "substitution map's generic signature: ";
subs.getGenericSignature()->print(llvm::dbgs());
llvm::dbgs() << "\n";
llvm::dbgs() << "callee's generic signature: ";
fnTy->getInvocationGenericSignature()->print(llvm::dbgs());
llvm::dbgs() << "\n";
require(false,
"Substitution map does not match callee in apply instruction");
_require(false, "Substitution map does not match callee in apply instruction",
[&](llvm::raw_ostream &out) {
out << "substitution map's generic signature: ";
subs.getGenericSignature()->print(out);
out << "\n";
out << "callee's generic signature: ";
fnTy->getInvocationGenericSignature()->print(out);
out << "\n";
});
}
// Apply the substitutions.
return fnTy->substGenericArgs(F.getModule(), subs, F.getTypeExpansionContext());
@@ -6746,25 +6744,6 @@ public:
}
}
bool isUnreachableAlongAllPathsStartingAt(
SILBasicBlock *StartBlock, BasicBlockSet &Visited) {
if (isa<UnreachableInst>(StartBlock->getTerminator()))
return true;
else if (isa<ReturnInst>(StartBlock->getTerminator()))
return false;
else if (isa<ThrowInst>(StartBlock->getTerminator()) ||
isa<ThrowAddrInst>(StartBlock->getTerminator()))
return false;
// Recursively check all successors.
for (auto *SuccBB : StartBlock->getSuccessorBlocks())
if (!Visited.insert(SuccBB))
if (!isUnreachableAlongAllPathsStartingAt(SuccBB, Visited))
return false;
return true;
}
void verifySILFunctionType(CanSILFunctionType FTy) {
// Make sure that FTy has a self parameter if its calling convention
// implies that it must have one.
@@ -6811,8 +6790,164 @@ public:
std::set<SILInstruction*> ActiveOps;
CFGState CFG = Normal;
GetAsyncContinuationInstBase *GotAsyncContinuation = nullptr;
BBState() = default;
// Clang (as of LLVM 22) does not elide the final move for this;
// see https://github.com/llvm/llvm-project/issues/34037. But
// GCC and MSVC do, and the clang issue will presumably get fixed
// eventually, and the move is not an outrageous cost to bear
// compared to actually copying it.
BBState(maybe_movable_ref<BBState> other)
: BBState(std::move(other).construct()) {}
void printStack(llvm::raw_ostream &out, StringRef label) const {
out << label << ": [";
if (!Stack.empty()) out << "\n";
for (auto allocation: Stack) {
allocation->print(out);
}
out << "]\n";
}
void printActiveOps(llvm::raw_ostream &out, StringRef label) const {
out << label << ": [";
if (!ActiveOps.empty()) out << "\n";
for (auto op: ActiveOps) {
op->print(out);
}
out << "]\n";
}
/// Given that we have two edges to the same block or dead-end region,
/// handle any potential mismatch between the states we were in on
/// those edges.
///
/// For the most part, we don't allow mismatches and immediately report
/// them as errors. However, we do allow certain mismatches on edges
/// that enter dead-end regions,
/// in which case the states need to be conservatively merged.
///
/// This state is the previously-recorded state of the block, which
/// is also what needs to be updated for the merge.
///
/// Note that, when we have branches to a dead-end region, we merge
/// state across *all* branches into the region, not just to a single
/// block. The existence of multiple blocks in a dead-end region
/// implies that all of the blocks are in a loop. This means that
/// flow-sensitive states that were begun externally to the region
/// cannot possibly change within the region in any well-formed way,
/// which is why we can merge across all of them.
///
/// For example, consider this function excerpt:
///
/// bb5:
/// %alloc = alloc_stack $Int
/// cond_br %cond1, bb6, bb100
/// bb6:
/// dealloc_stack %alloc
/// cond_br %cond2, bb7, bb101
///
/// Now suppose that the branches to bb100 and bb101 are branches
/// into the same dead-end region. We will conservatively merge the
/// BBState heading into this region by recognizing that there's
/// a stack mismatch and therefore clearing the stack (preventing
/// anything currently on the stack from being deallocated).
///
/// One might think that this is problematic because, e.g.,
/// bb100 might deallocate %alloc before proceeding to bb101. But
/// for bb100 and bb101 to be in the *same* dead-end region, they
/// must be in a strongly-connected component, which means there
/// must be a path from bb101 back to bb100. That path cannot
/// possibly pass through %alloc again, or else bb5 would be a
/// branch *within* the region, not *into* it. So the loop from
/// bb100 -> bb101 -> ... -> bb100 repeatedly deallocates %alloc
/// and should be ill-formed.
///
/// That's why it's okay (and necessary) to merge state across
/// all paths to the dead-end region.
void handleJoinPoint(const BBState &otherState, bool isDeadEndEdge,
SILInstruction *term, SILBasicBlock *succBB) {
// A helper function for reporting a failure in the edge.
// Note that this doesn't always abort, e.g. when running under
// -verify-continue-on-failure. So if there's a mismatch, we do the
// conservative merge regardless of failure so that we're in a
// coherent state in the successor block.
auto fail = [&](StringRef complaint,
llvm::function_ref<void(llvm::raw_ostream &out)> extra
= nullptr) {
verificationFailure(complaint, term, nullptr,
[&](llvm::raw_ostream &out) {
out << "Entering basic block ";
succBB->printID(out, /*newline*/ true);
if (extra) extra(out);
});
};
// These rules are required to hold unconditionally; there is
// no merge rule for dead-end edges.
if (CFG != otherState.CFG) {
fail("inconsistent coroutine states entering basic block");
}
if (GotAsyncContinuation != otherState.GotAsyncContinuation) {
fail("inconsistent active async continuations entering basic block");
}
// The stack normally has to agree exactly, but we allow stacks
// to disagree on different edges into a dead-end region.
// Intersecting the stacks would be wrong because we actually
// cannot safely allow anything to be popped in this state;
// instead, we simply clear the stack completely. This would
// allow us to incorrectly pass the function-exit condition,
// but we know we cannot reach an exit from succBB because it's
// dead-end.
if (Stack != otherState.Stack) {
if (!isDeadEndEdge) {
fail("inconsistent stack states entering basic block",
[&](llvm::raw_ostream &out) {
otherState.printStack(out, "Current stack state");
printStack(out, "Recorded stack state");
});
}
Stack.clear();
}
// The set of active operations normally has to agree exactly,
// but we allow sets to diverge on different edges into a
// dead-end region.
if (ActiveOps != otherState.ActiveOps) {
if (!isDeadEndEdge) {
fail("inconsistent active-operations sets entering basic block",
[&](llvm::raw_ostream &out) {
otherState.printActiveOps(out, "Current active operations");
printActiveOps(out, "Recorded active operations");
});
}
// Conservatively remove any operations that aren't also found
// in the other state's active set.
for (auto i = ActiveOps.begin(), e = ActiveOps.end(); i != e; ) {
if (otherState.ActiveOps.count(*i)) {
++i;
} else {
i = ActiveOps.erase(i);
}
}
}
}
};
struct DeadEndRegionState {
BBState sharedState;
llvm::SmallPtrSet<SILBasicBlock *, 4> entryBlocks;
DeadEndRegionState(maybe_movable_ref<BBState> state)
: sharedState(std::move(state).construct()) {}
};
};
@@ -6827,16 +6962,28 @@ public:
if (F->getASTContext().hadError())
return;
// Compute which blocks are part of dead-end regions, and start tracking
// all of the edges to those regions.
DeadEndEdges deadEnds(F);
auto visitedDeadEndEdges = deadEnds.createVisitingSet();
using BBState = VerifyFlowSensitiveRulesDetails::BBState;
llvm::DenseMap<SILBasicBlock*, BBState> visitedBBs;
using DeadEndRegionState = VerifyFlowSensitiveRulesDetails::DeadEndRegionState;
SmallVector<std::optional<DeadEndRegionState>> deadEndRegionStates;
deadEndRegionStates.resize(deadEnds.getNumDeadEndRegions());
// Do a traversal of the basic blocks.
// Note that we intentionally don't verify these properties in blocks
// that can't be reached from the entry block.
SmallVector<SILBasicBlock*, 16> Worklist;
visitedBBs.try_emplace(&*F->begin());
Worklist.push_back(&*F->begin());
while (!Worklist.empty()) {
SILBasicBlock *BB = Worklist.pop_back_val();
BBState state = visitedBBs[BB];
for (SILInstruction &i : *BB) {
CurInstruction = &i;
@@ -6853,7 +7000,7 @@ public:
"cannot suspend async task while unawaited continuation is active");
}
}
if (i.isAllocatingStack()) {
if (auto *BAI = dyn_cast<BeginApplyInst>(&i)) {
state.Stack.push_back(BAI->getCalleeAllocationResult());
@@ -6868,16 +7015,18 @@ public:
while (auto *mvi = dyn_cast<MoveValueInst>(op)) {
op = mvi->getOperand();
}
if (!state.Stack.empty() && op == state.Stack.back()) {
state.Stack.pop_back();
} else {
verificationFailure("deallocating allocation that is not the top of the stack",
&i, nullptr,
[&](llvm::raw_ostream &out) {
state.printStack(out, "Current stack state");
out << "Stack allocation:\n" << *op;
// The deallocation is printed out as the focus of the failure.
});
}
} else if (isa<BeginAccessInst>(i) || isa<BeginApplyInst>(i) ||
isa<StoreBorrowInst>(i)) {
bool notAlreadyPresent = state.ActiveOps.insert(&i).second;
@@ -6939,6 +7088,84 @@ public:
auto successors = term->getSuccessors();
for (auto i : indices(successors)) {
SILBasicBlock *succBB = successors[i].getBB();
auto succStateRef = move_if(state, i + 1 == successors.size());
// Some successors (currently just `yield`) have state
// transitions on the edges themselves. Fortunately,
// these successors all require their destination blocks
// to be uniquely referenced, so we never have to combine
// the state change with merging or consistency checking.
// Check whether this edge enters a dead-end region.
if (auto deadEndRegion = deadEnds.entersDeadEndRegion(BB, succBB)) {
// If so, record it in the visited set, which will tell us
// whether it's the last remaining edge to the region.
bool isLastDeadEndEdge = visitedDeadEndEdges.visitEdgeTo(succBB);
// Check for an existing shared state for the region.
auto &regionInfo = deadEndRegionStates[*deadEndRegion];
// If we don't have an existing shared state, and this is the
// last edge to the region, just fall through and process it
// like a normal edge.
if (!regionInfo && isLastDeadEndEdge) {
// This can only happen if there's exactly one edge to the
// block, so we will end up in the insertion success case below.
// Note that the state-changing terminators like `yield`
// always take this path: since this must be the unique edge
// to the successor, it must be in its own dead-end region.
// fall through to the main path
// Otherwise, we need to merge this into the shared state.
} else {
require(!isa<YieldInst>(term),
"successor of 'yield' should not be encountered twice");
// Copy/move our current state into the shared state if it
// doesn't already exist.
if (!regionInfo) {
regionInfo.emplace(std::move(succStateRef));
// Otherwise, merge our current state into the shared state.
} else {
regionInfo->sharedState
.handleJoinPoint(succStateRef.get(), /*dead end*/true,
term, succBB);
}
// Add the successor block to the state's set of entry blocks.
regionInfo->entryBlocks.insert(succBB);
// If this was the last branch to the region, act like we
// just saw the edges to each of its entry blocks.
if (isLastDeadEndEdge) {
for (auto ebi = regionInfo->entryBlocks.begin(),
ebe = regionInfo->entryBlocks.end(); ebi != ebe; ) {
auto *regionEntryBB = *ebi++;
// Copy/move the shared state to be the state for the
// region entry block.
auto insertResult =
visitedBBs.try_emplace(regionEntryBB,
move_if(regionInfo->sharedState, ebi == ebe));
assert(insertResult.second &&
"already visited edge to dead-end region!");
(void) insertResult;
// Add the region entry block to the worklist.
Worklist.push_back(regionEntryBB);
}
}
// Regardless, don't fall through to the main path.
continue;
}
}
// Okay, either this isn't an edge to a dead-end region or it
// was a unique edge to it.
@@ -6946,30 +7173,33 @@
// Optimistically try to set our current state as the state
// of the successor. We can use a move on the final successor;
// if the insertion fails, the move doesn't actually
// happen, which is important because we'll still need it
// to compare against the already-recorded state for the block.
auto insertResult =
visitedBBs.try_emplace(succBB, std::move(succStateRef));
// If the insertion was successful, we need to add the successor
// block to the worklist.
if (insertResult.second) {
auto &insertedState = insertResult.first->second;
// 'yield' has successor-specific state updates, so we do that
// now. 'yield' does not permit critical edges, so we don't
// have to worry about doing this in the case below where
// insertion failed.
if (isa<YieldInst>(term)) {
// Enforce that the unwind logic is segregated in all stages.
if (i == 1) {
insertedState.CFG = VerifyFlowSensitiveRulesDetails::YieldUnwind;
// We check the yield_once rule in the mandatory analyses,
// so we can't assert it yet in the raw stage.
} else if (F->getLoweredFunctionType()->getCoroutineKind()
== SILCoroutineKind::YieldOnce &&
F->getModule().getStage() != SILStage::Raw) {
insertedState.CFG = VerifyFlowSensitiveRulesDetails::YieldOnceResume;
}
}
// Go ahead and add the block.
Worklist.push_back(succBB);
continue;
}
@@ -6978,30 +7208,23 @@ public:
require(!isa<YieldInst>(term),
"successor of 'yield' should not be encountered twice");
// Okay, we failed to insert. That means there's an existing
// state for the successor block. That existing state generally
// needs to match the current state, but certain rules are
// relaxed for branches that enter dead-end regions.
auto &foundState = insertResult.first->second;
// Join the states into `foundState`. We can still validly use
// succStateRef here because the insertion didn't work.
foundState.handleJoinPoint(succStateRef.get(), /*dead-end*/false,
term, succBB);
}
}
}
}
assert(visitedDeadEndEdges.visitedAllEdges() &&
"didn't visit all edges");
}
void verifyBranches(const SILFunction *F) {

View File

@@ -55,11 +55,16 @@ bb3:
return %1 : $()
}
// CHECK: Begin Error in function test_missing_end_borrow
// CHECK-NEXT: SIL verification failed: inconsistent active-operations sets entering basic block
// CHECK-NEXT: Entering basic block bb3
// CHECK-NEXT: Current active operations: []
// CHECK-NEXT: Recorded active operations: [
// CHECK-NEXT: %2 = store_borrow %0 to %1 : $*Klass
// CHECK-NEXT: ]
// CHECK-NEXT: Verifying instruction:
// CHECK-NEXT: -> br bb3 // id: %5
// CHECK-NEXT: End Error in function test_missing_end_borrow
sil [ossa] @test_missing_end_borrow_dead : $@convention(thin) (@guaranteed Klass) -> () {
bb0(%0 : @guaranteed $Klass):
%stk = alloc_stack $Klass

View File

@@ -0,0 +1,333 @@
// RUN: %target-sil-opt -verify-continue-on-failure -o /dev/null %s 2>&1 | %FileCheck %s
// REQUIRES: asserts
sil_stage canonical
import Builtin
// Check that join points normally require the stack to match.
// CHECK-LABEL: Begin Error in function require_match_1
// CHECK-LABEL: Begin Error in function require_match_1
// CHECK: SIL verification failed: inconsistent stack states entering basic block
// CHECK-NEXT: Entering basic block bb3
// CHECK-NEXT: Current stack state: []
// CHECK-NEXT: Recorded stack state: [
// CHECK-NEXT: %0 = alloc_stack $Builtin.Int32
// CHECK-NEXT: ]
// CHECK-LABEL: End Error in function require_match_1
sil @require_match_1 : $@convention(thin) () -> () {
bb0:
%0 = alloc_stack $Builtin.Int32
cond_br undef, bb1, bb2
bb1:
dealloc_stack %0
br bb3
bb2:
br bb3
bb3:
%result = tuple ()
return %result : $()
}
// Same as above, just with the branches switched around.
// CHECK-LABEL: Begin Error in function require_match_2
// CHECK-LABEL: Begin Error in function require_match_2
// CHECK: SIL verification failed: inconsistent stack states entering basic block
// CHECK-NEXT: Entering basic block bb3
// CHECK-NEXT: Current stack state: [
// CHECK-NEXT: %0 = alloc_stack $Builtin.Int32
// CHECK-NEXT: ]
// CHECK-NEXT: Recorded stack state: []
// CHECK-LABEL: End Error in function require_match_2
sil @require_match_2 : $@convention(thin) () -> () {
bb0:
%0 = alloc_stack $Builtin.Int32
cond_br undef, bb1, bb2
bb1:
br bb3
bb2:
dealloc_stack %0
br bb3
bb3:
%result = tuple ()
return %result : $()
}
// Check that such a join point is okay if it's a branch into a dead-end region.
// CHECK-NOT: Begin Error in function merge_unreachable_okay_1
sil @merge_unreachable_okay_1 : $@convention(thin) () -> () {
bb0:
%0 = alloc_stack $Builtin.Int32
cond_br undef, bb1, bb2
bb1:
dealloc_stack %0
br bb3
bb2:
br bb3
bb3:
unreachable
}
// Same as above, just with the branches switched around.
// CHECK-NOT: Begin Error in function merge_unreachable_okay_2
sil @merge_unreachable_okay_2 : $@convention(thin) () -> () {
bb0:
%0 = alloc_stack $Builtin.Int32
cond_br undef, bb1, bb2
bb1:
br bb3
bb2:
dealloc_stack %0
br bb3
bb3:
unreachable
}
// Check that it's not okay to subsequently dealloc the allocation.
// CHECK-LABEL: Begin Error in function merge_unreachable_then_dealloc_1
// CHECK: SIL verification failed: deallocating allocation that is not the top of the stack
// CHECK-NEXT: Current stack state: []
// CHECK-NEXT: Stack allocation:
// CHECK-NEXT: %0 = alloc_stack $Builtin.Int32
// CHECK-LABEL: End Error in function merge_unreachable_then_dealloc_1
sil @merge_unreachable_then_dealloc_1 : $@convention(thin) () -> () {
bb0:
%0 = alloc_stack $Builtin.Int32
cond_br undef, bb1, bb2
bb1:
dealloc_stack %0
br bb3
bb2:
br bb3
bb3:
dealloc_stack %0
unreachable
}
// Same as above, just with the branches switched around.
// CHECK-LABEL: Begin Error in function merge_unreachable_then_dealloc_2
// CHECK: SIL verification failed: deallocating allocation that is not the top of the stack
// CHECK-NEXT: Current stack state: []
// CHECK-NEXT: Stack allocation:
// CHECK-NEXT: %0 = alloc_stack $Builtin.Int32
// CHECK-LABEL: End Error in function merge_unreachable_then_dealloc_2
sil @merge_unreachable_then_dealloc_2 : $@convention(thin) () -> () {
bb0:
%0 = alloc_stack $Builtin.Int32
cond_br undef, bb1, bb2
bb1:
br bb3
bb2:
dealloc_stack %0
br bb3
bb3:
dealloc_stack %0
unreachable
}
// Parallel branches with inconsistent stack state into a dead-end loop
// CHECK-NOT: Begin Error in function parallel_branches_into_dead_1
sil @parallel_branches_into_dead_1 : $@convention(thin) () -> () {
bb0:
%0 = alloc_stack $Builtin.Int32
cond_br undef, bb1, bb2
bb1:
dealloc_stack %0
br bb3
bb2:
br bb4
bb3:
br bb4
bb4:
br bb3
}
// Same as above, just with the dealloc switched around to trigger
// a different visitation pattern.
// CHECK-NOT: Begin Error in function parallel_branches_into_dead_2
sil @parallel_branches_into_dead_2 : $@convention(thin) () -> () {
bb0:
%0 = alloc_stack $Builtin.Int32
cond_br undef, bb1, bb2
bb1:
br bb3
bb2:
dealloc_stack %0
br bb4
bb3:
br bb4
bb4:
br bb3
}
// Add an unreachable block that also branches to the dead-end region to
// make sure we don't fail to visit anything.
// CHECK-NOT: Begin Error in function parallel_branches_into_dead_with_unreachable_block
sil @parallel_branches_into_dead_with_unreachable_block : $@convention(thin) () -> () {
bb0:
%0 = alloc_stack $Builtin.Int32
cond_br undef, bb1, bb2
bb1:
br bb3
bb2:
dealloc_stack %0
br bb4
bb3:
br bb4
bb4:
br bb3
bb5: // unreachable
br bb3
}
// Parallel branches with inconsistent stack state into a dead-end loop
// that contains a dealloc
// CHECK-LABEL: Begin Error in function parallel_branches_into_dead_dealloc_1
// CHECK-NEXT: SIL verification failed: deallocating allocation that is not the top of the stack
sil @parallel_branches_into_dead_dealloc_1 : $@convention(thin) () -> () {
bb0:
%0 = alloc_stack $Builtin.Int32
cond_br undef, bb1, bb2
bb1:
dealloc_stack %0
br bb3
bb2:
br bb4
bb3:
dealloc_stack %0
br bb4
bb4:
br bb3
}
// Same as above, just with the dealloc switched around to trigger
// a different visitation pattern.
// CHECK-LABEL: Begin Error in function parallel_branches_into_dead_dealloc_2
// CHECK-NEXT: SIL verification failed: deallocating allocation that is not the top of the stack
sil @parallel_branches_into_dead_dealloc_2 : $@convention(thin) () -> () {
bb0:
%0 = alloc_stack $Builtin.Int32
cond_br undef, bb1, bb2
bb1:
dealloc_stack %0
br bb3
bb2:
br bb4
bb3:
br bb4
bb4:
dealloc_stack %0 : $*Builtin.Int32
br bb3
}
// Yet another visitation pattern.
// CHECK-LABEL: Begin Error in function parallel_branches_into_dead_dealloc_3
// CHECK-NEXT: SIL verification failed: deallocating allocation that is not the top of the stack
sil @parallel_branches_into_dead_dealloc_3 : $@convention(thin) () -> () {
bb0:
%0 = alloc_stack $Builtin.Int32
cond_br undef, bb1, bb2
bb1:
br bb3
bb2:
dealloc_stack %0 : $*Builtin.Int32
br bb4
bb3:
dealloc_stack %0 : $*Builtin.Int32
br bb4
bb4:
br bb3
}
// Yet another visitation pattern.
// CHECK-LABEL: Begin Error in function parallel_branches_into_dead_dealloc_4
// CHECK-NEXT: SIL verification failed: deallocating allocation that is not the top of the stack
sil @parallel_branches_into_dead_dealloc_4 : $@convention(thin) () -> () {
bb0:
%0 = alloc_stack $Builtin.Int32
cond_br undef, bb1, bb2
bb1:
br bb3
bb2:
dealloc_stack %0 : $*Builtin.Int32
br bb4
bb3:
br bb4
bb4:
dealloc_stack %0 : $*Builtin.Int32
br bb3
}
// And again, add an unreachable block.
// CHECK-LABEL: Begin Error in function parallel_branches_into_dead_dealloc_with_unreachable_block
// CHECK-NEXT: SIL verification failed: deallocating allocation that is not the top of the stack
sil @parallel_branches_into_dead_dealloc_with_unreachable_block : $@convention(thin) () -> () {
bb0:
%0 = alloc_stack $Builtin.Int32
cond_br undef, bb1, bb2
bb1:
br bb3
bb2:
dealloc_stack %0 : $*Builtin.Int32
br bb4
bb3:
dealloc_stack %0 : $*Builtin.Int32
br bb4
bb4:
br bb3
bb5: // unreachable
br bb3
}
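// For contrast, a hypothetical sketch (not one of this PR's tests; the
// function name is invented): when every path into the dead-end region
// carries the same stack state, a dealloc inside the region should
// presumably verify cleanly, just like the merge_unreachable_okay cases
// above.
// CHECK-NOT: Begin Error in function consistent_dealloc_in_dead_end
sil @consistent_dealloc_in_dead_end : $@convention(thin) () -> () {
bb0:
%0 = alloc_stack $Builtin.Int32
cond_br undef, bb1, bb2
bb1:
br bb3
bb2:
br bb3
bb3:
dealloc_stack %0 : $*Builtin.Int32
unreachable
}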


@@ -19,7 +19,10 @@ sil @alloc_pack_metadata_before_tuple : $@convention(thin) () -> () {
}
// CHECK-LABEL: Begin Error in function dealloc_pack_metadata_with_bad_operand
// CHECK: SIL verification failed: stack dealloc does not match most recent stack alloc:
// CHECK: SIL verification failed: deallocating allocation that is not the top of the stack
// CHECK-LABEL: End Error in function dealloc_pack_metadata_with_bad_operand
// CHECK-LABEL: Begin Error in function dealloc_pack_metadata_with_bad_operand
// CHECK: SIL verification failed: return with stack allocs that haven't been deallocated
// CHECK-LABEL: End Error in function dealloc_pack_metadata_with_bad_operand
// CHECK-LABEL: Begin Error in function dealloc_pack_metadata_with_bad_operand
// CHECK: SIL verification failed: Must have alloc_pack_metadata operand


@@ -0,0 +1,197 @@
// RUN: %target-sil-opt -test-runner %s -o /dev/null 2>&1 | %FileCheck %s
// CHECK-LABEL: begin running test {{.*}} on all_exit: dead_end_edges
// CHECK-NEXT: visited all edges
// CHECK-NEXT: end running test {{.*}} on all_exit: dead_end_edges
sil @all_exit : $@convention(thin) () -> () {
bb0:
specify_test "dead_end_edges"
cond_br undef, bb1, bb2
bb1:
br bb3
bb2:
br bb3
bb3:
%result = tuple ()
return %result : $()
}
// CHECK-LABEL: begin running test {{.*}} on one_dead: dead_end_edges
// CHECK-NEXT: bb0 -> bb2 (region 0; last edge)
// CHECK-NEXT: visited all edges
// CHECK-NEXT: end running test {{.*}} on one_dead: dead_end_edges
sil @one_dead : $@convention(thin) () -> () {
bb0:
specify_test "dead_end_edges"
cond_br undef, bb1, bb2
bb1:
br bb3
bb2:
unreachable
bb3:
%result = tuple ()
return %result : $()
}
// CHECK-LABEL: begin running test {{.*}} on one_dead_loop: dead_end_edges
// CHECK-NEXT: bb0 -> bb2 (region 0; last edge)
// CHECK-NEXT: visited all edges
// CHECK-NEXT: end running test {{.*}} on one_dead_loop: dead_end_edges
sil @one_dead_loop : $@convention(thin) () -> () {
bb0:
specify_test "dead_end_edges"
cond_br undef, bb1, bb2
bb1:
br bb3
bb2:
br bb2
bb3:
%result = tuple ()
return %result : $()
}
// CHECK-LABEL: begin running test {{.*}} on one_dead_loop_with_branch: dead_end_edges
// CHECK-NEXT: bb0 -> bb2 (region 1; last edge)
// CHECK-NEXT: bb3 -> bb4 (region 0; last edge)
// CHECK-NEXT: visited all edges
// CHECK-NEXT: end running test {{.*}} on one_dead_loop_with_branch: dead_end_edges
sil @one_dead_loop_with_branch : $@convention(thin) () -> () {
bb0:
specify_test "dead_end_edges"
cond_br undef, bb1, bb2
bb1:
br bb5
bb2:
br bb3
bb3:
cond_br undef, bb2, bb4
bb4:
unreachable
bb5:
%result = tuple ()
return %result : $()
}
// CHECK-LABEL: begin running test {{.*}} on complicated_one: dead_end_edges
// CHECK-NEXT: bb0 -> bb2 (region 2; last edge)
// CHECK-NEXT: bb2 -> bb3 (region 1; last edge)
// CHECK-NEXT: bb3 -> bb8 (region 0; more edges remain)
// CHECK-NEXT: bb5 -> bb8 (region 0; more edges remain)
// CHECK-NEXT: bb6 -> bb8 (region 0; more edges remain)
// CHECK-NEXT: bb9 -> bb8 (region 0; more edges remain)
// CHECK-NEXT: bb11 -> bb8 (region 0; last edge)
// CHECK-NEXT: visited all edges
// CHECK-NEXT: end running test {{.*}} on complicated_one: dead_end_edges
sil @complicated_one : $@convention(thin) () -> () {
bb0:
specify_test "dead_end_edges"
cond_br undef, bb1, bb2
bb1:
br bb9
bb2:
br bb3
bb3:
cond_br undef, bb4, bb8
bb4:
cond_br undef, bb5, bb6
bb5:
cond_br undef, bb8, bb3
bb6:
cond_br undef, bb7, bb8
bb7:
br bb3
bb8:
unreachable
bb9:
cond_br undef, bb8, bb10
bb10:
cond_br undef, bb11, bb12
bb11:
cond_br undef, bb9, bb8
bb12:
%result = tuple ()
return %result : $()
}
// CHECK-LABEL: begin running test {{.*}} on trivial_dead_end: dead_end_edges
// CHECK-NEXT: visited all edges
// CHECK-NEXT: end running test {{.*}} on trivial_dead_end: dead_end_edges
sil @trivial_dead_end : $@convention(thin) () -> () {
bb0:
specify_test "dead_end_edges"
unreachable
}
// CHECK-LABEL: begin running test {{.*}} on all_dead: dead_end_edges
// CHECK-NEXT: bb0 -> bb1 (region 1; last edge)
// CHECK-NEXT: bb0 -> bb2 (region 2; last edge)
// CHECK-NEXT: bb1 -> bb3 (region 0; more edges remain)
// CHECK-NEXT: bb2 -> bb3 (region 0; last edge)
// CHECK-NEXT: visited all edges
// CHECK-NEXT: end running test {{.*}} on all_dead: dead_end_edges
sil @all_dead : $@convention(thin) () -> () {
bb0:
specify_test "dead_end_edges"
cond_br undef, bb1, bb2
bb1:
br bb3
bb2:
br bb3
bb3:
unreachable
}
// CHECK-LABEL: begin running test {{.*}} on parallel_into_loop: dead_end_edges
// CHECK-NEXT: bb0 -> bb1 (region 1; last edge)
// CHECK-NEXT: bb0 -> bb2 (region 2; last edge)
// CHECK-NEXT: bb1 -> bb3 (region 0; more edges remain)
// CHECK-NEXT: bb2 -> bb4 (region 0; last edge)
// CHECK-NEXT: visited all edges
// CHECK-NEXT: end running test {{.*}} on parallel_into_loop: dead_end_edges
sil @parallel_into_loop : $@convention(thin) () -> () {
bb0:
specify_test "dead_end_edges"
cond_br undef, bb1, bb2
bb1:
br bb3
bb2:
br bb4
bb3:
br bb4
bb4:
br bb3
}
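// A hypothetical extra shape (not part of this PR's tests; the function
// name is invented): two disjoint dead-end regions hanging off separate
// branches. Presumably bb0 -> bb1 and bb2 -> bb3 would each be reported
// as the last (and only) edge of its own region; the exact region numbers
// the utility would print are a guess, so no CHECK lines are asserted here.
sil @two_disjoint_dead_ends : $@convention(thin) () -> () {
bb0:
specify_test "dead_end_edges"
cond_br undef, bb1, bb2
bb1:
unreachable
bb2:
cond_br undef, bb3, bb4
bb3:
unreachable
bb4:
%result = tuple ()
return %result : $()
}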


@@ -115,14 +115,16 @@ bb12:
} // end sil function 'multi_end_licm'
// CHECK-LABEL: sil hidden @multi_end_licm_loop_exit : $@convention(thin) () -> () {
// CHECK: br [[LOOPH:bb[0-9]+]]({{.*}} : $Builtin.Int64)
// CHECK: [[LOOPH]]({{.*}} : $Builtin.Int64)
// CHECK: begin_access [modify] [dynamic] [no_nested_conflict]
// CHECK: cond_br {{.*}}, [[LOOPCOND1:bb[0-9]+]], [[LOOPCOND2:bb[0-9]+]]
// CHECK: [[LOOPCOND1]]
// CHECK-NEXT: store
// CHECK-NEXT: end_access
// CHECK: return
// CHECK: bb2:
// CHECK: begin_access [modify] [dynamic] [no_nested_conflict]
// CHECK: br bb3
// CHECK: bb5:
// CHECK: end_access
// CHECK: bb6:
// CHECK: bb7:
// CHECK: end_access
// CHECK: bb8:
// CHECK: } // end sil function 'multi_end_licm_loop_exit'
sil hidden @multi_end_licm_loop_exit : $@convention(thin) () -> () {
bb0:
%0 = global_addr @$s3tmp13reversedArrays18ReversedCollectionVySaySiGGvp : $*ReversedCollection<Array<Int>>
@@ -185,10 +187,10 @@ bbend1:
bbend2:
%otherInt = struct $Int (%27 : $Builtin.Int64)
store %otherInt to %global : $*Int
end_access %global : $*Int
cond_br %47, bbOut, bb5
bbOut:
end_access %global : $*Int
br bb6
bb5: