This has three principal advantages:
- It gives some additional type-safety when working
with known accessors.
- It makes it significantly easier to test whether a declaration
is an accessor and encourages the use of a common idiom.
- It saves a small amount of memory in both FuncDecl and its
serialized form.
The goal is to make it more composable for a subclass to add its own
trailing-objects fields.
While I was doing this, I noticed that the apply instructions provided
redundant getNumArguments() and getNumCallArguments() accessors, so I
went ahead and unified them.
...being careful to only do it once per initializer. Additionally,
/don't/ offer the suggestion if there was already a conditional
assignment to 'self', because that would wipe it out and the user
should think harder.
...unless the struct contains a field that cannot be zero-initialized,
such as a non-nullable pointer.
This suggestion is only made for C structs because 'init()' may not be
the right choice for other structs.
...as detected by initializing an individual field without having
initialized the whole object (via `self = value`).
This only applies in pre-Swift-5 mode because the next commit will
treat all cross-module struct initializers as delegating in Swift 5.
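As a rough illustration, using hypothetical names for a struct imported from a C header:
extension CPoint {
    init(both value: Int32) {
        // Assigning individual fields without first initializing the whole
        // value (via 'self = ...' or 'self.init()') is what triggers the
        // diagnostic; the fix-it suggests inserting 'self.init()' here to
        // zero-initialize the value first. If CPoint had a field that cannot
        // be zero-initialized, such as a non-nullable pointer, no fix-it
        // would be offered.
        self.x = value
        self.y = value
    }
}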
This changes code generation a bit, because now the conditional
state bitmap uses a bit to track if the 'self' box was stored,
not if the 'self' value was consumed. In some cases this
eliminates an extra bit; in others it introduces one. Either
way it hardly matters, because LLVM will easily optimize this
bit manipulation.
In a throwing or failable initializer for a class, the typical pattern
is that an apply or try_apply consumes the self value, and returns
success or failure. On success, a new self value is produced.
On failure, there is no new self value. In both cases, the original
self value no longer exists.
We used to model this by attempting to look at the apply or try_apply
instruction, and figure out from subsequent control flow which
successor block was the success case and which was the error case.
The error blocks were marked as such, and a dataflow analysis was used
to compute whether 'self' had been consumed in each block reachable
from the entry block.
This analysis was used to prevent invalid use of 'self' in catch
blocks when the initializer delegation was wrapped in do/catch;
more importantly, it was also used to know when to release 'self'
on exit from the initializer.
For example, when we 'throw e' here, 'self' was already consumed
and does not need to be released -- doing so would cause a crash:
do {
    try self.init(...)
} catch let e {
    // do some other cleanup
    throw e
}
On the other hand, here we do have to release 'self', otherwise we
will exit leaking memory:
do {
    try someOtherThing()
    self.init(...)
} catch let e {
    // do some other cleanup
    throw e
}
The problem with the old analysis is that it was too brittle and did
not recognize certain patterns generated by SILGen. For example, it
did not correctly detect the failure block of a delegation to a
foreign throwing initializer, because those are not modeled as a
try_apply; instead, they return an Optional value.
For similar reasons, we did not correctly detect the failure blocks emitted
after calls to initializers that are both throwing and failable.
The new analysis is simpler and more robust. The idea is that in the
success block, SILGen emits a store of the new 'self' value into
the self box. So all we need to do is seed the dataflow analysis with
the set of blocks where the 'self' box is stored to, excluding the
initial entry block.
The new analysis is called 'self initialized' rather than 'self
consumed'. In blocks dominated by the self.init() delegation,
the result is the logical not of the old analysis:
- If the old analysis said self was consumed, the new one says self
is not initialized.
- If the old analysis said self was not consumed, the new analysis
says that self *is* initialized.
- If the old analysis returned a partial result, the new analysis
will also; it means the block in question can be reached from
blocks where the 'self' box is both initialized and not.
Note that any blocks that precede the self.init() delegation now
report self as uninitialized, because they are not dominated by
a store into the box. So any clients of the old analysis must first
check if self is "live", meaning we're past the point of the
self.init() call. Only if self is live do we then go on to check
the 'self initialized' analysis.
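As a concrete sketch (the class and its initializers are made up purely for illustration):
class Box {
    var label: String
    init(validating label: String) throws {
        self.label = label
    }
    convenience init?(label: String) throws {
        // Blocks before the delegation: nothing has been stored into the
        // 'self' box yet, so they report 'self' as uninitialized; clients
        // must first check that 'self' is live before consulting the new
        // 'self initialized' analysis.
        guard !label.isEmpty else { return nil }
        try self.init(validating: label)
        // Success block: SILGen stores the new 'self' into the box, so blocks
        // dominated by that store report 'self' as initialized; the failure
        // path of the delegation contains no such store.
    }
}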
Again, since there's no distinction between an enum initializer that
delegates to 'self.init' and one that assigns to 'self', we can remove
the special handling of enum initializers in the 'root self' case.
Now, 'root self' is only used for designated initializers in classes
with no superclass, and struct initializers that perform memberwise
initialization of stored properties.
This regresses some diagnostics, because the logic for delegating
init diagnostics is missing some heuristics present in the root self
case. I will fix this in a subsequent patch.
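For reference, a minimal sketch of the two remaining 'root self' cases, with made-up types:
// A struct initializer performing memberwise initialization of its
// stored properties:
struct Measurement {
    var value: Double
    var unit: String
    init(value: Double, unit: String) {
        self.value = value
        self.unit = unit
    }
}
// A designated initializer in a class with no superclass:
class Logger {
    let name: String
    init(name: String) {
        self.name = name
    }
}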
Previously protocol extension initializers which called 'self.init' were
considered 'delegating', and ones that assign to 'self' were considered
'root'.
Both have the same SIL lowering so the distinction is not useful, and
removing it simplifies some code.
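For illustration, with a made-up protocol, the two forms that used to be classified differently:
protocol Resettable {
    init()
}
extension Resettable {
    // Previously considered 'delegating':
    init(byDelegating: Bool) {
        self.init()
    }
    // Previously considered 'root'; both now get the same treatment, since
    // they share the same SIL lowering:
    init(byAssigning: Bool) {
        self = Self()
    }
}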
This replaces the '[volatile]' flag. Now, class_method and
super_method are only used for vtable dispatch.
The witness_method instruction is still overloaded for use
with both ObjC protocol requirements and Swift protocol
requirements; the next step is to make it mean only the
latter, with ObjC protocol calls moving to objc_method as well.
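A rough Swift-level illustration (the declarations are made up; the comments describe the unoptimized SIL one would expect):
class Widget {
    func refresh() {}
}
@objc protocol Tappable {
    func tap()
}
func exercise(w: Widget, t: Tappable) {
    w.refresh()   // class_method: now always vtable dispatch
    t.tap()       // an ObjC protocol requirement: still lowered via
                  // witness_method for now, moving to objc_method per
                  // the plan above
}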
Separate SIL instructions from SIL values, and introduce a common
superclass, SILNode.
This is in preparation for allowing instructions to have multiple
results. It is also a somewhat more elegant representation for
instructions that have zero results. Instructions that are known
to have exactly one result inherit from a class, SingleValueInstruction,
that subclasses both ValueBase and SILInstruction. Some care must be
taken when working with SILNode pointers and testing for equality;
please see the comment on SILNode for more information.
A number of SIL passes needed to be updated in order to handle this
new distinction between SIL values and SIL instructions.
Note that the SIL parser is now stricter about not trying to assign
a result value from an instruction (like 'return' or 'strong_retain')
that does not produce any.
This is a very easily misused API, since it allows users to leak instructions
if they are not careful. This commit removes the API and replaces its small
number of uses with higher-level APIs that accomplish the same task
without using removeFromParent(). No users specifically
required removeFromParent.
An example of one way we were using removeFromParent is to move a SILInstruction
to the front of a block. That does not require exposing an API like
removeFromParent()... we can just create a higher level API like the one added
in this commit: SILInstruction::moveFront(SILBasicBlock *).
rdar://31276565
With the introduction of special decl names, `Identifier getName()` on
`ValueDecl` will be removed and pushed down to nominal declarations
whose name is guaranteed not to be special. Prepare for this by calling
to `DeclBaseName getBaseName()` instead where appropriate.
Replace `NameOfType foo = dyn_cast<NameOfType>(bar)` with DRY version `auto foo = dyn_cast<NameOfType>(bar)`.
The DRY auto version is by far the dominant form already used in the repo, so this PR merely brings the exceptional cases (redundant repetition form) in line with the dominant form (auto form).
See the [C++ Core Guidelines](https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#es11-use-auto-to-avoid-redundant-repetition-of-type-names) for a general discussion on why to use `auto` to avoid redundant repetition of type names.
There was no real code sharing going on here; instead, the size of
ElementCollector made it difficult to ascertain, without reading the code,
that the two code paths are completely separate.
This is a NFC change internal to DIMemoryUseCollector that is not visible
outside of DI.
rdar://31521023
This is necessary since other passes rely on DIMemoryUseCollector.h and I want
to update each one of them individually to minimize disruption.
rdar://31521023
At some point, pass definitions were heavily macro-ized, and descriptive
pass names ended up being added in two places. This is not only redundant
but a source of confusion. You could waste a lot of time grepping for
the wrong string. I removed all the getName() overrides which, at
around 90 passes, was a fairly significant amount of code bloat.
Any pass that we want to be able to invoke by name from a tool
(sil-opt) or a pipeline plan *should* have a unique type name, enum value,
command-line string, and name string. I removed a comment about the
various inliner passes that contradicted that.
Side note: We should be consistent with the policy that a pass is
identified by its type. We have a couple passes, LICM and CSE, which
currently violate that convention.
When the DI lifetime checker diagnoses an `inout`-related error, it tries to examine the function you’re calling to emit its name in the error message. Unfortunately, it implicitly assumes that `ApplyExpr::getCalledValue()` will find a `ValueDecl` to return; if the `ApplyExpr` directly calls a closure, it won’t, and so `handleInOutUse` will try to `dyn_cast` a `nullptr`. This change adds a check to avoid that.
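A sketch of the kind of (intentionally invalid) code that exercises this path:
struct Counter {
    var count: Int
    init() {
        // 'count' is passed inout before being initialized, so the DI
        // lifetime checker diagnoses the use. Because the callee is a
        // closure literal rather than a named function, the ApplyExpr has
        // no ValueDecl for getCalledValue() to return; the added check
        // keeps handleInOutUse from dyn_cast-ing a nullptr here.
        { (value: inout Int) in value += 1 }(&count)
        count = 0
    }
}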