If the '[poison]' flag is set, then all references within this debug
value will be overwritten with a sentinel at this point in the
program. This is used in debug builds when shortening non-trivial
value lifetimes to ensure the debugger cannot inspect invalid
memory. `debug_value` instructions with the poison flag are not
generated until OSSA is lowered. They are not expected to be serialized
within the module, and the pipeline is not expected to do any
significant code motion after lowering.
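For illustration, a poisoning debug_value might print like this (a minimal sketch; the value, type, and variable name are made up):

  // The references held by %obj's debug info are overwritten with a
  // sentinel here because %obj's lifetime was shortened above.
  debug_value [poison] %obj : $MyClass, let, name "obj"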
* Refactoring: replace "Destination" and the ownership qualifier by a single "Mode". This represents much better the mode how the instruction is to be lowered. NFC
* Make assign_by_wrapper printable and parseable (a syntax sketch follows this list).
* Fix lowering of the assign modes for indirect results of the init closure: the indirect result was initialized rather than assigned. The fix is to insert a destroy_addr of the old value before calling the init closure. This fixes a memory lifetime error and/or a memory leak. Found by inspection.
* Fix an iterator-invalidation crash in RawSILInstLowering
* Add tests for lowering assign_by_wrapper.
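A rough sketch of the printed form (the operand names, types, and the [init] mode spelling are assumptions; the parser change itself is authoritative):

  // Hypothetical: assign %value through the property wrapper at %addr in
  // "init" mode, using the given init closure and setter.
  assign_by_wrapper %value : $Int to [init] %addr : $*Wrapper<Int>,
    init %initFn : $@convention(thin) (Int) -> Wrapper<Int>,
    set %setFn : $@callee_guaranteed (Int) -> ()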
Store the 1-byte kindAndFlags of SILLocation in the instruction's SILNode bitfield and only store SILLocation::storage in SILInstruction directly.
This reduces the space for the location from 2 to 1 word in SILInstruction.
Conceptually, SILInstruction is still a SILNode. But implementation-wise SILNode is not a base class of SILInstruction anymore.
Only the two sub-classes of SILInstruction - SingleValueInstruction and NonSingleValueInstruction - inherit from SILNode. SingleValueInstruction's SILNode is embedded into a ValueBase and its relative offset in the class is the same as in NonSingleValueInstruction (see SILNodeOffsetChecker).
This makes it possible to cast from a SILInstruction to a SILNode without knowing which SILInstruction sub-class it is.
Casting to SILNode cannot be done implicitly, but only with an LLVM `cast` or with SILInstruction::asSILNode(). But this is a rare case anyway.
This removes the ambiguity when casting from a SingleValueInstruction to SILNode, which makes the code simpler. E.g. the "isRepresentativeSILNode" logic is not needed anymore.
Also, it reduces the size of the most used instruction class - SingleValueInstruction - by one pointer.
Previously, FieldIndexCacheBase's parent class was always
SingleValueInstruction. In certain cases I need to be able to shim in a
SingleValueInstruction subclass as the parent class instead; in my case, to
imbue StructExtractInst with ownership forwarding.
This commit itself doesn't make that change and instead just always templatizes
using SingleValueInstruction.
Previously, we always inferred the ownership of a switch_enum from its phi
operands. This forced us to model the failure to find a good
OperandOwnershipKindMap in OperandOwnership.cpp. We want to eliminate such
conditions so that failing to find a constraint can mean that an operand
accepts any value, rather than signaling a failure.
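For illustration, a hypothetical OSSA fragment showing the relationship this change relies on (names and types are made up):

  // The payload argument in bb1 is @owned because the switch_enum
  // operand %0 is @owned, not the other way around.
  bb0(%0 : @owned $Optional<Klass>):
    switch_enum %0 : $Optional<Klass>, case #Optional.some!enumelt: bb1, case #Optional.none!enumelt: bb2
  bb1(%payload : @owned $Klass):
    destroy_value %payload : $Klass
    // ...
  bb2:
    // ...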
* a new [immutable] attribute on ref_element_addr and ref_tail_addr
* new instructions: begin_cow_mutation and end_cow_mutation
These new instructions are intended to be used for the stdlib's COW containers, e.g. Array.
They allow more aggressive optimizations, especially for Array.
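A minimal sketch of the intended pattern, assuming a hypothetical ArrayBuffer class:

  // %isUnique : $Builtin.Int1 reports whether %buf is uniquely
  // referenced; %mutableBuf is the reference through which mutation
  // may happen (after copying the buffer if it was not unique).
  (%isUnique, %mutableBuf) = begin_cow_mutation %buf : $ArrayBuffer
  // ... mutate the buffer's contents ...
  // Seal the buffer; its contents must not change until the next
  // begin_cow_mutation.
  %immutableBuf = end_cow_mutation %mutableBuf : $ArrayBuffer
  // [immutable] lets the optimizer assume loads from this address see
  // unchanging memory for the lifetime of %immutableBuf.
  %elems = ref_tail_addr [immutable] %immutableBuf : $ArrayBuffer, $Int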
The field's ordinal value is used by the Projection abstraction, which is
the basis of efficiently comparing and sorting access paths in SIL. It must
be cached before it is used by any SIL passes, including the verifier, or it
causes widespread quadratic complexity.
Fixes <rdar://problem/50353228> Swift compile time regression with optimizations enabled
In production code, a file that was taking 40 minutes to compile now
takes 1 minute, with more than half of the time in LLVM.
Here's a short script that reproduces the problem. It used to take 30s
and now takes 0.06s:
// swift genlazyinit.swift > lazyinit.sil
// sil-opt ./lazyinit.sil --access-enforcement-opts
var NumProperties = 300
print("""
sil_stage canonical
import Builtin
import Swift
import SwiftShims
public class LazyProperties {
""")
for i in 0..<NumProperties {
print("""
// public lazy var i\(i): Int { get set }
@_hasStorage @_hasInitialValue final var __lazy_storage__i\(i): Int? { get set }
""")
}
print("""
}
// LazyProperties.init()
sil @$s4lazy14LazyPropertiesCACycfc : $@convention(method) (@owned LazyProperties) -> @owned LazyProperties {
bb0(%0 : $LazyProperties):
%enum = enum $Optional<Int>, #Optional.none!enumelt
""")
for i in 0..<NumProperties {
let adr = (i*4) + 2
let access = adr + 1
print("""
%\(adr) = ref_element_addr %0 : $LazyProperties, #LazyProperties.__lazy_storage__i\(i)
%\(access) = begin_access [modify] [dynamic] %\(adr) : $*Optional<Int>
store %enum to %\(access) : $*Optional<Int>
end_access %\(access) : $*Optional<Int>
""")
}
print("""
return %0 : $LazyProperties
} // end sil function '$s4lazy14LazyPropertiesCACycfc'
""")
ConvertFunction and reabstraction thunks need this attribute. Otherwise,
there is no way to identify that withoutActuallyEscaping was used
to explicitly perform a conversion.
The destination of a [without_actually_escaping] conversion always has
an escaping function type. The source may have either an escaping or
@noescape function type. The conversion itself may be a nop, and there
is nothing distinctive about it. What is special about these
conversions is that the source function type may have unboxed
captures, i.e. @inout_aliasable parameters. Exclusivity
requires that the compiler enforce a SIL data flow invariant that
nonescaping closures with unboxed captures can never be stored or
passed as an @escaping function argument. Adding this attribute allows
the compiler to enforce the invariant in general with an escape hatch
for withoutActuallyEscaping.
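For illustration (a sketch; the closure value and function types are hypothetical):

  // %closure is a nonescaping closure whose unboxed captures are
  // @inout_aliasable; the attribute marks this escape as sanctioned
  // by withoutActuallyEscaping.
  %converted = convert_function %closure : $@noescape @callee_guaranteed () -> () to [without_actually_escaping] $@callee_guaranteed () -> ()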
This flag supports promoting KeyPath access violations to an error in
Swift 4+, while building the standard library in Swift 3 mode. This is
only necessary as long as the standard library continues to build in
Swift 3 mode. Once the standard library build migrates, it can all be
ripped out.
<rdar://problem/40115738> [Exclusivity] Enforce Keypath access as an error, not a warning in 4.2.
This statically guarantees that the access has no inner conflict within
its own scope.
IRGen will turn this into a "nontracking" access in which an
exclusivity check is performed for conflicts on an outer scope. However,
unlike normal accesses the runtime does not record the access, and the
access will not be checked for subsequent conflicts.
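As a hypothetical illustration of the flag on both the paired and unpaired forms:

  // Paired: checked against outer scopes at runtime, but not recorded,
  // and not rechecked for nested conflicts.
  %a = begin_access [modify] [dynamic] [no_nested_conflict] %addr : $*Int
  // ...
  end_access %a : $*Int
  // Unpaired: note there is no matching end_unpaired_access (see below).
  begin_unpaired_access [modify] [dynamic] [no_nested_conflict] %addr : $*Int, %scratch : $*Builtin.UnsafeValueBuffer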
end_unpaired_access [no_nested_conflict] is not currently
supported. Making a begin_unpaired_access [no_nested_conflict] requires
deleting the corresponding end_unpaired_access. Future runtimes
could support this for verification by storing inline data in the
value buffer. However, the runtime can never assume that a
[no_nested_conflict] begin_unpaired_access will have a corresponding
end_unpaired_access call without adding a new ExclusivityFlag for
that purpose.
Make getDesugaredType() as fast as possible for now. With the old implementation:
1) Switching over the sugared types turned into a frequently
mispredicted branch because the sugar in the type system is random
as far as the processor is concerned.
2) Storing the underlying/singly-desugared type at different offsets
in memory adds more code bloat and misprediction.
Short of a major redesign to avoid pointer chasing, this is probably as
fast as the method will get.
1) Remove SWIFT_INLINE_BITS boilerplate. Now that we're not using anonymous/transparent unions, we don't need the
SWIFT_BITFIELD_BITS macro.
2) Refine the bitfield size check to better support templated bitfields.
3) Refine the SIL templated bitfields to not be prematurely "full".
Refactor UnaryInstructionWithTypeDependentOperandsBase into two template
classes, where the original template now subclasses a simpler template
called "InstructionBaseWithTrailingObjects".