Introduce `AvailableDuringLoweringDeclFilter`, which can be composed with
`OptionalTransformRange` to implement iterators that filter out unavailable
decls.
Moving the query implementation up to the AST library from SIL will allow
conveniences to be written on specific AST element classes. For instance, this
will allow `EnumDecl` to expose a convenience that enumerates element decls
that are available during lowering.
Also, improve naming and documentation for these queries.
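A minimal sketch of how the filter might compose with `OptionalTransformRange` and feed a convenience on `EnumDecl`; the query name `isAvailableDuringLowering()` and the exact range plumbing below are assumptions for illustration, not the actual declarations:

```cpp
#include "swift/AST/Decl.h"
#include "swift/Basic/STLExtras.h"

using namespace swift;

// Sketch of the filter: map a decl to None when it should be skipped, so
// OptionalTransformRange silently drops it during iteration.
struct AvailableDuringLoweringDeclFilter {
  llvm::Optional<EnumElementDecl *> operator()(EnumElementDecl *elt) const {
    if (elt->isAvailableDuringLowering()) // assumed AST-level query
      return elt;
    return llvm::None;
  }
};

// Hypothetical convenience on EnumDecl built from the two pieces.
static auto getElementsAvailableDuringLowering(EnumDecl *ED) {
  using Filter = AvailableDuringLoweringDeclFilter;
  return OptionalTransformRange<decltype(ED->getAllElements()), Filter>(
      ED->getAllElements(), Filter());
}
```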
Reformatting everything now that we have explicit `llvm` namespaces. I've
separated this from the main commit to help manage merge conflicts and
to make the mega-patch a bit easier to read.
This is phase-1 of switching from llvm::Optional to std::optional in the
next rebranch. llvm::Optional was removed from upstream LLVM, so we need
to migrate off rather soon. On Darwin, std::optional and llvm::Optional
have the same layout, so we don't need to be as concerned about ABI
beyond the name mangling. `llvm::Optional` is only returned from one
function:
```
getStandardTypeSubst(StringRef TypeName,
                     bool allowConcurrencyManglings);
```
It's the return value, so it should not impact the mangling of the
function, and the layout is the same as `std::optional`, so it should be
mostly okay. This function doesn't appear to have users, and its ABI was
already broken two years ago for concurrency without anyone seeming to notice,
so this should be "okay".
I'm doing the migration incrementally so that folks working on main can
cherry-pick back to the release/5.9 branch. Once 5.9 is done and locked
away, then we can go through and finish the replacement. Since `None`
and `Optional` show up in contexts where they are not `llvm::None` and
`llvm::Optional`, I'm preparing the work now by going through and
removing the namespace unwrapping and making the `llvm` namespace
explicit. This should make it fairly mechanical to go through and
replace llvm::Optional with std::optional, and llvm::None with
std::nullopt. It's also a change that can be brought onto the
release/5.9 branch with minimal impact. This should be an NFC change.
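As an illustration of how mechanical the change is; `findIndex` below is a made-up example, not a real compiler API:

```cpp
#include "llvm/ADT/Optional.h"
#include "llvm/ADT/StringRef.h"

// Phase 1 (this change): spell out the namespace explicitly instead of relying
// on `using llvm::Optional;` / `using llvm::None;`.
llvm::Optional<unsigned> findIndex(llvm::StringRef name) { // was: Optional<unsigned>
  if (name.empty())
    return llvm::None;                                     // was: return None;
  return unsigned(name.size());
}

// Phase 2 (after the rebranch) then becomes a textual swap:
//   std::optional<unsigned> findIndex(llvm::StringRef name);
//   ...
//   return std::nullopt;
```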
Data structures must be layout-compatible when built with and without asserts.
Fixes a compiler crash that occurs when the C++ sources are built without asserts while the SwiftCompilerSources are built with asserts.
rdar://110363377
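A contrived illustration of the kind of mismatch this guards against (not the actual data structure involved in the fix): a field that only exists in assert builds changes the layout, so the two sides disagree about offsets and sizes when bridging.

```cpp
// Contrived example: layout differs between assert and no-assert builds.
struct BridgedThing {
  void *impl;
#ifndef NDEBUG
  unsigned debugID; // assert-only field: changes sizeof() seen by the other side
#endif
};

// Fix pattern: keep the layout identical in both configurations and only vary
// the *use* of the field, not its presence.
struct BridgedThingFixed {
  void *impl;
  unsigned debugID; // always present; only consulted in assert builds
};
```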
When `-unavailable-decl-optimization=complete` is specified, exclude
unavailable enum cases from the runtime layout of enums with payloads. Without
this, the type metadata for unavailable types may be referenced by enum cases
with unavailable payloads, causing linker failures.
Resolves rdar://107483852
When `-unavailable-decl-optimization=stub` is specified, insert a call to
`_diagnoseUnavailableCodeReached()` at the beginning of each unavailable
function's body so that it traps if executed at run time.
Part of rdar://107388493
Optimizations can rely on alias analysis to know that an in-argument (or parts of it) is not actually read.
We have to do the same in the verifier: if alias analysis says that an in-argument is not read, the memory location does not need to be initialized.
Fixes a false verifier error.
rdar://106806899
Add a separate 'verifyOwnership()' entry point so it's possible
to check OSSA lifetimes at various points.
Move SILGenCleanup into a SILGen pass pipeline.
After SILGen, verify incomplete OSSA.
After SILGenCleanup, verify ownership.
When opaque values are enabled, TypeConverter associates an
OpaqueValueTypeLowering with each address-only type. That lowering stores a
single lowered SIL type, and its value category is "object". So long as
the module has not yet been address-lowered, that type has the
appropriate value category. After the module has been address-lowered,
however, that type has the wrong value category: the type is
address-only, and in an address-lowered module, its lowered type's value
category must be "address".
Code that obtains a lowered type expects the value category to reflect
the state of the module. So somewhere, it's necessary to fix up that
single lowered type's value category.
One option would be to update all code that uses lowered types. That
would require many changes across the codebase, and all new code that
used lowered types would need to account for this.
Another option would be to update some popular conveniences that call
through to TypeConverter, for example those on SILFunction, and ensure
that all code used those conveniences. Even if this were done
completely, it would be easy enough for new code to be added which
didn't use the conveniences.
A third option would be to update TypeLowering::getLoweredType to take
in the context necessary to determine whether the stored SILType should
be fixed up. That would require each callsite to be changed and
potentially to carry around more context than it already had in order to
be able to pass it along.
A fourth option would be to make TypeConverter aware of the
address-loweredness, and to update its state at the end of
AddressLowering.
Updating TypeConverter's state would entail updating all cached
OpaqueValueTypeLowering instances at the end of the AddressLowering
pass. Additionally, when TypeConverter produces new
OpaqueValueTypeLowerings, they would need to have the "address" value
category from creation.
Of all the options, the last is least invasive and least error-prone, so
it is taken here.
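A rough sketch of what the fourth option could look like; the entry point name `setLoweredAddresses()` and the idea that it walks the cached lowerings are assumptions for illustration, not the actual TypeConverter interface.

```cpp
#include "swift/SIL/SILModule.h"
#include "swift/SIL/TypeLowering.h"

using namespace swift;

// Sketch: at the end of AddressLowering, flip the TypeConverter into
// "address-lowered" mode so that cached OpaqueValueTypeLowerings (and any
// lowerings created afterwards) report the "address" value category for
// address-only types. The entry point name is hypothetical.
static void finishAddressLowering(SILModule &module) {
  // ... all opaque values have been rewritten to addresses by this point ...
  module.Types.setLoweredAddresses();
}
```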
Specifically, we get an additional table-like thing called `sil_moveonlydeinit`. It looks as follows:
```
sil_moveonlydeinit TYPE {
  @FUNC_NAME
}
```
It always has a single entry.
* move the source file to SILOptimizer/IRGenTransforms
* add a file level comment
* document and verify that the pass runs after serialization
* catch overflows when truncating a constant value
As we do with field indices for struct instructions.
This avoids quadratic behavior for enums with many cases.
Also: cache field and enum case indices in the SILModule.
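A sketch of the caching scheme; the cache and helper names below are assumed for illustration and do not necessarily match the actual members added to SILModule.

```cpp
#include "swift/AST/Decl.h"
#include "llvm/ADT/DenseMap.h"

using namespace swift;

// Assumed cache living in (or alongside) SILModule: each enum case is numbered
// once per enum, so later lookups are O(1) instead of re-scanning the case
// list, which is what made large enums quadratic.
static llvm::DenseMap<EnumElementDecl *, unsigned> enumCaseIndices;

static unsigned getCaseIndex(EnumElementDecl *element) {
  auto cached = enumCaseIndices.find(element);
  if (cached != enumCaseIndices.end())
    return cached->second;

  // First query for this enum: number all of its cases in a single pass.
  unsigned index = 0;
  for (EnumElementDecl *caseDecl : element->getParentEnum()->getAllElements())
    enumCaseIndices[caseDecl] = index++;
  return enumCaseIndices[element];
}
```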
The main point of this change is to make sure that a shared function always has a body, both in the optimizer pipeline and in the swiftmodule file.
This is important because the compiler always needs to emit code for a shared function. Shared functions cannot be referenced from outside the module.
In several corner cases we failed to maintain this invariant, which resulted in unresolved-symbol linker errors.
As a side effect of this change, we can drop the shared_external SIL linkage and the IsSerializable flag, which simplifies the serialization and linkage concept.
PublicCMOSymbols stores symbols which are made public by cross-module-optimizations.
Those symbols are primarily stored in SILModule and eventually used by TBD generation and validation.
The reason why I am doing this is that we are going to be enabling lexical
lifetimes early in the pipeline so that I can use them for the move operator's
diagnostics.
To make it easy for passes to know whether or not they should support lexical
lifetimes, I included a query on SILOptions called
supportsLexicalLifetimes. This will return true if the pass (given the passed-in
option) should insert the lexical lifetime flag. This ensures that passes that
run in both pipelines (e.g., AllocBoxToStack) know whether or not to set the
lexical lifetime flag without having to locally reason about it.
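Roughly, a pass would consult it like this (a sketch; whether the query takes the module as an argument is an assumption here):

```cpp
#include "swift/SIL/SILFunction.h"
#include "swift/SIL/SILModule.h"

using namespace swift;

// Sketch: a pass that runs in both pipelines asks SILOptions whether the
// allocations it creates should be marked lexical, instead of reasoning
// about the pipeline locally.
static bool shouldUseLexicalLifetimes(SILFunction *fn) {
  SILModule &module = fn->getModule();
  return module.getOptions().supportsLexicalLifetimes(module);
}
```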
This is just chopping off layers of a larger patch I am upstreaming.
NOTE: This is technically NFC since the default behavior of not inserting
lexical lifetimes at all is left unchanged.
StackList is a very efficient data structure for worklist-like things.
This is a port of the C++ utility with the same name.
Compared to Array, it does not require any memory allocations.
It's not needed anymore with delayed instruction deletion.
It was used for two purposes:
1. For analyses, which cache instructions, to avoid dangling instruction pointers
2. For passes, which maintain worklists of instructions, to remove deleted instructions from the worklist. This is now done by checking SILInstruction::isDeleted().
When an instruction is "deleted" from the SIL, it is put into the SILModule::scheduledForDeletion list.
The instructions in this list are eventually deleted for real in SILModule::flushDeletedInsts(), which is called by the pass manager after each pass run.
In other words: instruction deletion is deferred to the end of a pass.
This avoids dangling instruction pointers within the run of a pass and in analysis caches.
Note that the analysis invalidation mechanism ensures that analysis caches are invalidated before flushDeletedInsts().
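For a worklist-based pass, this boils down to a check like the following sketch:

```cpp
#include "swift/SIL/SILInstruction.h"
#include "llvm/ADT/SmallVector.h"

using namespace swift;

// Sketch: with deferred deletion, a worklist needs no removal callbacks; it
// simply skips entries that were scheduled for deletion earlier in the pass.
// The pointers stay valid until SILModule::flushDeletedInsts() runs after the
// pass finishes.
static void drainWorklist(llvm::SmallVectorImpl<SILInstruction *> &worklist) {
  while (!worklist.empty()) {
    SILInstruction *inst = worklist.pop_back_val();
    if (inst->isDeleted())
      continue;
    // ... process inst, possibly pushing more instructions ...
  }
}
```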
In theory we could map opened archetypes per module because opened archetypes _should_ be unique across the module.
But currently in some rare cases SILGen re-uses the same opened archetype in multiple functions.
The fix is to add the SILFunction to the map's key.
That also requires that we update the map whenever instructions are moved from one function to another.
This fixes a compiler crash.
rdar://76916931
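Conceptually, the key change looks like this; the map's actual name, key, and value types in SILModule may differ, so treat these as illustrative:

```cpp
#include "swift/AST/Types.h"
#include "swift/SIL/SILFunction.h"
#include "swift/SIL/SILInstruction.h"
#include "llvm/ADT/DenseMap.h"
#include <utility>

using namespace swift;

// Before: one defining instruction per opened archetype, module-wide.
//   llvm::DenseMap<ArchetypeType *, SingleValueInstruction *> openedArchetypeDefs;
//
// After: the function is part of the key, so the same opened archetype showing
// up in two functions maps to two distinct defining instructions.
using OpenedArchetypeKey = std::pair<ArchetypeType *, SILFunction *>;
llvm::DenseMap<OpenedArchetypeKey, SingleValueInstruction *> openedArchetypeDefs;
```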
Instead, put the archetype->instruction map into SILModule.
SILOpenedArchetypesTracker tried to maintain and reconstruct the mapping locally, e.g. during a use of SILBuilder.
Having a "global" map in SILModule makes the whole logic _much_ simpler.
I'm wondering why we didn't do this in the first place.
This requires that opened archetypes be unique in a module, which makes sense. This was the case anyway, except for keypath accessors (which I fixed in the previous commit) and some SIL test files.
... with a fix for a non-assert-build crash: I used the wrong ilist type for SlabList. This does not explain the crash, though. What I think happened here is that LLVM miscompiled the code and put the llvm_unreachable from the Slab's deleteNode function unconditionally into the SILModule destructor.
Now, by using simple_ilist, there is no need for a deleteNode at all.
A StackList is the best choice for things like worklists, etc., if no random access is needed.
Regardless of how large a StackList gets, no memory allocation is needed (except maybe for the first few uses in a compiler run).
All operations have (almost) zero cost.
The needed memory is managed by the SILModule. Initially, the memory slabs are allocated with the module's bump pointer allocator. In contrast to bump pointer allocated memory, those slabs can be freed again (at zero cost) and then recycled.
StackList is meant to be a replacement for llvm::SmallVector, which needs to malloc after the small size is exceeded.
This is more of a usability improvement than a compile-time improvement.
Usually we think hard about how to correctly use an llvm::SmallVector to avoid memory allocations: we choose the small size wisely, and in many cases we keep a shared instance of a SmallVector to reuse its allocated capacity.
None of this is necessary with a StackList: there is no need to select a small size or to share an instance across usages.
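A typical use looks roughly like this (a sketch assuming StackList's push/pop/empty interface and the BasicBlockSet utility; header paths are approximate):

```cpp
#include "swift/SIL/BasicBlockDatastructures.h"
#include "swift/SIL/SILBasicBlock.h"
#include "swift/SIL/SILFunction.h"
#include "swift/SIL/StackList.h"

using namespace swift;

// Sketch: a block worklist with no small-size tuning and no heap allocation.
// The backing slabs are provided and recycled by the SILModule.
static void visitReachableBlocks(SILFunction *fn) {
  StackList<SILBasicBlock *> worklist(fn);
  BasicBlockSet visited(fn);

  worklist.push(fn->getEntryBlock());
  visited.insert(fn->getEntryBlock());

  while (!worklist.empty()) {
    SILBasicBlock *block = worklist.pop();
    for (SILBasicBlock *succ : block->getSuccessorBlocks()) {
      if (visited.insert(succ)) // returns true if succ was newly inserted
        worklist.push(succ);
    }
  }
}
```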
...and avoid reallocation.
This is immediately necessary for LICM, in addition to its current
uses. I suspect this could be used by many passes that work with
addresses. RLE/DSE should absolutely migrate to it.