Enable the constant evaluator to precisely evaluate Builtin.assert_configuration.
Unify the UnknownReason::Trap and UnknownReason::AssertionFailure error
values in the constant evaluator, now that we have the 'condfail_message'
SIL instruction, which provides an error message for traps.
Add the @_semantics("constant_evaluable") annotation to denote constant-evaluable
functions.
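A minimal sketch of the annotation in use (the function below is illustrative, not from the patch):
```swift
// Marking a function as constant evaluable; the checker pass verifies
// that the evaluator can fold calls to it at compile time.
@_semantics("constant_evaluable")
@inlinable
public func byteCount(ofBits bits: Int) -> Int {
  return (bits + 7) / 8   // simple integer arithmetic the evaluator can fold
}
```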
Add a test suite that uses the sil-opt pass ConstantEvaluableSubsetChecker.cpp
to check the constant evaluability of functions in the OSLog
overlay.
Add test infrastructure for checking the constant evaluability
of Swift code snippets. Add a new test, constant_evaluable_subset_test.swift,
that covers Swift code snippets that are expected to be constant evaluable and
ones that are not.
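Illustratively, such a test pairs snippets like these (hedged, not the actual test contents):
```swift
// Expected to be constant evaluable: pure integer arithmetic.
@_semantics("constant_evaluable")
func testIntAdd(_ x: Int, _ y: Int) -> Int {
  return x + y
}

// Expected to fail: array construction allocates on the heap, which is
// outside the evaluable subset, so the checker should emit a diagnostic.
@_semantics("constant_evaluable")
func testArrayCount() -> Int {
  return [1, 2, 3].count
}
```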
Add constant-evaluator support for:
1. builtin "int_expect", which makes the evaluator work on more
integer operations such as left/right shift (with traps) and
integer conversions.
2. builtin "_assertConfiguration", which enables the evaluator
to work with the debug stdlib.
3. builtin "ptrtoint", which enables the evaluator to track
StaticString
4. _assertionFailure API, which enables the evaluator to report
stdlib assertion failures encountered during constant evaluation.
Also, enable attaching auxiliary data to the enum "UnknownReason"
and use it to improve diagnostics for UnknownSymbolicValues,
which represent failures during constant evaluation.
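A hedged Swift-level sketch of what the evaluator can now fold (the function is illustrative):
```swift
// The shift and bitwise ops lower to integer builtins the evaluator
// now supports, and Int8.init(_:) is a checked conversion that traps
// on overflow; such stdlib traps surface as constant-evaluation
// diagnostics instead of opaque unknown values.
@_semantics("constant_evaluable")
func packNibbles(_ hi: Int, _ lo: Int) -> Int8 {
  let combined = (hi << 4) | (lo & 0xF)
  return Int8(combined)
}
```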
This makes for a cleaner and less implicit-context-heavy API, and makes it easier for symbolic
reference resolvers to do context-dependent things (like map the in-memory base address back to a
remote address in MetadataReader).
Coroutines are expensive enough (e.g. Array.subscript.read) that it makes sense to enable generic inlining of coroutines.
This speeds up array iteration (e.g. `for elem in array { }`) in a generic context significantly.
Another example is ManagedBuffer.header.read, which gets much faster.
In both cases, the speedup comes mainly from the fact that no malloc happens anymore.
https://bugs.swift.org/browse/SR-11231
rdar://problem/53777612
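An illustrative case that benefits from this change (not code from the patch): iterating an Array through a generic function goes through the subscript's read coroutine, so inlining it removes the per-access allocation.
```swift
// In a generic context, iteration uses IndexingIterator, which calls
// the collection's subscript read coroutine for each element; inlining
// the coroutine eliminates the malloc on every access.
func sum<C: Collection>(_ elements: C) -> Int where C.Element == Int {
  var total = 0
  for elem in elements {
    total += elem
  }
  return total
}
```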
This mostly requires changing various entry points to pass around a
TypeConverter instead of a SILModule. I've left behind entry points
that take a SILModule for a few methods like SILType::subst() to
avoid creating even more churn.
This memoizes the result, which is fine for all callers; the only
exception is open existential types where each new open existential
now explicitly gets a unique generic environment, allocated by
calling GenericEnvironment::getIncomplete().
Returns `true` if `T.Type` is known to refer to a concrete type. The
implementation allows the optimizer to specialize this at -O and
eliminate conditional code.
Includes `Swift._isConcrete<T>(T.Type) -> Bool` wrapper function.
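A hedged usage sketch (the branch bodies are illustrative):
```swift
// At -O, generic specialization folds _isConcrete(T.self) to a Bool
// constant, so the untaken branch is eliminated as dead code.
func describe<T>(_ value: T) -> String {
  if _isConcrete(T.self) {
    return "specialized path for \(T.self)"
  } else {
    return "unspecialized generic path"
  }
}
```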
Rather than storing the set of input requirements in a
(SIL)SpecializeAttr, store the specialized generic signature. This
prevents clients from having to rebuild the same specialized generic
signature on every use.
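For context, a hedged example of the attribute involved (the function is made up):
```swift
// The 'where T == Int' clause was previously stored as a set of input
// requirements; now the specialized generic signature is built once
// and stored on the attribute.
@_specialize(where T == Int)
func sumAll<T: AdditiveArithmetic>(_ values: [T]) -> T {
  var result = T.zero
  for v in values { result += v }
  return result
}
```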
This provides a single instruction that converts an unmanaged value to a ref and
then strong_retains it. I expanded the definition of UNCHECKED_REF_STORAGE to
include these copy-like instructions. This instruction is valid in all SIL.
The reason I am adding this instruction is that currently, when we emit an
access to an unowned(unsafe) ivar, we use an unmanaged_to_ref followed by a strong
retain. To the optimizer this can look like a strong retain that can potentially
be optimized away. By combining the two into a new instruction, we avoid this
potential problem since the pattern matching no longer applies.
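A hedged Swift-level illustration of the pattern in question (the type is made up):
```swift
// Reading an unowned(unsafe) stored property yields an unmanaged value
// that must be converted to a ref and retained; fusing those two steps
// into one SIL instruction keeps the optimizer from pattern-matching
// the retain in isolation and removing it.
final class Node {
  unowned(unsafe) var parent: Node
  init(parent: Node) { self.parent = parent }
}

func grandparent(of node: Node) -> Node {
  return node.parent.parent
}
```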
Now that we've moved to C++14, we no longer need the llvm::make_unique
implementation from STLExtras.h. This patch is a mechanical replacement
of (hopefully) all the llvm::make_unique instances in the swift repo.
This was not a real leak, because it only happened in an error condition where the block ends in a trap instruction anyway.
But it must be fixed to make the MemoryLifetimeVerifier happy.
This simplifies the IR and, in a certain sense, eliminates "constant information"
from the IR by allowing us to remove extract/nominal-literal round trips.
NOTE: I had to modify the pound_assert test slightly with respect to the location
of a note that it emits. The new test output is technically correct: the
instruction we are interpreting when we error is (after this commit) a
debug_value in the prelude of the function (and thus has the new
location). I am going to talk with Ravi and others about what to do with this.
When this was originally implemented, this transform was put into
SILCombine. This patch also puts it into constant folding so that we can perform
the transform at -Onone as well.
My hunch is that the reason this isn't happening with the -Onone
serialization change is that we were relying on this code being specialized
before serialization in the stdlib. But transparent functions are now not
optimized until after serialization, so that specialization no longer happens.
Rather than mess with that, I just added the support here.
I put in a little bit of infrastructure that should provide the appropriate
places for adding information about other cases where we can run into this with
other casts.
The reason I added this is that the codegen around condfail_message has changed
slightly and thus has this round trip in it, messing with various pattern
matching routines.
Example:
```
%x = string_literal
%x1 = ptr_to_int %x // <=== This messes with pattern matching from
%x2 = int_to_ptr %x1 // <=== the cond_fail to %x.
cond_fail(%x2)
```
=>
```
%x = string_literal // <=== Happiness = ).
cond_fail(%x)
```
This originally used a SmallSetVector before a refactoring that I did. I changed
it to a vector + sortUnique since AFAICT it only needed uniqueness, not
insertion order... turns out I was wrong. I added a comment here explaining that
this has to be a SetVector to preserve both uniqueness and insertion order.
rdar://53401018
This improves on the previous situation:
- The request ensures that the backing storage for lazy properties
and property wrappers gets synthesized first; previously it was
only somewhat guaranteed by callers.
- Instead of returning a range this just returns an ArrayRef,
which simplifies clients.
- Indexing into the ArrayRef is O(1), which addresses some FIXMEs
in the SIL optimizer.
"globalStringTablePointer": String -> Builtin.RawPointer` to a
string_literal instruction if the string that is passed is constructed
from a literal. Otherwise, emit diagnostics.
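A sketch of the contract, assuming a -parse-stdlib context where the Builtin module is visible (illustrative, not the pass's test code):
```swift
import Swift

// Constructed from a literal: the call folds to a string_literal.
let table = Builtin.globalStringTablePointer("format string")

// Not constructed from a literal: the optimization cannot fold this,
// so the compiler emits a diagnostic instead.
func leak(_ s: String) -> Builtin.RawPointer {
  return Builtin.globalStringTablePointer(s)
}
```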
If somebody called `demangleType` or `demangleSymbol` using a demangler that was already
in the middle of demangling a string, then the state for the new demangler would clobber the old
demangler. This manifested in rdar://problem/50380275 because, as part of demangling a
string with a symbolic reference to a private type's context, we would demangle the debug string
for the referenced context using the same demangler. If there were additional operators in the
original string after the symbolic reference, these never got demangled because the demangler
was now in the state of having completed demangling the other string, so we would drop nesting,
and incorrectly report field types of, for instance, `Array<PrivateStruct>` or
`(PrivateStruct, OtherStruct)` as just being `PrivateStruct`.
This is a likely situation to be in, especially now that `Demangler` objects also serve as
arena allocators for their demangled nodes, so change the top-level demangler entry points
to use an RAII object to push and pop the existing state instead of unilaterally clobbering
the existing state.
Specifically:
1. I removed an extra defensive copy that we put in place some time ago and that
isn't really warranted. We know that we have an @owned value, so we can safely
pass the value as a @guaranteed parameter. This also eliminates an ownership
error that would occur due to my not having updated this code for ownership in
tree.
2. I also ensured that when we convert a loadable address bridging cast into a
value bridging cast, we store the loadable value back into memory after we
perform the cast. Otherwise, it appears as a leak to the ownership verifier.
I also centralized the non-ownership tests for this into one place
(const_fold_objc_bridge.sil => constant_propagation_objc.sil).
[Const Evaluator] Make the compile-time constant evaluator correctly handle s_to_s_checked_trunc_IntLiteral_IntNN where the bit width of the source symbolic value (an APInt) could be smaller than the destination's bit width
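An illustrative trigger (widths hypothetical): integer literals reach the evaluator as arbitrary-precision APInts, so the source can be narrower than the destination.
```swift
// The literal 100 may be carried in fewer bits than Int32's 32, so
// s_to_s_checked_trunc_IntLiteral_Int32 must sign-extend rather than
// assume the source width exceeds the destination width.
@_semantics("constant_evaluable")
func smallLiteral() -> Int32 {
  return 100   // literal conversion goes through the checked trunc builtin
}
```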