* Revise error for incorrect subscript parameters.
We use subscripts for more than just indexes in Swift these days, so
the error message needs to be a bit more general.
* Use the term 'argument' instead of 'value'.
This adds a mostly flow-insensitive analysis that runs before the
dominator-based transformations. The analysis is simple and efficient
because it only needs to track data flow of currently in-scope
accesses. The original dominator tree walk remains simple, but it now
checks the flow-insensitive analysis information to determine general
correctness. This is now correct in the presence of all kinds of nested
static and dynamic accesses, call sites, coroutines, etc.
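As a rough illustration of the bookkeeping (all type and function names below are hypothetical, not the actual SILOptimizer code), tracking only the currently in-scope accesses means the analysis can be a simple stack-like walk:

    // Hypothetical sketch of tracking currently in-scope accesses. The real
    // pass works on SIL begin_access/end_access instructions and much richer
    // storage locations.
    #include <cassert>
    #include <cstdio>
    #include <vector>

    enum class AccessKind { Read, Modify };

    struct OpenAccess {
      unsigned storageID; // stand-in for an AccessedStorage location
      AccessKind kind;
    };

    class InScopeAccessTracker {
      std::vector<OpenAccess> openAccesses; // scopes not yet closed
    public:
      // Returns true if the new access is nested inside an already-open
      // access to the same storage.
      bool beginAccess(unsigned storageID, AccessKind kind) {
        bool nestedInSameStorage = false;
        for (const OpenAccess &outer : openAccesses) {
          if (outer.storageID != storageID)
            continue;
          nestedInSameStorage = true;
          // Two overlapping accesses to the same storage conflict unless
          // both are reads; a conflicting access must keep its dynamic
          // enforcement.
          if (kind == AccessKind::Modify || outer.kind == AccessKind::Modify)
            std::printf("conflict on storage %u\n", storageID);
        }
        openAccesses.push_back({storageID, kind});
        return nestedInSameStorage;
      }

      void endAccess() {
        assert(!openAccesses.empty() && "unbalanced access scope");
        openAccesses.pop_back();
      }
    };

    int main() {
      InScopeAccessTracker tracker;
      tracker.beginAccess(/*storageID=*/1, AccessKind::Read); // outer read
      bool nested = tracker.beginAccess(1, AccessKind::Read); // nested read
      std::printf("nested read covered by an open access: %d\n", nested);
      tracker.endAccess();
      tracker.endAccess();
      return 0;
    }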
This approach is a better compromise than either:
(a) disabling the pass and taking a major performance loss.
(b) converting the pass itself into a full-fledged data-flow-driven
optimization, which would be more powerful because it could remove
accesses even when nesting is involved, but would be much more expensive
and complicated, and there's no indication that it would be useful in practice.
The new approach is also simpler than adding more complexity to
independently handle each of many issues:
- Nested reads followed by a modify without a false conflict.
- Reads nested within a function call without a false conflict.
- Conflicts nested within a function call without dropping enforcement.
- Accesses within a generalized accessor.
- Conservative treatment of invalid storage locations.
- Conservative treatment of unknown apply callee.
- General analysis invalidation.
Some of these issues also needed to be considered in the
LoopDominatingAccess sub-pass. Rather than fix that sub-pass, I just
integrated it into the main pass. This is simpler, more efficient, and
also handles nested loops without creating more redundant accesses. It
is also generalized to:
- Hoist non-uniquely identified accesses.
- Avoid unnecessarily promoting accesses inside the loop.
With this approach we can remove the scary warnings and caveats in the
comments.
While doing this I also took the opportunity to eliminate quadratic
behavior, make the domtree walk non-recursive, and eliminate cutoff
thresholds.
Note that simple nested dynamic reads to identical storage could very
easily be removed via separate logic, but that does not fit the
dominator-based algorithm. For example, during the analysis phase, we
could simply mark the "fully nested" read scopes, then convert them to
[static] right after the analysis, removing them from the result
map. I didn't do this because I don't know if it happens in practice.
A 'forwarding module' is a YAML file that's meant to stand in for a .swiftmodule file and provide an up-to-date description of its dependencies, recorded as modification times rather than content hashes.
When a 'prebuilt module' is first loaded, we verify that it's up-to-date by hashing all of its dependencies. Since this is orders of magnitude slower than reading mtimes, we'll install a 'forwarding module' containing the mtimes of the now-validated dependencies.
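A simplified sketch of what the mtime check buys us (the struct layout and names here are hypothetical; the real forwarding module is YAML and the real loader also handles the hash fallback):

    // Hypothetical sketch: a forwarding module is up to date if every
    // dependency's current mtime matches the mtime recorded when the
    // prebuilt module was last validated by hashing.
    #include <cstdint>
    #include <filesystem>
    #include <string>
    #include <system_error>
    #include <vector>

    struct ForwardingDependency {
      std::string path;     // path to the dependency on disk
      uint64_t storedMTime; // mtime recorded at validation time
    };

    bool forwardingModuleIsUpToDate(
        const std::vector<ForwardingDependency> &deps) {
      namespace fs = std::filesystem;
      for (const ForwardingDependency &dep : deps) {
        std::error_code ec;
        auto writeTime = fs::last_write_time(dep.path, ec);
        if (ec)
          return false; // dependency missing; fall back to the slow path
        auto currentMTime =
            static_cast<uint64_t>(writeTime.time_since_epoch().count());
        if (currentMTime != dep.storedMTime)
          return false; // changed on disk; re-validate by hashing
      }
      return true;
    }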
Recent Swift uses 2 as the is-Swift bit when running on newer versions, and 1 on older versions. Since it's difficult or impossible to know what we'll be running on at build time, make the selection at runtime.
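A hedged sketch of what selecting the bit at runtime looks like; the detection function and names are placeholders for whatever check the runtime actually performs:

    // Hypothetical sketch: pick the class-metadata "is Swift" bit once at
    // startup instead of baking it in at build time.
    #include <cstdint>

    static bool runningOnNewerRuntime() {
      // Placeholder: the real check queries the environment we ended up
      // running on, which isn't knowable at build time.
      return true;
    }

    static uint64_t computeIsSwiftMask() {
      // Newer runtimes use bit 2 as the is-Swift bit; older ones use bit 1.
      return runningOnNewerRuntime() ? 2 : 1;
    }

    // Computed once and reused for every metadata check.
    static const uint64_t isSwiftMask = computeIsSwiftMask();

    bool isSwiftClassMetadata(uint64_t metadataFlags) {
      return (metadataFlags & isSwiftMask) != 0;
    }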
This dramatically reduces the number of needed malloc calls.
Unfortunately, I had to add the implementation of SmallVectorBase::grow_pod to the runtime, as we don't link LLVM there. This is a bad hack, but better than reinventing a SmallVector implementation.
SR-10028
rdar://problem/48575729
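For reference, grow_pod-style growth amounts to the following (an illustrative reimplementation, not the LLVM source; error handling omitted):

    // Illustrative sketch: grow a small-vector of POD elements, staying in
    // the inline buffer until capacity is exhausted and only then touching
    // malloc/realloc. This is why the number of malloc calls drops.
    #include <cstddef>
    #include <cstdlib>
    #include <cstring>

    struct SmallVectorPODBase {
      void *begin;        // current element storage
      unsigned size = 0;  // elements in use
      unsigned capacity;  // current capacity, in elements

      SmallVectorPODBase(void *inlineBuffer, unsigned inlineCapacity)
          : begin(inlineBuffer), capacity(inlineCapacity) {}

      // Grow to at least minCapacity elements of elementSize bytes each.
      void growPOD(void *inlineBuffer, unsigned minCapacity,
                   std::size_t elementSize) {
        unsigned newCapacity = capacity * 2;
        if (newCapacity < minCapacity)
          newCapacity = minCapacity;

        void *newStorage;
        if (begin == inlineBuffer) {
          // First spill out of the inline buffer: allocate and copy the PODs.
          newStorage = std::malloc(newCapacity * elementSize);
          std::memcpy(newStorage, begin, size * elementSize);
        } else {
          // Already on the heap: realloc, which may extend in place.
          newStorage = std::realloc(begin, newCapacity * elementSize);
        }
        begin = newStorage;
        capacity = newCapacity;
      }
    };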
We were calling isResilient() to check if a type's layout contained a
type that was resilient in some resilience domain. This was used to
determine if we should re-lower the type when the desired resilience
expansion does not match the given resilience expansion.
However, this bit was only getting set on address-only types. This
means that if a resilient type was lowered with maximal expansion
first, and this produced a non-address-only type lowering, we would
not lower it again when minimal expansion was requested.
This can't be triggered with the existing tests, but upcoming changes
hit regressions because of it.
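A simplified model of the intended check (illustrative names, not the real TypeLowering interface): the re-lowering decision should be driven by a resilience bit that is recorded for every lowering, not just for address-only ones.

    // Hypothetical model of the re-lowering decision. Before the fix, only
    // address-only lowerings recorded the "layout involves a resilient type"
    // bit, so a loadable lowering computed under maximal expansion was
    // wrongly reused when minimal expansion was requested.
    enum class ResilienceExpansion { Minimal, Maximal };

    struct CachedLowering {
      ResilienceExpansion expansion; // expansion it was computed under
      bool isAddressOnly;
      bool layoutInvolvesResilientTypes; // must be set for all lowerings
    };

    bool needsReLowering(const CachedLowering &lowering,
                         ResilienceExpansion desired) {
      if (lowering.expansion == desired)
        return false;
      // Re-lower whenever the layout depends on a resilient type, even if
      // the cached lowering is not address-only.
      return lowering.layoutInvolvesResilientTypes;
    }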
When calling getTypeLowering() for the first time with a new type that did not
appear in the cache under _any_ resilience expansion, we would incorrectly take
the unlowered type and treat it as a lowered type, skipping the
computeLoweredRValueType() call.
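In a simplified model (hypothetical cache shape and stand-in types), the fix is to always lower the type on a true cache miss instead of inserting the formal type as if it were already lowered:

    // Hypothetical sketch of the cache-miss path in a getTypeLowering-style
    // API. The bug was recording the unlowered type on the first lookup,
    // skipping the lowering computation.
    #include <map>
    #include <string>
    #include <utility>

    enum class ResilienceExpansion { Minimal, Maximal };
    using FormalType = std::string;  // stand-in for an AST type
    using LoweredType = std::string; // stand-in for a lowered type

    // Placeholder for computeLoweredRValueType(); the real one does the
    // actual lowering work.
    static LoweredType computeLowered(const FormalType &type,
                                      ResilienceExpansion expansion) {
      (void)expansion;
      return "@lowered(" + type + ")";
    }

    static std::map<std::pair<FormalType, ResilienceExpansion>, LoweredType>
        cache;

    const LoweredType &getTypeLoweringSketch(const FormalType &type,
                                             ResilienceExpansion expansion) {
      auto key = std::make_pair(type, expansion);
      auto it = cache.find(key);
      if (it != cache.end())
        return it->second;
      // Fix: compute the lowering on every miss; never reuse `type` as-is.
      LoweredType lowered = computeLowered(type, expansion);
      return cache.emplace(key, std::move(lowered)).first->second;
    }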
Each call site will soon have to think about passing in the right expansion
instead of just assuming the default will be OK. But there are now only a
few call sites left, because most have been refactored to use convenience
APIs that pass in the right resilience expansion already.
Add a bit to the module to record whether each dependency's stored bit pattern is a hash or an mtime.
Prebuilt modules store a hash of their dependencies because we can't be sure those dependencies will have the same mtime as when they were built.
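The idea sketched out (field names hypothetical): each serialized dependency carries a bit telling the loader how to interpret its stored value.

    // Hypothetical sketch: a serialized dependency whose stored value is
    // either a content hash (prebuilt modules) or an mtime (forwarding
    // modules), distinguished by a single bit.
    #include <cstdint>
    #include <string>

    struct SerializedDependency {
      std::string path;
      bool isHashBased;     // new bit: true => hash, false => mtime
      uint64_t storedValue; // interpreted according to the bit above
    };

    bool dependencyIsUpToDate(const SerializedDependency &dep,
                              uint64_t currentHash, uint64_t currentMTime) {
      // Prebuilt modules can't rely on mtimes being stable, so they compare
      // hashes; forwarding modules use the cheaper mtime comparison.
      return dep.isHashBased ? dep.storedValue == currentHash
                             : dep.storedValue == currentMTime;
    }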
In various places we want to get a lowered type while ignoring the
SIL type category (object or address). Add a pair of utility methods
for this purpose.
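Roughly, the convenience looks like this (placeholder types modeled on the description, not the real SILType API): lower the type as usual, then hand back only the underlying type without the object/address distinction.

    // Hypothetical sketch: a lowering utility that ignores the SIL type
    // category (object vs. address).
    #include <string>

    enum class SILCategory { Object, Address };

    struct LoweredSILTypeSketch {
      std::string loweredType; // stand-in for the lowered canonical type
      SILCategory category;
    };

    // Placeholder for the existing lowering entry point.
    LoweredSILTypeSketch getLoweredTypeSketch(const std::string &formalType) {
      return {"@lowered(" + formalType + ")", SILCategory::Object};
    }

    // The new convenience: callers that don't care about the category get
    // just the lowered type.
    std::string getLoweredRValueTypeSketch(const std::string &formalType) {
      return getLoweredTypeSketch(formalType).loweredType;
    }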
If a value decl is internal, hasTestableOrPrivateImport will succeed (or
fail) without looking at the filename. However, this breaks when we query
an internal storage decl whose setter has private formal access: the
query would fail because no filename was serialized for the decl (we only
serialize filenames for private decls). So, in the special case of an
internal storage decl with a private accessor, also serialize the
filename.
rdar://48516545
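The special case amounts to one extra clause in the decision to serialize a filename (helper names here are hypothetical):

    // Hypothetical sketch of the serialization predicate. Filenames used to
    // be serialized only for private decls; the special case adds internal
    // storage decls with a private accessor (e.g. `internal private(set)
    // var`), so a query for the private setter can still find a filename.
    struct DeclInfoSketch {
      bool isPrivate;
      bool isInternalStorage;
      bool hasPrivateAccessor;
    };

    bool shouldSerializeFilename(const DeclInfoSketch &decl) {
      if (decl.isPrivate)
        return true;
      // New: internal storage whose accessor is private also needs one.
      return decl.isInternalStorage && decl.hasPrivateAccessor;
    }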
Calling DependentMemberType::get() repeatedly pollutes the processor
caches because the global hash table can be quite large. With this
change, we either precompute DependentMemberTypes or cache them on
demand. This change makes the release/no-assert build of
Swift.swiftmodule 4.1% faster.
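The on-demand caching can be pictured as a small memo table sitting in front of the global uniquing lookup (the map and wrapper below are illustrative; the real change uses the compiler's own data structures and also precomputes some types):

    // Illustrative sketch: memoize (base, assocType) -> DependentMemberType
    // locally so repeated queries avoid the large global uniquing table.
    // All types here are stand-ins for the compiler's own.
    #include <map>
    #include <utility>

    struct Type {};
    struct AssociatedTypeDecl {};
    struct DependentMemberType {};

    // Placeholder for the expensive path, i.e. DependentMemberType::get(),
    // which consults the global hash table.
    DependentMemberType *getDependentMemberTypeSlow(Type *, AssociatedTypeDecl *) {
      static DependentMemberType unique;
      return &unique;
    }

    class DependentMemberTypeCache {
      std::map<std::pair<Type *, AssociatedTypeDecl *>, DependentMemberType *>
          memo;
    public:
      DependentMemberType *get(Type *base, AssociatedTypeDecl *assocType) {
        auto key = std::make_pair(base, assocType);
        auto it = memo.find(key);
        if (it != memo.end())
          return it->second; // hot path: stays in this small, local table
        DependentMemberType *result =
            getDependentMemberTypeSlow(base, assocType);
        memo.emplace(key, result);
        return result;
      }
    };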
For context, String, Nil, Bool, and Int already behave this way.
Note: Swift can compile against 80-bit or 64-bit floats as the builtin
literal type. Thus, it was necessary to capture this information in the
FloatLiteralExpr; this was done with an additional Type field.
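A rough picture of the AST-side change (names illustrative, not the actual FloatLiteralExpr layout): the literal node records the builtin float type it was lowered against, alongside its own type.

    // Illustrative sketch: a float literal that carries both its Swift-level
    // type and the builtin literal type (64-bit or 80-bit) chosen for it.
    #include <string>

    struct FloatLiteralExprSketch {
      std::string text;        // spelled literal, e.g. "1.25"
      std::string type;        // the literal's type, e.g. "Double"
      std::string builtinType; // builtin literal type, e.g. a 64- or 80-bit
                               // IEEE float, captured in an extra field
    };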