Check each module's dependency info to determine whether it is still up-to-date with respect to the time the cache contents were serialized by a prior scan.
- Add a timestamp field to the serialization format for the dependency scanner cache
- Add a flag "-validate-prior-dependency-scan-cache" which, when combined with "-load-dependency-scan-cache" will have the scanner prune dependencies from the deserialized cache which have inputs that are newer than the prior scan itself
With the above in place, the scan otherwise proceeds as before, getting cache hits for entries that are still valid since the prior scan.
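Roughly, the pruning step amounts to something like the following sketch, where CachedModuleInfo, ScanCache, and pruneStaleEntries are hypothetical stand-ins rather than the real scanner cache types:

```cpp
// Illustrative sketch only: CachedModuleInfo, ScanCache and pruneStaleEntries
// are hypothetical stand-ins, not the real dependency scanner cache types.
#include <filesystem>
#include <iterator>
#include <map>
#include <string>
#include <system_error>
#include <vector>

struct CachedModuleInfo {
  std::vector<std::string> inputFiles; // files this entry's deps were computed from
};

using ScanCache = std::map<std::string, CachedModuleInfo>;

// Drop entries whose inputs are newer than the timestamp recorded when the
// prior scan serialized the cache; whatever survives can be served as a hit.
void pruneStaleEntries(ScanCache &cache,
                       std::filesystem::file_time_type priorScanTime) {
  for (auto it = cache.begin(); it != cache.end();) {
    bool stale = false;
    for (const auto &input : it->second.inputFiles) {
      std::error_code ec;
      auto mtime = std::filesystem::last_write_time(input, ec);
      if (ec || mtime > priorScanTime) { // missing or newer input => stale
        stale = true;
        break;
      }
    }
    it = stale ? cache.erase(it) : std::next(it);
  }
}
```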
Also introduce two new frontend flags:
- The -solver-scope-threshold flag sets the maximum number of scopes, which was previously hardcoded to 1 million.
- The -solver-trail-threshold flag sets the maximum number of trail steps, which defaults to 64 million.
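As a rough illustration of how such thresholds are typically consulted (SolverLimits, SolverCounters, and exceededLimits are made-up names, not the actual solver types):

```cpp
// Illustrative only: made-up names modeling how the two thresholds are checked.
#include <cstdint>

struct SolverLimits {
  std::uint64_t scopeThreshold = 1'000'000;  // overridden by -solver-scope-threshold
  std::uint64_t trailThreshold = 64'000'000; // overridden by -solver-trail-threshold
};

struct SolverCounters {
  std::uint64_t numScopes = 0;
  std::uint64_t numTrailSteps = 0;
};

// Returns true when the solver should give up on the current expression.
bool exceededLimits(const SolverCounters &c, const SolverLimits &l) {
  return c.numScopes > l.scopeThreshold || c.numTrailSteps > l.trailThreshold;
}
```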
Consider a signature with a conformance requirement, and two
identical superclass requirements, both of which make the
conformance requirement redundant.
The conformance will have two requirement sources, the explicit
source and a derived source based on one of the two superclass
requirements, whichever one was processed first.
If we end up marking the *other* superclass requirement as
redundant, we would incorrectly conclude that the conformance
was not redundant. So when a derived source is based on a
redundant requirement, we can't simply discard it right away;
we have to check whether that source could have been derived
some other way.
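A toy model of that check, with hypothetical types standing in for the real requirement-source machinery:

```cpp
// Toy model: Requirement and DerivedSource are hypothetical, not the real
// requirement-source representation.
#include <vector>

struct Requirement {
  bool markedRedundant = false;
};

// A derived source records the requirement it was derived from, plus any
// requirements that could equally well have produced the same derivation
// (e.g. the two identical superclass requirements above).
struct DerivedSource {
  const Requirement *basedOn = nullptr;
  std::vector<const Requirement *> alternatives;
};

// A source based on a redundant requirement still counts if the same
// derivation is available through some non-redundant alternative.
bool sourceStillViable(const DerivedSource &source) {
  if (!source.basedOn->markedRedundant)
    return true;
  for (const Requirement *alt : source.alternatives)
    if (alt != source.basedOn && !alt->markedRedundant)
      return true; // derivable some other way
  return false;
}
```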
This doesn't really fit the request evaluator model since the
result of evaluating the request is the new insertion point,
but we don't have a way to get the insertion point of an
already-expanded scope.
Instead, let's change the callers of the now-removed
expandAndBeCurrentDetectingRecursion() to instead call
expandAndBeCurrent(), while checking first if the scope was
already expanded.
Also, set the expanded flag before calling expandSpecifically()
from inside expandAndBeCurrent(), to ensure that re-entrant
calls to expandAndBeCurrent() are flagged by the existing
assertion there.
Finally, replace a couple of existing counters, as well as the
now-gone request counter, with a single ASTScopeExpansions
counter to track expansions.
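Schematically, the new control flow looks roughly like this; the Scope class below is a simplified stand-in, not the actual ASTScope implementation:

```cpp
// Schematic model only; the real ASTScope types and methods differ.
#include <cassert>

class Scope {
  bool expanded = false;

public:
  bool wasExpanded() const { return expanded; }

  void expandAndBeCurrent() {
    // The flag is set *before* expanding, so a re-entrant call made from
    // inside expandSpecifically() trips this assertion instead of silently
    // re-expanding the scope.
    assert(!expanded && "scope is already expanded");
    expanded = true;
    expandSpecifically();
  }

private:
  void expandSpecifically() {
    // ... build child scopes; may end up touching other scopes ...
  }
};

// Callers of the removed recursion-detecting entry point now check the
// expansion state themselves before expanding.
void ensureExpanded(Scope &scope) {
  if (!scope.wasExpanded())
    scope.expandAndBeCurrent();
}
```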
Also perform the plumbing necessary to convince the rest of the compiler that they're just ordinary external dependencies. In particular, we will still emit these dependencies into .d files, and code completion will still index them.
Query the SourceLookupCache for the operator decls,
and use ModuleDecl::getOperatorDecls both for
frontend stats and for cleaning up some code
completion logic.
In addition, this commit switches getPrecedenceGroups
over to querying SourceLookupCache.
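As a rough sketch of routing both callers through a single cached accessor (toy types only, not the real SourceLookupCache or ModuleDecl interfaces):

```cpp
// Toy types only; not the actual SourceLookupCache or ModuleDecl interfaces.
#include <string>
#include <utility>
#include <vector>

struct OperatorDecl {
  std::string name;
};

struct SourceFile {
  std::vector<OperatorDecl> topLevelOperators;
};

class Module {
  std::vector<SourceFile> files;
  mutable std::vector<const OperatorDecl *> cachedOperators;
  mutable bool cacheBuilt = false;

public:
  explicit Module(std::vector<SourceFile> fs) : files(std::move(fs)) {}

  // One cached accessor that both the stats machinery and code completion
  // can call, instead of each of them walking the source files directly.
  const std::vector<const OperatorDecl *> &getOperatorDecls() const {
    if (!cacheBuilt) {
      for (const auto &file : files)
        for (const auto &op : file.topLevelOperators)
          cachedOperators.push_back(&op);
      cacheBuilt = true;
    }
    return cachedOperators;
  }
};
```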
It turns out that if you pull in any nontrivial module, there are thousands of submodules, none of which could possibly have a cross-import overlay. Avoid evaluating them.
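A minimal sketch of the filtering, assuming a hypothetical isSubmodule bit and a stubbed-out overlay check:

```cpp
// Toy model; ImportedModule and declaresCrossImportOverlays are hypothetical.
#include <string>
#include <vector>

struct ImportedModule {
  std::string name;
  bool isSubmodule = false; // e.g. a Clang submodule pulled in transitively
};

// Stand-in for the comparatively expensive overlay lookup.
bool declaresCrossImportOverlays(const ImportedModule &) { return false; }

// Submodules can never declare cross-import overlays, so they are skipped
// before any overlay lookup happens.
std::vector<const ImportedModule *>
overlayCandidates(const std::vector<ImportedModule> &imports) {
  std::vector<const ImportedModule *> candidates;
  for (const auto &m : imports) {
    if (m.isSubmodule)
      continue;
    if (declaresCrossImportOverlays(m))
      candidates.push_back(&m);
  }
  return candidates;
}
```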
Add ExecuteSILPipelineRequest which executes a
pipeline plan on a given SIL (and possibly IRGen)
module. This serves as a top-level request for
the SILOptimizer that we'll be able to hang
dependencies off.
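Conceptually, the request bundles a pipeline plan with the module(s) it acts on, roughly as in this simplified model; the names here are illustrative, not the real request-evaluator API:

```cpp
// Toy model of the request's shape; names are illustrative only.
#include <functional>
#include <vector>

struct SILModule { /* ... functions, globals ... */ };
struct IRGenModule { /* ... */ };

using SILPass = std::function<void(SILModule &)>;

struct SILPipelinePlan {
  std::vector<SILPass> passes;
};

// One top-level unit of work that bundles a pipeline plan with the module(s)
// it acts on; downstream dependencies can later be attached to this entry point.
struct ExecuteSILPipelineSketch {
  SILPipelinePlan plan;
  SILModule *module = nullptr;
  IRGenModule *irgenModule = nullptr; // optional

  void evaluate() const {
    for (const auto &pass : plan.passes)
      pass(*module);
  }
};
```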
This eliminates the entire 'lazy generic environment' concept;
essentially, all generic environments are now lazy, and since
each signature has exactly one environment, their construction
no longer needs to be co-ordinated with deserialization.
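The resulting shape is essentially a memoized accessor, sketched here with toy types rather than the real GenericSignature/GenericEnvironment:

```cpp
// Toy types; not the real GenericSignature/GenericEnvironment.
#include <memory>

struct GenericEnvironment {
  // ... archetype mappings ...
};

class GenericSignature {
  mutable std::unique_ptr<GenericEnvironment> env; // built on first use

public:
  // Every signature owns exactly one environment, created lazily on demand;
  // deserialization never needs to pre-build or register it.
  GenericEnvironment *getGenericEnvironment() const {
    if (!env)
      env = std::make_unique<GenericEnvironment>();
    return env.get();
  }
};
```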
Ensure that lazy parsing of the members of nominal type definitions
and extensions is handled through a request. Most of the effort here
is in establishing a new request zone for parser requests.
Begin refactoring the request evaluator by swapping SWIFT_TYPEID for
SWIFT_REQUEST. Introduce the Zone of the request as a formal parameter
to the macro, then re-expand the request macro to get the type info
back.
SWIFT_REQUEST will eventually grow to encompass more information about
requests as we seek to reduce the boilerplate involved in their
definitions.
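The re-expansion is the familiar X-macro pattern; here is a self-contained illustration with made-up request names and a macro standing in for the .def file:

```cpp
// Self-contained illustration of the X-macro pattern; the request names are
// made up and EXAMPLE_REQUESTS_DEF stands in for a .def file.
#include <iostream>

#define EXAMPLE_REQUESTS_DEF                                                   \
  SWIFT_REQUEST(TypeChecker, ExampleInterfaceTypeRequest)                      \
  SWIFT_REQUEST(Parse, ExampleParseMembersRequest)

// First expansion: declare one (empty) type per request.
#define SWIFT_REQUEST(Zone, Name) struct Name {};
EXAMPLE_REQUESTS_DEF
#undef SWIFT_REQUEST

// Second expansion of the same list: recover the zone/name pairs as data.
struct RequestInfo { const char *zone, *name; };
#define SWIFT_REQUEST(Zone, Name) {#Zone, #Name},
constexpr RequestInfo requestInfos[] = { EXAMPLE_REQUESTS_DEF };
#undef SWIFT_REQUEST

int main() {
  for (const auto &info : requestInfos)
    std::cout << info.zone << "::" << info.name << "\n";
}
```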
When we encounter a cycle in the one-way component graph due to constraints
that (e.g.) tie together the outputs of two one-way constraints, collapse
the components along the cycle into a single connected component, because
all of those type variables must be solved together.
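One way to picture the collapse is a union-find over components, as in this simplified sketch (not the solver's actual representation):

```cpp
// Simplified sketch; not the constraint solver's actual representation.
#include <cstddef>
#include <numeric>
#include <vector>

// Union-find over component representatives: merging every component along a
// detected cycle leaves all of their type variables in one connected component.
class Components {
  std::vector<int> parent;

public:
  explicit Components(int n) : parent(n) {
    std::iota(parent.begin(), parent.end(), 0);
  }

  int find(int x) {
    while (parent[x] != x)
      x = parent[x] = parent[parent[x]]; // path halving
    return x;
  }

  void merge(int a, int b) { parent[find(a)] = find(b); }

  void collapseCycle(const std::vector<int> &cycle) {
    for (std::size_t i = 1; i < cycle.size(); ++i)
      merge(cycle[0], cycle[i]);
  }
};
```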
IDE functionality needs some internal type checking logic, e.g. checking
whether an extension is applicable to a concrete type. We used to directly
expose a header from Sema called IDETypeChecking.h so that IDE functionality
could invoke these APIs. The goal of this commit and the following commits is to
expose evaluator requests instead of directly exposing function entry points from
Sema, so that we can later move IDETypeChecking.h to libIDE and implement these
functions by evaluating these requests internally.
This improves on the previous situation:
- The request ensures that the backing storage for lazy properties
and property wrappers gets synthesized first; previously it was
only somewhat guaranteed by callers.
- Instead of returning a range, this just returns an ArrayRef,
  which simplifies clients.
- Indexing into the ArrayRef is O(1), which addresses some FIXMEs
in the SIL optimizer.
Adds a NumStoredPropertiesQueries stat.
Adds a test case for an increasing number of lazy stored class
properties. Each property requires a formal access within the
initializer. This would manifest as cubic behavior in
AccessEnforcementOpts, which scales roughly as O(n^1.5) in the above stat.
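A toy sketch of the caching shape described above, with hypothetical types in place of the real AST nodes:

```cpp
// Toy types in place of the real AST nodes; illustrative only.
#include <utility>
#include <vector>

struct VarDecl {
  bool isStored = false;
};

class NominalType {
  std::vector<VarDecl> members;
  mutable std::vector<const VarDecl *> storedCache; // computed once
  mutable bool computed = false;

  void synthesizeBackingStorageIfNeeded() const {
    // In the real request, backing storage for lazy properties and property
    // wrappers is synthesized before the stored-property list is built.
  }

public:
  explicit NominalType(std::vector<VarDecl> ms) : members(std::move(ms)) {}

  // Returns a cached, contiguous list: indexing is O(1) and clients no longer
  // walk a filtering range.
  const std::vector<const VarDecl *> &getStoredProperties() const {
    if (!computed) {
      synthesizeBackingStorageIfNeeded();
      for (const auto &member : members)
        if (member.isStored)
          storedCache.push_back(&member);
      computed = true;
    }
    return storedCache;
  }
};
```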
Sema no longer maintains a per-SourceFile list of conformances that it
thinks are going to be "used" by SILGen, IRGen, and the runtime. Instead,
previous commits already ensure that SILGen determines the set of
conformances to be emitted, triggering conformance checking as needed.