This adds a new copy of LLVMSupport into the runtime. This is the final
step before changing the inline namespace for the runtime support. This
will allow us to avoid the ODR violations from the header definitions of
LLVMSupport.
LLVMSupport forked at: 22492eead218ec91d349c8c50439880fbeacf2b7
Changes made to LLVMSupport from that revision:
- process.inc forward declares `_beginthreadex` to work around compilation issues caused by the custom flag handling.
- API changes required altering the `Deallocate` routine to account for the alignment (a sketch of the kind of change follows).
This is a temporary state, meant to simplify the process. We do not use
the entire LLVMSupport library and there is no value in keeping the
entire library. Subsequent commits will prune the library down to what the
runtime needs.
The runtime has an embedded copy of LLVMSupport which provides
`llvm::function_ref` as `__swift::__runtime::llvm::function_ref`. A stray
forward declaration of `llvm::function_ref` can cause the inline namespaces
to be ignored. Remove that forward declaration from the runtime.
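A simplified sketch of the failure mode (the wrapping namespaces match the names above; everything else is illustrative):

```cpp
// The runtime's embedded LLVMSupport wraps its definitions in an inline
// namespace so its symbols do not collide with a real LLVM in the process.
namespace __swift {
inline namespace __runtime {
namespace llvm {
template <typename Fn> class function_ref;
} // namespace llvm
} // namespace __runtime
} // namespace __swift

// A leftover forward declaration like this one introduces a distinct
// ::llvm::function_ref at global scope. Depending on where lookup starts,
// `llvm::function_ref` can now bind to this declaration instead of the
// inline-namespaced copy, defeating the wrapping.
namespace llvm {
template <typename Fn> class function_ref;
} // namespace llvm
```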
There are a set of headers shared between the Swift compiler and the
runtime. Ensure that we explicitly use `llvm::ArrayRef` rather than
`ArrayRef`, which is aliased to `::llvm::ArrayRef`. Doing so enables us
to replace `ArrayRef` with an inline-namespaced version, fixing ODR
violations when the Swift runtime is loaded into the same address space as
LLVM.
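A toy example of the lookup difference; the types here are stand-ins, not the real headers:

```cpp
#include <cstddef>

namespace llvm {
template <typename T> struct ArrayRef { const T *Data; size_t Length; };
} // namespace llvm

namespace __swift {
inline namespace __runtime {
namespace llvm {
template <typename T> struct ArrayRef { const T *Data; size_t Length; };
} // namespace llvm
} // namespace __runtime

// Inside the runtime's namespaces, the spelling `llvm::ArrayRef` finds the
// embedded, inline-namespaced copy...
using RuntimeBytes = llvm::ArrayRef<char>;
} // namespace __swift

// ...while the same spelling in the compiler (at global scope) finds the
// real ::llvm::ArrayRef. An unqualified `ArrayRef` pinned by a single
// `using ::llvm::ArrayRef;` alias would not give us that flexibility.
using CompilerBytes = llvm::ArrayRef<char>;
```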
- Always use the line number in the actual source file when extracting excerpts and adding highlights/fix-its
- Always use the display name when printing excerpt titles
- Don't print surrounding lines when an annotated line is in a virtual file
This reverts commit f919e047834eddf8863723b9db1fcb8d344d2006.
Beyond allowing us to emit better errors, this will allow me to (in a subsequent
commit) count the number of uses that are "outside" of the linear lifetime. I
can then compare that against a passed-in set of "non-consuming uses". If the
number of uses "outside" of the linear lifetime equals the size of the passed-in
set of "non-consuming uses", then I know that /all/ non-consuming uses that I am
testing against are not coincident with the linear lifetime, meaning that they
cannot affect (in a local, direct sense) the linear lifetime.
I am going to use that information to determine when it is safe to convert an
inout from a load [copy] to a load_borrow in the face of local mutations. I can
pass the set of local mutations as non-consuming uses to a linear lifetime
consisting of the load [copy]/destroy_values and thus prove that no writes occur
between the load [copy] and destroy_value, meaning it is safe to convert them to
borrow form.
NOTE: The aforementioned optimization is an extension of an optimization already
in tree that just bails if we have any writes to an inout locally, which is so
unfortunate.
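A hedged sketch of the counting argument; the types and names below are hypothetical, not the actual SIL utilities:

```cpp
#include <unordered_set>

// Hypothetical helper: given the uses found "outside" a linear lifetime and
// the caller's set of non-consuming uses, the two counts agreeing means every
// non-consuming use lies outside the lifetime and cannot locally affect it.
template <typename Use>
bool allNonConsumingUsesAreOutsideLifetime(
    const std::unordered_set<const Use *> &usesOutsideLifetime,
    const std::unordered_set<const Use *> &nonConsumingUses) {
  unsigned numOutside = 0;
  for (const Use *use : nonConsumingUses)
    if (usesOutsideLifetime.count(use))
      ++numOutside;
  return numOutside == nonConsumingUses.size();
}
```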
I implemented this in a similar way to the way blotting is implemented in a blot
map vector:
1. I changed this to store (Key, Optional<Value>) pairs.
2. I made it so that once frozen, we can "erase" things from the multimap by
setting all Optional<Value> to none.
3. I changed the range we vend to be an OptionalTransformRange instead of just a
TransformRange so we skip all keys with .none values, meaning that a user will
get the nice behavior that getRange() still works after erasing.
One interesting thing to note is that one /cannot/ erase elements when
initializing the frozen multi-map since we haven't sorted it yet. At first this
seems weird, but it actually fits with the use case of this data structure:
building up state by processing IR in a readonly way and then later working with
it in a worklist-like way (and perhaps checking for unhandled cases at the end
of processing).
The additional nice thing is that I was able to ensure that the actual
exposed API did not change in terms of how one uses it. I just changed the
underlying iterators/etc.
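A greatly simplified sketch of the idea (not the actual FrozenMultiMap; std::optional and std::vector stand in for the LLVM types):

```cpp
#include <algorithm>
#include <cassert>
#include <optional>
#include <utility>
#include <vector>

// Blot-style erase: pairs are stored as (Key, Optional<Value>), erasing after
// freezing just nulls out the value, and iteration skips nulled slots, so any
// range handed out earlier stays valid.
template <typename Key, typename Value>
class SimpleFrozenMultiMap {
  std::vector<std::pair<Key, std::optional<Value>>> storage;
  bool frozen = false;

public:
  void insert(Key key, Value value) {
    assert(!frozen && "can only insert while unfrozen");
    storage.emplace_back(std::move(key), std::move(value));
  }

  // Sort once so that equal keys become contiguous ranges.
  void setFrozen() {
    std::stable_sort(storage.begin(), storage.end(),
                     [](const auto &lhs, const auto &rhs) {
                       return lhs.first < rhs.first;
                     });
    frozen = true;
  }

  // "Blot out" every entry for a key; nothing is physically removed.
  void erase(const Key &key) {
    assert(frozen && "erasing before sorting would corrupt the ranges");
    for (auto &entry : storage)
      if (entry.first == key)
        entry.second = std::nullopt;
  }

  // Visit the surviving entries, skipping blotted slots (standing in for the
  // OptionalTransformRange mentioned above).
  template <typename Fn> void forEach(Fn fn) const {
    for (const auto &entry : storage)
      if (entry.second)
        fn(entry.first, *entry.second);
  }
};
```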
Move the playground and debugger transforms out
of the Frontend and into `performTypeChecking`, as
we'd want them to be applied if
`performTypeChecking` was called lazily.
Implement a new "fast" dependency scanning option,
`-scan-dependencies`, in the Swift frontend that determines all
of the source file and module dependencies for a given set of
Swift sources. It covers four forms of modules:
1) Swift (serialized) module files, by reading the module header
2) Swift interface files, by parsing the source code to find imports
3) Swift source modules, by parsing the source code to find imports
4) Clang modules, using Clang's fast dependency scanning tool
A single `-scan-dependencies` operation maps out the full
dependency graph for the given Swift source files, including all
of the Swift and Clang modules that may need to be built, such
that all of the work can be scheduled up front by the Swift
driver or any other build system that understands this
option. The dependency graph is emitted as JSON, which can be
consumed by these other tools.
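For illustration only, the emitted graph has roughly this kind of shape; the module names and field names below are illustrative, not a specification of the actual schema:

```json
{
  "mainModuleName": "App",
  "modules": [
    { "swift": "App",
      "sourceFiles": ["main.swift"],
      "directDependencies": [ { "swift": "Utils" }, { "clang": "CLib" } ] },
    { "swift": "Utils",
      "modulePath": "Utils.swiftmodule" },
    { "clang": "CLib",
      "modulePath": "CLib.pcm" }
  ]
}
```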
Some code paths that see target triples go through the frontend
without seeing the driver. Therefore, perform the same "simulator"
inference for x86 iOS/tvOS/watchOS triples also in the frontend,
to ensure that we remain compatible. Also make sure that
-print-target-info performs the appropriate adjustment.
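A rough sketch of the inference using `llvm::Triple`; this is illustrative rather than the actual frontend code:

```cpp
#include "llvm/ADT/Triple.h"

// x86 Apple embedded-OS triples with no explicit environment are treated as
// simulator triples, matching what the driver already does.
static llvm::Triple inferSimulatorEnvironment(llvm::Triple triple) {
  bool isX86 = triple.getArch() == llvm::Triple::x86 ||
               triple.getArch() == llvm::Triple::x86_64;
  bool isEmbeddedAppleOS =
      triple.isiOS() || triple.isTvOS() || triple.isWatchOS();
  if (isX86 && isEmbeddedAppleOS &&
      triple.getEnvironment() == llvm::Triple::UnknownEnvironment)
    triple.setEnvironment(llvm::Triple::Simulator);
  return triple;
}
```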
Delete all of the formalism and infrastructure around maintaining our own copy of the global context. The final frontier is the Builtins, which need to look up intrinsics in a given scratch context and convert them into the appropriate Swift annotations and types. As these utilities have wormed their way through the compiler, I have decided to leave this alone for now.
This simplifies fixing the master-next build. Upstream LLVM already
has a copy of this function, so on master-next we only need to delete
the Swift copy, reducing the potential for merge conflicts.
Teach the driver to pass the SDK version it computes (from the SDK
settings JSON in a Darwin-based platform's SDK) down into the frontend.
The frontend then sets that SDK version in the LLVM module, which
eventually makes its way into the Mach-O file.
Last part of rdar://problem/60332732.
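A minimal sketch of the last hop, assuming the frontend already holds the parsed version (illustrative, not the actual frontend code):

```cpp
#include "llvm/IR/Module.h"
#include "llvm/Support/VersionTuple.h"

// The SDK version recorded on the LLVM module is what the Mach-O emission
// path later writes into the object file.
static void recordSDKVersion(llvm::Module &module,
                             const llvm::VersionTuple &sdkVersion) {
  module.setSDKVersion(sdkVersion);
}
```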
The differentiation transform does the following:
- Canonicalizes differentiability witnesses by filling in missing derivative
function entries.
- Canonicalizes `differentiable_function` instructions by filling in missing
derivative function operands.
- If necessary, performs automatic differentiation: generating derivative
functions for original functions.
- When encountering non-differentiable code, produces a diagnostic and
errors out.
Partially resolves TF-1211: add the main canonicalization loop.
To incrementally stage changes, derivative functions are currently created
with empty bodies that fatal error with a nice message.
Derivative emitters will be upstreamed separately.
Request-based incremental dependencies are enabled by default. For the time being, add a flag that will turn them off and switch back to manual dependency tracking.