The `__future__` features we relied on have all been the language default
[since Python 3.0](https://docs.python.org/3/library/__future__.html):
* absolute_import
* print_function
* unicode_literals
* division

These import statements are therefore no-ops and are no longer necessary.
When using libc++, Swift imports `size_t` as Int despite `size_t` being an unsigned type. This is intentional and is specified in `lib/ClangImporter/MappedTypes.def`. Previously, MappedTypes were only honored for C/C++ types declared at file scope.
In libstdc++, `size_t` is declared within `namespace std` rather than at file scope, so the mapping to Int was not applied.
This change ensures that MappedTypes are also applied to types declared in `namespace std`.
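To illustrate the user-visible effect, here is a sketch with a made-up C++ declaration; the `shapes` namespace and `vertexCount` function are hypothetical and not part of this change:
```
// Hypothetical C++ header, imported via C++ interop:
//
//   #include <cstddef>
//   namespace shapes {
//     std::size_t vertexCount();   // std::size_t, not the file-scope size_t
//   }
//
// With this change, std::size_t maps to Int under both libc++ and
// libstdc++, so the Swift call site looks the same on every platform:
let count: Int = shapes.vertexCount()  // imported as Int, not UInt
```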
Updates the CMake version for Swift and the Swift Benchmarks to 3.19.6.
Updates the docs to reflect this change. Does not modify the required version for building the stdlib.
TLDR: I fixed a hole in the assembly-vision opt-remark pass where we were not
emitting a remark for end of scope instructions at the beginning of blocks. Now
all of these instructions (strong_release, end_access) should always reliably
have a remark emitted for them.
----
I think that this is a pragmatic first solution to the problem of a
strong_release or release_value being the first instruction of a block. For
those who are unaware: for a long time we have searched backwards first for
"end of scope"-like instructions. This lets us identify the "end of scope"
instruction as happening at the end of the previous statement, which is where
the developer thinks it should be:
```
var global: Klass
func bar() -> @owned Klass { global }
func foo() {
    // We want the remark for this call to point at the end of the statement:
    bar() // expected-remark {{retain}}
}
```
This makes sense since we want to show end of scope instructions as being
applied to the earlier code whose scope is ending. We can make clear that it is
at the end of the statement by placing the caret on the end-of-statement
SourceLoc, so there isn't any confusion about which statement the remark refers to.
That generally has delivered nice looking results, but what if our release is
the first instruction in the block? In that case, we do not have any earlier
instruction in the block that we can use, so traditionally we just gave up and
didn't emit anything. This is not an acceptable solution! We should be able to
emit something for every retain/release in the program if we want users to be
able to rely upon this! Thus we need to be able to get source location
information from somewhere else nearby.
Before we begin, note that my approach here is informed by seeing over time
that the optimizer does a pretty good job of preserving SourceLoc info for
terminators.
With that in mind, there are two possible approaches here: using the terminator
from the previous block, or searching forward and at worst taking the SourceLoc
of the current block's terminator (or earlier if we find a good SourceLoc
first). I wasn't sure what the correct thing to do was at the time, so I didn't
fix the issue. After some thought, I realized that the correct solution is: if
the backwards search fails, search forwards. The reason is that since our
remark pass runs late in the optimization pipeline, there is a very high
likelihood that if our block wasn't folded into the previous block, the program
truly needs conditional control flow here. We want to avoid placing the release
remark outside such pieces of code since that is misleading to the user:
```
switch payload {
case let .x(f):
    ...
case let .y:
    ...
}
```
In this example there is a release inside the case for .x but none for .y:
since the payload is passed in at +1 at SILGen time, it is possible that we get
a release for `f` there. If that release is the first instruction of the case's
block, using the terminator of the previous block would mean the release is
marked as being on `payload` at the switch instead of inside the case for .x.
By using the terminator of the release's own block, we can keep the remark
inside the case where the release actually occurs.
So using the terminator from the previous block would be misleading to the
user. Instead it is better to pick a location after the release; that way we
know that the instruction we are inferring from is, in some sense, on the same
control-flow path as the release itself.
With this fix, we should now emit remarks for all retains and releases. We may
not identify a good source loc for every one of them, but we will at least emit
something for each.

Remember that because our remark pass runs late in the optimization pipeline,
if our block was not folded into the previous block there is a very high
likelihood that some conditional control flow is truly necessary in the
program; that generally implies a real side effect requiring conditional code
execution (since the optimizer would otherwise have folded the block). This is
also why the forward search works: we are at least going to hit a terminator or
a side-effect-having instruction, and those generally have their debug info
preserved by the optimizer.
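To make the strategy concrete, here is a minimal sketch of the location-inference logic described above. The `SourceLoc` and `Instruction` types and the `inferLocation` helper are hypothetical stand-ins for this note; the real pass lives in the SILOptimizer and is not written this way.
```
struct SourceLoc { var line: Int; var column: Int }

struct Instruction {
    var name: String
    var sourceLoc: SourceLoc?   // nil when the optimizer dropped the location
}

/// Infer a location for an end-of-scope instruction (e.g. a strong_release)
/// at position `index` within its basic block.
func inferLocation(in block: [Instruction], at index: Int) -> SourceLoc? {
    // 1. Search backwards so the remark points at the end of the statement
    //    whose scope is ending.
    for inst in block[..<index].reversed() {
        if let loc = inst.sourceLoc { return loc }
    }
    // 2. If the release is the first instruction of the block, search
    //    forwards instead; at worst we reach the block's terminator or a
    //    side-effecting instruction, whose locations the optimizer
    //    generally preserves.
    for inst in block[(index + 1)...] {
        if let loc = inst.sourceLoc { return loc }
    }
    return nil
}
```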
* Use the "nearly divisionless" algorithm on all targets.
We have multipliedFullWidth available everywhere now, so we don't need to carry around the old implementation on 32b targets.
Also adds a few more benchmarks for random number generation.
* Obscure the range boundaries for some of the new random value benchmarks.
When these are visible compile-time constants, the compiler is smart enough to evaluate the division in the "nearly divisionless" algorithm, which makes it completely divisionless. That's good, but it obscures what the runtime performance of the algorithm will be when the bounds are _not_ available as compile-time constants. Thus, for some of the newly-added benchmarks, we pass the upper bound through `identity` to hide it from the optimizer (this is imperfect, but it's the simplest tool we have).
We don't want to do this for all the tests for two reasons:
- compile-time constant bounds are a common case that should still be reflected in our testing
- we don't want to perturb existing benchmark results more than we have to.
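For reference, here is a sketch of Lemire's "nearly divisionless" bounded-random algorithm expressed with `multipliedFullWidth`; it illustrates the technique discussed above and is not the stdlib's actual implementation:
```
// Draw a uniform value in 0..<upperBound using one full-width multiply and,
// in the common case, no division (the % only runs on the rare rejection path).
func randomBelow<R: RandomNumberGenerator>(
    _ upperBound: UInt64, using generator: inout R
) -> UInt64 {
    precondition(upperBound > 0)
    var word = generator.next() as UInt64
    // Full-width product: high * 2^64 + low == word * upperBound.
    var product = word.multipliedFullWidth(by: upperBound)
    if product.low < upperBound {
        // Rejection threshold: (2^64 - upperBound) % upperBound.
        let threshold = (0 &- upperBound) % upperBound
        while product.low < threshold {
            word = generator.next()
            product = word.multipliedFullWidth(by: upperBound)
        }
    }
    // The high word is floor(word * upperBound / 2^64), uniform in 0..<upperBound.
    return product.high
}
```
The constant-bound case mentioned above is visible here: when `upperBound` is a compile-time constant, the compiler can evaluate the `%` on the rejection path, which is exactly what passing the bound through `identity` in the new benchmarks prevents.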