This fixes the Windows platform, where the aligned allocation path is
not malloc-compatible. It won't make any observable difference on
Darwin or Linux, aside from manually allocated memory on Linux now
being consistently 16-byte aligned (heap objects will still be 8-byte
aligned on Linux).
It is unfortunate that we can't guarantee that Swift-allocated memory
via Unsafe*Pointer is malloc-compatible on Windows. It would have been
nice for that to be a cross-platform guarantee, since it's normal to
allocate in C and deallocate in Swift or vice versa. Now we have to
tell developers to always use _aligned_malloc/_aligned_free when
transitioning between Swift and C if they expect their code to work on
Windows.
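For example (illustrative code; the pointee type and count are
arbitrary), the rule on Windows is that allocation and deallocation
must both go through the aligned path:

    // Allocated in Swift: this always takes the aligned allocation path,
    // so C code that receives this pointer must release it with
    // _aligned_free, never free.
    let p = UnsafeMutablePointer<Double>.allocate(capacity: 16)
    p.initialize(repeating: 0, count: 16)
    // ... hand p over to C ...

    // Conversely, memory that Swift will deallocate() must come from
    // _aligned_malloc on the C side.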
Even though this fix isn't required today on Darwin/Linux, it makes
good sense to guarantee that the allocation/deallocation paths are
consistent.
This is done by specifying a constant that the stdlib can use to round
up alignment, _swift_MinAllocationAlignment. The runtime asserts that
this constant is greater than MALLOC_ALIGN_MASK for all platforms.
This way, manually allocated buffers will always use the aligned
allocation path. If users specify an alignment less than
_swift_MinAllocationAlignment, it is rounded up (so users don't need
to pass the same alignment to deallocate the buffer). This constant
does not need to be ABI.
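A minimal sketch of the resulting rule (the helper name is
hypothetical; the 16-byte value is the one mentioned above):

    // Runtime-side invariant: _swift_MinAllocationAlignment > MALLOC_ALIGN_MASK.
    let _swift_MinAllocationAlignment = 16

    func effectiveAlignment(forRequested alignment: Int) -> Int {
      // Round small alignments up so deallocate() never needs to be
      // told the alignment the buffer was originally allocated with.
      return max(alignment, _swift_MinAllocationAlignment)
    }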
Alternatives are:
1. Require users of Unsafe*Pointer to specify the same alignment
during deallocation. This is obviously madness.
2. Introduce new runtime entry points:
swift_alignedAlloc/swift_alignedDealloc, introduce corresponding
new builtins, and have Unsafe*Pointer always call those. This would
make the runtime API a little more obvious but would introduce
complexity in other areas of the compiler and it doesn't have any
other significant benefit. Less than 16-byte alignment of manually
allocated buffers on Linux is a non-goal.
* Remove apparently obsolete builtin functions.
- Remove s_to_u_checked_conversion and u_to_s_checked_conversion functions from builtin AST parsing, SIL/IR generation and from SIL optimisations.
* Remove apparently obsolete builtin functions - unit tests.
- Remove unit tests for SIL transformations relating to s_to_u_checked_conversion and u_to_s_checked_conversion builtin functions.
`#assert` is a new static assertion statement that will let us write
tests for the new constant evaluation infrastructure that we are working
on. `#assert` works by lowering to a `Builtin.poundAssert` SIL
instruction. The constant evaluation infrastructure will look for these
SIL instructions, const-evaluate their conditions, and emit errors if
the conditions are non-constant or false.
This commit implements parsing, typechecking and SILGen for `#assert`.
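A usage sketch (assuming a compiler built with the experimental
constant-evaluation support):

    #assert(true)          // accepted: the condition is constant and true
    #assert(1 + 1 == 3)    // error: the condition is constant but false

    func f(_ x: Int) {
      #assert(x == 42)     // error: the condition is not constant-evaluable
    }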
`\.self` is the final chosen syntax. Implement support for this syntax, and remove the stopgap builtin and `WritableKeyPath._identity` property that were in place before.
Make sure the implementation can handle a key path with zero components by removing inappropriate assumptions that there is always at least one component. Identity key paths also need some special behavior:
- Appending an identity key path should be an identity operation for the other operand
- Identity key paths have a `MemoryLayout.offset(of:)` of zero
- Identity key paths interop with KVC as key paths to `@"self"`
To be able to exercise and test this behavior, add a `Builtin.identityKeyPath()` function and `WritableKeyPath._identity` accessor in lieu of finalized syntax.
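A sketch of the behaviors listed above, using the final syntax:

    struct S { var x: Int }

    let identity = \S.self                        // WritableKeyPath<S, S>
    let appended = identity.appending(path: \S.x) // behaves like \S.x
    let isZero = MemoryLayout<S>.offset(of: \S.self) == 0  // true
    // Under KVC, \S.self corresponds to the key path @"self".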
So there is no need to initialize global string variables dynamically (with dispatch_once) anymore.
This is much more efficient, in terms of both code size and performance.
Since the functions produce pointers with tightly scoped lifetimes, there's no formal reason these have to work only on `inout` things. Now that arguments can be +0, we can even do this without copying values that we already have at +0.
* Reduce array abstraction on Apple platforms when dealing with literals
Part of the ongoing quest to reduce Swift array literal abstraction
penalties: make the SIL optimizer able to eliminate bridging overhead
when dealing with array literals.
Introduce a new classify_bridge_object SIL instruction to handle the
logic of extracting platform-specific bits from a Builtin.BridgeObject
value that indicate whether it contains an ObjC tagged-pointer object
or a normal ObjC object. This allows the SIL optimizer to eliminate
these, which enables constant folding a ton of code. On the example
added to test/SILOptimizer/static_arrays.swift, this results in 4x
less SIL code, and also leads to a lot more commonality between Linux
and Apple platform codegen when passing an array literal.
This also introduces a couple of SIL combines for patterns that occur
in the array literal passing case.
These will be used for unit-testing the Type::join functionality in the
type checker. The result of the join is replaced during constraint
generation with the actual type.
There is currently no checking for whether the arguments can be used to
statically compute the value, so bad things will likely happen if
e.g. they are type variables. Once more of the basic functionality of
Type::join is working I'll make this a bit more bullet-proof in that
regard.
They include:
// Compute the join of T and U and return the metatype of that type.
Builtin.type_join<T, U, V>(_: T.Type, _: U.Type) -> V.Type
// Compute the join of &T and U and return the metatype of that type.
Builtin.type_join_inout<T, U, V>(_: inout T, _: U.Type) -> V.Type
// Compute the join of T.Type and U.Type and return that type.
Builtin.type_join_meta<T, U, V>(_: T.Type, _: U.Type) -> V.Type
I've added a couple of simple tests to start off, based on what currently
works (i.e., doesn't cause an assert, crash, etc.).
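A sketch of such a test (Builtin is only visible under -parse-stdlib,
and the third generic parameter is the one the constraint generator
replaces with the computed join):

    class Base {}
    class Derived1: Base {}
    class Derived2: Base {}

    func expectEqualType<T>(_: T.Type, _: T.Type) {}

    // The join of two sibling classes is their common superclass, so
    // V.Type should be replaced with Base.Type here.
    expectEqualType(Base.self,
                    Builtin.type_join_meta(Derived1.self, Derived2.self))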
Having such a builtin makes it easier for the optimizer to reason about what is actually happening.
I plan to later add some optimizations that can optimize pieces of code dominated by such a check.
Introduce a new runtime entry point,
`swift_objc_swift3ImplicitObjCEntrypoint`, which is called from any
Objective-C method that was generated due to `@objc` inference rules
that were removed by SE-0160. Aside from being a central place where
users can set a breakpoint to catch when this occurs, this operation
provides logging capabilities that can be enabled by setting the
environment variable SWIFT_DEBUG_IMPLICIT_OBJC_ENTRYPOINT:
SWIFT_DEBUG_IMPLICIT_OBJC_ENTRYPOINT=0 (default): do not log
SWIFT_DEBUG_IMPLICIT_OBJC_ENTRYPOINT=1: log failed messages
SWIFT_DEBUG_IMPLICIT_OBJC_ENTRYPOINT=2: log failed messages with backtrace
SWIFT_DEBUG_IMPLICIT_OBJC_ENTRYPOINT=3: log failed messages with backtrace and abort the process
The log messages look something like:
***Swift runtime: entrypoint -[t.MyClass foo] generated by
implicit @objc inference is deprecated and will be removed in
Swift 4
Previously, when casting a value, we would often just pass along the
cleanup of the uncasted value. With semantic SIL this is no longer correct,
since the cleanup now needs to be on the cast result.
This caused problems for certain usages of Builtin.castToNativeObject(...) by
the stdlib. Specifically, the stdlib was using it on AnyObject values that
were not necessarily native. Since we were recreating the cleanup on the
native value, a Swift native release was being used on a value that might
not be native.
In this commit I solve this problem by:
1. Adding an assert to Builtin.castToNativeObject(...) that ensures any value
passed to it is conservatively known to use Swift native reference counting.
2. Changing all uses where we do not have a precondition of a native
ref-counting type to use Builtin.castToUnknownObject(...).
3. Adding a new builtin, Builtin.unsafeCastToNativeObject(...), that does
not have the compile-time check, and using it to rewrite call sites in the
stdlib where we know via preconditions that an AnyObject will dynamically
always be native.
rdar://29791263
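A sketch of the resulting API surface (stdlib-internal style; Builtin
requires -parse-stdlib):

    final class NativeBox { var value = 0 }

    func asNative(_ x: NativeBox) -> Builtin.NativeObject {
      // Statically checked: NativeBox is conservatively known to use
      // Swift native reference counting, so this compiles.
      return Builtin.castToNativeObject(x)
    }

    func asNativeUnchecked(_ x: AnyObject) -> Builtin.NativeObject {
      // No compile-time check; the caller's precondition is that `x`
      // is dynamically a native Swift object.
      return Builtin.unsafeCastToNativeObject(x)
    }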
...and IRGen it into a call to __tsan_write1 in compiler-rt. This is
preparatory work for a later patch that will add an experimental
option to treat Swift inout accesses as TSan writes.
We need the encode string to be able to construct NSValues using the core valueWithBytes:objCType: API. This builtin only works with concrete, @objc-representable types for now, which should be sufficient for a stdlib-internal API.
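A sketch of how the encode string would be consumed (the struct and the
hand-written encoding below stand in for what the builtin produces):

    import Foundation

    struct Pair { var a: Double; var b: Double }

    var pair = Pair(a: 1, b: 2)
    // The builtin would supply the @encode-style string; "{Pair=dd}"
    // is written out by hand here.
    let value = withUnsafeBytes(of: &pair) { buffer in
      NSValue(bytes: buffer.baseAddress!, objCType: "{Pair=dd}")
    }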
Those builtins are: allocWithTailElems_<n>, getTailAddr, and projectTailElems.
Also rename the "gep" builtin, which indexes raw bytes, to "gepRaw", and add a new "gep" builtin to index into a typed array.
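Roughly how class code would use the tail-element builtins (sketch;
requires -parse-stdlib, and the class layout is illustrative):

    final class IntStorage {
      static func create(count: Int) -> IntStorage {
        // One heap allocation holds the object plus `count` tail Ints.
        return Builtin.allocWithTailElems_1(
          IntStorage.self, count._builtinWordValue, Int.self)
      }

      var elements: UnsafeMutablePointer<Int> {
        // Project the address of the tail-allocated element storage.
        return UnsafeMutablePointer(Builtin.projectTailElems(self, Int.self))
      }
    }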
Strict aliasing only applies to memory operations that use strict
addresses. The optimizer needs to be aware of this flag. Uses of raw
addresses should not have their address substituted with a strict
address.
Also add Builtin.LoadRaw, which will be used by raw pointer loads.
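For context, the user-visible pattern this preserves: loads through raw
pointers may legally type-pun trivial types, so the optimizer must not
rewrite them into strict (typed) accesses. A sketch:

    let raw = UnsafeMutableRawPointer.allocate(byteCount: 8, alignment: 8)
    raw.storeBytes(of: 1.5, as: Double.self)
    // A raw (non-strict) load may reinterpret the bytes as another
    // trivial type; a strict load through a typed pointer may not.
    let bits = raw.load(as: UInt64.self)
    raw.deallocate()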
There are a couple of features that are not yet implemented, because they require additions to the Builtin module. Specifically, this implementation does not have:
- formRemainder(dividingBy:)
- formSquareRoot()
- addProduct(_:_:)
Also missing are the generic initializers and comparisons whose implementation depends on having new Integer protocols.
The last remaining feature of SE-0067 is that, while the basic operators +, -, *, /, etc. are moved onto the FloatingPoint protocol, they are still required on the concrete types in order to disambiguate overloads. Fixing this seems to require either modifying the overload-resolution rules or removing these operators from some other protocols. Or it might just require that someone smarter than me looks at the problem.
Passes all the existing tests (with the included changes). I'm working on additional tests for the new features.
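The payoff of having the operators on the protocol is that generic code
like this type-checks without any concrete-type tricks (sketch):

    func sumOfSquares<T: FloatingPoint>(_ values: [T]) -> T {
      var total: T = 0
      for v in values {
        total += v * v   // + and * come from FloatingPoint itself
      }
      return total
    }

    let d: Double = sumOfSquares([1.0, 2.0, 3.0])  // 14.0
    let f: Float = sumOfSquares([1.0, 2.0, 3.0])   // same code, Float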
It is a hint to the optimizer that the code in which this builtin is called is on the fast path.
Specifically, the inliner takes it into account and increases the assumed benefit of code where the builtin is located.
Compared to the fastPath/slowPath builtins, this builtin can be placed in plain linear code and doesn't need to be used in a condition.
Compared to the @inline(__always) attribute, this builtin also has an effect on the caller function. Assume
foo() calls bar(), bar() contains onFastPath,
and both foo and bar are small functions. Then if bar gets inlined into foo, the builtin also increases the chances that foo gets inlined.
This would not be the case if @inline(__always) were used just for bar.
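A sketch of that scenario (using the stdlib's `_onFastPath()` wrapper
around the builtin; treat the wrapper name as an assumption):

    func bar(_ x: [Int]) -> Int {
      _onFastPath()      // hint: this code is on the fast path
      return x.count
    }

    func foo(_ x: [Int]) -> Int {
      // If bar is inlined here, the hint it carries also raises the
      // assumed benefit of inlining foo into foo's own callers,
      // which @inline(__always) on bar alone would not do.
      return bar(x) + 1
    }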