This restructures the indentation logic around producing a single IndentContext
for the line being indented. An IndentContext has:
- a ContextLoc, which points to a source location to indent relative to,
- a Kind, indicating whether we should align with that location exactly, or
with the start of the content on its containing line, and
- an IndentLevel with the relative number of levels to indent by.
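For illustration, the shape described above is roughly the following (a Swift-style sketch; the real IndentContext is a C++ type, and the member names here are paraphrased from this description rather than taken from the code):

```swift
// Rough sketch only, not the actual declaration.
struct IndentContext {
    typealias SourceLoc = Int   // stand-in for a real source location

    enum Kind {
        case exact      // align with ContextLoc itself
        case alignment  // align with the start of the content on ContextLoc's line
    }

    let contextLoc: SourceLoc   // location to indent relative to
    let kind: Kind
    let indentLevel: Int        // relative number of levels to indent by
}
```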
It also improves the handling of:
- chained and nested parens, braces, square brackets, and angle brackets, and
  how those interact with the exact alignment of parameters, call arguments,
  and tuple, array, and dictionary elements (see the example after this list), and
- indenting to the correct level after an incomplete expression, statement, or
  decl.
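For example, the exact-alignment handling is aimed at layouts like the following (an illustrative snippet, not a test case from this change):

```swift
func compute(first: Int, second: [Int], third: (a: String, b: String)) -> Int {
    return first + second.count + third.a.count
}

// Nested call arguments, array elements, and tuple elements each align
// exactly with the first element after their opening bracket.
let value = compute(first: 1,
                    second: [1, 2,
                             3, 4],
                    third: (a: "x",
                            b: "y"))
```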
Resolves:
rdar://problem/59135010
rdar://problem/25519439
rdar://problem/50137394
rdar://problem/48410444
rdar://problem/48643521
rdar://problem/42171947
rdar://problem/40130724
rdar://problem/41405163
rdar://problem/39367027
rdar://problem/36332430
rdar://problem/34464828
rdar://problem/33113738
rdar://problem/32314354
rdar://problem/30106520
rdar://problem/29773848
rdar://problem/27301544
rdar://problem/27776466
rdar://problem/27230819
rdar://problem/25490868
rdar://problem/23482354
rdar://problem/20193017
rdar://problem/47117735
rdar://problem/55950781
rdar://problem/55939440
rdar://problem/53247352
rdar://problem/54326612
rdar://problem/53131527
rdar://problem/48399673
rdar://problem/51361639
rdar://problem/58285950
rdar://problem/58286076
rdar://problem/53828204
rdar://problem/58286182
rdar://problem/58504167
rdar://problem/58286327
rdar://problem/53828026
rdar://problem/57623821
rdar://problem/56965360
rdar://problem/54470937
rdar://problem/55580761
rdar://problem/46928002
rdar://problem/35807378
rdar://problem/39397252
rdar://problem/26692035
rdar://problem/33760223
rdar://problem/48934744
rdar://problem/43315903
rdar://problem/24630624
We need this anyway for -Onone, and I want to do some experiments with running
this very early so I can expose more of the stdlib (modulo inlining) to the new
ownership optimizing passes.
I also changed how the inliner handles inlining around OSSA: it now checks
early that if the caller is in OSSA, we only inline when all of the callees the
caller calls are also in OSSA. The intention is to hopefully avoid weird swings
in code size/performance caused by the inliner heuristic's calculation being
artificially skewed when some callees are unavailable to inline (due to this
difference) while others already are.
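A minimal sketch of that early check, assuming hypothetical `Function`/`hasOwnership` names (the real logic lives in the C++ inliner):

```swift
struct Function {
    var hasOwnership: Bool   // true if the function is in OSSA form
}

// Only consider inlining into an OSSA caller when every callee it calls is
// also in OSSA, so the size/benefit heuristic sees a consistent candidate set.
func mayInline(into caller: Function, callees: [Function]) -> Bool {
    guard caller.hasOwnership else { return true }
    return callees.allSatisfy { $0.hasOwnership }
}
```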
I was being too clever here and in the process "foot-gunned" myself.
Specifically, I was hoping to use bisection to perform some is-contained-in-list
tests. But the list that I actually bisected over was not the original list; it
was just a new list based on sorting the original! So the pointer comparison
used by the bisection no longer worked. Since the bisection was on pointer
addresses, if we were lucky and the addresses in the new array happened to sort
the same way as in the original array, we wouldn't hit anything.
rdar://60262326
We use Header::NameAsWritten to collect the path of a header that belongs to a
module. Due to an unclear clang-side issue, the path may contain "Headers/" and
"PrivateHeaders/" components. To work around this potential issue, we explicitly
check for and remove these path components on the Swift side.
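A minimal sketch of the workaround, with a hypothetical helper name; only the two directory names come from the description above:

```swift
// Drop "Headers" / "PrivateHeaders" path components before recording the
// header path (illustrative only).
func stripFrameworkHeaderDirs(_ path: String) -> String {
    let dropped: Set<Substring> = ["Headers", "PrivateHeaders"]
    let kept = path.split(separator: "/").filter { !dropped.contains($0) }
    return (path.hasPrefix("/") ? "/" : "") + kept.joined(separator: "/")
}

// stripFrameworkHeaderDirs("/SDKs/Foo.framework/Headers/Foo.h")
//   == "/SDKs/Foo.framework/Foo.h"
```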
rdar://problem/60249751
- optional object type of `.some` pattern ends with `OptionalPayload`
- type of sub-pattern used in a cast points to underlying sub-pattern declaration
- Enum element:
- parent type locator ends with `ParentType`
- member lookup constraint locator ends at `Member`
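For reference, the source-level patterns these locator elements correspond to look roughly like this (my own illustrative example, not a test from the change):

```swift
enum E { case pair(Int, Any) }

func match(_ x: Int?, _ e: E) {
    // `.some` pattern: its optional object type is what the OptionalPayload
    // element points at.
    if case .some(let v) = x { _ = v }

    // Enum element pattern whose sub-pattern is used in a cast; the enum's
    // parent type and the member lookup are what ParentType/Member refer to.
    if case .pair(let a, let b as String) = e { _ = (a, b) }
}
```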
* Use in_guaranteed for let captures
With this, all let values will be captured by the closure with the in_guaranteed
convention. The main changes are:
SILGen changes:
- A new CaptureKind::Immutable is introduced, to capture let values as in_guaranteed.
- SILGen of in_guaranteed captures had to be fixed.
Per convention, in_guaranteed captures are consumed by the closure, so SILGen should not generate a destroy_addr for an in_guaranteed capture.
But LetValueInitialization can push Dealloc and Release states for the captured arg onto the Cleanup stack, and there is no way to access the CleanupHandle and disable the emission of destroy_addr while emitting the captures in SILGenFunction::emitCaptures.
So we now create a temporary allocation for the in_guaranteed capture during SILGenFunction::emitCaptures without emitting a destroy_addr for it.
SILOptimizer changes:
- Handle in_guaranteed in CopyForwarding.
- Adjust dealloc_stack of in_guaranteed capture to occur after destroy_addr for on_stack closures in ClosureLifetimeFixup.
IRGen changes:
- Since HeapLayout can be non-fixed now, make sure emitSize is used conditionally.
- Don't consider ClassPointerSource kind parameter types for fulfillments while generating code for the partial apply forwarder.
The TypeMetadata of ClassPointerSource kind sources is not populated in the HeapLayout's NecessaryBindings. If we have a generic parameter on the HeapLayout which can be fulfilled by a ClassPointerSource, its TypeMetadata will not be found while constructing the dtor function of the HeapLayout.
So it is important to skip considering sources of ClassPointerSource kind, so that the TypeMetadata of dependent generic parameters gets populated in the HeapLayout's NecessaryBindings.
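For illustration (my own example, not from this patch), a generic 'let' capture like the one below is the kind of capture that now uses the in_guaranteed convention:

```swift
func makeGetter<T>(_ value: T) -> () -> T {
    let captured = value
    // The closure receives `captured` at an in_guaranteed (borrowed) address
    // instead of taking ownership of its own copy.
    return { captured }
}
```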
The design implemented in this patch is that we lower the types of accessors with pattern substitutions when lowering them against a different accessor, which happens with class overrides and protocol witnesses, and that we introduce pattern substitutions when substituting into a non-patterned coroutine type. This seems to achieve consistent abstraction without introducing a ton of new complexity.
An earlier version of this patch tried to define witness thunks (conservatively, just for accessors) by simply applying the requirement substitutions directly to the requirement. Conceptually that should work, but I ran into a lot of trouble with things that assumed that pattern substitutions didn't conceal significant substitution work. For example, resolving a dependent member in a component type could find a new use of an opaque archetype when the code assumed that such types had already been substituted away. So while I think that is definitely a promising direction, I had to back that out in order to make the number of changes manageable for a single PR.
As part of this, I had to fix a number of little bugs here and there, some of which I just introduced. One of these bugfixes is a place where the substitution code was trying to improperly abstract function types when substituting them in for a type parameter, and it's been in the code for a really long time, and I'm really not sure how it's never blown up before.
I'm increasingly of the opinion that invocation substitutions are not actually necessary, but that, once we've solved the substitution issues above, we may want the ability to build multiple levels of pattern substitution so that we can guarantee that e.g. witness thunks always have the exact component structure of the requirement before a certain level of substitution, thus allowing the witness substitutions to be easily extracted.
Requirements on extensions were only being gathered indirectly. This adds a new
optional field to `conformsTo` relationship edges, `swiftConstraints`, which
provides the requirements there.
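For example (an illustrative declaration of my own), a constrained conformance like the one below would now have its `Element == Int` requirement recorded in the `swiftConstraints` field of the corresponding `conformsTo` edge:

```swift
public protocol Summable {
    func total() -> Int
}

// The `Element == Int` constraint lives on the extension, not the protocol,
// which is the information `swiftConstraints` now carries.
extension Array: Summable where Element == Int {
    public func total() -> Int { reduce(0, +) }
}
```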
rdar://60091161
Because we block the calling thread, this should be safe for the existing
implementation of PrettyStackTrace; however, it would be much nicer if
LLVM provided direct support for this.
The most important thing from the trace that we want to print is the
original command line, so an alternative would be to just make a
PrettyStackTrace that printed all of the CompilerInvocation context;
I think that's accessible somewhere. But this is a nice, simple
improvement.
I banned this in an earlier commit since it wasn't really useful and introduced
a bunch of unneeded complexity into the ownership system (namely the need for
BranchPropagatedUser). Now that we are free of that, we can clean up the
ownership code and just treat cond_br simply! I left in a special section in
OperandOwnership so I could put in a nice comment there.
This pass makes assumptions about the visibility of a type's memory
based on the visibility of its properties. This is the wrong way to
think about memory visibility.
Fixes <rdar://57564377> Struct component with 'let', unexpected
behavior
This pass wants to assume that the contents of a property are known based
on whether the property is declared as a 'let' and the visibility of
the initializers that access the property. For example:
```
public struct X<T> {
  public let hidden: T
  init(t: T) { self.hidden = t }
}
```
The pass currently assumes that `X` only takes on values that are
assigned by invocations of `X.init`, which is only visible in `X`'s
module. This is wrong if the layout of `X` is exposed to other
modules. A struct's memory may be initialized by any module with
access to the struct's layout.
In fact, this assumption is wrong even if the struct and its 'let'
property cannot be accessed externally by name. In this next example,
external modules cannot access `Impl` or `Impl.hidden` by name, but
they can still access the memory.
```
internal struct Impl<T> {
  let hidden: T
  init(t: T) { self.hidden = t }
}

public struct Wrapper<T> {
  var impl: Impl<T>
  public var property: T {
    get {
      return impl.hidden
    }
  }
}
```
As long as `Wrapper`'s layout is exposed to other modules, the contents
of `Wrapper`, `Impl`, and `hidden` can all be initialized in another
module:
```
// Legal as long Wrapper's home module is *not* built with library evolution
// (or if Wrapper is declared `@frozen`).
func inExternalModule(buffer: UnsafeRawPointer) -> Wrapper<Int64> {
  return buffer.load(as: Wrapper<Int64>.self)
}
```
If library evolution is enabled and a `public` struct is not declared
`@frozen`, then external modules cannot assume its layout and therefore
cannot initialize the struct's memory. In that case, it is possible to optimize
`X.hidden` and `Impl.hidden` as if the properties were only initialized inside
their home module.
The right way to view a type's memory visibility is to consider whether
external modules have access to the layout of the type. If they do not, then
the property can still be optimized, as long as the struct is never enclosed in
a public effectively-`@frozen` type. However, finding all places where a struct
is explicitly created is still insufficient. Instead, the optimization needs
to find all uses of enclosing types and determine whether every use either has
a known constant initialization or is simply copied from another value. If an
escaping unsafe pointer to any enclosing type is created, then the
optimization is not valid.
When viewed this way, the fact that a property is declared 'let' is mostly
irrelevant to this optimization; it could be expanded to handle non-'let'
properties. The more salient feature is whether the property has a public
setter.
For now, this optimization only recognizes class properties, because class
properties are only accessible via a ref_element_addr instruction. This is a
side effect of the fact that accessing a class property requires a "formal
access", which means that a begin_access marker must be emitted directly on the
address produced by the ref_element_addr. Struct properties are not handled, as
explained above, because they can be indirectly accessed via addresses of
outer types.
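For illustration (my own example), the class-property case the pass does recognize looks like this; every access to `hidden` is a formal access rooted at the class reference, so there is no way to reach its storage through an outer type's address:

```swift
public final class Box<T> {
    // All reads of `hidden` go through a ref_element_addr on a Box reference,
    // with a begin_access emitted directly on that address.
    public let hidden: T
    init(_ t: T) { self.hidden = t }
}
```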
This is the simplest initial version that I can commit. The hope is that this
will help to bring this up in a nice way. I am going to handle the
multiple-phi-node and load [copy] cases later to reduce code churn.
<rdar://problem/56720436>
Call arguments sometimes affect the inference for the generic parameters of the
type expression. When we want to show all initializers from all
extensions, we do not want to infer any generic arguments.
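A hypothetical example of the behavior this aims for:

```swift
struct Pair<T> {
    init(first: T, second: T) {}
}

extension Pair where T == Int {
    init(sum: Int) { self.init(first: sum, second: 0) }
}

// When listing initializers after `Pair(`, we want both init(first:second:)
// and the extension's init(sum:) to show up, rather than letting already-typed
// arguments pin down T and filter out the rest.
```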
rdar://problem/53516588
In the case of a copy_addr, we move it onto the source address; in the case of
a store, we put it on the source object.
I just noted this pattern happening a bunch in the stdlib when I was looking
through it for ownership patterns.
Consider the following example:
```swift
struct MyType<TyA, TyB> {
  var a : TyA, b : TyB
}
typealias B<T1> = MyType<T1, T1>
_ = B(a: "foo", b: 42)
```
Here `T1` is equal to both `TyA` and `TyB`, so the diagnostic about
conflicting arguments ('String' vs. 'Int') should only be
produced for the "representative" `T1`.
When present in a function builder, buildFinalResult() will be called on
the value of the outermost block to form the final result of the closure.
This allows one to collapse the full function builder computation into
a single result without having to do it in each buildBlock() call.
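A minimal sketch of what this enables, written with the current `@resultBuilder` spelling (the builder and function names here are mine):

```swift
@resultBuilder
struct Lines {
    static func buildBlock(_ parts: String...) -> [String] { parts }

    // Called once on the outermost block's value to form the closure's result.
    static func buildFinalResult(_ parts: [String]) -> String {
        parts.joined(separator: "\n")
    }
}

func render(@Lines _ body: () -> String) -> String { body() }

let text = render {
    "Hello"
    "World"
}
// text == "Hello\nWorld"
```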