* [Foundation] Byte access, and methods that funnel to byte access, for slices of discontiguous data (e.g. backed by dispatch_data_t) should avoid heap corruption and walking off the ends of buffers
* Add missing parens on test_byte_access_of_discontiguousData
* Use the proper byte count in testing
* Mutations of slices of Data should preserve relative indexing as well as the CoW semantics of slices (see the sketch after this list)
* Ensure hashes of ranges are consistent with the expected hash for Data
* Correct a few mistakes in the slice mutation tests
* Update sequence initializations to avoid directly calling mutableCopy, which prevents slice offset mismatches
* Avoid invalid index slices in creating mirrors
* Restore the original Data description
* Resetting a slice region should expand the slice to the maximum of the region (not an out-of-bounds index of the backing buffer)
* Remove stray comment and use a stack buffer for sequence appending
* Return false when allocations fail in _resizeConditionalAllocationBuffer (not yet in use)
* Enumeration of regions of a slice should be limited to the slice range in the case of custom backing (e.g. dispatch_data_t)
* Adjust assertion warnings for Data indexes that are negative
* Unify the capitalization across all user-visible error messages (fatal errors, assertion failures, precondition failures) produced by the runtime, the standard library, and the compiler.
* Update some more tests to the new expectations.
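A minimal, test-style sketch of the slice behavior called out above (relative indexing preserved under mutation, and CoW leaving the parent untouched); the values and assertions are illustrative, not the actual test suite:

```swift
import Foundation

var data = Data([0, 1, 2, 3, 4, 5])
var slice = data[2..<5]        // a slice of Data keeps its parent's indices: 2, 3, 4

// Mutating through the slice uses the slice's own (relative) indices...
slice[2] = 0xFF
assert(slice.first == 0xFF)
assert(slice.startIndex == 2 && slice.endIndex == 5)

// ...and copy-on-write means the original Data is left untouched.
assert(data[2] == 2)
```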
As a temporary workaround for SR-5206, JSONEncoder and JSONDecoder were granted special internal knowledge of certain Foundation types that had custom behavior, in order to preserve strategies on encode/decode.
This replaces that special knowledge with a more general-purpose fix that works for all types and all encoders/decoders.
One of the limitations of not having conditional conformance at the
moment is that, in the implementation of `init(from:)` and `encode(to:)`
on types which require it, failure to cast dependent types to
`Encodable` or `Decodable` is a runtime failure. There is no way to
statically guarantee that the wrapped type is `Encodable` or
`Decodable`.
As such, in those implementations, at best we can directly call
`(element as! Encodable).encode(to: encoder)`, or similar. However, this
encodes the element directly into an encoder, without giving the encoder
a chance to intercept the type. This is problematic for `JSONEncoder`
because it cannot apply a strategy if it doesn't get to intercept the
type.
This is why the JSON strategies were previously given a temporary
workaround via internal Foundation knowledge.
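A minimal sketch of the cast-and-encode pattern described above, using a hypothetical `Wrapper` type (the names here are illustrative, not the actual standard library code):

```swift
struct Wrapper<T> {
    let element: T
}

extension Wrapper: Encodable {
    func encode(to encoder: Encoder) throws {
        // Without conditional conformance, T's conformance can only be
        // discovered at runtime.
        guard let encodableElement = element as? Encodable else {
            preconditionFailure("\(T.self) does not conform to Encodable")
        }
        // Encoding straight into `encoder` never lets the encoder intercept
        // the concrete type, so JSONEncoder cannot apply, e.g., its date or
        // data strategies to `element`.
        try encodableElement.encode(to: encoder)
    }
}
```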
This avoids indirection by making calls directly to the C implementations, which prevents potential mismatches of intent or changes in calling convention for @_silgen_name declarations. The added benefit is that all of the shims in this case are no longer visible symbols (no one outside of the Foundation overlay was authorized to use them anyway). The callout methods in the headers also now share a common naming scheme, in the style of __NS<Class><Action>, for easier refactoring and searching. The previously compiled C/Objective-C source files were built with MRR; the new headers MUST be ARC per Swift import rules.
The one caveat is that certain functions MUST avoid the bridging case (since they are part of the bridging code paths, and bridging there could recurse); those functions have their types erased up to NSObject * via the NS_NON_BRIDGED macro.
The remaining @_silgen_name declarations are either Swift functions exposed externally to the rest of Swift's runtime, or are included in NSNumber.gyb, for which the Foundation team has separate plans to remove those @_silgen_name functions at a later date. Data.swift has one external @_silgen_name function left, which is blocked by a compiler bug that appears to improperly import that particular method as an inline C function.
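A hedged sketch of the difference in call styles, using the C `strlen` function purely as a stand-in (none of this is the actual Foundation shim code):

```swift
import Foundation

// Old style: re-declare the entry point by symbol name. Nothing verifies that
// this Swift signature matches the C one, and the call goes through Swift's
// calling convention; it happens to line up for a signature this simple, which
// is exactly the fragility that calling the C implementations directly removes.
@_silgen_name("strlen")
func _shimStrlen(_ s: UnsafePointer<CChar>) -> Int

// New style: call the imported C declaration directly, letting the Swift
// importer check the signature (and manage ARC for object types).
let greeting = "hello"
greeting.withCString { cString in
    assert(_shimStrlen(cString) == Int(strlen(cString)))
}
```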
Data can encapsulate its own sub-sequence type by housing the range of the slice in the structural type for Data. Doing so avoids the API explosion in which every API that takes Data would also need an overload taking a slice of Data. This does come at a small conceptual cost: any index-based iteration should always account for the startIndex and endIndex of the Data (which was previously an implicit requirement of being a Collection). Moreover, this avoids requiring O(n) copies of Data when it is never mutated while parsing sub-sequences; so more than an API amelioration, this also offers a more efficient code path for applications to use.
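A minimal sketch of the indexing point above: a slice of Data keeps the indices of its parent, so index-based iteration must respect startIndex/endIndex rather than assume zero-based indices:

```swift
import Foundation

let data = Data([0x10, 0x11, 0x12, 0x13, 0x14])
let slice = data[2..<4]        // still Data, but slice.startIndex == 2

// Wrong: assuming zero-based indices walks off the front of the slice.
// let first = slice[0]        // 0 is not a valid index of this slice

// Right: iterate within the slice's own bounds.
for index in slice.startIndex..<slice.endIndex {
    print(slice[index])        // the bytes 0x12 and 0x13 (printed as 18 and 19)
}
```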
* replace unused closure parameters with '_' in stdlib source
* fold some _ closure arguments into line above
* fold more _ closure arguments into line above