* Let `customBits` and `lastInitializedBitfieldID` share a single `uint64_t`. This increases the number of available bits in SILNode and Operand from 8 to 20. It also simplifies the Operand class, because PointerIntPairs are no longer needed to store the operand's pointer fields (see the first sketch after this list).
* Make the "deleted" flag a separate `bool` field in SILNode instead of encoding it in the sign of `lastInitializedBitfieldID`. Another simplification.
* Enable important invariant checks in release builds as well, by using `require` instead of `assert` (see the `require` sketch after this list). Not catching such errors in release builds would be a disaster.
* Let the Swift optimization passes use all available bits instead of a fixed number: 8 for SILNode and 16 for SILBasicBlock.
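For illustration, a minimal C++ sketch of the packing idea; the widths, names, and accessors here are assumptions, not the actual SILNode layout:

```c++
#include <cstdint>

// Sketch: the pass-visible scratch bits and the bitfield-ID counter
// share one 64-bit word via C++ bit-fields.
class PackedNodeBits {
  enum : unsigned { NumCustomBits = 20 };
  uint64_t customBits : NumCustomBits;  // scratch bits for passes
  uint64_t lastInitializedBitfieldID : 64 - NumCustomBits;

public:
  PackedNodeBits() : customBits(0), lastInitializedBitfieldID(0) {}

  uint64_t getCustomBits() const { return customBits; }
  void setCustomBits(uint64_t bits) { customBits = bits; }

  uint64_t getLastInitializedBitfieldID() const { return lastInitializedBitfieldID; }
  void setLastInitializedBitfieldID(uint64_t id) { lastInitializedBitfieldID = id; }
};

static_assert(sizeof(PackedNodeBits) == sizeof(uint64_t),
              "both fields fit in a single word");
```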
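And a hedged sketch of what such a `require` helper could look like; the compiler's actual signature may differ:

```c++
#include <cstdio>
#include <cstdlib>

// Sketch only: unlike assert, this check is not compiled out in
// release builds.
inline void require(bool condition, const char *message) {
  if (!condition) {
    fprintf(stderr, "fatal error: %s\n", message);
    abort();  // fail hard instead of continuing with a broken invariant
  }
}
```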
Instead of setting the parent pointer to null, set `lastInitializedBitfieldID` to -1.
This keeps the parent block information available even after an instruction is removed from its block's list.
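A minimal sketch of this sentinel encoding, assuming a signed ID field; names are illustrative and the real SILNode carries more state:

```c++
#include <cstdint>

struct NodeSketch {
  void *parentBlock = nullptr;       // stays valid even after deletion
  int64_t lastInitializedBitfieldID = 0;

  void markAsDeleted() {
    lastInitializedBitfieldID = -1;  // sentinel: node was deleted
    // parentBlock is intentionally NOT cleared
  }
  bool isMarkedAsDeleted() const { return lastInitializedBitfieldID < 0; }
};
```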
This mechanism is used to implement `InstructionSet` and `ValueSet`: sets of SILInstructions and SILValues.
Just like `BasicBlockSet` for basic blocks, the set is implemented by setting bits directly in SILNode.
This is super efficient, because insertion into and deletion from the set are simple bit operations.
The cost is an additional word in SILNode, but that is basically negligible: it adds only ~0.7% to the memory used for SILInstructions.
In my experiments, I didn't see any noticeable change in memory consumption or compile time.
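To make the bit-set idea concrete, here is a self-contained sketch. The real `InstructionSet` also has to hand out bit indices and invalidate stale bits (via `lastInitializedBitfieldID`), which is omitted here; `NodeStub` stands in for a SIL node:

```c++
#include <cstdint>

// Stub standing in for a SIL node; only the scratch bits matter here.
struct NodeStub {
  uint64_t customBits = 0;
};

// Each live set owns one bit index in every node's customBits, so
// membership tests and updates are single bit operations.
class InstructionSetSketch {
  unsigned bitIndex;  // the custom bit this set owns

public:
  explicit InstructionSetSketch(unsigned bitIndex) : bitIndex(bitIndex) {}

  // Returns true if the node was newly inserted.
  bool insert(NodeStub *node) {
    uint64_t mask = uint64_t(1) << bitIndex;
    bool wasContained = (node->customBits & mask) != 0;
    node->customBits |= mask;
    return !wasContained;
  }

  void erase(NodeStub *node) {
    node->customBits &= ~(uint64_t(1) << bitIndex);
  }

  bool contains(const NodeStub *node) const {
    return (node->customBits >> bitIndex) & 1;
  }
};
```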
* add a `BasicBlockSetVector` class (see the sketch after this list)
* add a second argument to `BasicBlockFlag::set` for the value to set.
* rename `BasicBlockSet::remove` -> `BasicBlockSet::erase`.
* add a `MaxBitfieldID` statistic in SILFunction.cpp
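A hypothetical sketch of what a set-vector combines: set-speed membership tests plus deterministic, insertion-ordered iteration. `BlockStub` and the bit handling stand in for the real SIL types:

```c++
#include <cstdint>
#include <vector>

struct BlockStub {
  uint64_t customBits = 0;
};

// Set semantics via one custom bit, iteration order via a vector.
class BlockSetVectorSketch {
  unsigned bitIndex;                // the custom bit this set owns
  std::vector<BlockStub *> vector;  // blocks in insertion order

public:
  explicit BlockSetVectorSketch(unsigned bitIndex) : bitIndex(bitIndex) {}

  // Returns true if the block was newly inserted.
  bool insert(BlockStub *block) {
    uint64_t mask = uint64_t(1) << bitIndex;
    if (block->customBits & mask)
      return false;                 // already in the set
    block->customBits |= mask;
    vector.push_back(block);        // remember insertion order
    return true;
  }

  bool contains(const BlockStub *block) const {
    return (block->customBits >> bitIndex) & 1;
  }

  // Deterministic iteration in insertion order.
  std::vector<BlockStub *>::const_iterator begin() const { return vector.begin(); }
  std::vector<BlockStub *>::const_iterator end() const { return vector.end(); }
};
```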