That Crazy Carl Guy - 29 days ago
C++ Question

Convert C++ type int16_t to int64_t without modifying the underlying binary

I am trying to generate a hash code for an object in 3D space so it can be quickly found in an array using a binary search algorithm.

Since each object in this array has a unique XYZ location, I figured I could use those three values to generate the hash code. I used the following code to try and generate the hash code.

int64_t generateCode(int16_t x, int16_t y, int16_t z) {
    int64_t hashCode = z;//Set Z bits.
    hashCode <<= 16;//Shift them 16 bits.
    hashCode |= y;//Set Y bits.
    hashCode <<= 16;//Shift them 16 bits.
    hashCode |= x;//Set X bits.
    return hashCode;
}


Now here is the problem, from what I can tell. Consider the following piece of code:

int16_t x = -1;
cout << "X: " << bitset<16>(x) << endl;//Prints the binary value of X.
int64_t y = x;//Set Y to X. This will automatically cast the types.
cout << "Y: " << bitset<64>(y) << endl;//Prints the binary value of Y.


The output of this program is as follows:

X: 1111111111111111
Y: 1111111111111111111111111111111111111111111111111111111111111111


The conversion keeps the numerical value of the number, but it does so by sign-extending the underlying binary. I want that binary left untouched instead, so the output would look like the following:

X: 1111111111111111
Y: 0000000000000000000000000000000000000000000000001111111111111111


By doing that, I can then create a unique hash code from the XYZ values that would look like the following:

                Unused              X                 Y                 Z
HashCode: [0000000000000000][0000000000000000][0000000000000000][0000000000000000]


And that will be used for the binary search.

Answer

Convert each int16_t to a uint16_t first, then merge them together into a uint64_t that you finally cast to an int64_t:

int64_t generateCode(int16_t x, int16_t y, int16_t z) {
    //Casting to uint16_t keeps the bit pattern but drops the sign, so the
    //later widening to uint64_t zero-extends instead of sign-extending.
    uint64_t hashCode = static_cast<uint16_t>(z);
    hashCode <<= 16;
    hashCode |= static_cast<uint16_t>(y);
    hashCode <<= 16;
    hashCode |= static_cast<uint16_t>(x);
    return static_cast<int64_t>(hashCode);
}
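
As a quick sanity check, here is a small driver (hypothetical, reusing the bitset printing from your question) showing that a negative coordinate now lands only in its own 16-bit field:

    #include <bitset>
    #include <cstdint>
    #include <iostream>

    int64_t generateCode(int16_t x, int16_t y, int16_t z);//Defined above.

    int main() {
        //x = -1: all 16 X bits set, everything else stays zero.
        std::cout << std::bitset<64>(generateCode(-1, 0, 0)) << '\n';
        //Prints: 0000000000000000000000000000000000000000000000001111111111111111
    }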

The int16_t/int64_t types are required to use a two's complement representation (7.20.1.1 paragraph 1 of the C standard, which C++'s <cstdint> follows), so converting them to a uint*_t of the same width is a bit-wise no-op.
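
If you want to convince yourself of that, a minimal sketch: the cast to uint16_t keeps all 16 bits intact, and widening the unsigned value zero-extends rather than sign-extends:

    #include <bitset>
    #include <cstdint>
    #include <iostream>

    int main() {
        int16_t s = -1;                        //Bits: 1111111111111111.
        uint16_t u = static_cast<uint16_t>(s); //Same bits, value 65535.
        uint64_t w = u;                        //Zero-extended, not sign-extended.
        std::cout << std::bitset<16>(u) << '\n';//Prints 1111111111111111.
        std::cout << std::bitset<64>(w) << '\n';//Prints 48 zeros, then 16 ones.
    }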