I was wondering if system endianness matters when converting a byte array to a short / int / long. Would this be incorrect to do if the code runs on both big-endian and little-endian machines?
short s = (short)((b[0] << 8) | b[1]);
int i = (b[0] << 24) | (b[1] << 16) | (b[2] << 8) | b[3];
Yes, endianness matters, but it is the byte order of the data in the array that determines how you combine the bytes. The shift-and-OR expressions above operate on values, not on memory, so they produce the same result on big-endian and little-endian CPUs. Because the expression puts b[0] into the most significant bits (bits 8-15 for short, 24-31 for int), it treats b[0] as the most significant byte, i.e. it assumes the array is in big-endian order. If the array holds the bytes in little-endian order, reverse the indices:

short s = (short)((b[1] << 8) | b[0]);
int i = (b[3] << 24) | (b[2] << 16) | (b[1] << 8) | b[0];
Note that this depends on the endianness of the byte array data, not of the machine. The CPU's endianness only comes into play if you reinterpret memory directly (for example via a pointer cast or memcpy) instead of composing the value with shifts.
It is recommended to wrap these conversions in functions that know the expected byte order of the data (fixed by convention, selected via compilation flags, or detected at run time) and perform the conversion correctly.
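Such wrappers might look like the following sketch (the names load_be16, load_be32, and load_le32 are hypothetical, not from any standard API); because they are built from shifts, they need no knowledge of the host CPU's byte order, only of the array's:

```c
#include <stdint.h>

/* Load a 16-bit value from two bytes stored in big-endian order. */
uint16_t load_be16(const unsigned char *b) {
    return (uint16_t)(((uint16_t)b[0] << 8) | b[1]);
}

/* Load a 32-bit value from four bytes stored in big-endian order. */
uint32_t load_be32(const unsigned char *b) {
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16)
         | ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}

/* Load a 32-bit value from four bytes stored in little-endian order:
   same technique, indices reversed. */
uint32_t load_le32(const unsigned char *b) {
    return ((uint32_t)b[3] << 24) | ((uint32_t)b[2] << 16)
         | ((uint32_t)b[1] << 8)  |  (uint32_t)b[0];
}
```

Callers then pick the function that matches the data format rather than sprinkling shift expressions (and endianness assumptions) throughout the code.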
In addition, establishing a standard for the byte array data (always big endian, for example) and then using ntohl offloads the endianness decision to the OS socket implementation, which is aware of such things. Note that network byte order is big endian (hence the ntoh/hton family of functions), so keeping the byte array data in big-endian order is the most straightforward way to do this.
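A minimal sketch of that approach on a POSIX system (the helper name read_net_u32 is my own): copy the raw network-order bytes into an integer, then let ntohl perform whatever swap the host needs.

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>  /* ntohl; on Windows use winsock2.h instead */

/* Decode a 32-bit value from a buffer holding network
   (big-endian) byte order. memcpy produces a host-order
   reinterpretation; ntohl then swaps it (or does nothing)
   as appropriate for this host. */
uint32_t read_net_u32(const unsigned char *buf) {
    uint32_t v;
    memcpy(&v, buf, sizeof v);
    return ntohl(v);
}
```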
As pointed out by the OP (@Mike), Boost also provides endianness conversion functions (the Boost.Endian library).