My head is starting to hurt... I've been looking at this way too long.
I'm trying to mask the most significant nibble of an int, regardless of the int's bit length and the endianness of the machine. Let's say

x = 8425 = 0010 0000 1110 1001
I know that to get the least significant nibble, I just need to do something like

x & 0xF

to get back 1001. But how about the most significant nibble, 0010?
I apologize if my logic from here on out falls apart; my brain is completely fried, but here I go:
My book tells me that the bit length w of the data type int can be computed with

w = sizeof(int) << 3
If I knew that the machine were big-endian, I could build the mask

0xF << (w - 4)

to select the most significant nibble and zero out the rest, i.e.

1111 0000 0000 0000
If I knew that the machine were little-endian, I could do

0xF >> (w - 8)

for the most significant nibble, i.e.

0000 0000 0000 1111

Fortunately, this works even though we are told to assume that right shifts are done arithmetically, because the first bit of 0xF is 0, so the arithmetic shift always fills in zeros anyway.
. But this is not a proper solution. We are not allowed to test for endianness and then proceed from there, so what do I do?