I've seen this pattern used a lot in C & C++.
unsigned int flags = -1; // all bits are true
I recommend you do it exactly as you have shown, since it is the most straightforward way. Initialize to -1, which will work always, independent of the actual sign representation, while ~ will sometimes have surprising behavior, because you have to have the right operand type. Only then will you get the highest value of an unsigned type.
For an example of a possible surprise, consider this one:
unsigned long a = ~0u;
It won't necessarily store a pattern with all bits 1 into a. It will first create a pattern with all bits 1 in an unsigned int, and then assign it to a. If unsigned long has more bits than unsigned int, not all of those bits will be 1.
And consider this one, which will fail on a non-two's-complement representation:
unsigned int a = ~0; // Should have done ~0u !
The reason is that ~0 has to invert all bits of 0. Inverting those will yield -1 on a two's complement machine (which is the value we need!), but will not yield -1 on another representation. On a one's complement machine, all bits one is negative zero, whose value is zero. Thus, on a one's complement machine, the above will initialize a to zero.
The thing you should understand is that it's all about values, not bits. The variable is initialized with a value. If in the initializer you modify the bits of the value used for initialization, the resulting value will be generated according to those bits. The value you need, to initialize a to the highest possible value, is UINT_MAX. You can get it with -1 or with UINT_MAX itself. The second way depends on the type of a: you will need ULONG_MAX for an unsigned long. The first way, however, does not depend on the type, and it's a nice way of getting the highest value.
We are not talking about whether -1 has all bits one (it doesn't always). And we're not talking about whether ~0 has all bits one (it does, of course). What we are talking about is the value that the initialized flags variable ends up with. And for that, only -1 will work with every type and on every machine.