When I was taught programming, I learned that a simple 'x % 2' is a quick way to determine whether a number is even. I have since started to use 'x & 1', as I believe it is quicker in the CPU (though given today's speeds, maybe it's pointless).
Can anyone who knows more about CPUs shed some light on whether this is actually quicker, or is there some simple compiler optimisation going on?
C is a compiled language. It is not assembly language.
It's a compiler's job to know at least as much (usually much more) about your machine's instruction set as you do.
If you know that x % 2 and x & 1 are equivalent as tests for evenness, don't you think that's the sort of thing your compiler should know as well?
For any decent compiler, if you write x % 2 and the compiler knows that a bit test will be faster on your machine, it will emit code to perform the bit test all by itself.
The bottom line is that you should write code that expresses your intentions clearly, and let the compiler worry about optimizing it. Only under rare circumstances can you significantly improve the performance of code with micro-optimizations like this. They're worth performing only if you've demonstrated that the code you're trying to optimize is a significant bottleneck, and that the proposed improvement is in fact measurably faster. Otherwise, don't bother: you're probably wasting your time.
But in any case, we can't tell you which way is faster in general, because every machine might be different. To find out for sure, you're going to have to measure it yourself.