I'm trying to understand a C program that includes a header file containing the line

```c
#define random ((float) rand() / (float)((1 << 31) - 1))
```

Is this supposed to produce a random float between 0 and 1?
Ostensibly, yes. But it's wrong in two principal ways:

1. Use `RAND_MAX` instead — that's what it's there for, and it might be much smaller than `(1 << 31) - 1`.
2. `1 << 31` gives you undefined behaviour on any platform where `int` is 32 bits or narrower, which is remarkably common. Don't do that!
Note that if you never want to produce the value 1 exactly (as is often the case), use `RAND_MAX + 1.0` in the denominator. The `1.0` forces the addition to be evaluated in floating point; if you write `RAND_MAX + 1` instead, you risk overflowing an integral type (e.g. on platforms where `RAND_MAX == INT_MAX`).