I am used to choosing the smallest data type that fully represents my values while preserving semantics, so I don't use `double` by default.

As a side note: by default, any literal like `2.0` is automatically interpreted as a `double` unless otherwise specified. This could contribute to the consistently higher use of `double` compared with other floating-point representations.
As far as the absence of `single` goes, the `float` keyword translates to the single-precision IEEE 754 type, so the representation exists under a different name.