Philip - 3 months ago
C# Question

Any reason to prefer single precision to double precision data type?

I am used to choosing the smallest data type needed to fully represent my values while preserving semantics. I don't use long when int is guaranteed to suffice. Same for int vs short.

But for real numbers, in C# there is the commonly used double -- and no corresponding single or float. I can still use System.Single, but I wonder why C# didn't bother to make it into a language keyword like they did with double.

In contrast, there are language keywords short, int, long, ushort, uint, and ulong.

So, is this a signal to developers that single-precision is antiquated, deprecated, or should otherwise not be used in favor of double or decimal?

(Needless to say, single-precision has the downside of less precision. That's a well-known tradeoff for smaller size, so let's not focus on that.)

Edit: My apologies, I mistakenly thought that float isn't a keyword in C#. But it is, which renders this question moot.

Answer

By default, a real literal such as 2.0 is interpreted as a double unless a suffix says otherwise (f for float, m for decimal). This likely contributes to double being used more often than the other floating-point representations. Just a side note.
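
A short sketch (the variable names are purely illustrative) showing how the literal suffixes pick the type:

using System;

// Untyped real literals default to double; suffixes select float or decimal.
double d = 2.0;     // no suffix: the literal is a double
float f = 2.0f;     // 'f' / 'F' suffix: float (System.Single)
decimal m = 2.0m;   // 'm' / 'M' suffix: decimal

// float bad = 2.0; // does not compile: no implicit double-to-float conversion

Console.WriteLine($"{d.GetType()} {f.GetType()} {m.GetType()}");
// prints: System.Double System.Single System.Decimal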

As for the absence of single: the float keyword is simply an alias for the System.Single type, just as double aliases System.Double.
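
A minimal top-level program (names here are only for illustration) that confirms the alias at runtime:

using System;

// float is the C# alias for System.Single, just as double aliases System.Double.
Console.WriteLine(typeof(float) == typeof(Single));   // True
Console.WriteLine(typeof(float).FullName);            // System.Single

Single s = 1.5f;   // interchangeable with: float s = 1.5f;
float f = s;       // same type, no conversion involved
Console.WriteLine(f);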
