JCisar - 4 months ago
C# Question

What does the "M" stand for in decimal value assignment?

MSDN says:

"Without the suffix m, the number is treated as a double, thus generating a compiler error."

What does the "M" in:

decimal current = 10.99M;

stand for?

Is it any different from:

decimal current = (decimal)10.99


The M suffix marks the literal as a decimal, rather than a double, in source code.
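That is what the MSDN quote means: without the suffix the literal is a double, and a double cannot be implicitly converted to decimal. A minimal illustration (CS0664 is the error code current compilers report for this; the conversion of the cast form is discussed below):

```csharp
using System;

class SuffixDemo
{
    static void Main()
    {
        decimal a = 10.99M;          // OK: M makes this a decimal literal
        // decimal b = 10.99;        // compile error CS0664: no implicit double -> decimal
        decimal c = (decimal)10.99;  // compiles: a double literal, explicitly cast

        Console.WriteLine(a);
        Console.WriteLine(c);
    }
}
```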

To answer the second part of your question, yes they are different.

decimal current = (decimal)10.99

is the same as

double tmp = 10.99;
decimal current = (decimal)tmp;

For values that survive the round trip through double unchanged this should not be a problem, but if you mean decimal you should specify decimal.


Update:

Wow, I was wrong. I went to check the IL to prove my point, and the compiler optimized the cast away.

Update 2:

I was right after all! You still need to be careful. Compare the output of these two functions.

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine(Test1()); // 10.999999999999999999999
        Console.WriteLine(Test2()); // 11
    }

    static decimal Test1() { return 10.999999999999999999999M; }
    static decimal Test2() { return (decimal)10.999999999999999999999; }
}

The first returns 10.999999999999999999999, but the second returns 11, because the double literal rounds to exactly 11 before the cast ever runs.

Just as a side note: double gives you about 15 decimal digits of precision, while decimal gives you 96 bits of precision with a scaling factor from 0 to 28. So decimal can represent any number of the form (-2^96 to 2^96) / 10^(0 to 28).
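A quick sketch of what that extra precision buys you in practice, using the classic 0.1 + 0.2 case (decimal stores base-10 literals exactly, double does not):

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        double d = 0.1 + 0.2;      // binary floating point: close to, but not exactly, 0.3
        decimal m = 0.1M + 0.2M;   // base-10 decimal: exactly 0.3

        Console.WriteLine(d == 0.3);   // False
        Console.WriteLine(m == 0.3M);  // True
    }
}
```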