Solved

# How to set double precision?

Posted on 2008-06-21
My question is fairly simple. I just wrote code that multiplies two doubles, say 3.13 and 3.13, and as a result I get 9.7968999999999991 instead of 9.7969. Why do I get those unwanted extra digits, such as the trailing ...99999991? These extra digits propagate errors into the following calculations.

Could you suggest a solution, please?
Question by:Lexiks

Expert Comment

ID: 21837310
Hi,
See if you can use BCD (binary-coded decimal) types instead of doubles. Doubles calculate in radix 2 (binary), which doesn't always convert to/from radix 10 (decimal) exactly.

Jim
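C# has no dedicated BCD type, but the built-in decimal type plays the same role: it stores values in base 10, so a literal like 3.13 is represented exactly. A minimal sketch of the difference (class and variable names are illustrative):

```csharp
using System;

class DecimalVsDouble
{
    static void Main()
    {
        // decimal works in base 10, so 3.13 is stored exactly
        decimal a = 3.13m;
        Console.WriteLine(a * a);   // 9.7969, no rounding residue

        // double works in base 2, where 3.13 has no exact representation;
        // "G17" forces enough digits to expose the residue
        double x = 3.13;
        Console.WriteLine((x * x).ToString("G17"));
    }
}
```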

Expert Comment

ID: 21837314
It's a system bug; you can use the ToString function to format the decimal places, like this:
```csharp
double number = 9.7968999999999991;

Console.WriteLine(number.ToString("#0.0000"));
```
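Note that formatting only changes the displayed string; the stored double still carries the residue. If a rounded value is needed for further calculations, Math.Round returns a new double close to the rounded result (a sketch; the variable names are illustrative):

```csharp
using System;

class RoundSketch
{
    static void Main()
    {
        double number = 9.7968999999999991;

        // Display-only rounding: the underlying value is unchanged
        Console.WriteLine(number.ToString("#0.0000"));   // 9.7969

        // Value rounding: produces the double nearest to 9.7969
        double rounded = Math.Round(number, 4);
        Console.WriteLine(rounded.ToString("#0.0000"));  // 9.7969
    }
}
```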

Expert Comment

ID: 21837335
Hi,
I had a chance to look it up in the C# documentation. Have a look at the "Decimal Structure" topic in the compiler help; the example there should be helpful.
Jim

Accepted Solution

photowhiz earned 500 total points
ID: 21839343
It is not a "bug"; it is a fundamental property of the way computers represent fractions. If you want to maintain a fixed number of decimal places, use the decimal type. Note that decimal cannot represent every fraction exactly either: 1/3, for example, has no finite decimal expansion.
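The two behaviors can be seen side by side: a decimal fraction such as 3.13 is exact in decimal, while a fraction with no finite decimal expansion, such as 1/3, still rounds (a sketch):

```csharp
using System;

class DecimalLimits
{
    static void Main()
    {
        // Exact: 9.7969 fits comfortably in decimal's ~28 significant digits
        Console.WriteLine(3.13m * 3.13m);   // 9.7969

        // Not exact: 1/3 is truncated to a finite string of 3s,
        // so multiplying back by 3 gives 0.999..., not 1
        decimal third = 1m / 3m;
        Console.WriteLine(third * 3m);
    }
}
```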


Author Comment

ID: 21840097
Yeah photowhiz, that's right. I kept searching on my own and found the same thing you are saying. OK, thanks all...
