Precision of double type variable causing weird decimal issues.

-JDK VERSION 1.3.1_03
-WIN2000 OS

OK,

I have a custom class with a double field. The class also has a method that adds another object of the same class to itself (by adding the two double fields and storing the result in the calling object's field). For each of the numbers involved (see the example below), an object is created, the number is set on the field, and the object is stored in a Hashtable. On each subsequent store, the existing object is removed from the Hashtable, the add method is called on it, and it is put back into the Hashtable. The problem is that when these objects are later used to build a new ArrayList from the Hashtable.values() method, the sum of these numbers carries an extra 0.0000000000002. I iterate through the ArrayList and call the .getDoubleVariable() method on each object. Oddly enough, I get the following value: 1244.8000000000002 (the ArrayList has only one entry in this example).

Example:

AnObject o = new AnObject();
o.setDoubleVariable(x); // x is the next number to accumulate
AnObject o2 = (AnObject) hashtable.get("theobject");
o2.addObjectToThisInstance(o);
hashtable.put("theobject", o2);

Can anyone explain to me why this is happening? The answer I expect is 1244.80. It seems odd that such a small extra decimal value gets thrown in somewhere. This is a pain, since it is obviously not giving me the value I expect.
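For what it's worth, the effect is easy to reproduce without the Hashtable at all; this is a minimal sketch (the values are illustrative, not the poster's actual data):

```java
public class DoubleDrift {
    public static void main(String[] args) {
        // 0.1 and 0.2 have no exact binary representation,
        // so their sum is not exactly 0.3 either
        double sum = 0.1 + 0.2;
        System.out.println(sum);        // prints 0.30000000000000004
        System.out.println(sum == 0.3); // prints false
    }
}
```

The Hashtable and ArrayList are red herrings: the drift comes purely from the repeated double additions.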

Mayank S (Associate Director - Product Engineering) commented:

Floating-point numbers always behave this way. If you have a C compiler, check this out:

#include <stdio.h>

int main ( void )
{
    float a = 0.7 ;

    if ( a < 0.7 )
        printf ( "A is lesser than 0.7" ) ;
    else if ( a == 0.7 )
        printf ( "A is equal to 0.7" ) ;
    else
        printf ( "A is greater than 0.7" ) ;

    return 0 ;
}

The idea is that there are always issues related to precision, type conversion, etc. In this example, consider the 'if' statement: if ( a < 0.7 ). Here 'a' is a 'float', whereas the 0.7 constant is treated as a 'double'. The precision was already lost when 0.7 was truncated to fit into the float; in the comparison, 'a' is implicitly widened back to double, and the widened value no longer matches the double constant 0.7, so this program prints "A is lesser than 0.7". The example prints the expected result if you declare 'a' as a double instead of a float, OR keep it a float and cast the constant to float in each comparison, e.g.: if ( a < ((float) 0.7) )

However, Java is a little stricter and does not allow this at all. If you write:

float a = 0.7 ;

if ( a < 0.7 )

- the declaration itself fails to compile with an error along the lines of "possible loss of precision", because the 0.7 literal is a double and assigning it to the float 'a' would narrow it; you have to write 0.7f explicitly. (The comparison a < 0.7 is legal on its own: 'a' is simply widened to double.)
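A small sketch of the Java behaviour (not from the original post):

```java
public class WideningCompare {
    public static void main(String[] args) {
        // float a = 0.7;    // rejected: possible lossy conversion double -> float
        float a = 0.7f;      // an explicit float literal compiles fine
        System.out.println(a < 0.7);  // prints true: a widens to ~0.699999988
        System.out.println(a == 0.7); // prints false
    }
}
```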

However, in your case, I guess it's again one of the precision-related issues that are always there with float and double values.

OK. Well, firstly, I only need a precision of 2 decimals. In my application I am checking equality against user input that is limited to two digits. When I compare the user entry with the stored total, they are not equal. Furthermore, I don't want to store the value to the database with these extra, unrequired decimal digits.

I called the toFloat() method for the variables and the problem went away, but can this potentially be a problem for the float type also? I'm thinking of going back through my 150-some-odd classes and changing all my double types to float.

I'm glad that a few of you have given me some verbose responses, but they sort of boil down to "That's the way they work". If someone can include a common way to correct this problem (2-decimal precision stays the expected 2-decimal precision), I would be a little more satisfied.
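To answer the float question: float has exactly the same kind of representation error, only with fewer digits, so toString simply rounds it out of sight. Switching types hides the problem rather than fixing it. A small sketch:

```java
public class FloatHidesError {
    public static void main(String[] args) {
        float f = 1244.8f;
        // Float.toString prints only as many digits as a float can
        // distinguish, so the stored value *looks* exact...
        System.out.println(f);                    // prints 1244.8
        // ...but widening to double exposes the value actually stored,
        // which is not exactly 1244.8
        System.out.println((double) f);
        System.out.println((double) f == 1244.8); // prints false
    }
}
```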

The standard tactic for dealing with numbers that require exact precision, and that must avoid the vagaries of floating-point numbers, is to use a scaled integer (fixed-point) representation.

In our software, for instance, we do extensive financial calculations. We must have complete control over the results. A scaled integer is one in which you simply *imagine* the decimal point in the correct place. So, for instance, to represent dollars, we use integers in which we commonly agree that the decimal point occurs before the last two digits. So, to represent 1 dollar, we use the number 100 (i.e. the number of pennies). Integer arithmetic behaves in a straightforward predictable manner.

The extra work here is on input and output. For input you must take the placing of the decimal point into account, and possibly scale the input as necessary; for output you usually have to output the number in two pieces and "manually" insert the decimal point. The work here is normally alleviated by having central input and output routines that are responsible for the necessary manipulations.
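A sketch of the idea in Java (the class and method names are made up for illustration; it assumes non-negative amounts with exactly two decimal places on input):

```java
public class Cents {
    // "622.40" -> 62240 : scale the input on the way in
    public static long parse(String dollars) {
        int dot = dollars.indexOf('.');
        long whole = Long.parseLong(dollars.substring(0, dot));
        long frac = Long.parseLong(dollars.substring(dot + 1));
        return whole * 100 + frac;
    }

    // 124480 -> "1244.80" : "manually" re-insert the decimal point on output
    public static String format(long cents) {
        long frac = cents % 100;
        return (cents / 100) + "." + (frac < 10 ? "0" + frac : "" + frac);
    }

    public static void main(String[] args) {
        // integer addition is exact, so no stray 0.0000000000002 can appear
        long total = Cents.parse("622.40") + Cents.parse("622.40");
        System.out.println(Cents.format(total)); // prints 1244.80
    }
}
```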

P.S. If you need to represent numbers larger than an integer can hold, you can of course use a long in the same manner. If you need even more digits than that (a long holds about 18), *then* I would reach for a BigDecimal kind of solution.
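For completeness, the BigDecimal route looks like this (construct from Strings, not from doubles, or you inherit the binary representation error):

```java
import java.math.BigDecimal;

public class DecimalSum {
    public static void main(String[] args) {
        // decimal arithmetic at a fixed scale of 2
        BigDecimal total = new BigDecimal("622.40")
                .add(new BigDecimal("622.40"));
        System.out.println(total); // prints 1244.80
    }
}
```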

The example in your question is accurate to 2 decimal places.

> I don't want to store the value to the database with these extra unrequired decimal values.

what type are you using in the database?

> If someone can include a common way to correct this problem (2 decimal precision stays as
> the expected 2 decimal precision), I would be a little more satisfied.

Depends on exactly why this is causing a problem.
The obvious solution is: don't use floating-point numbers if you don't like how they behave.

objects, you are credited with 350 points for answering the original question. I appreciate your selection of a link that was targeted to my audience and answered the question appropriately (although I would suggest briefly summarizing a link rather than responding with it alone).

imladris, you get 150 for the best answer to the secondary question of how to deal with it.

I suppose if my education had been in core programming rather than systems analysis I would have been told at some point about these problems with double and float type numbers. (Better to find out now before I start writing those moon landing programs I had planned, lol)