
Hi Experts,

I am encountering a weird problem with the double datatype in my VC++ application.

For example:

double x = 132.23;

When I inspect what "x" actually holds in the debugger, it shows me "132.22999999999999".

I thought this was understandable: since double stores floating-point numbers in base 2, and 0.23 cannot be represented exactly in base 2, there is some rounding when converting back to decimal, so I see an error of .00000000000001 due to precision limits.

However, when I change it to

double x = 132.16;

x shows as exactly "132.16"!

I don't understand how this is possible, because I thought .16 cannot be represented exactly in base 2 either!

Why do I see this difference? Any help would be greatly appreciated.


A double-precision number uses 1 bit for the sign, 11 bits for the exponent, and 52 bits for the fraction (mantissa). A normal number is described in IEEE 754 by:

(-1)^sign * 2^(exponent - exponent bias) * 1.mantissa

In the case of subnormals (E = 0), the double-precision number is described by:

(-1)^sign * 2^(1 - exponent bias) * 0.mantissa

If you want to compare doubles, or have to output them, you always need to allow for a small rounding difference.

You can find a good description at Wikipedia or http://www.cquestions.com/2009/06/memory-representation-of-double-in-c.html

Sara

I understand that double stores the whole number in base 2 (binary). What I couldn't understand is why, in my VC++ application, 132.23 is shown as 132.22999999999999 while 132.16 is shown as exactly 132.16, when both numbers produce infinitely recurring fractions when converted to base 2.

I used the calculator you provided and this is what it shows:

For 132.23

Decimal Representation 132.23

Binary Representation 01000011000001000011101011100001

Hexadecimal Representation 0x43043ae1

After casting to double precision 132.22999572753906

For 132.16

Decimal Representation 132.16

Binary Representation 01000011000001000010100011110110

Hexadecimal Representation 0x430428f6

After casting to double precision 132.16000366210938

So, when I do the following:

double x = 132.16;

I expected x to show some sign of the inexact base-2 conversion, so I was surprised that it did not.

The issue is that I am saving this value to an Oracle database, and when somebody else pulls the data, they complain about the discrepancy.

Is there a way I can make a double store only 2 decimal digits?

I tried floor(x*100.0 + 0.5)/100.0, but nothing seems to work.

what I couldn't understand is why in my VC++ application 132.23 is shown as 132.22999999999999 while 132.16 is shown as exactly 132.16

132.22999999999999 is not the stored representation but the output (presentation). When converting from base 2 back to base 10, small rounding differences at the end of the precision (that is, after 15-17 decimal digits) are normal and cannot be prevented in principle. Only the output can be made right, by rounding explicitly:

```
#include <iostream>
#include <iomanip>

std::cout << std::fixed << std::setprecision(2) << mydouble; // two digits after the decimal point
```

Sara

Sorry about the naive terminology I used when I said "representation"; essentially, I meant the output of the double variable.

So, even though neither 132.23 nor 132.16 can be stored exactly in base 2, do you think the reason I see the output "132.16" in the latter case but not in the former is the unavoidable rounding that happens when converting back to base 10 for display?

Thanks,

Question has a verified solution.
