C++ floating point denormal
Posted on 2011-05-13
I have a large project with numerous floating point operations which I am converting from Visual C++ 6 to Visual Studio 2010 C++. I've hit a problem with the different way VS2010 handles underflow to denormal (subnormal) values.
Consider the following code:
unsigned int cw = _controlfp(0, 0); // returns 589855 in both VC6 and VS2010
float abba = 1.0f;                  // starting value shown here for illustration
for (int myin = 0; myin < 100; myin++)
    abba = abba / 10.1;
Both VS2010 and VC6 behave the same until abba reaches FLT_MIN, the lower bound for a normalized float, around 1e-38. Past that point, in VC6 abba goes to 1e-45 and then to 0.0000. In VS2010 it becomes a garbage-looking 8e-39#DEN value.
500 points to the person who can show me how to get the VC6 behavior back.
Note that _controlfp(0, 0) returns the same control word, 589855, in both VS2010 and VC6.