ChemstationsDeveloper
asked on
C++ floating point denormal
I have a large project with numerous floating-point operations which I am converting from Visual C++ 6 to Visual Studio 2010 C++. I've hit a problem with the different way VS2010 handles underflow to denormals.
Consider the following code:
unsigned int cw = _controlfp(0, 0); // returns 589855 in both VC6 and VS2010
float abba = 12.5;
int myin = 0;
for (myin = 0; myin < 100; myin++)
{
    abba = abba / 10.1;
}
Both VS2010 and VC6 behave the same until abba reaches FLT_MIN, the lower bound for a normalized float, around 1.2e-38. After that point, in VC6 abba goes to 1e-45 and then to 0.0000. In VS2010 it becomes a denormal, which the debugger displays as 8e-39#DEN.
500 points to the person who can show me how to get the VC6 behavior back.
Note that the _controlfp(0, 0) value is the same in both VS2010 and VC6: 589855.
ASKER CERTIFIED SOLUTION
Try to use _control87() (or even __control87_2()) instead of _controlfp().
ASKER
thanks!