I have a small problem when trying to cast a double precision number to an integer:
When I input a double into my program (it's a command line program) I want to immediately multiply it by 100 and convert it to an integer. The user should always be inputting numbers with no more than 2 decimal places, since it will always be an amount of money.
This works for most numbers, but certain values such as 19.99 get converted to 1998 (or 1998.99999999999 when I output the multiplied double).
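For reference, here's a minimal example that reproduces what I'm seeing (I'm assuming C here, but the same thing happens with a double in any language):

```c
#include <stdio.h>

int main(void) {
    double amount = 19.99;          /* value as typed in by the user */
    double scaled = amount * 100.0; /* 19.99 has no exact binary representation,
                                       so this is roughly 1998.99999999999977,
                                       not exactly 1999 */
    int cents = (int)scaled;        /* the cast truncates toward zero -> 1998 */

    printf("scaled = %.13f\n", scaled);
    printf("cents  = %d\n", cents);
    return 0;
}
```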
I was just wondering if there was a simple fix for this.