I have a VB.NET program that reads data values from a text file and stores them in Double variables. What I've discovered is that those values pick up some spurious extra digits along the way. For example, a value that is 0.5 in the text file might turn into something like 0.499999900000000444. This is very annoying, because the error can accumulate into serious discrepancies when I do math on these values.
If I just assign 0.5 to a Double variable directly in code, it stays 0.5. I'm just not sure how to stop VB.NET from tacking on these fake digits when it reads the values in from an external source.
I suppose I could round everything to a fixed precision, but some of the values are genuinely very small, so that's not a good option. What I'm really asking is whether there's a compiler option or setting somewhere that makes VB.NET take the values in as-is instead of indulging in its own interpretation.
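For reference, here is a stripped-down sketch of the kind of code I'm using. The file name is a placeholder, and I'm assuming the parsing happens via Double.Parse, which is where I think the extra digits appear:

```vbnet
Imports System.Globalization

Module ReadValues
    Sub Main()
        ' "values.txt" is a placeholder name; one numeric value per line.
        For Each line As String In IO.File.ReadLines("values.txt")
            Dim v As Double = Double.Parse(line, CultureInfo.InvariantCulture)
            ' The "R" (round-trip) format shows the value actually stored
            ' in the Double, including any extra digits.
            Console.WriteLine(v.ToString("R"))
        Next
    End Sub
End Module
```

Is there some option I can pass to Double.Parse (or set project-wide) so the stored value matches the text exactly?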