GlobaLevel asked:

MS SQL Server 2005 data casting

What's the difference between casting to float, decimal, and int?

If I have a value of '-2.5' that I pull from a table as an nvarchar, how can I cast it so I can do mathematical calculations on it?
ASKER CERTIFIED SOLUTION from strickdd (United States of America)
(The accepted solution text is available to members only.)
>floating point arithmetic is most commonly used for accounting and financial data. Decimal is used when precision isn't quite as important.

That's backwards, my friend.


Ints are exact, but they do not carry decimal places.
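For example, a quick sketch of what SQL Server does with the fraction when you cast to int (it truncates toward zero rather than rounding):

    SELECT CAST(-2.5 AS int);    -- returns -2; the fraction is dropped
    SELECT CAST(2.9 AS int);     -- returns 2, not 3

Note that casting the nvarchar '-2.5' straight to int fails with a conversion error, because the string contains a decimal point.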

Like ints, decimals are also exact, but they can carry as many decimal places as the precision and scale given in the declaration. Decimals are excellent for accounting and financial data, where decimal places are needed and accuracy is critical.
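To answer the original question directly: cast the nvarchar to decimal and the math works. A minimal sketch, where decimal(10,2) is just an assumed precision/scale and YourColumn/YourTable are hypothetical names:

    SELECT CAST(N'-2.5' AS decimal(10,2)) * 2;    -- returns -5.00 exactly

    -- the same cast applied to a column pulled from a table
    SELECT CAST(YourColumn AS decimal(10,2)) * 2
    FROM   YourTable;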

Floats are approximations, meaning not all numbers can be represented exactly. They can have high precision (lots of significant digits) using less space than decimals, so you trade exactness for space. Floats are useful for very large or very small numbers where exactness is not critical.
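You can see the approximation directly (the e0 suffix makes the literals float; without it T-SQL treats them as decimal):

    -- float: 0.1 and 0.2 have no exact binary representation
    SELECT CASE WHEN 0.1e0 + 0.2e0 = 0.3e0
                THEN 'equal' ELSE 'not equal' END;   -- 'not equal'

    -- decimal: the same values compare exactly
    SELECT CASE WHEN 0.1 + 0.2 = 0.3
                THEN 'equal' ELSE 'not equal' END;   -- 'equal'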