MS SQL Server 2005 data casting

What's the difference between casting to float, decimal, and int?

If I have a value of '-2.5' and I pull it in from a table as nvarchar, how can I cast it so I can do mathematical calculations on it?
GlobaLevelProgrammer Asked:
 
strickdd Commented:
floating point arithmetic is most commonly used for accounting and financial data. Decimal is used when precision isn't quite as important. The integer type is for when you don't want to worry about decimal points at all.

In most cases, a decimal will suffice for "normal" calculations, so you can simply do:

DECLARE @myDecimal AS DECIMAL(18,5)
SET @myDecimal = CAST(@myNVarchar AS DECIMAL(18,5))
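
To do the same against a table, a query along these lines should work (dbo.MyTable and ValueText are placeholder names, not from your schema):

SELECT ValueText,
       CAST(ValueText AS DECIMAL(18,5)) * 2 AS Doubled,   -- '-2.5' -> -5.00000
       CAST(ValueText AS DECIMAL(18,5)) + 1 AS PlusOne    -- '-2.5' -> -1.50000
FROM dbo.MyTable

If some rows might hold non-numeric text, you may want to filter with WHERE ISNUMERIC(ValueText) = 1 first, since a single failed CAST aborts the whole query.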
 
dqmq Commented:
>floating point arithmetic is most commonly used for accounting and financial data. Decimal is used when precision isn't quite as important.

That's backwards, my friend.


Ints are exact, but they do not carry decimal places.

Like ints, decimals are also exact, but they can carry fractional digits up to the precision and scale given in the declaration. Decimals are excellent for accounting and financial data, where decimal places are needed and accuracy is critical.

Floats are approximations, meaning not all numbers can be represented exactly. They can hold high precision (many significant digits) while using less space than decimals, so you trade accuracy for space. Floats are useful for very large or very small numbers where exactness is not critical.
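
If you want to see the approximation in action, here is a small sketch (the variable names are just illustrative) comparing FLOAT and DECIMAL:

DECLARE @f FLOAT
DECLARE @d DECIMAL(18,5)
SET @f = 0.1
SET @d = 0.1

-- 0.1 has no exact binary representation, so the FLOAT sum drifts slightly,
-- while the DECIMAL sum stays exact.
SELECT CASE WHEN @f + @f + @f = 0.3 THEN 'equal' ELSE 'not equal' END AS FloatCheck,    -- 'not equal'
       CASE WHEN @d + @d + @d = 0.3 THEN 'equal' ELSE 'not equal' END AS DecimalCheck   -- 'equal'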