Precision and Scale in Real and Double
Posted on 2007-07-26
I am hoping someone can explain the REAL and DOUBLE/FLOAT datatypes to me in terms of precision and scale. Precision is defined as the total number of digits in the number, and scale as the number of digits to the right of the decimal point.
Does REAL (defined as a single-precision floating-point number in the DB2 docs) have only a single digit to the left of the decimal point?
Does DOUBLE/FLOAT (defined as a double-precision floating-point number) have only two digits to the left of the decimal point?
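For context on where my confusion comes from: if "single precision" limited the integer part to one digit, a value like 1234567.891 could not be stored at all. My understanding is that IEEE 754 single precision instead carries roughly 7 significant decimal digits in total, wherever the decimal point falls. This Python sketch (not DB2 itself, just the same 32-bit and 64-bit formats, using the standard `struct` module) is what I used to poke at that:

```python
import struct

value = 1234567.891  # more significant digits than single precision can keep

# Round-trip through a 32-bit (single-precision) float, like REAL.
single = struct.unpack('f', struct.pack('f', value))[0]

# Round-trip through a 64-bit (double-precision) float, like DOUBLE/FLOAT.
double = struct.unpack('d', struct.pack('d', value))[0]

print(single)  # about 1234567.875 -- only the first ~7 digits survive
print(double)  # 1234567.891 -- ~15-16 significant digits, value intact
```

So the single-precision round trip keeps the full integer part but drifts after about the seventh significant digit, which suggests the limit is on total significant digits rather than digits to the left of the decimal point.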