Because the same number of bits is used to represent all normalized numbers, the smaller the exponent, the more closely spaced the representable numbers are. For example, there are about 8.4 million (2^23) single-precision numbers between 1.0 and 2.0, while there are only about 16 thousand between 1023.0 and 1024.0.
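You can measure this spacing directly. Here is a small C sketch (C rather than the Fortran used below, since the spacing is a property of the IEEE single format itself, not of any one language) that uses nextafterf to find the gap between adjacent floats and from it the count of floats in each unit-wide interval:

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Gap between a float and the next representable float above it. */
    float gap_at_1    = nextafterf(1.0f, 2.0f) - 1.0f;          /* 2^-23 */
    float gap_at_1023 = nextafterf(1023.0f, 1024.0f) - 1023.0f; /* 2^-14 */

    /* Count of representable floats in each unit-wide interval. */
    printf("floats in [1.0, 2.0):       %.0f\n", 1.0 / gap_at_1);    /* 8388608 */
    printf("floats in [1023.0, 1024.0): %.0f\n", 1.0 / gap_at_1023); /* 16384 */
    return 0;
}

The gap at 1023.0 is 2^9 times wider than the gap at 1.0, because 1023.0 carries an exponent of 9 while the mantissa is the same 23 bits in both cases.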

On any computer, mathematically equivalent expressions can produce different results. In this example, Z and Z1 will typically have different values because (1/Y) or 1/7 is not exactly representable in binary floating-point:

      REAL X, Y, Y1, Z, Z1
      DATA X/77777.0/, Y/7.0/
      Y1 = 1.0 / Y
      Z  = X / Y
      Z1 = X * Y1
      IF (Z .NE. Z1) PRINT *, 'Not equal!'
      END
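The same mismatch can be reproduced in C (a sketch whose variable names mirror the Fortran above): the direct division is exact because 77777 = 7 * 11111, while multiplying by the rounded reciprocal of 7 carries a small error:

#include <stdio.h>

int main(void) {
    float x  = 77777.0f, y = 7.0f;
    float y1 = 1.0f / y;  /* 1/7 is not exactly representable; it is rounded */
    float z  = x / y;     /* exactly 11111.0 */
    float z1 = x * y1;    /* magnifies the rounding error in y1 */

    if (z != z1)
        printf("Not equal! z = %.4f, z1 = %.4f\n", z, z1);
    return 0;
}

The error in y1 is tiny (about 6e-9), but multiplying it by 77777 pushes the product past the half-ulp boundary at 11111.0, so z1 rounds to a different float than z.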

A programmer has to keep the inexact nature of floating-point numbers in mind when writing a program: rounding can produce either a negative or a positive zero, and unexpected cases can arise.

The vast majority of problems with the positive and negative zero representations result from a programmer's failure to handle the negative-zero rounding case properly. Such programs work and seem bug-free, yet make mistakes in corner cases that arise infrequently. Intel got burned pretty badly by a rounding problem of this kind (the Pentium FDIV bug), if you recall.
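Here is a minimal C sketch of one such corner case, assuming IEEE 754 semantics: a tiny negative product underflows to negative zero, which compares equal to positive zero yet behaves differently when you divide by it or inspect its sign:

#include <math.h>
#include <stdio.h>

int main(void) {
    /* The true product, -1e-60, is far below the smallest subnormal float,
       so it underflows to zero -- but the sign is preserved. */
    float nz = -1e-30f * 1e-30f;

    printf("nz == 0.0f  : %d\n", nz == 0.0f);       /* 1: compares equal to +0.0 */
    printf("signbit(nz) : %d\n", signbit(nz) != 0); /* 1: the value is -0.0 */
    printf("1.0f / nz   : %f\n", 1.0f / nz);        /* -inf, not +inf */
    return 0;
}

A program that tests nz == 0.0f and assumes the value behaves like positive zero will pass every ordinary test, then misbehave the first time the sign of the zero feeds into a later division or comparison.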

Hope that helps. If you have any further questions, let me know.

RoseFire