average = A

sample = x

deviation = x - A

square of the deviation = (x - A)^2

N = number of samples

mean of the squared deviations = Sum((x - A)^2) / N

standard deviation = Sqrt(Sum((x - A)^2) / N)
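The formulas above can be sketched in Python (a minimal sketch; the function name, the `ddof` parameter, and the sample data are my own, not from any particular library):

```python
import math

def std_dev(samples, ddof=0):
    """Standard deviation of samples.
    ddof=0 divides by N (the formula above);
    ddof=1 divides by N-1 (the unbiased variance estimate
    discussed below)."""
    N = len(samples)
    A = sum(samples) / N                         # average = A
    sq_dev = sum((x - A) ** 2 for x in samples)  # Sum((x - A)^2)
    return math.sqrt(sq_dev / (N - ddof))

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(std_dev(data))          # divides by N   -> 2.0
print(std_dev(data, ddof=1))  # divides by N-1 -> about 2.138
```

Python's standard library has the same split built in: `statistics.pstdev` divides by N and `statistics.stdev` divides by N-1.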

That's all I know. And that is what I found on a web page:

"You use N-1 if you want the estimate to be unbiased." And the definition of bias is:

"A statistic is biased if, in the long run, it consistently over or underestimates the parameter it is estimating. More technically it is biased if its expected value is not equal to the parameter. A stop watch that is a little bit fast gives biased estimates of elapsed time. Bias in this sense is different from the notion of a biased sample. A statistic is positively biased if it tends to overestimate the parameter; a statistic is negatively biased if it tends to underestimate the parameter. An unbiased statistic is not necessarily an accurate statistic. If a statistic is sometimes much too high and sometimes much too low, it can still be unbiased. It would be very imprecise, however. A slightly biased statistic that systematically results in very small overestimates of a parameter could be quite efficient."

Hope it helps. I didn't understand it very well myself...