
Biased/unbiased Standard Deviation

In the standard deviation formula we sometimes divide by N and sometimes by N-1,
where N = number of data points.

Somewhere I read that choosing 'N' or 'N-1' makes little difference for large datasets,
but when we calculate the std. dev. for fewer than 20 data points, dividing by 'N' gives
a biased estimate and 'N-1' gives an unbiased estimate.

Can someone explain with an example how subtracting 1 helps?
Asked by: prashant_n_mhatre
1 Solution
 
acerola commented:
Standard deviation is the square root of the mean of the squared deviations:

average = A

sample = x

deviation = x-A

square of deviation = (x-A)^2

mean of the square of the deviation = Sum((x-A)^2) / N

N = number of samples.

standard deviation = Sqrt(Sum((x-A)^2) / N)
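For illustration, here is a minimal Python sketch of the steps above, computed with both divisors (the data values are just made up):

import math

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # made-up sample
N = len(data)
A = sum(data) / N                                  # average
ss = sum((x - A) ** 2 for x in data)               # Sum((x-A)^2)

std_n = math.sqrt(ss / N)          # divide by N   (the "biased" form)
std_n1 = math.sqrt(ss / (N - 1))   # divide by N-1 (the "unbiased" form)

print(std_n, std_n1)               # the N-1 version is always a bit larger

Both use the same sum of squared deviations; the only difference is the divisor.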

That's all I know. Here is what I found on a web page:

"You use the N-1 if the estimate is unbiased". And the definition of bias is:

"A statistic is biased if, in the long run, it consistently over or underestimates the parameter it is estimating. More technically it is biased if its expected value is not equal to the parameter. A stop watch that is a little bit fast gives biased estimates of elapsed time. Bias in this sense is different from the notion of a biased sample. A statistic is positively biased if it tends to overestimate the parameter; a statistic is negatively biased if it tends to underestimate the parameter. An unbiased statistic is not necessarily an accurate statistic. If a statistic is sometimes much too high and sometimes much too low, it can still be unbiased. It would be very imprecise, however. A slightly biased statistic that systematically results in very small overestimates of a parameter could be quite efficient."

Hope it helps. I didn't get it very well...
 
gd2000 commented:
Okay - too long since I've done this stuff - but I can tell you for definite that you can derive the formula for standard deviation from a method called the Maximum Likelihood Estimator. This is essentially a (quite complex) method which will give you an estimator for a statistic of your data. Because it is complex, it can be difficult to solve for some statistics, but it is (relatively) easy for the mean and variance. As part of the derivation it can be found that while dividing by N gives an unbiased estimator for a population, it gives a biased estimator for a sample. Dividing by N - 1 solves the problem for a sample. If you really want, I can try to dig out some links for MLE, but quite honestly the logic ain't easy! Essentially in the calculation of an MLE there is also a bias element. You can trade off bias for accuracy (if memory serves).
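As a rough sketch from memory (assuming normally distributed data, the usual textbook case): the MLE maximizes the Gaussian log-likelihood

\ell(\mu, \sigma^2) = -\frac{N}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{N}(x_i - \mu)^2 ,

and setting the derivatives to zero gives \hat{\mu} = \bar{x} and \hat{\sigma}^2 = \frac{1}{N}\sum_{i=1}^{N}(x_i - \bar{x})^2, i.e. the N-divisor form. That estimator is only asymptotically unbiased, which is where the N-1 correction for finite samples comes from.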

I'm sorry the explanation isn't a simple one - but it's the best I can do without trying to relearn my college notes on the topic (and that's not worth 1000 points!!!).
 
prashant_n_mhatre (Author) commented:
It is still not fully clear to me... let us keep this question open for a few days!
 
gd2000 commented:
Try the following links. They probably won't explain things in completely clear terms (mainly because I'm not sure a wholly accurate explanation can be put in lay terms), but at least they will give you something to chew on (if you want to delve in further!).

http://www.asp.ucar.edu/colloquium/1992/notes/part1/node21.html (actually explains the reasons)

MLE for the mean: http://www.esg.montana.edu/eguchi/Biol504Fall2000/MaxLikelihoodsummary.pdf

Introduction to MLE: http://socserv.socsci.mcmaster.ca/jfox/Courses/soc740/MLE.pdf

The final link says that an MLE is "asymptotically unbiased - but may be biased in finite samples". This is essentially the reason: you need to multiply by n / (n - 1) to make the variance estimator unbiased for samples, and you can see this from the bias term.
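If you want to see that bias numerically rather than take my word for it, here is a rough Python simulation (the population values below are made up): averaging many N-divisor variance estimates comes out near (N-1)/N * sigma^2, while the N-1 divisor comes out near sigma^2.

import random

random.seed(0)
sigma2 = 4.0      # made-up true population variance
N = 10            # small sample size, where the bias is visible
trials = 100000

avg_n = 0.0
avg_n1 = 0.0
for _ in range(trials):
    xs = [random.gauss(0.0, sigma2 ** 0.5) for _ in range(N)]
    m = sum(xs) / N
    ss = sum((x - m) ** 2 for x in xs)
    avg_n += ss / N / trials          # divide by N
    avg_n1 += ss / (N - 1) / trials   # divide by N-1

print(avg_n)    # roughly 3.6, i.e. (N-1)/N * sigma2  -> biased low
print(avg_n1)   # roughly 4.0, i.e. sigma2            -> unbiased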
 
prashant_n_mhatre (Author) commented:
Thank you all...

The reason is explained very well in the book "Statistics In Plain English". Yep, it has to do with 'sample' and 'population'. Generally the standard deviation calculated from a sample is lower than that of the population; dividing by N-1 instead of N compensates for that. Dividing by 1000 or 999 doesn't make much difference... but dividing by 10 versus 9 does...
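Just to put numbers on how fast that correction shrinks (my own quick check in Python, nothing from the book): switching the divisor from N to N-1 multiplies the variance by N/(N-1), and the standard deviation by its square root.

for N in (10, 20, 100, 1000):
    print(N, N / (N - 1), (N / (N - 1)) ** 0.5)
# N=10 -> 1.111 / 1.054, N=1000 -> 1.001 / 1.0005 (barely any difference)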

I had a look at Maximum Likelihood... I studied it long ago... The links are very useful.
