In the standard deviation formula we sometimes divide by N and sometimes by N-1,
where N is the number of data points.
Somewhere I read that using N or N-1 makes no practical difference for large datasets,
but for small samples (say, fewer than 20 data points) dividing by N gives
a biased estimate of the variance, while dividing by N-1 gives an unbiased one.
Can someone explain, ideally with an example, how subtracting 1 helps?
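To make the question concrete, here is a small simulation sketch (my own illustration, not from any textbook): draw many samples of size N = 5 from a normal distribution with known variance 1, compute the sum of squared deviations from the *sample* mean, and average the two estimators over all trials. If the bias claim is right, the divide-by-N version should settle near (N-1)/N = 0.8 rather than 1.

```python
import random

random.seed(0)

mu, sigma = 0.0, 1.0   # true mean and std. dev. of the population
n = 5                  # small sample size
trials = 100_000       # number of repeated samples

biased_sum = 0.0       # accumulates the divide-by-N estimates
unbiased_sum = 0.0     # accumulates the divide-by-(N-1) estimates

for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    mean = sum(sample) / n
    ss = sum((x - mean) ** 2 for x in sample)  # squared deviations from sample mean
    biased_sum += ss / n          # divide by N
    unbiased_sum += ss / (n - 1)  # divide by N-1 (Bessel's correction)

print(biased_sum / trials)    # close to 0.8, i.e. sigma^2 * (n-1)/n
print(unbiased_sum / trials)  # close to 1.0, i.e. the true sigma^2
```

The intuition the simulation exposes: the squared deviations are measured from the sample mean, which is fit to the same data and so sits "too close" to the points; dividing by N-1 instead of N compensates for exactly that shortfall.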