Calculating confidence for sample sizes when testing pass/fail
Posted on 2004-04-30
I am designing a system for product safety inspectors. These inspectors go to a shop or warehouse, take a sample of the goods for sale, and test them for safety. Items that fail the test are assigned failure codes. The system will allow inspectors to store various types of information about the Inspection, including Quantity Inspected and Quantity Failed. It will also calculate optimum Sample Sizes and confidence intervals for these, as well as some kind of confidence measure for the detected failure rate.
I need to confirm that my understanding of this is correct and that the answers my system will give to the users will make sense.
I know how to calculate the confidence interval for a given sample size (assuming arbitrarily large population size and random sampling method):
cc = sqrt( Z^2 * p * (1-p) / ss )
cc = half-width of the confidence interval (margin of error), expressed as a proportion
Z = Z value (e.g. 1.96 for 95% confidence)
p = failure rate (0.5 for worst case scenario)
ss = sample size (Quantity Inspected)
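In case it helps to see it concretely, the formula above translates directly into a short Python sketch (the function name and defaults are mine, not part of my system):

```python
import math

def margin_of_error(ss, p=0.5, z=1.96):
    """Half-width of the normal-approximation confidence interval
    for a proportion: sqrt(z^2 * p * (1 - p) / ss).

    ss -- sample size (Quantity Inspected)
    p  -- failure rate (0.5 for the worst-case scenario)
    z  -- Z value (1.96 for 95% confidence)
    """
    return math.sqrt(z ** 2 * p * (1 - p) / ss)

# Worst case (p = 0.5), 95% confidence, 100 items sampled:
print(round(margin_of_error(100), 3))  # 0.098, i.e. roughly +/-10 points
```

So with 100 items inspected, the detected failure rate carries a margin of about plus or minus 9.8 percentage points at 95% confidence.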
What I'm not so sure about is how to measure the confidence that an inspector may have in the detected failure rate *after* the inspection. For example, if no items in the sample fail, how much confidence can he/she have that no other items in the population will fail? If I calculate the confidence interval in this circumstance, with p = 0.0, the answer is always 0 (obviously!) whatever the sample size. However, this can't be true because if I sample only 10 items, surely the confidence will be significantly less than if I had sampled 100 items? Or, is the detected failure rate inconsequential here?
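The degenerate case I'm describing is easy to reproduce with the same formula (a minimal sketch, assuming the normal-approximation interval above; the function name is mine):

```python
import math

def margin_of_error(ss, p, z=1.96):
    # Normal-approximation margin of error: sqrt(z^2 * p * (1 - p) / ss)
    return math.sqrt(z ** 2 * p * (1 - p) / ss)

# With a detected failure rate of zero, the interval collapses to zero
# width regardless of how many items were inspected:
for ss in (10, 100, 1000):
    print(ss, margin_of_error(ss, p=0.0))  # 0.0 every time
```

This is exactly the result that seems wrong to me: the computed confidence is identical whether I sample 10 items or 1000.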