Calculating confidence for sample sizes when testing pass/failure

I am designing a system for product safety inspectors. These inspectors go to a shop or warehouse, take a sample of the goods for sale and test them for safety. If they fail the test they are assigned certain failure codes. The system will allow them to store various types of information about the Inspection, including Quantity Inspected and Quantity Failed. It will also calculate optimum Sample Sizes and confidence intervals for these, as well as some kind of confidence data for the detected rate of failure.

I need to confirm that my understanding of this is correct and that the answers my system will give to the users will make sense.

I know how to calculate the confidence interval for a given sample size (assuming arbitrarily large population size and random sampling method):

cc = sqrt( Z^2 * p * (1-p) / ss )

  cc = margin of error (half-width of the confidence interval), expressed as a proportion
  Z = Z value (e.g. 1.96 for 95% confidence)
  p = failure rate (0.5 for worst case scenario)
  ss = sample size (Quantity Inspected)
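For illustration, the formula above can be sketched in Python (a minimal sketch; the function and variable names are my own):

```python
import math

def margin_of_error(ss, p=0.5, z=1.96):
    """Half-width of the confidence interval for a proportion,
    using the normal approximation. Assumes an arbitrarily large
    population and random sampling, as stated above."""
    return z * math.sqrt(p * (1 - p) / ss)

# Worst case (p = 0.5) at 95% confidence:
print(margin_of_error(100))  # ~0.098, i.e. +/- 9.8 percentage points
print(margin_of_error(400))  # ~0.049
```

Note that with p = 0 the margin collapses to 0 regardless of sample size, which is exactly the difficulty raised below.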

What I'm not so sure about is how to measure the confidence that an inspector may have in the detected failure rate *after* the inspection. For example, if no items in the sample fail, how much confidence can he/she have that no other items in the population will fail? If I calculate the confidence interval in this circumstance, with p = 0.0, the answer is always 0 (obviously!) whatever the sample size. However, this can't be true because if I sample only 10 items, surely the confidence will be significantly less than if I had sampled 100 items? Or, is the detected failure rate inconsequential here?

PointyEars Commented:
If no items fail, it means that the failure rate is below 1/(sample size).  With a sample size of 10, the failure rate can only be estimated to be below 0.1.

This in fact represents the "sensitivity" of your measurement.  To have proper estimates of failure rates, you should have a sample large enough to ensure that in most cases there is at least one failure.

On the other hand, you are probably only interested in checking whether the failure rate remains below a maximum determined on the basis of safety laws, quality target, and what-have-you.  Therefore, you might be happy with the smallest sample size which satisfies the following:
1. applicability of statistical criteria (10? 20?)
2. enough spread to have the required maximum above 1 (2? 3?)

I would ask the manufacturer to tell you what failure rate they expect.  If that were not possible, I would start with a sample of 10 and see whether it needs to be increased.  After all, there will certainly be a period of testing, calibration, verification.
jpkemp (Author) Commented:
The kind of failures we're talking about here are serious failures, possibly resulting in a total product recall and/or prosecution of the vendor or supplier. It would not be appropriate or useful to ask the manufacturers what failure rate they expect - the failure rate we expect would normally be zero.
PointyEars Commented:
Even if their target is zero, they should define a limit below which they are satisfied.

In any case, if they set their limit so low, it means that any practicable sample will be comparatively small.  That is, no matter how large you take it, it will still give you a limited confidence.  For example, a sample of 100 with no failures tells you that the failure rate is expected to be below 1%.

If you manufacture telephone exchanges, you might be asked by your customers to ensure a maximum off-line time of 5 minutes per year.  Considering that there are more than half a million minutes in a year, this is almost equivalent to a zero-time failure.  But it is measurable.
jpkemp (Author) Commented:
So, you're saying the confidence interval is 1/ss? At what confidence level is this?

I need more details and a more rigorous mathematical foundation.

PointyEars Commented:
No, 1/ss is not a confidence interval.

Normally, the expected value of an event and the standard deviation of its distribution are estimated from the distribution of events occurring in the sample.  To do so you have to hypothesise a particular distribution (let's say normal/Gaussian).

Then, you can ask yourself: "with this distribution, mean, and standard deviation, what is the range of values that gives me a 95% probability of not being further away from the mean?"  This is your confidence interval.

The problem in your situation is that you have no statistical data to work with.  If all samples always return zero, you have no distribution of results on which you can calculate mean and standard deviation.  Therefore, you can only say that you expect an occurrence below 1 out of ss, but cannot really say with what probability.

Sorry, I have to rush, but I didn't want to leave you hanging.
jpkemp (Author) Commented:
Thanks PointyEars.

I think I'm starting to understand. My calculation of a confidence interval is meaningless with a zero mean failure rate because the formula assumes the rate for the entire population is proportional to the rate for the sample.

Considering the 1/ss formula; if we take a sample of 10 items from a crate of 100 and find zero failures, 1/ss yields 10% (i.e. up to 10 items in the crate might have failed); if a sample of 20 was taken, 1/ss is 5% (i.e. up to 5 items in the crate might have failed). Is this a valid interpretation?
PointyEars Commented:
Basically yes, but not exactly.

If "m" is the expected failure rate and "d" its standard deviation, the formula
  f(n) = (m + 3*d) * n
gives you the maximum number of failures that you would expect on average in a sample of size "n".

If "m" is very low, f(n) will remain below 1, and you will measure zero failures.  Then, you will only know that
  (m + 3*d) * n < 1

So, you cannot say that m < 1/n, but that m + 3*d < 1/n.

If you assume d = sqrt(m) and solve:
  m + 3*sqrt(m) < 1/n
you get (for example, with n = 20):
  m < 0.00027
  d < 0.017

BUT, can you really assume that d = sqrt(m)?

I believe that it is more reasonable to stop at:
  m + 3*d < 1/n
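The inequality with d = sqrt(m) is a quadratic in sqrt(m), so the limiting value can be computed directly.  A Python sketch (function name is my own) reproduces the numbers quoted above for a sample of 20:

```python
import math

def max_rate_with_poisson_sd(n):
    """Largest m satisfying m + 3*sqrt(m) = 1/n, assuming d = sqrt(m).
    Substituting x = sqrt(m) gives x**2 + 3*x - 1/n = 0; take the
    positive root of the quadratic."""
    x = (-3 + math.sqrt(9 + 4 / n)) / 2
    return x * x

m = max_rate_with_poisson_sd(20)
print(round(m, 5))             # 0.00027
print(round(math.sqrt(m), 3))  # 0.017
```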
jpkemp (Author) Commented:
What can m+3*d<1/n tell me when I only know the value of n (>0) and m=0, and how do I use this to measure confidence?
PointyEars Commented:
Consider that you are looking at an upper limit.  Therefore, you can safely say that m < 1/n, because "d" certainly is > 0.

That's how you arrive at what I said in a previous comment: 1/sampleSize = upper limit for the failure rate.

If you think about it, it makes perfectly [common] sense: if you make 10 measurements and none fails, you can just say that not more than 10% of your items fail.  This is obviously only statistically valid if your sample is large enough (10, 20...)
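The 1/sampleSize rule is trivial to encode; a small sketch (names are my own), assuming zero observed failures:

```python
def zero_failure_upper_bound(n):
    """Rough upper limit on the failure rate when a sample of n items
    shows zero failures: the 1/n bound discussed in this thread."""
    if n <= 0:
        raise ValueError("sample size must be positive")
    return 1.0 / n

for n in (10, 20, 100):
    print(n, zero_failure_upper_bound(n))  # 0.1, 0.05, and 0.01
```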
jpkemp (Author) Commented:
Ok, I think that makes sense to me.

Thanks for hanging in there.
Question has a verified solution.