We are willing to accept a maximum 1% chance that, in a run of 10,000 units, there will be at least one item that is bad.
That is for every 1,000,000 units produced, 990,000 in 99 batches will be within tolerance, and of the remaining units in 1 batch 1 (or more) could be out of tolerance.
For a single out-of-tolerance item the probability would be 1E-6 (1.0 × 10^-6, or 1 in a million).
Assuming independence between units **, the joint probability of any two particular units both being defective is of the order of 1E-12, which is too small to make any change to the solution.
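That arithmetic is easy to check directly. A minimal sketch, assuming a per-unit defect probability of 1E-6 and independence between units:

```python
# Check that a per-unit defect probability of 1e-6 reproduces the
# ~1% chance of at least one bad item in a 10,000-unit run.
p = 1e-6          # per-unit probability of an out-of-tolerance item (assumption)
n = 10_000        # units per run

# P(at least one defective) = 1 - P(no defectives)
p_at_least_one = 1 - (1 - p) ** n

print(p_at_least_one)  # ~0.00995, just under the 1% target
```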
The solution settles down to a question about relative costs. First there is the cost of measurement. Next, when the testing detects that the production process is going out of tolerance there is a cost to manufacturing of stopping production and re-adjusting the machinery.
Those costs need to be balanced against the cost of exceeding your target of no more than 1 batch in 100 containing 1 (or more) defectives. You might consider this cost infinite, but if it were truly infinite then the solution would be to examine and measure 990,000 items in each 1,000,000!
Do you have an idea of these (relative) costs?
From the manufacturing perspective, is there information about the "speed" at which the mean moves away from 100 mm when the process starts to go wrong? For example, your engineers may be able to say that the mean will not move faster than 0.001 mm per item.
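A drift bound like that translates directly into a sampling and recalibration budget. A minimal sketch, assuming a hypothetical tolerance margin of 0.05 mm (the thread never states the actual band):

```python
def items_until_out_of_tolerance(margin_mm, max_drift_per_item_mm):
    """Worst-case number of items before a drifting mean eats the tolerance margin."""
    return margin_mm / max_drift_per_item_mm

# Assumed numbers: 0.05 mm margin, drift bounded at 0.001 mm per item.
print(items_until_out_of_tolerance(0.05, 0.001))  # 50.0 -> sample well before 50 items
```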
There is a big body of statistical work on process control as well as on sampling of batches for defectives. The process control work develops "charts" where you constantly plot the data as sampled against time. You pick a formula (based on cumulative sums and/or standard deviations) to detect the "out of control" situation.
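One common formula of the cumulative-sum kind can be sketched as below; the target, sigma, and the standard k = 0.5σ / h = 5σ tuning values are illustrative assumptions, not values from this thread:

```python
# One-sided CUSUM sketch for detecting a drifting mean.
# All numeric values here are illustrative assumptions.
TARGET = 100.0        # nominal length, mm
SIGMA = 0.01          # assumed in-control standard deviation, mm
K = 0.5 * SIGMA       # reference value (half the shift worth detecting)
H = 5.0 * SIGMA       # decision interval

def cusum_alarm(measurements):
    """Return the index of the first out-of-control signal, or None."""
    s_hi = s_lo = 0.0
    for i, x in enumerate(measurements):
        s_hi = max(0.0, s_hi + (x - TARGET) - K)  # accumulates upward drift
        s_lo = max(0.0, s_lo + (TARGET - x) - K)  # accumulates downward drift
        if s_hi > H or s_lo > H:
            return i
    return None

# An in-control stream raises no alarm; a slow upward drift is caught quickly.
print(cusum_alarm([100.0] * 50))                             # None
print(cusum_alarm([100.0 + 0.002 * i for i in range(100)]))  # 10
```

The reference value K makes the chart ignore ordinary noise, while the running sums accumulate any persistent one-sided error, which is exactly the drifting-mean situation described above.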
Ian
** Note on independence: When the process is "in control" it is reasonable to assume that the variation from the mean of 100 mm is independent from unit to unit. However, when the process is straying from the 100 mm value, the likelihood of the next unit having a positive (or negative) error will be related to the size and direction of the error on the current unit. That is, errors will not be independent.
You need standard deviation information.
Measure it.
If the sd is 3 mm you are in deep trouble.
If the sd is 0.00003 mm you are in great shape.
Find out how fast the 100 mm mean shifts and time your recalibrations accordingly.
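The sd matters because, under a normality assumption, it fixes the defect probability for a given tolerance band. A minimal sketch, with the ±0.05 mm tolerance chosen purely as an illustrative assumption:

```python
import math

def defect_probability(tolerance_mm, sd_mm, mean_offset_mm=0.0):
    """P(|X - nominal| > tolerance) for X ~ Normal(nominal + mean_offset, sd)."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    z_hi = (tolerance_mm - mean_offset_mm) / sd_mm
    z_lo = (-tolerance_mm - mean_offset_mm) / sd_mm
    return (1.0 - phi(z_hi)) + phi(z_lo)

# With sd = 0.01 mm the 1-in-a-million target is met; with sd = 3 mm it is hopeless.
print(defect_probability(0.05, 0.01))  # ~5.7e-7, under 1e-6
print(defect_probability(0.05, 3.0))   # ~0.987, almost every unit out of tolerance
```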
"You need standard deviation information. Measure it." While this is an important step, it is far from adequate.