I need a stat head. I used to work for a PhD in Statistics...I might have to hit him up.
This is not exactly a String Theory question--I think it's fairly basic. I just need to take the right approach.
My manufacturing company makes widgets. Every 10 minutes, we collect 200 data points--mostly voltages, conveyor speeds, etc.--and store them in SQL Server.
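For concreteness, here's how I picture one 10-minute snapshot once it's pulled out of SQL Server: the 200 readings flattened into a single row, one entry per sensor. This is just a sketch--the sensor names (`voltage_13`, `conveyor_speed_2`) are invented for illustration.

```python
# Sketch only: flatten one 10-minute interval's (sensor, value) pairs
# from SQL Server into a single feature row. Sensor names are made up.

def snapshot_to_row(readings):
    """readings: list of (sensor_name, value) pairs from one interval.
    Returns one dict keyed by sensor name."""
    return {name: value for name, value in readings}

sample = [("voltage_13", 4.87), ("voltage_14", 3.02), ("conveyor_speed_2", 1.15)]
row = snapshot_to_row(sample)
print(row["voltage_13"])  # 4.87
```

Each such row, tagged "good" or "bad" based on the widgets produced during that interval, would be one training example.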
Usually widgets turn out fine, but sometimes they turn out poorly, quality-wise. This is almost always because there's a "bad" voltage setting somewhere on the production line, and it can only be diagnosed by one of our "experts", who determines that, say, Voltage 13 is incorrect.
What I would like: to "train" the system by feeding it lots of data (on 10-minute intervals) and showing it scenarios (hours-long collections of data) where the widget quality was good.
I can also train it with "bad" scenarios--periods of time when poor-quality widgets were produced. By comparing the data from the bad-quality periods with the data from the good-producing periods, the system can "learn" what makes for bad quality.
The final step is that the system can also "warn". As soon as it encounters a voltage that has historically produced bad-quality widgets, it can warn us right away--we don't need to wait to see the results.
I'd love for some help getting started on this--methodology, for starters. And ultimately, some SQL-specific direction would be great as well.
Is this regression analysis? How would I approach it? How would I do this "training" and scenario "feeding"?
Thanks very much.