Solved

First Chance exception crashes in CWinThread::Run()

Posted on 2004-08-27
2,457 Views
Last Modified: 2013-11-20
I have a stereo image processing application using DirectX and DirectShow. Nothing is wrong in that framework. I am trying to incorporate an MLP neural network into the application. The neural network does work on its own, but when I use it with the app I get a first-chance exception when trying to run it (by pressing a button on the dialog). The class is instantiated (MLP = new MultiLayerPerceptron) but when I try to train, the app crashes when exiting the function. Here is the code:


//from the main form
void CAppForm::OnTrain()
{
      // TODO: Add your control notification handler code here
      MLP->train_the_network ();

}


// MAX_INP etc. are defined constants
double MultiLayerPerceptron::sigmoid (double x)
  {  if (abs(x) < 38)              // Handle possible overflow
      return 1/(1+exp(-x));        // exp only valid between -39 and 38
     else
      if (x >= 38)
       return 1;
      else
       return 0;
  }

void MultiLayerPerceptron::run_input_layer ()
  { double sum;
    for (int i = 1; i <= MAX_INP; i++)
     { sum = 0;
       for (int j = 1; j <= MAX_INP; j++)
        sum += ipl[i].w[j] * test_pat[j];
       ipl[i].a = sigmoid(sum - ipl[i].threshold);
     }
  }

void MultiLayerPerceptron::run_hidden_layer ()
  { double sum;
    for (int i = 1; i <= MAX_HID; i++)
     {  sum = 0;
        for (int j = 1; j <= MAX_INP; j++)
         sum += hl[i].w[j] * ipl[j].a;
        hl[i].a = sigmoid(sum - hl[i].threshold);
     }
  }

void MultiLayerPerceptron::run_output_layer ()
  { double sum;
    for (int i = 1; i <= MAX_OUT; i++)
     {  sum = 0;
        for (int j = 1; j <= MAX_HID; j++)
         sum += ol[i].w[j] * hl[j].a;
        ol[i].a = sigmoid(sum - ol[i].threshold);
     }
  }

void MultiLayerPerceptron::run_the_network ()
  { run_input_layer();
    run_hidden_layer();
    run_output_layer();
  }

// This procedure displays the results of the test on the screen.
void MultiLayerPerceptron::display_the_results ()
  { /*cout << endl << "Inputs: ";
    for (int i = 1; i <= MAX_INP; i++)
     cout << test_pat[i] << " ";
    cout << endl << "Outputs: ";
    for (i = 1; i <= MAX_OUT; i++)      // 'i' already declared
     cout << ol[i].a << " ";
    cout << endl; */
  }

void MultiLayerPerceptron::test_the_network ()
  { /*cout << endl;
    cout << "I will ask you for test patterns. At the end of each test "
         << "you will" << endl
         << "be asked if you want to do another test." << endl;
    do
     { get_test_pattern();
       run_the_network();
       display_the_results();
     }
    while (contin() == 1);    // Function contin returns 1 while user
                              // wants to continue
    */
  }

// THE FOLLOWING PROCEDURES ARE FOR TRAINING THE NETWORK

void MultiLayerPerceptron::calculate_output_layer_errors ()
  {  for (int j = 1; j <= MAX_OUT; j++)
      ol[j].E = (desired[j] - ol[j].a) * ol[j].a * (1 - ol[j].a);
  }

void MultiLayerPerceptron::calculate_hidden_layer_errors ()
  { double sum;
    for (int i = 1; i <= MAX_HID; i++)
     { sum = 0;             // Find sum of error products for output layer
       for (int j = 1; j <= MAX_OUT; j++)
        sum += ol[j].E * ol[j].w[i];
       hl[i].E = hl[i].a * (1 - hl[i].a) * sum;
     }
  }

void MultiLayerPerceptron::calculate_input_layer_errors ()
  { double sum;
    for (int i = 1; i <= MAX_INP; i++)
     { sum = 0;
       for (int j = 1; j <= MAX_HID; j++)
        sum += hl[j].E * hl[j].w[i];
       ipl[i].E = ipl[i].a * (1 - ipl[i].a) * sum;
     }
  }

/* You will notice that the lines changing the threshold values have been
 bracketed out in the following procedure - they should be there according
 to theory, but if I include them, the network doesn't produce the correct
 values. If I miss them out, as I have now, it produces correct values.
 If anyone can throw any light on this, I would be most grateful! */

void MultiLayerPerceptron::weight_change ()
 { const double BETA = 0.9;  // Learning rate
   const double M = 0.9;     // Momentum parameter

     // First tackle weights from hidden layer to output layer
     // 'i' refers to a node in hidden layer, j to a node in output layer
   for (int j = 1; j <= MAX_OUT; j++)  // Go through all output nodes
    { for (int i = 1; i <= MAX_HID; i++)  // Adapt all weights
       { ol[j].change[i] = BETA * ol[j].E * hl[i].a + M * ol[j].change[i];
                             // This is the previous value -------^
         ol[j].w[i] += ol[j].change[i];
               // Now adapt threshold as if from a node with activation 1
         ol[j].t_change = BETA * ol[j].E * 1 + M * ol[j].t_change;
         // ol[j].threshold += ol[j].t_change;
       }
    }

     // Now tackle weights from input layer to hidden layer
     // i refers to a node in input layer, j refers to node in hidden layer
   for (j = 1; j <= MAX_HID; j++)
    { for (int i = 1; i <= MAX_INP; i++)
       { hl[j].change[i] = BETA * hl[j].E * ipl[i].a + M * hl[j].change[i];
                                 // This is the previous value ----^
         hl[j].w[i] += hl[j].change[i];
       }
           // Now adpapt threshold as if from a node with activation 1
       hl[j].t_change = BETA * hl[j].E * 1 + M * hl[j].t_change;
       // hl[j].threshold += hl[j].t_change;
    }

    // Now tackle weights from input to net to input layer
    // i refers to a pattern input, j refers to node in input layer
   for (j = 1; j <= MAX_INP; j++)    // go through all input layer nodes
    { for (int i = 1; i <= MAX_INP; i++) // Adapt all weights
       { ipl[j].change[i] = BETA*ipl[j].E*test_pat[i] + M*ipl[j].change[i];
                                // This is the previous value ------^
         ipl[j].w[i] += ipl[j].change[i];
       }
         // Now adapt threshold as if from a node with activation 1
      ipl[j].t_change = BETA * ipl[j].E * 1 + M * ipl[j].t_change;
      // ipl[j].threshold += ipl[j].t_change;
    }
 }

// Perform back propagation on the network}
void MultiLayerPerceptron::back_propagate ()
 { calculate_output_layer_errors();
   calculate_hidden_layer_errors();
   calculate_input_layer_errors();
   weight_change();
 }

// Get a random number in the range 0 to 1 as a double
double r ()
  { return (rand() + 0.0)/RAND_MAX;  }

//Set the weights and thresholds for all the nodes to small random values
// in the range 0 to 1
void MultiLayerPerceptron::random_weights ()
 { for (int i = 1; i <= MAX_INP; i++)
    { for (int j = 1; j <= MAX_INP; j++)
       ipl[i].w[j] = r();
      ipl[i].threshold = r();
    }
   for (i = 1; i <= MAX_HID; i++)
    { for (int j = 1; j <= MAX_INP; j++)
       hl[i].w[j] = r();
      hl[i].threshold = r();
    }
   for (i = 1; i <= MAX_OUT; i++)
    { for (int j = 1; j <= MAX_HID; j++)
       ol[i].w[j] = r();
      ol[i].threshold = r();
    }
 }

// At the start of back propagation, there are no weight changes to
// influence the next cycle, so clear the arrays
void MultiLayerPerceptron::blank_changes ()
 {       
      for (int j = 1; j <= MAX_INP; j++)
    { for (int i = 1; i <= MAX_INP; i++)
       ipl[j].change[i] = 0;
      ipl[j].t_change = 0;
    }
   for (j = 1; j <= MAX_HID; j++)
    { for (int i = 1; i <= MAX_INP; i++)
       hl[j].change[i] = 0;
      hl[j].t_change = 0;
    }
   for (j = 1; j <= MAX_OUT; j++)
    { for (int i = 1; i <= MAX_HID; i++)
       ol[j].change[i] = 0;
      ol[j].t_change = 0;
    }

 }

void MultiLayerPerceptron::train_the_network ()
 {
   long num_cycles = 10000;               // Might be VERY big value!
   //cout << endl;
  // cout << "Enter the number of training cycles (typically 100) : ";
   //cin >> num_cycles;
   blank_changes();              // Clear all "previous" weight changes
   for (long loop = 1; loop <= num_cycles; loop++)
    for (int pat = 1; pat <= MAX_PAT; pat++) // Cycle through all patterns
     { for (int i = 1; i <= MAX_INP; i++)    // Copies input pattern into
        test_pat[i] = INP_PATTERNS[pat][i];  // 'test_pat' array
       for (i = 1; i <= MAX_OUT; i++)        // Copies output pattern into
        desired[i] = OUT_PATTERNS[pat][i];   // 'desired' array
       run_the_network();
       back_propagate();         
     }
      TRACE("NETWORK IS TRAINED, RESULT SUPPLIED,\n");      
}
After this runs for a while the app crashes. The debugger shows this:
First-chance exception in MFC3D.exe (MFC42D.DLL): 0xC0000005: Access Violation.


I traced this to a call to CWinThread::Run() and the part that goes:      
if (IsIdleMessage(&m_msgCur))
{
bIdle = TRUE;
lIdleCount = 0;
}

Is this a multithreading issue? If so, how do I go about utilizing CWinThread? Or am I a bit blind to something I have done wrong?
Any help greatly appreciated. More code can be supplied, but bear in mind this is a gargantuan application!
Question by:jezpop
10 Comments

Expert Comment by:millsoft
It appears that your stack is corrupt.  Have you looked at the call stack when you get the GPF?  Does it appear valid or invalid?

IsIdleMessage doesn't do anything that could cause a GPF unless your stack or thread-local storage is somehow corrupted. The most likely candidate is that you are accessing one of the arrays out of bounds. Another possibility is that when you commented out all the cout << statements you accidentally changed the code in a fatal way.

Brad
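
A minimal sketch of one way to act on that suggestion (the helper below is made up for illustration and is not part of the question's code): wrap the indices used in the training loops in a range check, so a debug build stops at the first out-of-bounds access instead of much later inside CWinThread::Run().

#include <assert.h>

// Hypothetical helper (not from the original code): returns the index unchanged,
// but asserts that it lies in [lo, hi] so the debugger breaks exactly where the
// bad access happens.
inline int checked_index(int i, int lo, int hi)
{
    assert(i >= lo && i <= hi);
    return i;
}

// Example use inside one of the question's loops:
//   sum += ipl[checked_index(i, 1, MAX_INP)].w[checked_index(j, 1, MAX_INP)]
//        * test_pat[checked_index(j, 1, MAX_INP)];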

Assisted Solution by:millsoft (earned 250 total points)
jezpop,
Another way to troubleshoot the problem is this.  Try returning prematurely at different points in the train_the_network function to try and narrow down where things go south.

> void MultiLayerPerceptron::train_the_network ()
>  {
>    long num_cycles = 10000;               // Might be VERY big value!
>    //cout << endl;
>   // cout << "Enter the number of training cycles (typically 100) : ";
>    //cin >> num_cycles;
step 1, try returning here.  Everything should be perfect.

>    blank_changes();              // Clear all "previous" weight changes
step 2, try returning here.  
>    for (long loop = 1; loop <= num_cycles; loop++)
step 3, try returning here.  
>     for (int pat = 1; pat <= MAX_PAT; pat++) // Cycle through all patterns
>      { for (int i = 1; i <= MAX_INP; i++)    // Copies input pattern into
step 4, etc...
>         test_pat[i] = INP_PATTERNS[pat][i];  // 'test_pat' array
>        for (i = 1; i <= MAX_OUT; i++)        // Copies output pattern into
>         desired[i] = OUT_PATTERNS[pat][i];   // 'desired' array
step 5
>        run_the_network();
step 6
>        back_propagate();        
>      }
>      TRACE("NETWORK IS TRAINED, RESULT SUPPLIED,\n");    
> }
Expert Comment by:jkr
>>First-chance exception in MFC3D.exe (MFC42D.DLL): 0xC0000005: Access Violation.

'First-chance exception in xxx...' just means that a function from within 'xxx' caused an access-violation exception that was handled successfully inside the SEH frame that was active when the exception occurred. You can think of it as being the same as if you used code like this:

long l;

__try // set up current SEH frame
{
CopyMemory ( &l, 0, sizeof ( long)); // read from 0x00000000
}
__except( EXCEPTION_EXECUTE_HANDLER) // handler for current frame
{
puts ( "We knew that this would go wrong...");
}

So let's hope that the MS programmers knew what they were doing ;-)

(Additional info: MS KB Article Q105675)

The article can be found at http://support.microsoft.com/support/kb/articles/q105/6/75.asp

A first-chance exception is called that because it is passed to the debugger before the application 'sees' it. This is done by sending an 'EXCEPTION_DEBUG_EVENT' to the debugger, which can then decide whether the exception is passed on to the application to handle or 'ignored' (e.g. like an 'EXCEPTION_BREAKPOINT', aka 'int 3'). If the exception isn't handled, it becomes a '2nd chance' exception; the debugger 'sees' it a second time and will usually terminate the program (without a debugger, such exceptions end up at 'UnhandledExceptionFilter()', which will also signal the exception to the user with one of those 'nice' message boxes and then terminate the program).

In short: This message is only generated by a debugger & you can safely ignore it...

BTW, how do you get to a 'crash'?
Accepted Solution by:carribus (earned 250 total points)
Yeah... I think you may want to revisit all the loops in your code. You seem to be starting at 1 and ending at the n+1'th element, e.g.:

 for (int pat = 1; pat <= MAX_PAT; pat++) // Cycle through all patterns

which resolves to looping from 1 to MAX_PAT. Now, I don't know what the definition of INP_PATTERNS is, but I'll assume it's something along the lines of:

int INP_PATTERNS[MAX_PAT];

in which case your indexing starts at 0 and ends at MAX_PAT-1, and worse, your loops are consistently accessing one element past the end of the array (i.e. out of bounds, as millsoft mentioned above).

In debug mode you will not crash, because the debug environment is friendly to that kind of thing, which raises the point that assert() statements should be littering your code to ensure that no indices are out of bounds, etc.

You may also find that, once you re-adjust your loops, the commented-out calculation code starts working for you.

Ciao
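
A minimal sketch of the sizing convention that makes those 1-based loops safe, assuming you keep indexing from 1 (the array names follow the question; the spare element at index 0 and the copy_pattern helper are illustrative assumptions):

#include <assert.h>

const int MAX_INP = 4;
const int MAX_PAT = 4;

// With 1-based loops (for (int i = 1; i <= MAX_INP; i++)), an array declared
// as double a[MAX_INP] only has valid indices 0..MAX_INP-1, so a[MAX_INP] is
// one past the end. Declaring one spare element keeps the loops in bounds:
double test_pat[MAX_INP + 1];                   // element 0 is simply unused
double INP_PATTERNS[MAX_PAT + 1][MAX_INP + 1];  // row 0 and column 0 unused

void copy_pattern(int pat)
{
    assert(pat >= 1 && pat <= MAX_PAT);         // traps a bad pattern index in debug builds
    for (int i = 1; i <= MAX_INP; i++)
        test_pat[i] = INP_PATTERNS[pat][i];
}

In a debug build the assert() fires as soon as an index goes out of range, which is far easier to diagnose than a corrupted stack showing up later in CWinThread::Run().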
Author Comment by:jezpop
////////////////////////////////////////////////////////////////////////////////////////////////////
SORRY TO ALL..... jkr, you're right in that this isn't actually a 'crash' as such, but an unhandled exception. I copied and pasted the debugger output into the question box when originally asking this question. So not a crash, but it would be if I was nearer to making this thing into a release! :)
To hopefully throw a bit more light on where I think I may have bungled the Windows messaging, I am including a bit more code. These are only small parts of the whole system, which is designed to guide a robotic arm to an object chosen by the user (clicked on with the mouse in a bitmap captured from a live stream using two web cam devices). The chosen object (I am always using the same one at the moment) is selected, the centre of the object is found, the neural network learns the real position of the object relative to screen space and displays this info in textual form and in the D3D device, and the whole application can then direct the robotic arm to the object. I know this set-up can work using separate applications; I have done it, but that isn't what I need as the final solution. I also wanted to do this on Windows because of the incompatibility of my hardware with Linux. If you want the whole thing I can make it available!

const double INP_PATTERNS[MPT][MIP] = {{0,0,0,0},{169,91,92,93},{94,95,96,97},{98,99,100,101},{102,103,104,105}};
const double OUT_PATTERNS[MPT][MOT] = {{0,0,0},{115,95,2},{95,94,93},{2,3,4}};

const int MAX_INP = 4;      // Maximum number of nodes in input layer
const int MIP = MAX_INP+1;  // Ditto used when defining array limits so
                            // that indices can go from 1 to MAX_INP
const int MAX_HID = 128;      // Maximum number of nodes in hidden layer}
const int MAX_OUT = 3;      // Maximum number of nodes in output layer}
const int MOT = MAX_OUT+1;
const int MAX_PAT = 4;      //Maximum number of training patterns}
const int MPT = MAX_PAT+1;

// Here is some further code from the actual application:
struct neuron_type
 { double w[MIP], change[MIP];  // w = the weights themselves
                                // change = change in weights (used in
                                // training only!
   double threshold,a;
   double t_change;        // Change in threshold (used in training only)
   double E;               // Error for this node (used in training only)
 };

//An array to hold any input patterns used for training or typed in
double test_pat[MIP];

// An array for holding output patterns
double desired[MOT];

neuron_type ipl[MIP];        // Input layer
neuron_type hl[MAX_HID+1];   // Hidden layer
neuron_type ol[MOT];
etc....etc.....

The bitmap capture routine from both web cam devices:
// Constants
#define WM_CAPTURE_BITMAP   WM_APP + 1
#define ABS(x) (((x) > 0) ? (x) : -(x))
#define POSITIVE(x) (((x) > 0) ? (x) :  0)
#define AVERAGE(a,b,c) ( ((a)+(b)+(c))/3 )

#define CLR_RED           RGB(255,0,0)
#define CLR_WHITE         RGB(255,255,255)
#define CLR_BLUE          RGB(0,0,255)
// Structures
//a pixel/sample in a 24bit bitmap consists of three components
typedef struct TPixel
{
      BYTE r, g, b;

} TPixel;

typedef struct _callbackinfo
{
    double dblSampleTime;
    long lBufferSize;
    BYTE *pBuffer;
    BYTE *pBuffer2;
      BYTE *tempBuffer;
      BYTE *workBuffer;
    BITMAPINFOHEADER bih;

} CALLBACKINFO;

CALLBACKINFO cb[2];

typedef struct
{
       int value;
       long total;

} COUNT;

// operator used by the sort algorithm
int operator<(COUNT& v1,COUNT& v2)
{
       return v1.value < v2.value ? 1 : 0;
}
 
//-----------------------------------------------------------------------------------------------
// Note: this object is a SEMI-COM object, and can only be created statically.
// We use this little semi-com object to handle the sample-grab-callback,
// since the callback must provide a COM interface. We could have had an interface
// where you provided a function-call callback, but that's really messy, so we
// did it this way. You can put anything you want into this C++ object, even
// a pointer to a CDialog. Be aware of multi-thread issues though.
//
char filter_str[] = "Dual Mode USB Camera Plus";
BITMAPINFOHEADER *pbih;
BOOL g_bOneShot[2];
HWND g_hwnd;
BYTE *tempStr=NULL;

class CSampleGrabberCB : public ISampleGrabberCB
{
public:

      int deviceIndex;
      long lWidth;
      long lHeight;
      int linesCoppied;
      const char * pFileName;
      BYTE *prgb;
      COLORREF target;
      RGBQUAD quad;
//classes
      virtual ~CSampleGrabberCB();
      CAppForm *pOwner;
      CGraphCtrl m_Histogram;

      TCHAR m_szCapDir[MAX_PATH]; // the directory we want to capture to
      TCHAR m_szSnappedName[MAX_PATH];

      BOOL bFileWritten;
      BITMAPINFOHEADER bih;
      BITMAPFILEHEADER bfh;
      BITMAPINFO bm;
    // fake out any COM ref counting
    //
      STDMETHODIMP_(ULONG) AddRef() { return 2; }
      STDMETHODIMP_(ULONG) Release() { return 1; }
    // fake out any COM QI'ing
      STDMETHODIMP QueryInterface(REFIID riid, void ** ppv)
      {
      if( riid == IID_ISampleGrabberCB || riid == IID_IUnknown )
            {
        *ppv = (void *) static_cast<ISampleGrabberCB*> ( this );
        return NOERROR;
            }    
    return E_NOINTERFACE;
      }

      //CSampleGrabberCB::CSampleGrabberCB();
      //virtual CSampleGrabberCB::~CSampleGrabberCB();
STDMETHODIMP BufferCB( double dblSampleTime, BYTE * pBuffer, long lBufferSize )
{
        // this flag will get set to true in order to take a picture
        if( !g_bOneShot[deviceIndex] )
            return 0;
        // Since we can't access Windows API functions in this callback, just
        // copy the bitmap data to a global structure for later reference.
        cb[deviceIndex].dblSampleTime = dblSampleTime;
        cb[deviceIndex].lBufferSize   = lBufferSize;
        // If we haven't yet allocated the data buffer, do it now.
        // Just allocate what we need to store the new bitmap.
        if (cb[deviceIndex].pBuffer==NULL)
            cb[deviceIndex].pBuffer = new BYTE[lBufferSize];
        if (cb[deviceIndex].pBuffer2==NULL)
            cb[deviceIndex].pBuffer2 = new BYTE[lBufferSize];

            // Buffers still created but not used; I still think I need them
            // to mask the correct hue.
            if (cb[deviceIndex].tempBuffer==NULL)
            cb[deviceIndex].tempBuffer = new BYTE[lBufferSize];
            if (cb[deviceIndex].workBuffer==NULL)
            cb[deviceIndex].workBuffer = new BYTE[lBufferSize];
      
        // Copy the bitmap data into our global buffer
        if (cb[deviceIndex].pBuffer!= NULL)
          memcpy(cb[deviceIndex].pBuffer, pBuffer, lBufferSize);
        // Post a message to our application, telling it to come back
        // and write the saved data to a bitmap file on the user's disk.

        PostMessage(g_hwnd, WM_CAPTURE_BITMAP, deviceIndex, 0L);
      
        return 0;
}
///////////////////////////////////////////////////////////////////////////////////////////////////////

This is the CFormView WindowProc:

LRESULT CAppForm::WindowProc(UINT message, WPARAM wParam, LPARAM lParam)
{

      int deviceIndex;
      deviceIndex = (int)wParam;
      if (message == WM_CAPTURE_BITMAP)
        mCB[deviceIndex].CopyBitmap(cb[deviceIndex].dblSampleTime, cb[deviceIndex].pBuffer, cb[deviceIndex].lBufferSize);
      
      return CFormView ::WindowProc(message, wParam, lParam);
}
// This is the implementation function that writes the captured video
// data onto a bitmap on the user's disk. It is also intended to trigger the pixels processing
// as it is here that we definitely have the bitmap data.
//
BOOL CopyBitmap( double dblSampleTime, BYTE * pBuffer, long lBufferSize )
{
      if( !g_bOneShot[deviceIndex] )
      return 0;
        // we only take one at a time
        g_bOneShot[deviceIndex] = FALSE;
        //copy the captured buffer into another buffer so that there is
            //no conflict between reading from the buffer and writing to it
            memcpy(cb[deviceIndex].pBuffer2, cb[deviceIndex].pBuffer, lBufferSize);
            //and for the display
            memcpy(cb[deviceIndex].workBuffer, cb[deviceIndex].pBuffer2,lBufferSize);
        // figure out where to capture to
        TCHAR m_ShortName[MAX_PATH];
        wsprintf( m_szSnappedName, TEXT("%sStillCap%4.4ld.bmp"),
                  m_szCapDir, pOwner->m_nCapTimes );
        wsprintf( m_ShortName, TEXT("StillCap%4.4ld.bmp"),
                  pOwner->m_nCapTimes );
        // increment bitmap number if user requested it
        // otherwise, we'll reuse the filename next time
        if( pOwner->IsDlgButtonChecked( IDC_AUTOBUMP ) )
            pOwner->m_nCapTimes++;
        // write out a BMP file
        HANDLE hf = CreateFile(
            m_szSnappedName, GENERIC_WRITE, 0, NULL,
            CREATE_ALWAYS, NULL, NULL );
        if( hf == INVALID_HANDLE_VALUE )
            return 0;
        // write out the file header
        BITMAPFILEHEADER bfh;
        memset( &bfh, 0, sizeof( bfh ) );
        bfh.bfType = 'MB';
        bfh.bfSize = sizeof( bfh ) + lBufferSize + sizeof( BITMAPINFOHEADER );
        bfh.bfOffBits = sizeof( BITMAPINFOHEADER ) + sizeof( BITMAPFILEHEADER );
        DWORD dwWritten = 0;

        WriteFile( hf, &bfh, sizeof( bfh ), &dwWritten, NULL );
        // and the bitmap format
        if (bih.biWidth==0)
            {
          memset( &bih, 0, sizeof( bih ) );
          bih.biSize = sizeof( bih );
          bih.biWidth = lWidth;
          bih.biHeight = lHeight;
          bih.biPlanes = 1;
          bih.biBitCount = 24;
            }
            // Save bitmap header for later use when repainting the window
            //memcpy(&(cb[deviceIndex].bih), &bih, sizeof(bih));
        dwWritten = 0;

        WriteFile( hf, &bih, sizeof( bih ), &dwWritten, NULL );

            // the bits
            dwWritten = 0;
        WriteFile( hf, pBuffer, lBufferSize, &dwWritten, NULL );
        CloseHandle( hf );
            hf = 0;
        // Display the bitmap bits on the dialog's preview window
            HWND hwndStill = NULL;
            switch(deviceIndex)
            {
            case 0: { pOwner->GetDlgItem( IDC_STILL, &hwndStill ); break; }
            case 1: { pOwner->GetDlgItem( IDC_STILL2, &hwndStill ); break; }
            }
      
        RECT rc;
        ::GetWindowRect( hwndStill, &rc );
        long lStillWidth = rc.right - rc.left;
        long lStillHeight = rc.bottom - rc.top;
       
        HDC hdcStill = GetDC( hwndStill );
            if (deviceIndex==1)
            {
          // show where it captured
          pOwner->SetDlgItemText( IDC_SNAPNAME, m_ShortName );
          // Enable the 'View Still' button  
              HWND hwndButton = NULL;
          pOwner->GetDlgItem( IDC_BUTTON_VIEWSTILL, &hwndButton );
          ::EnableWindow(hwndButton, TRUE);
            }
            // and the bits themselves            
            DisplayCapturedBits(cb[deviceIndex].pBuffer, &bih);
            ReleaseDC( hwndStill, hdcStill );


            //CAUTION : recursive realtime view
            //pOwner->OnSnap();
      
            return S_OK;
 
}
      STDMETHODIMP SampleCB( double SampleTime, IMediaSample * pSample )
      {return 0;}
      //STDMETHODIMP BufferCB( double dblSampleTime, BYTE * pBuffer, long lBufferSize );
      //BOOL DisplayCapturedBits(BYTE *pBuffer, BITMAPINFOHEADER *pbih);
      //BOOL ProcessBitsFull(BYTE *pBuffer, BITMAPINFOHEADER *pbih);//
      //BOOL CopyBitmap( double dblSampleTime, BYTE * pBuffer, long lBufferSize );
      BYTE* DetectEdges(BYTE * pBuffer, int pWidth, int pHeight);
      BYTE* Read24BitBitmap(const char * pFileName, int * pWidth, int * pHeight);
      
};

// this semi-COM object will receive sample callbacks for us
CSampleGrabberCB mCB[2];

////////////////////////////////////////////////////////////////////////////////
DirectShow-dependent, qedit.h-dependent stuff:

HRESULT CAppForm::InitStillGraph( char *filterName, int deviceIndex )
{
    HRESULT hrs;
      
    // create a filter graph
    //
    hrs = m_pGraph[deviceIndex].CoCreateInstance( CLSID_FilterGraph );
    if( !m_pGraph[deviceIndex] )
    {
        Error( TEXT("Could not create filter graph") );
        return E_FAIL;
    }

    // get whatever capture device exists
    //
    CComPtr< IBaseFilter > pCap;
    GetDefaultCapDevice( filterName, deviceIndex, &pCap );
    if( !pCap )
    {
        Error( TEXT("No video capture device was detected on your system.\r\n\r\n")
               TEXT("This sample requires a functional video capture device, such\r\n")
               TEXT("as a USB web camera.") );
        return E_FAIL;
    }

    // add the capture filter to the graph
    //
    hrs = m_pGraph[deviceIndex]->AddFilter( pCap, L"Cap" );
    if( FAILED( hrs ) )
    {
        Error( TEXT("Could not put capture device in graph"));
        return E_FAIL;
    }

    // create a sample grabber
    //
    hrs = m_pGrabber[deviceIndex].CoCreateInstance( CLSID_SampleGrabber );
    if( !m_pGrabber[deviceIndex] )
    {
        Error( TEXT("Could not create SampleGrabber (is qedit.dll registered?)"));
        return hrs;
    }
    CComQIPtr< IBaseFilter, &IID_IBaseFilter > pGrabBase( m_pGrabber[deviceIndex] );

    // force it to connect to video, 24 bit
    //
    CMediaType VideoType;

    VideoType.SetType( &MEDIATYPE_Video );

    VideoType.SetSubtype( &MEDIASUBTYPE_RGB24 );

    hrs = m_pGrabber[deviceIndex]->SetMediaType( &VideoType ); // shouldn't fail

    if( FAILED( hrs ) )
    {
        Error( TEXT("Could not set media type"));
        return hrs;
    }

    // add the grabber to the graph
    //
    hrs = m_pGraph[deviceIndex]->AddFilter( pGrabBase, L"Grabber" );
    if( FAILED( hrs ) )
    {
        Error( TEXT("Could not put sample grabber in graph"));
        return hrs;
    }

    // find the two pins and connect them
    //
    IPin * pCapOut = GetOutPin( pCap, 0 );
    IPin * pGrabIn = GetInPin( pGrabBase, 0 );
    hrs = m_pGraph[deviceIndex]->Connect( pCapOut, pGrabIn );

    if( FAILED( hrs ) )
    {
        Error( TEXT("Could not connect capture pin #0 to grabber.\r\n")
               TEXT("Is the capture device being used by another application?"));
        return hrs;
    }
    // render the sample grabber output pin, so we get a preview window
    //
    IPin * pGrabOut = GetOutPin( pGrabBase, 0 );
    hrs = m_pGraph[deviceIndex]->Render( pGrabOut );
    if( FAILED( hrs ) )
    {
        Error( TEXT("Could not render sample grabber output pin"));
        return hrs;
    }

    // ask for the connection media type so we know how big
    // it is, so we can write out bitmaps
    //
    AM_MEDIA_TYPE mt;
    hrs = m_pGrabber[deviceIndex]->GetConnectedMediaType( &mt );
    if ( FAILED( hrs) )
    {
        Error( TEXT("Could not read the connected media type"));
        return hrs;
    }
   
    VIDEOINFOHEADER * vih = (VIDEOINFOHEADER*) mt.pbFormat;
    mCB[deviceIndex].pOwner = this;
    mCB[deviceIndex].lWidth  = vih->bmiHeader.biWidth;
    mCB[deviceIndex].lHeight = vih->bmiHeader.biHeight;
      mCB[deviceIndex].deviceIndex = deviceIndex;

    FreeMediaType( mt );
      FreeMediaType(VideoType);

    // don't buffer the samples as they pass through
    //
    m_pGrabber[deviceIndex]->SetBufferSamples( FALSE );

    // only grab one at a time, stop stream after
    // grabbing one sample
    //
    m_pGrabber[deviceIndex]->SetOneShot( FALSE);/*jez*/

    // set the callback, so we can grab the one sample
    //
    m_pGrabber[deviceIndex]->SetCallback( &mCB[deviceIndex], 1 );

    // find the video window and stuff it in our window
    //
    CComQIPtr< IVideoWindow, &IID_IVideoWindow > pWindow = m_pGraph[deviceIndex];
    if( !pWindow )
    {
        Error( TEXT("Could not get video window interface"));
        return E_FAIL;
    }

    // set up the preview window to be in our dialog
    // instead of floating popup. to do this we have to get the handle
    //
    HWND hwndPreview = NULL;

      switch(deviceIndex)
      {
      case 0: { GetDlgItem( IDC_PREVIEW, &hwndPreview ); break; }
    case 1: { GetDlgItem( IDC_PREVIEW2, &hwndPreview ); break; }
      }

    RECT rc;
    ::GetWindowRect( hwndPreview, &rc );
    pWindow->put_Owner( (OAHWND) hwndPreview );
    pWindow->put_Left( 0 );
    pWindow->put_Top( 0 );
    pWindow->put_Width( rc.right - rc.left );
    pWindow->put_Height( rc.bottom - rc.top );
    pWindow->put_Visible( OATRUE );
    pWindow->put_WindowStyle( WS_CHILD | WS_CLIPSIBLINGS );

    // Add our graph to the running object table, which will allow
    // the GraphEdit application to "spy" on our graph
      // BUT THIS IS THE CAUSE OF OBJECTS REMAINING ACTIVE IN KPROXY.AX !!!!!!!!!!!!* JEZ  *
#ifdef REGISTER_FILTERGRAPH
    hrs = AddGraphToRot(m_pGraph[deviceIndex], &g_dwGraphRegister);
    if (FAILED(hrs))
    {
        Error(TEXT("Failed to register filter graph with ROT!"));
        g_dwGraphRegister = 0;
    }
#endif


    // run the graph
    //
    CComQIPtr< IMediaControl, &IID_IMediaControl > pControl = m_pGraph[deviceIndex];
    hrs = pControl->Run( );
    if( FAILED( hrs ) )
    {
        Error( TEXT("Could not run graph"));
        return hrs;
    }

      UpdateStatus(_T("Previewing Live Video"));
    return 0;
}
///////////////////////////////////////////////////////////////////////////////////////////////////////////
and the Direct3D device for displaying the screen relativity:
BOOL CApp::OnIdle( LONG )
{
    // Do not render if the app is minimized
    if( m_pMainWnd->IsIconic() )
        return FALSE;

    TCHAR strStatsPrev[200];

    lstrcpy(strStatsPrev, g_AppFormView->PstrFrameStats());

    // Update and render a frame
    if( g_AppFormView->IsReady() )
    {
        g_AppFormView->CheckForLostFullscreen();
        g_AppFormView->RenderScene();
    }

    // Keep requesting more idle time
    return TRUE;
}
////
This hasn't included any of the processing stuff. I am also suppressing multiply-included objects by forcing output. There are no memory leaks, I have set breakpoints, and I still trace the unhandled exception to CWinThread, so the original question still stands! (With speech marks around the word 'crash' for jkr :})

Shall be trying some more stuff in the meantime. Thanks to all.
Expert Comment by:jkr
>>jkr you're right in that this isn't actually a 'crash' as such, but an unhandled exception.

No, it is *not* 'unhandled', otherwise you'd get a Second chance exception, which is in fact a crash. That's why you can ignore that.
Expert Comment by:jkr
Um, sorry, what's happening here?
Author Comment by:jezpop
Thanks to millsoft and carribus for your advice. I intended the inputs etc. to start from 1, as the loop indices do, but I didn't reflect this in my training patterns, i.e. this:

const double INP_PATTERNS[MPT][MIP] = {{0,0,0,0},{169,91,92,93},{94,95,96,97},{98,99,100,101},{102,103,104,105}};
const double OUT_PATTERNS[MPT][MOT] = {{0,0,0},{115,95,2},{95,94,93},{2,3,4}};

became this (padded with a dummy element at index 0):

const double INP_PATTERNS[MPT][MIP] = {{0,0,0,0,0},{0,169,91,92,93},{0,94,95,96,97},{0,98,99,100,101},{0,102,103,104,105}};
const double OUT_PATTERNS[MPT][MOT] = {{0,0,0,0},{0,115,95,2},{0,95,94,93},{0,2,3,4}};

250 points each!

I think I had one of those days. Many thanks to all.
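
For what it's worth, a small sketch of how the row count of those corrected tables could be checked at compile time, so a missing dummy row shows up as a build error instead of a run-time overrun (the STATIC_CHECK macro is made up for illustration and is not part of the posted code):

// Hypothetical compile-time check: let the compiler deduce the row count from
// the initialiser, then verify it matches MPT (= MAX_PAT + 1). A wrong number
// of rows makes the typedef an array of negative size, which fails to compile.
#define STATIC_CHECK(expr) typedef char static_check_failed[(expr) ? 1 : -1]

const double INP_PATTERNS[][MIP] =
  {{0,0,0,0,0},{0,169,91,92,93},{0,94,95,96,97},{0,98,99,100,101},{0,102,103,104,105}};

STATIC_CHECK(sizeof(INP_PATTERNS) / sizeof(INP_PATTERNS[0]) == MPT);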
Expert Comment by:jkr
I am sorry, but your Q was about "First Chance exception" - that was the part that *I* answered...
0
 

Author Comment

by:jezpop
Comment Utility
Yes, sorry jkr, I should have phrased the question title to be a little more meaningful to the problem at hand. The problem still isn't 100% fixed yet. The unhandled exception in CWinThread::Run() can still occur depending on the number of hidden nodes I try to use. This, I would have thought, is a message and buffering issue, so I may be back to square one!
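
One thing worth checking against the struct posted earlier, given that the exception depends on the number of hidden nodes: neuron_type declares w[MIP] and change[MIP] with MIP = MAX_INP + 1 = 5, yet run_output_layer, calculate_hidden_layer_errors and weight_change index ol[j].w[i] and ol[j].change[i] with i running up to MAX_HID, so each output node appears to read and write well past a 5-element array, and the larger MAX_HID is, the more memory gets overwritten before the message loop runs again. A sketch of per-layer sizing that keeps those indices in bounds (the separate struct names are made up for illustration; only the constants come from the posted code):

const int MAX_INP = 4;      // nodes in input layer
const int MAX_HID = 128;    // nodes in hidden layer
const int MAX_OUT = 3;      // nodes in output layer

// Hypothetical per-layer structs: each node stores one weight (and one weight
// change) per node in the layer that feeds it, plus a spare element 0 so the
// 1-based loops stay in bounds.
struct input_neuron_type    // fed by the MAX_INP pattern inputs
 { double w[MAX_INP + 1], change[MAX_INP + 1];
   double threshold, a, t_change, E;
 };

struct hidden_neuron_type   // fed by the MAX_INP input-layer nodes
 { double w[MAX_INP + 1], change[MAX_INP + 1];
   double threshold, a, t_change, E;
 };

struct output_neuron_type   // fed by the MAX_HID hidden-layer nodes
 { double w[MAX_HID + 1], change[MAX_HID + 1];
   double threshold, a, t_change, E;
 };

input_neuron_type  ipl[MAX_INP + 1];   // indexed 1..MAX_INP
hidden_neuron_type hl [MAX_HID + 1];   // indexed 1..MAX_HID
output_neuron_type ol [MAX_OUT + 1];   // indexed 1..MAX_OUT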