Avatar of ukjm2k
ukjm2k

asked on

Unhandled exception 0xC0000005: Access Violation

Hi everyone. This is my first question here, so please bear with me.
 
I have a problem...

When I compile my program in VC++ 6 I do not get any compile or link errors, but
when I run it, it gets to a certain point and freezes, and Windows reports the following:

"Unhandled exception 0xC0000005: Access Violation"
Debugging the program leads to a line which reads
"pMatches = pTempMatch->pNext;"


Does anyone know why this is happening?

The code which is associated with this problem is below.

I am not sure what is going wrong.
Can anyone advise me on what to do, preferably with some code please?

Many thanks.

void tkDespatch(IplImage * left, IplImage * right, int mode){
      struct tCFeature * pMatches;
      struct tCFeature * pTempMatch;
      char out_text[2048];
      FILE * fp;

      .....      

      case MODE_RECONSTRUCT_RUN:
            {
                  pMatches = tkCorrespond(tkSegment(left),tkSegment(right));
                  if(pMatches!=NULL){
                        pTempMatch = pMatches;
                        while(pTempMatch!=NULL){
                              tkReconstruct(pTempMatch,out_text);
                              pMatches = pTempMatch->pNext;
                              free(pTempMatch);
                              pTempMatch = pMatches;
                        }
                        fp = fopen(out_file,"a");
                        fputs(out_text,fp);
                        fputs("\n",fp);
                        fclose(fp);
                        out_text[0] = '\0';
                  }
            }

      break;
}

Avatar of mgpeschke
mgpeschke

Hi:

I do not know if this is right, but my gut feeling suggests doing it this way:

                    while(pTempMatch!=NULL){
                         tkReconstruct(pTempMatch,out_text);

                         if (pTempMatch->pNext == NULL)
                             break;

                         pMatches = pTempMatch->pNext;
                         free(pTempMatch);
                         pTempMatch = pMatches;
                    }

I hope this helps.
Avatar of jkr
How are you initializing 'pMatches'? This part is missing from your code, and that might be of interest...
Avatar of ukjm2k

ASKER

Thanks for the replies.

mgpeschke:
I have tried the code you added, but it seems to be giving me the same "Unhandled exception 0xC0000005: Access Violation" error, except it now points to the new line: "if (pTempMatch->pNext == NULL)".
Further expansion in debug mode gives the errors:

pTempMatch 0x5bc712d0
pNext CXX0030: Error: Expression cannot be evaluated


jkr
Not quite sure what you mean by initialise, mate. These six occurrences are the only places where I have used 'pMatches'...
>>>> pTempMatch->pNext

The access violation occurs because pTempMatch is an invalid pointer. The arrow operator -> can't be applied because the pointer doesn't hold a valid memory address.

There are a few possibilities why the pointer is invalid:

1. pMatches = tkCorrespond(tkSegment(left),tkSegment(right));

pTempMatch was assigned from pMatches, and pMatches comes from the tkCorrespond function. You should make sure that this function always returns a valid pointer or NULL.

2. The 'next' member variable in struct tCFeature wasn't initialized or wasn't properly set.

Obviously, it's some sort of linked list. But you have to make sure that the list is properly managed. If, for example, an entry is removed from the list, the 'next' variable of the preceding entry must be relinked to the following entry or set to NULL.

3. The pointer was deleted and therefore invalid.

If you are using linked lists you should never delete entries while they are still members of the list. Either remove the entry from the list and then delete it, or delete the whole list after it is no longer used.
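For point 3, here is a minimal sketch of the second option (delete the whole list only after nothing references its entries any more), based on the tCFeature struct from your question; the helper name freeFeatureList is just for illustration:

    /* sketch only: free every node once no other code references the entries */
    void freeFeatureList(struct tCFeature * pHead)
    {
        while (pHead != NULL)
        {
            struct tCFeature * pNextNode = pHead->pNext; /* remember the link before freeing */
            free(pHead);
            pHead = pNextNode;
        }
    }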

Regards, Alex

Avatar of ukjm2k

ASKER

itsmeandnobodyelse

Thanks for the reply.
I have taken your three suggestions into consideration, but again, as far as I can see, I'm not doing anything wrong.


Suggestion 1.
pMatches = tkCorrespond(tkSegment(left),tkSegment(right));

tkCorrespond is defined below:

struct tCFeature * tkCorrespond(struct t2DPoint * left_features,struct t2DPoint * right_features){
      if(left_features!=NULL && right_features!=NULL)
      return corrSimple(left_features,right_features);
      else
      return NULL;
      }


Suggestion 2.
The 'next' member variable in struct tcFeature wasn't initialized or wasn't properly set.

pNext defined as:

/* structure for feature correspondences */
typedef struct tCFeature {
      int iLeft[2];
      int iRight[2];
      struct tCFeature * pNext;
} tCFeature;

                        

Used later to find correspondences:

struct tCFeature * corrSimple(struct t2DPoint * left, struct t2DPoint * right){
      struct t2DPoint * t_left = left;
      struct t2DPoint * t_right = right;
      struct t2DPoint * match4left = (struct t2DPoint *)NULL;
      struct t2DPoint * match4right = (struct t2DPoint *)NULL;
      struct t2DPoint * temp_del = (struct t2DPoint *)NULL;
      struct tCFeature * result = (struct tCFeature *)NULL;
      struct tCFeature * temp_match = (struct tCFeature *)NULL;
      bool match = false;
      while(t_left!=NULL){
            match = false;
            match4left = corrSimple_1(t_left,right);
            if(match4left!=NULL){
                  match4right = corrSimple_1(match4left,left);
                  if(t_left==match4right){
                        match = true;

                        //printf("MATCHED! L[%i,%i] R[%i,%i]\n",t_left->x,t_left->y,match4left->x,match4left->y);
                        temp_match = (struct tCFeature *)malloc(sizeof(struct tCFeature));
                        temp_match->iLeft[0] = t_left->x;
                        temp_match->iLeft[1] = t_left->y;
                        temp_match->iRight[0] = match4left->x;
                        temp_match->iRight[1] = match4left->y;
                        temp_match->pNext = result;
                        result = temp_match;



Suggestion 3.
The pointer was deleted and therefore invalid.



Once the above matches are made, the elements are deleted:

//Delete the element from the left list
                        temp_del = t_left;
                        t_left=t_left->next;
                        if(temp_del->previous==NULL){
                              left = temp_del->next;
                        }
                        else {
                              temp_del->previous->next = temp_del->next;
                        }
                        if(temp_del->next!=NULL){
                              temp_del->next->previous = temp_del->previous;
                        }
                        free(temp_del);

//Now do the element from the right list
.............


Can you see anything wrong with these?
Coded examples would be very helpful.

Thanks..

Avatar of ukjm2k

ASKER

Anyone able to help, please?
>>>>                    if(temp_del->previous==NULL){
>>>>                         left = temp_del->next;
>>>>                    }

What is 'left'?

If temp_del->previous is NULL, the left list now starts with the new next pointer. Generally, I can't see an error, but the sequence isn't programmed well. Look at this:

t2DPoint* deleteNode(t2DPoint* pNode)
{
    if (pNode == NULL)
         return NULL;
     t2DPoint* pPrev = pNode->previous;
     t2DPoint* pNext = pNode->next;
     if (pPrev != NULL)
          pPrev->next = pNext;
     if (pNext != NULL)
          pNext->previous = pPrev;
     free(pNode);
     return pNext;
}
 
You could use it like that:


  // delete left node

  t_left = deleteNode(t_left);

  // delete right node

  t_right = deleteNode(t_right);


I would suggest you change your program to use functions like the one above. You would need an insertNode and an appendNode function as well; a sketch follows below. The advantage of separating linked-list functionality from the rest is that you can easily verify that all pointers are valid. You could test the linked-list functionality independently of the rest.
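For example, a minimal sketch of what such helpers could look like for the t2DPoint list (the names insertNode and appendNode are only suggestions, not existing functions):

    // sketch only: insert pNew in front of pHead and return the new head
    t2DPoint* insertNode(t2DPoint* pHead, t2DPoint* pNew)
    {
        pNew->previous = NULL;
        pNew->next = pHead;
        if (pHead != NULL)
            pHead->previous = pNew;
        return pNew;
    }

    // sketch only: append pNew at the end and return the (possibly new) head
    t2DPoint* appendNode(t2DPoint* pHead, t2DPoint* pNew)
    {
        pNew->next = NULL;
        if (pHead == NULL)
        {
            pNew->previous = NULL;
            return pNew;
        }
        t2DPoint* pLast = pHead;
        while (pLast->next != NULL)
            pLast = pLast->next;
        pLast->next = pNew;
        pNew->previous = pLast;
        return pHead;
    }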

If your program still crashes, you should debug it step by step. Check whether all pointers used are valid. You can see that by looking at the pointer members of your structs; any pointer should show a valid value when viewed in the debugger.
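If it helps, here is a purely illustrative sanity check that walks a t2DPoint list and stops after an unreasonable number of nodes; a runaway count usually shows a corrupted next pointer long before the actual crash (the name checkList is just an example):

    // sketch only: count the nodes and complain if there are far too many
    int checkList(const t2DPoint* pHead, int maxExpected)
    {
        int count = 0;
        while (pHead != NULL && count <= maxExpected)
        {
            count++;
            pHead = pHead->next;
        }
        if (count > maxExpected)
            printf("checkList: more than %d nodes - list is probably corrupted!\n", maxExpected);
        return count;
    }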

BTW, did you consider using std::list<tCFeature> instead of your own list?

Regards, Alex




 
Avatar of ukjm2k

ASKER

Does anyone else have a way I can overcome this problem? It still persists...

Thanks
You have to provide all your code, as the error most likely isn't in the parts you posted above.

Regards, Alex
Avatar of ukjm2k

ASKER

itsmeandnobodyelse

I'm happy to do that, but the FULL code is around 2000 lines long :s .....
I've never posted here before so I'm not sure of the best way to go about this, but I'm sure 2000 lines in this section is a little too much!!

Perhaps if someone is willing to take a look at it I could personally send it to them... (again, apologies if this is something that is not appropriate...)

Let me know guys....many thanks.
Sending mails to experts is against EE rules, as that expert would get an advantage.

2000 lines isn't too much to post here; you should remove all TAB characters and make clear where a new file begins. Alternatively, you could provide your source on a web site where experts could download it. However, as I am not at home until Thursday, I couldn't download from an FTP site.

Regards, Alex
Avatar of ukjm2k

ASKER

In that case, I shall be putting my code up here some time this evening.
Just beforehand, I thought I'd tell everyone that I am building a stereo vision 3D reconstruction system that uses OpenCV.
This is an external library that I use for things such as camera calibration.

I will include ALL the code, and so thought it appropriate to declare the use of OpenCV first.

However, I am sure that the error I am getting has nothing to do with these external library functions, but rather with programming errors, so I don't see it being a problem...

Many thanks to all so far, and I hope you check back later to help sort my problem.
Avatar of ukjm2k

ASKER

Hi guys. Below is the full code for my program.

As mentioned before, the debug error seems to be on "pMatches = pTempMatch->pNext;", but I have been unable to fix it.

Compiling and linking the program produces no errors, but whilst the program is being executed, it freezes.
To be more specific, the system is first initialised and camera calibration completes successfully using OpenCV. After this, input from two webcams is supposed to render a 3D reconstruction of the two inputs, but this is where it falls short: whilst finding correspondences between the two inputs.

Hope this makes it a little clearer.

Here is the code, and thanks in advance:

global.h

/* Global definitions */

#include <stdio.h>
#include <stdlib.h>
#include <cv.h>
#include <cvaux.h>
#include <highgui.h>
#ifndef GLOBALS
#define GLOBALS
/* define run-modes */
#define MODE_INIT 0
#define MODE_CALIBRATE_INT 1
#define MODE_CALIBRATE_EXT 2
#define MODE_RECONSTRUCT_SAMPLE 3
#define MODE_RECONSTRUCT_RUN 4
/* define different calibration types */
#define CALIB_UNSET 0
#define CALIB_FILE 1
#define CALIB_BMP 2
#define CALIB_LIVE 3
/* define number of points require for ext calibration */
#define EXT_REQ_POINTS 35
/* define left and right camera indices */
#define SRC_LEFT_CAMERA 0
#define SRC_RIGHT_CAMERA 1
/* colour channels in an IplImage */
#define C_BLUE 0
#define C_GREEN 1
#define C_RED 2
/* structure for feature correspondences */
typedef struct tCFeature {
int iLeft[2];
int iRight[2];
struct tCFeature * pNext;
} tCFeature;
/* 2D point struct (includes next/prev links, unlike OpenCV) */
typedef struct t2DPoint {
int x;
int y;
int size;
struct t2DPoint * next;
struct t2DPoint * previous;
} t2DPoint;
/* struct representing a camera ray toward the object */
typedef struct camera_ray {
double vector[3];
double cam_tran[3];
double cam_rot;
double pixelsize[2];
} camera_ray;
#endif

//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/* procedures for reading/writing images and data files */

library.h

/* fileio.h :: File I/O Library header */
void writeImagePair(IplImage** images,const char * prefix);
void writeImage(IplImage* image,const char * prefix);
void removeNoise(IplImage * src);
void combine(IplImage * src1,IplImage * src2,IplImage * output,int mode);


library.cpp

#include "global.h"
extern bool fIntrinsicDone;
extern bool fExtrinsicDone;
extern int fCalibMethod;
/* procedures for reading/writing images and data files */
void writeImagePair(IplImage** images,const char * prefix){
char filename[100];
sprintf(filename,"%s-imgL.bmp",prefix);
cvvSaveImage(filename,images[0]);
sprintf(filename,"%s-imgR.bmp",prefix);
cvvSaveImage(filename,images[1]);
}
void writeImage(IplImage* image,const char * prefix){
char filename[100];
sprintf(filename,"%s-img.bmp",prefix);
cvvSaveImage(filename,image);
}
void removeNoise(IplImage * src){
//get the size of input_image (src)
CvSize sz = cvSize(src->width & -2, src->height & -2);
//create temp-image
IplImage* pyr = cvCreateImage(cvSize(sz.width/2, sz.height/2),
src->depth, src->nChannels);
cvPyrDown( src, pyr, CV_GAUSSIAN_5x5); //pyr DOWN
cvPyrUp( pyr, src, CV_GAUSSIAN_5x5); //and UP
cvReleaseImage(&pyr); //release temp
}
void combine(IplImage * src1,IplImage * src2,IplImage * new_img,int mode){
int row,col;
char * new_pixel;
char * src_pixel;
CvFont disp_font;
int text_col = CV_RGB(0,255,0);
new_img->origin = 1;
for(row=0;row<src1->height;row++){
for(col=0;col<src1->width;col++){
new_pixel = &new_img->imageData[(row*new_img->widthStep)+(col*3)];
src_pixel = &src1->imageData[(row*src1->widthStep)+(col*3)];
new_pixel[C_BLUE] = src_pixel[C_BLUE];
new_pixel[C_GREEN] = src_pixel[C_GREEN];
new_pixel[C_RED] = src_pixel[C_RED];
}
for(col=0;col<src2->width;col++){
new_pixel = &new_img->imageData[(row*new_img->widthStep)+src1->widthStep+(col*3)];
src_pixel = &src2->imageData[(row*src2->widthStep)+(col*3)];
new_pixel[C_BLUE] = src_pixel[C_BLUE];
new_pixel[C_GREEN] = src_pixel[C_GREEN];
new_pixel[C_RED] = src_pixel[C_RED];
}
}
for(row=src1->height;row<new_img->height;row++){
for(col=0;col<new_img->width;col++){
new_pixel = &new_img->imageData[(row*new_img->widthStep)+(col*3)];
new_pixel[C_BLUE] = (char)0;
new_pixel[C_GREEN] = (char)0;
new_pixel[C_RED] = (char)0;
}
}
cvInitFont(&disp_font,CV_FONT_VECTOR0,0.35,0.35,0.0,1);
switch(mode){
case MODE_INIT:
{
cvPutText(new_img,"initialised.... awaiting calibration (press
'c')",cvPoint(5,300),&disp_font,text_col);
}
break;
case MODE_CALIBRATE_INT:
{
if(!fIntrinsicDone){
switch(fCalibMethod){
case CALIB_UNSET:
{
cvPutText(new_img,"intrinsic calibration from [1]file
[2]bmp [3]live",cvPoint(5,300),&disp_font,text_col);
}
break;
case CALIB_LIVE:
{
cvPutText(new_img,"intrinsic calibration from live
stream...",cvPoint(5,300),&disp_font,text_col);
}
break;
}
}
else
cvPutText(new_img,"done... awaiting extrinsic calibration (press
'c')",cvPoint(5,300),&disp_font,text_col);
}
break;
case MODE_CALIBRATE_EXT:
{
cvPutText(new_img,"extrinsic calibration
mode...",cvPoint(5,300),&disp_font,text_col);
}
break;
case MODE_RECONSTRUCT_SAMPLE:
{
cvPutText(new_img,"select sample marker
colour...",cvPoint(5,300),&disp_font,text_col);
}
break;
case MODE_RECONSTRUCT_RUN:
{
cvPutText(new_img,"reconstructing...",cvPoint(5,300),&disp_font,text_col);
}
break;
}
}

/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////


main.cpp

#include "global.h"
#include <conio.h>
#include <time.h>
#include <math.h>
#include <string.h>
#include "calibrate.h"
#include "segmentation.h"
#include "correspondence.h"
#include "reconstruct.h"
#include "library.h"
/* forward declaration of function */
void tkDespatch(IplImage * left, IplImage * right, int mode);
void tkLMseHandler(int event,int x, int y, int flags);
/* calibration variables */
CvCalibFilter clCalibration;
CvCamera *clLeftCamera, *clRightCamera;
bool fIntrinsicDone=false, fExtrinsicDone=false;
int fCalibMethod=CALIB_UNSET, iCalibPrevFrame;
/* colour segmentation variables */
//int thresh_r=0,thresh_g=0,thresh_b=0;
/* current system flags */
int fRunMode = MODE_INIT;
bool fGenColMap = false;
/* x,y coords to generate segmentation colour map from */
int colmap_x=0, colmap_y=0;
char out_file[80];
int main(int argc, char **argv)
{
CvCapture *left_camera = (CvCapture *)NULL ,*right_camera = (CvCapture *)NULL;
IplImage *left_frame = (IplImage *)NULL ,*right_frame = (IplImage *)NULL;
IplImage *display = (IplImage *)NULL;
int cCmd,fRunLoop = 1;
double dEtalonParams[3] = {8,6,3.3};
/* attach to the cameras and make sure that there are two */
printf("Selecting left camera....\n");
left_camera = cvCaptureFromCAM(-1);
printf("Selecting right camera....\n");
right_camera = cvCaptureFromCAM(-1);
if (!left_camera || !right_camera){
printf("Unable to attach to both cameras...\n");
if (left_camera) cvReleaseCapture(&left_camera);
if (right_camera) cvReleaseCapture(&right_camera);
exit(1);
}
/* create the output to display the camera images in */
/* we'll also set up a mouse callback for sampling the */
/* segmentation colour */
cvvNamedWindow("Output", CV_WINDOW_AUTOSIZE);
cvSetMouseCallback("Output", tkLMseHandler);
/* setup the calibration class */
clCalibration.SetEtalon(CV_CALIB_ETALON_CHESSBOARD,dEtalonParams);
clCalibration.SetCameraCount(2);
clCalibration.SetFrames(20);
/* define the name of the output file using the current */
/* time - simple, but effective!! */
sprintf(out_file,"%i.dat",clock());
/* enter the main loop */
while(fRunLoop){
left_frame = cvQueryFrame(left_camera);
right_frame = cvQueryFrame(right_camera);
cCmd = cvvWaitKeyEx(0,1);
switch(tolower(cCmd))
{
case 'q':
/* got a 'q', so we want to quit. set the loop flag */
/* appropriately */
{
fRunLoop = 0;
}
break;
case 'c':
/* got a 'c', so calibration has been inited. set */
/* the runmode correctly depending on what has */
/* already been done */
{
if(fRunMode == MODE_INIT){
printf("Intrinsic parameter calibration mode....\n");
printf("Select (1)Calibration File (2)Bitmaps (3)Live
Cameras\n");
fRunMode = MODE_CALIBRATE_INT;
}
else if(fRunMode == MODE_CALIBRATE_INT && fIntrinsicDone){
printf("Extrinsic parameter calibration mode...\n");
printf("Please place the checkerboard at the correct position
and\n");
printf("press a key...\n");
cvWaitKey(0);
fRunMode = MODE_CALIBRATE_EXT;
}
}
break;
case 'r':
/* got a 'r', so we want to resample... that is of */
/* course assuming that we have gotten that far! */
{
if(fRunMode>=MODE_RECONSTRUCT_SAMPLE){
fRunMode = MODE_RECONSTRUCT_SAMPLE;
fGenColMap = false;
colmap_x = 0;
colmap_y = 0;
}
}
break;
case 's':
/* swap the cameras over */
{
CvCapture *temp = left_camera;
left_camera = right_camera;
right_camera = temp;
}
break;
case '.':
/* save the current image pair */
{
IplImage *images[] = {left_frame,right_frame};
writeImagePair(images,"snap");
printf("SNAPSHOT!\n");
}
break;
case '1':
/* calibrate from file */
{
fCalibMethod = CALIB_FILE;
}
break;
case '2':
/* calibrate from bitmaps */
{
fCalibMethod = CALIB_BMP;
}
break;
case '3':
/* calibrate from the live cameras */
{
fCalibMethod = CALIB_LIVE;
}
break;
}
/* now that we've trapped all of the user key strokes, */
/* we dispatch the two frames and the current mode to */
/* the correct bits */
tkDespatch(left_frame,right_frame,fRunMode);
/* combine the two images for display and overlay some */
/* text to describe to the user what is going on. */
if(display==NULL) display = cvCreateImage(cvSize(left_frame->width+right_frame->width, left_frame->height+15),
left_frame->depth, left_frame->nChannels);
combine(left_frame,right_frame,display,fRunMode);
cvvShowImage("Output",display);
}
/* if we've got this far, the user has selected to quit, so */
/* release all of the stuff we've allocated */
cvReleaseCapture(&left_camera);
cvReleaseCapture(&right_camera);
cvReleaseImage(&display);
return 0;
}
void tkDespatch(IplImage * left, IplImage * right, int mode){
struct tCFeature * pMatches;
struct tCFeature * pTempMatch;
char out_text[2048];
FILE * fp;
switch(mode){
case MODE_CALIBRATE_INT:case MODE_CALIBRATE_EXT:
{
tkCalibrate(left,right,mode);
}
break;
case MODE_RECONSTRUCT_SAMPLE:
{
if(fGenColMap){
if(tkGenerateColMap(left,colmap_x,colmap_y))
fRunMode = MODE_RECONSTRUCT_RUN;
else{
fGenColMap = false;
fRunMode = MODE_RECONSTRUCT_SAMPLE;
}
}
}
break;
case MODE_RECONSTRUCT_RUN:
{
pMatches = tkCorrespond(tkSegment(left),tkSegment(right));
if(pMatches!=NULL){
pTempMatch = pMatches;
while(pTempMatch!=NULL){
tkReconstruct(pTempMatch,out_text);
pMatches = pTempMatch->pNext;
free(pTempMatch);
pTempMatch = pMatches;
}
fp = fopen(out_file,"a");
fputs(out_text,fp);
fputs("\n",fp);
fclose(fp);
out_text[0] = '\0';
}
}
break;
}
}
void tkLMseHandler(int event,int x, int y, int flags){
if(event==CV_EVENT_LBUTTONDOWN && fRunMode==MODE_RECONSTRUCT_SAMPLE){
fGenColMap = true;
colmap_x = x; colmap_y=y;
}
}

///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/*
Aims to fully generate a calibrated stereo system in order to reconstruct
the 3-D position. The OpenCV class CvCalibFilter handles intrinsic calibration.
Once these values have been set, each frame is passed to the FindEtalon() method.

After the intrinsic calculation, the method IsCalibrated() is checked. Following
successful calibration, the intrinsic parameters are written to a text file.
*/

calibrate.h

/* Header file for calibration component of tracking system */
void tkGetCameraParams(CvCamera * dest,int src);
void tkCalibrate(IplImage * left, IplImage * right, int mode);



calibrate.cpp

#include "global.h"
#include <time.h>
#include "calibrate.h"
#include "library.h"
/* the calibration class */
extern CvCalibFilter clCalibration;
/* variables for the two cameras */
extern CvCamera * clLeftCamera;
extern CvCamera * clRightCamera;
/* some status flags */
extern bool fIntrinsicDone;
extern bool fExtrinsicDone;
extern int fCalibMethod;
extern int iCalibPrevFrame;
extern int fRunMode;
void tkCalibrate(IplImage * left, IplImage * right, int mode){
IplImage *images[] = {left,right};
if(mode==MODE_CALIBRATE_INT){
switch(fCalibMethod)
{
case CALIB_FILE:
{
//Do it from file
printf("\nLoading calibration data from file 'intcalib.dat'...\n");
clCalibration.LoadCameraParams("./pre-calib/intcalib.dat");
if(clCalibration.IsCalibrated()){
clLeftCamera = (CvCamera *)calloc(1,sizeof(struct CvCamera));
tkGetCameraParams(clLeftCamera,0);
clRightCamera = (CvCamera *)calloc(1,sizeof(struct CvCamera));
tkGetCameraParams(clRightCamera,1);
printf("\nLEFT >> Focal Length:[%f,%f]\n",clLeftCamera-
>matrix[0],clLeftCamera->matrix[4]);
printf("LEFT >> Centre Point:[%f,%f]\n",clLeftCamera-
>matrix[2],clLeftCamera->matrix[5]);
printf("RIGHT >> Focal Length:[%f,%f]\n",clRightCamera-
>matrix[0],clRightCamera->matrix[4]);
printf("RIGHT >> Centre Point:[%f,%f]\n\n",clRightCamera-
>matrix[2],clRightCamera->matrix[5]);
fIntrinsicDone = true;
fCalibMethod = CALIB_UNSET;
}
else {
printf("Calibration failed...unable to locate parameter file\n");
fIntrinsicDone = false;
fCalibMethod = CALIB_UNSET;
fRunMode = MODE_INIT;
}
break;
}
case CALIB_BMP:
{
//Do it from saved bitmaps
int i=0;
char filename[80];
printf("Calibration from saved bitmaps... [");
while(!clCalibration.IsCalibrated() && i<20)
{
sprintf(filename,"./pre-calib/calib_%i-imgL.bmp",i);
images[0] = cvLoadImage(filename);
sprintf(filename,"./pre-calib/calib_%i-imgR.bmp",i);
images[1] = cvLoadImage(filename);
if(images[0]!=NULL && images[1]!=NULL){
if(clCalibration.FindEtalon(images))
{
printf("#");
clCalibration.Push();
if(clCalibration.IsCalibrated()){
fIntrinsicDone=true;
fCalibMethod = CALIB_UNSET;
printf("]\nIntrinsic parameters now
found...\n");
clLeftCamera = (CvCamera
*)calloc(1,sizeof(struct CvCamera));
tkGetCameraParams(clLeftCamera,0);
clRightCamera = (CvCamera
*)calloc(1,sizeof(struct CvCamera));
tkGetCameraParams(clRightCamera,1);
printf("\nLEFT >> Focal
Length:[%f,%f]\n",clLeftCamera->matrix[0],clLeftCamera->matrix[4]);
printf("LEFT >> Centre
Point:[%f,%f]\n",clLeftCamera->matrix[2],clLeftCamera->matrix[5]);
printf("RIGHT >> Focal
Length:[%f,%f]\n",clRightCamera->matrix[0],clRightCamera->matrix[4]);
printf("RIGHT >> Centre
Point:[%f,%f]\n\n",clRightCamera->matrix[2],clRightCamera->matrix[5]);
clCalibration.SaveCameraParams("./newcalib/
intcalib.dat");
printf("\nIntrinsic parameters written to
file 'intcalib.dat'...\n");
}
}
cvReleaseImage(&images[0]);
cvReleaseImage(&images[1]);
}
i++;
}
if(!clCalibration.IsCalibrated()){
printf("]...failed!\n");
clCalibration.Stop();
fIntrinsicDone = false;
fCalibMethod = CALIB_UNSET;
fRunMode = MODE_INIT;
}
break;
}
case CALIB_LIVE:
{
bool found = clCalibration.FindEtalon(images);
if(!found) clCalibration.DrawPoints(images);
else{
char filename[30];
int cur_time = clock();
if(cur_time >= iCalibPrevFrame + 1000){
int imgs = clCalibration.GetFrameCount();
if(imgs==0)printf("Calibration from live cameras beginning ...[");
printf("#");
sprintf(filename,"./new-calib/calib_%i",imgs);
writeImagePair(images,filename);
iCalibPrevFrame = cur_time;
clCalibration.Push();
cvXorS(left,cvScalarAll(255),left);
cvXorS(right,cvScalarAll(255),right);
}
if(clCalibration.IsCalibrated()){
fIntrinsicDone = true;
fCalibMethod = CALIB_UNSET;
printf("]\nIntrinsic parameters now found...\n");
clLeftCamera = (CvCamera *)calloc(1,sizeof(struct CvCamera));
tkGetCameraParams(clLeftCamera,0);
clRightCamera = (CvCamera *)calloc(1,sizeof(struct CvCamera));
tkGetCameraParams(clRightCamera,1);
printf("\nLEFT >> Focal Length:[%f,%f]\n",clLeftCamera->matrix[0],clLeftCamera->matrix[4]);
printf("LEFT >> Centre Point:[%f,%f]\n",clLeftCamera->matrix[2],clLeftCamera->matrix[5]);
printf("RIGHT >> Focal Length:[%f,%f]\n",clRightCamera->matrix[0],clRightCamera->matrix[4]);
printf("RIGHT >> Centre Point:[%f,%f]\n\n",clRightCamera->matrix[2],clRightCamera->matrix[5]);
clCalibration.SaveCameraParams("./new-calib/intcalib.dat");
printf("\nIntrinsic parameters written to file 'intcalib.dat'...\n");
}
}
break;
}
}
}
else if(mode==MODE_CALIBRATE_EXT){
CvCalibFilter tempCalib;
double dEtalonParams[3] = {8,6,3.3};
int i=0,j=0,count = 0;
bool found = false;
float focalLength[2];
float rotVect[3];
float jacobian[3*9];
CvMat jacmat,vecmat,rotMatr;
FILE * fp;
CvPoint2D32f* pts = (CvPoint2D32f *) NULL;
CvPoint2D32f gloImgPoints[2][EXT_REQ_POINTS];
CvPoint3D32f gloWldPoints[EXT_REQ_POINTS];
tempCalib.SetEtalon(CV_CALIB_ETALON_CHESSBOARD,dEtalonParams);
tempCalib.SetCameraCount(2);
found = tempCalib.FindEtalon(images);
if(!found) tempCalib.DrawPoints(images);
if(found){
writeImagePair(images,"./new-calib/ext_cal");
//Populate the gloImgPoints and gloWrldPoints arrays
//Load the world points from the text file.
if((fp=fopen("worldpoints.txt", "r"))==NULL)
{
printf("Unable to open worldpoints.txt\n");
fRunMode=MODE_CALIBRATE_INT;
return;
}
for(i=0;i<35;i++)
fscanf (fp,"%f,%f,%f ",&gloWldPoints[i].x,&gloWldPoints[i].y,&gloWldPoints[i].z);
fclose(fp);
printf("done file!\n");
//Populate the image points array
for(i=0;i<2;i++){
tempCalib.GetLatestPoints(i, &pts, &count, &found);
if(pts[0].x < pts[5].x){
//array is sorted correctly
for(j=0;j<EXT_REQ_POINTS;j++){
gloImgPoints[i][j].x = pts[j].x;
gloImgPoints[i][j].y = pts[j].y;
printf("#");
}
}
else {
//array is not right, swap it around...
for(j=0;j<EXT_REQ_POINTS;j++){
gloImgPoints[i][j].x = pts[count-j-1].x;
gloImgPoints[i][j].y = pts[count-j-1].y;
}
}
printf("\n");
}
printf("Calibration using %i points....\n",EXT_REQ_POINTS);
for(i=0;i<EXT_REQ_POINTS;i++){
printf("[P%i]",i);
printf("\tL [%3.1f,%3.1f]\tR [%3.1f,%3.1f]\tW
[%3.1f,%3.1f,%3.1f]\n",gloImgPoints[SRC_LEFT_CAMERA][i].x,
gloImgPoints[SRC_LEFT_CAMERA][i].y,
gloImgPoints[SRC_RIGHT_CAMERA][i].x,
gloImgPoints[SRC_RIGHT_CAMERA][i].y,
gloWldPoints[i].x,
gloWldPoints[i].y,
gloWldPoints[i].z);
}
printf("\n");
focalLength[0] = clLeftCamera->matrix[0];
focalLength[1] = clLeftCamera->matrix[5];
cvFindExtrinsicCameraParams(EXT_REQ_POINTS,
cvSize(cvRound(clLeftCamera->imgSize[0]),cvRound(clLeftCamera->imgSize[1])),
&gloImgPoints[0][0],
&gloWldPoints[0],
focalLength,
cvPoint2D32f(clLeftCamera->matrix[3],clLeftCamera->matrix[6]),
&clLeftCamera->distortion[0],
&rotVect[0],
&clLeftCamera->transVect[0]);
rotMatr = cvMat( 3, 3, CV_MAT32F, clLeftCamera->rotMatr );
jacmat = cvMat( 3, 9, CV_MAT32F, jacobian );
vecmat = cvMat( 3, 1, CV_MAT32F, rotVect );
cvRodrigues( &rotMatr, &vecmat, &jacmat, CV_RODRIGUES_V2M );
printf("LEFT >> Rot :[%f | %f | %f]\n",clLeftCamera->rotMatr[0],clLeftCamera-
>rotMatr[1],clLeftCamera->rotMatr[2]);
printf("LEFT >> Trans :[%f | %f | %f]\n",clLeftCamera->transVect[0],clLeftCamera-
>transVect[1],clLeftCamera->transVect[2]);
focalLength[0] = clRightCamera->matrix[0];
focalLength[1] = clRightCamera->matrix[5];
cvFindExtrinsicCameraParams(EXT_REQ_POINTS,
cvSize(cvRound(clRightCamera->imgSize[0]),cvRound(clRightCamera->imgSize[1])),
&gloImgPoints[1][0],
&gloWldPoints[0],
focalLength,
cvPoint2D32f(clRightCamera->matrix[3],clRightCamera->matrix[6]),
&clRightCamera->distortion[0],
&rotVect[0],
&clRightCamera->transVect[0]);
rotMatr = cvMat( 3, 3, CV_MAT32F, clRightCamera->rotMatr );
jacmat = cvMat( 3, 9, CV_MAT32F, jacobian );
vecmat = cvMat( 3, 1, CV_MAT32F, rotVect );
cvRodrigues( &rotMatr, &vecmat, &jacmat, CV_RODRIGUES_V2M );
printf("RIGHT >> Rot :[%f | %f | %f]\n",clRightCamera->rotMatr[0],clRightCamera-
>rotMatr[1],clRightCamera->rotMatr[2]);
printf("RIGHT >> Trans :[%f | %f | %f]\n",clRightCamera-
>transVect[0],clRightCamera->transVect[1],clRightCamera->transVect[2]);
printf("\n *** CALIBRATED! ***\n");
fRunMode = MODE_RECONSTRUCT_SAMPLE;
fExtrinsicDone = true;
}
}
}
void tkGetCameraParams(CvCamera * dest,int src){
const CvCamera * temp_camera = clCalibration.GetCameraParams(src);
dest->distortion[0] = temp_camera->distortion[0];
dest->distortion[1] = temp_camera->distortion[1];
dest->distortion[2] = temp_camera->distortion[2];
dest->distortion[3] = temp_camera->distortion[3];
dest->imgSize[0] = temp_camera->imgSize[0];
dest->imgSize[1] = temp_camera->imgSize[1];
dest->matrix[0] = temp_camera->matrix[0];
dest->matrix[1] = temp_camera->matrix[1];
dest->matrix[2] = temp_camera->matrix[2];
dest->matrix[3] = temp_camera->matrix[3];
dest->matrix[4] = temp_camera->matrix[4];
dest->matrix[5] = temp_camera->matrix[5];
dest->matrix[6] = temp_camera->matrix[6];
dest->matrix[7] = temp_camera->matrix[7];
dest->matrix[8] = temp_camera->matrix[8];
dest->rotMatr[0] = temp_camera->rotMatr[0];
dest->rotMatr[1] = temp_camera->rotMatr[1];
dest->rotMatr[2] = temp_camera->rotMatr[2];
dest->rotMatr[3] = temp_camera->rotMatr[3];
dest->rotMatr[4] = temp_camera->rotMatr[4];
dest->rotMatr[5] = temp_camera->rotMatr[5];
dest->rotMatr[6] = temp_camera->rotMatr[6];
dest->rotMatr[7] = temp_camera->rotMatr[7];
dest->rotMatr[8] = temp_camera->rotMatr[8];
dest->transVect[0] = temp_camera->transVect[0];
dest->transVect[1] = temp_camera->transVect[1];
dest->transVect[2] = temp_camera->transVect[2];
}

/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/*
Identifies the features in each of the images in order to pass them on to the
Correspondence component for matching.

Consists of a colour segmentation followed by circle detection.
*/

segmentation.h

/* image segmentation */
/* how large an area to sample from around the click */
#define SAMPLE_SIZE 20
/* the threshhold window around the colour to be matched */
#define THRESH_BAND 60
bool tkGenerateColMap(IplImage * input,int x,int y);
bool segColMapSimpleAve(IplImage * input, int x, int y);
bool segColMapRThresh(IplImage * input, int x, int y);
struct t2DPoint * tkSegment(IplImage * input);
struct t2DPoint * segFindFeatures_Ave(IplImage * input);
struct t2DPoint * segFindFeatures_RThresh(IplImage * input);
struct t2DPoint * findCircles(IplImage * input,IplImage * draw);



segmentation.cpp

#include "global.h"
#include "segmentation.h"
#include "library.h"
int thresh_r;
int thresh_g;
int thresh_b;
char RchnThresh;
extern bool colmapdone;
bool tkGenerateColMap(IplImage * input,int x,int y){
if(segColMapSimpleAve(input,x,y)) return true;
else return false;
}
struct t2DPoint * tkSegment(IplImage * input){
return segFindFeatures_Ave(input);
}
bool segColMapSimpleAve(IplImage * input, int x, int y){
char sample[SAMPLE_SIZE][SAMPLE_SIZE][3];
int row,col,chan,v;
int norm_total;
int left = x - (SAMPLE_SIZE/2);
int right = x + (SAMPLE_SIZE/2);
int top = y - (SAMPLE_SIZE/2);
int bottom = y + (SAMPLE_SIZE/2);
/* First, lets duplicate the bit we want to sample */
for(row=top;row<bottom;row++){
for(col=left;col<right;col++){
sample[row-top][col-left][C_RED] = input->imageData[(row*input->widthStep)+(col*3)+C_RED];
sample[row-top][col-left][C_BLUE] = input->imageData[(row*input->widthStep)+(col*3)+C_BLUE];
sample[row-top][col-left][C_GREEN] = input->imageData[(row*input->widthStep)+(col*3)+C_GREEN];
}
}
/* Now we'll normalise the sample */
for(row=0;row<SAMPLE_SIZE;row++){
for(col=0;col<SAMPLE_SIZE;col++){
norm_total = sample[row][col][C_RED] + sample[row][col][C_BLUE] + sample[row][col][C_GREEN];
sample[row][col][C_RED] = (int)((float)sample[row][col][C_RED]/(float)norm_total*255.0);
sample[row][col][C_BLUE] = (int)((float)sample[row][col][C_BLUE]/(float)norm_total*255.0);
sample[row][col][C_GREEN] = (int)((float)sample[row][col][C_GREEN]/(float)norm_total*255.0);
}
}
/* Now let's smooth it a bit */
for(row=0;row<SAMPLE_SIZE;row++){
for(col=0;col<SAMPLE_SIZE;col++){
for(chan=0;chan<3;chan++){
v = sample[row][col][chan];
if(v>0){
v -= 1;
sample[row][col][chan] = v;
if(row>0 && sample[row-1][col][chan] < v) sample[row-1][col][chan] = v;
if(col>0 && sample[row][col-1][chan] < v) sample[row][col-1][chan] = v;
}
}
}
}
for(row=0;row<SAMPLE_SIZE;row++){
for(col=0;col<SAMPLE_SIZE;col++){
for(chan=0;chan<3;chan++){
v = sample[row][col][chan];
if(v>0){
v -= 1;
sample[row][col][chan] = v;
if(row < SAMPLE_SIZE-1 && sample[row+1][col][chan] < v) sample[row+1][col][chan] = v;
if(col < SAMPLE_SIZE-1 && sample[row][col+1][chan] < v) sample[row][col+1][chan] = v;
}
}
}
}
/*Now find the average*/
thresh_r = sample[0][0][C_RED];
thresh_b = sample[0][0][C_BLUE];
thresh_g = sample[0][0][C_GREEN];
for(row=0;row<SAMPLE_SIZE;row++){
for(col=0;col<SAMPLE_SIZE;col++){
thresh_r = (thresh_r+sample[row][col][C_RED])/2;
thresh_b = (thresh_b+sample[row][col][C_BLUE])/2;
thresh_g = (thresh_g+sample[row][col][C_GREEN])/2;
}
}
printf("\nTHRESHHOLDS: R:%i G:%i B:%i\n",thresh_r,thresh_g,thresh_b);
return true;
}
struct t2DPoint * segFindFeatures_Ave(IplImage * input){
IplImage * temp;
struct t2DPoint * features;
int i,j;
int norm_r=0,norm_g=0,norm_b=0;
int low_r=0,low_g=0,low_b=0;
int high_r=0,high_g=0,high_b=0;
int norm_total=0,in_total=0;
char * pixel_in = (char *)NULL;
char * pixel_out = (char *)NULL;
/* Clone the input image! */
temp = cvCloneImage(input);
removeNoise(input);
removeNoise(input);
/* Now work out what the normalised boundaries are from
the specified r,g,b values */
in_total = thresh_r+thresh_g+thresh_b;
if(in_total!=0){
low_r = (int)(((float)thresh_r / (float)in_total)*255.0);
low_g = (int)(((float)thresh_g / (float)in_total)*255.0);
low_b = (int)(((float)thresh_b / (float)in_total)*255.0);
high_r = low_r + THRESH_BAND;
high_g = low_g + THRESH_BAND;
high_b = low_b + THRESH_BAND;
low_r -= THRESH_BAND;
low_g -= THRESH_BAND;
low_b -= THRESH_BAND;
}
else {
low_r = 0;
low_g = 0;
low_b = 0;
high_r = THRESH_BAND;
high_g = THRESH_BAND;
high_b = THRESH_BAND;
}
for(i=0;i<input->height;i++){
for(j=0;j<(input->widthStep/3);j++){
pixel_in = &input->imageData[(i*input->widthStep)+(j*3)];
pixel_out = &temp->imageData[(i*input->widthStep)+(j*3)];
norm_total = (int)pixel_in[C_RED] + (int)pixel_in[C_GREEN] + (int)pixel_in[C_BLUE];
if(norm_total!=0){
norm_r = (int)(((float)pixel_in[C_RED] / (float)norm_total)*255.0);
norm_g = (int)(((float)pixel_in[C_GREEN] / (float)norm_total)*255.0);
norm_b = (int)(((float)pixel_in[C_BLUE] / (float)norm_total)*255.0);
}
else if(norm_total==0){
norm_r=0;norm_g=0;norm_b=0;
}
if(norm_r >= low_r && norm_r <= high_r &&
norm_g >= low_g && norm_g <= high_g &&
norm_b >= low_b && norm_b <= high_b){
pixel_out[C_RED] = (char)0;
pixel_out[C_GREEN] = (char)0;
pixel_out[C_BLUE] = (char)0;
}
else {
pixel_out[C_RED] = (char)255;
pixel_out[C_GREEN] = (char)255;
pixel_out[C_BLUE] = (char)255;
}
}
}
features = findCircles(temp,input);
cvReleaseImage(&temp);
return features;
}
bool segColMapRThresh(IplImage * input, int x, int y){
char * pixel;
int left = x - 2;
int right = x + 2;
int top = y - 2;
int bottom = y + 2;
int row,col;
if(input->depth == IPL_DEPTH_8U) printf ("IPL_DEPTH_8U\n");
if (input->depth == IPL_DEPTH_8S) printf ("IPL_DEPTH_8S\n");
if (input->depth == IPL_DEPTH_16S) printf ("IPL_DEPTH_16S\n");
if (input->depth == IPL_DEPTH_32S) printf ("IPL_DEPTH_32S\n");
if (input->depth == IPL_DEPTH_32F) printf ("IPL_DEPTH_32F\n");
if (input->depth == IPL_DEPTH_64F) printf ("IPL_DEPTH_64F\n");
for(row=top;row<bottom;row++){
for(col=left;col<right;col++){
pixel = &input->imageData[(input->widthStep*row)+(col*3)];
if(col==left && row==top){
RchnThresh = pixel[C_RED];
}
else{
RchnThresh = (RchnThresh/2) + (pixel[C_RED]/2);
}
printf("%c",pixel[C_RED]);
}
printf("\n");
}
return true;
}
struct t2DPoint * segFindFeatures_RThresh(IplImage * input){
IplImage * threshed = cvCloneImage(input);
struct t2DPoint * features;
int i,j;
char * pixel_in = (char *)NULL;
char * pixel_out = (char *)NULL;
for(i=0;i<input->height;i++){
for(j=0;j<(input->widthStep/3);j++){
pixel_in = &input->imageData[(i*input->widthStep)+(j*3)];
pixel_out = &threshed->imageData[(i*threshed->widthStep)+(j*3)];
if((pixel_in[C_RED] > (RchnThresh - 50)) &&
(pixel_in[C_RED] < (RchnThresh + 20)) &&
(pixel_in[C_GREEN] < (char)50) &&
(pixel_in[C_BLUE] < (char)110)){
pixel_out[C_RED] = (char)0;
pixel_out[C_BLUE] = (char)0;
pixel_out[C_GREEN] = (char)0;
}
else {
pixel_out[C_RED] = (char)255;
pixel_out[C_BLUE] = (char)255;
pixel_out[C_GREEN] = (char)255;
}
}
}
features = findCircles(threshed,input);
cvReleaseImage(&threshed);
return features;
}
struct t2DPoint * findCircles(IplImage * input,IplImage * draw){
CvMemStorage * storage;
CvSeq * contour;
CvBox2D * box;
CvPoint * pointArray;
CvPoint2D32f * pointArray32f;
CvPoint center;
float myAngle,ratio;
int i,header_size,count,length,width;
IplImage * gray_input = cvCreateImage(cvGetSize(input),IPL_DEPTH_8U,1);
struct t2DPoint * markers = (struct t2DPoint *)NULL;
struct t2DPoint * temppt = (struct t2DPoint *)NULL;
//Convert the input image to grayscale.
cvCvtColor(input,gray_input,CV_RGB2GRAY);
//Remove noise and smooth
removeNoise(gray_input);
//Edge detect the image with Canny algorithm
cvCanny(gray_input,gray_input,25,150,3);
//Allocate memory
box = (CvBox2D *)malloc(sizeof(CvBox2D));
header_size = sizeof(CvContour);
storage = cvCreateMemStorage(1000);
// Find all the contours in the image.
cvFindContours(gray_input,storage,&contour,header_size,CV_RETR_EXTERNAL,CV_CHAIN_APPROX_TC89_KCOS);
while(contour!=NULL)
{
if(CV_IS_SEQ_CURVE(contour))
{
count = contour->total;
pointArray = (CvPoint *)malloc(count * sizeof(CvPoint));
cvCvtSeqToArray(contour,pointArray,CV_WHOLE_SEQ);
pointArray32f = (CvPoint2D32f *)malloc((count + 1) * sizeof(CvPoint2D32f));
for(i=0;i<count-1;i++){
pointArray32f[i].x = (float)(pointArray[i].x);
pointArray32f[i].y = (float)(pointArray[i].y);
}
pointArray32f[i].x = (float)(pointArray[0].x);
pointArray32f[i].y = (float)(pointArray[0].y);
if(count>7){
cvFitEllipse(pointArray32f,count,box);
ratio = (float)box->size.width/(float)box->size.height;
center.x = (int)box->center.x;
center.y = (int)box->center.y;
length = (int)box->size.height;
width = (int)box->size.width;
myAngle = box->angle;
if((center.x>0) && (center.y>0)){
temppt = (struct t2DPoint *)malloc(sizeof(struct t2DPoint));
temppt->x = center.x;
temppt->y = center.y;
temppt->size = length;
temppt->next = markers;
temppt->previous = (struct t2DPoint *)NULL;
if(markers!=NULL) markers->previous = temppt;
markers = temppt;
if(draw!=NULL) cvCircle(draw,center,(int)length/2,RGB(0,0,255),-1);
/*cvEllipse(input,
center,
cvSize((int)width/2,(int)length/2),
-box->angle,
0,
360,
RGB(0,255,0),
1);*/
}
}
free(pointArray32f);
free(pointArray);
}
contour = contour->h_next;
}
free(contour);
free(box);
cvReleaseImage(&gray_input);
cvReleaseMemStorage(&storage);
return markers;
}

/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/*
Takes two linked lists of image features representing the features
found in the left and right images, as chosen by segmentation.cpp,
and returns a list of matched features.
*/


correspondence.h

/* correspondence */
struct tCFeature * tkCorrespond(struct t2DPoint * left_features,struct t2DPoint * right_features);
struct tCFeature * corrSimple(struct t2DPoint * left, struct t2DPoint * right);
struct t2DPoint * corrSimple_1(struct t2DPoint * marker, struct t2DPoint * scene);



correspondence.cpp

#include "global.h"
#include "correspondence.h"
struct tCFeature * tkCorrespond(struct t2DPoint * left_features,struct t2DPoint * right_features){
if(left_features!=NULL && right_features!=NULL)
return corrSimple(left_features,right_features);
else
return NULL;
}
struct tCFeature * corrSimple(struct t2DPoint * left, struct t2DPoint * right){
struct t2DPoint * t_left = left;
struct t2DPoint * t_right = right;
struct t2DPoint * match4left = (struct t2DPoint *)NULL;
struct t2DPoint * match4right = (struct t2DPoint *)NULL;
struct t2DPoint * temp_del = (struct t2DPoint *)NULL;
struct tCFeature * result = (struct tCFeature *)NULL;
struct tCFeature * temp_match = (struct tCFeature *)NULL;
bool match = false;
while(t_left!=NULL){
match = false;
match4left = corrSimple_1(t_left,right);
if(match4left!=NULL){
match4right = corrSimple_1(match4left,left);
if(t_left==match4right){
match = true;
//printf("MATCHED! L[%i,%i] R[%i,%i]\n",t_left->x,t_left->y,match4left-
>x,match4left->y);
temp_match = (struct tCFeature *)malloc(sizeof(struct tCFeature));
temp_match->iLeft[0] = t_left->x;
temp_match->iLeft[1] = t_left->y;
temp_match->iRight[0] = match4left->x;
temp_match->iRight[1] = match4left->y;
temp_match->pNext = result;
result = temp_match;
//Delete the element from the left list
temp_del = t_left;
t_left=t_left->next;
if(temp_del->previous==NULL){
left = temp_del->next;
}
else {
temp_del->previous->next = temp_del->next;
}
if(temp_del->next!=NULL){
temp_del->next->previous = temp_del->previous;
}
free(temp_del);
//Now do the element from the right list
temp_del = match4left;
if(temp_del->previous==NULL){
right = temp_del->next;
}
else {
temp_del->previous->next = temp_del->next;
}
if(temp_del->next!=NULL){
temp_del->next->previous = temp_del->previous;
}
free(temp_del);
}
}
else{
//the right list is empty - all matched!
//break out!
match=true;
t_left = NULL;
}
//We should only increment the pointer if there hasn't been a match, otherwise
//all hell will break loose!
if(!match)t_left = t_left->next;
}
//if(left!=NULL) printf("Not all markers matched in the left frame\n");
//if(right!=NULL) printf("Not all markers matched int he right frame\n");
return result;
}
struct t2DPoint * corrSimple_1(struct t2DPoint * marker, struct t2DPoint * scene){
struct t2DPoint * temp_pt = scene;
double temp_dist = 0;
struct t2DPoint * best_pt = (struct t2DPoint *)NULL;
double best_dist = 1000000000;
while(temp_pt!=NULL){
if(abs(temp_pt->size-marker->size)<=(marker->size/5)){
temp_dist = sqrt(((abs(marker->x)-abs(temp_pt->x)) * (abs(marker->x)-abs(temp_pt->x)))
+((abs(marker->y)-abs(temp_pt->y)) * (abs(marker->y)-abs(temp_pt->y))));
if(temp_dist<best_dist){
best_dist = temp_dist;
best_pt = temp_pt;
}
}
temp_pt = temp_pt->next;
}
return best_pt;
}

//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/*
Involves a significant amount of matrix manipulation, and to avoid duplicating
code, a file matlib.cpp was also written. This is a minimal matrix library
which implements only the functionality needed for this project; no external
libraries were used.
See matlib.cpp
*/

reconstruct.h

//3D Reconstruction library header
#define PI 3.14159
void tkReconstruct(struct tCFeature * object,char * output);
struct tMatrix * reconstruct(struct camera_ray * left, struct camera_ray * right);
struct tMatrix * generateRotMat(double x_rot, double y_rot, double z_rot);
struct tMatrix * defPlane(struct tMatrix * point, struct tMatrix * vec1, struct tMatrix * vec2);
void pix2mm(struct camera_ray *);
double calcBeta(struct tMatrix * trans, struct tMatrix * vec, struct tMatrix * iv, struct tMatrix * ip);
struct tMatrix * genIntersect(struct tMatrix * trans, struct tMatrix * vec, double beta);



reconstruct.cpp

#include "global.h"
#include "matlib.h"
#include "reconstruct.h"
/* variables for the two cameras */
extern CvCamera * clLeftCamera;
extern CvCamera * clRightCamera;
void tkReconstruct(struct tCFeature * object,char * output){
struct camera_ray * leftcam_ray = (struct camera_ray *)malloc(sizeof(struct camera_ray));
struct camera_ray * rightcam_ray = (struct camera_ray *)malloc(sizeof(struct camera_ray));
struct tMatrix * result = (struct tMatrix *)NULL;
char outstring[256];
leftcam_ray->cam_tran[0] = 505.0;
leftcam_ray->cam_tran[1] = 485.0;
leftcam_ray->cam_tran[2] = 1000.0;
leftcam_ray->cam_rot = 0.0;
leftcam_ray->pixelsize[0] = (float)15.0/(float)352.0;
leftcam_ray->pixelsize[1] = (float)12.0/(float)288.0;
leftcam_ray->vector[0] = (float)175 - (float)object->iLeft[0];
leftcam_ray->vector[1] = (float)140 - (float)object->iLeft[1];
leftcam_ray->vector[2] = (float)clLeftCamera->matrix[0];
rightcam_ray->cam_tran[0] = 235.0;
rightcam_ray->cam_tran[1] = 485.0;
rightcam_ray->cam_tran[2] = 1000.0;
rightcam_ray->cam_rot = 0.0;
rightcam_ray->pixelsize[0] = (float)15.0/(float)352.0;
rightcam_ray->pixelsize[1] = (float)12.0/(float)288.0;
rightcam_ray->vector[0] = (float)175 - (float)object->iRight[0];
rightcam_ray->vector[1] = (float)140 - (float)object->iRight[1];
rightcam_ray->vector[2] = (float)clRightCamera->matrix[0];
result = reconstruct(leftcam_ray,rightcam_ray);
//printf("OBJECT LOCATED AT:\n");
//matPrint(result);
sprintf(outstring,"[P::%.1f,%.1f,%.1f]",result->matrix[0],result->matrix[1],result->matrix[2]);
strcat(output,outstring);
}
struct tMatrix * reconstruct(struct camera_ray * left, struct camera_ray * right){
struct tMatrix * vl_norm = (struct tMatrix *)NULL;
struct tMatrix * vr_norm = (struct tMatrix *)NULL;
struct tMatrix * rl = generateRotMat(0.0,left->cam_rot,0.0);
struct tMatrix * rr = generateRotMat(0.0,right->cam_rot,0.0);
struct tMatrix * tl = matInit(3,1,left->cam_tran);
struct tMatrix * tr = matInit(3,1,right->cam_tran);
struct tMatrix * vl_prime = (struct tMatrix *)NULL;
struct tMatrix * vr_prime = (struct tMatrix *)NULL;
struct tMatrix * axis = (struct tMatrix *)NULL;
struct tMatrix * l_plane = (struct tMatrix *)NULL;
struct tMatrix * r_plane = (struct tMatrix *)NULL;
struct tMatrix * lp_norm = (struct tMatrix *)NULL;
struct tMatrix * rp_norm = (struct tMatrix *)NULL;
struct tMatrix * iv = (struct tMatrix *)NULL;
struct tMatrix * ip = (struct tMatrix *)NULL;
struct tMatrix * l_int = (struct tMatrix *) NULL;
struct tMatrix * r_int = (struct tMatrix *) NULL;
struct tMatrix * object = (struct tMatrix *) NULL;
double l_beta = 0.0,r_beta = 0.0;
//Generate normalised left vector
pix2mm(left);
vl_norm = matInit(3,1,left->vector);
matNorm(vl_norm);
//Generate normalised right vector
pix2mm(right);
vr_norm = matInit(3,1,right->vector);
matNorm(vr_norm);
//Correct the image vectors for camera rotation.
vl_prime = matMultiply(rl,vl_norm);
vr_prime = matMultiply(rr,vr_norm);
//Generate the "axis" vector
axis = matCrossProd(vl_prime,vr_prime);
//Generate the equations for the planes
l_plane = defPlane(tl,vl_prime,axis);
r_plane = defPlane(tr,vr_prime,axis);
//Generate left and right plane normals
lp_norm = matInit(3,1,l_plane->matrix);
rp_norm = matInit(3,1,r_plane->matrix);
//Generate the intersection vector and point
iv = matCrossProd(lp_norm,rp_norm);
matNorm(iv);
ip = matInit(3,1,NULL);
ip->matrix[2] = 0.0;
ip->matrix[1] = (-l_plane->matrix[3]-(l_plane->matrix[0]*r_plane->matrix[3]))/
((r_plane->matrix[0]*l_plane->matrix[1])-(l_plane->matrix[0]*r_plane->matrix[1]));
ip->matrix[0] = ((-r_plane->matrix[3])-(r_plane->matrix[1]*ip->matrix[1]))/r_plane->matrix[0];
//Calculate the beta value
l_beta = calcBeta(tl,vl_prime,iv,ip);
r_beta = calcBeta(tr,vr_prime,iv,ip);
l_int = genIntersect(tl,vl_prime,l_beta);
r_int = genIntersect(tr,vr_prime,r_beta);
object = matInit(3,1,NULL);
object->matrix[X] = (l_int->matrix[X]+r_int->matrix[X])/2;
object->matrix[Y] = (l_int->matrix[Y]+r_int->matrix[Y])/2;
object->matrix[Z] = (l_int->matrix[Z]+r_int->matrix[Z])/2;
//Release all of the memory we have allocated for matrices.
matRelease(vl_norm);
matRelease(vr_norm);
matRelease(rl);
matRelease(rr);
matRelease(tr);
matRelease(tl);
matRelease(vl_prime);
matRelease(vr_prime);
matRelease(axis);
matRelease(l_plane);
matRelease(r_plane);
matRelease(lp_norm);
matRelease(rp_norm);
matRelease(iv);
matRelease(ip);
matRelease(l_int);
matRelease(r_int);
//Finally return the 3D position.
return object;
}
struct tMatrix * generateRotMat(double x_rot, double y_rot, double z_rot){
double x_mat[] = {1,0,0,0,cos(x_rot*PI/180.0),sin(x_rot*PI/180.0),0,-sin(x_rot*PI/180.0),cos(x_rot*PI/180.0)};
double y_mat[] = {cos(y_rot*PI/180.0),0,sin(y_rot*PI/180.0),0,1,0,-sin(y_rot*PI/180.0),0,cos(y_rot*PI/180.0)};
double z_mat[] = {cos(z_rot*PI/180.0),sin(z_rot*PI/180.0),0,-sin(z_rot*PI/180.0),cos(z_rot*PI/180.0),0,0,0,1};
struct tMatrix * x_rot_mat = matInit(3,3,x_mat);
struct tMatrix * y_rot_mat = matInit(3,3,y_mat);
struct tMatrix * z_rot_mat = matInit(3,3,z_mat);
struct tMatrix * temp_mat;
struct tMatrix * result;
temp_mat = matMultiply(x_rot_mat,y_rot_mat);
result = matMultiply(temp_mat,z_rot_mat);
matRelease(x_rot_mat);
matRelease(y_rot_mat);
matRelease(z_rot_mat);
matRelease(temp_mat);
return result;
}
void pix2mm(struct camera_ray * camera){
camera->vector[0] = camera->vector[0]*camera->pixelsize[0];
camera->vector[1] = camera->vector[1]*camera->pixelsize[1];
camera->vector[2] = camera->vector[2]*(camera->pixelsize[0]+camera->pixelsize[1])/2;
}
struct tMatrix * defPlane(struct tMatrix * point, struct tMatrix * vec1, struct tMatrix * vec2){
double det_a[9] = {point->matrix[Y],point->matrix[Z],1.0,
vec1->matrix[Y],vec1->matrix[Z],0.0,
vec2->matrix[Y],vec2->matrix[Z],0.0};
double det_b[9] = {point->matrix[Z],1.0,point->matrix[X],
vec1->matrix[Z],0.0,vec1->matrix[X],
vec2->matrix[Z],0.0,vec2->matrix[X]};
double det_c[9] = {1.0,point->matrix[X],point->matrix[Y],
0.0,vec1->matrix[X],vec1->matrix[Y],
0.0,vec2->matrix[X],vec2->matrix[Y]};
double det_d[9] = {point->matrix[X],point->matrix[Y],point->matrix[Z],
vec1->matrix[X],vec1->matrix[Y],vec1->matrix[Z],
vec2->matrix[X],vec2->matrix[Y],vec2->matrix[Z]};
struct tMatrix * a_matrix = matInit(3,3,det_a);
struct tMatrix * b_matrix = matInit(3,3,det_b);
struct tMatrix * c_matrix = matInit(3,3,det_c);
struct tMatrix * d_matrix = matInit(3,3,det_d);
struct tMatrix * plane = matInit(4,1,NULL);
plane->matrix[0] = matDet(a_matrix);
plane->matrix[1] = matDet(b_matrix);
plane->matrix[2] = matDet(c_matrix);
plane->matrix[3] = matDet(d_matrix);
matRelease(a_matrix);
matRelease(b_matrix);
matRelease(c_matrix);
matRelease(d_matrix);
return plane;
}
double calcBeta(struct tMatrix * trans, struct tMatrix * vec, struct tMatrix * iv, struct tMatrix * ip){
double beta_num = (ip->matrix[Y]*iv->matrix[X])
+(iv->matrix[Y]*trans->matrix[X])
-(ip->matrix[X]*iv->matrix[Y])-(iv->matrix[X]*trans->matrix[Y]);
double beta_denom = (iv->matrix[X]*vec->matrix[Y])-(iv->matrix[Y]*vec->matrix[X]);
return beta_num/beta_denom;
}
struct tMatrix * genIntersect(struct tMatrix * trans, struct tMatrix * vec, double beta){
double i_sect[3] = {trans->matrix[X]+(beta*vec->matrix[X]),
trans->matrix[Y]+(beta*vec->matrix[Y]),
trans->matrix[Z]+(beta*vec->matrix[Z])};
return matInit(3,1,i_sect);
}

/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/*
Core reconstruction component. Defines the type tMatrix, which can be used to
represent a matrix of any size rather than statically defining its dimensions.

The matrix itself is represented as a 1-D array of floating point numbers.
*/


matlib.h

//Matrix handler library
#define X 0
#define Y 1
#define Z 2
typedef struct tMatrix {
double * matrix;
int cols;
int rows;
} tMatrix;
struct tMatrix * matMultiply(struct tMatrix * src1, struct tMatrix * src2);
void matRelease(struct tMatrix * mat);
void matAssign(struct tMatrix * mat,double * data);
struct tMatrix * matInit(int rows, int cols, double * data);
void matNorm(struct tMatrix * mat);
void matPrint(struct tMatrix * matrix);
double matDet(struct tMatrix * mat);
struct tMatrix * matCrossProd(struct tMatrix * vec1, struct tMatrix * vec2);
double matDotProd(struct tMatrix * vec1,struct tMatrix * vec2);



matlib.cpp

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "matlib.h"
struct tMatrix * matInit(int rows, int cols, double * data){
int i;
struct tMatrix * new_mat = (struct tMatrix *)malloc(sizeof(struct tMatrix));
new_mat->cols = cols;
new_mat->rows = rows;
new_mat->matrix = (double *)malloc(sizeof(double)*cols*rows);
if(data!=NULL){
for(i=0;i<(new_mat->cols*new_mat->rows);i++){
new_mat->matrix[i] = data[i];
}
}
return new_mat;
}
void matRelease(struct tMatrix * mat){
free(mat->matrix);
free(mat);
}
void matAssign(struct tMatrix * mat,double * data){
int i;
for(i=0;i<(mat->cols*mat->rows);i++){
mat->matrix[i] = data[i];
}
}
struct tMatrix * matMultiply(struct tMatrix * src1, struct tMatrix * src2){
int row,col,k;
double current;
struct tMatrix * dest = matInit(src1->rows,src2->cols,NULL);
for(row=0;row<dest->rows;row++){
for(col=0;col<dest->cols;col++){
current = 0.0;
for(k=0;k<src1->cols;k++)
current += src1->matrix[(row*src1->cols)+k]*src2->matrix[(k*src2->cols)+col];
dest->matrix[(row*dest->cols)+col] = current;
}
}
return dest;
}
void matNorm(struct tMatrix * mat){
double length = sqrt((mat->matrix[X]*mat->matrix[X])
+(mat->matrix[Y]*mat->matrix[Y])
+(mat->matrix[Z]*mat->matrix[Z]));
mat->matrix[X] = mat->matrix[X]/length;
mat->matrix[Y] = mat->matrix[Y]/length;
mat->matrix[Z] = mat->matrix[Z]/length;
}
void matPrint(struct tMatrix * matrix){
if(matrix!=NULL){
int col,row;
printf("MATRIX\n======\n");
for(row=0;row<matrix->rows;row++){
for(col=0;col<matrix->cols;col++){
printf("\t%f",matrix->matrix[(row*matrix->cols)+col]);
}
printf("\n");
}
}
}
double matDet(struct tMatrix * mat){
double det = (mat->matrix[0]*mat->matrix[4]*mat->matrix[8])
- (mat->matrix[2]*mat->matrix[4]*mat->matrix[6])
+ (mat->matrix[1]*mat->matrix[5]*mat->matrix[6])
- (mat->matrix[0]*mat->matrix[5]*mat->matrix[7])
+ (mat->matrix[2]*mat->matrix[3]*mat->matrix[7])
- (mat->matrix[1]*mat->matrix[3]*mat->matrix[8]);
return det;
}
struct tMatrix * matCrossProd(struct tMatrix * vec1, struct tMatrix * vec2){
struct tMatrix * result = matInit(3,1,NULL);
result->matrix[0] = (vec1->matrix[1]*vec2->matrix[2]) - (vec1->matrix[2]*vec2->matrix[1]);
result->matrix[1] = (vec1->matrix[2]*vec2->matrix[0]) - (vec1->matrix[0]*vec2->matrix[2]);
result->matrix[2] = (vec1->matrix[0]*vec2->matrix[1]) - (vec1->matrix[1]*vec2->matrix[0]);
return result;
}
double matDotProd(struct tMatrix * vec1,struct tMatrix * vec2){
double result = (vec1->matrix[0]*vec2->matrix[0])
+ (vec1->matrix[1]*vec2->matrix[1])
+ (vec1->matrix[2]*vec2->matrix[2]);
return result;
}


Just some thoughts:

Do you use a "DEBUG" version of your library?
Did you try your program in release mode?

Martin
Avatar of ukjm2k

ASKER

mgpeschke

Yes, I have tried it in release mode as well; same result.
These header files are missing:

#include <cv.h>
#include <cvaux.h>
#include <highgui.h>

Alex
Avatar of ukjm2k

ASKER

Sorry, must have missed them out...

I'm sure the problem is not a result of this, though.
>>>> I'm sure the problem is not a result of this, though.

Yes, but I couldn't compile without them.

The problem I see with your code is that you are deleting entries from your linked list *but* didn't initialize the head pointers of these lists. I couldn't spot the error because I am not able to debug it, but I think you have to rethink your technique. If you used std::list<tCFeature> instead of your own embedded linked list, you could avoid all of these dangerous deletions of pointers that might still be referenced in another collection.

Regards, Alex
Avatar of ukjm2k

ASKER

How would I go about using std::list? I am not very familiar with this and would appreciate any help to get me started.

Many thanks.
>>>> How would I go about using std::list

(1) You need:

    #include <list>
    using namespace std;

at top of your cpp.

(2) You have to remove the tCFeature* pNext member from your struct tCFeature,
      and the next and prev members in t2DPoint.  BTW, in C++ you don't need typedef struct but only

      struct tCFeature
      {
            int iLeft[2];
            int iRight[2];
      };

      struct t2DPoint
      {
          int x;
          int y;
          int size;
      };


After that you would simply skip the struct keyword and use tCFeature as a type, e. g.

      tCFeature left;  

Note, I would recommend adding a default constructor that initializes the members:

     struct tCFeature
      {
            int iLeft[2];
            int iRight[2];
            tCFeature() { iLeft[0] = iLeft[1] = iRight[0] = iRight[1] = 0; }
      };

      struct t2DPoint
      {
          int x;
          int y;
          int size;
          t2DPoint() : x(0), y(0), size(0) {}
          t2DPoint(int xx, int yy, int s) : x(xx), y(yy), size(s) {}
     };

With that all your instances of tCFeature and t2DPoint were properly initialized.

(3) To use a list now you would do the following:

    list<t2DPoint> points;
    t2DPoint pt(10, 20, 2);
    points.push_back(pt);

Of course you could do that in a loop as well:

    list<t2DPoint> points;
    for (int i = 0; i < 20; ++i)
    {
        points.push_back(t2DPoint(i*10, i*20, i));
    }

Now all your points are in list points.

You can retrieve the points by that:

    for (list<t2DPoint>::iterator it = points.begin(); it != points.end(); ++it)
    {
          t2DPoint pt = *it;   // get a copy from iterator
          int x = pt.x;           // get member variables
          int y = it->x;          // that's equivalent
          ....
    }

(4) To remove an element of the list you can:

     points.pop_back();  // deletes last element
     points.pop_front();   // deletes first element

    for (list<t2DPoint>::iterator it = points.begin(); it != points.end();)
    {
          list<t2DPoint>::iterator itnext = it;  // save the next iterator
          ++itnext;                              // (list iterators cannot do it+1)
          if (it->x == 10 && it->y == 20)
          {
               points.erase(it);  
          }
          it = itnext;   // go on with the saved iterator, as 'it' isn't valid after erase
    }
 
The main advantage of that approach is that you don't need any pointers.
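
Putting the pieces together, here is a minimal compilable sketch (the point values are just made up):

    #include <stdio.h>
    #include <list>
    using namespace std;

    // the t2DPoint from above, repeated here so the sample compiles on its own
    struct t2DPoint
    {
        int x;
        int y;
        int size;
        t2DPoint() : x(0), y(0), size(0) {}
        t2DPoint(int xx, int yy, int s) : x(xx), y(yy), size(s) {}
    };

    int main()
    {
        list<t2DPoint> points;
        int i;
        for (i = 0; i < 5; ++i)
            points.push_back(t2DPoint(i*10, i*20, i));             // fill the list

        list<t2DPoint>::iterator it;
        for (it = points.begin(); it != points.end(); ++it)        // walk the list
            printf("x=%i y=%i size=%i\n", it->x, it->y, it->size);

        points.pop_front();                                        // drop the first point

        for (it = points.begin(); it != points.end(); )            // erase all points with size 3
        {
            if (it->size == 3)
                it = points.erase(it);   // erase returns the next valid iterator
            else
                ++it;
        }
        return 0;
    }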

Hope it was understandable.

Regards, Alex
 


Avatar of ukjm2k

ASKER

Hi, thanks again for your reply. I would just like to ask a couple of follow-up questions on the advice you gave me.

First, I don't understand what you mean by:
"After that you would simply skip the struct keyword and use tCFeature as a type, e. g. tCFeature left;"

Also, where you wrote:
You can retrieve the points by that:

    for (list<t2DPoint>::iterator it = points.begin(); it != points.end(); ++it)
    {
          t2DPoint pt = *it;   // get a copy from iterator
          int x = pt.x;           // get member variables
          int y = it->x;          // that's equivalent
          ....
    }

What would I need to do in the "...." section?

Thanks for your patience on the matter.
>>>> "After that you would simply skip the struct keyword"

In C if you have a struct type you need

  typedef
  struct
  {
     ....
  } MyStruct;

Whenever you use that struct you need the keyword struct

  struct MyStruct ms;

In C++, struct is a synonym for class, and the only difference is that members default to public, while for class they are private if not declared otherwise. Therefore you would define a struct like that:

struct MyStruct
{
    ....
};

not using typedef and defining the type name right after the struct keyword. Later, you would use that type like a class type, without the struct or class keyword.

  MyStruct ms;     // create a object of type MyStruct


>>>> What would I need to do in the "...." section?

There, you would evaluate the points, e. g. when drawing a chart, you would draw a line to the next point. The for loop was just a sample for evaluating a list.
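
For example, assuming 'points' is the list<t2DPoint> filled as above, a minimal sketch of what could go into the "...." part would just print every point (the output format is of course up to you):

    for (list<t2DPoint>::iterator it = points.begin(); it != points.end(); ++it)
    {
          t2DPoint pt = *it;                                          // copy from the iterator
          printf("point: x=%i y=%i size=%i\n", pt.x, pt.y, pt.size);  // this is the "...." part
    }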

Regards,  Alex
Avatar of ukjm2k

ASKER

I am still having problems with the std::list implementation.
I'm not sure how to convert the code above (correspondence.cpp) so that it can use the STL std::list container class.

itsmeandnobodyelse
>>>>There, you would evaluate the points. The for loop was just a sample for evaluating a list.

How would I evaluate points that are automatically chosen when the program is run?

Could someone help me with this, preferably by converting the code to be used in this way? As I have stated, this is my full code, but because I have never used std::list, I am not sure what I am doing wrong.

Thanks in advance.

Avatar of ukjm2k

ASKER

Can anyone help with this please....
Below is a compilable correspondence.cpp, global.h and correspondence.h.
You would have to convert the other cpp files and headers the same way.

In correspondence.cpp I most likely found the error (while I was converting it to std::list):

>>>>         //We should only increment the pointer if there hasn't been a match, otherwise
>>>>        //all hell will break loose!  ????
>>>>        if(!match)t_left = t_left->next;

With the last statement you'll have an infinite loop if match is false (all hell will be let loose!!!)

Regards, Alex


// correspondence.cpp

#include "global.h"
#include "correspondence.h"

#include <list>
#include <cmath>
using namespace std;

tCFeature tkCorrespond(list<t2DPoint>& left_features, list<t2DPoint>& right_features)
{
    if(!left_features.empty() && !right_features.empty())
        return corrSimple(left_features, right_features);
    else
        return tCFeature();
}

tCFeature corrSimple(list<t2DPoint>& left, list<t2DPoint>& right)
{
    t2DPoint  match4left;
    t2DPoint  match4right;
    tCFeature result;

    bool match = false;

    for (list<t2DPoint>::iterator t_left = left.begin(); t_left != left.end(); )
    {
        match = false;
        match4left = corrSimple_1(*t_left, right);

        // save next iterator in case of deletion
        list<t2DPoint>::iterator itnext = t_left;
        itnext++;

        if (!match4left.empty())                                        
        {                                                                
            match4right = corrSimple_1(match4left, left);                
            if(*t_left == match4right)
            {
                match = true;
                //printf("MATCHED! L[%i,%i] R[%i,%i]\n",t_left->x,t_left->y,match4left->x,match4left->y);
               
                result.iLeft[0]  = t_left->x;    
                result.iLeft[1]  = t_left->y;    
                result.iRight[0] = match4left.x;
                result.iRight[1] = match4left.y;

                // remove all points from right matching *t_left
                right.remove( *t_left );
                // remove t_left from left
                left.erase( t_left );
            }
            t_left = itnext;
        }
        else
        {
            //the right list is empty - all matched!
            //break out!
            match=true;
            break;
        }
        //We should only increment the pointer if there hasn't been a match, otherwise
        //all hell will break loose!  ????
        //if(!match)t_left = t_left->next;

        if (!match)
            break;
    }
    //if(left!=NULL) printf("Not all markers matched in the left frame\n");
    //if(right!=NULL) printf("Not all markers matched int he right frame\n");
    return result;
}

t2DPoint corrSimple_1(t2DPoint marker, list<t2DPoint>& scene)
{
    double   temp_dist = 0;
    double   best_dist = 1000000000;
    t2DPoint best_pt;

    for (list<t2DPoint>::iterator temp_pt = scene.begin(); temp_pt != scene.end(); ++temp_pt )
    {
        if(abs(temp_pt->size - marker.size) <= (marker.size/5))
        {
            temp_dist = sqrt(((abs(marker.x) - abs(temp_pt->x)) * (abs(marker.x) - abs(temp_pt->x)))
                        + ((abs(marker.y) - abs(temp_pt->y)) * (abs(marker.y) - abs(temp_pt->y))));

            if(temp_dist < best_dist)
            {
                best_dist = temp_dist;
                best_pt   = *temp_pt;
            }
        }
    }
    return best_pt;
}

//-------------- end of correspondence.cpp --------------------------

// global.h

/* Global definitions */

#include <stdio.h>
#include <stdlib.h>

// the following 3 includes I had to comment to get correspondence.cpp compiled
#include <cv.h>        
#include <cvaux.h>
#include <highgui.h>

#ifndef GLOBALS
#define GLOBALS
/* define run-modes */
#define MODE_INIT 0
#define MODE_CALIBRATE_INT 1
#define MODE_CALIBRATE_EXT 2
#define MODE_RECONSTRUCT_SAMPLE 3
#define MODE_RECONSTRUCT_RUN 4
/* define different calibration types */
#define CALIB_UNSET 0
#define CALIB_FILE 1
#define CALIB_BMP 2
#define CALIB_LIVE 3
/* define number of points require for ext calibration */
#define EXT_REQ_POINTS 35
/* define left and right camera indices */
#define SRC_LEFT_CAMERA 0
#define SRC_RIGHT_CAMERA 1
/* colour channels in an IplImage */
#define C_BLUE 0
#define C_GREEN 1
#define C_RED 2

/* structure for feature correspondences */
struct tCFeature
{
    int iLeft[2];
    int iRight[2];

    tCFeature() { iLeft[0] = iLeft[1] = iRight[0] = iRight[1] = 0; }
    tCFeature(const tCFeature& tcf )
    {
      iLeft[0]  = tcf.iLeft[0];
      iLeft[1]  = tcf.iLeft[1];
      iRight[0] = tcf.iRight[0];
      iRight[1] = tcf.iRight[1];
    }
    bool empty() { return (iLeft[0] == 0 && iLeft[1] == 0 && iRight[0] == 0 && iRight[1] == 0); }
    tCFeature& operator=(const tCFeature& tcf )
    {
      iLeft[0]  = tcf.iLeft[0];
      iLeft[1]  = tcf.iLeft[1];
      iRight[0] = tcf.iRight[0];
      iRight[1] = tcf.iRight[1];
      return *this;
    }
    bool operator==(const tCFeature& tcf)
    {
        return (iLeft[0]  == tcf.iLeft[0] && iLeft[1] == tcf.iLeft[1] && 
                iRight[0] == tcf.iRight[0] && iRight[1] == tcf.iRight[1]);
    }
};

/* 2D point struct (next/prev links removed - points are now kept in a std::list) */
struct t2DPoint
{
    int x;
    int y;
    int size;

    t2DPoint() : x(0), y(0), size(0) {}
    t2DPoint(const t2DPoint& pt) : x(pt.x), y(pt.y), size(pt.size) {}

    bool empty() { return (x == 0 && y == 0 && size == 0); }
    t2DPoint& operator=(const t2DPoint& pt )
    {
        x = pt.x; y = pt.y; size = pt.size;
        return *this;
    }
    bool operator==(const t2DPoint& pt)
    {  return (x == pt.x && y == pt.y && size == pt.size);  }
};

/* struct representing a camera ray toward the object */
struct camera_ray
{
    double vector[3];
    double cam_tran[3];
    double cam_rot;
    double pixelsize[2];
};

#endif

// -------------  end of global.h ------------------------

// correspondence.h

/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/*
Takes two lists of image features representing the features        
found in the left and right images as chosen by segmentation.cpp      
& returns the matched features.
*/

#include <list>
using namespace std;

/* correspondence */
tCFeature tkCorrespond(list<t2DPoint>& left_features, list<t2DPoint>& right_features);
tCFeature corrSimple( list<t2DPoint>& left, list<t2DPoint>& right);
t2DPoint  corrSimple_1(t2DPoint marker, list<t2DPoint>& scene);

// ------------------- end of correspondence.h ----------------------------




Avatar of ukjm2k

ASKER

itsmeandnobodyelse

I am doing what you advised, and am changing segmentation.cpp at the moment.
I just wanted to ask if I'm going about this the right way, as I get a couple of errors in one of the functions:

t2DPoint findCircles(IplImage * input, IplImage * draw){   //IS THIS OK?
      CvMemStorage * storage;
      CvSeq * contour;
      CvBox2D * box;
      CvPoint * pointArray;
      CvPoint2D32f * pointArray32f;
      CvPoint center;
      t2DPoint match4left;
      t2DPoint match4right;
      t2DPoint result;

      float myAngle,ratio;
      int i,header_size,count,length,width;
      IplImage * gray_input = cvCreateImage(cvGetSize(input),IPL_DEPTH_8U,1);
        t2DPoint markers = (list <t2DPoint>)NULL;    //ERROR HERE  <---------------------------------------------------
        t2DPoint temppt = ( list <t2DPoint>)NULL;     //ERROR HERE  <---------------------------------------------------

      //Convert the input image to grayscale.
      cvCvtColor(input,gray_input,CV_RGB2GRAY);

      //Remove noise and smooth
      removeNoise(gray_input);

      //Edge detect the image with Canny algorithm
      cvCanny(gray_input,gray_input,25,150,3);

      //Allocate memory
      box = (CvBox2D *)malloc(sizeof(CvBox2D));
      header_size = sizeof(CvContour);
      storage = cvCreateMemStorage(1000);

      // Find all the contours in the image.
      cvFindContours(gray_input,storage,&contour,header_size,CV_RETR_EXTERNAL,CV_CHAIN_APPROX_TC89_KCOS);
      while(contour!=NULL)
      {
            if(CV_IS_SEQ_CURVE(contour))
            {
                  count = contour->total;
                  pointArray = (CvPoint *)malloc(count * sizeof(CvPoint));
                  cvCvtSeqToArray(contour,pointArray,CV_WHOLE_SEQ);
                  pointArray32f = (CvPoint2D32f *)malloc((count + 1) * sizeof(CvPoint2D32f));
                  for(i=0;i<count-1;i++){
                        pointArray32f[i].x = (float)(pointArray[i].x);
                        pointArray32f[i].y = (float)(pointArray[i].y);
                  }

                  pointArray32f[i].x = (float)(pointArray[0].x);
                  pointArray32f[i].y = (float)(pointArray[0].y);
                  if(count>7){
                               cvFitEllipse(pointArray32f,count,box);
                        ratio = (float)box->size.width/(float)box->size.height;
                        center.x = (int)box->center.x;
                        center.y = (int)box->center.y;
                        length = (int)box->size.height;
                        width = (int)box->size.width;
                        myAngle = box->angle;
                        if((center.x>0) && (center.y>0)){
                                    result.x = center.x;      
                                        result.y = center.y;
                                        result.size = length;
                                        markers = temppt;
                              if(draw!=NULL) cvCircle(draw,center,(int)length/2,RGB(0,0,255),-1);
                              /*cvEllipse(input,
                              center,
                              cvSize((int)width/2,(int)length/2),
                              -box->angle,
                              0,
                              360,
                              RGB(0,255,0),
                              1);*/
                        }
                  }
                  free(pointArray32f);
                  free(pointArray);
            }
            contour = contour->h_next;
      }
      free(contour);
      free(box);
      cvReleaseImage(&gray_input);
      cvReleaseMemStorage(&storage);
      return markers;
}




I get the following two errors:

\segmentation.cpp(290) : error C2440: 'initializing' : cannot convert from 'class std::list<struct t2DPoint,class std::allocator<struct t2DPoint> >' to 'struct t2DPoint'
        No constructor could take the source type, or constructor overload resolution was ambiguous

\segmentation.cpp(291) : error C2440: 'initializing' : cannot convert from 'class std::list<struct t2DPoint,class std::allocator<struct t2DPoint> >' to 'struct t2DPoint'
        No constructor could take the source type, or constructor overload resolution was ambiguous


Could you please let me know where I'm going wrong....

Many thanks.      
>>>> t2DPoint markers = (list <t2DPoint>)NULL;    //ERROR HERE  

I used objects in the list and not pointers. So

  t2DPoint marker;  

creates one empty t2DPoint object.

Note, I added a t2DPoint::empty() member function. With that you could test objects for being empty rather than testing a returned pointer against NULL.
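
For instance, where the old code tested a returned pointer against NULL, you would now write something like this (just a sketch, using the names from your segFindFeatures_Ave):

      t2DPoint marker = findCircles(temp, input);   // an object comes back, not a pointer
      if (!marker.empty())                          // replaces the old '!= NULL' test
      {
            printf("marker at %i,%i size %i\n", marker.x, marker.y, marker.size);
      }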

Could you change structs CVPoint and CVBox2D similar to t2DPoint and tCFeature or do you have a library that needs the current definitions?

Regards, Alex



Avatar of ukjm2k

ASKER

>>>>Could you change structs CVPoint and CVBox2D similar to t2DPoint and tCFeature or do you have a library that needs the current definitions?

I don't have a library, but I am struggling with how to change these structs, similarly to t2DPoint and tCFeature.
>>>> but I am struggling with how to change these structs, ...

That is good, as a program that has to work with different design paradigms can rarely succeed...

Note, if your lists contain objects and not pointers, you should make sure that the object in the list is a singleton, i. e. that you don't store the same object somewhere else. Of course you could temporarily extract a local copy or even a temporary result set. But these temporaries shouldn't outlive the changes you make to the main lists.
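
To make that concrete, a small sketch of what I mean by a temporary copy (using the t2DPoint from global.h):

      list<t2DPoint> points;
      t2DPoint pt;
      pt.x = 10; pt.y = 20; pt.size = 2;
      points.push_back(pt);                 // the list stores its own copy of pt

      t2DPoint copy = points.front();       // a temporary local copy - fine
      points.erase(points.begin());         // the element in the list is gone now

      // 'copy' is still usable here, but it is no longer connected to the list,
      // so don't keep it around once you go on changing the main list.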

Regards, Alex
Avatar of ukjm2k

ASKER

itsmeandnobodyelse

I am trying to change the reconstruct.cpp file in this way.
Could you please advise me on how to do this? I'm also confused about how to convert the .h file.

Many thanks
I started with matlib.h and made fundamental changes to struct tMatrix, thus not needing matlib.cpp anymore:

It compiles (with a matlib.cpp that includes only matlib.h and nothing else), but you would need a lot of changes in reconstruct.cpp, mostly changing pointers to tMatrix into objects and using the member functions of tMatrix instead of the global functions (there is a small sketch of what I mean right after the header below).

Regards, Alex

/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/*
Core reconstruction component. Defines type, tMatrix which can be used to            
represent a matrix of any size, rather than statically defining the matrix      

  The matrix itself is represented as a vector of vector of double
*/

#include <vector>
#include <math.h>
#include <iostream>
using namespace std;

//Matrix handler library
enum Axis
{
    X,
    Y,
    Z
};

struct tMatrix
{
    int rows;
    int cols;
    vector< vector<double> > matrix;

    tMatrix() :  rows(0), cols(0) {}
    tMatrix(int r, int c, double init = 0.0)
        : rows(r), cols(c), matrix( r, vector<double>( c, init ) )
    {
    }

    tMatrix(const tMatrix& mat)
        : rows(mat.rows), cols(mat.cols), matrix( mat.rows, vector<double>( mat.cols, 0. ) )
    {
        *this = mat;
    }  

    vector<double>& operator[](int row)
    {
        return matrix[row];
    }

    const vector<double>& operator[](int row) const
    {
        return matrix[row];
    }

    tMatrix& operator=(const tMatrix& mat)
    {
        for (int i = 0; i < rows; ++i)
            for (int j = 0; j < cols; ++j)
                matrix[i][j] = mat[i][j];
        return *this;
    }

    tMatrix operator* (const tMatrix& src)
    {
        tMatrix result(rows, src.cols);
        for (int r = 0; r < rows; ++r)
        {
            for (int c2 = 0; c2 < src.cols; ++c2)
            {
                double sum = 0.;
                for (int c1 = 0; c1 < cols; ++c1)
                {
                    sum += matrix[r][c1] * src[c1][c2];
                }
                result[r][c2] = sum;
            }
        }
        return result;
    }

    void norm()
    {
        if (rows == 3 && cols == 1)
        {
            double dlen = sqrt((matrix[X][0]*matrix[X][0])
                              +(matrix[Y][0]*matrix[Y][0])
                              +(matrix[Z][0]*matrix[Z][0]));
            matrix[X][0] /= dlen;
            matrix[Y][0] /= dlen;
            matrix[Z][0] /= dlen;
        }
    }
    friend ostream& operator<<(ostream& os, const tMatrix& mat)
    {
        for (int i = 0; i < mat.rows; ++i)
        {
            for (int j = 0; j < mat.cols; ++j)
                os << "\t" << mat[i][j];
            os << endl;   // one row per line, like the old matPrint
        }
        return os;
    }

    double det33()
    {
        double det = 0.0;
        if (rows == 3 && cols == 3)
        {
            det =  (matrix[0][0] * matrix[1][1] * matrix[2][2])
                  -(matrix[0][2] * matrix[1][1] * matrix[2][0])
                  +(matrix[0][1] * matrix[1][2] * matrix[2][0])
                  -(matrix[0][0] * matrix[1][2] * matrix[2][1])
                  +(matrix[0][2] * matrix[1][0] * matrix[2][1])
                  -(matrix[0][1] * matrix[1][0] * matrix[2][2]);
        }
        return det;
    }

    tMatrix crossProd31x31(const tMatrix& vec)
    {
        tMatrix result(3, 1);
        if (rows == 3 && cols == 1 && vec.rows == 3 && vec.cols == 1)
        {
            result.matrix[0][0] = matrix[1][0]*vec[2][0] - matrix[2][0]*vec[1][0];
            result.matrix[1][0] = matrix[2][0]*vec[0][0] - matrix[0][0]*vec[2][0];
            result.matrix[2][0] = matrix[0][0]*vec[1][0] - matrix[1][0]*vec[0][0];
        }
        return result;
    }

    double dotProd(const tMatrix& vec)
    {
        double result = 0.0;
        if (rows == 3 && cols == 1 && vec.rows == 3 && vec.cols == 1)
        {
            result =   (matrix[0][0]*vec[0][0])
                     + (matrix[1][0]*vec[1][0])
                     + (matrix[2][0]*vec[2][0]);
        }
        return result;

    }
};
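
To give an idea of the kind of change needed in reconstruct.cpp: where the old code called the global matlib functions on pointers, the new struct uses operators and member functions, roughly like this (just a sketch, the values are made up):

    // old style (pointers, manual release):
    //     struct tMatrix * r = matMultiply(rl, vl_norm);
    //     matNorm(r);
    //     matPrint(r);
    //     matRelease(r);

    // new style (objects, nothing to release):
    tMatrix rl(3, 3, 1.0);        // some filled matrices, values made up
    tMatrix vl_norm(3, 1, 2.0);
    tMatrix r = rl * vl_norm;     // operator* replaces matMultiply
    r.norm();                     // member function replaces matNorm
    cout << r;                    // operator<< replaces matPrint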

I would try to convert reconstruct.cpp as well, but I cannot compile because of the missing cv header files.

Please post the headers

#include <cv.h>
#include <cvaux.h>
#include <highgui.h>


Regards, Alex
Avatar of ukjm2k

ASKER

itsmeandnobodyelse

>>>>Please post the headers

>>>>#include <cv.h>
>>>>#include <cvaux.h>
>>>>#include <highgui.h>

Sorry m8, you want me to post the whole of the .h files?
The cv.h file alone is 3,500 lines...

The reason you can't compile is that you are missing important 'include' and 'linker' files.

I wrote this program in VS 6.0

You would also need to rebuild a file named baseclasses.dsw within directx, and copy 2 .lib files that are generated.

then you would need to rebuild the open cv workspace, and copy all the .dll files.

I wrote this program in VS 6.0, and needed to include directories from directx, and opencv by going to the tools->options->directories tab
and including all the necessary include and library files.

Other configurations are also needed.

Compiling the program will clearly be a problem.
If you like, I can post complete steps on how to configure opencv and vs6.0 if need be, or was there something else you would require?

Thanks for all your help so far m8.
>>>> The cv.h file alone is 3,500 lines...

I would need only struct definitions and function prototypes used in reconstruct.cpp.  If some are too big, you could omit them and I could comment them out in my source. The problem is, if I can't compile, I would most likely have to post non-compilable code here, which I don't like.

I don't need libs and other stuff as I don't intend to link the prog.

Regards, Alex
Avatar of ukjm2k

ASKER

>>>>I would need only struct definitions and function prototypes used in reconstruct.cpp

I'm not sure which ones these are.
Is there no way for you to view the files?
>>>> Is there no way for you to view the files?

From where? Could you give me an address to download (tomorrow)?

>>>> I'm not sure which ones these are.

it's only (class ?) CvCamera and struct camera_ray.

Regards, Alex

Avatar of ukjm2k

ASKER

>>>>it's only (class ?) CvCamera and struct camera_ray.

I was able to find the CvCamera definitions; they are as follows...
Didn't have much luck with camera_ray though.

Hope this was what you wanted....

/************ Epiline functions *******************/

....

typedef struct CvCamera
{
    float   imgSize[2]; /* size of the camera view, used during calibration */
    float   matrix[9]; /* intinsic camera parameters:  [ fx 0 cx; 0 fy cy; 0 0 1 ] */
    float   distortion[4]; /* distortion coefficients - two coefficients for radial distortion
                              and another two for tangential: [ k1 k2 p1 p2 ] */
    float   rotMatr[9];
    float   transVect[3]; /* rotation matrix and transition vector relatively
                             to some reference point in the space. */
}
CvCamera;

typedef struct CvStereoCamera
{
    CvCamera* camera[2]; /* two individual camera parameters */
    float fundMatr[9]; /* fundamental matrix */

    /* New part for stereo */
    CvPoint3D32f epipole[2];
    CvPoint2D32f quad[2][4]; /* coordinates of destination quadrangle after
                                epipolar geometry rectification */
    double coeffs[2][3][3];/* coefficients for transformation */
    CvPoint2D32f border[2][4];
    CvSize warpSize;
    CvStereoLineCoeff* lineCoeffs;
    int needSwapCameras;/* flag set to 1 if need to swap cameras for good reconstruction */
    float rotMatrix[9];
    float transVector[3];
}
CvStereoCamera;


....

/****************************************************************************************\
*                                   Calibration engine                                   *
\****************************************************************************************/

....

   /* Retrieves camera parameters for specified camera.
       If camera is not calibrated the function returns 0 */
    virtual const CvCamera* GetCameraParams( int idx = 0 ) const;

    virtual const CvStereoCamera* GetStereoParams() const;

    /* Sets camera parameters for all cameras */
    virtual bool SetCameraParams( CvCamera* params );

....

   /* camera data */
    int     cameraCount;
    CvCamera cameraParams[MAX_CAMERAS];
    CvStereoCamera stereo;
    CvPoint2D32f* points[MAX_CAMERAS];
    CvMat*  undistMap[MAX_CAMERAS];
    CvMat*  undistImg;
    int     latestCounts[MAX_CAMERAS];
    CvPoint2D32f* latestPoints[MAX_CAMERAS];
    CvMat*  rectMap[MAX_CAMERAS];
Avatar of ukjm2k

ASKER

>>>> Hope this was what you wanted....

Was there anything else you needed?
Avatar of ukjm2k

ASKER

Can anyone help with this?
Avatar of ukjm2k

ASKER

itsmeandnobodyelse

Any more help with this, mate? You've been very helpful so far; further help will be greatly appreciated.

ukjm2k
>>>> Can anyone help with this?

I was away on a one-week cruise in the Caribbean...

Did you try to convert reconstruct.h and reconstruct.cpp?

Is the matrix class I gave you what you want?

I think, I'll find time tomorrow to help you with reconstruct.

Regards, Alex


 
Avatar of ukjm2k

ASKER

>>>Did you try to convert reconstruct.h and reconstruct.cpp?
I tried but was not successful.

>>>Is the matrix class I gave you what you want?
Yes, this seems fine to me mate.
Here are compilable versions of reconstruct.cpp and reconstruct.h. I also had to add a new constructor to tMatrix:

    tMatrix(int r, int c, double* initArr)
        : rows(r), cols(c), matrix( r, vector<double>( c, 0.0 ) )
    {
        for (int i = 0; i < r; ++i)
            for (int j = 0; j < c; ++j)
                 matrix[i][j] = *initArr++;
    }


Note, those were the last files I converted on my own. If there are files left, you have to convert them yourself, and I'll help you if you run into problems. OK?

Regards, Alex


// reconstruct.cpp

#include "global.h"
#include "matlib.h"
#include "reconstruct.h"

/* variables for the two cameras */
extern CvCamera * clLeftCamera;
extern CvCamera * clRightCamera;

void tkReconstruct(tCFeature& object,char * output)
{
    camera_ray leftcam_ray;
    camera_ray rightcam_ray;

    tMatrix result(3,1);
    char outstring[256];
    leftcam_ray.cam_tran[0] = 505.0;
    leftcam_ray.cam_tran[1] = 485.0;
    leftcam_ray.cam_tran[2] = 1000.0;
    leftcam_ray.cam_rot = 0.0;
    leftcam_ray.pixelsize[0] = (float)15.0/(float)352.0;
    leftcam_ray.pixelsize[1] = (float)12.0/(float)288.0;
    leftcam_ray.vector[0] = (float)175 - (float)object.iLeft[0];
    leftcam_ray.vector[1] = (float)140 - (float)object.iLeft[1];
    leftcam_ray.vector[2] = (float)clLeftCamera->matrix[0];
    rightcam_ray.cam_tran[0] = 235.0;
    rightcam_ray.cam_tran[1] = 485.0;
    rightcam_ray.cam_tran[2] = 1000.0;
    rightcam_ray.cam_rot = 0.0;
    rightcam_ray.pixelsize[0] = (float)15.0/(float)352.0;
    rightcam_ray.pixelsize[1] = (float)12.0/(float)288.0;
    rightcam_ray.vector[0] = (float)175 - (float)object.iRight[0];
    rightcam_ray.vector[1] = (float)140 - (float)object.iRight[1];
    rightcam_ray.vector[2] = (float)clRightCamera->matrix[0];

    result = reconstruct(leftcam_ray, rightcam_ray);
    //printf("OBJECT LOCATED AT:\n");
    //matPrint(result);
    sprintf(outstring,"[P::%.1f,%.1f,%.1f]",result[0][0],result[1][0],result.matrix[2][0]);
    strcat(output,outstring);
}

tMatrix reconstruct(camera_ray& left, camera_ray& right)
{
    tMatrix rl = generateRotMat(0.0,left.cam_rot,0.0);
    tMatrix rr = generateRotMat(0.0,right.cam_rot,0.0);
    tMatrix tl(3,1,left.cam_tran);
    tMatrix tr(3,1,right.cam_tran);
    double l_beta = 0.0,r_beta = 0.0;
    //Generate normalised left vector
    pix2mm(left);
    tMatrix vl_norm(3,1,left.vector);
    vl_norm.norm();
    //Generate normalised right vector
    pix2mm(right);
    tMatrix vr_norm(3,1,right.vector);
    vr_norm.norm();
    //Correct the image vectors for camera rotation.
    tMatrix vl_prime(rl*vl_norm);
    tMatrix vr_prime(rr*vr_norm);
    //Generate the "axis" vector
    tMatrix axis(vl_prime.crossProd31x31(vr_prime));
    //Generate the equations for the planes
    tMatrix l_plane = defPlane(tl, vl_prime,axis);
    tMatrix r_plane = defPlane(tr, vr_prime,axis);
    //Generate left and right plane normals
    tMatrix lp_norm(l_plane);
    tMatrix rp_norm(r_plane);
    //Generate the intersection vector and point
    tMatrix iv(lp_norm.crossProd31x31(rp_norm));
    iv.norm();
    tMatrix ip(3,1,0.0);
    ip[2][0] = 0.0;
    ip[1][0] = (-l_plane[3][0]-(l_plane[0][0]*r_plane[3][0]))/
        ((r_plane[0][0]*l_plane[1][0])-(l_plane[0][0]*r_plane[1][0]));
    ip[0][0] = ((-r_plane[3][0])-(r_plane[1][0]*ip[1][0]))/r_plane[0][0];
    //Calculate the beta value
    l_beta = calcBeta(tl,vl_prime,iv,ip);
    r_beta = calcBeta(tr,vr_prime,iv,ip);
    tMatrix l_int = genIntersect(tl,vl_prime,l_beta);
    tMatrix r_int = genIntersect(tr,vr_prime,r_beta);
    tMatrix object(3,1, 0.0);
    object[X][0] = (l_int[X][0]+r_int[X][0])/2;
    object[Y][0] = (l_int[Y][0]+r_int[Y][0])/2;
    object[Z][0] = (l_int[Z][0]+r_int[Z][0])/2;
    //Finally return the 3D position.
    return object;
}

tMatrix generateRotMat(double x_rot, double y_rot, double z_rot)
{
    double x_mat[] = {
                        1,0,0,0,cos(x_rot*PI/180.0),sin(x_rot*PI/180.0),0,
                        -sin(x_rot*PI/180.0),cos(x_rot*PI/180.0)
                     };

    double y_mat[] = {
                        cos(y_rot*PI/180.0),0,sin(y_rot*PI/180.0),0,1,0,
                        -sin(y_rot*PI/180.0),0,cos(y_rot*PI/180.0)
                     };

    double z_mat[] = {
                        cos(z_rot*PI/180.0),sin(z_rot*PI/180.0),0,
                        -sin(z_rot*PI/180.0),cos(z_rot*PI/180.0),0,0,0,1
                     };
    tMatrix x_rot_mat(3, 3, x_mat);
    tMatrix y_rot_mat(3, 3, y_mat);
    tMatrix z_rot_mat(3, 3, z_mat);
    tMatrix temp_mat(x_rot_mat * y_rot_mat);
    tMatrix result(temp_mat * z_rot_mat);
    return result;
}

void pix2mm(camera_ray& camera)   // by reference, so the changes stick (matches the prototype in reconstruct.h)
{
    camera.vector[0] = camera.vector[0] * camera.pixelsize[0];
    camera.vector[1] = camera.vector[1] * camera.pixelsize[1];
    camera.vector[2] = camera.vector[2] * (camera.pixelsize[0] + camera.pixelsize[1]) / 2;
}

tMatrix defPlane(tMatrix& point, tMatrix& vec1, tMatrix& vec2)
{
    double det_a[9] = {
                        point[Y][0],point[Z][0],1.0,
                        vec1[Y][0],vec1[Z][0],0.0,
                        vec2[Y][0],vec2[Z][0],0.0
                      };
    double det_b[9] = {
                        point[Z][0],1.0,point[X][0],
                        vec1[Z][0],0.0,vec1[X][0],
                        vec2[Z][0],0.0,vec2[X][0]
                      };
    double det_c[9] = {
                        1.0,point[X][0],point[Y][0],
                        0.0,vec1[X][0],vec1[Y][0],
                        0.0,vec2[X][0],vec2[Y][0]
                      };
    double det_d[9] = {
                        point[X][0],point[Y][0],point[Z][0],
                        vec1[X][0],vec1[Y][0],vec1[Z][0],
                        vec2[X][0],vec2[Y][0],vec2[Z][0]
                      };
    tMatrix a_matrix(3,3,det_a);
    tMatrix b_matrix(3,3,det_b);
    tMatrix c_matrix(3,3,det_c);
    tMatrix d_matrix(3,3,det_d);
    tMatrix plane(4,1, 0.0);
    plane[0][0] = a_matrix.det33();
    plane[1][0] = b_matrix.det33();
    plane[2][0] = c_matrix.det33();
    plane[3][0] = d_matrix.det33();
    return plane;
}

double calcBeta(tMatrix& trans, tMatrix& vec, tMatrix& iv, tMatrix& ip)
{
    double beta_num =   (ip[Y][0]*iv[X][0])
                      + (iv[Y][0]*trans[X][0])
                      - (ip[X][0]*iv[Y][0])-(iv[X][0]*trans[Y][0]);
    double beta_denom = (iv[X][0]*vec[Y][0])-(iv[Y][0]*vec[X][0]);
    return beta_num/beta_denom;
}

tMatrix genIntersect(tMatrix& trans, tMatrix& vec, double beta)
{
    double i_sect[3] = {
                         trans[X][0] + (beta * vec[X][0]),
                         trans[Y][0] + (beta * vec[Y][0]),
                         trans[Z][0] + (beta * vec[Z][0])
                       };
    return tMatrix(3, 1, i_sect);
}

//--------------------------------------------------------------------------

// reconstruct.h

//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/*
Involves a significant amount of matrix manipulation, and to avoid duplicating              
code, a file matlib.cpp was also written. This is a minimal matrix library        
which implements the functionality for this project only; any external
libraries were ignored.
See matlib.cpp
*/

//3D Reconstruction library header
#define PI 3.14159
void tkReconstruct(tCFeature& object,char * output);
tMatrix reconstruct(camera_ray& left, camera_ray& right);
tMatrix generateRotMat(double x_rot, double y_rot, double z_rot);
tMatrix defPlane(tMatrix& point, tMatrix& vec1, tMatrix& vec2);
void pix2mm(camera_ray&);
double calcBeta(tMatrix& trans, tMatrix& vec, tMatrix& iv, tMatrix& ip);
tMatrix genIntersect(tMatrix& trans, tMatrix& vec, double beta);

Avatar of ukjm2k

ASKER

>>>I also had to add a new constructor to tMatrix:
I'm doing something wrong with this, but I'm not sure what, as I get errors.
Where does this new constructor need to be inserted?
>>>> Where does this new constructor need to be inserted?

In matlib.h, where struct tMatrix is defined, we already have 3 constructors. Here we need one more that takes an array of doubles as the last argument.

    tMatrix() :  rows(0), cols(0) {}             // DEFAULT CONSTRUCTOR

    tMatrix(int r, int c, double init = 0.0)    // CONSTRUCTOR TAKES ROWS AND COLS AND *ONE* DOUBLE
        : rows(r), cols(c), matrix( r, vector<double>( c, init ) )
    {
    }

    tMatrix(const tMatrix& mat)  // COPY CONSTRUCTOR
        : rows(mat.rows), cols(mat.cols), matrix( mat.rows, vector<double>( mat.cols, 0. ) )
    {
        *this = mat;
    }  

    tMatrix(int r, int c, double* initArr) // CONSTRUCTOR TAKES ROWS AND COLS AND DOUBLE ARRAY
        : rows(r), cols(c), matrix( r, vector<double>( c, 0.0 ) )
    {
        for (int i = 0; i < r; ++i)
            for (int j = 0; j < c; ++j)
                 matrix[i][j] = *initArr++;
    }
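
For example, with the new constructor you could write (a small sketch, the values are made up):

    double vals[9] = { 1.0, 0.0, 0.0,
                       0.0, 1.0, 0.0,
                       0.0, 0.0, 1.0 };
    tMatrix m(3, 3, vals);    // 3 rows, 3 cols, filled row by row from the array
    cout << m;                // print it via the friend operator<<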


Regards, Alex

Avatar of ukjm2k

ASKER

itsmeandnobodyelse
I've converted segmentation.cpp, but I am having trouble with a bit in main.cpp. Could I get some help converting this, please?
The bit of code is:

case MODE_RECONSTRUCT_RUN:            
      
            pMatches = tkCorrespond(tkSegment(left),tkSegment(right));      
            if(corrSimple!=NULL){      
                  pTempMatch = pMatches;                  
                  while(pTempMatch!=NULL){                  
                        tkReconstruct(pTempMatch,out_text);                  
                        pMatches = pTempMatch->pNext;                  
                        free(pTempMatch);                  
                        pTempMatch = pMatches;                  
                  }                  
                  fp = fopen(out_file,"a");      
                  fputs(out_text,fp);            
                  fputs("\n",fp);                  
                  fclose(fp);                  
                  out_text[0] = '\0';            
            }            
      }                        
      break;      
      
}


The errors I get are:

Compiling...
main.cpp
My Documents\dissertation\main.cpp(213) : error C2064: term does not evaluate to a function
My Documents\dissertation\main.cpp(217) : error C2678: binary '!=' : no operator defined which takes a left-hand operand of type 'struct tCFeature' (or there is no acceptable conversion)
My Documents\dissertation\main.cpp(217) : fatal error C1903: unable to recover from previous error(s); stopping compilation
Error executing cl.exe.

main.obj - 3 error(s), 0 warning(s)


Thanks in advance
>>>> I've converted segmentation.cpp

Please post segmentation.h and segmentation.cpp. I need some function prototypes declared in segmentation.h and have to check whether the conversion was successful ... besides just compiling ;-)

Regards, Alex
Avatar of ukjm2k

ASKER

>>>>Need some function prototypes declared in segmentation.h
In that case, I don't think I have done it correctly :( I just made changes within the code.
Here it is:

//////////////////////////////////////////////////////////////////////////
Segmentation.h
/////////////////////////////////////////////////////////////////////////

/* image segmentation */
/* how large an area to sample from around the click */
#define SAMPLE_SIZE 20


/* the threshhold window around the colour to be matched */
#define THRESH_BAND 60
bool tkGenerateColMap(IplImage * input,int x,int y);
bool segColMapSimpleAve(IplImage * input, int x, int y);
bool segColMapRThresh(IplImage * input, int x, int y);
t2DPoint tkSegment(IplImage * input);
t2DPoint segFindFeatures_Ave(IplImage * input);
t2DPoint segFindFeatures_RThresh(IplImage * input);
t2DPoint findCircles(IplImage * input,IplImage * draw);


//////////////////////////////////////////////////////////////////////////
Segmentation.cpp
/////////////////////////////////////////////////////////////////////////

#include "global.h"
#include "segmentation.h"
#include "library.h"


int thresh_r;
int thresh_g;
int thresh_b;
char RchnThresh;
extern bool colmapdone;

bool tkGenerateColMap(IplImage * input,int x,int y){
      if(segColMapSimpleAve(input,x,y)) return true;
      else return false;
}

t2DPoint tkSegment(IplImage * input){
      return segFindFeatures_Ave(input);
}


/*
   This function creates a sample window around the selected point in order to
   average out the colour across this window; avoids "noisy" pixels
*/

bool segColMapSimpleAve(IplImage * input, int x, int y){
      char sample[SAMPLE_SIZE][SAMPLE_SIZE][3];
      int row,col,chan,v;
      int norm_total;
      int left = x - (SAMPLE_SIZE/2);
      int right = x + (SAMPLE_SIZE/2);
      int top = y - (SAMPLE_SIZE/2);
      int bottom = y + (SAMPLE_SIZE/2);

      /* Duplicate the bit we need to sample */
      for(row=top;row<bottom;row++){
            for(col=left;col<right;col++){
                  sample[row-top][col-left][C_RED] = input->imageData[(row*input->widthStep)+(col*3)+C_RED];
                  sample[row-top][col-left][C_BLUE] = input->imageData[(row*input->widthStep)+(col*3)+C_BLUE];
                  sample[row-top][col-left][C_GREEN] = input->imageData[(row*input->widthStep)+(col*3)+C_GREEN];
            }
      }

      /* Now normalise the sample */
      for(row=0;row<SAMPLE_SIZE;row++){
            for(col=0;col<SAMPLE_SIZE;col++){
                  norm_total = sample[row][col][C_RED] + sample[row][col][C_BLUE] +
                        sample[row][col][C_GREEN];
                  sample[row][col][C_RED] =
                        (int)((float)sample[row][col][C_RED]/(float)norm_total*255.0);
                  sample[row][col][C_BLUE] =
                        (int)((float)sample[row][col][C_BLUE]/(float)norm_total*255.0);
                  sample[row][col][C_GREEN] =
                        (int)((float)sample[row][col][C_GREEN]/(float)norm_total*255.0);
            }
      }

      /* Now smooth it a bit */
      for(row=0;row<SAMPLE_SIZE;row++){
            for(col=0;col<SAMPLE_SIZE;col++){
                  for(chan=0;chan<3;chan++){
                        v = sample[row][col][chan];
                        if(v>0){
                              v -= 1;
                              sample[row][col][chan] = v;
                              if(row>0 && sample[row-1][col][chan] < v) sample[row-1][col][chan]
                                    = v;

                              if(col>0 && sample[row][col-1][chan] < v) sample[row][col-1][chan]
                                    = v;
                        }
                  }
            }
      }

      for(row=0;row<SAMPLE_SIZE;row++){
            for(col=0;col<SAMPLE_SIZE;col++){
                  for(chan=0;chan<3;chan++){
                        v = sample[row][col][chan];
                        if(v>0){
                              v -= 1;
                              sample[row][col][chan] = v;
                              if(row < SAMPLE_SIZE-1 && sample[row+1][col][chan] < v)
                                    sample[row+1][col][chan] = v;
                              if(col < SAMPLE_SIZE-1 && sample[row][col+1][chan] < v)
                                    sample[row][col+1][chan] = v;
                        }
                  }
            }
      }

      /*Now find the average*/
      thresh_r = sample[0][0][C_RED];
      thresh_b = sample[0][0][C_BLUE];
      thresh_g = sample[0][0][C_GREEN];
      for(row=0;row<SAMPLE_SIZE;row++){
            for(col=0;col<SAMPLE_SIZE;col++){
                  thresh_r = (thresh_r+sample[row][col][C_RED])/2;
                  thresh_b = (thresh_b+sample[row][col][C_BLUE])/2;
                  thresh_g = (thresh_g+sample[row][col][C_GREEN])/2;
            }
      }

      printf("\nTHRESHHOLDS: R:%i G:%i B:%i\n",thresh_r,thresh_g,thresh_b);
      return true;
}


/*
   This function generates a binary image by passing over each of the pixels
   in the image, thresholding them on the values previously found from sampling;
*/

t2DPoint segFindFeatures_Ave(IplImage * input){
      IplImage * temp;
      t2DPoint features;

      int i,j;
      int norm_r=0,norm_g=0,norm_b=0;
      int low_r=0,low_g=0,low_b=0;
      int high_r=0,high_g=0,high_b=0;
      int norm_total=0,in_total=0;
      char * pixel_in = (char *)NULL;
      char * pixel_out = (char *)NULL;

      /* Clone the input image! */
      temp = cvCloneImage(input);
      removeNoise(input);
      removeNoise(input);

      /* Now work out what the normalised boundaries are from
      the specified r,g,b values */
      in_total = thresh_r+thresh_g+thresh_b;
      if(in_total!=0){
      low_r = (int)(((float)thresh_r / (float)in_total)*255.0);
      low_g = (int)(((float)thresh_g / (float)in_total)*255.0);
      low_b = (int)(((float)thresh_b / (float)in_total)*255.0);
      high_r = low_r + THRESH_BAND;
      high_g = low_g + THRESH_BAND;
      high_b = low_b + THRESH_BAND;
      low_r -= THRESH_BAND;
      low_g -= THRESH_BAND;
      low_b -= THRESH_BAND;
      }

      else {
            low_r = 0;
            low_g = 0;
            low_b = 0;
            high_r = THRESH_BAND;
            high_g = THRESH_BAND;
            high_b = THRESH_BAND;
      }

      for(i=0;i<input->height;i++){
            for(j=0;j<(input->widthStep/3);j++){
                  pixel_in = &input->imageData[(i*input->widthStep)+(j*3)];
                  pixel_out = &temp->imageData[(i*input->widthStep)+(j*3)];
                  norm_total = (int)pixel_in[C_RED] + (int)pixel_in[C_GREEN] + (int)pixel_in[C_BLUE];
                  if(norm_total!=0){
                        norm_r = (int)(((float)pixel_in[C_RED] / (float)norm_total)*255.0);
                        norm_g = (int)(((float)pixel_in[C_GREEN] / (float)norm_total)*255.0);
                        norm_b = (int)(((float)pixel_in[C_BLUE] / (float)norm_total)*255.0);
                  }

                  else if(norm_total==0){
                        norm_r=0;norm_g=0;norm_b=0;
                  }

                  if(norm_r >= low_r && norm_r <= high_r &&
                        norm_g >= low_g && norm_g <= high_g &&
                        norm_b >= low_b && norm_b <= high_b){
                        pixel_out[C_RED] = (char)0;
                        pixel_out[C_GREEN] = (char)0;
                        pixel_out[C_BLUE] = (char)0;
                  }
                  else {
                        pixel_out[C_RED] = (char)255;
                        pixel_out[C_GREEN] = (char)255;
                        pixel_out[C_BLUE] = (char)255;
                  }
            }
      }

      features = findCircles(temp,input);
      cvReleaseImage(&temp);
      return features;
}


bool segColMapRThresh(IplImage * input, int x, int y){
      char * pixel;
      int left = x - 2;
      int right = x + 2;
      int top = y - 2;
      int bottom = y + 2;
      int row,col;
      if (input->depth == IPL_DEPTH_8U) printf ("IPL_DEPTH_8U\n");
      if (input->depth == IPL_DEPTH_8S) printf ("IPL_DEPTH_8S\n");
      if (input->depth == IPL_DEPTH_16S) printf ("IPL_DEPTH_16S\n");
      if (input->depth == IPL_DEPTH_32S) printf ("IPL_DEPTH_32S\n");
      if (input->depth == IPL_DEPTH_32F) printf ("IPL_DEPTH_32F\n");
      if (input->depth == IPL_DEPTH_64F) printf ("IPL_DEPTH_64F\n");

      for(row=top;row<bottom;row++){
            for(col=left;col<right;col++){
                  pixel = &input->imageData[(input->widthStep*row)+(col*3)];
                  if(col==left && row==top){
                        RchnThresh = pixel[C_RED];
                  }

                  else{
                        RchnThresh = (RchnThresh/2) + (pixel[C_RED]/2);
                  }
                  printf("%c",pixel[C_RED]);
            }
            printf("\n");
      }

      return true;
}


/*
   This function scans the binary image for circles. Consists of 3 main steps;
   First, the binary image is edge detected. Then the OpenCV function cvFindContours
   is used to identify the contours (edges) within the images.
   Finally, an elipse is fitted over each contour.
*/

t2DPoint segFindFeatures_RThresh(IplImage * input){
      IplImage * threshed = cvCloneImage(input);
      t2DPoint features;
      int i,j;
      char * pixel_in = (char *)NULL;
      char * pixel_out = (char *)NULL;
      for(i=0;i<input->height;i++){
            for(j=0;j<(input->widthStep/3);j++){
                  pixel_in = &input->imageData[(i*input->widthStep)+(j*3)];
                  pixel_out = &threshed->imageData[(i*threshed->widthStep)+(j*3)];
                  if((pixel_in[C_RED] > (RchnThresh - 50)) &&
                        (pixel_in[C_RED] < (RchnThresh + 20)) &&
                        (pixel_in[C_GREEN] < (char)50) &&
                        (pixel_in[C_BLUE] < (char)110)){
                        pixel_out[C_RED] = (char)0;
                        pixel_out[C_BLUE] = (char)0;
                        pixel_out[C_GREEN] = (char)0;
                  }

                  else {
                        pixel_out[C_RED] = (char)255;
                        pixel_out[C_BLUE] = (char)255;
                        pixel_out[C_GREEN] = (char)255;
                  }
            }
      }
      
      features = findCircles(threshed,input);
      cvReleaseImage(&threshed);
      return features;
}


t2DPoint findCircles(IplImage * input, IplImage * draw){
      CvMemStorage * storage;
      CvSeq * contour;
      CvBox2D * box;
      CvPoint * pointArray;
      CvPoint2D32f * pointArray32f;
      CvPoint center;
      t2DPoint match4left;
      t2DPoint match4right;
      t2DPoint result;

      float myAngle,ratio;
      int i,header_size,count,length,width;
      IplImage * gray_input = cvCreateImage(cvGetSize(input),IPL_DEPTH_8U,1);
      t2DPoint markers;
      t2DPoint temppt;

      //Convert the input image to grayscale.
      cvCvtColor(input,gray_input,CV_RGB2GRAY);

      //Remove noise and smooth
      removeNoise(gray_input);

      //Edge detect the image with Canny algorithm
      cvCanny(gray_input,gray_input,25,150,3);

      //Allocate memory
      box = (CvBox2D *)malloc(sizeof(CvBox2D));
      header_size = sizeof(CvContour);
      storage = cvCreateMemStorage(1000);

      // Find all the contours in the image.
      cvFindContours(gray_input,storage,&contour,header_size,CV_RETR_EXTERNAL,CV_CHAIN_APPROX_TC89_KCOS);
      while(contour!=NULL)
      {
            if(CV_IS_SEQ_CURVE(contour))
            {
                  count = contour->total;
                  pointArray = (CvPoint *)malloc(count * sizeof(CvPoint));
                  cvCvtSeqToArray(contour,pointArray,CV_WHOLE_SEQ);
                  pointArray32f = (CvPoint2D32f *)malloc((count + 1) * sizeof(CvPoint2D32f));
                  for(i=0;i<count-1;i++){
                        pointArray32f[i].x = (float)(pointArray[i].x);
                        pointArray32f[i].y = (float)(pointArray[i].y);
                  }

                  pointArray32f[i].x = (float)(pointArray[0].x);
                  pointArray32f[i].y = (float)(pointArray[0].y);
                  if(count>7){
                        cvFitEllipse(pointArray32f,count,box);
                        ratio = (float)box->size.width/(float)box->size.height;
                        center.x = (int)box->center.x;
                        center.y = (int)box->center.y;
                        length = (int)box->size.height;
                        width = (int)box->size.width;
                        myAngle = box->angle;
                        if((center.x>0) && (center.y>0)){
                              result.x = center.x;                  
                              result.y = center.y;            
                              result.size = length;      
                              
                              markers = temppt;
                              if(draw!=NULL) cvCircle(draw,center,(int)length/2,RGB(0,0,255),-1);
                              /*cvEllipse(input,
                              center,
                              cvSize((int)width/2,(int)length/2),
                              -box->angle,
                              0,
                              360,
                              RGB(0,255,0),
                              1);*/
                        }
                  }
                  free(pointArray32f);
                  free(pointArray);
            }
            contour = contour->h_next;
      }
      free(contour);
      free(box);
      cvReleaseImage(&gray_input);
      cvReleaseMemStorage(&storage);
      return markers;
}
>>>> In that case, I don't think I have done it correctly

Not so bad at first glance ;-)   You removed all the t2DPoint return pointers. Bingo!

I will need some time ...

Regards
I would need struct IplImage. Could you check your .h files to see where it is defined?

Regards, Alex
Avatar of ukjm2k

ASKER

typedef struct _IplImage {
    int  nSize;         /* sizeof(IplImage) */
    int  ID;            /* version (=0)*/
    int  nChannels;     /* Most of OpenCV functions support 1,2,3 or 4 channels */
    int  alphaChannel;  /* ignored by OpenCV */
    int  depth;         /* pixel depth in bits: IPL_DEPTH_8U, IPL_DEPTH_8S, IPL_DEPTH_16S,
                           IPL_DEPTH_32S, IPL_DEPTH_32F and IPL_DEPTH_64F are supported */
    char colorModel[4]; /* ignored by OpenCV */
    char channelSeq[4]; /* ditto */
    int  dataOrder;     /* 0 - interleaved color channels, 1 - separate color channels.
                           cvCreateImage can only create interleaved images */
    int  origin;        /* 0 - top-left origin,
                           1 - bottom-left origin (Windows bitmaps style) */
    int  align;         /* Alignment of image rows (4 or 8).
                           OpenCV ignores it and uses widthStep instead */
    int  width;         /* image width in pixels */
    int  height;        /* image height in pixels */
    struct _IplROI *roi;/* image ROI. if NULL, the whole image is selected */
    struct _IplImage *maskROI; /* must be NULL */
    void  *imageId;     /* ditto */
    struct _IplTileInfo *tileInfo; /* ditto */
    int  imageSize;     /* image data size in bytes
                           (==image->height*image->widthStep
                           in case of interleaved data)*/
    char *imageData;  /* pointer to aligned image data */
    int  widthStep;   /* size of aligned image row in bytes */
    int  BorderMode[4]; /* ignored by OpenCV */
    int  BorderConst[4]; /* ditto */
    char *imageDataOrigin; /* pointer to very origin of image data
                              (not necessarily aligned) -
                              needed for correct deallocation */
}
IplImage;
Avatar of ukjm2k

ASKER

Is this what you are talking about mate?
I tried to get segmentation.cpp compiled at home... but failed, as I needed to create dummy prototypes for hundreds of cv structs and cv function calls for which I had no headers.

Unfortunately I forgot to transfer my home project to the machine I am currently working on, so I would have to wait until tomorrow *or* try to work on the code snippet of main.cpp you posted above.

Regards, Alex

Here my attempt:

    case MODE_RECONSTRUCT_RUN:          
        {
            // get left list of points (formerly a pointer to the first array element)
            list<t2DPoint> rleft  = tkSegment(left);
           // get right list of points (formerly a pointer to the first array element)
            list<t2DPoint> rright = tkSegment(right);

            tCFeature matches = tkCorrespond(rrleft, rright);    
            // the following check is equivalent to the == NULL check
            if(!matches.empty())
            {  
                // In old version 'pMatches' was a pointer to a node
                // we could have made similar by returning an iterator (== a node of std::list)
                // but unfortunately we have a return by value
                // so we need to check the left list from the beginning
                bool found = false;
                // loop all elements of left list
                for (list<tCFeature>::iterator it = rleft.begin(); it != rleft.end(); )
                {
                    if (!found && *it == matches)    
                        found = true;             // node found
                    // the old version calls tkReconstruct for any node following
                    // the current node and erases the node
                    // we do the same though it doesn't make much sense as
                    // the lists are temporary only and were deleted anyhow
                    if (found)
                    {
                        list<tCFeature>::iterator itn = it;
                        ++itn;
                        tkReconstruct(*it, out_text);              
                        rleft.erase(it);
                        it = itn;
                    }
                    else
                        ++it;
                }
                fp = fopen(out_file,"a");    
                fputs(out_text, fp);          
                fputs("\n",fp);              
                fclose(fp);              
                out_text[0] = '\0';          
            }    
        }
        break;    


The conversion maybe is correct, but actually I am not sure what the code should do and whether the old version was correct here. The old version created some linked lists and selectively deletes some items at the end of the lists. It is unclear what happens to the items and lists that were *not* deleted. I would say, the old version had a lot of memory leaks as most items never were freed. However, it seems good that the items were not deleted as we had a many of copies of pointers. Deleting or freeing a copy of a pointer would destroy the original allocated storage, what most likely leads to a crash if the original pointer still was used.

Note, I only removed the *node property* of your structs and put the struct objects into separate list containers instead. What I couldn't change is the logic of the program, though that might be the next thing to do.
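
For illustration, this is roughly what removing the *node property* means (a sketch only; the member variables below are placeholders, not the real fields of your structs):

#include <list>
using namespace std;

/* Old style: the struct is itself a list node; it has to be malloc'ed,
   linked by hand via pNext, and freed by hand. */
struct tOldFeature {
     int x, y;
     struct tOldFeature * pNext;
};

/* New style: the struct only carries data ... */
struct tNewFeature {
     int x, y;
};

/* ... and std::list does the linking, the allocation and the cleanup. */
list<tNewFeature> makeFeatures()
{
     list<tNewFeature> features;
     tNewFeature f = { 10, 20 };
     features.push_front(f);    /* same idea as markers.push_front(result) below */
     return features;           /* returned by value; nothing to free */
}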

Regards, Alex
Avatar of ukjm2k

ASKER

I got the following errors when I put this in:

Compiling...
main.cpp
C:\My Documents\dissertation\main.cpp(215) : error C2440: 'initializing' : cannot convert from 'struct t2DPoint' to 'class std::list<struct t2DPoint,class std::allocator<struct t2DPoint> >'
        No constructor could take the source type, or constructor overload resolution was ambiguous
C:\My Documents\dissertation\main.cpp(217) : error C2440: 'initializing' : cannot convert from 'struct t2DPoint' to 'class std::list<struct t2DPoint,class std::allocator<struct t2DPoint> >'
        No constructor could take the source type, or constructor overload resolution was ambiguous
C:\My Documents\dissertation\main.cpp(219) : error C2065: 'rrleft' : undeclared identifier
C:\My Documents\dissertation\main.cpp(229) : error C2440: 'initializing' : cannot convert from 'class std::list<struct t2DPoint,class std::allocator<struct t2DPoint> >::iterator' to 'class std::list<struct tCFeature,class std::allocator<struct tCFeature> >::iterator'
        No constructor could take the source type, or constructor overload resolution was ambiguous
C:\My Documents\dissertation\main.cpp(229) : error C2678: binary '!=' : no operator defined which takes a left-hand operand of type 'class std::list<struct tCFeature,class std::allocator<struct tCFeature> >::iterator' (or there is no acceptable conversion)
C:\My Documents\dissertation\main.cpp(242) : error C2664: 'class std::list<struct t2DPoint,class std::allocator<struct t2DPoint> >::iterator __thiscall std::list<struct t2DPoint,class std::allocator<struct t2DPoint> >::erase(class std::list<struct t2DPoint,class std::allocator<struct t2DPoint> >::iterator)' : cannot convert parameter 1 from 'class std::list<struct tCFeature,class std::allocator<struct tCFeature> >::iterator' to 'class std::list<struct t2DPoint,class std::allocator<struct t2DPoint> >::iterator'
        No constructor could take the source type, or constructor overload resolution was ambiguous
Error executing cl.exe.

main.obj - 6 error(s), 0 warning(s)
You need to change segmentation.h and segmentation.cpp. The return values change from t2DPoint to list<t2DPoint>, which is the equivalent of the t2DPoint * (pointer to the first element of a linked list) you had before.

Note, I couldn't compile the code below because the cv structures and functions were missing.

Regards, Alex


//////////////////////////////////////////////////////////////////////////
// Segmentation.h
/////////////////////////////////////////////////////////////////////////

#include <list>
using namespace std;

/* image segmentation */
/* how large an area to sample from around the click */
#define SAMPLE_SIZE 20


/* the threshold window around the colour to be matched */
#define THRESH_BAND 60
bool tkGenerateColMap(IplImage * input,int x,int y);
bool segColMapSimpleAve(IplImage * input, int x, int y);
bool segColMapRThresh(IplImage * input, int x, int y);
list<t2DPoint> tkSegment(IplImage * input);
list<t2DPoint> segFindFeatures_Ave(IplImage * input);
list<t2DPoint> segFindFeatures_RThresh(IplImage * input);
list<t2DPoint> findCircles(IplImage * input,IplImage * draw);


//////////////////////////////////////////////////////////////////////////
// Segmentation.cpp
/////////////////////////////////////////////////////////////////////////

#include "global.h"
#include "segmentation.h"
#include "library.h"


int thresh_r;
int thresh_g;
int thresh_b;
char RchnThresh;
extern bool colmapdone;

bool tkGenerateColMap(IplImage * input,int x,int y){
     if(segColMapSimpleAve(input,x,y)) return true;
     else return false;
}

list<t2DPoint> tkSegment(IplImage * input){
     return segFindFeatures_Ave(input);
}


/*
   This function creates a sample window around the selected point in order to
   average out the colour across this window; avoids "noisy" pixels
*/

bool segColMapSimpleAve(IplImage * input, int x, int y){
     char sample[SAMPLE_SIZE][SAMPLE_SIZE][3];
     int row,col,chan,v;
     int norm_total;
     int left = x - (SAMPLE_SIZE/2);
     int right = x + (SAMPLE_SIZE/2);
     int top = y - (SAMPLE_SIZE/2);
     int bottom = y + (SAMPLE_SIZE/2);

     /* Duplicate the bit we need to sample */
     for(row=top;row<bottom;row++){
          for(col=left;col<right;col++){
               sample[row-top][col-left][C_RED] = input->imageData[(row*input->widthStep)+(col*3)+C_RED];
               sample[row-top][col-left][C_BLUE] = input->imageData[(row*input->widthStep)+(col*3)+C_BLUE];
               sample[row-top][col-left][C_GREEN] = input->imageData[(row*input->widthStep)+(col*3)+C_GREEN];
          }
     }

     /* Now normalise the sample */
     for(row=0;row<SAMPLE_SIZE;row++){
          for(col=0;col<SAMPLE_SIZE;col++){
               norm_total = sample[row][col][C_RED] + sample[row][col][C_BLUE] +
                    sample[row][col][C_GREEN];
               sample[row][col][C_RED] =
                    (int)((float)sample[row][col][C_RED]/(float)norm_total*255.0);
               sample[row][col][C_BLUE] =
                    (int)((float)sample[row][col][C_BLUE]/(float)norm_total*255.0);
               sample[row][col][C_GREEN] =
                    (int)((float)sample[row][col][C_GREEN]/(float)norm_total*255.0);
          }
     }

     /* Now smooth it a bit */
     for(row=0;row<SAMPLE_SIZE;row++){
          for(col=0;col<SAMPLE_SIZE;col++){
               for(chan=0;chan<3;chan++){
                    v = sample[row][col][chan];
                    if(v>0){
                         v -= 1;
                         sample[row][col][chan] = v;
                         if(row>0 && sample[row-1][col][chan] < v) sample[row-1][col][chan]
                              = v;

                         if(col>0 && sample[row][col-1][chan] < v) sample[row][col-1][chan]
                              = v;
                    }
               }
          }
     }

     for(row=0;row<SAMPLE_SIZE;row++){
          for(col=0;col<SAMPLE_SIZE;col++){
               for(chan=0;chan<3;chan++){
                    v = sample[row][col][chan];
                    if(v>0){
                         v -= 1;
                         sample[row][col][chan] = v;
                         if(row < SAMPLE_SIZE-1 && sample[row+1][col][chan] < v)
                              sample[row+1][col][chan] = v;
                         if(col < SAMPLE_SIZE-1 && sample[row][col+1][chan] < v)
                              sample[row][col+1][chan] = v;
                    }
               }
          }
     }

     /*Now find the average*/
     thresh_r = sample[0][0][C_RED];
     thresh_b = sample[0][0][C_BLUE];
     thresh_g = sample[0][0][C_GREEN];
     for(row=0;row<SAMPLE_SIZE;row++){
          for(col=0;col<SAMPLE_SIZE;col++){
               thresh_r = (thresh_r+sample[row][col][C_RED])/2;
               thresh_b = (thresh_b+sample[row][col][C_BLUE])/2;
               thresh_g = (thresh_g+sample[row][col][C_GREEN])/2;
          }
     }

     printf("\nTHRESHHOLDS: R:%i G:%i B:%i\n",thresh_r,thresh_g,thresh_b);
     return true;
}


/*
   This function generates a binary image by passing over each of the pixels
   in the image, thresholding them on the values previously found from sampling;
*/

list<t2DPoint> segFindFeatures_Ave(IplImage * input){
     IplImage * temp;
     list<t2DPoint> features;

     int i,j;
     int norm_r=0,norm_g=0,norm_b=0;
     int low_r=0,low_g=0,low_b=0;
     int high_r=0,high_g=0,high_b=0;
     int norm_total=0,in_total=0;
     char * pixel_in = (char *)NULL;
     char * pixel_out = (char *)NULL;

     /* Clone the input image! */
     temp = cvCloneImage(input);
     removeNoise(input);
     removeNoise(input);

     /* Now work out what the normalised boundaries are from
     the specified r,g,b values */
     in_total = thresh_r+thresh_g+thresh_b;
     if(in_total!=0){
     low_r = (int)(((float)thresh_r / (float)in_total)*255.0);
     low_g = (int)(((float)thresh_g / (float)in_total)*255.0);
     low_b = (int)(((float)thresh_b / (float)in_total)*255.0);
     high_r = low_r + THRESH_BAND;
     high_g = low_g + THRESH_BAND;
     high_b = low_b + THRESH_BAND;
     low_r -= THRESH_BAND;
     low_g -= THRESH_BAND;
     low_b -= THRESH_BAND;
     }

     else {
          low_r = 0;
          low_g = 0;
          low_b = 0;
          high_r = THRESH_BAND;
          high_g = THRESH_BAND;
          high_b = THRESH_BAND;
     }

     for(i=0;i<input->height;i++){
          for(j=0;j<(input->widthStep/3);j++){
               pixel_in = &input->imageData[(i*input->widthStep)+(j*3)];
               pixel_out = &temp->imageData[(i*input->widthStep)+(j*3)];
               norm_total = (int)pixel_in[C_RED] + (int)pixel_in[C_GREEN] + (int)pixel_in[C_BLUE];
               if(norm_total!=0){
                    norm_r = (int)(((float)pixel_in[C_RED] / (float)norm_total)*255.0);
                    norm_g = (int)(((float)pixel_in[C_GREEN] / (float)norm_total)*255.0);
                    norm_b = (int)(((float)pixel_in[C_BLUE] / (float)norm_total)*255.0);
               }

               else if(norm_total==0){
                    norm_r=0;norm_g=0;norm_b=0;
               }

               if(norm_r >= low_r && norm_r <= high_r &&
                    norm_g >= low_g && norm_g <= high_g &&
                    norm_b >= low_b && norm_b <= high_b){
                    pixel_out[C_RED] = (char)0;
                    pixel_out[C_GREEN] = (char)0;
                    pixel_out[C_BLUE] = (char)0;
               }
               else {
                    pixel_out[C_RED] = (char)255;
                    pixel_out[C_GREEN] = (char)255;
                    pixel_out[C_BLUE] = (char)255;
               }
          }
     }

     features = findCircles(temp,input);
     cvReleaseImage(&temp);
     return features;
}


bool segColMapRThresh(IplImage * input, int x, int y){
     char * pixel;
     int left = x - 2;
     int right = x + 2;
     int top = y - 2;
     int bottom = y + 2;
     int row,col;
     if (input->depth == IPL_DEPTH_8U) printf ("IPL_DEPTH_8U\n");
     if (input->depth == IPL_DEPTH_8S) printf ("IPL_DEPTH_8S\n");
     if (input->depth == IPL_DEPTH_16S) printf ("IPL_DEPTH_16S\n");
     if (input->depth == IPL_DEPTH_32S) printf ("IPL_DEPTH_32S\n");
     if (input->depth == IPL_DEPTH_32F) printf ("IPL_DEPTH_32F\n");
     if (input->depth == IPL_DEPTH_64F) printf ("IPL_DEPTH_64F\n");

     for(row=top;row<bottom;row++){
          for(col=left;col<right;col++){
               pixel = &input->imageData[(input->widthStep*row)+(col*3)];
               if(col==left && row==top){
                    RchnThresh = pixel[C_RED];
               }

               else{
                    RchnThresh = (RchnThresh/2) + (pixel[C_RED]/2);
               }
               printf("%c",pixel[C_RED]);
          }
          printf("\n");
     }

     return true;
}


/*
   This function scans the binary image for circles. It consists of 3 main steps:
   first, the binary image is edge detected. Then the OpenCV function cvFindContours
   is used to identify the contours (edges) within the image.
   Finally, an ellipse is fitted over each contour.
*/

list<t2DPoint> segFindFeatures_RThresh(IplImage * input){
     IplImage * threshed = cvCloneImage(input);
     list<t2DPoint> features;
     int i,j;
     char * pixel_in = (char *)NULL;
     char * pixel_out = (char *)NULL;
     for(i=0;i<input->height;i++){
          for(j=0;j<(input->widthStep/3);j++){
               pixel_in = &input->imageData[(i*input->widthStep)+(j*3)];
               pixel_out = &threshed->imageData[(i*threshed->widthStep)+(j*3)];
               if((pixel_in[C_RED] > (RchnThresh - 50)) &&
                    (pixel_in[C_RED] < (RchnThresh + 20)) &&
                    (pixel_in[C_GREEN] < (char)50) &&
                    (pixel_in[C_BLUE] < (char)110)){
                    pixel_out[C_RED] = (char)0;
                    pixel_out[C_BLUE] = (char)0;
                    pixel_out[C_GREEN] = (char)0;
               }

               else {
                    pixel_out[C_RED] = (char)255;
                    pixel_out[C_BLUE] = (char)255;
                    pixel_out[C_GREEN] = (char)255;
               }
          }
     }
     
     features = findCircles(threshed,input);
     cvReleaseImage(&threshed);
     return features;
}


list<t2DPoint> findCircles(IplImage * input, IplImage * draw){
     CvMemStorage * storage;
     CvSeq * contour;
     CvBox2D * box;
     CvPoint * pointArray;
     CvPoint2D32f * pointArray32f;
     CvPoint center;
     t2DPoint result;

     float myAngle,ratio;
     int i,header_size,count,length,width;
     IplImage * gray_input = cvCreateImage(cvGetSize(input),IPL_DEPTH_8U,1);
     list<t2DPoint> markers;

     //Convert the input image to grayscale.
     cvCvtColor(input,gray_input,CV_RGB2GRAY);

     //Remove noise and smooth
     removeNoise(gray_input);

     //Edge detect the image with Canny algorithm
     cvCanny(gray_input,gray_input,25,150,3);

     //Allocate memory
     box = (CvBox2D *)malloc(sizeof(CvBox2D));
     header_size = sizeof(CvContour);
     storage = cvCreateMemStorage(1000);

     // Find all the contours in the image.
     cvFindContours(gray_input,storage,&contour,header_size,CV_RETR_EXTERNAL,CV_CHAIN_APPROX_TC89_KCOS);
     while(contour!=NULL)
     {
          if(CV_IS_SEQ_CURVE(contour))
          {
               count = contour->total;
               pointArray = (CvPoint *)malloc(count * sizeof(CvPoint));
               cvCvtSeqToArray(contour,pointArray,CV_WHOLE_SEQ);
               pointArray32f = (CvPoint2D32f *)malloc((count + 1) * sizeof(CvPoint2D32f));
               for(i=0;i<count-1;i++){
                    pointArray32f[i].x = (float)(pointArray[i].x);
                    pointArray32f[i].y = (float)(pointArray[i].y);
               }

               pointArray32f[i].x = (float)(pointArray[0].x);
               pointArray32f[i].y = (float)(pointArray[0].y);
               if(count>7){
                    cvFitEllipse(pointArray32f,count,box);
                    ratio = (float)box->size.width/(float)box->size.height;
                    center.x = (int)box->center.x;
                    center.y = (int)box->center.y;
                    length = (int)box->size.height;
                    width = (int)box->size.width;
                    myAngle = box->angle;
                    if((center.x>0) && (center.y>0)){
                         result.x = center.x;              
                         result.y = center.y;          
                         result.size = length;    
                         
                         markers.push_front(result);
                         if(draw!=NULL) cvCircle(draw,center,(int)length/2,RGB(0,0,255),-1);
                         /*cvEllipse(input,
                         center,
                         cvSize((int)width/2,(int)length/2),
                         -box->angle,
                         0,
                         360,
                         RGB(0,255,0),
                         1);*/
                    }
               }
               free(pointArray32f);
               free(pointArray);
          }
          contour = contour->h_next;
     }
     free(contour);
     free(box);
     cvReleaseImage(&gray_input);
     cvReleaseMemStorage(&storage);
     return markers;
}
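
For completeness, here is a minimal sketch of how a caller could consume the new list<t2DPoint> return value (this is not your main.cpp; it assumes t2DPoint has the int members x, y and size that findCircles fills in above):

#include <cstdio>

void printMarkers(IplImage * img)
{
     list<t2DPoint> points = tkSegment(img);

     for (list<t2DPoint>::iterator it = points.begin(); it != points.end(); ++it)
          printf("marker at (%i,%i), size %i\n", it->x, it->y, it->size);

     /* nothing to free - the list cleans itself up when it goes out of scope */
}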

ASKER CERTIFIED SOLUTION
Avatar of itsmeandnobodyelse
Avatar of ukjm2k

ASKER

I'm sure I adjusted the code as required, but I get these errors:

segmentation.cpp
C:\My Documents\dissertation\segmentation.cpp(127) : error C2143: syntax error : missing ',' before '<'
C:\My Documents\dissertation\segmentation.cpp(127) : error C2059: syntax error : '<'
C:\My Documents\dissertation\segmentation.cpp(139) : error C2065: 'input' : undeclared identifier
C:\My Documents\dissertation\segmentation.cpp(167) : error C2227: left of '->height' must point to class/struct/union
C:\My Documents\dissertation\segmentation.cpp(168) : error C2227: left of '->widthStep' must point to class/struct/union
C:\My Documents\dissertation\segmentation.cpp(169) : error C2227: left of '->imageData' must point to class/struct/union
C:\My Documents\dissertation\segmentation.cpp(169) : error C2227: left of '->widthStep' must point to class/struct/union
C:\My Documents\dissertation\segmentation.cpp(170) : error C2227: left of '->widthStep' must point to class/struct/union
C:\My Documents\dissertation\segmentation.cpp(197) : error C2065: 'features' : undeclared identifier
C:\My Documents\dissertation\segmentation.cpp(270) : error C2660: 'findCircles' : function does not take 2 parameters
C:\My Documents\dissertation\segmentation.cpp(357) : error C2143: syntax error : missing ';' before '}'
C:\My Documents\dissertation\segmentation.cpp(357) : error C2143: syntax error : missing ';' before '}'
C:\My Documents\dissertation\segmentation.cpp(357) : error C2143: syntax error : missing ';' before '}'
Error executing cl.exe.

segmentation.obj - 13 error(s), 0 warning(s)
It seems as if some header includes are missing:

You need the headers where IplImage and t2DPoint were defined.
You need the <list> header and 'using namespace std;'.
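
Roughly, the top of segmentation.cpp would then look like this (only a sketch - "cv.h" is an assumption; use whichever headers actually define IplImage and t2DPoint in your project):

/* segmentation.cpp - the includes it needs at the top */
#include "cv.h"            /* or wherever IplImage is defined (OpenCV) */
#include "global.h"        /* assumed to define t2DPoint */
#include "segmentation.h"  /* which itself needs #include <list> and using namespace std; */
#include "library.h"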

You should post the latest versions of your header files, as I couldn't compile the code myself.

>>>> syntax error : missing ',' before '<'

I assume it's line

     list<t2DPoint> segFindFeatures_Ave(IplImage * input){


The error occurs because global.h wasn't included or because <list> wasn't included.

>>>> error C2065: 'input' : undeclared identifier

It seems as if the IplImage header wasn't included. Search your header files for struct IplImage and include the one that defines it.

>>>> left of '->height' must point to class/struct/union
>>>> left of '->widthStep' must point to class/struct/union
>>>> left of '->imageData' must point to class/struct/union
>>>> left of '->widthStep' must point to class/struct/union
>>>> left of '->widthStep' must point to class/struct/union

All of these need the IplImage header.

>>>> 'features' : undeclared identifier

This is a follow-on error: once the syntax error above is fixed, line 128 will compile and 'features' will be declared:

128:      list<t2DPoint> features;

>>>> 'findCircles' : function does not take 2 parameters

The findCircles function above takes 2 parameters. Check that the declaration in every header file matches this definition.
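
For example, declaration and definition have to agree exactly (a sketch only; the parameter names don't matter, the types and their count do):

/* segmentation.h */
list<t2DPoint> findCircles(IplImage * input, IplImage * draw);

/* segmentation.cpp - same parameter list as the declaration above */
list<t2DPoint> findCircles(IplImage * input, IplImage * draw)
{
     list<t2DPoint> markers;
     /* ... */
     return markers;
}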

>>>> syntax error : missing ';' before '}'

The last '}' must be removed.


ukjm2k, all these errors are not very difficult to find and solve if you have some experience with Visual C++. If this is your very first program, I don't think you'll succeed, because you would still need to debug and change the code after it compiles. As I don't have the camera library and code, I can't help you with that part.

Regards, Alex