ukjm2k
asked on
Unhandled exception 0xC0000005: Access Violation
Hi everyone. This is my first question here, so please bear with me.
I have a problem...
When I compile my program in VC++ 6, I do not get any compile or link errors, but
when I run it, it gets to a certain point and freezes, and Windows reports the following:
"Unhandled exception 0xC0000005: Access Violation"
Debugging the program leads to a line which reads
"pMatches = pTempMatch->pNext;"
Does anyone know why this is happening?
The code which is associated with this problem is below.
I am not sure what is going wrong.
Can anyone advise me on what to do, preferably with some code please.
Many thanks.
void tkDespatch(IplImage * left, IplImage * right, int mode){
      struct tCFeature * pMatches;
      struct tCFeature * pTempMatch;
      char out_text[2048];
      FILE * fp;
      .....
      case MODE_RECONSTRUCT_RUN:
           {
                 pMatches = tkCorrespond(tkSegment(left),tkSegment(right));
                 if(pMatches!=NULL){
                      pTempMatch = pMatches;
                      while(pTempMatch!=NULL){
                            tkReconstruct(pTempMatch,out_text);
                            pMatches = pTempMatch->pNext;
                            free(pTempMatch);
                            pTempMatch = pMatches;
                      }
                      fp = fopen(out_file,"a");
                      fputs(out_text,fp);
                      fputs("\n",fp);
                      fclose(fp);
                      out_text[0] = '\0';
                 }
           }
      break;
}
How are you initializing 'pMatches'? This part is missing from your code, and that might be of interest...
ASKER
Thanks for the replies.
mgpeschke:
I have tried the code you added, but it seems to be giving me the same "Unhandled exception 0xC0000005: Access Violation" error, except it now points to the new line: "if (pTempMatch->pNext == NULL)".
Further expansion in debug mode gives the errors:
pTempMatch 0x5bc712d0
pNext CXX0030: Error: Expression cannot be evaluated
jkr
Not too sure what you mean by initialise, m8. These 6 occurrences are the only places where I have used 'pMatches'....
>>>> pTempMatch->pNext
The access violation happens because pTempMatch is an invalid pointer. The arrow operator -> can't be applied because pTempMatch doesn't hold a valid memory address.
There are a few possibilities why the pointer is invalid:
1. pMatches = tkCorrespond(tkSegment(left),tkSegment(right));
pTempMatch was assigned from pMatches, and pMatches comes from the tkCorrespond function. You should make sure that this function always returns either a valid pointer or NULL.
2. The 'pNext' member variable in struct tCFeature wasn't initialized or wasn't properly set.
Obviously it's some sort of linked list, but you have to make sure that the list is properly managed. If, for example, an entry is removed from the list, the 'pNext' of its predecessor must be relinked to the following entry or set to NULL.
3. The pointer was deleted and therefore invalid.
If you are using linked lists you should never delete entries while they are still members of the list. Either remove the entry from the list first and then delete it, or delete the whole list once it isn't used any more.
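For illustration, deleting a whole NULL-terminated list like yours normally looks like the loop below (a minimal sketch, assuming every node was allocated with malloc and the last node's pNext really is NULL):

void freeMatches(struct tCFeature * pHead)
{
   while (pHead != NULL) {
      struct tCFeature * pNext = pHead->pNext;   /* remember the successor first */
      free(pHead);                               /* then release the current node */
      pHead = pNext;                             /* and move on */
   }
}

If a loop like this crashes, the list itself is corrupt, i.e. some pNext was never set to a valid node or to NULL.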
Regards, Alex
ASKER
itsmeandnobodyelse
Thanks for the reply.
I have taken your 3 suggestions into consideration, but again, as far as I can see, I'm not doing anything wrong.
Suggestion 1.
pMatches = tkCorrespond(tkSegment(left),tkSegment(right));
tkCorrespond is defined below:
struct tCFeature * tkCorrespond(struct t2DPoint * left_features,struct t2DPoint * right_features){
      if(left_features!=NULL && right_features!=NULL)
      return corrSimple(left_features,right_features);
      else
      return NULL;
      }
Suggestion 2.
The 'pNext' member variable in struct tCFeature wasn't initialized or wasn't properly set.
pNext defined as:
/* structure for feature correspondences */
typedef struct tCFeature {
      int iLeft[2];
      int iRight[2];
      struct tCFeature * pNext;
} tCFeature;
Used later to find correspondences:
struct tCFeature * corrSimple(struct t2DPoint * left, struct t2DPoint * right){
      struct t2DPoint * t_left = left;
      struct t2DPoint * t_right = right;
      struct t2DPoint * match4left = (struct t2DPoint *)NULL;
      struct t2DPoint * match4right = (struct t2DPoint *)NULL;
      struct t2DPoint * temp_del = (struct t2DPoint *)NULL;
      struct tCFeature * result = (struct tCFeature *)NULL;
      struct tCFeature * temp_match = (struct tCFeature *)NULL;
      bool match = false;
      while(t_left!=NULL){
           match = false;
           match4left = corrSimple_1(t_left,right);
           if(match4left!=NULL){
                 match4right = corrSimple_1(match4left,left);
                 if(t_left==match4right){
                      match = true;
                      //printf("MATCHED! L[%i,%i] R[%i,%i]\n",t_left->x,t_left->y,match4left->x,match4left->y);
                      temp_match = (struct tCFeature *)malloc(sizeof(struct tCFeature));
                      temp_match->iLeft[0] = t_left->x;
                      temp_match->iLeft[1] = t_left->y;
                      temp_match->iRight[0] = match4left->x;
                      temp_match->iRight[1] = match4left->y;
                      temp_match->pNext = result;
                      result = temp_match;
Suggestion 3.
The pointer was deleted and therefore invalid.
Once the above matches are made, the elements are deleted:
//Delete the element from the left list
                      temp_del = t_left;
                      t_left=t_left->next;
                      if(temp_del->previous==NULL){
                            left = temp_del->next;
                      }
                      else {
                            temp_del->previous->next = temp_del->next;
                      }
                      if(temp_del->next!=NULL){
                            temp_del->next->previous = temp_del->previous;
                      }
                      free(temp_del);
//Now do the element from the right list
.............
Can you see anything wrong with these?
Coded examples would be very helpful.
Thanks..
ASKER
Can anyone help please...?
>>>>           if(temp_del->previous==NULL){
>>>>                 left = temp_del->next;
>>>>           }
What is 'left' here?
If temp_del->previous is NULL, the left list now starts at the new next pointer. Generally I can't see an error, but the sequence isn't programmed well. Look at this:
t2DPoint* deleteNode(t2DPoint* pNode)
{
   if (pNode == NULL)
      return NULL;
   t2DPoint* pPrev = pNode->previous;
   t2DPoint* pNext = pNode->next;
   if (pPrev != NULL)
      pPrev->next = pNext;        /* unlink from the predecessor */
   if (pNext != NULL)
      pNext->previous = pPrev;    /* unlink from the successor */
   free(pNode);
   return pNext;
}
You could use it like this:
 // delete left node
 t_left = deleteNode(t_left);
 // delete right node
 t_right = deleteNode(t_right);
I would suggest you change your program to use functions like the one above. You would also need insertNode and appendNode functions. The advantage of separating the linked-list handling from the rest of the code is that you can easily verify that all pointers are valid, and you can test the linked-list functionality independently of the rest.
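An appendNode along the same lines could look like this (a sketch only, assuming the same next/previous members of t2DPoint):

t2DPoint* appendNode(t2DPoint* pHead, t2DPoint* pNew)
{
   /* append pNew at the end of the list and return the (possibly new) head */
   if (pNew == NULL)
      return pHead;
   pNew->next = NULL;
   pNew->previous = NULL;
   if (pHead == NULL)
      return pNew;
   t2DPoint* pLast = pHead;
   while (pLast->next != NULL)
      pLast = pLast->next;
   pLast->next = pNew;
   pNew->previous = pLast;
   return pHead;
}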
If your program still crashes, you should debug it step by step and check every pointer that is used to see whether it is valid. You can see that by looking at the pointer members of your structs in the debugger: every pointer should show a valid address when viewed there.
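A quick way to find the spot where the chain goes bad is a diagnostic walk over the match list before you start freeing anything, e.g. (sketch):

/* print every node before following its pNext; the last line printed
   shows the bogus pNext value that the loop is about to follow */
struct tCFeature * p = pMatches;
while (p != NULL) {
   printf("node at %p, pNext = %p\n", (void*)p, (void*)p->pNext);
   p = p->pNext;
}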
BTW, did you consider using std::list<tCFeature> instead of your own list?
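With std::list all of the manual pointer bookkeeping disappears, for example (sketch of the MODE_RECONSTRUCT_RUN branch, tkReconstruct and out_text as in your tkDespatch):

#include <list>

std::list<tCFeature> matches;          // the list owns its nodes, no malloc/free

tCFeature m;                           // fill iLeft/iRight as in corrSimple
matches.push_back(m);                  // append a match

for (std::list<tCFeature>::iterator it = matches.begin(); it != matches.end(); ++it)
      tkReconstruct(&*it, out_text);   // no manual pNext handling

matches.clear();                       // releases every node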
Regards, Alex
ASKER
Does anyone else have a suggestion on how I can overcome this problem? It still persists...
Thanks
You have to provide all your code as the error most likely isn't in the parts you posted above.
Regards, Alex
ASKER
itsmeandnobodyelse
I'm happy to do that, but the FULL code is around 2000 lines long :s .....
I've never posted here before so I'm not sure what the best way to go about this is, but I'm sure 2000 lines in this section is a little too much!!
Perhaps if someone is willing to take a look at it I could send it to them personally... (again, apologies if this is something that is not appropriate....)
Let me know guys.... many thanks.
Sending mail to experts directly is against EE rules, as that expert would get an advantage.
2000 lines isn't too much to post here; you should remove all TAB characters and make clear where each new file begins. Alternatively, you could put your source on a web site where experts can download it. However, as I am not at home until Thursday, I couldn't download it from an FTP site.
Regards, Alex
ASKER
In that case, I shall be putting my code up here some time this evening.
Before that, I thought I'd just tell everyone that I am building a stereo vision 3D reconstruction system that uses OpenCV.
This is an external library that I use for things such as camera calibration.
I will include ALL the code, so I thought it appropriate to declare the use of OpenCV first.
However, I am sure that the error I am getting has nothing to do with these external library functions, rather with my own programming errors, so I don't see it being a problem...
Many thanks to all so far, and I hope you check back later to help sort my problem.
ASKER
Hi guys. Below is the full code for my program.
As mentioned before, the debugger points at "pMatches = pTempMatch->pNext;", but I have been unable to fix it.
Compiling and linking the program produces no errors, but while the program is being executed, it freezes.
To be more specific, the system is first initialised and camera calibration completes successfully using OpenCV. After this, input from two webcams is supposed to give me a 3D reconstruction of the two views, but this is where it falls short: while finding correspondences between the two inputs.
Hope this makes it a little clearer.
Here is the code, and thanks in advance:
global.h
/* Global definitions */
#include <stdio.h>
#include <stdlib.h>
#include <cv.h>
#include <cvaux.h>
#include <highgui.h>
#ifndef GLOBALS
#define GLOBALS
/* define run-modes */
#define MODE_INIT 0
#define MODE_CALIBRATE_INT 1
#define MODE_CALIBRATE_EXT 2
#define MODE_RECONSTRUCT_SAMPLE 3
#define MODE_RECONSTRUCT_RUN 4
/* define different calibration types */
#define CALIB_UNSET 0
#define CALIB_FILE 1
#define CALIB_BMP 2
#define CALIB_LIVE 3
/* define number of points require for ext calibration */
#define EXT_REQ_POINTS 35
/* define left and right camera indices */
#define SRC_LEFT_CAMERA 0
#define SRC_RIGHT_CAMERA 1
/* colour channels in an IplImage */
#define C_BLUE 0
#define C_GREEN 1
#define C_RED 2
/* structure for feature correspondences */
typedef struct tCFeature {
int iLeft[2];
int iRight[2];
struct tCFeature * pNext;
} tCFeature;
/* 2D point struct (includes next/prev links, unlike OpenCV) */
typedef struct t2DPoint {
int x;
int y;
int size;
struct t2DPoint * next;
struct t2DPoint * previous;
} t2DPoint;
/* struct representing a camera ray toward the object */
typedef struct camera_ray {
double vector[3];
double cam_tran[3];
double cam_rot;
double pixelsize[2];
} camera_ray;
#endif
///////////////////////////////////////////////////////////////////////////////
/* procedures for reading/writing images and data files */
library.h
/* fileio.h :: File I/O Library header */
void writeImagePair(IplImage** images,const char * prefix);
void writeImage(IplImage* image,const char * prefix);
void removeNoise(IplImage * src);
void combine(IplImage * src1,IplImage * src2,IplImage * output,int mode);
library.cpp
#include "global.h"
extern bool fIntrinsicDone;
extern bool fExtrinsicDone;
extern int fCalibMethod;
/* procedures for reading/writing images and data files */
void writeImagePair(IplImage** images,const char * prefix){
char filename[100];
sprintf(filename,"%s-imgL. bmp",prefi x);
cvvSaveImage(filename,imag es[0]);
sprintf(filename,"%s-imgR. bmp",prefi x);
cvvSaveImage(filename,imag es[1]);
}
void writeImage(IplImage* image,const char * prefix){
char filename[100];
sprintf(filename,"%s-img.b mp",prefix );
cvvSaveImage(filename,imag e);
}
void removeNoise(IplImage * src){
//get the size of input_image (src)
CvSize sz = cvSize(src->width & -2, src->height & -2);
//create temp-image
IplImage* pyr = cvCreateImage(cvSize(sz.width/2, sz.height/2),
src->depth, src->nChannels);
cvPyrDown( src, pyr, CV_GAUSSIAN_5x5); //pyr DOWN
cvPyrUp( pyr, src, CV_GAUSSIAN_5x5); //and UP
cvReleaseImage(&pyr); //release temp
}
void combine(IplImage * src1,IplImage * src2,IplImage * new_img,int mode){
int row,col;
char * new_pixel;
char * src_pixel;
CvFont disp_font;
int text_col = CV_RGB(0,255,0);
new_img->origin = 1;
for(row=0;row<src1->height;row++){
for(col=0;col<src1->width;col++){
new_pixel = &new_img->imageData[(row*new_img->widthStep)+(col*3)];
src_pixel = &src1->imageData[(row*src1->widthStep)+(col*3)];
new_pixel[C_BLUE] = src_pixel[C_BLUE];
new_pixel[C_GREEN] = src_pixel[C_GREEN];
new_pixel[C_RED] = src_pixel[C_RED];
}
for(col=0;col<src2->width;col++){
new_pixel = &new_img->imageData[(row*new_img->widthStep)+src1->widthStep+(col*3)];
src_pixel = &src2->imageData[(row*src2->widthStep)+(col*3)];
new_pixel[C_BLUE] = src_pixel[C_BLUE];
new_pixel[C_GREEN] = src_pixel[C_GREEN];
new_pixel[C_RED] = src_pixel[C_RED];
}
}
for(row=src1->height;row<new_img->height;row++){
for(col=0;col<new_img->width;col++){
new_pixel = &new_img->imageData[(row*new_img->widthStep)+(col*3)];
new_pixel[C_BLUE] = (char)0;
new_pixel[C_GREEN] = (char)0;
new_pixel[C_RED] = (char)0;
}
}
cvInitFont(&disp_font,CV_FONT_VECTOR0,0.35,0.35,0.0,1);
switch(mode){
case MODE_INIT:
{
cvPutText(new_img,"initialised.... awaiting calibration (press 'c')",cvPoint(5,300),&disp_font,text_col);
}
break;
case MODE_CALIBRATE_INT:
{
if(!fIntrinsicDone){
switch(fCalibMethod){
case CALIB_UNSET:
{
cvPutText(new_img,"intrinsic calibration from [1]file [2]bmp [3]live",cvPoint(5,300),&disp_font,text_col);
}
break;
case CALIB_LIVE:
{
cvPutText(new_img,"intrinsic calibration from live stream...",cvPoint(5,300),&disp_font,text_col);
}
break;
}
}
else
cvPutText(new_img,"done... awaiting extrinsic calibration (press 'c')",cvPoint(5,300),&disp_font,text_col);
}
break;
case MODE_CALIBRATE_EXT:
{
cvPutText(new_img,"extrinsic calibration mode...",cvPoint(5,300),&disp_font,text_col);
}
break;
case MODE_RECONSTRUCT_SAMPLE:
{
cvPutText(new_img,"select sample marker colour...",cvPoint(5,300),&disp_font,text_col);
}
break;
case MODE_RECONSTRUCT_RUN:
{
cvPutText(new_img,"reconstructing...",cvPoint(5,300),&disp_font,text_col);
}
break;
}
}
///////////////////////////////////////////////////////////////////////////////
main.cpp
#include "global.h"
#include <conio.h>
#include <time.h>
#include <math.h>
#include <string.h>
#include "calibrate.h"
#include "segmentation.h"
#include "correspondence.h"
#include "reconstruct.h"
#include "library.h"
/* forward declaration of function */
void tkDespatch(IplImage * left, IplImage * right, int mode);
void tkLMseHandler(int event,int x, int y, int flags);
/* calibration variables */
CvCalibFilter clCalibration;
CvCamera *clLeftCamera, *clRightCamera;
bool fIntrinsicDone=false, fExtrinsicDone=false;
int fCalibMethod=CALIB_UNSET, iCalibPrevFrame;
/* colour segmentation variables */
//int thresh_r=0,thresh_g=0,thresh_b=0;
/* current system flags */
int fRunMode = MODE_INIT;
bool fGenColMap = false;
/* x,y coords to generate segmentation colour map from */
int colmap_x=0, colmap_y=0;
char out_file[80];
int main(int argc, char **argv)
{
CvCapture *left_camera = (CvCapture *)NULL ,*right_camera = (CvCapture *)NULL;
IplImage *left_frame = (IplImage *)NULL ,*right_frame = (IplImage *)NULL;
IplImage *display = (IplImage *)NULL;
int cCmd,fRunLoop = 1;
double dEtalonParams[3] = {8,6,3.3};
/* attach to the cameras and make sure that there are two */
printf("Selecting left camera....\n");
left_camera = cvCaptureFromCAM(-1);
printf("Selecting right camera....\n");
right_camera = cvCaptureFromCAM(-1);
if (!left_camera || !right_camera){
printf("Unable to attach to both cameras...\n");
if (left_camera) cvReleaseCapture(&left_camera);
if (right_camera) cvReleaseCapture(&right_camera);
exit(1);
}
/* create the output to display the camera images in */
/* we'll also set up a mouse callback for sampling the */
/* segmentation colour */
cvvNamedWindow("Output", CV_WINDOW_AUTOSIZE);
cvSetMouseCallback("Output ", tkLMseHandler);
/* setup the calibration class */
clCalibration.SetEtalon(CV _CALIB_ETA LON_CHESSB OARD,dEtal onParams);
clCalibration.SetCameraCou nt(2);
clCalibration.SetFrames(20 );
/* define the name of the output file using the current */
/* time - simple, but effective!! */
sprintf(out_file,"%i.dat", clock());
/* enter the main loop */
while(fRunLoop){
left_frame = cvQueryFrame(left_camera);
right_frame = cvQueryFrame(right_camera);
cCmd = cvvWaitKeyEx(0,1);
switch(tolower(cCmd))
{
case 'q':
/* got a 'q', so we want to quit. set the loop flag */
/* appropriately */
{
fRunLoop = 0;
}
break;
case 'c':
/* got a 'c', so calibration has been inited. set */
/* the runmode correctly depending on what has */
/* already been done */
{
if(fRunMode == MODE_INIT){
printf("Intrinsic parameter calibration mode....\n");
printf("Select (1)Calibration File (2)Bitmaps (3)Live
Cameras\n");
fRunMode = MODE_CALIBRATE_INT;
}
else if(fRunMode == MODE_CALIBRATE_INT && fIntrinsicDone){
printf("Extrinsic parameter calibration mode...\n");
printf("Please place the checkerboard at the correct position
and\n");
printf("press a key...\n");
cvWaitKey(0);
fRunMode = MODE_CALIBRATE_EXT;
}
}
break;
case 'r':
/* got a 'r', so we want to resample... that is of */
/* course assuming that we have gotten that far! */
{
if(fRunMode>=MODE_RECONSTRUCT_SAMPLE){
fRunMode = MODE_RECONSTRUCT_SAMPLE;
fGenColMap = false;
colmap_x = 0;
colmap_y = 0;
}
}
break;
case 's':
/* swap the cameras over */
{
CvCapture *temp = left_camera;
left_camera = right_camera;
right_camera = temp;
}
break;
case '.':
/* save the current image pair */
{
IplImage *images[] = {left_frame,right_frame};
writeImagePair(images,"sna p");
printf("SNAPSHOT!\n");
}
break;
case '1':
/* calibrate from file */
{
fCalibMethod = CALIB_FILE;
}
break;
case '2':
/* calibrate from bitmaps */
{
fCalibMethod = CALIB_BMP;
}
break;
case '3':
/* calibrate from the live cameras */
{
fCalibMethod = CALIB_LIVE;
}
break;
}
/* now that we've trapped all of the user key strokes, */
/* we dispatch the two frames and the current mode to */
/* the correct bits */
tkDespatch(left_frame,right_frame,fRunMode);
/* combine the two images for display and overlay some */
/* text to describe to the user what is going on. */
if(display==NULL) display = cvCreateImage(cvSize(left_frame->width+right_frame->width, left_frame->height+15),
left_frame->depth, left_frame->nChannels);
combine(left_frame,right_frame,display,fRunMode);
cvvShowImage("Output",display);
}
/* if we've got this far, the user has selected to quit, so */
/* release all of the stuff we've allocated */
cvReleaseCapture(&left_camera);
cvReleaseCapture(&right_camera);
cvReleaseImage(&display);
return 0;
}
void tkDespatch(IplImage * left, IplImage * right, int mode){
struct tCFeature * pMatches;
struct tCFeature * pTempMatch;
char out_text[2048];
FILE * fp;
switch(mode){
case MODE_CALIBRATE_INT:case MODE_CALIBRATE_EXT:
{
tkCalibrate(left,right,mode);
}
break;
case MODE_RECONSTRUCT_SAMPLE:
{
if(fGenColMap){
if(tkGenerateColMap(left,colmap_x,colmap_y))
fRunMode = MODE_RECONSTRUCT_RUN;
else{
fGenColMap = false;
fRunMode = MODE_RECONSTRUCT_SAMPLE;
}
}
}
break;
case MODE_RECONSTRUCT_RUN:
{
pMatches = tkCorrespond(tkSegment(left),tkSegment(right));
if(pMatches!=NULL){
pTempMatch = pMatches;
while(pTempMatch!=NULL){
tkReconstruct(pTempMatch,out_text);
pMatches = pTempMatch->pNext;
free(pTempMatch);
pTempMatch = pMatches;
}
fp = fopen(out_file,"a");
fputs(out_text,fp);
fputs("\n",fp);
fclose(fp);
out_text[0] = '\0';
}
}
break;
}
}
void tkLMseHandler(int event,int x, int y, int flags){
if(event==CV_EVENT_LBUTTONDOWN && fRunMode==MODE_RECONSTRUCT_SAMPLE){
fGenColMap = true;
colmap_x = x; colmap_y=y;
}
}
///////////////////////////////////////////////////////////////////////////////
/*
Aims to fully generate a calibrated stereo system in order to reconstruct
the 3-D position. The OpenCV class CvCalibFilter handles the intrinsic calibration.
Once these values have been set, each frame is passed to the FindEtalon() method.

After the intrinsic calculation, IsCalibrated() reports success. Following
successful calibration, the intrinsic parameters are written to a text file.
*/
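In outline, the intrinsic part boils down to the following CvCalibFilter calls (a stripped-down sketch of the calls used in calibrate.cpp below; left/right stand for the current camera frames):

CvCalibFilter calib;
double etalonParams[3] = {8,6,3.3};                /* chessboard dimensions and square size, as in main.cpp */
calib.SetEtalon(CV_CALIB_ETALON_CHESSBOARD,etalonParams);
calib.SetCameraCount(2);
calib.SetFrames(20);
/* for every stereo frame pair: */
IplImage * frames[2] = { left, right };
if(calib.FindEtalon(frames))                       /* chessboard found in both views? */
      calib.Push();                                /* accumulate the detected corners */
if(calib.IsCalibrated())                           /* enough frame pairs collected */
      calib.SaveCameraParams("./new-calib/intcalib.dat");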
calibrate.h
/* Header file for calibration component of tracking system */
void tkGetCameraParams(CvCamera * dest,int src);
void tkCalibrate(IplImage * left, IplImage * right, int mode);
calibrate.cpp
#include "global.h"
#include <time.h>
#include "calibrate.h"
#include "library.h"
/* the calibration class */
extern CvCalibFilter clCalibration;
/* variables for the two cameras */
extern CvCamera * clLeftCamera;
extern CvCamera * clRightCamera;
/* some status flags */
extern bool fIntrinsicDone;
extern bool fExtrinsicDone;
extern int fCalibMethod;
extern int iCalibPrevFrame;
extern int fRunMode;
void tkCalibrate(IplImage * left, IplImage * right, int mode){
IplImage *images[] = {left,right};
if(mode==MODE_CALIBRATE_INT){
switch(fCalibMethod)
{
case CALIB_FILE:
{
//Do it from file
printf("\nLoading calibration data from file 'intcalib.dat'...\n");
clCalibration.LoadCameraParams("./pre-calib/intcalib.dat");
if(clCalibration.IsCalibrated()){
clLeftCamera = (CvCamera *)calloc(1,sizeof(struct CvCamera));
tkGetCameraParams(clLeftCamera,0);
clRightCamera = (CvCamera *)calloc(1,sizeof(struct CvCamera));
tkGetCameraParams(clRightCamera,1);
printf("\nLEFT >> Focal Length:[%f,%f]\n",clLeftCamera->matrix[0],clLeftCamera->matrix[4]);
printf("LEFT >> Centre Point:[%f,%f]\n",clLeftCamera->matrix[2],clLeftCamera->matrix[5]);
printf("RIGHT >> Focal Length:[%f,%f]\n",clRightCamera->matrix[0],clRightCamera->matrix[4]);
printf("RIGHT >> Centre Point:[%f,%f]\n\n",clRightCamera->matrix[2],clRightCamera->matrix[5]);
fIntrinsicDone = true;
fCalibMethod = CALIB_UNSET;
}
else {
printf("Calibration failed...unable to locate parameter file\n");
fIntrinsicDone = false;
fCalibMethod = CALIB_UNSET;
fRunMode = MODE_INIT;
}
break;
}
case CALIB_BMP:
{
//Do it from saved bitmaps
int i=0;
char filename[80];
printf("Calibration from saved bitmaps... [");
while(!clCalibration.IsCalibrated() && i<20)
{
sprintf(filename,"./pre-calib/calib_%i-imgL.bmp",i);
images[0] = cvLoadImage(filename);
sprintf(filename,"./pre-calib/calib_%i-imgR.bmp",i);
images[1] = cvLoadImage(filename);
if(images[0]!=NULL && images[1]!=NULL){
if(clCalibration.FindEtalon(images))
{
printf("#");
clCalibration.Push();
if(clCalibration.IsCalibrated()){
fIntrinsicDone=true;
fCalibMethod = CALIB_UNSET;
printf("]\nIntrinsic parameters now found...\n");
clLeftCamera = (CvCamera *)calloc(1,sizeof(struct CvCamera));
tkGetCameraParams(clLeftCamera,0);
clRightCamera = (CvCamera *)calloc(1,sizeof(struct CvCamera));
tkGetCameraParams(clRightCamera,1);
printf("\nLEFT >> Focal Length:[%f,%f]\n",clLeftCamera->matrix[0],clLeftCamera->matrix[4]);
printf("LEFT >> Centre Point:[%f,%f]\n",clLeftCamera->matrix[2],clLeftCamera->matrix[5]);
printf("RIGHT >> Focal Length:[%f,%f]\n",clRightCamera->matrix[0],clRightCamera->matrix[4]);
printf("RIGHT >> Centre Point:[%f,%f]\n\n",clRightCamera->matrix[2],clRightCamera->matrix[5]);
clCalibration.SaveCameraParams("./newcalib/intcalib.dat");
printf("\nIntrinsic parameters written to file 'intcalib.dat'...\n");
}
}
cvReleaseImage(&images[0]);
cvReleaseImage(&images[1]);
}
i++;
}
if(!clCalibration.IsCalibrated()){
printf("]...failed!\n");
clCalibration.Stop();
fIntrinsicDone = false;
fCalibMethod = CALIB_UNSET;
fRunMode = MODE_INIT;
}
break;
}
case CALIB_LIVE:
{
bool found = clCalibration.FindEtalon(images);
if(!found) clCalibration.DrawPoints(images);
else{
char filename[30];
int cur_time = clock();
if(cur_time >= iCalibPrevFrame + 1000){
int imgs = clCalibration.GetFrameCount();
if(imgs==0)printf("Calibration from live cameras beginning ...[");
printf("#");
sprintf(filename,"./new-calib/calib_%i",imgs);
writeImagePair(images,filename);
iCalibPrevFrame = cur_time;
clCalibration.Push();
cvXorS(left,cvScalarAll(255),left);
cvXorS(right,cvScalarAll(255),right);
}
if(clCalibration.IsCalibrated()){
fIntrinsicDone = true;
fCalibMethod = CALIB_UNSET;
printf("]\nIntrinsic parameters now found...\n");
clLeftCamera = (CvCamera *)calloc(1,sizeof(struct CvCamera));
tkGetCameraParams(clLeftCamera,0);
clRightCamera = (CvCamera *)calloc(1,sizeof(struct CvCamera));
tkGetCameraParams(clRightCamera,1);
printf("\nLEFT >> Focal Length:[%f,%f]\n",clLeftCamera->matrix[0],clLeftCamera->matrix[4]);
printf("LEFT >> Centre Point:[%f,%f]\n",clLeftCamera->matrix[2],clLeftCamera->matrix[5]);
printf("RIGHT >> Focal Length:[%f,%f]\n",clRightCamera->matrix[0],clRightCamera->matrix[4]);
printf("RIGHT >> Centre Point:[%f,%f]\n\n",clRightCamera->matrix[2],clRightCamera->matrix[5]);
clCalibration.SaveCameraParams("./new-calib/intcalib.dat");
printf("\nIntrinsic parameters written to file 'intcalib.dat'...\n");
}
}
break;
}
}
}
else if(mode==MODE_CALIBRATE_EXT){
CvCalibFilter tempCalib;
double dEtalonParams[3] = {8,6,3.3};
int i=0,j=0,count = 0;
bool found = false;
float focalLength[2];
float rotVect[3];
float jacobian[3*9];
CvMat jacmat,vecmat,rotMatr;
FILE * fp;
CvPoint2D32f* pts = (CvPoint2D32f *) NULL;
CvPoint2D32f gloImgPoints[2][EXT_REQ_POINTS];
CvPoint3D32f gloWldPoints[EXT_REQ_POINTS];
tempCalib.SetEtalon(CV_CALIB_ETALON_CHESSBOARD,dEtalonParams);
tempCalib.SetCameraCount(2);
found = tempCalib.FindEtalon(images);
if(!found) tempCalib.DrawPoints(images);
if(found){
writeImagePair(images,"./new-calib/ext_cal");
//Populate the gloImgPoints and gloWrldPoints arrays
//Load the world points from the text file.
if((fp=fopen("worldpoints. txt", "r"))==NULL)
{
printf("Unable to open worldpoints.txt\n");
fRunMode=MODE_CALIBRATE_INT;
return;
}
for(i=0;i<35;i++)
fscanf (fp,"%f,%f,%f
",&gloWldPoints[i].x,&gloW ldPoints[i ].y,&gloWl dPoints[i] .z);
fclose(fp);
printf("done file!\n");
//Populate the image points array
for(i=0;i<2;i++){
tempCalib.GetLatestPoints( i, &pts, &count, &found);
if(pts[0].x < pts[5].x){
//array is sorted correctly
for(j=0;j<EXT_REQ_POINTS;j++){
gloImgPoints[i][j].x = pts[j].x;
gloImgPoints[i][j].y = pts[j].y;
printf("#");
}
}
else {
//array is not right, swap it around...
for(j=0;j<EXT_REQ_POINTS;j++){
gloImgPoints[i][j].x = pts[count-j-1].x;
gloImgPoints[i][j].y = pts[count-j-1].y;
}
}
printf("\n");
}
printf("Calibration using %i points....\n",EXT_REQ_POIN TS);
for(i=0;i<EXT_REQ_POINTS;i ++){
printf("[P%i]",i);
printf("\tL [%3.1f,%3.1f]\tR [%3.1f,%3.1f]\tW
[%3.1f,%3.1f,%3.1f]\n",glo ImgPoints[ SRC_LEFT_C AMERA][i]. x,
gloImgPoints[SRC_LEFT_CAME RA][i].y,
gloImgPoints[SRC_RIGHT_CAM ERA][i].x,
gloImgPoints[SRC_RIGHT_CAM ERA][i].y,
gloWldPoints[i].x,
gloWldPoints[i].y,
gloWldPoints[i].z);
}
printf("\n");
focalLength[0] = clLeftCamera->matrix[0];
focalLength[1] = clLeftCamera->matrix[5];
cvFindExtrinsicCameraParams(EXT_REQ_POINTS,
cvSize(cvRound(clLeftCamera->imgSize[0]),cvRound(clLeftCamera->imgSize[1])),
&gloImgPoints[0][0],
&gloWldPoints[0],
focalLength,
cvPoint2D32f(clLeftCamera->matrix[3],clLeftCamera->matrix[6]),
&clLeftCamera->distortion[0],
&rotVect[0],
&clLeftCamera->transVect[0]);
rotMatr = cvMat( 3, 3, CV_MAT32F, clLeftCamera->rotMatr );
jacmat = cvMat( 3, 9, CV_MAT32F, jacobian );
vecmat = cvMat( 3, 1, CV_MAT32F, rotVect );
cvRodrigues( &rotMatr, &vecmat, &jacmat, CV_RODRIGUES_V2M );
printf("LEFT >> Rot :[%f | %f | %f]\n",clLeftCamera->rotMatr[0],clLeftCamera->rotMatr[1],clLeftCamera->rotMatr[2]);
printf("LEFT >> Trans :[%f | %f | %f]\n",clLeftCamera->transVect[0],clLeftCamera->transVect[1],clLeftCamera->transVect[2]);
focalLength[0] = clRightCamera->matrix[0];
focalLength[1] = clRightCamera->matrix[5];
cvFindExtrinsicCameraParams(EXT_REQ_POINTS,
cvSize(cvRound(clRightCamera->imgSize[0]),cvRound(clRightCamera->imgSize[1])),
&gloImgPoints[1][0],
&gloWldPoints[0],
focalLength,
cvPoint2D32f(clRightCamera->matrix[3],clRightCamera->matrix[6]),
&clRightCamera->distortion[0],
&rotVect[0],
&clRightCamera->transVect[0]);
rotMatr = cvMat( 3, 3, CV_MAT32F, clRightCamera->rotMatr );
jacmat = cvMat( 3, 9, CV_MAT32F, jacobian );
vecmat = cvMat( 3, 1, CV_MAT32F, rotVect );
cvRodrigues( &rotMatr, &vecmat, &jacmat, CV_RODRIGUES_V2M );
printf("RIGHT >> Rot :[%f | %f | %f]\n",clRightCamera->rotMatr[0],clRightCamera->rotMatr[1],clRightCamera->rotMatr[2]);
printf("RIGHT >> Trans :[%f | %f | %f]\n",clRightCamera->transVect[0],clRightCamera->transVect[1],clRightCamera->transVect[2]);
printf("\n *** CALIBRATED! ***\n");
fRunMode = MODE_RECONSTRUCT_SAMPLE;
fExtrinsicDone = true;
}
}
}
void tkGetCameraParams(CvCamera * dest,int src){
const CvCamera * temp_camera = clCalibration.GetCameraParams(src);
dest->distortion[0] = temp_camera->distortion[0];
dest->distortion[1] = temp_camera->distortion[1];
dest->distortion[2] = temp_camera->distortion[2];
dest->distortion[3] = temp_camera->distortion[3];
dest->imgSize[0] = temp_camera->imgSize[0];
dest->imgSize[1] = temp_camera->imgSize[1];
dest->matrix[0] = temp_camera->matrix[0];
dest->matrix[1] = temp_camera->matrix[1];
dest->matrix[2] = temp_camera->matrix[2];
dest->matrix[3] = temp_camera->matrix[3];
dest->matrix[4] = temp_camera->matrix[4];
dest->matrix[5] = temp_camera->matrix[5];
dest->matrix[6] = temp_camera->matrix[6];
dest->matrix[7] = temp_camera->matrix[7];
dest->matrix[8] = temp_camera->matrix[8];
dest->rotMatr[0] = temp_camera->rotMatr[0];
dest->rotMatr[1] = temp_camera->rotMatr[1];
dest->rotMatr[2] = temp_camera->rotMatr[2];
dest->rotMatr[3] = temp_camera->rotMatr[3];
dest->rotMatr[4] = temp_camera->rotMatr[4];
dest->rotMatr[5] = temp_camera->rotMatr[5];
dest->rotMatr[6] = temp_camera->rotMatr[6];
dest->rotMatr[7] = temp_camera->rotMatr[7];
dest->rotMatr[8] = temp_camera->rotMatr[8];
dest->transVect[0] = temp_camera->transVect[0];
dest->transVect[1] = temp_camera->transVect[1];
dest->transVect[2] = temp_camera->transVect[2];
}
///////////////////////////////////////////////////////////////////////////////
/*
Identifies the features in each of the images in order to pass them on to the
Correspondence component for matching.

Consists of a colour segmentation followed by a circle detection.
*/
segmentation.h
/* image segmentation */
/* how large an area to sample from around the click */
#define SAMPLE_SIZE 20
/* the threshhold window around the colour to be matched */
#define THRESH_BAND 60
bool tkGenerateColMap(IplImage * input,int x,int y);
bool segColMapSimpleAve(IplImage * input, int x, int y);
bool segColMapRThresh(IplImage * input, int x, int y);
struct t2DPoint * tkSegment(IplImage * input);
struct t2DPoint * segFindFeatures_Ave(IplImage * input);
struct t2DPoint * segFindFeatures_RThresh(IplImage * input);
struct t2DPoint * findCircles(IplImage * input,IplImage * draw);
segmentation.cpp
#include "global.h"
#include "segmentation.h"
#include "library.h"
int thresh_r;
int thresh_g;
int thresh_b;
char RchnThresh;
extern bool colmapdone;
bool tkGenerateColMap(IplImage * input,int x,int y){
if(segColMapSimpleAve(input,x,y)) return true;
else return false;
}
struct t2DPoint * tkSegment(IplImage * input){
return segFindFeatures_Ave(input);
}
bool segColMapSimpleAve(IplImage * input, int x, int y){
char sample[SAMPLE_SIZE][SAMPLE_SIZE][3];
int row,col,chan,v;
int norm_total;
int left = x - (SAMPLE_SIZE/2);
int right = x + (SAMPLE_SIZE/2);
int top = y - (SAMPLE_SIZE/2);
int bottom = y + (SAMPLE_SIZE/2);
/* First, lets duplicate the bit we want to sample */
for(row=top;row<bottom;row++){
for(col=left;col<right;col++){
sample[row-top][col-left][C_RED] = input->imageData[(row*input->widthStep)+(col*3)+C_RED];
sample[row-top][col-left][C_BLUE] = input->imageData[(row*input->widthStep)+(col*3)+C_BLUE];
sample[row-top][col-left][C_GREEN] = input->imageData[(row*input->widthStep)+(col*3)+C_GREEN];
}
}
/* Now we'll normalise the sample */
for(row=0;row<SAMPLE_SIZE;row++){
for(col=0;col<SAMPLE_SIZE;col++){
norm_total = sample[row][col][C_RED] + sample[row][col][C_BLUE] + sample[row][col][C_GREEN];
sample[row][col][C_RED] = (int)((float)sample[row][col][C_RED]/(float)norm_total*255.0);
sample[row][col][C_BLUE] = (int)((float)sample[row][col][C_BLUE]/(float)norm_total*255.0);
sample[row][col][C_GREEN] = (int)((float)sample[row][col][C_GREEN]/(float)norm_total*255.0);
}
}
/* Now let's smooth it a bit */
for(row=0;row<SAMPLE_SIZE;row++){
for(col=0;col<SAMPLE_SIZE;col++){
for(chan=0;chan<3;chan++){
v = sample[row][col][chan];
if(v>0){
v -= 1;
sample[row][col][chan] = v;
if(row>0 && sample[row-1][col][chan] < v) sample[row-1][col][chan] = v;
if(col>0 && sample[row][col-1][chan] < v) sample[row][col-1][chan] = v;
}
}
}
}
for(row=0;row<SAMPLE_SIZE;row++){
for(col=0;col<SAMPLE_SIZE;col++){
for(chan=0;chan<3;chan++){
v = sample[row][col][chan];
if(v>0){
v -= 1;
sample[row][col][chan] = v;
if(row < SAMPLE_SIZE-1 && sample[row+1][col][chan] < v) sample[row+1][col][chan] = v;
if(col < SAMPLE_SIZE-1 && sample[row][col+1][chan] < v) sample[row][col+1][chan] = v;
}
}
}
}
/*Now find the average*/
thresh_r = sample[0][0][C_RED];
thresh_b = sample[0][0][C_BLUE];
thresh_g = sample[0][0][C_GREEN];
for(row=0;row<SAMPLE_SIZE;row++){
for(col=0;col<SAMPLE_SIZE;col++){
thresh_r = (thresh_r+sample[row][col][C_RED])/2;
thresh_b = (thresh_b+sample[row][col][C_BLUE])/2;
thresh_g = (thresh_g+sample[row][col][C_GREEN])/2;
}
}
printf("\nTHRESHHOLDS: R:%i G:%i B:%i\n",thresh_r,thresh_g,thresh_b);
return true;
}
struct t2DPoint * segFindFeatures_Ave(IplImage * input){
IplImage * temp;
struct t2DPoint * features;
int i,j;
int norm_r=0,norm_g=0,norm_b=0;
int low_r=0,low_g=0,low_b=0;
int high_r=0,high_g=0,high_b=0;
int norm_total=0,in_total=0;
char * pixel_in = (char *)NULL;
char * pixel_out = (char *)NULL;
/* Clone the input image! */
temp = cvCloneImage(input);
removeNoise(input);
removeNoise(input);
/* Now work out what the normalised boundaries are from
the specified r,g,b values */
in_total = thresh_r+thresh_g+thresh_b;
if(in_total!=0){
low_r = (int)(((float)thresh_r / (float)in_total)*255.0);
low_g = (int)(((float)thresh_g / (float)in_total)*255.0);
low_b = (int)(((float)thresh_b / (float)in_total)*255.0);
high_r = low_r + THRESH_BAND;
high_g = low_g + THRESH_BAND;
high_b = low_b + THRESH_BAND;
low_r -= THRESH_BAND;
low_g -= THRESH_BAND;
low_b -= THRESH_BAND;
}
else {
low_r = 0;
low_g = 0;
low_b = 0;
high_r = THRESH_BAND;
high_g = THRESH_BAND;
high_b = THRESH_BAND;
}
for(i=0;i<input->height;i++){
for(j=0;j<(input->widthStep/3);j++){
pixel_in = &input->imageData[(i*input->widthStep)+(j*3)];
pixel_out = &temp->imageData[(i*input->widthStep)+(j*3)];
norm_total = (int)pixel_in[C_RED] + (int)pixel_in[C_GREEN] + (int)pixel_in[C_BLUE];
if(norm_total!=0){
norm_r = (int)(((float)pixel_in[C_RED] / (float)norm_total)*255.0);
norm_g = (int)(((float)pixel_in[C_GREEN] / (float)norm_total)*255.0);
norm_b = (int)(((float)pixel_in[C_BLUE] / (float)norm_total)*255.0);
}
else if(norm_total==0){
norm_r=0;norm_g=0;norm_b=0;
}
if(norm_r >= low_r && norm_r <= high_r &&
norm_g >= low_g && norm_g <= high_g &&
norm_b >= low_b && norm_b <= high_b){
pixel_out[C_RED] = (char)0;
pixel_out[C_GREEN] = (char)0;
pixel_out[C_BLUE] = (char)0;
}
else {
pixel_out[C_RED] = (char)255;
pixel_out[C_GREEN] = (char)255;
pixel_out[C_BLUE] = (char)255;
}
}
}
features = findCircles(temp,input);
cvReleaseImage(&temp);
return features;
}
bool segColMapRThresh(IplImage * input, int x, int y){
char * pixel;
int left = x - 2;
int right = x + 2;
int top = y - 2;
int bottom = y + 2;
int row,col;
if(input->depth == IPL_DEPTH_8U) printf ("IPL_DEPTH_8U\n");
if (input->depth == IPL_DEPTH_8S) printf ("IPL_DEPTH_8S\n");
if (input->depth == IPL_DEPTH_16S) printf ("IPL_DEPTH_16S\n");
if (input->depth == IPL_DEPTH_32S) printf ("IPL_DEPTH_32S\n");
if (input->depth == IPL_DEPTH_32F) printf ("IPL_DEPTH_32F\n");
if (input->depth == IPL_DEPTH_64F) printf ("IPL_DEPTH_64F\n");
for(row=top;row<bottom;row++){
for(col=left;col<right;col++){
pixel = &input->imageData[(input->widthStep*row)+(col*3)];
if(col==left && row==top){
RchnThresh = pixel[C_RED];
}
else{
RchnThresh = (RchnThresh/2) + (pixel[C_RED]/2);
}
printf("%c",pixel[C_RED]);
}
printf("\n");
}
return true;
}
struct t2DPoint * segFindFeatures_RThresh(IplImage * input){
IplImage * threshed = cvCloneImage(input);
struct t2DPoint * features;
int i,j;
char * pixel_in = (char *)NULL;
char * pixel_out = (char *)NULL;
for(i=0;i<input->height;i++){
for(j=0;j<(input->widthStep/3);j++){
pixel_in = &input->imageData[(i*input->widthStep)+(j*3)];
pixel_out = &threshed->imageData[(i*threshed->widthStep)+(j*3)];
if((pixel_in[C_RED] > (RchnThresh - 50)) &&
(pixel_in[C_RED] < (RchnThresh + 20)) &&
(pixel_in[C_GREEN] < (char)50) &&
(pixel_in[C_BLUE] < (char)110)){
pixel_out[C_RED] = (char)0;
pixel_out[C_BLUE] = (char)0;
pixel_out[C_GREEN] = (char)0;
}
else {
pixel_out[C_RED] = (char)255;
pixel_out[C_BLUE] = (char)255;
pixel_out[C_GREEN] = (char)255;
}
}
}
features = findCircles(threshed,input);
cvReleaseImage(&threshed);
return features;
}
struct t2DPoint * findCircles(IplImage * input,IplImage * draw){
CvMemStorage * storage;
CvSeq * contour;
CvBox2D * box;
CvPoint * pointArray;
CvPoint2D32f * pointArray32f;
CvPoint center;
float myAngle,ratio;
int i,header_size,count,length,width;
IplImage * gray_input = cvCreateImage(cvGetSize(input),IPL_DEPTH_8U,1);
struct t2DPoint * markers = (struct t2DPoint *)NULL;
struct t2DPoint * temppt = (struct t2DPoint *)NULL;
//Convert the input image to grayscale.
cvCvtColor(input,gray_input,CV_RGB2GRAY);
//Remove noise and smooth
removeNoise(gray_input);
//Edge detect the image with Canny algorithm
cvCanny(gray_input,gray_input,25,150,3);
//Allocate memory
box = (CvBox2D *)malloc(sizeof(CvBox2D));
header_size = sizeof(CvContour);
storage = cvCreateMemStorage(1000);
// Find all the contours in the image.
cvFindContours(gray_input,storage,&contour,header_size,CV_RETR_EXTERNAL,CV_CHAIN_APPROX_TC89_KCOS);
while(contour!=NULL)
{
if(CV_IS_SEQ_CURVE(contour))
{
count = contour->total;
pointArray = (CvPoint *)malloc(count * sizeof(CvPoint));
cvCvtSeqToArray(contour,pointArray,CV_WHOLE_SEQ);
pointArray32f = (CvPoint2D32f *)malloc((count + 1) * sizeof(CvPoint2D32f));
for(i=0;i<count-1;i++){
pointArray32f[i].x = (float)(pointArray[i].x);
pointArray32f[i].y = (float)(pointArray[i].y);
}
pointArray32f[i].x = (float)(pointArray[0].x);
pointArray32f[i].y = (float)(pointArray[0].y);
if(count>7){
cvFitEllipse(pointArray32f,count,box);
ratio = (float)box->size.width/(float)box->size.height;
center.x = (int)box->center.x;
center.y = (int)box->center.y;
length = (int)box->size.height;
width = (int)box->size.width;
myAngle = box->angle;
if((center.x>0) && (center.y>0)){
temppt = (struct t2DPoint *)malloc(sizeof(struct t2DPoint));
temppt->x = center.x;
temppt->y = center.y;
temppt->size = length;
temppt->next = markers;
temppt->previous = (struct t2DPoint *)NULL;
if(markers!=NULL) markers->previous = temppt;
markers = temppt;
if(draw!=NULL) cvCircle(draw,center,(int)length/2,RGB(0,0,255),-1);
/*cvEllipse(input,
center,
cvSize((int)width/2,(int)length/2),
-box->angle,
0,
360,
RGB(0,255,0),
1);*/
}
}
free(pointArray32f);
free(pointArray);
}
contour = contour->h_next;
}
free(contour);
free(box);
cvReleaseImage(&gray_input);
cvReleaseMemStorage(&storage);
return markers;
}
///////////////////////////////////////////////////////////////////////////////
/*
Takes two linked lists of image features representing the features
found in the left and right images as chosen by segmentation.cpp
& returns a list of matched features.
*/
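The matching rule used below is a mutual nearest-neighbour check; its core is roughly this (a sketch of the idea only, where l is the current left feature):

struct t2DPoint * r = corrSimple_1(l, right);      /* best right-hand candidate for l */
if(r != NULL && corrSimple_1(r, left) == l){
      /* mutual best match: record the pair (l,r) and unlink both features */
}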
correspondence.h
/* correspondence */
struct tCFeature * tkCorrespond(struct t2DPoint * left_features,struct t2DPoint * right_features);
struct tCFeature * corrSimple(struct t2DPoint * left, struct t2DPoint * right);
struct t2DPoint * corrSimple_1(struct t2DPoint * marker, struct t2DPoint * scene);
correspondence.cpp
#include "global.h"
#include "correspondence.h"
struct tCFeature * tkCorrespond(struct t2DPoint * left_features,struct t2DPoint * right_features){
if(left_features!=NULL && right_features!=NULL)
return corrSimple(left_features,right_features);
else
return NULL;
}
struct tCFeature * corrSimple(struct t2DPoint * left, struct t2DPoint * right){
struct t2DPoint * t_left = left;
struct t2DPoint * t_right = right;
struct t2DPoint * match4left = (struct t2DPoint *)NULL;
struct t2DPoint * match4right = (struct t2DPoint *)NULL;
struct t2DPoint * temp_del = (struct t2DPoint *)NULL;
struct tCFeature * result = (struct tCFeature *)NULL;
struct tCFeature * temp_match = (struct tCFeature *)NULL;
bool match = false;
while(t_left!=NULL){
match = false;
match4left = corrSimple_1(t_left,right);
if(match4left!=NULL){
match4right = corrSimple_1(match4left,left);
if(t_left==match4right){
match = true;
//printf("MATCHED! L[%i,%i] R[%i,%i]\n",t_left->x,t_left->y,match4left->x,match4left->y);
temp_match = (struct tCFeature *)malloc(sizeof(struct tCFeature));
temp_match->iLeft[0] = t_left->x;
temp_match->iLeft[1] = t_left->y;
temp_match->iRight[0] = match4left->x;
temp_match->iRight[1] = match4left->y;
temp_match->pNext = result;
result = temp_match;
//Delete the element from the left list
temp_del = t_left;
t_left=t_left->next;
if(temp_del->previous==NULL){
left = temp_del->next;
}
else {
temp_del->previous->next = temp_del->next;
}
if(temp_del->next!=NULL){
temp_del->next->previous = temp_del->previous;
}
free(temp_del);
//Now do the element from the right list
temp_del = match4left;
if(temp_del->previous==NULL){
right = temp_del->next;
}
else {
temp_del->previous->next = temp_del->next;
}
if(temp_del->next!=NULL){
temp_del->next->previous = temp_del->previous;
}
free(temp_del);
}
}
else{
//the right list is empty - all matched!
//break out!
match=true;
t_left = NULL;
}
//We should only increment the pointer if there hasn't been a match, otherwise
//all hell will break loose!
if(!match)t_left = t_left->next;
}
//if(left!=NULL) printf("Not all markers matched in the left frame\n");
//if(right!=NULL) printf("Not all markers matched in the right frame\n");
return result;
}
struct t2DPoint * corrSimple_1(struct t2DPoint * marker, struct t2DPoint * scene){
struct t2DPoint * temp_pt = scene;
double temp_dist = 0;
struct t2DPoint * best_pt = (struct t2DPoint *)NULL;
double best_dist = 1000000000;
while(temp_pt!=NULL){
if(abs(temp_pt->size-marker->size)<=(marker->size/5)){
temp_dist = sqrt(((abs(marker->x)-abs(temp_pt->x)) * (abs(marker->x)-abs(temp_pt->x)))
+((abs(marker->y)-abs(temp_pt->y)) * (abs(marker->y)-abs(temp_pt->y))));
if(temp_dist<best_dist){
best_dist = temp_dist;
best_pt = temp_pt;
}
}
temp_pt = temp_pt->next;
}
return best_pt;
}
///////////////////////////////////////////////////////////////////////////////
/*
Involves a significant amount of matrix manipulation, and to avoid duplicating
code, a file matlib.cpp was also written. This is a minimal matrix library
which implements only the functionality needed for this project; no external
libraries were used.
See matlib.cpp
*/
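matlib.h / matlib.cpp are not pasted in this post; in essence the code below only relies on something like the following (a rough summary only, the real header differs in detail):

/* rough shape of the matrix type, inferred from how it is used below */
#define X 0
#define Y 1
#define Z 2
typedef struct tMatrix {
      int rows;
      int cols;
      double * matrix;                             /* rows*cols values, row-major */
} tMatrix;
struct tMatrix * matInit(int rows,int cols,double * data);   /* data==NULL gives a zero matrix */
void matRelease(struct tMatrix * m);
struct tMatrix * matMultiply(struct tMatrix * a,struct tMatrix * b);
struct tMatrix * matCrossProd(struct tMatrix * a,struct tMatrix * b);
void matNorm(struct tMatrix * v);
double matDet(struct tMatrix * m);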
reconstruct.h
//3D Reconstruction library header
#define PI 3.14159
void tkReconstruct(struct tCFeature * object,char * output);
struct tMatrix * reconstruct(struct camera_ray * left, struct camera_ray * right);
struct tMatrix * generateRotMat(double x_rot, double y_rot, double z_rot);
struct tMatrix * defPlane(struct tMatrix * point, struct tMatrix * vec1, struct tMatrix * vec2);
void pix2mm(struct camera_ray *);
double calcBeta(struct tMatrix * trans, struct tMatrix * vec, struct tMatrix * iv, struct tMatrix * ip);
struct tMatrix * genIntersect(struct tMatrix * trans, struct tMatrix * vec, double beta);
reconstruct.cpp
#include "global.h"
#include "matlib.h"
#include "reconstruct.h"
/* variables for the two cameras */
extern CvCamera * clLeftCamera;
extern CvCamera * clRightCamera;
void tkReconstruct(struct tCFeature * object,char * output){
struct camera_ray * leftcam_ray = (struct camera_ray *)malloc(sizeof(struct camera_ray));
struct camera_ray * rightcam_ray = (struct camera_ray *)malloc(sizeof(struct camera_ray));
struct tMatrix * result = (struct tMatrix *)NULL;
char outstring[256];
leftcam_ray->cam_tran[0] = 505.0;
leftcam_ray->cam_tran[1] = 485.0;
leftcam_ray->cam_tran[2] = 1000.0;
leftcam_ray->cam_rot = 0.0;
leftcam_ray->pixelsize[0] = (float)15.0/(float)352.0;
leftcam_ray->pixelsize[1] = (float)12.0/(float)288.0;
leftcam_ray->vector[0] = (float)175 - (float)object->iLeft[0];
leftcam_ray->vector[1] = (float)140 - (float)object->iLeft[1];
leftcam_ray->vector[2] = (float)clLeftCamera->matrix[0];
rightcam_ray->cam_tran[0] = 235.0;
rightcam_ray->cam_tran[1] = 485.0;
rightcam_ray->cam_tran[2] = 1000.0;
rightcam_ray->cam_rot = 0.0;
rightcam_ray->pixelsize[0] = (float)15.0/(float)352.0;
rightcam_ray->pixelsize[1] = (float)12.0/(float)288.0;
rightcam_ray->vector[0] = (float)175 - (float)object->iRight[0];
rightcam_ray->vector[1] = (float)140 - (float)object->iRight[1];
rightcam_ray->vector[2] = (float)clRightCamera->matrix[0];
result = reconstruct(leftcam_ray,rightcam_ray);
//printf("OBJECT LOCATED AT:\n");
//matPrint(result);
sprintf(outstring,"[P::%.1 f,%.1f,%.1 f]",result ->matrix[0 ],result-> matrix[1], result->ma trix[2]);
strcat(output,outstring);
}
struct tMatrix * reconstruct(struct camera_ray * left, struct camera_ray * right){
struct tMatrix * vl_norm = (struct tMatrix *)NULL;
struct tMatrix * vr_norm = (struct tMatrix *)NULL;
struct tMatrix * rl = generateRotMat(0.0,left->cam_rot,0.0);
struct tMatrix * rr = generateRotMat(0.0,right->cam_rot,0.0);
struct tMatrix * tl = matInit(3,1,left->cam_tran);
struct tMatrix * tr = matInit(3,1,right->cam_tran);
struct tMatrix * vl_prime = (struct tMatrix *)NULL;
struct tMatrix * vr_prime = (struct tMatrix *)NULL;
struct tMatrix * axis = (struct tMatrix *)NULL;
struct tMatrix * l_plane = (struct tMatrix *)NULL;
struct tMatrix * r_plane = (struct tMatrix *)NULL;
struct tMatrix * lp_norm = (struct tMatrix *)NULL;
struct tMatrix * rp_norm = (struct tMatrix *)NULL;
struct tMatrix * iv = (struct tMatrix *)NULL;
struct tMatrix * ip = (struct tMatrix *)NULL;
struct tMatrix * l_int = (struct tMatrix *) NULL;
struct tMatrix * r_int = (struct tMatrix *) NULL;
struct tMatrix * object = (struct tMatrix *) NULL;
double l_beta = 0.0,r_beta = 0.0;
//Generate normalised left vector
pix2mm(left);
vl_norm = matInit(3,1,left->vector);
matNorm(vl_norm);
//Generate normalised right vector
pix2mm(right);
vr_norm = matInit(3,1,right->vector);
matNorm(vr_norm);
//Correct the image vectors for camera rotation.
vl_prime = matMultiply(rl,vl_norm);
vr_prime = matMultiply(rr,vr_norm);
//Generate the "axis" vector
axis = matCrossProd(vl_prime,vr_prime);
//Generate the equations for the planes
l_plane = defPlane(tl,vl_prime,axis);
r_plane = defPlane(tr,vr_prime,axis);
//Generate left and right plane normals
lp_norm = matInit(3,1,l_plane->matrix);
rp_norm = matInit(3,1,r_plane->matrix);
//Generate the intersection vector and point
iv = matCrossProd(lp_norm,rp_norm);
matNorm(iv);
ip = matInit(3,1,NULL);
ip->matrix[2] = 0.0;
ip->matrix[1] = (-l_plane->matrix[3]-(l_plane->matrix[0]*r_plane->matrix[3]))/
((r_plane->matrix[0]*l_plane->matrix[1])-(l_plane->matrix[0]*r_plane->matrix[1]));
ip->matrix[0] = ((-r_plane->matrix[3])-(r_plane->matrix[1]*ip->matrix[1]))/r_plane->matrix[0];
//Calculate the beta value
l_beta = calcBeta(tl,vl_prime,iv,ip);
r_beta = calcBeta(tr,vr_prime,iv,ip);
l_int = genIntersect(tl,vl_prime,l_beta);
r_int = genIntersect(tr,vr_prime,r_beta);
object = matInit(3,1,NULL);
object->matrix[X] = (l_int->matrix[X]+r_int->matrix[X])/2;
object->matrix[Y] = (l_int->matrix[Y]+r_int->matrix[Y])/2;
object->matrix[Z] = (l_int->matrix[Z]+r_int->matrix[Z])/2;
//Release all of the memory we have allocated for matrices.
matRelease(vl_norm);
matRelease(vr_norm);
matRelease(rl);
matRelease(rr);
matRelease(tr);
matRelease(tl);
matRelease(vl_prime);
matRelease(vr_prime);
matRelease(axis);
matRelease(l_plane);
matRelease(r_plane);
matRelease(lp_norm);
matRelease(rp_norm);
matRelease(iv);
matRelease(ip);
matRelease(l_int);
matRelease(r_int);
//Finally return the 3D position.
return object;
}
struct tMatrix * generateRotMat(double x_rot, double y_rot, double z_rot){
double x_mat[] = {1,0,0,0,cos(x_rot*PI/180.0),sin(x_rot*PI/180.0),0,-sin(x_rot*PI/180.0),cos(x_rot*PI/180.0)};
double y_mat[] = {cos(y_rot*PI/180.0),0,sin(y_rot*PI/180.0),0,1,0,-sin(y_rot*PI/180.0),0,cos(y_rot*PI/180.0)};
double z_mat[] = {cos(z_rot*PI/180.0),sin(z_rot*PI/180.0),0,-sin(z_rot*PI/180.0),cos(z_rot*PI/180.0),0,0,0,1};
struct tMatrix * x_rot_mat = matInit(3,3,x_mat);
struct tMatrix * y_rot_mat = matInit(3,3,y_mat);
struct tMatrix * z_rot_mat = matInit(3,3,z_mat);
struct tMatrix * temp_mat;
struct tMatrix * result;
temp_mat = matMultiply(x_rot_mat,y_rot_mat);
result = matMultiply(temp_mat,z_rot_mat);
matRelease(x_rot_mat);
matRelease(y_rot_mat);
matRelease(z_rot_mat);
matRelease(temp_mat);
return result;
}
void pix2mm(struct camera_ray * camera){
camera->vector[0] = camera->vector[0]*camera->pixelsize[0];
camera->vector[1] = camera->vector[1]*camera->pixelsize[1];
camera->vector[2] = camera->vector[2]*(camera->pixelsize[0]+camera->pixelsize[1])/2;
}
struct tMatrix * defPlane(struct tMatrix * point, struct tMatrix * vec1, struct tMatrix * vec2){
double det_a[9] = {point->matrix[Y],point->matrix[Z],1.0,
                   vec1->matrix[Y],vec1->matrix[Z],0.0,
                   vec2->matrix[Y],vec2->matrix[Z],0.0};
double det_b[9] = {point->matrix[Z],1.0,point->matrix[X],
                   vec1->matrix[Z],0.0,vec1->matrix[X],
                   vec2->matrix[Z],0.0,vec2->matrix[X]};
double det_c[9] = {1.0,point->matrix[X],point->matrix[Y],
                   0.0,vec1->matrix[X],vec1->matrix[Y],
                   0.0,vec2->matrix[X],vec2->matrix[Y]};
double det_d[9] = {point->matrix[X],point->matrix[Y],point->matrix[Z],
                   vec1->matrix[X],vec1->matrix[Y],vec1->matrix[Z],
                   vec2->matrix[X],vec2->matrix[Y],vec2->matrix[Z]};
struct tMatrix * a_matrix = matInit(3,3,det_a);
struct tMatrix * b_matrix = matInit(3,3,det_b);
struct tMatrix * c_matrix = matInit(3,3,det_c);
struct tMatrix * d_matrix = matInit(3,3,det_d);
struct tMatrix * plane = matInit(4,1,NULL);
plane->matrix[0] = matDet(a_matrix);
plane->matrix[1] = matDet(b_matrix);
plane->matrix[2] = matDet(c_matrix);
plane->matrix[3] = matDet(d_matrix);
matRelease(a_matrix);
matRelease(b_matrix);
matRelease(c_matrix);
matRelease(d_matrix);
return plane;
}
double calcBeta(struct tMatrix * trans, struct tMatrix * vec, struct tMatrix * iv, struct tMatrix *
ip){
double beta_num = (ip->matrix[Y]*iv->matrix[X])
                 +(iv->matrix[Y]*trans->matrix[X])
                 -(ip->matrix[X]*iv->matrix[Y])-(iv->matrix[X]*trans->matrix[Y]);
double beta_denom = (iv->matrix[X]*vec->matrix[Y])-(iv->matrix[Y]*vec->matrix[X]);
return beta_num/beta_denom;
}
struct tMatrix * genIntersect(struct tMatrix * trans, struct tMatrix * vec, double beta){
double i_sect[3] = {trans->matrix[X]+(beta*vec->matrix[X]),
                    trans->matrix[Y]+(beta*vec->matrix[Y]),
                    trans->matrix[Z]+(beta*vec->matrix[Z])};
return matInit(3,1,i_sect);
}
//////////////////////////
/*
Core reconstruction component. Defines the type tMatrix, which can be used to
represent a matrix of any size rather than statically defining the matrix.
The matrix itself is represented as a 1-D array of floating point numbers
*/
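For instance, element (row, col) of a tMatrix lives at matrix[row*cols + col], so a typical use of the library defined below looks like this (a minimal sketch using only the calls declared in matlib.h):
double id3[9] = {1,0,0, 0,1,0, 0,0,1};        // 3x3 identity, row-major
struct tMatrix * a  = matInit(3,3,id3);       // copies the data into a new matrix
struct tMatrix * v  = matInit(3,1,NULL);      // uninitialised 3x1 vector
v->matrix[X] = 1.0; v->matrix[Y] = 2.0; v->matrix[Z] = 3.0;
struct tMatrix * av = matMultiply(a,v);       // 3x3 * 3x1 -> 3x1
matPrint(av);
matRelease(av); matRelease(v); matRelease(a); // the caller frees every result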
matlib.h
//Matrix handler library
#define X 0
#define Y 1
#define Z 2
typedef struct tMatrix {
double * matrix;
int cols;
int rows;
} tMatrix;
struct tMatrix * matMultiply(struct tMatrix * src1, struct tMatrix * src2);
void matRelease(struct tMatrix * mat);
void matAssign(struct tMatrix * mat,double * data);
struct tMatrix * matInit(int rows, int cols, double * data);
void matNorm(struct tMatrix * mat);
void matPrint(struct tMatrix * matrix);
double matDet(struct tMatrix * mat);
struct tMatrix * matCrossProd(struct tMatrix * vec1, struct tMatrix * vec2);
double matDotProd(struct tMatrix * vec1,struct tMatrix * vec2);
matlib.cpp
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "matlib.h"
struct tMatrix * matInit(int rows, int cols, double * data){
int i;
struct tMatrix * new_mat = (struct tMatrix *)malloc(sizeof(struct tMatrix));
new_mat->cols = cols;
new_mat->rows = rows;
new_mat->matrix = (double *)malloc(sizeof(double)*cols*rows);
if(data!=NULL){
for(i=0;i<(new_mat->cols*new_mat->rows);i++){
new_mat->matrix[i] = data[i];
}
}
return new_mat;
}
void matRelease(struct tMatrix * mat){
free(mat->matrix);
free(mat);
}
void matAssign(struct tMatrix * mat,double * data){
int i;
for(i=0;i<(mat->cols*mat->rows);i++){
mat->matrix[i] = data[i];
}
}
struct tMatrix * matMultiply(struct tMatrix * src1, struct tMatrix * src2){
int row,col,k;
double current;
struct tMatrix * dest = matInit(src1->rows,src2->cols,NULL);
for(row=0;row<dest->rows;row++){
for(col=0;col<dest->cols;col++){
current = 0.0;
for(k=0;k<src1->cols;k++)
current += src1->matrix[(row*src1->cols)+k]*src2->matrix[(k*src2->cols)+col];
dest->matrix[(row*dest->cols)+col] = current;
}
}
return dest;
}
void matNorm(struct tMatrix * mat){
double length = sqrt((mat->matrix[X]*mat->matrix[X])
                    +(mat->matrix[Y]*mat->matrix[Y])
                    +(mat->matrix[Z]*mat->matrix[Z]));
mat->matrix[X] = mat->matrix[X]/length;
mat->matrix[Y] = mat->matrix[Y]/length;
mat->matrix[Z] = mat->matrix[Z]/length;
}
void matPrint(struct tMatrix * matrix){
if(matrix!=NULL){
int col,row;
printf("MATRIX\n======\n") ;
for(row=0;row<matrix->rows ;row++){
for(col=0;col<matrix->cols ;col++){
printf("\t%f",matrix->matr ix[(row*ma trix->cols )+col]);
}
printf("\n");
}
}
}
double matDet(struct tMatrix * mat){
double det = (mat->matrix[0]*mat->matrix[4]*mat->matrix[8])
           - (mat->matrix[2]*mat->matrix[4]*mat->matrix[6])
           + (mat->matrix[1]*mat->matrix[5]*mat->matrix[6])
           - (mat->matrix[0]*mat->matrix[5]*mat->matrix[7])
           + (mat->matrix[2]*mat->matrix[3]*mat->matrix[7])
           - (mat->matrix[1]*mat->matrix[3]*mat->matrix[8]);
return det;
}
struct tMatrix * matCrossProd(struct tMatrix * vec1, struct tMatrix * vec2){
struct tMatrix * result = matInit(3,1,NULL);
result->matrix[0] = (vec1->matrix[1]*vec2->matrix[2]) - (vec1->matrix[2]*vec2->matrix[1]);
result->matrix[1] = (vec1->matrix[2]*vec2->matrix[0]) - (vec1->matrix[0]*vec2->matrix[2]);
result->matrix[2] = (vec1->matrix[0]*vec2->matrix[1]) - (vec1->matrix[1]*vec2->matrix[0]);
return result;
}
double matDotProd(struct tMatrix * vec1,struct tMatrix * vec2){
double result = (vec1->matrix[0]*vec2->matrix[0])
              + (vec1->matrix[1]*vec2->matrix[1])
              + (vec1->matrix[2]*vec2->matrix[2]);
return result;
}
just some thoughts:
do you use a "DEBUG" version of your library?
did you try your program in the release mode?
martin
ASKER
mgpeschke:
Yes, I have tried it in release mode also; same result.
These header files are missing:
#include <cv.h>
#include <cvaux.h>
#include <highgui.h>
Alex
ASKER
Sorry, I must have missed them out....
I'm sure the problem is not a result of this though.
>>>> I'm sure the problem is not a result of this though.
Yes, but I couldn't compile without them.
The problem I see with your code is that you are deleting entries from your linked lists *but* didn't initialize the head pointers of these lists. I couldn't spot the exact error because I am not able to debug it, but I think you have to rethink your technique. If you used std::list<tCFeature> instead of your own embedded linked list, you could avoid all of these dangerous deletions of pointers that might still be referenced in another collection.
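Just to illustrate what that buys you, here is a rough sketch (not drop-in code, and assuming #include <list> plus using namespace std; at the top of main.cpp) of how the MODE_RECONSTRUCT_RUN branch could look with no manual free() at all:
      case MODE_RECONSTRUCT_RUN:
           {
                 // hypothetical signatures: tkSegment returns list<t2DPoint>,
                 // tkCorrespond returns list<tCFeature>
                 list<t2DPoint> leftPts  = tkSegment(left);
                 list<t2DPoint> rightPts = tkSegment(right);
                 list<tCFeature> matches = tkCorrespond(leftPts, rightPts);
                 for (list<tCFeature>::iterator it = matches.begin(); it != matches.end(); ++it)
                       tkReconstruct(&*it, out_text);   // pass the address of the list element
                 // no free() loop: the lists release their own nodes when they go out of scope
           }
      break;
The file-writing code after the loop can stay exactly as it is.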
Regards, Alex
ASKER
How would I go about using std::list? I am not very familiar with it and would appreciate any help to get me started.
Many thanks.
>>>> How would I go about using std::list
(1) You need:
  #include <list>
  using namespace std;
at top of your cpp.
(2) You have to remove tCFeature* pNext member from your struct tCFeature.
   And next and prev member in t2DPoint.  BTW, in C++ you don't need typedef struct but only
   struct tCFeature
   {
      int iLeft[2];
      int iRight[2];
   };
   struct t2DPoint
   {
     int x;
     int y;
     int size;
   };
After that you would simply skip the struct keyword and use tCFeature as a type, e. g.
   tCFeature left; Â
Note, I would recommend to add a default constructor, that initializes the members:
   struct tCFeature
   {
      int iLeft[2];
      int iRight[2];
      tCFeature() { iLeft[0] = iLeft[1] = iRight[0] = iRight[1] = 0; }
   };
   struct t2DPoint
   {
     int x;
     int y;
     int size;
     t2DPoint() : x(0), y(0), size(0) {}
     t2DPoint(int xx, int yy, int s) : x(xx), y(yy), size(s) {}
   };
With that all your instances of tCFeature and t2DPoint were properly initialized.
(3) To use a list now you would do the following:
  list<t2DPoint> points;
  t2DPoint pt(10, 20, 2);
  points.push_back(pt);
Of course you could do that in a loop as well:
  list<t2DPoint> points;
  for (int i = 0; i < 20; ++i)
  {
    t2DPoint pt(10, 20, 2);
    points.push_back(t2DPoint( i*10, i*20, i));
  }
Now all your points are in list points.
You can retrieve the points by that:
  for (list<t2DPoint>::iterator it = points.begin(); it != points.end(); ++it)
  {
     t2DPoint pt = *it;  // get a copy from iterator
     int x = pt.x;      // get member variables
     int y = it->x;      // that's equivalent
     ....
  }
(4) To remove an element of the list you can:
   points.pop_back();   // deletes the last element
   points.pop_front();  // deletes the first element
  for (list<t2DPoint>::iterator it = points.begin(); it != points.end();)
  {
     list<t2DPoint>::iterator itnext = it;  // save the next iterator
     ++itnext;                              // (list iterators are bidirectional, so no it+1)
     if (it->x == 10 && it->y == 20)
     {
        points.erase(it);
     }
     it = itnext;  // use the saved iterator, as 'it' is not valid after erase
  }
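Note: std::list::erase() also returns the iterator that follows the erased element, so the same removal loop can be written without a saved iterator (same illustrative x==10/y==20 filter as above):
  for (list<t2DPoint>::iterator it = points.begin(); it != points.end(); )
  {
     if (it->x == 10 && it->y == 20)
        it = points.erase(it);   // erase hands back the next valid iterator
     else
        ++it;                    // only advance when nothing was erased
  }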
The main advantage of that approach is that you don't need any pointers.
Hope it was understandable.
Regards, Alex
ASKER
Hi, thanks again for your reply. I would just like to ask a couple of follow-up questions about the advice you gave me.
First, I don't understand what you mean by:
"After that you would simply skip the struct keyword and use tCFeature as a type, e. g. tCFeature left;"
Also, where you wrote:
You can retrieve the points by that:
  for (list<t2DPoint>::iterator it = points.begin(); it != points.end(); ++it)
  {
     t2DPoint pt = *it;  // get a copy from iterator
     int x = pt.x;      // get member variables
     int y = it->x;      // that's equivalent
     ....
  }
What would I need to do in the "...." section?
Thanks for your patience on the matter.
>>>> "After that you would simply skip the struct keyword"
In C, if you declare a plain struct type
 struct MyStruct
 {
   ....
 };
then you need the struct keyword whenever you use it:
 struct MyStruct ms;
which is why the typedef struct { .... } MyStruct; idiom is so common in C.
In C++, struct is a synonym for class; the only difference is that struct members default to public, while class members are private if not declared otherwise. Therefore you would define a struct like this:
struct MyStruct
{
  ....
};
without a typedef, defining the type name right after the struct keyword. Later, you use that type just like a class type, without the struct or class keyword:
 MyStruct ms;   // create an object of type MyStruct
>>>> What would I need to do in the "...." section?
There, you would evaluate the points; for example, when drawing a chart you would draw a line from each point to the next. The for loop was just a sample of how to traverse the list.
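For example (purely illustrative), the body could simply print every point that ended up in the list:
  for (list<t2DPoint>::iterator it = points.begin(); it != points.end(); ++it)
  {
     printf("point at [%i,%i], size %i\n", it->x, it->y, it->size);
  }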
Regards, Alex
ASKER
I am still having problems with the std::list implementation.
I'm not sure how to convert the code above (correspondence.cpp) so that it uses the STL std::list container class.
itsmeandnobodyelse
>>>>There, you would evaluate the points. The for loop was just a sample for evaluating a list.
How would I evaluate points that are automatically chosen when the program is run?
Could someone help me with this, preferably by converting the code to be used in this way? As I have stated, this is my full code, but because I have never used std::list, I am not sure what I am doing wrong.
Thanks in advance.
ASKER
Anyone help with this please....
Below is a compilable correspondence.cpp, global.h and correspondence.h.
You would have to convert the other cpp files and headers the same way.
In correspondence.cpp I most likely found the error (while I was converting it to std::list):
>>>>     //We should only increment the pointer if there hasn't been a match, otherwise
>>>>     //all hell will break loose!  ????
>>>>Â Â Â Â if(!match)t_left = t_left->next;
With the last statement you'll have an infinite loop if match is false (all hell will be let loose!!!)
Regards, Alex
// correspondence.cpp
#include "global.h"
#include "correspondence.h"
#include <list>
#include <cmath>
using namespace std;
tCFeature tkCorrespond(list<t2DPoint>& left_features, list<t2DPoint>& right_features)
{
  if(!left_features.empty() && !right_features.empty())
    return corrSimple(left_features, right_features);
  else
    return tCFeature();
}
tCFeature corrSimple(list<t2DPoint>& left, list<t2DPoint>& right)
{
  t2DPoint  match4left;
  t2DPoint  match4right;
  tCFeature result;
  bool match = false;
  for (list<t2DPoint>::iterator t_left = left.begin(); t_left != left.end(); )
  {
    match = false;
    match4left = corrSimple_1(*t_left, right);
    // save next iterator in case of deletion
    list<t2DPoint>::iterator itnext = t_left;
    itnext++;
    if (!match4left.empty())
    {
      match4right = corrSimple_1(match4left, left);
      if(*t_left == match4right)
      {
        match = true;
        //printf("MATCHED! L[%i,%i] R[%i,%i]\n",t_left->x,t_left->y,match4left.x,match4left.y);
        result.iLeft[0]  = t_left->x;
        result.iLeft[1]  = t_left->y;
        result.iRight[0] = match4left.x;
        result.iRight[1] = match4left.y;
        // remove all points from right matching *t_left
        right.remove( *t_left );
        // remove t_left from left
        left.erase( t_left );
      }
      t_left = itnext;
    }
    else
    {
      //the right list is empty - all matched!
      //break out!
      match=true;
      break;
    }
    //We should only increment the pointer if there hasn't been a match, otherwise
    //all hell will break loose!  ????
    //if(!match)t_left = t_left->next;
    if (!match)
      break;
  }
  //if(left!=NULL) printf("Not all markers matched in the left frame\n");
  //if(right!=NULL) printf("Not all markers matched int he right frame\n");
  return result;
}
t2DPoint corrSimple_1(t2DPoint marker, list<t2DPoint>& scene)
{
  double  temp_dist = 0;
  double  best_dist = 1000000000;
  t2DPoint best_pt;
  for (list<t2DPoint>::iterator temp_pt = scene.begin(); temp_pt != scene.end(); ++temp_pt )
  {
    if(abs(temp_pt->size - marker.size) <= (marker.size/5))
    {
      temp_dist = sqrt(((abs(marker.x) - abs(temp_pt->x)) * (abs(marker.x) - abs(temp_pt->x)))
            + ((abs(marker.y) - abs(temp_pt->y)) * (abs(marker.y) - abs(temp_pt->y))));
      if(temp_dist < best_dist)
      {
        best_dist = temp_dist;
        best_pt  = *temp_pt;
      }
    }
  }
  return best_pt;
}
//-------------- end of correspondence.cpp --------------------------
// global.h
/* Global definitions */
#include <stdio.h>
#include <stdlib.h>
// the following 3 includes I had to comment out to get correspondence.cpp compiled on my machine
#include <cv.h>
#include <cvaux.h>
#include <highgui.h>
#ifndef GLOBALS
#define GLOBALS
/* define run-modes */
#define MODE_INIT 0
#define MODE_CALIBRATE_INT 1
#define MODE_CALIBRATE_EXT 2
#define MODE_RECONSTRUCT_SAMPLE 3
#define MODE_RECONSTRUCT_RUN 4
/* define different calibration types */
#define CALIB_UNSET 0
#define CALIB_FILE 1
#define CALIB_BMP 2
#define CALIB_LIVE 3
/* define number of points require for ext calibration */
#define EXT_REQ_POINTS 35
/* define left and right camera indices */
#define SRC_LEFT_CAMERA 0
#define SRC_RIGHT_CAMERA 1
/* colour channels in an IplImage */
#define C_BLUE 0
#define C_GREEN 1
#define C_RED 2
/* structure for feature correspondences */
struct tCFeature
{
  int iLeft[2];
  int iRight[2];
  tCFeature() { iLeft[0] = iLeft[1] = iRight[0] = iRight[1] = 0; }
  tCFeature(const tCFeature& tcf )
  {
   iLeft[0]  = tcf.iLeft[0];
   iLeft[1]  = tcf.iLeft[1];
   iRight[0] = tcf.iRight[0];
   iRight[1] = tcf.iRight[1];
  }
  bool empty() { return (iLeft[0] == 0 && iLeft[1] == 0 && iRight[0] == 0 && iRight[1] == 0); }
  tCFeature& operator=(const tCFeature& tcf )
  {
   iLeft[0]  = tcf.iLeft[0];
   iLeft[1]  = tcf.iLeft[1];
   iRight[0] = tcf.iRight[0];
   iRight[1] = tcf.iRight[1];
   return *this;
  }
  bool operator==(const tCFeature& tcf)
  {
    return (iLeft[0]  == tcf.iLeft[0] && iLeft[1] == tcf.iLeft[1] &&
        iRight[0] == tcf.iRight[0] && iRight[1] == tcf.iRight[1]);
  }
};
/* 2D point struct (plain value type - no next/prev links needed with std::list) */
struct t2DPoint
{
  int x;
  int y;
  int size;
  t2DPoint() : x(0), y(0), size(0) {}
  t2DPoint(const t2DPoint& pt) : x(pt.x), y(pt.y), size(pt.size) {}
  bool empty() { return (x == 0 && y == 0 && size == 0); }
  t2DPoint& operator=(const t2DPoint& pt )
  {
    x = pt.x; y = pt.y; size = pt.size;
    return *this;
  }
  bool operator==(const t2DPoint& pt)
  {  return (x == pt.x && y == pt.y && size == pt.size);  }
};
/* struct representing a camera ray toward the object */
struct camera_ray
{
  double vector[3];
  double cam_tran[3];
  double cam_rot;
  double pixelsize[2];
};
#endif
// -------------  end of global.h ------------------------
// correspondence.h
/////////////////////////////////////////////////////////////////////////////
/*
Takes two lists of image features representing the features
found in the left and right images as chosen by segmentation.cpp
& returns a list of matched features.
*/
#include <list>
using namespace std;
/* correspondence */
tCFeature tkCorrespond(list<t2DPoint>& left_features, list<t2DPoint>& right_features);
tCFeature corrSimple( list<t2DPoint>& left, list<t2DPoint>& right);
t2DPoint  corrSimple_1(t2DPoint marker, list<t2DPoint>& scene);
// ------------------- end of correspondence.h --------------------------
ASKER
itsmeandnobodyelse
I am doing what you advised, and am changing segmentation.cpp at the moment.
I just wanted to ask if I'm going about this the right way, as I get a couple of errors in one of the functions:
t2DPoint findCircles(IplImage * input, IplImage * draw){  //IS THIS OK?
      CvMemStorage * storage;
      CvSeq * contour;
      CvBox2D * box;
      CvPoint * pointArray;
      CvPoint2D32f * pointArray32f;
      CvPoint center;
      t2DPoint match4left;
      t2DPoint match4right;
      t2DPoint result;
      float myAngle,ratio;
      int i,header_size,count,length,width;
      IplImage * gray_input = cvCreateImage(cvGetSize(input),IPL_DEPTH_8U,1);
    t2DPoint markers = (list <t2DPoint>)NULL;   //ERROR HERE  <-----------------------------------------------------
    t2DPoint temppt = ( list <t2DPoint>)NULL;   //ERROR HERE  <-----------------------------------------------------
      //Convert the input image to grayscale.
      cvCvtColor(input,gray_input,CV_RGB2GRAY);
      //Remove noise and smooth
      removeNoise(gray_input);
      //Edge detect the image with Canny algorithm
      cvCanny(gray_input,gray_input,25,150,3);
      //Allocate memory
      box = (CvBox2D *)malloc(sizeof(CvBox2D));
      header_size = sizeof(CvContour);
      storage = cvCreateMemStorage(1000);
      // Find all the contours in the image.
      cvFindContours(gray_input,storage,&contour,header_size,CV_RETR_EXTERNAL,CV_CHAIN_APPROX_TC89_KCOS);
      while(contour!=NULL)
      {
           if(CV_IS_SEQ_CURVE(contour))
           {
                 count = contour->total;
                 pointArray = (CvPoint *)malloc(count * sizeof(CvPoint));
                 cvCvtSeqToArray(contour,pointArray,CV_WHOLE_SEQ);
                 pointArray32f = (CvPoint2D32f *)malloc((count + 1) * sizeof(CvPoint2D32f));
                 for(i=0;i<count-1;i++){
                      pointArray32f[i].x = (float)(pointArray[i].x);
                      pointArray32f[i].y = (float)(pointArray[i].y);
                 }
                 pointArray32f[i].x = (float)(pointArray[0].x);
                 pointArray32f[i].y = (float)(pointArray[0].y);
                 if(count>7){
                          cvFitEllipse(pointArray32f,count,box);
                      ratio = (float)box->size.width/(float)box->size.height;
                      center.x = (int)box->center.x;
                      center.y = (int)box->center.y;
                      length = (int)box->size.height;
                      width = (int)box->size.width;
                      myAngle = box->angle;
                      if((center.x>0) && (center.y>0)){
                       result.x = center.x;
                    result.y = center.y;
                    result.size = length;
                    markers = temppt;
                            if(draw!=NULL) cvCircle(draw,center,(int)length/2,RGB(0,0,255),-1);
                            /*cvEllipse(input,
                            center,
                            cvSize((int)width/2,(int)length/2),
                            -box->angle,
                            0,
                            360,
                            RGB(0,255,0),
                            1);*/
                      }
                 }
                 free(pointArray32f);
                 free(pointArray);
           }
           contour = contour->h_next;
      }
      free(contour);
      free(box);
      cvReleaseImage(&gray_input);
      cvReleaseMemStorage(&storage);
      return markers;
}
I get the following two errors:
\segmentation.cpp(290) : error C2440: 'initializing' : cannot convert from 'class std::list<struct t2DPoint,class std::allocator<struct t2DPoint> >' to 'struct t2DPoint'
    No constructor could take the source type, or constructor overload resolution was ambiguous
\segmentation.cpp(291) : error C2440: 'initializing' : cannot convert from 'class std::list<struct t2DPoint,class std::allocator<struct t2DPoint> >' to 'struct t2DPoint'
    No constructor could take the source type, or constructor overload resolution was ambiguous
Could you please let me know where I'm going wrong....
Many thanks.
>>>> t2DPoint markers = (list <t2DPoint>)NULL;   //ERROR HERE
I used objects in the list and not pointers. So
 t2DPoint marker; Â
creates one empty t2DPoint object.
Note, I added t2DPoint::empty() member function. With that you could test objects for being empty rather than testing a return pointer on being NULL.
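To illustrate the idea, a standalone sketch (findMarkersDemo is just a made-up stand-in for your findCircles, and the t2DPoint here is a cut-down copy of the one in global.h):

#include <cstdio>
#include <list>
using namespace std;

struct t2DPoint
{
  int x, y, size;
  t2DPoint() : x(0), y(0), size(0) {}
  bool empty() { return (x == 0 && y == 0 && size == 0); }
};

// collects "detected" markers into a list of value objects - no pointers, no NULL
list<t2DPoint> findMarkersDemo()
{
  list<t2DPoint> markers;
  t2DPoint result;
  result.x = 120; result.y = 80; result.size = 14;   // pretend a circle was found
  markers.push_back(result);                         // store a copy in the list
  return markers;
}

int main()
{
  list<t2DPoint> markers = findMarkersDemo();
  if (!markers.empty())                              // instead of: if (pMarkers != NULL)
    printf("first marker: [%d,%d] size %d\n",
           markers.front().x, markers.front().y, markers.front().size);

  t2DPoint single;                                   // a single point...
  if (single.empty())                                // ...tested with empty() instead of NULL
    printf("no marker found yet\n");
  return 0;
}

So in findCircles you would simply declare list<t2DPoint> markers; and t2DPoint result; instead of the two NULL casts that cause the C2440 errors.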
Could you change structs CVPoint and CVBox2D similar to t2DPoint and tCFeature or do you have a library that needs the current definitions?
Regards, Alex
ASKER
>>>>Could you change structs CVPoint and CVBox2D similar to t2DPoint and tCFeature or do you have a library that needs the current definitions?
I don't have a library, but I am struggling with how to change these structs similarly to t2DPoint and tCFeature.
>>>> but i am struggling with how to change these structs, ...
That is good, as a program that has to mix different design paradigms rarely succeeds...
Note, if your lists contain objects and not pointers, you should make sure that the object in the list is a singleton, i.e. that you don't store the same object somewhere else. Of course you can temporarily extract a local copy or even a temporary result set, but such temporaries shouldn't outlive the point where you make changes to the main lists.
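A small illustration of that point (sketch only): the list stores copies, so a local copy never aliases the element that is in the list.

#include <cstdio>
#include <list>
using namespace std;

struct t2DPoint { int x, y, size; };

int main()
{
  list<t2DPoint> points;
  t2DPoint p = { 10, 20, 2 };
  points.push_back(p);                 // the list stores its own copy of p

  t2DPoint copy = points.front();
  copy.x = 99;                         // changes only the local copy...
  printf("%d\n", points.front().x);    // ...the element in the list is still 10

  points.front().x = 42;               // to change the stored element, go through the list (or an iterator)
  printf("%d\n", points.front().x);    // now 42
  return 0;
}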
Regards, Alex
ASKER
itsmeandnobodyelse
I am trying to change the reconstruct.cpp file in this way.
Could you please advise me on how to do this? I am also confused about how to convert the .h file.
Many thanks
I started with matlib.h and made fundamental changes to struct tMatrix, thus not needing matlib.cpp anymore:
It compiles (with a matlib.cpp that only includes matlib.h and nothing else), but you would need a lot of changes in reconstruct.cpp, mostly changing pointers to tMatrix into objects and using member functions of tMatrix instead of global functions (see the short usage sketch after the struct below).
Regards, Alex
/////////////////////////////////////////////////////////////////////////////
/*
Core reconstruction component. Defines type tMatrix, which can be used to
represent a matrix of any size rather than statically defining the matrix.
The matrix itself is represented as a vector of vectors of double.
*/
#include <vector>
#include <math.h>
#include <iostream>
using namespace std;
//Matrix handler library
enum Axis
{
  X,
  Y,
  Z
};
struct tMatrix
{
  int rows;
  int cols;
  vector< vector<double> > matrix;
  tMatrix() :  rows(0), cols(0) {}
  tMatrix(int r, int c, double init = 0.0)
    : rows(r), cols(c), matrix( r, vector<double>( c, init ) )
  {
  }
  tMatrix(const tMatrix& mat)
    : rows(mat.rows), cols(mat.cols), matrix( mat.rows, vector<double>( mat.cols, 0. ) )
  {
    *this = mat;
  }
  vector<double>& operator[](int row)
  {
    return matrix[row];
  }
  const vector<double>& operator[](int row) const
  {
    return matrix[row];
  }
  tMatrix& operator=(const tMatrix& mat)
  {
    for (int i = 0; i < rows; ++i)
      for (int j = 0; j < cols; ++j)
        matrix[i][j] = mat[i][j];
    return *this;
  }
  tMatrix operator* (const tMatrix& src)
  {
    tMatrix result(rows, src.cols);
    for (int r = 0; r < rows; ++r)
    {
      for (int c2 = 0; c2 < src.cols; ++c2)
      {
        double sum = 0.;
        for (int c1 = 0; c1 < cols; ++c1)
        {
          sum += matrix[r][c1] * src[c1][c2];
        }
        result[r][c2] = sum;
      }
    }
    return result;
  }
  void norm()
  {
    if (rows == 3 && cols == 1)
    {
      double dlen = sqrt((matrix[X][0]*matrix[X][0])
               +(matrix[Y][0]*matrix[Y][0])
               +(matrix[Z][0]*matrix[Z][0]));
      matrix[X][0] /= dlen;
      matrix[Y][0] /= dlen;
      matrix[Z][0] /= dlen;
    }
  }
  friend ostream& operator<<(ostream& os, const tMatrix& mat)
  {
    for (int i = 0; i < mat.rows; ++i)
      for (int j = 0; j < mat.cols; ++j)
        os << "\t" << mat[i][j];
    os << endl;
    return os;
  }
  double det33()
  {
    double det = 0.0;
    if (rows == 3 && cols == 3)
    {
      det =  (matrix[0][0] * matrix[1][1] * matrix[2][2])
         -(matrix[0][2] * matrix[1][1] * matrix[2][0])
         +(matrix[0][1] * matrix[1][2] * matrix[2][0])
         -(matrix[0][0] * matrix[1][2] * matrix[2][1])
         +(matrix[0][2] * matrix[1][0] * matrix[2][1])
         -(matrix[0][1] * matrix[1][0] * matrix[2][2]);
    }
    return det;
  }
  tMatrix crossProd31x31(const tMatrix& vec)
  {
    tMatrix result(3, 1);
    if (rows == 3 && cols == 1 && vec.rows == 3 && vec.cols == 1)
    {
      result.matrix[0][0] = matrix[1][0]*vec[2][0] - matrix[2][0]*vec[1][0];
      result.matrix[1][0] = matrix[2][0]*vec[0][0] - matrix[0][0]*vec[2][0];
      result.matrix[2][0] = matrix[0][0]*vec[1][0] - matrix[1][0]*vec[0][0];
    }
    return result;
  }
  double dotProd(const tMatrix& vec)
  {
    double result = 0.0;
    if (rows == 3 && cols == 1 && vec.rows == 3 && vec.cols == 1)
    {
      result =  (matrix[0][0]*vec[0][0])
           + (matrix[1][0]*vec[1][0])
           + (matrix[2][0]*vec[2][0]);
    }
    return result;
  }
};
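To show the kind of change that means for reconstruct.cpp, here is a short usage sketch of the struct above (the old helper names matCreate/matNorm/matMul/matFree in the comment are hypothetical - just to show the shape of the conversion from pointers to objects):

#include <vector>
#include <cmath>
#include <iostream>
using namespace std;
#include "matlib.h"   // the tMatrix struct above

int main()
{
  // old pointer style would have looked roughly like (hypothetical names):
  //   tMatrix * v = matCreate(3, 1);  matNorm(v);  ...  matFree(v);

  // new object style: stack objects, member functions, no free() anywhere
  tMatrix v(3, 1, 0.0);        // 3x1 vector, initialised to 0.0
  v[0][0] = 3.0;
  v[2][0] = 4.0;
  v.norm();                    // member function instead of a global helper
  cout << v;                   // prints 0.6 0 0.8

  tMatrix a(2, 2, 1.0);        // 2x2 filled with 1.0
  tMatrix b(2, 2, 2.0);
  cout << (a * b);             // operator* instead of a global matMul(): 4 4 4 4
  return 0;
}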
I would try to convert reconstruct.cpp as well but cannot compile because of the missing cv header files.
Please post the headers
#include <cv.h>
#include <cvaux.h>
#include <highgui.h>
Regards, Alex
ASKER
itsmeandnobodyelse
>>>>Please post the headers
>>>>#include <cv.h>
>>>>#include <cvaux.h>
>>>>#include <highgui.h>
Sorry m8, you want me to post the whole of the .h files?
The cv.h file alone is 3,500 lines...
The reason you can't compile is that you would be missing important 'include' and 'linker' files.
I wrote this program in VS 6.0.
You would also need to rebuild a file named baseclasses.dsw within DirectX, and copy 2 .lib files that are generated.
Then you would need to rebuild the OpenCV workspace and copy all the .dll files.
I wrote this program in VS 6.0 and needed to include directories from DirectX and OpenCV by going to the Tools->Options->Directories tab
and including all the necessary include and library files.
Other configurations are also needed.
Compiling the program will clearly be a problem.
If you like, I can post complete steps on how to configure OpenCV and VS 6.0 if need be, or was there something else you would require?
Thanks for all your help so far m8.
>>>> The cv.h file alone is 3,500 lines...
I would need only the struct definitions and function prototypes used in reconstruct.cpp. If some are too big, you could omit them and I could comment them out in my source. The problem is, if I can't compile I most likely would have to post non-compilable code here, which I don't like.
I don't need libs and other stuff as I don't intend to link the prog.
Regards, Alex
ASKER
>>>>I would need only struct definitions and function prototypes used in reconstruct.cpp
I'm not sure which ones these are.
Is there no way for you to view the files?
>>>> Is there no way for you to view the files?
From where? Could you give me an address to download (tomorrow)?
>>>> I'm not sure which ones these are.
it's only (class ?) CvCamera and struct camera_ray.
Regards, Alex
ASKER
>>>>it's only (class ?) CvCamera and struct camera_ray.
I was able to find the CvCamera definitions. They were as follows...
Didn't have much luck with camera_ray though.
Hope this was what you wanted....
/************ Epiline functions *******************/
....
typedef struct CvCamera
{
  float  imgSize[2]; /* size of the camera view, used during calibration */
  float  matrix[9]; /* intinsic camera parameters:  [ fx 0 cx; 0 fy cy; 0 0 1 ] */
  float  distortion[4]; /* distortion coefficients - two coefficients for radial distortion
               and another two for tangential: [ k1 k2 p1 p2 ] */
  float  rotMatr[9];
  float  transVect[3]; /* rotation matrix and transition vector relatively
               to some reference point in the space. */
}
CvCamera;
typedef struct CvStereoCamera
{
  CvCamera* camera[2]; /* two individual camera parameters */
  float fundMatr[9]; /* fundamental matrix */
  /* New part for stereo */
  CvPoint3D32f epipole[2];
  CvPoint2D32f quad[2][4]; /* coordinates of destination quadrangle after
                epipolar geometry rectification */
  double coeffs[2][3][3];/* coefficients for transformation */
  CvPoint2D32f border[2][4];
  CvSize warpSize;
  CvStereoLineCoeff* lineCoeffs;
  int needSwapCameras;/* flag set to 1 if need to swap cameras for good reconstruction */
  float rotMatrix[9];
  float transVector[3];
}
CvStereoCamera;
....
/*********************************************************************************\
*                             Calibration engine                                  *
\*********************************************************************************/
....
  /* Retrieves camera parameters for specified camera.
    If camera is not calibrated the function returns 0 */
  virtual const CvCamera* GetCameraParams( int idx = 0 ) const;
  virtual const CvStereoCamera* GetStereoParams() const;
  /* Sets camera parameters for all cameras */
  virtual bool SetCameraParams( CvCamera* params );
....
  /* camera data */
  int   cameraCount;
  CvCamera cameraParams[MAX_CAMERAS];
  CvStereoCamera stereo;
  CvPoint2D32f* points[MAX_CAMERAS];
  CvMat*  undistMap[MAX_CAMERAS];
  CvMat*  undistImg;
  int   latestCounts[MAX_CAMERAS];
  CvPoint2D32f* latestPoints[MAX_CAMERAS];
  CvMat*  rectMap[MAX_CAMERAS];
ASKER
>>>> Hope this was what you wanted....
Was there anything else you needed?
ASKER
can anyone help with this?
ASKER
itsmeandnobodyelse
Any more help with this mate? You've been very helpful so far; further help will be greatly appreciated.
ukjm2k
>>>> can anyone help with this?
I just got back from a one-week cruise in the Caribbean...
Did you try to convert reconstruct.h and reconstruct.cpp?
Is the matrix class I gave you that what you want?
I think, I'll find time tomorrow to help you with reconstruct.
Regards, Alex
ASKER
>>>Did you try to convert reconstruct.h and reconstruct.cpp?
Tried but was not successful.
>>>Is the matrix class I gave you that what you want?
Yes, this seems fine to me mate.
Here are compilable versions of reconstruct.cpp and reconstruct.h. I also had to add a new constructor to tMatrix:
  tMatrix(int r, int c, double* initArr)
    : rows(r), cols(c), matrix( r, vector<double>( c, 0.0 ) )
  {
    for (int i = 0; i < r; ++i)
      for (int j = 0; j < c; ++j)
         matrix[i][j] = *initArr++;
  }
Note, those were the last files I transformed on my own. If there are files left, you have to transform them yourself, and I'll help you if you get problems. OK?
Regards, Alex
// reconstruct.cpp
#include "global.h"
#include "matlib.h"
#include "reconstruct.h"
/* variables for the two cameras */
extern CvCamera * clLeftCamera;
extern CvCamera * clRightCamera;
void tkReconstruct(tCFeature& object,char * output)
{
  camera_ray leftcam_ray;
  camera_ray rightcam_ray;
  tMatrix result(3,1);
  char outstring[256];
  leftcam_ray.cam_tran[0] = 505.0;
  leftcam_ray.cam_tran[1] = 485.0;
  leftcam_ray.cam_tran[2] = 1000.0;
  leftcam_ray.cam_rot = 0.0;
  leftcam_ray.pixelsize[0] = (float)15.0/(float)352.0;
  leftcam_ray.pixelsize[1] = (float)12.0/(float)288.0;
  leftcam_ray.vector[0] = (float)175 - (float)object.iLeft[0];
  leftcam_ray.vector[1] = (float)140 - (float)object.iLeft[1];
  leftcam_ray.vector[2] = (float)clLeftCamera->matrix[0];
  rightcam_ray.cam_tran[0] = 235.0;
  rightcam_ray.cam_tran[1] = 485.0;
  rightcam_ray.cam_tran[2] = 1000.0;
  rightcam_ray.cam_rot = 0.0;
  rightcam_ray.pixelsize[0] = (float)15.0/(float)352.0;
  rightcam_ray.pixelsize[1] = (float)12.0/(float)288.0;
  rightcam_ray.vector[0] = (float)175 - (float)object.iRight[0];
  rightcam_ray.vector[1] = (float)140 - (float)object.iRight[1];
  rightcam_ray.vector[2] = (float)clRightCamera->matrix[0];
  result = reconstruct(leftcam_ray, rightcam_ray);
  //printf("OBJECT LOCATED AT:\n");
  //matPrint(result);
  sprintf(outstring,"[P::%.1f,%.1f,%.1f]",result[0][0],result[1][0],result.matrix[2][0]);
  strcat(output,outstring);
}
tMatrix reconstruct(camera_ray& left, camera_ray& right)
{
  tMatrix rl = generateRotMat(0.0,left.cam_rot,0.0);
  tMatrix rr = generateRotMat(0.0,right.cam_rot,0.0);
  tMatrix tl(3,1,left.cam_tran);
  tMatrix tr(3,1,right.cam_tran);
  double l_beta = 0.0,r_beta = 0.0;
  //Generate normalised left vector
  pix2mm(left);
  tMatrix vl_norm(3,1,left.vector);
  vl_norm.norm();
  //Generate normalised right vector
  pix2mm(right);
  tMatrix vr_norm(3,1,right.vector);
  vr_norm.norm();
  //Correct the image vectors for camera rotation.
  tMatrix vl_prime(rl*vl_norm);
  tMatrix vr_prime(rr*vr_norm);
  //Generate the "axis" vector
  tMatrix axis(vl_prime.crossProd31x31(vr_prime));
  //Generate the equations for the planes
  tMatrix l_plane = defPlane(tl, vl_prime,axis);
  tMatrix r_plane = defPlane(tr, vr_prime,axis);
  //Generate left and right plane normals
  tMatrix lp_norm(l_plane);
  tMatrix rp_norm(r_plane);
  //Generate the intersection vector and point
  tMatrix iv(lp_norm.crossProd31x31(rp_norm));
  iv.norm();
  tMatrix ip(3,1,0.0);
  ip[2][0] = 0.0;
  ip[1][0] = (-l_plane[3][0]-(l_plane[0][0]*r_plane[3][0]))/
    ((r_plane[0][0]*l_plane[1][0])-(l_plane[0][0]*r_plane[1][0]));
  ip[0][0] = ((-r_plane[3][0])-(r_plane[1][0]*ip[1][0]))/r_plane[0][0];
  //Calculate the beta value
  l_beta = calcBeta(tl,vl_prime,iv,ip);
  r_beta = calcBeta(tr,vr_prime,iv,ip);
  tMatrix l_int = genIntersect(tl,vl_prime,l_beta);
  tMatrix r_int = genIntersect(tr,vr_prime,r_beta);
  tMatrix object(3,1, 0.0);
  object[X][0] = (l_int[X][0]+r_int[X][0])/2;
  object[Y][0] = (l_int[Y][0]+r_int[Y][0])/2;
  object[Z][0] = (l_int[Z][0]+r_int[Z][0])/2;
  //Finally return the 3D position.
  return object;
}
tMatrix generateRotMat(double x_rot, double y_rot, double z_rot)
{
  double x_mat[] = {
            1,0,0,0,cos(x_rot*PI/180.0),sin(x_rot*PI/180.0),0,
            -sin(x_rot*PI/180.0),cos(x_rot*PI/180.0)
           };
  double y_mat[] = {
            cos(y_rot*PI/180.0),0,sin(y_rot*PI/180.0),0,1,0,
            -sin(y_rot*PI/180.0),0,cos(y_rot*PI/180.0)
           };
  double z_mat[] = {
            cos(z_rot*PI/180.0),sin(z_rot*PI/180.0),0,
            -sin(z_rot*PI/180.0),cos(z_rot*PI/180.0),0,0,0,1
           };
  tMatrix x_rot_mat(3, 3, x_mat);
  tMatrix y_rot_mat(3, 3, y_mat);
  tMatrix z_rot_mat(3, 3, z_mat);
  tMatrix temp_mat(x_rot_mat * y_rot_mat);
  tMatrix result(temp_mat * z_rot_mat);
  return result;
}
void pix2mm(camera_ray& camera)   // take by reference, matching the prototype in reconstruct.h, so the scaling is not lost
{
  camera.vector[0] = camera.vector[0] * camera.pixelsize[0];
  camera.vector[1] = camera.vector[1] * camera.pixelsize[1];
  camera.vector[2] = camera.vector[2] * (camera.pixelsize[0] + camera.pixelsize[1]) / 2;
}
tMatrix defPlane(tMatrix& point, tMatrix& vec1, tMatrix& vec2)
{
  double det_a[9] = {
            point[Y][0],point[Z][0],1.0,
            vec1[Y][0],vec1[Z][0],0.0,
            vec2[Y][0],vec2[Z][0],0.0
           };
  double det_b[9] = {
            point[Z][0],1.0,point[X][0],
            vec1[Z][0],0.0,vec1[X][0],
            vec2[Z][0],0.0,vec2[X][0]
           };
  double det_c[9] = {
            1.0,point[X][0],point[Y][0],
            0.0,vec1[X][0],vec1[Y][0],
            0.0,vec2[X][0],vec2[Y][0]
           };
  double det_d[9] = {
            point[X][0],point[Y][0],point[Z][0],
            vec1[X][0],vec1[Y][0],vec1[Z][0],
            vec2[X][0],vec2[Y][0],vec2[Z][0]
           };
  tMatrix a_matrix(3,3,det_a);
  tMatrix b_matrix(3,3,det_b);
  tMatrix c_matrix(3,3,det_c);
  tMatrix d_matrix(3,3,det_d);
  tMatrix plane(4,1, 0.0);
  plane[0][0] = a_matrix.det33();
  plane[1][0] = b_matrix.det33();
  plane[2][0] = c_matrix.det33();
  plane[3][0] = d_matrix.det33();
  return plane;
}
double calcBeta(tMatrix& trans, tMatrix& vec, tMatrix& iv, tMatrix& ip)
{
  double beta_num =  (ip[Y][0]*iv[X][0])
           + (iv[Y][0]*trans[X][0])
           - (ip[X][0]*iv[Y][0])-(iv[X][0]*trans[Y][0]);
  double beta_denom = (iv[X][0]*vec[Y][0])-(iv[Y][0]*vec[X][0]);
  return beta_num/beta_denom;
}
tMatrix genIntersect(tMatrix& trans, tMatrix& vec, double beta)
{
  double i_sect[3] = {
             trans[X][0] + (beta * vec[X][0]),
             trans[Y][0] + (beta * vec[Y][0]),
             trans[Z][0] + (beta * vec[Z][0])
            };
  return tMatrix(3, 1, i_sect);
}
//---------------------------------------------------------------------------
// reconstruct.h
/////////////////////////////////////////////////////////////////////////////
/*
Involves a significant amount of matrix manipulation, and to avoid duplicating
code, a file matlib.cpp was also written. This is a minimal matrix library
which implements the functionality for this project only; any external
libraries were ignored.
See matlib.cpp
*/
//3D Reconstruction library header
#define PI 3.14159
void tkReconstruct(tCFeature& object,char * output);
tMatrix reconstruct(camera_ray& left, camera_ray& right);
tMatrix generateRotMat(double x_rot, double y_rot, double z_rot);
tMatrix defPlane(tMatrix& point, tMatrix& vec1, tMatrix& vec2);
void pix2mm(camera_ray&);
double calcBeta(tMatrix& trans, tMatrix& vec, tMatrix& iv, tMatrix& ip);
tMatrix genIntersect(tMatrix& trans, tMatrix& vec, double beta);
ASKER
>>>I also had to add a new constructor to tMatrix:
I'm doing something wrong with this, but I'm not sure what, as I get errors.
Where does this new constructor need to be inserted?
>>>> Where does this new constructor need to be inserted?
In matlib.h where struct tMatrix was defined we already have 3 constructors. Here we need one more that takes an array of doubles as last argument.
  tMatrix() :  rows(0), cols(0) {}       // DEFAULT CONSTRUCTOR
  tMatrix(int r, int c, double init = 0.0)   // CONSTRUCTOR TAKES ROWS AND COLS AND *ONE* DOUBLE
    : rows(r), cols(c), matrix( r, vector<double>( c, init ) )
  {
  }
  tMatrix(const tMatrix& mat)  // COPY CONSTRUCTOR
    : rows(mat.rows), cols(mat.cols), matrix( mat.rows, vector<double>( mat.cols, 0. ) )
  {
    *this = mat;
  }
  tMatrix(int r, int c, double* initArr) // CONSTRUCTOR TAKES ROWS AND COLS AND DOUBLE ARRAY
    : rows(r), cols(c), matrix( r, vector<double>( c, 0.0 ) )
  {
    for (int i = 0; i < r; ++i)
      for (int j = 0; j < c; ++j)
         matrix[i][j] = *initArr++;
  }
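A quick usage sketch of that fourth constructor, just to check it in isolation (the expected output is noted in the comments):

#include <vector>
#include <cmath>
#include <iostream>
using namespace std;
#include "matlib.h"   // the tMatrix struct with the four constructors above

int main()
{
  double rot[9] = { 1, 0, 0,
                    0, 1, 0,
                    0, 0, 1 };          // 3x3 identity, row by row
  double vec[3] = { 3, 0, 4 };          // a 3x1 column vector

  tMatrix r(3, 3, rot);                 // CONSTRUCTOR TAKES ROWS AND COLS AND DOUBLE ARRAY
  tMatrix v(3, 1, vec);

  tMatrix p = r * v;                    // 3x1 result: 3 0 4
  cout << p;

  cout << r.det33() << endl;            // 1
  return 0;
}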
Regards, Alex
ASKER
itsmeandnobodyelse
I've converted segmentation.cpp, but I am having trouble with a bit in main.cpp. Could I get some help converting this please?
The bit of code is:
case MODE_RECONSTRUCT_RUN:

           pMatches = tkCorrespond(tkSegment(left),tkSegment(right));
           if(corrSimple!=NULL){
                 pTempMatch = pMatches;
                 while(pTempMatch!=NULL){
                      tkReconstruct(pTempMatch,out_text);
                      pMatches = pTempMatch->pNext;
                      free(pTempMatch);
                      pTempMatch = pMatches;
                 }
                 fp = fopen(out_file,"a");
                 fputs(out_text,fp);
                 fputs("\n",fp);
                 fclose(fp);
                 out_text[0] = '\0';
           }
      }
      break;

}
The errors I get are:
Compiling...
main.cpp
My Documents\dissertation\main.cpp(213) : error C2064: term does not evaluate to a function
My Documents\dissertation\main.cpp(217) : error C2678: binary '!=' : no operator defined which takes a left-hand operand of type 'struct tCFeature' (or there is no acceptable conversion)
My Documents\dissertation\main.cpp(217) : fatal error C1903: unable to recover from previous error(s); stopping compilation
Error executing cl.exe.
main.obj - 3 error(s), 0 warning(s)
Thanks in advance
>>>> I've converted segmentation.cpp
Please post segmentation.h and segmentation.cpp. I need the function prototypes declared in segmentation.h, and I have to check whether the conversion was successful ... beyond just compiling ;-)
Regards, Alex
ASKER
>>>>Need some function prototypes declared in segmentation.h
In that case, I don't think I have done it correctly :( I just made changes within the code.
Here it is:
///////////////////////////////////////////////////////////////////////////////
Segmentation.h
///////////////////////////////////////////////////////////////////////////////
/* image segmentation */
/* how large an area to sample from around the click */
#define SAMPLE_SIZE 20
/* the threshhold window around the colour to be matched */
#define THRESH_BAND 60
bool tkGenerateColMap(IplImage * input,int x,int y);
bool segColMapSimpleAve(IplImage * input, int x, int y);
bool segColMapRThresh(IplImage * input, int x, int y);
t2DPoint tkSegment(IplImage * input);
t2DPoint segFindFeatures_Ave(IplImage * input);
t2DPoint segFindFeatures_RThresh(IplImage * input);
t2DPoint findCircles(IplImage * input,IplImage * draw);
///////////////////////////////////////////////////////////////////////////////
Segmentation.cpp
///////////////////////////////////////////////////////////////////////////////
#include "global.h"
#include "segmentation.h"
#include "library.h"
int thresh_r;
int thresh_g;
int thresh_b;
char RchnThresh;
extern bool colmapdone;
bool tkGenerateColMap(IplImage * input,int x,int y){
      if(segColMapSimpleAve(input,x,y)) return true;
      else return false;
}
t2DPoint tkSegment(IplImage * input){
      return segFindFeatures_Ave(input);
}
/*
  This function creates a sample window around the selected point in order to
  average out the colour across this window; avoids "noisy" pixels
*/
bool segColMapSimpleAve(IplImage * input, int x, int y){
      char sample[SAMPLE_SIZE][SAMPLE_SIZE][3];
      int row,col,chan,v;
      int norm_total;
      int left = x - (SAMPLE_SIZE/2);
      int right = x + (SAMPLE_SIZE/2);
      int top = y - (SAMPLE_SIZE/2);
      int bottom = y + (SAMPLE_SIZE/2);
      /* Duplicate the bit we need to sample */
      for(row=top;row<bottom;row++){
            for(col=left;col<right;col++){
                  sample[row-top][col-left][C_RED] = input->imageData[(row*input->widthStep)+(col*3)+C_RED];
                  sample[row-top][col-left][C_BLUE] = input->imageData[(row*input->widthStep)+(col*3)+C_BLUE];
                  sample[row-top][col-left][C_GREEN] = input->imageData[(row*input->widthStep)+(col*3)+C_GREEN];
            }
      }
      /* Now normalise the sample */
      for(row=0;row<SAMPLE_SIZE;row++){
            for(col=0;col<SAMPLE_SIZE;col++){
                  norm_total = sample[row][col][C_RED] + sample[row][col][C_BLUE] +
                        sample[row][col][C_GREEN];
                  sample[row][col][C_RED] =
                        (int)((float)sample[row][col][C_RED]/(float)norm_total*255.0);
                  sample[row][col][C_BLUE] =
                        (int)((float)sample[row][col][C_BLUE]/(float)norm_total*255.0);
                  sample[row][col][C_GREEN] =
                        (int)((float)sample[row][col][C_GREEN]/(float)norm_total*255.0);
            }
      }
      /* Now smooth it a bit */
      for(row=0;row<SAMPLE_SIZE;row++){
            for(col=0;col<SAMPLE_SIZE;col++){
                  for(chan=0;chan<3;chan++){
                        v = sample[row][col][chan];
                        if(v>0){
                              v -= 1;
                              sample[row][col][chan] = v;
                              if(row>0 && sample[row-1][col][chan] < v) sample[row-1][col][chan] = v;
                              if(col>0 && sample[row][col-1][chan] < v) sample[row][col-1][chan] = v;
                        }
                  }
            }
      }
      for(row=0;row<SAMPLE_SIZE;row++){
            for(col=0;col<SAMPLE_SIZE;col++){
                  for(chan=0;chan<3;chan++){
                        v = sample[row][col][chan];
                        if(v>0){
                              v -= 1;
                              sample[row][col][chan] = v;
                              if(row < SAMPLE_SIZE-1 && sample[row+1][col][chan] < v)
                                    sample[row+1][col][chan] = v;
                              if(col < SAMPLE_SIZE-1 && sample[row][col+1][chan] < v)
                                    sample[row][col+1][chan] = v;
                        }
                  }
            }
      }
      /*Now find the average*/
      thresh_r = sample[0][0][C_RED];
      thresh_b = sample[0][0][C_BLUE];
      thresh_g = sample[0][0][C_GREEN];
      for(row=0;row<SAMPLE_SIZE;row++){
            for(col=0;col<SAMPLE_SIZE;col++){
                  thresh_r = (thresh_r+sample[row][col][C_RED])/2;
                  thresh_b = (thresh_b+sample[row][col][C_BLUE])/2;
                  thresh_g = (thresh_g+sample[row][col][C_GREEN])/2;
            }
      }
      printf("\nTHRESHHOLDS: R:%i G:%i B:%i\n",thresh_r,thresh_g,thresh_b);
      return true;
}
/*
  This function generates a binary image by passing over each of the pixels
  in the image, thresholding them on the values previously found from sampling;
*/
t2DPoint segFindFeatures_Ave(IplImage * input){
      IplImage * temp;
      t2DPoint features;
      int i,j;
      int norm_r=0,norm_g=0,norm_b=0;
      int low_r=0,low_g=0,low_b=0;
      int high_r=0,high_g=0,high_b=0;
      int norm_total=0,in_total=0;
      char * pixel_in = (char *)NULL;
      char * pixel_out = (char *)NULL;
      /* Clone the input image! */
      temp = cvCloneImage(input);
      removeNoise(input);
      removeNoise(input);
      /* Now work out what the normalised boundaries are from
      the specified r,g,b values */
      in_total = thresh_r+thresh_g+thresh_b;
      if(in_total!=0){
            low_r = (int)(((float)thresh_r / (float)in_total)*255.0);
            low_g = (int)(((float)thresh_g / (float)in_total)*255.0);
            low_b = (int)(((float)thresh_b / (float)in_total)*255.0);
            high_r = low_r + THRESH_BAND;
            high_g = low_g + THRESH_BAND;
            high_b = low_b + THRESH_BAND;
            low_r -= THRESH_BAND;
            low_g -= THRESH_BAND;
            low_b -= THRESH_BAND;
      }
      else {
            low_r = 0;
            low_g = 0;
            low_b = 0;
            high_r = THRESH_BAND;
            high_g = THRESH_BAND;
            high_b = THRESH_BAND;
      }
      for(i=0;i<input->height;i++){
            for(j=0;j<(input->widthStep/3);j++){
                  pixel_in = &input->imageData[(i*input->widthStep)+(j*3)];
                  pixel_out = &temp->imageData[(i*input->widthStep)+(j*3)];
                  norm_total = (int)pixel_in[C_RED] + (int)pixel_in[C_GREEN] + (int)pixel_in[C_BLUE];
                  if(norm_total!=0){
                        norm_r = (int)(((float)pixel_in[C_RED] / (float)norm_total)*255.0);
                        norm_g = (int)(((float)pixel_in[C_GREEN] / (float)norm_total)*255.0);
                        norm_b = (int)(((float)pixel_in[C_BLUE] / (float)norm_total)*255.0);
                  }
                  else if(norm_total==0){
                        norm_r=0;norm_g=0;norm_b=0;
                  }
                  if(norm_r >= low_r && norm_r <= high_r &&
                        norm_g >= low_g && norm_g <= high_g &&
                        norm_b >= low_b && norm_b <= high_b){
                        pixel_out[C_RED] = (char)0;
                        pixel_out[C_GREEN] = (char)0;
                        pixel_out[C_BLUE] = (char)0;
                  }
                  else {
                        pixel_out[C_RED] = (char)255;
                        pixel_out[C_GREEN] = (char)255;
                        pixel_out[C_BLUE] = (char)255;
                  }
            }
      }
      features = findCircles(temp,input);
      cvReleaseImage(&temp);
      return features;
}
bool segColMapRThresh(IplImage * input, int x, int y){
      char * pixel;
      int left = x - 2;
      int right = x + 2;
      int top = y - 2;
      int bottom = y + 2;
      int row,col;
      if (input->depth == IPL_DEPTH_8U) printf ("IPL_DEPTH_8U\n");
      if (input->depth == IPL_DEPTH_8S) printf ("IPL_DEPTH_8S\n");
      if (input->depth == IPL_DEPTH_16S) printf ("IPL_DEPTH_16S\n");
      if (input->depth == IPL_DEPTH_32S) printf ("IPL_DEPTH_32S\n");
      if (input->depth == IPL_DEPTH_32F) printf ("IPL_DEPTH_32F\n");
      if (input->depth == IPL_DEPTH_64F) printf ("IPL_DEPTH_64F\n");
      for(row=top;row<bottom;row++){
            for(col=left;col<right;col++){
                  pixel = &input->imageData[(input->widthStep*row)+(col*3)];
                  if(col==left && row==top){
                        RchnThresh = pixel[C_RED];
                  }
                  else{
                        RchnThresh = (RchnThresh/2) + (pixel[C_RED]/2);
                  }
                  printf("%c",pixel[C_RED]);
            }
            printf("\n");
      }
      return true;
}
/*
  This function scans the binary image for circles. Consists of 3 main steps;
  First, the binary image is edge detected. Then the OpenCV function cvFindContours
  is used to identify the contours (edges) within the images.
  Finally, an elipse is fitted over each contour.
*/
t2DPoint segFindFeatures_RThresh(IplImage * input){
      IplImage * threshed = cvCloneImage(input);
      t2DPoint features;
      int i,j;
      char * pixel_in = (char *)NULL;
      char * pixel_out = (char *)NULL;
      for(i=0;i<input->height;i++){
            for(j=0;j<(input->widthStep/3);j++){
                  pixel_in = &input->imageData[(i*input->widthStep)+(j*3)];
                  pixel_out = &threshed->imageData[(i*threshed->widthStep)+(j*3)];
                  if((pixel_in[C_RED] > (RchnThresh - 50)) &&
                        (pixel_in[C_RED] < (RchnThresh + 20)) &&
                        (pixel_in[C_GREEN] < (char)50) &&
                        (pixel_in[C_BLUE] < (char)110)){
                        pixel_out[C_RED] = (char)0;
                        pixel_out[C_BLUE] = (char)0;
                        pixel_out[C_GREEN] = (char)0;
                  }
                  else {
                        pixel_out[C_RED] = (char)255;
                        pixel_out[C_BLUE] = (char)255;
                        pixel_out[C_GREEN] = (char)255;
                  }
            }
      }
      features = findCircles(threshed,input);
      cvReleaseImage(&threshed);
      return features;
}
t2DPoint findCircles(IplImage * input, IplImage * draw){
      CvMemStorage * storage;
      CvSeq * contour;
      CvBox2D * box;
      CvPoint * pointArray;
      CvPoint2D32f * pointArray32f;
      CvPoint center;
      t2DPoint match4left;
      t2DPoint match4right;
      t2DPoint result;
      float myAngle,ratio;
      int i,header_size,count,length,width;
      IplImage * gray_input = cvCreateImage(cvGetSize(input),IPL_DEPTH_8U,1);
      t2DPoint markers;
      t2DPoint temppt;
      //Convert the input image to grayscale.
      cvCvtColor(input,gray_input,CV_RGB2GRAY);
      //Remove noise and smooth
      removeNoise(gray_input);
      //Edge detect the image with Canny algorithm
      cvCanny(gray_input,gray_input,25,150,3);
      //Allocate memory
      box = (CvBox2D *)malloc(sizeof(CvBox2D));
      header_size = sizeof(CvContour);
      storage = cvCreateMemStorage(1000);
      // Find all the contours in the image.
      cvFindContours(gray_input,storage,&contour,header_size,CV_RETR_EXTERNAL,CV_CHAIN_APPROX_TC89_KCOS);
      while(contour!=NULL)
      {
            if(CV_IS_SEQ_CURVE(contour))
            {
                  count = contour->total;
                  pointArray = (CvPoint *)malloc(count * sizeof(CvPoint));
                  cvCvtSeqToArray(contour,pointArray,CV_WHOLE_SEQ);
                  pointArray32f = (CvPoint2D32f *)malloc((count + 1) * sizeof(CvPoint2D32f));
                  for(i=0;i<count-1;i++){
                        pointArray32f[i].x = (float)(pointArray[i].x);
                        pointArray32f[i].y = (float)(pointArray[i].y);
                  }
                  pointArray32f[i].x = (float)(pointArray[0].x);
                  pointArray32f[i].y = (float)(pointArray[0].y);
                  if(count>7){
                        cvFitEllipse(pointArray32f,count,box);
                        ratio = (float)box->size.width/(float)box->size.height;
                        center.x = (int)box->center.x;
                        center.y = (int)box->center.y;
                        length = (int)box->size.height;
                        width = (int)box->size.width;
                        myAngle = box->angle;
                        if((center.x>0) && (center.y>0)){
                              result.x = center.x;
                              result.y = center.y;
                              result.size = length;
                              markers = temppt;
                              if(draw!=NULL) cvCircle(draw,center,(int)length/2,RGB(0,0,255),-1);
                              /*cvEllipse(input,
                              center,
                              cvSize((int)width/2,(int)length/2),
                              -box->angle,
                              0,
                              360,
                              RGB(0,255,0),
                              1);*/
                        }
                  }
                  free(pointArray32f);
                  free(pointArray);
            }
            contour = contour->h_next;
      }
      free(contour);
      free(box);
      cvReleaseImage(&gray_input);
      cvReleaseMemStorage(&storage);
      return markers;
}
>>>> In that case, I don't think I have done it correctly
Not so bad at first glance ;-) You removed all the t2DPoint return pointers. Bingo!
I will need some time ...
Regards
I would need struct IplImage. Could you check in your .h files where it is defined?
Regards, Alex
ASKER
typedef struct _IplImage {
  int  nSize;     /* sizeof(IplImage) */
  int  ID;       /* version (=0)*/
  int  nChannels;   /* Most of OpenCV functions support 1,2,3 or 4 channels */
  int  alphaChannel;  /* ignored by OpenCV */
  int  depth;     /* pixel depth in bits: IPL_DEPTH_8U, IPL_DEPTH_8S, IPL_DEPTH_16S,
              IPL_DEPTH_32S, IPL_DEPTH_32F and IPL_DEPTH_64F are supported */
  char colorModel[4]; /* ignored by OpenCV */
  char channelSeq[4]; /* ditto */
  int  dataOrder;   /* 0 - interleaved color channels, 1 - separate color channels.
              cvCreateImage can only create interleaved images */
  int  origin;     /* 0 - top-left origin,
              1 - bottom-left origin (Windows bitmaps style) */
  int  align;     /* Alignment of image rows (4 or 8).
              OpenCV ignores it and uses widthStep instead */
  int  width;     /* image width in pixels */
  int  height;     /* image height in pixels */
  struct _IplROI *roi;/* image ROI. if NULL, the whole image is selected */
  struct _IplImage *maskROI; /* must be NULL */
  void  *imageId;   /* ditto */
  struct _IplTileInfo *tileInfo; /* ditto */
  int  imageSize;   /* image data size in bytes
              (==image->height*image->widthStep
              in case of interleaved data)*/
  char *imageData;  /* pointer to aligned image data */
  int  widthStep;  /* size of aligned image row in bytes */
  int  BorderMode[4]; /* ignored by OpenCV */
  int  BorderConst[4]; /* ditto */
  char *imageDataOrigin; /* pointer to very origin of image data
               (not necessarily aligned) -
               needed for correct deallocation */
}
IplImage;
ASKER
Is this what you are talking about mate?
I tried to get segmentation.cpp compiled at home... but failed as I needed to create dummy prototypes for hundreds of cv structs and cv function calls where I had no headers.
Unfortunately I forgot to transfer my home project to the machine I am currently working on, so I would have to wait til tomorrow *or* try to work on the code snippet of main.cpp you posted above.
Regards, Alex
Here is my attempt:
  case MODE_RECONSTRUCT_RUN:
    {
      // get left list of points (formerly a pointer to the first array element)
      list<t2DPoint> rleft  = tkSegment(left);
      // get right list of points (formerly a pointer to the first array element)
      list<t2DPoint> rright = tkSegment(right);
      tCFeature matches = tkCorrespond(rrleft, rright);
      // the following check is equivalent to the == NULL check
      if(!matches.empty())
      {
        // In old version 'pMatches' was a pointer to a node
        // we could have made similar by returning an iterator (== a node of std::list)
        // but unfortunately we have a return by value
        // so we need to check the left list from the beginning
        bool found = false;
        // loop all elements of left list
        for (list<tCFeature>::iterator it = rleft.begin(); it != rleft.end(); )
        {
          if (!found && *it == matches)
            found = true;       // node found
          // the old version calls tkReconstruct for any node following
          // the current node and erases the node
          // we do the same though it doesn't make much sense as
          // the lists are temporary only and were deleted anyhow
          if (found)
          {
            list<tCFeature>::iterator itn = it;
            ++itn;
            tkReconstruct(*it, out_text);
            rleft.erase(it);
            it = itn;
          }
          else
            ++it;
        }
        fp = fopen(out_file,"a");
        fputs(out_text, fp);
        fputs("\n",fp);
        fclose(fp);
        out_text[0] = '\0';
      }
    }
    break;
The conversion may be correct, but I am actually not sure what the code should do, or whether the old version was correct here. The old version created some linked lists and selectively deleted some items at the end of the lists. It is unclear what happens to the items and lists that were *not* deleted. I would say the old version had a lot of memory leaks, as most items were never freed. However, it is probably good that those items were not deleted, because there were many copies of pointers around; deleting or freeing through a copy of a pointer destroys the original allocated storage, which most likely leads to a crash if the original pointer is still used.
Note, I only removed the *node property* from your structs and put the struct objects into separate list containers instead. What I couldn't change is the logic of the program, though that might be the next thing to do.
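As an aside, the erase-while-iterating pattern used above can be shown in isolation. This is only a minimal sketch with a plain int list (not your tCFeature type), assuming nothing beyond standard std::list behaviour:
#include <list>
#include <cstdio>
using namespace std;

int main()
{
  list<int> values;
  list<int>::iterator it;               // declared once (keeps VC++ 6 scoping happy)
  for (int i = 1; i <= 5; ++i)
    values.push_back(i);                // 1 2 3 4 5

  // erase every element >= 3 while walking the list;
  // the iterator is advanced *before* the erase so it stays valid
  for (it = values.begin(); it != values.end(); )
  {
    if (*it >= 3)
    {
      list<int>::iterator itn = it;     // remember the next position
      ++itn;
      values.erase(it);                 // erase only invalidates the erased iterator
      it = itn;
    }
    else
      ++it;
  }

  for (it = values.begin(); it != values.end(); ++it)
    printf("%d ", *it);                 // prints: 1 2
  printf("\n");
  return 0;
}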
Regards, Alex
ASKER
I got the following errors when I put this in:
Compiling...
main.cpp
C:\My Documents\dissertation\main.cpp(215) : error C2440: 'initializing' : cannot convert from 'struct t2DPoint' to 'class std::list<struct t2DPoint,class std::allocator<struct t2DPoint> >'
    No constructor could take the source type, or constructor overload resolution was ambiguous
C:\My Documents\dissertation\main.cpp(217) : error C2440: 'initializing' : cannot convert from 'struct t2DPoint' to 'class std::list<struct t2DPoint,class std::allocator<struct t2DPoint> >'
    No constructor could take the source type, or constructor overload resolution was ambiguous
C:\My Documents\dissertation\main.cpp(219) : error C2065: 'rrleft' : undeclared identifier
C:\My Documents\dissertation\main.cpp(229) : error C2440: 'initializing' : cannot convert from 'class std::list<struct t2DPoint,class std::allocator<struct t2DPoint> >::iterator' to 'class std::list<struct tCFeature,class std::allocator<struct tCFeature> >::iterator'
    No constructor could take the source type, or constructor overload resolution was ambiguous
C:\My Documents\dissertation\main.cpp(229) : error C2678: binary '!=' : no operator defined which takes a left-hand operand of type 'class std::list<struct tCFeature,class std::allocator<struct tCFeature> >::iterator' (or there is no acceptable conversion)
C:\My Documents\dissertation\main.cpp(242) : error C2664: 'class std::list<struct t2DPoint,class std::allocator<struct t2DPoint> >::iterator __thiscall std::list<struct t2DPoint,class std::allocator<struct t2DPoint> >::erase(class std::list<struct t2DPoint,class std::allocator<struct t2DPoint> >::iterator)' : cannot convert parameter 1 from 'class std::list<struct tCFeature,class std::allocator<struct tCFeature> >::iterator' to 'class std::list<struct t2DPoint,class std::allocator<struct t2DPoint> >::iterator'
    No constructor could take the source type, or constructor overload resolution was ambiguous
Error executing cl.exe.
main.obj - 6 error(s), 0 warning(s)
You need to change segmentation.h and segmentation.cpp. The return values turn from t2DPoint into list<t2DPoint>, which is the equivalent of the t2DPoint * (pointer to the first element of a linked list) you had before.
Note, I couldn't compile the code below because the cv structures and functions were missing.
Regards, Alex
///////////////////////////////////////////////////////////////////////////////
// Segmentation.h
///////////////////////////////////////////////////////////////////////////////
#include <list>
using namespace std;
/* image segmentation */
/* how large an area to sample from around the click */
#define SAMPLE_SIZE 20
/* the threshhold window around the colour to be matched */
#define THRESH_BAND 60
bool tkGenerateColMap(IplImage * input,int x,int y);
bool segColMapSimpleAve(IplImage * input, int x, int y);
bool segColMapRThresh(IplImage * input, int x, int y);
list<t2DPoint> tkSegment(IplImage * input);
list<t2DPoint> segFindFeatures_Ave(IplImage * input);
list<t2DPoint> segFindFeatures_RThresh(IplImage * input);
list<t2DPoint> findCircles(IplImage * input,IplImage * draw);
///////////////////////////////////////////////////////////////////////////////
// Segmentation.cpp
///////////////////////////////////////////////////////////////////////////////
#include "global.h"
#include "segmentation.h"
#include "library.h"
int thresh_r;
int thresh_g;
int thresh_b;
char RchnThresh;
extern bool colmapdone;
bool tkGenerateColMap(IplImage * input,int x,int y){
   if(segColMapSimpleAve(input,x,y)) return true;
   else return false;
}
list<t2DPoint> tkSegment(IplImage * input){
   return segFindFeatures_Ave(input);
}
/*
  This function creates a sample window around the selected point in order to
  average out the colour across this window; avoids "noisy" pixels
*/
bool segColMapSimpleAve(IplImage * input, int x, int y){
   char sample[SAMPLE_SIZE][SAMPLE_SIZE][3];
   int row,col,chan,v;
   int norm_total;
   int left = x - (SAMPLE_SIZE/2);
   int right = x + (SAMPLE_SIZE/2);
   int top = y - (SAMPLE_SIZE/2);
   int bottom = y + (SAMPLE_SIZE/2);
   /* Duplicate the bit we need to sample */
   for(row=top;row<bottom;row++){
      for(col=left;col<right;col++){
         sample[row-top][col-left][C_RED] = input->imageData[(row*input->widthStep)+(col*3)+C_RED];
         sample[row-top][col-left][C_BLUE] = input->imageData[(row*input->widthStep)+(col*3)+C_BLUE];
         sample[row-top][col-left][C_GREEN] = input->imageData[(row*input->widthStep)+(col*3)+C_GREEN];
      }
   }
   /* Now normalise the sample */
   for(row=0;row<SAMPLE_SIZE;row++){
      for(col=0;col<SAMPLE_SIZE;col++){
         norm_total = sample[row][col][C_RED] + sample[row][col][C_BLUE] +
           sample[row][col][C_GREEN];
         sample[row][col][C_RED] =
           (int)((float)sample[row][col][C_RED]/(float)norm_total*255.0);
         sample[row][col][C_BLUE] =
           (int)((float)sample[row][col][C_BLUE]/(float)norm_total*255.0);
         sample[row][col][C_GREEN] =
           (int)((float)sample[row][col][C_GREEN]/(float)norm_total*255.0);
      }
   }
   /* Now smooth it a bit */
   for(row=0;row<SAMPLE_SIZE;row++){
      for(col=0;col<SAMPLE_SIZE;col++){
         for(chan=0;chan<3;chan++){
           v = sample[row][col][chan];
           if(v>0){
              v -= 1;
              sample[row][col][chan] = v;
              if(row>0 && sample[row-1][col][chan] < v) sample[row-1][col][chan] = v;
              if(col>0 && sample[row][col-1][chan] < v) sample[row][col-1][chan] = v;
           }
         }
      }
   }
   for(row=0;row<SAMPLE_SIZE;row++){
      for(col=0;col<SAMPLE_SIZE;col++){
         for(chan=0;chan<3;chan++){
           v = sample[row][col][chan];
           if(v>0){
              v -= 1;
              sample[row][col][chan] = v;
              if(row < SAMPLE_SIZE-1 && sample[row+1][col][chan] < v)
                sample[row+1][col][chan] = v;
              if(col < SAMPLE_SIZE-1 && sample[row][col+1][chan] < v)
                sample[row][col+1][chan] = v;
           }
         }
      }
   }
   /*Now find the average*/
   thresh_r = sample[0][0][C_RED];
   thresh_b = sample[0][0][C_BLUE];
   thresh_g = sample[0][0][C_GREEN];
   for(row=0;row<SAMPLE_SIZE;row++){
      for(col=0;col<SAMPLE_SIZE;col++){
         thresh_r = (thresh_r+sample[row][col][C_RED])/2;
         thresh_b = (thresh_b+sample[row][col][C_BLUE])/2;
         thresh_g = (thresh_g+sample[row][col][C_GREEN])/2;
      }
   }
   printf("\nTHRESHHOLDS: R:%i G:%i B:%i\n",thresh_r,thresh_g,thresh_b);
   return true;
}
/*
  This function generates a binary image by passing over each of the pixels
  in the image, thresholding them on the values previously found from sampling;
*/
list<t2DPoint> segFindFeatures_Ave(IplImage * input){
   IplImage * temp;
   list<t2DPoint> features;
   int i,j;
   int norm_r=0,norm_g=0,norm_b=0;
   int low_r=0,low_g=0,low_b=0;
   int high_r=0,high_g=0,high_b=0;
   int norm_total=0,in_total=0;
   char * pixel_in = (char *)NULL;
   char * pixel_out = (char *)NULL;
   /* Clone the input image! */
   temp = cvCloneImage(input);
   removeNoise(input);
   removeNoise(input);
   /* Now work out what the normalised boundaries are from
   the specified r,g,b values */
   in_total = thresh_r+thresh_g+thresh_b;
   if(in_total!=0){
      low_r = (int)(((float)thresh_r / (float)in_total)*255.0);
      low_g = (int)(((float)thresh_g / (float)in_total)*255.0);
      low_b = (int)(((float)thresh_b / (float)in_total)*255.0);
      high_r = low_r + THRESH_BAND;
      high_g = low_g + THRESH_BAND;
      high_b = low_b + THRESH_BAND;
      low_r -= THRESH_BAND;
      low_g -= THRESH_BAND;
      low_b -= THRESH_BAND;
   }
   else {
      low_r = 0;
      low_g = 0;
      low_b = 0;
      high_r = THRESH_BAND;
      high_g = THRESH_BAND;
      high_b = THRESH_BAND;
   }
   for(i=0;i<input->height;i++){
      for(j=0;j<(input->widthStep/3);j++){
         pixel_in = &input->imageData[(i*input->widthStep)+(j*3)];
         pixel_out = &temp->imageData[(i*input->widthStep)+(j*3)];
         norm_total = (int)pixel_in[C_RED] + (int)pixel_in[C_GREEN] + (int)pixel_in[C_BLUE];
         if(norm_total!=0){
           norm_r = (int)(((float)pixel_in[C_RED] / (float)norm_total)*255.0);
           norm_g = (int)(((float)pixel_in[C_GREEN] / (float)norm_total)*255.0);
           norm_b = (int)(((float)pixel_in[C_BLUE] / (float)norm_total)*255.0);
         }
         else if(norm_total==0){
           norm_r=0;norm_g=0;norm_b=0;
         }
         if(norm_r >= low_r && norm_r <= high_r &&
           norm_g >= low_g && norm_g <= high_g &&
           norm_b >= low_b && norm_b <= high_b){
           pixel_out[C_RED] = (char)0;
           pixel_out[C_GREEN] = (char)0;
           pixel_out[C_BLUE] = (char)0;
         }
         else {
           pixel_out[C_RED] = (char)255;
           pixel_out[C_GREEN] = (char)255;
           pixel_out[C_BLUE] = (char)255;
         }
      }
   }
   features = findCircles(temp,input);
   cvReleaseImage(&temp);
   return features;
}
bool segColMapRThresh(IplImage * input, int x, int y){
   char * pixel;
   int left = x - 2;
   int right = x + 2;
   int top = y - 2;
   int bottom = y + 2;
   int row,col;
   if (input->depth == IPL_DEPTH_8U) printf ("IPL_DEPTH_8U\n");
   if (input->depth == IPL_DEPTH_8S) printf ("IPL_DEPTH_8S\n");
   if (input->depth == IPL_DEPTH_16S) printf ("IPL_DEPTH_16S\n");
   if (input->depth == IPL_DEPTH_32S) printf ("IPL_DEPTH_32S\n");
   if (input->depth == IPL_DEPTH_32F) printf ("IPL_DEPTH_32F\n");
   if (input->depth == IPL_DEPTH_64F) printf ("IPL_DEPTH_64F\n");
   for(row=top;row<bottom;row++){
      for(col=left;col<right;col++){
         pixel = &input->imageData[(input->widthStep*row)+(col*3)];
         if(col==left && row==top){
           RchnThresh = pixel[C_RED];
         }
         else{
           RchnThresh = (RchnThresh/2) + (pixel[C_RED]/2);
         }
         printf("%c",pixel[C_RED]);
      }
      printf("\n");
   }
   return true;
}
/*
  This function scans the binary image for circles. Consists of 3 main steps;
  First, the binary image is edge detected. Then the OpenCV function cvFindContours
  is used to identify the contours (edges) within the images.
  Finally, an elipse is fitted over each contour.
*/
list<t2DPoint> segFindFeatures_RThresh(IplImage * input){
   IplImage * threshed = cvCloneImage(input);
   list<t2DPoint> features;
   int i,j;
   char * pixel_in = (char *)NULL;
   char * pixel_out = (char *)NULL;
   for(i=0;i<input->height;i++){
      for(j=0;j<(input->widthStep/3);j++){
         pixel_in = &input->imageData[(i*input->widthStep)+(j*3)];
         pixel_out = &threshed->imageData[(i*threshed->widthStep)+(j*3)];
         if((pixel_in[C_RED] > (RchnThresh - 50)) &&
           (pixel_in[C_RED] < (RchnThresh + 20)) &&
           (pixel_in[C_GREEN] < (char)50) &&
           (pixel_in[C_BLUE] < (char)110)){
           pixel_out[C_RED] = (char)0;
           pixel_out[C_BLUE] = (char)0;
           pixel_out[C_GREEN] = (char)0;
         }
         else {
           pixel_out[C_RED] = (char)255;
           pixel_out[C_BLUE] = (char)255;
           pixel_out[C_GREEN] = (char)255;
         }
      }
   }
   features = findCircles(threshed,input);
   cvReleaseImage(&threshed);
   return features;
}
list<t2DPoint> findCircles(IplImage * input, IplImage * draw){
   CvMemStorage * storage;
   CvSeq * contour;
   CvBox2D * box;
   CvPoint * pointArray;
   CvPoint2D32f * pointArray32f;
   CvPoint center;
   t2DPoint result;
   float myAngle,ratio;
   int i,header_size,count,length,width;
   IplImage * gray_input = cvCreateImage(cvGetSize(input),IPL_DEPTH_8U,1);
   list<t2DPoint> markers;
   //Convert the input image to grayscale.
   cvCvtColor(input,gray_input,CV_RGB2GRAY);
   //Remove noise and smooth
   removeNoise(gray_input);
   //Edge detect the image with Canny algorithm
   cvCanny(gray_input,gray_input,25,150,3);
   //Allocate memory
   box = (CvBox2D *)malloc(sizeof(CvBox2D));
   header_size = sizeof(CvContour);
   storage = cvCreateMemStorage(1000);
   // Find all the contours in the image.
   cvFindContours(gray_input,storage,&contour,header_size,CV_RETR_EXTERNAL,CV_CHAIN_APPROX_TC89_KCOS);
   while(contour!=NULL)
   {
      if(CV_IS_SEQ_CURVE(contour))
      {
         count = contour->total;
         pointArray = (CvPoint *)malloc(count * sizeof(CvPoint));
         cvCvtSeqToArray(contour,pointArray,CV_WHOLE_SEQ);
         pointArray32f = (CvPoint2D32f *)malloc((count + 1) * sizeof(CvPoint2D32f));
         for(i=0;i<count-1;i++){
           pointArray32f[i].x = (float)(pointArray[i].x);
           pointArray32f[i].y = (float)(pointArray[i].y);
         }
         pointArray32f[i].x = (float)(pointArray[0].x);
         pointArray32f[i].y = (float)(pointArray[0].y);
         if(count>7){
           cvFitEllipse(pointArray32f,count,box);
           ratio = (float)box->size.width/(float)box->size.height;
           center.x = (int)box->center.x;
           center.y = (int)box->center.y;
           length = (int)box->size.height;
           width = (int)box->size.width;
           myAngle = box->angle;
           if((center.x>0) && (center.y>0)){
              result.x = center.x;
              result.y = center.y;
              result.size = length;
              markers.push_front(result);
              if(draw!=NULL) cvCircle(draw,center,(int)length/2,RGB(0,0,255),-1);
              /*cvEllipse(input,
              center,
              cvSize((int)width/2,(int)length/2),
              -box->angle,
              0,
              360,
              RGB(0,255,0),
              1);*/
           }
         }
         free(pointArray32f);
         free(pointArray);
      }
      contour = contour->h_next;
   }
   free(contour);
   free(box);
   cvReleaseImage(&gray_input);
   cvReleaseMemStorage(&storage);
   return markers;
}
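Once those return types are changed, the callers in main.cpp iterate the returned containers instead of walking a pNext chain. Here is a minimal, hypothetical sketch of that calling pattern; dummySegment only stands in for tkSegment so the snippet is self-contained, and the t2DPoint members shown are the ones used in findCircles above:
#include <list>
#include <cstdio>
using namespace std;

// hypothetical stand-in for the real struct, only to show the calling pattern
struct t2DPoint { int x; int y; int size; };

// dummy segmenter standing in for tkSegment(IplImage *): returns a list by value
list<t2DPoint> dummySegment()
{
   list<t2DPoint> points;
   t2DPoint p; p.x = 10; p.y = 20; p.size = 5;
   points.push_front(p);                        // same push_front as in findCircles
   return points;
}

int main()
{
   list<t2DPoint> points = dummySegment();      // no pointers, no free() needed
   if (points.empty())                          // replaces the old == NULL check
      return 0;
   for (list<t2DPoint>::iterator it = points.begin(); it != points.end(); ++it)
      printf("feature at (%d,%d) size %d\n", it->x, it->y, it->size);
   return 0;
}  // 'points' cleans itself up when it goes out of scope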
Note, I couldn't compile the code below cause the cv structures and functions were missing.
Regards, Alex
//////////////////////////
// Segmentation.h
//////////////////////////
#include <list>
using namespace std;
/* image segmentation */
/* how large an area to sample from around the click */
#define SAMPLE_SIZE 20
/* the threshhold window around the colour to be matched */
#define THRESH_BAND 60
bool tkGenerateColMap(IplImage * input,int x,int y);
bool segColMapSimpleAve(IplImag
bool segColMapRThresh(IplImage * input, int x, int y);
list<t2DPoint> tkSegment(IplImage * input);
list<t2DPoint> segFindFeatures_Ave(IplIma
list<t2DPoint> segFindFeatures_RThresh(Ip
list<t2DPoint> findCircles(IplImage * input,IplImage * draw);
//////////////////////////
// Segmentation.cpp
//////////////////////////
#include "global.h"
#include "segmentation.h"
#include "library.h"
int thresh_r;
int thresh_g;
int thresh_b;
char RchnThresh;
extern bool colmapdone;
bool tkGenerateColMap(IplImage * input,int x,int y){
   if(segColMapSimpleAve(inpu
   else return false;
}
list<t2DPoint> tkSegment(IplImage * input){
   return segFindFeatures_Ave(input)
}
/*
  This function creates a sample window around the selected point in order to
  average out the colour across this window; avoids "noisy" pixels
*/
bool segColMapSimpleAve(IplImag
   char sample[SAMPLE_SIZE][SAMPLE
   int row,col,chan,v;
   int norm_total;
   int left = x - (SAMPLE_SIZE/2);
   int right = x + (SAMPLE_SIZE/2);
   int top = y - (SAMPLE_SIZE/2);
   int bottom = y + (SAMPLE_SIZE/2);
   /* Duplicate the bit we need to sample */
   for(row=top;row<bottom;row
     for(col=left;col<right;col
        sample[row-top][col-left][
        sample[row-top][col-left][
        sample[row-top][col-left][
     }
   }
   /* Now normalise the sample */
   for(row=0;row<SAMPLE_SIZE;
     for(col=0;col<SAMPLE_SIZE;
        norm_total = sample[row][col][C_RED] + sample[row][col][C_BLUE] +
          sample[row][col][C_GREEN];
        sample[row][col][C_RED] =
          (int)((float)sample[row][c
        sample[row][col][C_BLUE] =
          (int)((float)sample[row][c
        sample[row][col][C_GREEN] =
          (int)((float)sample[row][c
     }
   }
   /* Now smooth it a bit */
   for(row=0;row<SAMPLE_SIZE;
     for(col=0;col<SAMPLE_SIZE;
        for(chan=0;chan<3;chan++){
          v = sample[row][col][chan];
          if(v>0){
             v -= 1;
             sample[row][col][chan] = v;
             if(row>0 && sample[row-1][col][chan] < v) sample[row-1][col][chan]
               = v;
             if(col>0 && sample[row][col-1][chan] < v) sample[row][col-1][chan]
               = v;
          }
        }
     }
   }
   for(row=0;row<SAMPLE_SIZE;
     for(col=0;col<SAMPLE_SIZE;
        for(chan=0;chan<3;chan++){
          v = sample[row][col][chan];
          if(v>0){
             v -= 1;
             sample[row][col][chan] = v;
             if(row < SAMPLE_SIZE-1 && sample[row+1][col][chan] < v)
               sample[row+1][col][chan] = v;
             if(col < SAMPLE_SIZE-1 && sample[row][col+1][chan] < v)
               sample[row][col+1][chan] = v;
          }
        }
     }
   }
   /*Now find the average*/
   thresh_r = sample[0][0][C_RED];
   thresh_b = sample[0][0][C_BLUE];
   thresh_g = sample[0][0][C_GREEN];
   for(row=0;row<SAMPLE_SIZE;
     for(col=0;col<SAMPLE_SIZE;
        thresh_r = (thresh_r+sample[row][col]
        thresh_b = (thresh_b+sample[row][col]
        thresh_g = (thresh_g+sample[row][col]
     }
   }
   printf("\nTHRESHHOLDS: R:%i G:%i B:%i\n",thresh_r,thresh_g,
   return true;
}
/*
  This function generates a binary image by passing over each of the pixels
  in the image, thresholding them on the values previously found from sampling;
*/
list<t2DPoint> segFindFeatures_Ave(IplIma
   IplImage * temp;
   list<t2DPoint> features;
   int i,j;
   int norm_r=0,norm_g=0,norm_b=0;
   int low_r=0,low_g=0,low_b=0;
   int high_r=0,high_g=0,high_b=0;
   int norm_total=0,in_total=0;
   char * pixel_in = (char *)NULL;
   char * pixel_out = (char *)NULL;
   /* Clone the input image! */
   temp = cvCloneImage(input);
   removeNoise(input);
   removeNoise(input);
   /* Now work out what the normalised boundaries are from
   the specified r,g,b values */
   in_total = thresh_r+thresh_g+thresh_b;
   if(in_total!=0){
   low_r = (int)(((float)thresh_r / (float)in_total)*255.0);
   low_g = (int)(((float)thresh_g / (float)in_total)*255.0);
   low_b = (int)(((float)thresh_b / (float)in_total)*255.0);
   high_r = low_r + THRESH_BAND;
   high_g = low_g + THRESH_BAND;
   high_b = low_b + THRESH_BAND;
   low_r -= THRESH_BAND;
   low_g -= THRESH_BAND;
   low_b -= THRESH_BAND;
   }
   else {
     low_r = 0;
     low_g = 0;
     low_b = 0;
     high_r = THRESH_BAND;
     high_g = THRESH_BAND;
     high_b = THRESH_BAND;
   }
   for(i=0;i<input->height;i++){
     for(j=0;j<(input->widthStep);j+=3){
        pixel_in = &input->imageData[(i*input->widthStep)+j];
        pixel_out = &temp->imageData[(i*input->widthStep)+j];
        norm_total = (int)pixel_in[C_RED] + (int)pixel_in[C_GREEN] + (int)pixel_in[C_BLUE];
        if(norm_total!=0){
          norm_r = (int)(((float)pixel_in[C_RED] / (float)norm_total)*255.0);
          norm_g = (int)(((float)pixel_in[C_GREEN] / (float)norm_total)*255.0);
          norm_b = (int)(((float)pixel_in[C_BLUE] / (float)norm_total)*255.0);
        }
        }
        else if(norm_total==0){
          norm_r=0;norm_g=0;norm_b=0;
        }
        if(norm_r >= low_r && norm_r <= high_r &&
          norm_g >= low_g && norm_g <= high_g &&
          norm_b >= low_b && norm_b <= high_b){
          pixel_out[C_RED] = (char)0;
          pixel_out[C_GREEN] = (char)0;
          pixel_out[C_BLUE] = (char)0;
        }
        else {
          pixel_out[C_RED] = (char)255;
          pixel_out[C_GREEN] = (char)255;
          pixel_out[C_BLUE] = (char)255;
        }
     }
   }
   features = findCircles(temp,input);
   cvReleaseImage(&temp);
   return features;
}
bool segColMapRThresh(IplImage * input, int x, int y){
   char * pixel;
   int left = x - 2;
   int right = x + 2;
   int top = y - 2;
   int bottom = y + 2;
   int row,col;
   if (input->depth == IPL_DEPTH_8U) printf ("IPL_DEPTH_8U\n");
   if (input->depth == IPL_DEPTH_8S) printf ("IPL_DEPTH_8S\n");
   if (input->depth == IPL_DEPTH_16S) printf ("IPL_DEPTH_16S\n");
   if (input->depth == IPL_DEPTH_32S) printf ("IPL_DEPTH_32S\n");
   if (input->depth == IPL_DEPTH_32F) printf ("IPL_DEPTH_32F\n");
   if (input->depth == IPL_DEPTH_64F) printf ("IPL_DEPTH_64F\n");
   for(row=top;row<bottom;row++){
     for(col=left;col<right;col++){
        pixel = &input->imageData[(input->widthStep*row)+(col*3)];
        if(col==left && row==top){
          RchnThresh = pixel[C_RED];
        }
        else{
          RchnThresh = (RchnThresh/2) + (pixel[C_RED]/2);
        }
        printf("%c",pixel[C_RED]);
     }
     printf("\n");
   }
   return true;
}
/*
  This function scans the binary image for circles. Consists of 3 main steps;
  First, the binary image is edge detected. Then the OpenCV function cvFindContours
  is used to identify the contours (edges) within the images.
  Finally, an elipse is fitted over each contour.
*/
list<t2DPoint> segFindFeatures_RThresh(IplImage * input){
   IplImage * threshed = cvCloneImage(input);
   list<t2DPoint> features;
   int i,j;
   char * pixel_in = (char *)NULL;
   char * pixel_out = (char *)NULL;
   for(i=0;i<input->height;i++){
     for(j=0;j<(input->widthStep);j+=3){
        pixel_in = &input->imageData[(i*input->widthStep)+j];
        pixel_out = &threshed->imageData[(i*threshed->widthStep)+j];
        if((pixel_in[C_RED] > (RchnThresh - 50)) &&
          (pixel_in[C_RED] < (RchnThresh + 20)) &&
          (pixel_in[C_GREEN] < (char)50) &&
          (pixel_in[C_BLUE] < (char)110)){
          pixel_out[C_RED] = (char)0;
          pixel_out[C_BLUE] = (char)0;
          pixel_out[C_GREEN] = (char)0;
        }
        else {
          pixel_out[C_RED] = (char)255;
          pixel_out[C_BLUE] = (char)255;
          pixel_out[C_GREEN] = (char)255;
        }
     }
   }
   features = findCircles(threshed,input);
   cvReleaseImage(&threshed);
   return features;
}
list<t2DPoint> findCircles(IplImage * input, IplImage * draw){
   CvMemStorage * storage;
   CvSeq * contour;
   CvBox2D * box;
   CvPoint * pointArray;
   CvPoint2D32f * pointArray32f;
   CvPoint center;
   t2DPoint result;
   float myAngle,ratio;
   int i,header_size,count,length,width;
   IplImage * gray_input = cvCreateImage(cvGetSize(input),IPL_DEPTH_8U,1);
   list<t2DPoint> markers;
   //Convert the input image to grayscale.
   cvCvtColor(input,gray_input,CV_BGR2GRAY);
   //Remove noise and smooth
   removeNoise(gray_input);
   //Edge detect the image with Canny algorithm
   cvCanny(gray_input,gray_input,50,150,3);  /* Canny thresholds: representative values */
   //Allocate memory
   box = (CvBox2D *)malloc(sizeof(CvBox2D));
   header_size = sizeof(CvContour);
   storage = cvCreateMemStorage(1000);
   // Find all the contours in the image.
   cvFindContours(gray_input,storage,&contour,header_size,CV_RETR_LIST,CV_CHAIN_APPROX_SIMPLE);
   while(contour!=NULL)
   {
     if(CV_IS_SEQ_CURVE(contour))
     {
        count = contour->total;
        pointArray = (CvPoint *)malloc(count * sizeof(CvPoint));
        cvCvtSeqToArray(contour,pointArray,CV_WHOLE_SEQ);
        pointArray32f = (CvPoint2D32f *)malloc((count + 1) * sizeof(CvPoint2D32f));
        for(i=0;i<count-1;i++){
          pointArray32f[i].x = (float)(pointArray[i].x);
          pointArray32f[i].y = (float)(pointArray[i].y);
        }
        pointArray32f[i].x = (float)(pointArray[0].x);
        pointArray32f[i].y = (float)(pointArray[0].y);
        if(count>7){
          cvFitEllipse(pointArray32f,count,box);
          ratio = (float)box->size.width/(float)box->size.height;
          center.x = (int)box->center.x;
          center.y = (int)box->center.y;
          length = (int)box->size.height;
          width = (int)box->size.width;
          myAngle = box->angle;
          if((center.x>0) && (center.y>0)){
             result.x = center.x;
             result.y = center.y;
             result.size = length;
             markers.push_front(result);
             if(draw!=NULL) cvCircle(draw,center,(int)(length/2),CV_RGB(0,255,0),1);
             /*cvEllipse(input,
             center,
             cvSize((int)width/2,(int)length/2),
             -box->angle,
             0,
             360,
             RGB(0,255,0),
             1);*/
          }
        }
        free(pointArray32f);
        free(pointArray);
     }
     contour = contour->h_next;
   }
   free(contour);
   free(box);
   cvReleaseImage(&gray_input);
   cvReleaseMemStorage(&storage);
   return markers;
}
ASKER CERTIFIED SOLUTION
ASKER
I'm sure I adjusted the code as required, but I get these errors:
segmentation.cpp
C:\My Documents\dissertation\segmentation.cpp(127) : error C2143: syntax error : missing ',' before '<'
C:\My Documents\dissertation\segmentation.cpp(127) : error C2059: syntax error : '<'
C:\My Documents\dissertation\segmentation.cpp(139) : error C2065: 'input' : undeclared identifier
C:\My Documents\dissertation\segmentation.cpp(167) : error C2227: left of '->height' must point to class/struct/union
C:\My Documents\dissertation\segmentation.cpp(168) : error C2227: left of '->widthStep' must point to class/struct/union
C:\My Documents\dissertation\segmentation.cpp(169) : error C2227: left of '->imageData' must point to class/struct/union
C:\My Documents\dissertation\segmentation.cpp(169) : error C2227: left of '->widthStep' must point to class/struct/union
C:\My Documents\dissertation\segmentation.cpp(170) : error C2227: left of '->widthStep' must point to class/struct/union
C:\My Documents\dissertation\segmentation.cpp(197) : error C2065: 'features' : undeclared identifier
C:\My Documents\dissertation\segmentation.cpp(270) : error C2660: 'findCircles' : function does not take 2 parameters
C:\My Documents\dissertation\segmentation.cpp(357) : error C2143: syntax error : missing ';' before '}'
C:\My Documents\dissertation\segmentation.cpp(357) : error C2143: syntax error : missing ';' before '}'
C:\My Documents\dissertation\segmentation.cpp(357) : error C2143: syntax error : missing ';' before '}'
Error executing cl.exe.
segmentation.obj - 13 error(s), 0 warning(s)
Seems as if some header includes are missing:
You need the headers where IplImage and t2DPoint were defined.
You need the <list> header and using namespace std.
You should post the latest versions of your header files as I couldn't compile myself.
>>>> syntax error : missing ',' before '<'
I assume it's the line
   list<t2DPoint> segFindFeatures_Ave(IplImage * input){
The error occurs because global.h wasn't included or because <list> wasn't included (see the include sketch after this list of errors).
>>>> error C2065: 'input' : undeclared identifier
Seems as if the IplImage header wasn't included. Search all header files for struct IplImage and include the one that defines it.
>>>> left of '->height' must point to class/struct/union
>>>> left of '->widthStep' must point to class/struct/union
>>>> left of '->imageData' must point to class/struct/union
>>>> left of '->widthStep' must point to class/struct/union
>>>> left of '->widthStep' must point to class/struct/union
All of these also need the IplImage header.
>>>> 'features' : undeclared identifier
If line 128 compiles, this declaration would compile too:
128:    list<t2DPoint> features;
>>>> 'findCircles' : function does not take 2 parameters
The findCircles function from above has 2 parameters. Check that all header files have the same declaration.
>>>> syntax error : missing ';' before '}'
The last '}' must be removed.
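As a rough sketch of what I mean about the includes (the exact header names here are my assumption, since I can't see your project; adjust them to whatever defines IplImage and t2DPoint for you), the top of segmentation.h could look like this:

// segmentation.h -- sketch only; header names are assumed, not taken from your project
#ifndef SEGMENTATION_H
#define SEGMENTATION_H

#include <cv.h>          // OpenCV 1.x header: IplImage, cvCloneImage, cvCanny, ...
#include <list>          // std::list
#include "global.h"      // assumed to define t2DPoint, C_RED, C_GREEN, C_BLUE

using std::list;         // or: using namespace std;

bool tkGenerateColMap(IplImage * input, int x, int y);
list<t2DPoint> tkSegment(IplImage * input);
list<t2DPoint> segFindFeatures_Ave(IplImage * input);
list<t2DPoint> segFindFeatures_RThresh(IplImage * input);
list<t2DPoint> findCircles(IplImage * input, IplImage * draw);

#endif

With the header self-contained like this, segmentation.cpp only needs to include "segmentation.h" at the top, and the findCircles declaration there will match the 2-parameter definition, which should also clear the C2660.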
ukjm2k, none of these errors is very difficult to find and solve if you have some experience with Visual C++. If this is your very first program, though, you may struggle, because the code will still need debugging and changes after it compiles. As I don't have the camera library and code, I couldn't help you with that part.
Regards, Alex
I do not know if this is right, but my feeling is that I would try it this way:
          while(pTempMatch!=NULL){
             tkReconstruct(pTempMatch,out_text);
             if (pTempMatch->pNext == NULL)
               break;
             pMatches = pTempMatch->pNext;
             free(pTempMatch);
             pTempMatch = pMatches;
          }
I hope this could help
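Just to add a sketch of the conventional pattern for freeing a linked list while walking it (assuming the tCFeature struct and tkReconstruct function from the question): the next pointer is saved before the current node is freed, so no freed node is ever dereferenced. If the crash still points at pNext, it usually means the last node's pNext was never set to NULL when the list was built (worth checking in tkCorrespond), or that the nodes weren't allocated with malloc in the first place.

struct tCFeature * pCur  = pMatches;   /* walk from the head of the list        */
struct tCFeature * pNext = NULL;

while (pCur != NULL) {
    tkReconstruct(pCur, out_text);     /* use the node while it is still valid  */
    pNext = pCur->pNext;               /* remember the next node first...       */
    free(pCur);                        /* ...then release the current one       */
    pCur = pNext;
}
pMatches = NULL;                       /* whole list released; avoid dangling   */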