
Solved

Segmentation fault - Message Passing Interface (MPI)

Posted on 2010-01-11
25
Medium Priority
645 Views
Last Modified: 2012-08-13
Hello,

I have a serious problem with my program: I constantly get a segmentation fault error.
Is there any expert with knowledge of MPI who could possibly help me?

Thanks in advance!
Question by:unknown_
25 Comments
 
LVL 46

Expert Comment

by:Kent Olsen
ID: 26282659
Hi unknown,

Can you post more of the error description?


Kent
 

Author Comment

by:unknown_
ID: 26282679
That's the error which I constantly receive:

Process received signal
 Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
End of error message
 
LVL 46

Expert Comment

by:Kent Olsen
ID: 26282693
Hi unknown,

Is this a complete vendor-provided package, or are you writing a program that calls the vendor's functions?


 

Author Comment

by:unknown_
ID: 26282733
It is provided [if I understood your question correctly].
 
LVL 46

Expert Comment

by:Kent Olsen
ID: 26282752
Since you don't have source code to any of it, there's not much that you can do, and even less that we can do to offer assistance.

You should contact the vendor and tell them that their program is crashing.


Kent
 

Author Comment

by:unknown_
ID: 26282767
I have the code, sorry, I misinterpreted your question.
 
LVL 53

Expert Comment

by:Infinity08
ID: 26282778
Unless I'm mistaken, unknown_ DOES have the code ... right?

If so, can you please post your code, and indicate where exactly the segmentation fault occurs? Use your debugger to find out, or alternatively add some logging output.
 
LVL 46

Expert Comment

by:Kent Olsen
ID: 26282812
Ok, then...

Taking a look at the error messages:

Process received signal
 Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)

Note the last line.  The failing address is (nil), which is 0.  That suggests that the program either branched to address 0 or dereferenced a NULL pointer.

If the entire program is written in C, one likely cause is a function pointer that was never set.  A function pointer is a pointer that can hold the address of any function in the program; the function is then called through the pointer rather than by a "normal" function call.

Check the code to see if it's using function pointers.  That seems like a very good place to start.


Kent


 
 

Author Comment

by:unknown_
ID: 26282843
That's the main function of the code, where MPI is set up.
Please have a look!

Thanks!
int main(int argc, char *argv[]) {
		
	int id;
	int r,c;
	int ierr;
	int rc;
	int i;
	int p;
	int tag;
	int z;
	
	MPI_Request request;
	
	MPI_Status status;
		
	ierr = MPI_Init(&argc, &argv);
	if (ierr != MPI_SUCCESS) {
		printf ("Error starting MPI program\n"); MPI_Abort(MPI_COMM_WORLD, ierr);
	}
	
	ierr = MPI_Comm_size(MPI_COMM_WORLD, &p);
	
	ierr = MPI_Comm_rank(MPI_COMM_WORLD, &id);
	
	MPI_Errhandler_set(MPI_COMM_WORLD,MPI_ERRORS_RETURN); 
	MPI_Barrier(MPI_COMM_WORLD);
	
	if (id == 0) {
				
		printf("varA: ");
		
		scanf("%d", &varA); 
		
		printf("varB: ");
		
		scanf("%f", &varB);
				
		for (tag=1; tag < p; tag++){
			
			MPI_Send(&varA, 1, MPI_INT, tag, 10, MPI_COMM_WORLD);
			MPI_Send(&varB, 1, MPI_FLOAT, tag, 20, MPI_COMM_WORLD);
		}
				
		const int row = varA;
		
		const int column = varA;
		
		double **matrix = (double **)calloc(row,sizeof(double *));
		for(i = 0; i < column; ++i)
			matrix[i] = (double *)calloc(column,sizeof(double));
		
		
		srand(time(0)); 		
		for (r = 0; r < row; r++)
			
		{
			
			for (c = 0; c < column; c++)
				
			{
				
				matrix[r][c] = (rand() % 100) + 1;
				
			}
			
		}
		
		
		for (r = 0; r < row; r++) {
			
			for (c = 0; c < column; c++) {
				
				printf("%3.2f\t", matrix[r][c]);
			}
			printf("\n");
		}
		
		
		for (tag=1; tag < p ; tag++){
			for (r=0; r<varA; r++) {
				rc = MPI_Isend(matrix[r], varA, MPI_DOUBLE, tag, 50, MPI_COMM_WORLD, &request); 
				if (rc != MPI_SUCCESS) {
					printf("error\n");
					exit(1);
				}
			}
		}
		
	}
	
	rc = MPI_Barrier(MPI_COMM_WORLD);
	if (rc != MPI_SUCCESS) {
		printf("error\n");
		exit(1);
	}
	
	
	if(id>0){
				
		rc = MPI_Recv(&varA, 1, MPI_INT, 0, 10, MPI_COMM_WORLD, &status);
		if (rc != MPI_SUCCESS) {
			printf("error\n");
			exit(1);
		}
		//MPI_ANY_TAG
		rc = MPI_Recv(&varB, 1, MPI_FLOAT, 0, 20, MPI_COMM_WORLD, &status);
		if (rc != MPI_SUCCESS) {
			printf("error\n");
			exit(1);
		}
		
		
		for(z = 0; z < varA; z++) {
			rc = MPI_Irecv(matrix[z], varA, MPI_DOUBLE, 0, 50, MPI_COMM_WORLD, &request);
			if (rc != MPI_SUCCESS) {
				exit(1);
			}
		}
		
	}
	
	
	MPI_Barrier(MPI_COMM_WORLD);
	function();
	
	if (id == 0) {
		
		printf("\n");
		
		for (r = 0; r < varA; r++) {
			
			for (c = 0; c < varA; c++){
				
				printf("%3.2f\t", matrix[r][c]);
				
				printf("\n");
				
			}			
		}
		
	}
	
	ierr = MPI_Finalize();
	
	return 0;
	
}

 
LVL 46

Expert Comment

by:Kent Olsen
ID: 26282873

Hi unknown,

I'd sure like to know how the item *function* is defined and set.  


 

Author Comment

by:unknown_
ID: 26282898

void function() {
	
	int r;
	
	int c;
	
	inline double avg(double a, double b, double c, double d) {
		
		return ((a + b + c + d) / 4.0);
		
	}
	
	double temp[4];
	
	bool iterate = true;
	
	while (iterate) {
		
		iterate = false;
		
		for (r = 1; r < varA - 1; r++) {
			for (c = 1; c < varA - 1; c++) {
				
				temp[0] = matrix[r - 1][c];
				
				temp[1] = matrix[r][c - 1];
				
				temp[2] = matrix[r][c + 1];
				
				temp[3] = matrix[r + 1][c];
				
				double value = avg(temp[0], temp[1], temp[2], temp[3]);
								
				if (fabs(value - matrix[r][c]) > precision) {
					
					iterate = true;
					
				}
				
				matrix[r][c] = value;
				
			}
			
		}
			
	}
	
}

 
LVL 46

Expert Comment

by:Kent Olsen
ID: 26282972
Let's break this down and find the offending section...

Try using the code below as your main function.  That should give us a good idea where to look.


Kent

int main(int argc, char *argv[]) { 
                 
        int id; 
        int r,c; 
        int ierr; 
        int rc; 
        int i; 
        int p; 
        int tag; 
        int z; 
         
        MPI_Request request; 
         
        MPI_Status status; 

        fprintf (stderr, "start\n");                 

        ierr = MPI_Init(&argc, &argv); 
        if (ierr != MPI_SUCCESS) { 
                printf ("Error starting MPI program\n"); MPI_Abort(MPI_COMM_WORLD, ierr); 
        } 

        fprintf (stderr, "MPI_Comm_size\n");          
        ierr = MPI_Comm_size(MPI_COMM_WORLD, &p); 
         
        fprintf (stderr, "MPI_Comm_rank\n"); 
        ierr = MPI_Comm_rank(MPI_COMM_WORLD, &id); 
         
        fprintf (stderr, "MPI_Errhandler_set\n"); 
        MPI_Errhandler_set(MPI_COMM_WORLD,MPI_ERRORS_RETURN);  
        fprintf (stderr, "MPI_Varrier\n"); 
        MPI_Barrier(MPI_COMM_WORLD); 
         
        if (id == 0) { 
                                 
                printf("varA: "); 
                 
                scanf("%d", &varA);  
                 
                printf("varB: "); 
                 
                scanf("%f", &varB); 
                                 
                for (tag=1; tag < p; tag++){ 
                        fprintf (stderr, "MPI_Send\n");                          
                        MPI_Send(&varA, 1, MPI_INT, tag, 10, MPI_COMM_WORLD); 
                        MPI_Send(&varB, 1, MPI_FLOAT, tag, 20, MPI_COMM_WORLD); 
                } 
                                 
                const int row = varA; 
                 
                const int column = varA; 
                 
                double **matrix = (double **)calloc(row,sizeof(double *)); 
                for(i = 0; i < column; ++i) 
                        matrix[i] = (double *)calloc(column,sizeof(double)); 
                 
                 
                srand(time(0));                  
                for (r = 0; r < row; r++) 
                         
                { 
                         
                        for (c = 0; c < column; c++) 
                                 
                        { 
                                 
                                matrix[r][c] = (rand() % 100) + 1; 
                                 
                        } 
                         
                } 
                 
                 
                for (r = 0; r < row; r++) { 
                         
                        for (c = 0; c < column; c++) { 
                                 
                                printf("%3.2f\t", matrix[r][c]); 
                        } 
                        printf("\n"); 
                } 
                 
                 
                for (tag=1; tag < p ; tag++){ 
                        for (r=0; r<varA; r++) { 
                                fprintf (stderr, "MPI_Isend\n"); 
                                rc = MPI_Isend(matrix[r], varA, MPI_DOUBLE, tag, 50, MPI_COMM_WORLD, &request);  
                                if (rc != MPI_SUCCESS) { 
                                        printf("error\n"); 
                                        exit(1); 
                                } 
                        } 
                } 
                 
        } 

        fprintf (stderr, "MPI_Barrier (2)\n");          
        rc = MPI_Barrier(MPI_COMM_WORLD); 
        if (rc != MPI_SUCCESS) { 
                printf("error\n"); 
                exit(1); 
        } 
         
         
        if(id>0){ 
                                 
                fprintf (stderr, "MPI_Recv\n"); 
                rc = MPI_Recv(&varA, 1, MPI_INT, 0, 10, MPI_COMM_WORLD, &status); 
                if (rc != MPI_SUCCESS) { 
                        printf("error\n"); 
                        exit(1); 
                } 
                //MPI_ANY_TAG 
                fprintf (stderr, "MPI_Recv (2)\n"); 
                rc = MPI_Recv(&varB, 1, MPI_FLOAT, 0, 20, MPI_COMM_WORLD, &status); 
                if (rc != MPI_SUCCESS) { 
                        printf("error\n"); 
                        exit(1); 
                } 
                 
                 
                for(z = 0; z < varA; z++) { 
                        fprintf (stderr, "MPI_IRecv\n"); 
                        rc = MPI_Irecv(matrix[z], varA, MPI_DOUBLE, 0, 50, MPI_COMM_WORLD, &request); 
                        if (rc != MPI_SUCCESS) { 
                                exit(1); 
                        } 
                } 
                 
        } 
         

        fprintf (stderr, "MPI_Barrier (3)\n"); 
        MPI_Barrier(MPI_COMM_WORLD); 
        function(); 
         
        if (id == 0) { 
                 
                printf("\n"); 
                 
                for (r = 0; r < varA; r++) { 
                         
                        for (c = 0; c < varA; c++){ 
                                 
                                printf("%3.2f\t", matrix[r][c]); 
                                 
                                printf("\n"); 
                                 
                        }                        
                } 
                 
        } 
        fprintf (stderr, "MPI_Finalize\n"); 
        ierr = MPI_Finalize(); 
         
        return 0; 
         
}

 

Author Comment

by:unknown_
ID: 26283093
That's the output that I get:
start
start
start
start
start
start
start
start
start
start
start
start
start
start
start
start
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
5 0.5
MPI_Send
MPI_Send
MPI_Send
MPI_Send
MPI_Send
MPI_Send
MPI_Send
MPI_Send
MPI_Send
MPI_Send
MPI_Send
MPI_Send
MPI_Send
MPI_Send
MPI_Send
varA: varB: 75.00	56.00	47.00	44.00	47.00	
18.00	63.00	77.00	4.00	41.00	
13.00	30.00	69.00	5.00	81.00	
56.00	26.00	37.00	58.00	64.00	
95.00	89.00	76.00	62.00	46.00	
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Barrier (2)
MPI_Recv
MPI_Recv (2)
MPI_IRecv
MPI_Recv

Process received signal 
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
MPI_Recv (2)
MPI_IRecv
Process received signal 
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
MPI_Barrier (3)
MPI_Recv
MPI_Recv (2)
MPI_IRecv
Process received signal 
End of error message
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
MPI_Recv
MPI_Recv (2)
MPI_IRecv
Process received signal 
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
MPI_Recv
MPI_Recv (2)
MPI_IRecv
Process received signal 
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
End of error message
MPI_Recv
MPI_Recv (2)
MPI_IRecv
Process received signal 
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
End of error message 
End of error message
MPI_Recv
MPI_Recv (2)
MPI_IRecv
Process received signal 
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
End of error message 
End of error message 
End of error message 
MPI_Recv
MPI_Recv (2)
MPI_IRecv
Process received signal 
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)MPI_Recv
MPI_Recv (2)
MPI_IRecv
Process received signal 
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
MPI_Recv
MPI_Recv (2)
MPI_IRecv
Process received signal 
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)

 
LVL 46

Expert Comment

by:Kent Olsen
ID: 26283177

This is very strange.  The implication is that MPI_Init() is recalling main() (repeatedly) for the line 'start' to be displayed 16 times.

Do you have the source code for MPI_Init?

 

Expert Comment

by:Agentus
ID: 26283188
If you are working under Linux, you should do the following:
1) run this
@> ulimit -c unlimited
2) run your code and let it crash
3) in the local directory you will find a file core.<some number>
4) run
@> gdb <your executable name> core.<some number>
5) post the result
 

Author Comment

by:unknown_
ID: 26283194
Do you mean this?
       
      ierr = MPI_Init(&argc, &argv);
      if (ierr != MPI_SUCCESS) {
            printf ("Error starting MPI program\n"); MPI_Abort(MPI_COMM_WORLD, ierr);
      }
 
LVL 46

Expert Comment

by:Kent Olsen
ID: 26283229

I'm looking for the code for MPI_Init(), just like you posted the code for function() earlier.


 

Author Comment

by:unknown_
ID: 26283257
I don't have any other source code :S
 
LVL 46

Accepted Solution

by:
Kent Olsen earned 2000 total points
ID: 26283455
Let's test one more thing....

Try the main() function below.

int main(int argc, char *argv[]) { 
                 
        int id; 
        int r,c; 
        int ierr; 
        int rc; 
        int i; 
        int p; 
        int tag; 
        int z; 
         
        MPI_Request request; 
         
        MPI_Status status; 

        fprintf (stderr, "start\n");                 
        fprintf (stderr, "MPI_Init\n");                 
        ierr = MPI_Init(&argc, &argv); 
        if (ierr != MPI_SUCCESS) {
                fprintf (stderr, "Init failed\n");                 
                printf ("Error starting MPI program\n"); 
                fprintf (stderr, "MPI_Abort\n");                 
                MPI_Abort(MPI_COMM_WORLD, ierr); 
        } 

        fprintf (stderr, "MPI_Comm_size\n");          
        ierr = MPI_Comm_size(MPI_COMM_WORLD, &p); 
         
        fprintf (stderr, "MPI_Comm_rank\n"); 
        ierr = MPI_Comm_rank(MPI_COMM_WORLD, &id); 
         
        fprintf (stderr, "MPI_Errhandler_set\n"); 
        MPI_Errhandler_set(MPI_COMM_WORLD,MPI_ERRORS_RETURN);  
        fprintf (stderr, "MPI_Varrier\n"); 
        MPI_Barrier(MPI_COMM_WORLD); 
         
        if (id == 0) { 
                                 
                printf("varA: "); 
                 
                scanf("%d", &varA);  
                 
                printf("varB: "); 
                 
                scanf("%f", &varB); 
                                 
                for (tag=1; tag < p; tag++){ 
                        fprintf (stderr, "MPI_Send\n");                          
                        MPI_Send(&varA, 1, MPI_INT, tag, 10, MPI_COMM_WORLD); 
                        MPI_Send(&varB, 1, MPI_FLOAT, tag, 20, MPI_COMM_WORLD); 
                } 
                                 
                const int row = varA; 
                 
                const int column = varA; 
                 
                double **matrix = (double **)calloc(row,sizeof(double *)); 
                for(i = 0; i < column; ++i) 
                        matrix[i] = (double *)calloc(column,sizeof(double)); 
                 
                 
                srand(time(0));                  
                for (r = 0; r < row; r++) 
                         
                { 
                         
                        for (c = 0; c < column; c++) 
                                 
                        { 
                                 
                                matrix[r][c] = (rand() % 100) + 1; 
                                 
                        } 
                         
                } 
                 
                 
                for (r = 0; r < row; r++) { 
                         
                        for (c = 0; c < column; c++) { 
                                 
                                printf("%3.2f\t", matrix[r][c]); 
                        } 
                        printf("\n"); 
                } 
                 
                 
                for (tag=1; tag < p ; tag++){ 
                        for (r=0; r<varA; r++) { 
                                fprintf (stderr, "MPI_Isend\n"); 
                                rc = MPI_Isend(matrix[r], varA, MPI_DOUBLE, tag, 50, MPI_COMM_WORLD, &request);  
                                if (rc != MPI_SUCCESS) { 
                                        printf("error\n"); 
                                        exit(1); 
                                } 
                        } 
                } 
                 
        } 

        fprintf (stderr, "MPI_Barrier (2)\n");          
        rc = MPI_Barrier(MPI_COMM_WORLD); 
        if (rc != MPI_SUCCESS) { 
                printf("error\n"); 
                exit(1); 
        } 
         
         
        if(id>0){ 
                                 
                fprintf (stderr, "MPI_Recv\n"); 
                rc = MPI_Recv(&varA, 1, MPI_INT, 0, 10, MPI_COMM_WORLD, &status); 
                if (rc != MPI_SUCCESS) { 
                        printf("error\n"); 
                        exit(1); 
                } 
                //MPI_ANY_TAG 
                fprintf (stderr, "MPI_Recv (2)\n"); 
                rc = MPI_Recv(&varB, 1, MPI_FLOAT, 0, 20, MPI_COMM_WORLD, &status); 
                if (rc != MPI_SUCCESS) { 
                        printf("error\n"); 
                        exit(1); 
                } 
                 
                 
                for(z = 0; z < varA; z++) { 
                        fprintf (stderr, "MPI_IRecv\n"); 
                        rc = MPI_Irecv(matrix[z], varA, MPI_DOUBLE, 0, 50, MPI_COMM_WORLD, &request); 
                        if (rc != MPI_SUCCESS) { 
                                exit(1); 
                        } 
                } 
                 
        } 
         

        fprintf (stderr, "MPI_Barrier (3)\n"); 
        MPI_Barrier(MPI_COMM_WORLD); 
        function(); 
         
        if (id == 0) { 
                 
                printf("\n"); 
                 
                for (r = 0; r < varA; r++) { 
                         
                        for (c = 0; c < varA; c++){ 
                                 
                                printf("%3.2f\t", matrix[r][c]); 
                                 
                                printf("\n"); 
                                 
                        }                        
                } 
                 
        } 
        fprintf (stderr, "MPI_Finalize\n"); 
        ierr = MPI_Finalize(); 
         
        return 0; 
         
}

 

Author Comment

by:unknown_
ID: 26283551

start
MPI_Init
start
MPI_Init
start
MPI_Init
start
MPI_Init
start
MPI_Init
start
MPI_Init
start
MPI_Init
start
MPI_Init
start
MPI_Init
start
MPI_Init
start
MPI_Init
start
MPI_Init
start
MPI_Init
start
MPI_Init
start
MPI_Init
start
MPI_Init
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Varrier
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
MPI_Barrier (2)
5 0.5
MPI_Send
MPI_Send
MPI_Send
MPI_Send
MPI_Send
MPI_Send
MPI_Send
MPI_Send
MPI_Send
MPI_Send
MPI_Send
MPI_Send
MPI_Send
MPI_Send
MPI_Send
varA: varB: 31.00	64.00	99.00	37.00	54.00	
33.00	56.00	80.00	83.00	27.00	
23.00	86.00	43.00	49.00	28.00	
39.00	47.00	87.00	63.00	30.00	
81.00	19.00	68.00	47.00	28.00	
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Isend
MPI_Barrier (2)
MPI_Barrier (3)
MPI_Recv
MPI_Recv (2)
MPI_IRecv
Process received signal 
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
MPI_Recv
MPI_Recv (2)
MPI_IRecv
MPI_Recv
MPI_Recv (2)
MPI_IRecv
MPI_Recv
MPI_Recv (2)
MPI_IRecv
Process received signal 
Process received signal 
Process received signal 
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
Process received signal 
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
MPI_Recv
MPI_Recv (2)
MPI_IRecv
Process received signal 
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
End of error message
MPI_Recv
End of error message
End of error message
End of error message
End of error message
MPI_Recv
MPI_Recv (2)
MPI_IRecv
MPI_Recv
MPI_Recv (2)
MPI_IRecv
Process received signal 
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
MPI_Recv
MPI_Recv (2)
MPI_IRecv
Process received signal 
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
Process received signal 
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
MPI_Recv (2)
MPI_Recv
MPI_Recv (2)
MPI_IRecv
End of error message
MPI_Recv
MPI_Recv (2)
MPI_IRecv
Process received signal 
End of error message
End of error message
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
Process received signal 
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
End of error message
MPI_IRecv
MPI_Recv
MPI_Recv (2)
MPI_IRecv
Process received signal 
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
End of error message
MPI_Recv
MPI_Recv (2)
MPI_IRecv
Process received signal 
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
End of error message
End of error message
Process received signal 
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
End of error message
MPI_Recv
MPI_Recv (2)
MPI_IRecv
MPI_Recv
MPI_Recv (2)
MPI_IRecv
Process received signal 
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
Process received signal 
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
End of error message
End of error message

 

Expert Comment

by:Agentus
ID: 26283587
Are you running Linux?
 

Author Comment

by:unknown_
ID: 26283642
Yes, I tried what you said, but it doesn't let me do the last step: @> gdb <your executable name> core.<some number>
 
LVL 46

Expert Comment

by:Kent Olsen
ID: 26283730

Do you have any programming documentation?  

main() is calling MPI_Init(), which appears to be calling main().  This is a very unusual protocol, particularly as program initialization.


Kent
 

Expert Comment

by:Agentus
ID: 26283785
Ok,
install Valgrind....
http://valgrind.org/downloads/current.html#current

Then run the following:
valgrind --num-callers=20 --tool=memcheck -v --log-file=result --leak-check=yes <your executable>

It will create a "result" file; post its contents.
 

Author Comment

by:unknown_
ID: 26283806
I don't have any documentation :S
Question has a verified solution.
