Segmentation fault - Message Passing Interface (MPI)

Hello,

I have a serious problem with my program: I constantly get a segmentation fault error.
Is there an expert with MPI knowledge who could possibly help me?

Thanks in advance!
unknown_ asked:

Kent Olsen (Data Warehouse Architect / DBA) commented:
Hi unknown,

Can you post more of the error description?


Kent
unknown_ (Author) commented:
This is the error I constantly receive:

Process received signal
 Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
End of error message
Kent Olsen (Data Warehouse Architect / DBA) commented:
Hi unknown,

Is this a complete vendor-provided package, or are you writing a program that calls the vendor's functions?


unknown_ (Author) commented:
It is provided [if I understood your question correctly].
Kent Olsen (Data Warehouse Architect / DBA) commented:
Since you don't have source code for any of it, there's not much that you can do, and even less that we can do to offer assistance.

You should contact the vendor and tell them that their program is crashing.


Kent
unknown_ (Author) commented:
I have the code, sorry; I misinterpreted your question.
Infinity08 commented:
Unless I'm mistaken, unknown_ DOES have the code ... right?

If so, can you please post your code and indicate exactly where the segmentation fault occurs? Use your debugger to find out, or alternatively add some logging output.
Kent Olsen (Data Warehouse Architect / DBA) commented:
Ok, then...

Taking a look at the error messages:

Process received signal
 Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)

Note the last line.  The failing address is (nil), which is 0.  That suggests that the program has branched to that address.  (Had the program tried to illegally reference that address, the message would be different.)

If the entire program is written in C, the most likely cause is a function pointer that was never set.  A function pointer is a pointer that can hold the address of any function in the program; the function is then called through the pointer rather than through a "normal" function call.

Check the code to see if it's using function pointers.  That seems like a very good place to start.


Kent


 
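As an illustration of the scenario described above: a minimal, hypothetical sketch (not taken from the poster's program) of how an unset function pointer produces exactly this "Failing at address: (nil)" signature:

typedef void (*callback_t)(void);   /* a pointer-to-function type */

static callback_t handler;          /* file scope, so it starts out as NULL */

int main(void)
{
    /* handler is never assigned a real function ... */
    handler();   /* ... so the call branches to address 0: SIGSEGV with faulting address (nil) */
    return 0;
}

Running such a program under a debugger shows the crash with a program counter of 0, matching the "(nil)" in the report above.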
unknown_ (Author) commented:
This is the main function of the code, where the MPI calls are made.
Please have a look if you can!

Thanks!
int main(int argc, char *argv[]) {
		
	int id;
	int r,c;
	int ierr;
	int rc;
	int i;
	int p;
	int tag;
	int z;
	
	MPI_Request request;
	
	MPI_Status status;
		
	ierr = MPI_Init(&argc, &argv);
	if (ierr != MPI_SUCCESS) {
		printf ("Error starting MPI program\n"); MPI_Abort(MPI_COMM_WORLD, ierr);
	}
	
	ierr = MPI_Comm_size(MPI_COMM_WORLD, &p);
	
	ierr = MPI_Comm_rank(MPI_COMM_WORLD, &id);
	
	MPI_Errhandler_set(MPI_COMM_WORLD,MPI_ERRORS_RETURN); 
	MPI_Barrier(MPI_COMM_WORLD);
	
	if (id == 0) {	/* rank 0: read inputs, build the matrix, distribute it to the workers */
				
		printf("varA: ");
		
		scanf("%d", &varA); 
		
		printf("varB: ");
		
		scanf("%f", &varB);
				
		for (tag=1; tag < p; tag++){
			
			MPI_Send(&varA, 1, MPI_INT, tag, 10, MPI_COMM_WORLD);
			MPI_Send(&varB, 1, MPI_FLOAT, tag, 20, MPI_COMM_WORLD);
		}
				
		const int row = varA;
		
		const int column = varA;
		
		double **matrix = (double **)calloc(row,sizeof(double *));
		for(i = 0; i < column; ++i)
			matrix[i] = (double *)calloc(column,sizeof(double));
		
		
		srand(time(0)); 		
		for (r = 0; r < row; r++)
			
		{
			
			for (c = 0; c < column; c++)
				
			{
				
				matrix[r][c] = (rand() % 100) + 1;
				
			}
			
		}
		
		
		for (r = 0; r < row; r++) {
			
			for (c = 0; c < column; c++) {
				
				printf("%3.2f\t", matrix[r][c]);
			}
			printf("\n");
		}
		
		
		for (tag=1; tag < p ; tag++){
			for (r=0; r<varA; r++) {
				rc = MPI_Isend(matrix[r], varA, MPI_DOUBLE, tag, 50, MPI_COMM_WORLD, &request); 
				if (rc != MPI_SUCCESS) {
					printf("error\n");
					exit(1);
				}
			}
		}
		
	}
	
	rc = MPI_Barrier(MPI_COMM_WORLD);
	if (rc != MPI_SUCCESS) {
		printf("error\n");
		exit(1);
	}
	
	
	if(id>0){	/* worker ranks: receive the inputs and the matrix rows */
				
		rc = MPI_Recv(&varA, 1, MPI_INT, 0, 10, MPI_COMM_WORLD, &status);
		if (rc != MPI_SUCCESS) {
			printf("error\n");
			exit(1);
		}
		//MPI_ANY_TAG
		rc = MPI_Recv(&varB, 1, MPI_FLOAT, 0, 20, MPI_COMM_WORLD, &status);
		if (rc != MPI_SUCCESS) {
			printf("error\n");
			exit(1);
		}
		
		
		for(z = 0; z < varA; z++) {
			rc = MPI_Irecv(matrix[z], varA, MPI_DOUBLE, 0, 50, MPI_COMM_WORLD, &request);
			if (rc != MPI_SUCCESS) {
				exit(1);
			}
		}
		
	}
	
	
	MPI_Barrier(MPI_COMM_WORLD);
	function();
	
	if (id == 0) {
		
		printf("\n");
		
		for (r = 0; r < varA; r++) {
			
			for (c = 0; c < varA; c++){
				
				printf("%3.2f\t", matrix[r][c]);
				
				printf("\n");
				
			}			
		}
		
	}
	
	ierr = MPI_Finalize();
	
	return 0;
	
}


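An editorial aside on the code above, hedged because the file-scope declarations are not shown: matrix, varA and varB appear to be globals (function(), posted later in the thread, uses them), yet matrix is re-declared and allocated only inside the id == 0 branch. Any rank that calls MPI_Irecv(matrix[z], ...) without first allocating its own rows hands MPI a null pointer, which would be consistent with the "Address not mapped ... (nil)" reports. A minimal sketch of what each receiving rank needs, with alloc_matrix and recv_rows as hypothetical helper names; it also gives every non-blocking receive its own request and waits on the whole set, since reusing a single MPI_Request loses track of the pending operations:

#include <stdlib.h>
#include <mpi.h>

/* allocate an n-by-n matrix as an array of row pointers */
static double **alloc_matrix(int n)
{
    double **m = calloc(n, sizeof *m);
    for (int i = 0; i < n; i++)
        m[i] = calloc(n, sizeof **m);
    return m;
}

/* post one non-blocking receive per row, then wait for all of them */
static void recv_rows(double **m, int n)
{
    MPI_Request *reqs = malloc(n * sizeof *reqs);
    for (int z = 0; z < n; z++)
        MPI_Irecv(m[z], n, MPI_DOUBLE, 0, 50, MPI_COMM_WORLD, &reqs[z]);
    MPI_Waitall(n, reqs, MPI_STATUSES_IGNORE);   /* the rows are not safe to read before this */
    free(reqs);
}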
Kent Olsen (Data Warehouse Architect / DBA) commented:

Hi unknown,

I'd sure like to know how *function* is defined and set.


unknown_ (Author) commented:

void function() {
	
	int r;
	
	int c;
	
	inline double avg(double a, double b, double c, double d) {
		
		return ((a + b + c + d) / 4.0);
		
	}
	
	double temp[4];
	
	bool iterate = true;
	
	while (iterate) {
		
		iterate = false;
		
		for (r = 1; r < varA - 1; r++) {
			for (c = 1; c < varA - 1; c++) {
				
				temp[0] = matrix[r - 1][c];
				
				temp[1] = matrix[r][c - 1];
				
				temp[2] = matrix[r][c + 1];
				
				temp[3] = matrix[r + 1][c];
				
				double value = avg(temp[0], temp[1], temp[2], temp[3]);
								
				if (fabs(value - matrix[r][c]) > precision) {
					
					iterate = true;
					
				}
				
				matrix[r][c] = value;
				
			}
			
		}
			
	}
	
}


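A portability note on the code above: defining avg() inside function() relies on GCC's nested-function extension and is not standard C, and bool and fabs require <stdbool.h> and <math.h>. A portable sketch simply hoists the helper to file scope:

#include <stdbool.h>   /* bool, true, false */
#include <math.h>      /* fabs */

/* portable equivalent of the nested helper: average of the four neighbours */
static inline double avg(double a, double b, double c, double d)
{
    return (a + b + c + d) / 4.0;
}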
Kent Olsen (Data Warehouse Architect / DBA) commented:
Let's break this down and find the offending section...

Try using the code below as your main function.  That should give us a good idea where to look.


Kent

int main(int argc, char *argv[]) { 
                 
        int id; 
        int r,c; 
        int ierr; 
        int rc; 
        int i; 
        int p; 
        int tag; 
        int z; 
         
        MPI_Request request; 
         
        MPI_Status status; 

        fprintf (stderr, "start\n");                 

        ierr = MPI_Init(&argc, &argv); 
        if (ierr != MPI_SUCCESS) { 
                printf ("Error starting MPI program\n"); MPI_Abort(MPI_COMM_WORLD, ierr); 
        } 

        fprintf (stderr, "MPI_Comm_size\n");          
        ierr = MPI_Comm_size(MPI_COMM_WORLD, &p); 
         
        fprintf (stderr, "MPI_Comm_rank\n"); 
        ierr = MPI_Comm_rank(MPI_COMM_WORLD, &id); 
         
        fprintf (stderr, "MPI_Errhandler_set\n"); 
        MPI_Errhandler_set(MPI_COMM_WORLD,MPI_ERRORS_RETURN);  
        fprintf (stderr, "MPI_Varrier\n"); 
        MPI_Barrier(MPI_COMM_WORLD); 
         
        if (id == 0) { 
                                 
                printf("varA: "); 
                 
                scanf("%d", &varA);  
                 
                printf("varB: "); 
                 
                scanf("%f", &varB); 
                                 
                for (tag=1; tag < p; tag++){ 
                        fprintf (stderr, "MPI_Send\n");                          
                        MPI_Send(&varA, 1, MPI_INT, tag, 10, MPI_COMM_WORLD); 
                        MPI_Send(&varB, 1, MPI_FLOAT, tag, 20, MPI_COMM_WORLD); 
                } 
                                 
                const int row = varA; 
                 
                const int column = varA; 
                 
                double **matrix = (double **)calloc(row,sizeof(double *)); 
                for(i = 0; i < column; ++i) 
                        matrix[i] = (double *)calloc(column,sizeof(double)); 
                 
                 
                srand(time(0));                  
                for (r = 0; r < row; r++) 
                         
                { 
                         
                        for (c = 0; c < column; c++) 
                                 
                        { 
                                 
                                matrix[r][c] = (rand() % 100) + 1; 
                                 
                        } 
                         
                } 
                 
                 
                for (r = 0; r < row; r++) { 
                         
                        for (c = 0; c < column; c++) { 
                                 
                                printf("%3.2f\t", matrix[r][c]); 
                        } 
                        printf("\n"); 
                } 
                 
                 
                for (tag=1; tag < p ; tag++){ 
                        for (r=0; r<varA; r++) { 
                                fprintf (stderr, "MPI_Isend\n"); 
                                rc = MPI_Isend(matrix[r], varA, MPI_DOUBLE, tag, 50, MPI_COMM_WORLD, &request);  
                                if (rc != MPI_SUCCESS) { 
                                        printf("error\n"); 
                                        exit(1); 
                                } 
                        } 
                } 
                 
        } 

        fprintf (stderr, "MPI_Barrier (2)\n");          
        rc = MPI_Barrier(MPI_COMM_WORLD); 
        if (rc != MPI_SUCCESS) { 
                printf("error\n"); 
                exit(1); 
        } 
         
         
        if(id>0){ 
                                 
                fprintf (stderr, "MPI_Recv\n"); 
                rc = MPI_Recv(&varA, 1, MPI_INT, 0, 10, MPI_COMM_WORLD, &status); 
                if (rc != MPI_SUCCESS) { 
                        printf("error\n"); 
                        exit(1); 
                } 
                //MPI_ANY_TAG 
                fprintf (stderr, "MPI_Recv (2)\n"); 
                rc = MPI_Recv(&varB, 1, MPI_FLOAT, 0, 20, MPI_COMM_WORLD, &status); 
                if (rc != MPI_SUCCESS) { 
                        printf("error\n"); 
                        exit(1); 
                } 
                 
                 
                for(z = 0; z < varA; z++) { 
                        fprintf (stderr, "MPI_IRecv\n"); 
                        rc = MPI_Irecv(matrix[z], varA, MPI_DOUBLE, 0, 50, MPI_COMM_WORLD, &request); 
                        if (rc != MPI_SUCCESS) { 
                                exit(1); 
                        } 
                } 
                 
        } 
         

        fprintf (stderr, "MPI_Barrier (3)\n"); 
        MPI_Barrier(MPI_COMM_WORLD); 
        function(); 
         
        if (id == 0) { 
                 
                printf("\n"); 
                 
                for (r = 0; r < varA; r++) { 
                         
                        for (c = 0; c < varA; c++){ 
                                 
                                printf("%3.2f\t", matrix[r][c]); 
                                 
                                printf("\n"); 
                                 
                        }                        
                } 
                 
        } 
        fprintf (stderr, "MPI_Finalize\n"); 
        ierr = MPI_Finalize(); 
         
        return 0; 
         
}


unknown_ (Author) commented:
This is the output I get:
start
  [... the same "start" line 15 more times, once per process ...]
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Barrier
  [... the same four lines 15 more times, once per process ...]
MPI_Barrier (2)
  [... the same line 14 more times ...]
5 0.5
MPI_Send
  [... the same line 14 more times ...]
varA: varB: 75.00	56.00	47.00	44.00	47.00	
18.00	63.00	77.00	4.00	41.00	
13.00	30.00	69.00	5.00	81.00	
56.00	26.00	37.00	58.00	64.00	
95.00	89.00	76.00	62.00	46.00	
MPI_Isend
  [... the same line 74 more times (15 receivers x 5 rows) ...]
MPI_Barrier (2)
MPI_Recv
MPI_Recv (2)
MPI_IRecv

Process received signal
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
End of error message

  [... the MPI_Recv / MPI_Recv (2) / MPI_IRecv group and the same
  segmentation-fault report repeat, interleaved, for each of the 15
  worker processes; a single "MPI_Barrier (3)" line also appears in
  the stream ...]

Kent Olsen (Data Warehouse Architect / DBA) commented:

This is very strange.  The implication is that MPI_Init() is re-invoking main() (repeatedly) for the line 'start' to be displayed 16 times.

Do you have the source code for MPI_Init()?

Agentus commented:
If you are working under Linux, you should do the following:
1) run this
@> ulimit -c unlimited
2) run your code and let it crash
3) in the local directory you will find a file core.<some number>
4) run
@> gdb <your executable name> core.<some number>
5) post the result
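For reference, a hypothetical session showing how those steps fit together; myprog and core.12345 stand in for the real executable and core-file names, and bt is gdb's backtrace command, which prints the call chain at the point of the crash:

@> ulimit -c unlimited
@> mpirun -np 16 ./myprog
@> gdb ./myprog core.12345
(gdb) bt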
unknown_ (Author) commented:
Do you mean this?
       
      ierr = MPI_Init(&argc, &argv);
      if (ierr != MPI_SUCCESS) {
            printf ("Error starting MPI program\n"); MPI_Abort(MPI_COMM_WORLD, ierr);
      }
Kent Olsen (Data Warehouse Architect / DBA) commented:

I'm looking for the code for MPI_Init(), just like you posted the code for function() earlier.


unknown_ (Author) commented:
I don't have any other source code :S
Kent Olsen (Data Warehouse Architect / DBA) commented:
Let's test one more thing....

Try the main() function below.

int main(int argc, char *argv[]) { 
                 
        int id; 
        int r,c; 
        int ierr; 
        int rc; 
        int i; 
        int p; 
        int tag; 
        int z; 
         
        MPI_Request request; 
         
        MPI_Status status; 

        fprintf (stderr, "start\n");                 
        fprintf (stderr, "MPI_Init\n");                 
        ierr = MPI_Init(&argc, &argv); 
        if (ierr != MPI_SUCCESS) {
                fprintf (stderr, "Init failed\n");                 
                printf ("Error starting MPI program\n"); 
                fprintf (stderr, "MPI_Abort\n");                 
                MPI_Abort(MPI_COMM_WORLD, ierr); 
        } 

        fprintf (stderr, "MPI_Comm_size\n");          
        ierr = MPI_Comm_size(MPI_COMM_WORLD, &p); 
         
        fprintf (stderr, "MPI_Comm_rank\n"); 
        ierr = MPI_Comm_rank(MPI_COMM_WORLD, &id); 
         
        fprintf (stderr, "MPI_Errhandler_set\n"); 
        MPI_Errhandler_set(MPI_COMM_WORLD,MPI_ERRORS_RETURN);  
        fprintf (stderr, "MPI_Varrier\n"); 
        MPI_Barrier(MPI_COMM_WORLD); 
         
        if (id == 0) { 
                                 
                printf("varA: "); 
                 
                scanf("%d", &varA);  
                 
                printf("varB: "); 
                 
                scanf("%f", &varB); 
                                 
                for (tag=1; tag < p; tag++){ 
                        fprintf (stderr, "MPI_Send\n");                          
                        MPI_Send(&varA, 1, MPI_INT, tag, 10, MPI_COMM_WORLD); 
                        MPI_Send(&varB, 1, MPI_FLOAT, tag, 20, MPI_COMM_WORLD); 
                } 
                                 
                const int row = varA; 
                 
                const int column = varA; 
                 
                double **matrix = (double **)calloc(row,sizeof(double *)); 
                for(i = 0; i < column; ++i) 
                        matrix[i] = (double *)calloc(column,sizeof(double)); 
                 
                 
                srand(time(0));                  
                for (r = 0; r < row; r++) 
                         
                { 
                         
                        for (c = 0; c < column; c++) 
                                 
                        { 
                                 
                                matrix[r][c] = (rand() % 100) + 1; 
                                 
                        } 
                         
                } 
                 
                 
                for (r = 0; r < row; r++) { 
                         
                        for (c = 0; c < column; c++) { 
                                 
                                printf("%3.2f\t", matrix[r][c]); 
                        } 
                        printf("\n"); 
                } 
                 
                 
                for (tag=1; tag < p ; tag++){ 
                        for (r=0; r<varA; r++) { 
                                fprintf (stderr, "MPI_Isend\n"); 
                                rc = MPI_Isend(matrix[r], varA, MPI_DOUBLE, tag, 50, MPI_COMM_WORLD, &request);  
                                if (rc != MPI_SUCCESS) { 
                                        printf("error\n"); 
                                        exit(1); 
                                } 
                        } 
                } 
                 
        } 

        fprintf (stderr, "MPI_Barrier (2)\n");          
        rc = MPI_Barrier(MPI_COMM_WORLD); 
        if (rc != MPI_SUCCESS) { 
                printf("error\n"); 
                exit(1); 
        } 
         
         
        if(id>0){ 
                                 
                fprintf (stderr, "MPI_Recv\n"); 
                rc = MPI_Recv(&varA, 1, MPI_INT, 0, 10, MPI_COMM_WORLD, &status); 
                if (rc != MPI_SUCCESS) { 
                        printf("error\n"); 
                        exit(1); 
                } 
                //MPI_ANY_TAG 
                fprintf (stderr, "MPI_Recv (2)\n"); 
                rc = MPI_Recv(&varB, 1, MPI_FLOAT, 0, 20, MPI_COMM_WORLD, &status); 
                if (rc != MPI_SUCCESS) { 
                        printf("error\n"); 
                        exit(1); 
                } 
                 
                 
                for(z = 0; z < varA; z++) { 
                        fprintf (stderr, "MPI_IRecv\n"); 
                        rc = MPI_Irecv(matrix[z], varA, MPI_DOUBLE, 0, 50, MPI_COMM_WORLD, &request); 
                        if (rc != MPI_SUCCESS) { 
                                exit(1); 
                        } 
                } 
                 
        } 
         

        fprintf (stderr, "MPI_Barrier (3)\n"); 
        MPI_Barrier(MPI_COMM_WORLD); 
        function(); 
         
        if (id == 0) { 
                 
                printf("\n"); 
                 
                for (r = 0; r < varA; r++) { 
                         
                        for (c = 0; c < varA; c++){ 
                                 
                                printf("%3.2f\t", matrix[r][c]); 
                                 
                                printf("\n"); 
                                 
                        }                        
                } 
                 
        } 
        fprintf (stderr, "MPI_Finalize\n"); 
        ierr = MPI_Finalize(); 
         
        return 0; 
         
}



(This comment was accepted as the solution.)
unknown_ (Author) commented:

start
MPI_Init
  [... the same two lines 15 more times, once per process ...]
MPI_Comm_size
MPI_Comm_rank
MPI_Errhandler_set
MPI_Barrier
  [... the same four lines 15 more times, once per process ...]
MPI_Barrier (2)
  [... the same line 14 more times ...]
5 0.5
MPI_Send
  [... the same line 14 more times ...]
varA: varB: 31.00	64.00	99.00	37.00	54.00	
33.00	56.00	80.00	83.00	27.00	
23.00	86.00	43.00	49.00	28.00	
39.00	47.00	87.00	63.00	30.00	
81.00	19.00	68.00	47.00	28.00	
MPI_Isend
  [... the same line 74 more times (15 receivers x 5 rows) ...]
MPI_Barrier (2)
MPI_Barrier (3)
MPI_Recv
MPI_Recv (2)
MPI_IRecv

Process received signal
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
End of error message

  [... the MPI_Recv / MPI_Recv (2) / MPI_IRecv group and the same
  segmentation-fault report repeat, interleaved, for each of the 15
  worker processes ...]


Agentus commented:
Are you running Linux?
unknown_ (Author) commented:
Yes. I tried what you said, but it doesn't let me do the last step: @> gdb <your executable name> core.<some number>
Kent Olsen (Data Warehouse Architect / DBA) commented:

Do you have any programming documentation?  

main() is calling MPI_Init(), which is calling main(). This is a very unusual protocol, particularly as program initialization.


Kent
Agentus commented:
OK,
install Valgrind:
http://valgrind.org/downloads/current.html#current

Then run the following (with <your executable> in place of your program's name):
valgrind --num-callers=20 --tool=memcheck -v --log-file=result --leak-check=yes <your executable>

It will create a "result" file; post its contents.
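A side note on using Valgrind with an MPI program: the usual pattern is to place valgrind inside the launcher invocation so that every rank is checked, with one log file per process. A hedged sketch, since launcher syntax varies between MPI implementations and myprog is a stand-in name (%p makes Valgrind append each process ID to the log-file name):

@> mpirun -np 16 valgrind --tool=memcheck --log-file=result.%p ./myprog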
unknown_ (Author) commented:
I don't have any documentation :S