I've just been experimenting with MPI, and copied and ran this code, taken from the second code example at [the LLNL MPI tutorial][1].
#include <mpi.h>
#include <stdlib.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int num_tasks, rank, next, prev, buf[2], tag1 = 1, tag2 = 2;
    MPI_Request reqs[4];
    MPI_Status status[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &num_tasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    prev = rank - 1;
    next = rank + 1;
    if (rank == 0) prev = num_tasks - 1;
    if (rank == (num_tasks - 1)) next = 0;

    MPI_Irecv(&buf[0], 1, MPI_INT, prev, tag1, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&buf[1], 1, MPI_INT, next, tag2, MPI_COMM_WORLD, &reqs[1]);
    MPI_Isend(&rank, 1, MPI_INT, prev, tag2, MPI_COMM_WORLD, &reqs[2]);
    MPI_Isend(&rank, 1, MPI_INT, next, tag1, MPI_COMM_WORLD, &reqs[3]);

    MPI_Waitall(4, reqs, status);

    printf("Task %d received %d from %d and %d from %d\n",
           rank, buf[0], prev, buf[1], next);

    MPI_Finalize();
    return EXIT_SUCCESS;
}
I would have expected an output like this (for, say, 4 tasks):
$ mpiexec -n 4 ./m3
Task 0 received 3 from 3 and 1 from 1
Task 1 received 0 from 0 and 2 from 2
Task 2 received 1 from 1 and 3 from 3
Task 3 received 2 from 2 and 0 from 0
However, instead, I get this:
$ mpiexec -n 4 ./m3
Task 0 received 0 from 3 and 1 from 1
Task 1 received 0 from 0 and 2 from 2
Task 3 received 0 from 2 and 0 from 0
Task 2 received 0 from 1 and 3 from 3
That is, the message (with tag == 1) going into buffer buf[0] always gets value 0. Moreover, if I alter the code so that I declare the buffer as buf[3] rather than buf[2], and replace each instance of buf[0] with buf[2], then I get precisely the output I would have expected (i.e., the first output set given above). This looks as if, for some reason, something is overwriting the value in buf[0] with 0, but I can't see what that might be. BTW, as far as I can tell, my code (without the modification) exactly matches the code in the tutorial, except for my printf.
Thanks!
Your array of statuses must be of size 4, not 2: you pass four requests to MPI_Waitall, so it writes four MPI_Status structures. With status[2], the last two writes run past the end of the array and corrupt adjacent stack memory, which in your case evidently clobbers buf[0]. Enlarging the buffer to buf[3] doesn't remove the overflow; it just moves the corruption onto memory you never print, which is why that change appeared to work.
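As a sketch of the minimal fix against the posted code, only the declaration of status needs to change; the MPI_Waitall call itself stays the same:

MPI_Status status[4];            /* one status slot per request in reqs */
...
MPI_Waitall(4, reqs, status);    /* writes 4 statuses, now within bounds */

If you never inspect the statuses, you can instead drop the array and pass MPI_STATUSES_IGNORE as the third argument to MPI_Waitall.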