
MPI - receive multiple int from task 0 (root)

Developer https://www.devze.com 2023-02-15 04:34 Source: web

I am solving this problem: I am implementing cyclic mapping. I have 4 processors, so one task is mapped onto processor 1 (the root) and the other three are workers. As input I have several integers, e.g. 0-40. I want each worker to receive its share (in this case 10 integers per worker), do some counting on them, and save the result.

I am using MPI_Send to send integers from the root, but I don't know how to receive multiple numbers from the same process (the root). Also, I send each int with the count fixed at 1; when the number is e.g. 12, will it do bad things? How do I check the length of an int?

Any advice would be appreciated. Thanks


I'll assume you're working in C++, though your question doesn't say. Anyway, let's look at the arguments of MPI_Send:

MPI_SEND(buf, count, datatype, dest, tag, comm)

The second argument specifies how many data items you want to send. This call basically means "buf points to a location in memory holding count values, all of type datatype, one after the other: send them". This lets you send the contents of an entire array, like this:

int values[10];
for (int i=0; i<10; i++)
    values[i] = i;
MPI_Send(values, 10, MPI_INT, 1, 0, MPI_COMM_WORLD);

This will start reading memory at the start of values, and keep reading until 10 MPI_INTs (C ints) have been read. (Note that in C the datatype for int is MPI_INT; MPI_INTEGER is the Fortran INTEGER type.)

For your case of distributing numbers between processes, this is how you do it with MPI_Send:

int values[40];
for (int i=0; i<40; i++)
    values[i] = i;
for (int i=1; i<4; i++)  // start at rank 1: don't send to ourselves
    MPI_Send(values+10*i, 10, MPI_INT, i, 0, MPI_COMM_WORLD);  // rank i gets values 10*i .. 10*i+9

However, this is such a common operation in distributed computing that MPI gives it its very own function, MPI_Scatter. Scatter does exactly what you want: it takes one array and divides it up evenly between all processes that call it. This is a collective communication call, which is a slightly advanced topic, so if you're just learning MPI (which it sounds like you are), feel free to skip it until you're comfortable using MPI_Send and MPI_Recv.

