Boost.MPI: What's received isn't what was sent!

I am relatively new to using Boost MPI. I have got the libraries installed, the code compiles, but I am getting a very odd error - some integer data received by the slave nodes is not what was sent by the master. What is going on?

I am using boost version 1.42.0, compiling the code using mpic++ (which wraps g++ on one cluster and icpc on the other). A reduced example follows, including the output.

Code:

#include <cstdlib>
#include <iostream>
#include <vector>
#include <boost/mpi.hpp>

using namespace std;
namespace mpi = boost::mpi;

class Solution
{
public:
  Solution() :
  solution_num(num_solutions++)
  {
    // Master node's constructor
  }

  Solution(int solutionNum) :
  solution_num(solutionNum)
  {
    // Slave nodes' constructor.
  }

  int solutionNum() const
  {
    return solution_num;
  }

private:
  static int num_solutions;
  int solution_num;
};

int Solution::num_solutions = 0;

int main(int argc, char* argv[])
{
  // Initialization of MPI
  mpi::environment env(argc, argv);
  mpi::communicator world;

  if (world.rank() == 0)
  {
    // Create solutions
    int numSolutions = world.size() - 1;  // One solution per slave
    vector<Solution*> solutions(numSolutions);
    for (int sol = 0; sol < numSolutions; ++sol)
    {
      solutions[sol] = new Solution;
    }

    // Send solutions
    for (int sol = 0; sol < numSolutions; ++sol)
    {
      world.isend(sol + 1, 0, false);  // Tells the slave to expect work
      cout << "Sending solution no. " << solutions[sol]->solutionNum() << " to node " << sol + 1 << endl;
      world.isend(sol + 1, 1, solutions[sol]->solutionNum());
    }

    // Retrieve values (solution numbers squared)
    vector<double> values(numSolutions, 0);
    for (int i = 0; i < numSolutions; ++i)
    {
      // Get values for each solution
      double value = 0;
      mpi::status status = world.recv(mpi::any_source, 2, value);
      int source = status.source();

      int sol = source - 1;
      values[sol] = value;
    }
    for (int i = 1; i <= numSolutions; ++i)
    {
      world.isend(i, 0, true);  // Tells the slave to finish
    }

    // Output the solutions numbers and their squares
    for (int i = 0; i < numSolutions; ++i)
    {
      cout << solutions[i]->solutionNum() << ", " << values[i] << endl;
      delete solutions[i];
    }
  }
  else
  {
    // Slave nodes merely square the solution number
    bool finished;
    mpi::status status = world.recv(0, 0, finished);
    while (!finished)
    {
      int solNum;
      world.recv(0, 1, solNum);
      cout << "Node " << world.rank() << " receiving solution no. " << solNum << endl;

      Solution solution(solNum);
      double value = static_cast<double>(solNum * solNum);
      world.send(0, 2, value);

      status = world.recv(0, 0, finished);
    }

    cout << "Node " << world.rank() << " finished." << endl;
  }

  return EXIT_SUCCESS;
}

Running this on 21 nodes (1 master, 20 slaves) produces:

Sending solution no. 0 to node 1
Sending solution no. 1 to node 2
Sending solution no. 2 to node 3
Sending solution no. 3 to node 4
Sending solution no. 4 to node 5
Sending solution no. 5 to node 6
Sending solution no. 6 to node 7
Sending solution no. 7 to node 8
Sending solution no. 8 to node 9
Sending solution no. 9 to node 10
Sending solution no. 10 to node 11
Sending solution no. 11 to node 12
Sending solution no. 12 to node 13
Sending solution no. 13 to node 14
Sending solution no. 14 to node 15
Sending solution no. 15 to node 16
Sending solution no. 16 to node 17
Sending solution no. 17 to node 18
Sending solution no. 18 to node 19
Sending solution no. 19 to node 20
Node 1 receiving solution no. 0
Node 2 receiving solution no. 1
Node 12 receiving solution no. 19
Node 3 receiving solution no. 19
Node 15 receiving solution no. 19
Node 13 receiving solution no. 19
Node 4 receiving solution no. 19
Node 9 receiving solution no. 19
Node 10 receiving solution no. 19
Node 14 receiving solution no. 19
Node 6 receiving solution no. 19
Node 5 receiving solution no. 19
Node 11 receiving solution no. 19
Node 8 receiving solution no. 19
Node 16 receiving solution no. 19
Node 19 receiving solution no. 19
Node 20 receiving solution no. 19
Node 1 finished.
Node 2 finished.
Node 7 receiving solution no. 19
0, 0
1, 1
2, 361
3, 361
4, 361
5, 361
6, 361
7, 361
8, 361
9, 361
10, 361
11, 361
12, 361
13, 361
14, 361
15, 361
16, 361
17, 361
18, 361
19, 361
Node 6 finished.
Node 3 finished.
Node 17 receiving solution no. 19
Node 17 finished.
Node 10 finished.
Node 12 finished.
Node 8 finished.
Node 4 finished.
Node 15 finished.
Node 18 receiving solution no. 19
Node 18 finished.
Node 11 finished.
Node 13 finished.
Node 20 finished.
Node 16 finished.
Node 9 finished.
Node 19 finished.
Node 7 finished.
Node 5 finished.
Node 14 finished.

So while the master sends 0 to node 1, 1 to node 2, 2 to node 3 and so on, most of the slave nodes (for some reason) receive the number 19. So rather than producing the squares of the numbers from 0 to 19, we get 0 squared, 1 squared, and then 19 squared 18 times!

Thanks in advance to anyone who can explain this.

Alan


Ok, I think I have the answer, which requires some knowledge of the underlying C-style MPI calls. Boost's 'isend' function is essentially a wrapper around 'MPI_Isend', and it does not protect the user from needing to know some of the details about how 'MPI_Isend' works.

One parameter of 'MPI_Isend' is a pointer to a buffer that contains the information you wish to send. Importantly, however, this buffer CANNOT be reused or freed until you know that the send has completed (i.e. until the corresponding MPI_Wait or MPI_Test succeeds). So consider the following code:

// Get solution numbers from the solutions and store in a vector
vector<int> solutionNums(numSolutions);
for (int sol = 0; sol < numSolutions; ++sol)
{
  solutionNums[sol] = solutions[sol]->solutionNum();
}

// Send solution numbers
for (int sol = 0; sol < numSolutions; ++sol)
{
  world.isend(sol + 1, 0, false);  // Indicates that we have not finished, and to expect a solution representation
  cout << "Sending solution no. " << solutionNums[sol] << " to node " << sol + 1 << endl;
  world.isend(sol + 1, 1, solutionNums[sol]);
}

This works perfectly, as each solution number is in its own location in memory. Now consider the following minor adjustment:

// Create solutionNum array
vector<int> solutionNums(numSolutions);
for (int sol = 0; sol < numSolutions; ++sol)
{
  solutionNums[sol] = solutions[sol]->solutionNum();
}

// Send solutions
for (int sol = 0; sol < numSolutions; ++sol)
{
  int solNum = solutionNums[sol];
  world.isend(sol + 1, 0, false);  // Indicates that we have not finished, and to expect a solution representation
  cout << "Sending solution no. " << solNum << " to node " << sol + 1 << endl;
  world.isend(sol + 1, 1, solNum);
}

Now the underlying 'MPI_Isend' call is provided with a pointer to solNum. Unfortunately, this bit of memory is overwritten each time around the loop, so while it may look like the number 4 is sent to node 5, by the time the send actually takes place, the new contents of that memory location (19 for example) are passed instead.

Now consider the original code:

// Send solutions
for (int sol = 0; sol < numSolutions; ++sol)
{
  world.isend(sol + 1, 0, false);  // Tells the slave to expect work
  cout << "Sending solution no. " << solutions[sol]->solutionNum() << " to node " << sol + 1 << endl;
  world.isend(sol + 1, 1, solutions[sol]->solutionNum());
}

Here we are passing a temporary. Again, the location of this temporary in memory gets overwritten each time around the loop. Again, the wrong data is sent to the slave nodes.

As it happens, I've been able to restructure my 'real' code to use 'send' instead of 'isend'. However, if I need to use 'isend' in future, I'll be a little more careful!
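
For reference, here's a minimal sketch of that restructuring (the same loop as above, with blocking 'send' in place of 'isend'). Because 'send' does not return until the buffer is safe to reuse, passing a temporary is fine:

// Send solutions using blocking sends. send() does not return until the
// buffer can safely be reused, so the temporary returned by solutionNum()
// is copied out before the next iteration overwrites anything.
for (int sol = 0; sol < numSolutions; ++sol)
{
  world.send(sol + 1, 0, false);  // Tells the slave to expect work
  cout << "Sending solution no. " << solutions[sol]->solutionNum() << " to node " << sol + 1 << endl;
  world.send(sol + 1, 1, solutions[sol]->solutionNum());
}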


I think I have stumbled upon a similar problem today. When serializing a custom datatype, I noticed it was (sometimes) corrupted on the other side. The fix was to store the mpi::request return value of isend. If you look at communicator::isend_impl(int dest, int tag, const T& value, mpl::false_) in communicator.hpp of Boost, you'll see that the serialized data is put into the request as a shared pointer. If the request is destroyed, that data is invalidated and anything might happen.

so: always save isend return values!
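
To illustrate (a minimal sketch, not from the original posts; the rank and tag values here are arbitrary):

#include <boost/mpi.hpp>
#include <iostream>

namespace mpi = boost::mpi;

int main(int argc, char* argv[])
{
  mpi::environment env(argc, argv);
  mpi::communicator world;

  if (world.rank() == 0)
  {
    int value = 42;
    // Keep the request returned by isend() alive: for serialized types it
    // owns the serialized buffer, so destroying it can invalidate the send.
    mpi::request req = world.isend(1, 0, value);
    req.wait();  // only now is it safe to discard req (or reuse the buffer)
  }
  else if (world.rank() == 1)
  {
    int value = 0;
    world.recv(0, 0, value);
    std::cout << "Received " << value << std::endl;
  }
  return 0;
}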


Your compiler has optimized the crap out of your "solutions[sol] = new Solution;" loop and concluded that it can jump to the end of all the num_solutions++ increments. It is of course wrong to do so, but that's what's happened.

It's possible, though very unlikely, that an automatically threading or automatically parallelizing compiler has caused the 20 instances of num_solutions++ to occur in semi-random order with respect to the 20 initializations of solution_num in the constructor initializer list of Solution(). It's much more likely that an optimization went horribly wrong.

Replace

for (int sol = 0; sol < numSolutions; ++sol)
{
  solutions[sol] = new Solution;
}

with

for (int sol = 0; sol < numSolutions; ++sol)
{
  solutions[sol] = new Solution(sol);
}

and your problem will go away. In particular, each Solution will get its own number instead of whatever value the shared static happens to hold at some point during the compiler's incorrect reordering of the 20 increments.


Building on milianw's answer: My impression is that the correct way of using isend is to keep the request object it returns and check that it has completed by using its test() or wait() methods before another call to isend. I think it would also work to keep calling isend() and pushing the request objects onto a vector. You could then test or wait on those requests with {test,wait}_{any,some,all}.
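
Applied to the send loop from the question, that pattern might look like this (a sketch; mpi::wait_all is the free function declared in boost/mpi/nonblocking.hpp, which boost/mpi.hpp pulls in):

// Collect every request from isend() and complete them all before the
// values being sent can go out of scope or be overwritten.
std::vector<mpi::request> requests;
for (int sol = 0; sol < numSolutions; ++sol)
{
  requests.push_back(world.isend(sol + 1, 0, false));
  requests.push_back(world.isend(sol + 1, 1, solutions[sol]->solutionNum()));
}
mpi::wait_all(requests.begin(), requests.end());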

At some point, you also need to worry about whether you are posting sends faster than the recipient can receive, because sooner or later you'll run out of MPI buffers. In my experience, this will just manifest itself as a crash.
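
One way to bound the number of outstanding sends is to retire a request with wait_any() whenever a fixed-size window fills up (a sketch; the window size and the dest/tag/payload names are placeholders, not from the original posts):

const std::size_t maxInFlight = 64;  // placeholder: tune for your system
std::vector<mpi::request> inFlight;
// ... for each message to send:
if (inFlight.size() >= maxInFlight)
{
  // wait_any() blocks until some request completes, then tells us which one
  std::pair<mpi::status, std::vector<mpi::request>::iterator> done =
      mpi::wait_any(inFlight.begin(), inFlight.end());
  inFlight.erase(done.second);
}
inFlight.push_back(world.isend(dest, tag, payload));  // hypothetical names
// ... after the loop, complete whatever is still outstanding:
mpi::wait_all(inFlight.begin(), inFlight.end());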
