
MPI parallelism in chaotic systems

https://www.devze.com 2023-03-02 04:18 (source: web)

I have a Fortran program for dynamics (basically a Verlet algorithm). In order to compute the velocities faster I parallelized the algorithm with MPI. What makes me nervous is that if I have four processors, each processor runs a Verlet integration, and when they reach a point of parallelization, they share info. However, due to slight numerical differences (for example, in the compiled LAPACK on each node), each Verlet trajectory may evolve in a completely different direction in the long run, meaning that at the points of sharing I will obtain a mixup of info from different trajectories. I therefore decided to synchronize the info at every time step to prevent divergence, but this clearly introduces a barrier.

How is this problem (divergence of the nodes) normally solved? Any references?


Well, you shouldn't have different builds of LAPACK on each node. If your numerical libraries behave differently in different parts of the simulation, you should expect weird results -- and that has nothing to do with parallelism. So don't do that.

The only real trickiness I've seen MPI introduce in situations like this is that operations like MPI_REDUCE(...MPI_SUM...) can give different answers on the same number of nodes across different runs, because the summation can happen in a different order. That's just standard "floating-point addition isn't associative" stuff. You can avoid it by doing an MPI_GATHER() of the relevant numbers and summing them yourself in some well-defined order, such as after sorting them lowest-to-highest in magnitude.
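The order sensitivity is easy to reproduce without MPI at all. Here is a minimal Python sketch (the specific values are just an illustration, not from the original program) of why a rank-dependent reduction order varies, and how the "gather, then sum in a fixed order" trick restores determinism:

```python
# Floating-point addition is not associative: summing the same numbers
# in a different order can give a different result. A parallel reduction
# whose order depends on message arrival is therefore non-deterministic.
values = [1e16, 1.0, -1e16, 1.0]  # exact mathematical sum is 2.0

# Left-to-right: 1e16 absorbs each 1.0 (below its rounding granularity).
naive = sum(values)

# Fixed, well-defined order: smallest magnitude first, so the small
# contributions accumulate before meeting the large cancelling terms.
fixed_order = sum(sorted(values, key=abs))

print(naive)        # 1.0 -- one of the 1.0 contributions was lost
print(fixed_order)  # 2.0 -- the well-defined order recovers the exact sum
```

In the MPI setting, each rank would contribute its partial result, one rank (or all, after an MPI_ALLGATHER) would sort and sum them identically, and every node would then continue from bit-identical state.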
