Small-scale Distributed Computing

I have a program which performs a Monte Carlo-type simulation. Currently I have written versions of the program against both OpenMP and OpenCL, and I wish to know the best approach for distributing the workload between the computers on my LAN.

My first idea was to write a sockets-based client/server application, whereby the server would divide the work into units to send to the clients, which would then complete them and send back the results. To leverage systems with both fast CPUs and GPUs, I could run multiple instances of the client program on a single system (a -omp and a -ocl executable).
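For illustration, the sort of protocol layer I had in mind might look something like the sketch below. The WorkUnit/Result layouts and the send_all/recv_all helpers are just placeholders, not my actual code:

```cpp
// Skeleton of a framed work-unit protocol over POSIX sockets.
// WorkUnit/Result layouts are illustrative placeholders.
#include <cstdint>
#include <sys/types.h>
#include <sys/socket.h>

struct WorkUnit {          // region of the problem space to sample
    uint64_t id;
    double   lo, hi;       // bounds of the region
    uint32_t n_samples;    // number of samples requested
};

struct Result {
    uint64_t id;
    double   estimate;
    double   variance;
};

// Loop until the whole buffer is sent; a single send() may write less.
static bool send_all(int fd, const void* buf, size_t len) {
    const char* p = static_cast<const char*>(buf);
    while (len > 0) {
        ssize_t n = send(fd, p, len, 0);
        if (n <= 0) return false;
        p += n;
        len -= static_cast<size_t>(n);
    }
    return true;
}

// Mirror image for receiving a fixed-size message.
static bool recv_all(int fd, void* buf, size_t len) {
    char* p = static_cast<char*>(buf);
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0) return false;
        p += n;
        len -= static_cast<size_t>(n);
    }
    return true;
}
```

Even this much glosses over real protocol decisions: sending structs raw assumes every machine on the LAN agrees on endianness and struct padding, which is exactly the sort of thing that makes this approach a pain.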

However, sockets programming is seldom enjoyable and is a pain to get right (deciding on a protocol, etc.). Hence I decided to look at MPI, which seems nice, although I am unsure how well it works when you want to include both CPUs and GPUs in the mix, or how well my server-prescribed 'work unit' model fits in. (The process of determining which regions of the problem space to sample is non-trivial, hence the requirement for the sentient master process to coordinate things.)

Hence, I am interested to know if there are any other options available to me, or what others have decided on in a similar situation.


Your description is a little vague, but it certainly sounds doable with MPI. The addition of GPUs shouldn't matter, since MPI doesn't concern itself with what code is running apart from the MPI calls themselves (I once wrote an MPI app that used Qt for visualisation and threading, for instance).
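To make that concrete, here is a minimal sketch of the classic master/worker pattern in MPI, with rank 0 playing your master. The WorkUnit layout and the simulate() stub are placeholders for your real sampling logic; the point is that simulate() could just as well run an OpenMP loop or launch an OpenCL kernel:

```cpp
#include <mpi.h>
#include <cstdio>
#include <vector>

struct WorkUnit { double lo, hi; int n_samples; };  // placeholder layout
enum Tag { TAG_WORK = 1, TAG_RESULT = 2, TAG_STOP = 3 };

// Stand-in for the real Monte Carlo kernel; an OpenMP loop or an
// OpenCL kernel launch would live here instead.
static double simulate(const WorkUnit& wu) {
    return 0.5 * (wu.lo + wu.hi);  // dummy value
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        // The master decides which regions of the problem space to sample.
        std::vector<WorkUnit> queue;
        for (int i = 0; i < 100; ++i)
            queue.push_back({i * 0.01, (i + 1) * 0.01, 100000});

        double total = 0.0;
        int outstanding = 0;

        // Seed every worker with one unit (or stop it if there is none).
        for (int w = 1; w < size; ++w) {
            if (!queue.empty()) {
                MPI_Send(&queue.back(), sizeof(WorkUnit), MPI_BYTE,
                         w, TAG_WORK, MPI_COMM_WORLD);
                queue.pop_back();
                ++outstanding;
            } else {
                MPI_Send(nullptr, 0, MPI_BYTE, w, TAG_STOP, MPI_COMM_WORLD);
            }
        }

        // Collect results and hand out more work until the queue drains.
        while (outstanding > 0) {
            double r;
            MPI_Status st;
            MPI_Recv(&r, 1, MPI_DOUBLE, MPI_ANY_SOURCE, TAG_RESULT,
                     MPI_COMM_WORLD, &st);
            total += r;
            --outstanding;
            if (!queue.empty()) {
                MPI_Send(&queue.back(), sizeof(WorkUnit), MPI_BYTE,
                         st.MPI_SOURCE, TAG_WORK, MPI_COMM_WORLD);
                queue.pop_back();
                ++outstanding;
            } else {
                MPI_Send(nullptr, 0, MPI_BYTE, st.MPI_SOURCE, TAG_STOP,
                         MPI_COMM_WORLD);
            }
        }
        std::printf("total estimate: %f\n", total);
    } else {
        // Workers loop: receive a unit, compute, send the result back.
        for (;;) {
            WorkUnit wu;
            MPI_Status st;
            MPI_Recv(&wu, sizeof(WorkUnit), MPI_BYTE, 0, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP) break;
            double r = simulate(wu);
            MPI_Send(&r, 1, MPI_DOUBLE, 0, TAG_RESULT, MPI_COMM_WORLD);
        }
    }
    MPI_Finalize();
    return 0;
}
```

Note that shipping the struct as MPI_BYTE assumes a homogeneous LAN; for mixed architectures you would describe the layout with an MPI derived datatype instead.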

The biggest caveat I can see is that an MPI program consists of multiple instances of one program: if your OpenMP and OpenCL solutions are separate applications, you can't just spawn a few of each and have them all running together. You can, however, write a simple wrapper application that, for instance, spawns one thread running the OpenMP solution and another thread running the OpenCL solution. Running this wrapper app under MPI would achieve the desired result, though communication may then get a little tricky (sharing communicator info between threads, etc.); see the sketch below.
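A sketch of that wrapper, assuming your two solutions can be refactored into callable entry points (run_omp_solver and run_ocl_solver here are hypothetical stand-ins for them). Note the MPI_THREAD_MULTIPLE request and the duplicated communicators, which address the thread-safety caveat above:

```cpp
#include <mpi.h>
#include <thread>
#include <cstdio>

// Hypothetical entry points wrapping the two existing applications.
static void run_omp_solver(MPI_Comm comm) { /* CPU work units via comm */ }
static void run_ocl_solver(MPI_Comm comm) { /* GPU work units via comm */ }

int main(int argc, char** argv) {
    // Both threads will make MPI calls, so request full thread support.
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        std::fprintf(stderr, "MPI library lacks MPI_THREAD_MULTIPLE\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    // Give each thread its own communicator so their traffic cannot
    // collide on tags -- the "sharing communicator info" issue.
    MPI_Comm omp_comm, ocl_comm;
    MPI_Comm_dup(MPI_COMM_WORLD, &omp_comm);
    MPI_Comm_dup(MPI_COMM_WORLD, &ocl_comm);

    std::thread cpu(run_omp_solver, omp_comm);
    std::thread gpu(run_ocl_solver, ocl_comm);
    cpu.join();
    gpu.join();

    MPI_Comm_free(&omp_comm);
    MPI_Comm_free(&ocl_comm);
    MPI_Finalize();
    return 0;
}
```

One wrapper process per machine then gives you both the CPU and the GPU on that machine, while the master still sees one MPI rank per solver communicator.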
