How can multiple calculations be launched in parallel, while stopping them all when the first one returns? [Python]

How can multiple calculations be launched in parallel, while stopping them all when the first one returns?

The application I have in mind is the following: there are multiple ways of calculating a certain value, and each method takes a different amount of time depending on the function parameters. By launching the calculations in parallel, the fastest one would automatically be "selected" each time, and the other calculations would be stopped.

Now, there are some "details" that make this question more difficult:

  • The parameters of the function to be calculated include functions (they are calculated from data points and are not top-level module functions). In fact, the calculation is the convolution of two functions. I'm not sure how such function parameters could be passed to a subprocess (they are not picklable).
  • I do not have access to all the calculation code: some calculations are done internally by SciPy (probably via Fortran or C code). I'm not sure whether threads offer something similar to the termination signals that can be sent to processes.

Is this something that Python can do relatively easily?


I would look at the multiprocessing module if you haven't already. It offers a way of offloading tasks to separate processes whilst providing you with a simple, threading-like interface.

It provides the same kinds of primitives as the threading module, for example worker pools and queues for passing messages between your tasks, but it allows you to sidestep the issue of the GIL, since your tasks actually run in separate processes.

The actual semantics of what you want are quite specific, so I don't think there is a routine that fits the bill out of the box, but you can surely knock one up.
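A minimal sketch of such a first-one-wins helper, assuming the candidate calculations are top-level module functions (hence picklable); first_result and _call are hypothetical names, not part of multiprocessing:

import multiprocessing

def _call(fn, args, queue):
    # Top-level wrapper so it is picklable on every platform.
    queue.put(fn(*args))

def first_result(jobs):
    # jobs: an iterable of (function, args) pairs, one per method.
    queue = multiprocessing.Queue()
    procs = [multiprocessing.Process(target=_call, args=(fn, args, queue))
             for fn, args in jobs]
    for p in procs:
        p.start()
    result = queue.get()      # blocks until the fastest worker answers
    for p in procs:
        p.terminate()         # stop the remaining workers
        p.join()
    return result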

Note: if you want to pass functions around, they cannot be bound methods, since those are not picklable, and picklability is a requirement for sharing data between your tasks.
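For the question's specific obstacle (functions built from data points are not picklable), one workaround is to pass the raw arrays to the worker and rebuild the callables there. A hedged sketch, assuming the functions come from SciPy's interp1d and the convolution is approximated with numpy.convolve on a common grid; convolve_from_points is a hypothetical name:

import multiprocessing

import numpy as np
from scipy.interpolate import interp1d

def convolve_from_points(x, f_vals, g_vals, queue):
    # Plain arrays pickle fine, so rebuild the two functions in the child.
    f = interp1d(x, f_vals)
    g = interp1d(x, g_vals)
    grid = np.linspace(x[0], x[-1], 1000)   # common evaluation grid
    queue.put(np.convolve(f(grid), g(grid), mode="same"))

# Usage (x, f_vals, g_vals are your sampled data points, as NumPy arrays):
# queue = multiprocessing.Queue()
# p = multiprocessing.Process(target=convolve_from_points,
#                             args=(x, f_vals, g_vals, queue))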


Because of the global interpreter lock (GIL) you would be hard pressed to get any speedup with threads. In reality, even multithreaded programs in Python only run on one core at a time, so you would just be doing N calculations at 1/N times the speed each. Even if one finished in half the time of the others, you would still lose time in the big picture.


Processes can be started and killed trivially.

You can do this:

import subprocess
import sys

watch = []
for s in ("process1.py", "process2.py", "process3.py"):
    # Run each solver script with the current Python interpreter.
    sp = subprocess.Popen([sys.executable, s])
    watch.append(sp)

Now you're simply waiting for one of those to finish. When one finishes, kill the others.

import time

winner = None
while winner is None:
    time.sleep(10)            # poll interval; shorten for lower latency
    for w in watch:
        if w.poll() is not None:      # a return code means it has finished
            winner = w
            break

for w in watch:
    if w.poll() is None:
        w.kill()              # terminate the processes that lost the race

These are processes -- not threads. No GIL considerations. Let the operating system schedule them; that's what it does best.

Further, each process is just a script that solves the problem using one of your alternative algorithms. They're completely independent and stand-alone: simple to design, build, and test.
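Since the scripts are stand-alone, the question's pickling obstacle disappears: instead of passing function objects, you can write the sampled data points to a file and let each script rebuild the functions itself. A hedged sketch of the glue; the file and script names are hypothetical:

import subprocess
import sys

import numpy as np

# f_vals, g_vals: your sampled data points (NumPy arrays).
np.save("f_points.npy", f_vals)
np.save("g_points.npy", g_vals)

# Each solver script reads the filenames from sys.argv and rebuilds
# the two functions from the arrays before computing the convolution.
solvers = ("solve_fft.py", "solve_direct.py")
watch = [subprocess.Popen([sys.executable, s, "f_points.npy", "g_points.npy"])
         for s in solvers]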

