right way to run some code with a timeout in Python

I looked online and found some SO discussions and ActiveState recipes for running some code with a timeout. It looks like there are a few common approaches:

  • Use a thread that runs the code, and join it with a timeout. If the timeout elapses, kill the thread. This is not directly supported in Python (it relies on the private _Thread__stop function), so it is bad practice.
  • Use signal.SIGALRM - but this approach does not work on Windows!
  • Use a subprocess with a timeout - but this is too heavy - if I want to start an interruptible task often, I don't want to fire a process for each one!

So, what is the right way? I'm not asking about workarounds (e.g. use Twisted and async IO), but the actual way to solve the actual problem - I have some function and I want to run it only with some timeout. If the timeout elapses, I want control back. And I want it to work on Linux and Windows.


A completely general solution to this really, honestly does not exist. You have to use the right solution for a given domain.

  • If you want timeouts for code you fully control, you have to write it to cooperate. Such code has to be able to be broken up into little chunks in some way, as in an event-driven system (see the sketch after this list). You can also do this by threading if you can ensure nothing will hold a lock too long, but handling locks right is actually pretty hard.

  • If you want timeouts because you're afraid code is out of control (for example, if you're afraid the user will ask your calculator to compute 9**(9**9)), you need to run it in another process. This is the only easy way to sufficiently isolate it. Running it in your event system or even a different thread will not be enough. It is also possible to break things up into little chunks similar to the other solution, but requires very careful handling and usually isn't worth it; in any event, that doesn't allow you to do the same exact thing as just running the Python code.
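A minimal sketch of the cooperative approach from the first point, assuming the work can be split into small steps (the Deadline helper and cooperative_task names are made up for illustration):

import time

class Deadline:
    # hypothetical helper: the worker checks it between small chunks of work
    def __init__(self, seconds):
        self.expires_at = time.monotonic() + seconds

    def expired(self):
        return time.monotonic() > self.expires_at

def cooperative_task(deadline):
    results = []
    for chunk in range(1_000_000):           # the work, split into cheap units
        results.append(chunk * chunk)
        if deadline.expired():               # voluntarily hand control back
            return results, False            # partial result, not finished
    return results, True

partial, finished = cooperative_task(Deadline(0.5))
print('finished={0}, processed {1} chunks'.format(finished, len(partial)))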


What you might be looking for is the multiprocessing module. If subprocess is too heavy, then this may not suit your needs either.

import time
import multiprocessing

def do_this_other_thing_that_may_take_too_long(duration):
    time.sleep(duration)
    return 'done after sleeping {0} seconds.'.format(duration)

if __name__ == '__main__':  # required on Windows, where workers are spawned by re-importing this module
    pool = multiprocessing.Pool(1)
    print('starting....')
    res = pool.apply_async(do_this_other_thing_that_may_take_too_long, [8])
    for timeout in range(1, 10):
        try:
            print('{0}: {1}'.format(timeout, res.get(timeout)))
        except multiprocessing.TimeoutError:
            print('{0}: timed out'.format(timeout))

    print('end')
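Note that res.get(timeout) only bounds how long the caller waits; the worker process itself keeps running in the pool. If you also need the work killed when the time is up, a rough sketch along these lines (the work function is just a stand-in) gets control back and terminates the child:

import time
import multiprocessing

def work(duration):
    time.sleep(duration)      # stand-in for the real task

if __name__ == '__main__':
    p = multiprocessing.Process(target=work, args=(8,))
    p.start()
    p.join(timeout=3)         # wait at most 3 seconds for the child
    if p.is_alive():
        p.terminate()         # hard-kill the worker; only safe if it holds no locks or shared state
        p.join()
    print('control is back in the parent after at most ~3 seconds')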


If it's network related you could try:

import socket
socket.setdefaulttimeout(number)
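For example (a small sketch; the host, port and the 5-second value are only placeholders), the default timeout applies to sockets created afterwards, and blocking operations on them then raise socket.timeout:

import socket

socket.setdefaulttimeout(5.0)                  # applies to sockets created from now on

try:
    s = socket.create_connection(('example.com', 80))
    s.sendall(b'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n')
    data = s.recv(4096)                        # raises socket.timeout after 5 s of silence
    print(data[:60])
except socket.timeout:
    print('network operation timed out')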


I found this in the eventlet library:

http://eventlet.net/doc/modules/timeout.html

from eventlet.timeout import Timeout

timeout = Timeout(seconds, exception)
try:
    ... # execution here is limited by timeout
finally:
    timeout.cancel()
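A minimal runnable version of the same pattern (keep in mind that eventlet only interrupts code at green-thread yield points such as eventlet.sleep() or monkey-patched I/O, not in the middle of a blocking C call):

import eventlet
from eventlet.timeout import Timeout

try:
    with Timeout(2):              # Timeout also works as a context manager
        eventlet.sleep(5)         # stand-in for the real work; it yields to the hub
        print('finished in time')
except Timeout:
    print('gave up after 2 seconds')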


For "normal" Python code, that doesn't linger prolongued times in C extensions or I/O waits, you can achieve your goal by setting a trace function with sys.settrace() that aborts the running code when the timeout is reached.

Whether that is sufficient or not depends on how cooperative or malicious the code you run is. If it's well-behaved, a tracing function is sufficient.
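A minimal sketch of that idea (the run_with_timeout helper and TraceTimeout exception are made up for illustration); the trace function fires on every line of pure-Python code, so it adds noticeable overhead and cannot interrupt a long-running C call:

import sys
import time

class TraceTimeout(Exception):
    pass

def run_with_timeout(func, seconds):
    deadline = time.monotonic() + seconds

    def tracer(frame, event, arg):
        if time.monotonic() > deadline:
            raise TraceTimeout('timed out')   # raised inside the traced code
        return tracer                          # keep tracing lines in nested calls

    sys.settrace(tracer)
    try:
        return func()
    finally:
        sys.settrace(None)                     # always remove the trace function

def busy():
    total = 0
    while True:                                # pure-Python loop, so every line is traced
        total += 1

try:
    run_with_timeout(busy, 0.5)
except TraceTimeout:
    print('aborted after 0.5 seconds')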


Another way is to use faulthandler:

import time
import faulthandler

faulthandler.enable()

try:
    # stdlib spelling (Python 3.3+): dump the tracebacks of all threads after 3 seconds
    faulthandler.dump_traceback_later(3)
    time.sleep(10)
finally:
    faulthandler.cancel_dump_traceback_later()

N.B.: The faulthandler module has been part of the stdlib since Python 3.3.
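On its own, dump_traceback_later() only prints where the code is stuck; it does not stop anything. If you also want a hard stop, the stdlib version accepts exit=True, which terminates the whole process after dumping, so use it only when losing the interpreter is acceptable:

import faulthandler
import time

faulthandler.dump_traceback_later(3, exit=True)   # dump the traceback, then hard-exit the process
try:
    time.sleep(10)                                 # stand-in for the runaway work
finally:
    faulthandler.cancel_dump_traceback_later()     # reached only if the work finishes in time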


If you're running code that you expect to die after a set time, then you should write it properly so that there aren't any negative effects on shutdown, no matter if it's a thread or a subprocess. A command pattern with undo would be useful here.

So, it really depends on what the thread is doing when you kill it. If it's just crunching numbers, who cares if you kill it? If it's interacting with the filesystem and you kill it, then maybe you should really rethink your strategy.

What is supported in Python when it comes to threads? Daemon threads and joins. Why does Python let the main thread exit if you've joined a daemon while it's still active? Because it's understood that someone using daemon threads will (hopefully) write the code in a way that it won't matter when that thread dies. Giving a timeout to a join and then letting main die, and thus taking any daemon threads with it, is perfectly acceptable in this context.
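A small sketch of that idea (the crunch function is just a stand-in): start the work in a daemon thread, join it with a timeout, and simply carry on (or exit) if it is still running:

import threading
import time

def crunch():
    while True:
        time.sleep(0.5)      # stand-in for number crunching that is safe to abandon

worker = threading.Thread(target=crunch, daemon=True)
worker.start()
worker.join(timeout=3)       # give the work 3 seconds
if worker.is_alive():
    print('still running; when the main thread exits, the daemon thread dies with it')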


I've solved it in this way: for me it worked great (on Windows, and it's not heavy at all). I hope it is useful for someone.

import threading
import time

class LongFunctionInside(object):
    lock_state = threading.Lock()
    working = False

    def long_function(self, timeout):

        self.working = True

        # watchdog thread that just sleeps for `timeout` seconds in parallel
        timeout_work = threading.Thread(name="thread_name", target=self.work_time, args=(timeout,))
        timeout_work.daemon = True
        timeout_work.start()

        while True:  # endless/long work
            time.sleep(0.1)  # at this rate the CPU is almost not used
            if not self.working:  # the watchdog has flipped the flag, so stop
                break
        self.set_state(True)

    def work_time(self, sleep_time):
        # watchdog: sleep for the specified time; on waking up, if the long
        # function is still working, flip the guarded flag to stop it
        time.sleep(sleep_time)
        if self.working:
            self.set_state(False)

    def set_state(self, state):  # guarded state change
        with self.lock_state:
            self.working = state

lw = LongFunctionInside()
lw.long_function(10)

The main idea is to create a thread that just sleeps in parallel with the "long work" and, on waking up (after the timeout), changes the guarded state variable; the long function checks that variable during its work. I'm pretty new to Python programming, so if this solution has fundamental errors (resource, timing or deadlock problems), please respond.


Solving it with the 'with' construct, merging the solutions from -

  • Timeout function if it takes too long to finish
  • this thread, which works better.

    import threading, time

    class Exception_TIMEOUT(Exception):
        pass

    class linwintimeout:

        def __init__(self, f, seconds=1.0, error_message='Timeout'):
            self.seconds = seconds
            self.thread = threading.Thread(target=f)
            self.thread.daemon = True
            self.error_message = error_message

        def handle_timeout(self):
            raise Exception_TIMEOUT(self.error_message)

        def __enter__(self):
            self.thread.start()
            self.thread.join(self.seconds)
            return self

        def __exit__(self, type, value, traceback):
            if self.thread.is_alive():
                return self.handle_timeout()

    def function():
        while True:
            print("keep printing ...")
            time.sleep(1)

    try:
        with linwintimeout(function, seconds=5.0, error_message='exceeded timeout of %s seconds' % 5.0):
            pass
    except Exception_TIMEOUT as e:
        print("  attention !! exceeded timeout, giving up ... %s" % e)
    
