I have a long-running Python script with a Perl worker subprocess. Data is sent to and from the child process through its stdin and stdout. Periodically, the child must be restarted.
Unfortunately, after running for a while, the script runs out of file descriptors ('too many open files'). lsof shows many leftover open pipes.
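For reference, the leak is easy to quantify without lsof. This is a Linux-only sketch (it assumes the procfs layout under /proc/&lt;pid&gt;/fd) that counts a process's open pipe descriptors:

```python
import os

def open_pipe_count(pid):
    # Linux-specific sketch: count pipe descriptors listed under /proc/<pid>/fd
    fd_dir = "/proc/%d/fd" % pid
    count = 0
    for name in os.listdir(fd_dir):
        try:
            target = os.readlink(os.path.join(fd_dir, name))
        except OSError:
            continue  # the descriptor vanished between listdir and readlink
        if target.startswith("pipe:"):
            count += 1
    return count

r, w = os.pipe()  # opening one pipe adds two pipe descriptors
print(open_pipe_count(os.getpid()))
os.close(r)
os.close(w)
```

Calling this periodically on the parent's own pid makes the growth visible long before the 'too many open files' error hits.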
What's the proper way to clean up after a Popen'd process? Here's what I'm doing right now:
def start_helper(self):
    # spawn perl helper
    cwd = os.path.dirname(__file__)
    if not cwd:
        cwd = '.'
    self.subp = subprocess.Popen(['perl', 'theperlthing.pl'], shell=False, cwd=cwd,
                                 stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                                 bufsize=1, env=perl_env)
def restart_helper(self):
    # clean up the parent's ends of the pipes
    if self.subp.stdin:
        self.subp.stdin.close()
    if self.subp.stdout:
        self.subp.stdout.close()
    if self.subp.stderr:
        self.subp.stderr.close()
    # kill
    try:
        self.subp.kill()
    except OSError:
        # can't kill a dead proc
        pass
    self.subp.wait()  # ?
    self.start_helper()
I think that's all you need:
def restart_helper(self):
    # kill the old process if it is still running
    try:
        self.subp.kill()
    except OSError:
        # can't kill a dead proc
        pass
    # reap the old process before respawning; wait() returns its exit
    # status, so if you want to know how the process ended you can check
    #   if self.subp.wait() != 0:
    # (a process that exits with 0 usually had no errors)
    self.subp.wait()
    self.start_helper()
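Putting the pieces together, a complete restart cycle that closes the pipes, kills, reaps, and respawns might look like this sketch (using "sleep 60" as a stand-in for the Perl worker from the question):

```python
import subprocess

class Helper:
    def start_helper(self):
        # "sleep 60" stands in for ['perl', 'theperlthing.pl']
        self.subp = subprocess.Popen(
            ["sleep", "60"],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    def restart_helper(self):
        # close the parent's ends of the pipes so their descriptors are freed
        for f in (self.subp.stdin, self.subp.stdout, self.subp.stderr):
            if f:
                f.close()
        try:
            self.subp.kill()
        except OSError:
            pass  # already dead
        self.subp.wait()  # reap the old process to avoid a zombie
        self.start_helper()

h = Helper()
h.start_helper()
old_pid = h.subp.pid
h.restart_helper()
print(h.subp.pid != old_pid)
```

Closing the pipes before killing also unblocks a child that is waiting on its stdin, so it can exit cleanly on its own.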
As far as I know all file objects will be closed before the popen process gets killed.
A quick experiment shows that x = open("/etc/motd"); x = 1 cleans up after itself and leaves no open file descriptor. But if you drop the last reference to a subprocess.Popen, the pipes seem to stick around. Is it possible you are re-invoking start_helper() (or even some other Popen) without explicitly closing and stopping the old one?
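That hypothesis is easy to check directly. This Linux-only sketch (it assumes /proc/self/fd exists) shows each Popen with two pipes holding two parent-side descriptors until they are closed explicitly:

```python
import os
import subprocess

def open_fd_count():
    # Linux-specific sketch: count this process's open file descriptors
    return len(os.listdir("/proc/self/fd"))

before = open_fd_count()
procs = [subprocess.Popen(["sleep", "60"],
                          stdin=subprocess.PIPE, stdout=subprocess.PIPE)
         for _ in range(5)]
during = open_fd_count()  # two parent-side pipe ends per child

# closing the pipes (and reaping the children) frees the descriptors
for p in procs:
    p.stdin.close()
    p.stdout.close()
    p.kill()
    p.wait()
after = open_fd_count()
print(before, during, after)
```

If start_helper() is called repeatedly without this cleanup, the "during" count keeps climbing by two per restart, which matches the lsof output described in the question.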