
Why doesn't pipe.close() cause EOFError during pipe.recv() in python multiprocessing?

Source: https://www.devze.com — 2023-03-16 20:39
I am sending simple objects between processes using pipes with Python's multiprocessing module. The documentation states that if a pipe has been closed, calling pipe.recv() should raise EOFError. Instead, my program is just blocking on recv() and never detects that the pipe has been closed.

Example:

import multiprocessing as m

def fn(pipe):
    print("recv:", pipe.recv())
    print("recv:", pipe.recv())

if __name__ == '__main__':
    p1, p2 = m.Pipe()
    pr = m.Process(target=fn, args=(p2,))
    pr.start()

    p1.send(1)
    p1.close()  # should cause EOFError in the remote process

And the output looks like:

recv: 1
<blocks here>

Can anyone tell me what I'm doing wrong? I see this behavior on Linux and on Windows under Cygwin, but not with the native Windows build of Python.


The forked (child) process inherits copies of its parent's file descriptors. So even though the parent calls close() on p1, the child still holds an open copy, and the underlying kernel pipe object is never released; recv() in the child therefore keeps waiting instead of raising EOFError.
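The kernel-level rule can be seen with a bare os.pipe()/os.fork() sketch (POSIX-only; the names r, w, and pid are just for illustration): a read end reports EOF only once every copy of the write descriptor, in every process, has been closed.

```python
import os

r, w = os.pipe()          # raw kernel pipe: read fd, write fd
pid = os.fork()           # the child inherits copies of BOTH descriptors

if pid == 0:              # child: write one message, then exit
    os.close(r)
    os.write(w, b"hello")
    os._exit(0)           # exiting closes the child's copy of w

os.close(w)               # parent must drop its own copy of the write end,
                          # otherwise the second read below blocks forever
data = os.read(r, 100)    # the child's message
eof = os.read(r, 100)     # b'': EOF, no write descriptor remains open anywhere
os.waitpid(pid, 0)
print(data, eof)
```

If the parent skips its os.close(w), the second os.read() never returns, which is exactly the blocking recv() seen in the question.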

To fix it, the child must close its inherited copy of the parent's end of the pipe, like so:

def fn(pipe):
    # close the inherited copy of the parent's end; p1 is visible here
    # because fork gives the child a copy of the parent's globals
    p1.close()
    print("recv:", pipe.recv())
    print("recv:", pipe.recv())  # raises EOFError once the parent closes p1
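Putting it together, here is a complete sketch of the fixed program (Python 3, fork-based start method assumed). Passing both connection ends explicitly is my variation on the answer above, so the child does not rely on an inherited global name:

```python
import multiprocessing as m

def fn(parent_end, child_end):
    parent_end.close()            # drop the inherited copy of the parent's end
    try:
        while True:
            print("recv:", child_end.recv())
    except EOFError:              # raised once no copy of the parent's end is open
        print("pipe closed")

if __name__ == '__main__':
    p1, p2 = m.Pipe()
    pr = m.Process(target=fn, args=(p1, p2))
    pr.start()
    p2.close()                    # parent keeps only its own end
    p1.send(1)
    p1.close()                    # child's recv() now raises EOFError
    pr.join()
```

With the parent_end.close() line removed, the child blocks on recv() forever, reproducing the behavior in the question.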


Building on that solution, I've observed that os.close(pipe.fileno()) can break the pipe immediately, whereas pipe.close() takes effect only once every process holding a copy has closed it. You could try that instead. Warning: you cannot call pipe.close() afterwards, yet pipe.closed still returns False. To leave things in a cleaner state, you can do:

import os

os.close(pipe.fileno())   # close the underlying descriptor directly
pipe = open('/dev/null')  # rebind the name to a throwaway file object...
pipe.close()              # ...so it can be close()d normally
