Releasing C++ resources and fork-exec?


I'm trying to spawn a new process from my C++ project using fork-exec. I'm using fork-exec in order to create a bi-directional pipe to the child process, as sketched below. But I'm afraid my resources in the forked process won't get freed properly, since the exec call completely takes over the process and never calls any destructors.
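
For reference, this is roughly the two-pipe setup I mean (a minimal sketch, not my real code; "./worker" is just a placeholder for the program being spawned):

#include <unistd.h>
#include <cstdio>

int main()
{
    int toChild[2], fromChild[2];          // [0] = read end, [1] = write end
    if (pipe(toChild) == -1 || pipe(fromChild) == -1)
    {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == -1)
    {
        perror("fork");
        return 1;
    }

    if (pid == 0)                          // child
    {
        dup2(toChild[0], STDIN_FILENO);    // read the parent's data on stdin
        dup2(fromChild[1], STDOUT_FILENO); // send replies back on stdout
        close(toChild[0]);  close(toChild[1]);
        close(fromChild[0]); close(fromChild[1]);
        execl("./worker", "worker", (char*)NULL);
        _exit(127);                        // exec failed
    }

    // parent: write to toChild[1], read from fromChild[0]
    close(toChild[0]);
    close(fromChild[1]);
    write(toChild[1], "hello\n", 6);
    char buf[64];
    ssize_t n = read(fromChild[0], buf, sizeof buf);
    if (n > 0)
        write(STDOUT_FILENO, buf, n);
    return 0;
}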

I tried circumventing this by throwing an exception and calling execl from a catch block at the end of main, but this solution doesn't destruct any singletons.

Is there any sensible way to achieve this safely? (hopefully avoiding any atexit hacks)

Ex: The following code outputs:

We are the child, gogo!
Parent proc, do nothing
Destroying object

Even though the forked process also has a copy of the singleton which needs to be destructed before I call execl.

#include <iostream>
#include <string>
#include <unistd.h>

using namespace std;

class Resources
{
public:
    ~Resources() { cout<<"Destroying object\n"; }
};

Resources& getRes()
{
    static Resources r1;
    return r1;
}

void makeChild(const string &command)
{
    pid_t pid = fork();
    switch(pid)
    {
    case -1:
        cout<<"Big error! Wtf!\n";
        return;
    case 0:
        // fork() returns 0 in the child: announce ourselves and hand the
        // command up to main via an exception so execl runs there
        cout<<"We are the child, gogo!\n";
        throw command;
    default:
        // fork() returns the child's pid in the parent
        cout<<"Parent proc, do nothing\n";
        return;
    }
}

int main(int argc, char* argv[])
{
    try
    {
        Resources& ref = getRes();
        makeChild("child");
    }
    catch(const string &command)
    {
        // execl's argument list must be terminated with a null pointer
        execl(command.c_str(), command.c_str(), (char*)NULL);
    }
    return 0;
}


There are excellent odds that you don't need to call any destructors in between fork and exec. Yeah, fork makes a copy of your entire process state, including objects that have destructors, and exec obliterates all that state. But does it actually matter? Can an observer from outside your program -- another, unrelated process running on the same computer -- tell that destructors weren't run in the child? If there's no way to tell, there's no need to run them.

Even if an external observer can tell, it may be actively wrong to run destructors in the child. The usual example for this is: imagine you wrote something to stdout before calling fork, but it got buffered in the library and so has not actually been delivered to the operating system yet. In that case, you must not call fclose or fflush on stdout in the child, or the output will happen twice! (This is also why you almost certainly should call _exit instead of exit if the exec fails.)
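
A minimal sketch of that advice (the program being exec'd, /bin/ls, is just a stand-in):

#include <cstdio>
#include <unistd.h>

int main()
{
    printf("about to spawn a child\n");
    fflush(stdout);     // drain stdio buffers before fork duplicates them

    pid_t pid = fork();
    if (pid == 0)
    {
        execl("/bin/ls", "ls", (char*)NULL);
        // exec failed: _exit, not exit, so the child neither flushes the
        // duplicated stdio buffers again nor runs atexit handlers
        _exit(127);
    }
    return 0;
}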

Having said all that, there are two common cases where you might need to do some cleanup work in the child. One is file descriptors (do not confuse these with stdio FILEs or iostream objects) that should not be open after the exec. The correct way to deal with these is to set the FD_CLOEXEC flag on them as soon as possible after they are opened (some OSes allow you to do this in open itself, but that's not universal) and/or to loop from 3 to some large number calling close (not fclose) in the child. (FreeBSD has closefrom, but as far as I know, nobody else does, which is a shame because it's really quite handy.)
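
For example (the helper names are just for illustration, and 1024 is an arbitrary upper bound; sysconf(_SC_OPEN_MAX) would be the more careful choice):

#include <fcntl.h>
#include <unistd.h>

// Mark a descriptor close-on-exec as soon as possible after opening it.
void set_cloexec(int fd)
{
    int flags = fcntl(fd, F_GETFD);
    if (flags != -1)
        fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
}

// Fallback in the child, just before exec: close everything above stderr.
void close_inherited_fds()
{
    for (int fd = 3; fd < 1024; ++fd)
        close(fd);
}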

The other case is system-global thread locks, which - this is a thorny and poorly standardized area - may wind up held by both the parent and the child, and then inherited across exec into a process that has no idea it holds a lock. This is what pthread_atfork is supposed to be for, but I have read that in practice it doesn't work reliably. The only advice I can offer is "don't be holding any locks when you call fork" - sorry.
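
For what it's worth, this is roughly how pthread_atfork is meant to be used, with the caveat above that it may not save you in practice (the handler and lock names here are arbitrary):

#include <pthread.h>

static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;

// Acquire the lock before fork so neither process inherits it mid-operation,
// then release it on both sides afterwards.
static void prepare(void)   { pthread_mutex_lock(&g_lock); }
static void in_parent(void) { pthread_mutex_unlock(&g_lock); }
static void in_child(void)  { pthread_mutex_unlock(&g_lock); }

// Call once, early (e.g. from main or a library init function).
void install_fork_handlers(void)
{
    pthread_atfork(prepare, in_parent, in_child);
}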
