
Redirecting multiple stdouts to single file

https://www.devze.com 2022-12-28 07:37 Source: Internet

I have a program running on multiple machines with NFS and I'd like to log all their outputs into a single file. Can I just run ./my_program >> filename on every machine, or is there a concurrency issue I should be aware of? Since I'm only appending, I don't think there would be a problem, but I want to make sure.


That could work, but yes, you will have concurrency issues with it, and the log file will be basically indecipherable.

What I would recommend is a separate log file for each machine, named after the machine, and then on some periodic basis (say, nightly) concatenating the files together:

for i in /path/to/logfiles/*; do
    echo "Machine: $i"
    cat "$i"
done > filename.log

That should give you some ideas, I think.
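As a sketch of the per-machine half of that scheme, each host could append to a log named after itself. The directory under /tmp and the echo standing in for ./my_program are assumptions made so the example runs on its own:

```shell
#!/bin/sh
# Each machine appends to its own log file, named after the host.
# /tmp/logfiles here stands in for the shared NFS log directory.
logdir=/tmp/logfiles
mkdir -p "$logdir"

# "echo" simulates the output of ./my_program so the sketch
# is self-contained.
echo "run at $(date)" >> "$logdir/$(hostname).log"
```

The nightly concatenation loop above then picks up one file per host, so each machine's lines stay contiguous in the combined log.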


The NFS protocol does not support atomic append writes, so append writes are never atomic on NFS for any platform. Files WILL end up corrupt if you try.

When appending from multiple threads or processes, writes to a file are atomic provided that the file was opened in append mode (O_APPEND), the data written does not exceed the filesystem block size, and the filesystem is local. On NFS, that last condition does not hold.
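To see the local-filesystem case this describes, here is a small sketch (the path under /tmp is an assumption): several processes append short lines to one file with >>, which opens it with O_APPEND, and because each line is far below the block size, no line is torn apart:

```shell
#!/bin/sh
# Three concurrent writers append to the same local file.
f=/tmp/append-test.log
: > "$f"

for p in 1 2 3; do
  (
    i=0
    while [ "$i" -lt 100 ]; do
      # Each >> opens the file with O_APPEND; the short line is
      # written in one write() call, which is atomic on a local
      # filesystem, so lines never interleave mid-line.
      echo "writer $p line $i" >> "$f"
      i=$((i+1))
    done
  ) &
done
wait

wc -l < "$f"   # 300: every line arrived whole
```

Run the same thing against a file on an NFS mount and you can observe the corruption the answer warns about: lost or interleaved lines, since NFS provides no atomic append.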

There is a workaround, although I would not know how to do it from a shell script: a technique called close-to-open cache consistency.

