
Parallel Processes Results Written To Single File

https://www.devze.com 2023-01-21 10:10 (source: web)

I am new to Linux and was introduced to the "&" recently. I have to run several traceroutes and store them in a single file, and I am curious if I am able to kick off these traceroutes in parallel?

I tried the following, but the results in the generated file are not kept apart; at least, that is how it appears to me.

traceroute  -n -z 100 www.yahoo.com >> theLog.log &
traceroute  -n -z 100 www.abc.com >> theLog.log &

Is what I am asking even possible to do? If so what commands should I be using?

Thanks for any direction given.


Perhaps you could investigate parallel (and tell us about your experience)? If you are on Ubuntu, you can do sudo apt-get install moreutils to obtain parallel.


If you want them to run in parallel, it is better to keep the intermediate results in separate files and then join them at the end. The steps are: start each traceroute writing to its own log file and store its PID, wait for them all to finish, then join the results. Something like the following:

traceroute -n -z 100 www.yahoo.com > theLog.1.log & PID1=$!
traceroute -n -z 100 www.abc.com > theLog.2.log & PID2=$!
wait $PID1 $PID2
cat theLog.1.log theLog.2.log > theLog.log
rm theLog.2.log theLog.1.log
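The steps above generalize to any number of hosts with a loop. This is a sketch of that pattern, not part of the original answer; `echo` stands in for `traceroute -n -z 100` so the sketch runs without network access — substitute the real command in practice.

```shell
#!/bin/bash
# Hosts to trace; extend as needed.
hosts="www.yahoo.com www.abc.com"

# Stand-in for `traceroute -n -z 100 <host>`; replace with the
# real command to perform actual traces.
trace() { echo "trace of $1"; }

# Start one background job per host, each writing to its own
# numbered file, and remember every PID.
pids=""
i=0
for host in $hosts; do
    i=$((i + 1))
    trace "$host" > "theLog.$i.log" &
    pids="$pids $!"
done

# Wait for all background jobs to finish.
wait $pids

# Join the per-host results into a single file, then clean up.
cat theLog.*.log > theLog.log
rm theLog.*.log
```

Because each job writes to its own file, the per-host output stays contiguous no matter how the jobs interleave while running.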


With the following command they do not really run in parallel, but you can keep using your terminal, and the results are kept apart:

{ traceroute -n -z 100 www.yahoo.com; traceroute -n -z 100 www.abc.com; } >> theLog.log &
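The two ideas can also be combined: run the traces in parallel inside a grouped background job, so the terminal stays free and the results still land in the log in a fixed order. This is my own sketch, not from the answers above; `echo` stands in for `traceroute -n -z 100` so it runs without network access.

```shell
#!/bin/bash
# Stand-in for `traceroute -n -z 100 <host>`; swap in the real
# command for actual traces.
trace() { echo "trace of $1"; }

# Run both traces in parallel, each into its own temporary file,
# then append the joined output to theLog.log. The whole group is
# backgrounded, so the terminal stays usable in the meantime.
{
    trace www.yahoo.com > theLog.1.log &
    trace www.abc.com > theLog.2.log &
    wait
    cat theLog.1.log theLog.2.log >> theLog.log
    rm theLog.1.log theLog.2.log
} &

# Wait for the backgrounded group itself before reading theLog.log;
# interactively you would just carry on and check it later.
wait
```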


As you have it written, the behavior is undefined: both background jobs append to the same file, so their lines can interleave arbitrarily. You might try what enzotib posted, or have each traceroute write to its own file and cat them together at the end.


The traceroutes in @enzotib's answer are executed one at a time, in sequence.

You can execute the traceroutes in parallel using the parallel utility suggested by @rmk:

$ /usr/bin/time parallel traceroute -n -z 100 <hosts.txt >> parallel.log
24.78user 0.63system 1:24.04elapsed 30%CPU (0avgtext+0avgdata 37456maxresident)k
72inputs+72outputs (2major+28776minor)pagefaults 0swaps

The sequential analogue is about five times slower (7:19 elapsed versus 1:24):

$ /usr/bin/time ./sequential.sh 
24.63user 0.51system 7:19.09elapsed 5%CPU (0avgtext+0avgdata 5296maxresident)k
112inputs+568outputs (1major+8759minor)pagefaults 0swaps

Where sequential.sh is:

#!/bin/bash
( while read host; do traceroute -n -z 100 "$host"; done ) <hosts.txt >>sequential.log

And hosts.txt is:

www.yahoo.com
www.abc.com
www.google.com
stackoverflow.com
facebook.com
youtube.com
live.com
baidu.com
wikipedia.org
blogspot.com
qq.com
twitter.com
msn.com
yahoo.co.jp
taobao.com
google.co.in
sina.com.cn
amazon.com
google.de
google.com.hk
