I have a Perl script which performs some tasks, one of which is to call a system
command to "tar -cvf file.tar.....". This can often take some time, so I'd like the
command line to echo back a progress indicator, something like a # echoing back to
the screen whilst the system call is in progress.
I've been doing some digging around and stumbled across fork. Is this the best way
to go? Is it possible to fork off the system command, then create a while loop which
checks on the status of the $pid returned by the fork?
I've also seen references to waitpid .... I'm guessing I need to use this also.
fork system("tar ... ")
while ( forked process is still active ) {
    print #
    sleep 1
}
Am I barking up the wrong tree?
Many thanks John
Perl has a nice construction for this, called "pipe opens." You can read more about it by typing perldoc -f open
at a shell prompt.
# Note the use of a list for passing the command. This avoids
# having to worry about shell quoting and related errors.
open(my $tar, '-|', 'tar', 'zxvf', 'test.tar.gz', '-C', 'wherever') or die ...;
Here's a snippet showing an example:
open(my $tar, '-|', 'tar', ...) or die "Could not run tar ... - $!";
while (<$tar>) {
    print ".";
}
print "\n";
close($tar);
Replace the print "." with something that prints a hash mark every 10 to 100 lines or so to get a nice gauge bar.
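For instance, a minimal sketch of that idea, reusing the $tar handle opened above (the interval of 50 lines is an arbitrary choice):
$| = 1;                               # unbuffer STDOUT so the marks appear immediately
my $lines = 0;
while (<$tar>) {
    print "#" if ++$lines % 50 == 0;  # one hash mark per 50 files listed by tar
}
print "\n";
close($tar);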
An example that doesn't depend on the child process writing any kind of output, and just prints a dot about once a second as long as it's running:
use POSIX qw(:sys_wait_h);
$|++;
defined(my $pid = fork) or die "Couldn't fork: $!";
if (!$pid) { # Child
    exec('long_running_command', @args)
        or die "Couldn't exec: $!";
} else { # Parent
    while (! waitpid($pid, WNOHANG)) {
        print ".";
        sleep 1;
    }
    print "\n";
}
It could probably stand to have more error-checking, and there might actually be something better already on CPAN. Proc::Background seems promising for abstracting this kind of job away, but I'm not sure how reliable it is.
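If you do go the Proc::Background route, a rough, untested sketch of how it might look (the tar arguments and directory name here are placeholders):
use Proc::Background;

$| = 1;
# Launch tar in the background; new() takes the command and its arguments as a list.
my $proc = Proc::Background->new('tar', '-cvf', 'file.tar', 'some_directory');

# Poll once a second until the background process has exited.
while ($proc->alive) {
    print "#";
    sleep 1;
}
print "\n";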
$|++;
open(my $tar, 'tar ... |') or die "Could not run tar ... - $!";
while (my $file = <$tar>) {
    print $file;
}
print "\n";
close($tar);
This prints the filenames received from tar.
For showing progress during a long-running task, you will find Term::ProgressBar useful -- it does the "printing of # across the screen" functionality that you describe.
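A rough sketch of how that might be wired up here, assuming you can estimate the number of files up front (the count of 500, the archive name and the directory are placeholders):
use Term::ProgressBar;

my $total_files = 500;   # placeholder estimate, e.g. from a prior `find some_directory | wc -l`
my $progress    = Term::ProgressBar->new({ name => 'Archiving', count => $total_files });

open(my $tar, '-|', 'tar', '-cvf', 'file.tar', 'some_directory')
    or die "Could not run tar: $!";

my $done = 0;
while (<$tar>) {
    # tar -v prints one line per file; advance the bar, capping at the estimate.
    $progress->update(++$done) if $done < $total_files;
}
close($tar);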
I would try something like this:
open my $tar, "tar -cvf file.tar..... 2>/dev/null |"
    or die "can't fork: $!";
my $i = 0;
while (<$tar>) {
    print if $i++ % 1000 == 0;
}
close $tar or die "tar error: $! $?";
Expanding on what Hobbs provided: if you would like to get the data from the child process back into the parent process, you need some external conduit. I ended up using tmpfs because it is as simple as a file, but does not put I/O hits on the disk.
** Important **
You need to exit the child process, because otherwise the "child" process will continue along the same script and you will get double print statements. So in the example below, foreach (@stdoutput) would happen twice despite appearing in the script only once.
use POSIX qw(:sys_wait_h);              #needed for WNOHANG

$shm_id   = time;                       #get unique name for file - example "1452463743"
$shm_file = "/dev/shm/$shm_id.tmp";     #set filename in tmpfs
$| = 1;                                 #suffering from buffering
print ("Activity Indicator: ");         #No new line here

defined(my $pid = fork) or die "Couldn't fork: $!";
if (!$pid) { # Child
    @stdoutput = `/usr/home/script.pl -o $parameter`;  #get output of external command
    open (SHM, ">$shm_file");
    foreach (@stdoutput) {
        print SHM ("$_");               #populate file in tmpfs
    }
    close (SHM);
    exit;                               #quit the child process (will not kill parent script)
} else { # Parent
    while (! waitpid($pid, WNOHANG)) {
        print ("\#");                   #prints a progress bar
        sleep 5;
    }
}
print ("\n");                           #finish up bar and go to new line

open (SHM, "$shm_file");
@stdoutput = <SHM>;                     #now open the file and read it; the array is in the parent
close (SHM);
unlink ($shm_file);                     #deletes the tmpfs file

chomp(@stdoutput);
foreach (@stdoutput) {
    print ("$_\n");                     #print results of external script
}