
Bash script optimisation

This is the script in question:

for file in `ls products`
do
  echo -n `cat products/$file \
  | grep '<td>.*</td>' | grep -v 'img' | grep -v 'href' | grep -v 'input' \
  | head -1  | sed -e 's/^ *<td>//g' -e 's/<.*//g'`
done

I'm going to run it on 50000+ files, which would take about 12 hours with this script.

The algorithm is as follows:

  1. Find only lines containing table cells (<td>) that do not contain any of 'img', 'href', or 'input'.
  2. Select the first of them, then extract the data between the tags.

The usual bash text filters (sed, grep, awk, etc.) are available, as well as perl.
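To make the algorithm concrete, here is a hypothetical input file (not taken from the original data) and what the per-file pipeline extracts from it, assuming each cell sits on its own line:

cat > sample.html <<'EOF'
<table><tr>
  <td><img src="logo.png"></td>
  <td>Widget 3000</td>
  <td>$19.99</td>
</tr></table>
EOF

grep '<td>.*</td>' sample.html \
  | grep -v 'img' | grep -v 'href' | grep -v 'input' \
  | head -1 | sed -e 's/^ *<td>//g' -e 's/<.*//g'
# prints: Widget 3000  (the img cell is filtered out, head -1 keeps the first survivor)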


Looks like that can all be replaced by one gawk command:

gawk '
    /<td>.*<\/td>/ && !(/img/ || /href/ || /input/) {
        sub(/^ *<td>/,""); sub(/<.*/,"")
        print
        nextfile
    }
' products/*

This uses the gawk extension nextfile.
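If only a non-GNU awk is available, the same idea can be written with a per-file flag instead of nextfile (a sketch along the same lines, untested on the real data):

awk '
    FNR == 1 { done = 0 }
    !done && /<td>.*<\/td>/ && !(/img/ || /href/ || /input/) {
        sub(/^ *<td>/,""); sub(/<.*/,"")
        print
        done = 1
    }
' products/*

It still reads each file to the end, so nextfile remains the faster option where gawk is available.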

If the wildcard expansion is too big for the command line, then:

find products -type f -print | xargs gawk '...'
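If any of the 50000+ filenames might contain spaces or other awkward characters, the null-delimited form (supported by GNU and BSD find/xargs) is safer:

find products -type f -print0 | xargs -0 gawk '...'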


Here's some quick Perl to do the whole thing; it should be a lot faster.

#!/usr/bin/perl
use strict;
use warnings;

die "usage: $0 <directory>\n" unless @ARGV;
process_files($ARGV[0]);

# process each regular file in the supplied directory
sub process_files
{
  my $dirpath = shift;
  opendir(my $dh, $dirpath) or die "Can't opendir $dirpath: $!";
  foreach my $ent ( readdir($dh) ){
    if ( -f "$dirpath/$ent" ){
      get_first_text_cell("$dirpath/$ent");
    }
  }
  closedir($dh);
}

# print the content of the first html table cell
# that does not contain img, href or input tags
sub get_first_text_cell
{
  my $filename = shift;
  open(my $fh, '<', $filename) or die "Can't open $filename: $!";
  while ( my $line = <$fh> ){
    ## capture html and text inside a table cell
    if ( $line =~ /<td>([&;\d\w\s"'<>]+)<\/td>/i ){
      my $cell = $1;

      ## skip cells containing the following tags
      if ( $cell !~ /<(img|href|input)/ ){
        print "$cell\n";
        last;   # only the first matching cell per file
      }
    }
  }
  close($fh);
}

Simply invoke it by passing the directory to be searched as the first argument:

$ perl parse.pl /html/documents/


What about this (should be much faster and clearer):

for file in products/*; do
    grep -P -o '(?<=<td>).*(?=<\/td>)' "$file" | grep -vP -m 1 '(img|input|href)'
done
  • the for loop iterates over every file in products/. Note the difference from your syntax.
  • the first grep outputs just the text between <td> and </td>, without the tags, for every cell, as long as each cell is on a single line.
  • finally, the second grep outputs just the first of those lines that doesn't contain img, href or input (which is what I believe you wanted to achieve with that head -1), and exits right there, reducing the overall time and letting the next file be processed sooner.

I would have loved to use just a single grep, but then the regex would be really awful. :-)

Disclaimer: of course I haven't tested it
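If a single command per file is the goal, and since perl is allowed by the question, a one-liner along the same lines might look like this (again a sketch, assuming one cell per line as above):

for file in products/*; do
    perl -ne 'if (/<td>.*<\/td>/ && !/img|href|input/) { s/^ *<td>//; s/<.*//; print; exit }' "$file"
done

It follows the original algorithm: take the first <td> line that mentions none of img, href or input, strip the tags, and stop reading the file.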
