How to Sort Output from several Log Files by date

I have got output from several different log files:

logfile3
2010/07/21 15:28:52 INFO xxx
2010/07/21 15:31:25 INFO xxx
2010/07/21 15:31:25 DEBUG xxx

logfile1
2010/07/21 19:28:52 INFO xxx
2010/07/21 19:31:25 INFO xxx
2010/07/21 19:31:25 DEBUG xxx

logfile2
2010/07/21 13:28:52 INFO xxx
2010/07/21 13:31:25 INFO xxx
2010/07/21 13:31:25 DEBUG xxx

I would like to sort this output by date, but keep the name of the logfile above the log lines, so it should look like:

logfile2
2010/07/21 13:28:52 INFO xxx
2010/07/21 13:31:25 INFO xxx
2010/07/21 13:31:25 DEBUG xxx

logfile3
2010/07/21 15:28:52 INFO xxx
2010/07/21 15:31:25 INFO xxx
2010/07/21 15:31:25 DEBUG xxx

logfile1
2010/07/21 19:28:52 INFO xxx
2010/07/21 19:31:25 INFO xxx
2010/07/21 19:31:25 DEBUG xxx

Do you have any idea how to sort output like this with bash commands, sed or awk? Thanks a lot!

UPDATE: This is the source of the output:

for i in $( find log/ -iname *debug*.log -size +0 );do
if [ `grep -c 'ERROR' $i` -gt 0 ];then
 echo -e "\n$i"
 grep 'ERROR' --color=auto -A 5 -B 5 $i
fi
done

Martin


You may be able to get satisfactory results from this (as long as none of your filenames contain colons):

grep -C 5 --recursive 'ERROR' log/* | sort --field-separator=: --key=2

Each line will be prefixed with the filename. Your output will look something like this:

logfile2:2010/07/21 13:28:52 INFO xxx
logfile2:2010/07/21 13:31:25 INFO xxx
logfile2:2010/07/21 13:31:25 DEBUG xxx

logfile3:2010/07/21 15:28:52 INFO xxx
logfile3:2010/07/21 15:31:25 INFO xxx
logfile3:2010/07/21 15:31:25 DEBUG xxx
etc.
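
One caveat: with -C, GNU grep inserts "--" separator lines between context groups, and context lines get a "-" after the filename rather than a ":". A minimal sketch that at least drops the separators before sorting:

grep -C 5 --recursive 'ERROR' log/* | grep -v '^--$' | sort --field-separator=: --key=2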

You can use AWK to reformat that into the format that you show in your example:

grep -C 5 --recursive 'ERROR' log/* | sort --field-separator=: --key=2 |
    awk '{colon = match($0,":"); file = substr($0,1,colon - 1); 
    if (file != prevfile) {print "\n" file; prevfile = file}; 
    print substr($0,colon+1)}'

Here are several improvements to your script, in case you still use it:

find log/ -iname "*debug*.log" -size +0 | while read -r file
do
    if grep -qsm 1 'ERROR' "$file"
    then
        echo -e "\n$file"
        grep 'ERROR' --color=auto -C 5 "$file"
    fi
done
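
If the log file names might contain spaces or other odd characters, a null-delimited variant of the same loop is safer (a sketch, assuming GNU find and bash):

find log/ -iname "*debug*.log" -size +0 -print0 | while IFS= read -r -d '' file
do
    if grep -qsm 1 'ERROR' "$file"
    then
        echo -e "\n$file"
        grep 'ERROR' --color=auto -C 5 "$file"
    fi
done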


Nicholas-Knights-MacBook-Pro:~/logtest$ ls
logfile1 logfile2 logfile3
Nicholas-Knights-MacBook-Pro:~/logtest$ cat logfile*
2010/07/21 19:28:52 INFO xxx
2010/07/21 19:31:25 INFO xxx
2010/07/21 19:31:25 DEBUG xxx

2010/07/21 13:28:52 INFO xxx
2010/07/21 13:31:25 INFO xxx
2010/07/21 13:31:25 DEBUG xxx

2010/07/21 15:28:52 INFO xxx
2010/07/21 15:31:25 INFO xxx
2010/07/21 15:31:25 DEBUG xxx

Nicholas-Knights-MacBook-Pro:~/logtest$ for i in logfile* ; do printf '%s\n' "$i"; sort -n "$i"; printf '\n'; done
logfile1
2010/07/21 19:28:52 INFO xxx
2010/07/21 19:31:25 DEBUG xxx
2010/07/21 19:31:25 INFO xxx

logfile2
2010/07/21 13:28:52 INFO xxx
2010/07/21 13:31:25 DEBUG xxx
2010/07/21 13:31:25 INFO xxx

logfile3
2010/07/21 15:28:52 INFO xxx
2010/07/21 15:31:25 DEBUG xxx
2010/07/21 15:31:25 INFO xxx

Nicholas-Knights-MacBook-Pro:~/logtest$ 
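
Note that this sorts each file internally but keeps the files themselves in name order. To order the files by their first timestamp, something like this should work (a sketch, assuming the filenames contain no whitespace):

for f in $(for i in logfile*; do printf '%s %s\n' "$(head -n 1 "$i")" "$i"; done | sort | awk '{print $NF}')
do
    printf '%s\n' "$f"
    sort "$f"
    printf '\n'
done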


If you already have the output in a file (or from a script), I'd go with Perl:

$/ = undef;                  # slurp the whole input
$t = <>;
# split on the logfile-name lines, keeping them via the captured group
@t = split(/\s*\n*(logfile.*)$/m, $t);
foreach $f (@t) {
    next unless $f;
    if ($f =~ /^logfile/) {
        print $f;            # the filename header
    } else {
        # sort this file's lines and print them as a block
        print join("\n", sort(split(/\n/, $f))) . "\n\n";
    }
}

Or, a little cleaner:

@lines = ();
while ($t = <>) {
    if ($t !~ /^2\d\d\d/) {        # not a log line: flush the buffer, print the header
        print sort @lines if scalar(@lines);
        @lines = ();
        print $t;
    } else {
        push @lines, $t;           # buffer this file's log lines
    }
}
print sort @lines if scalar(@lines);   # flush the last file's lines
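
Either version reads from standard input or from a file argument, so usage would be something like this (sortlog.pl and logs.txt are made-up names for the script and the captured output):

perl sortlog.pl logs.txt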


This tags the first line of each file with its filename, sorts everything by date and time, and then moves the filename back onto its own line (it assumes the files' time ranges don't interleave, as in the sample):

$ awk 'FNR==1{$NF=$NF" "FILENAME;}1' logfile*|sort -t" " -k1 -k2|awk 'NF==5{ h=$NF;$NF="";$0=h"\n"$0 }1'
logfile2
2010/07/21 13:28:52 INFO xxx
2010/07/21 13:31:25 DEBUG xxx
2010/07/21 13:31:25 INFO xxx
logfile3
2010/07/21 15:28:52 INFO xxx
2010/07/21 15:31:25 DEBUG xxx
2010/07/21 15:31:25 INFO xxx
logfile1
2010/07/21 19:28:52 INFO xxx
2010/07/21 19:31:25 DEBUG xxx
2010/07/21 19:31:25 INFO xxx
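
To also restore the blank line between groups, as in the desired output, a small tweak to the last awk stage should do it (a sketch):

awk 'FNR==1{$NF=$NF" "FILENAME;}1' logfile* | sort -t" " -k1 -k2 |
awk 'NF==5{ h=$NF; $NF=""; $0=(NR>1 ? "\n" : "") h "\n" $0 }1'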


Thank you all.

I improved the script from Dennis Williamson to sort the errors by date. Each log file that contains an error is saved to a temp file named after the timestamp of the first error reported. These files are then sorted and concatenated. There may be cleaner solutions than using temp files.

i=0
mkdir -p tmp/logs
find log/ -iname "*debug*.log" -size +0 | while read -r file
do
    if grep -qsm 1 'ERROR' "$file"
    then
        echo -e "$i \t$file"
        errors=$(grep 'ERROR' --color=auto -C 5 "$file")
        # get the timestamp of the first line of the error context
        time=$(echo "$errors" | head -n 1 | awk '{print $1" "$2}')
        timestamp=$(date -d "$time" +%s)
        # save it to a temp file named by that timestamp
        echo -e "\n$file\n$errors" > "tmp/logs/$timestamp.$i"
    fi
    i=$((i+1))
done

# put the files together
rm -f output.txt
for f in tmp/logs/*; do cat "$f" >> output.txt; rm "$f"; done
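
Since all the epoch timestamps here have the same number of digits, plain glob expansion already yields the right order, so the loop can shrink to this (a minor simplification):

cat tmp/logs/* > output.txt
rm -f tmp/logs/*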

Opinions and suggestions for improvement appreciated!


You could try the Rust-based tool Super Speedy Syslog Searcher

(assuming you have Rust installed):

cargo install super_speedy_syslog_searcher

then

s4 logfile1 logfile2 logfile3

HOWEVER, regarding:

but keep the name of the logfile above the log lines

s4 cannot produce that exact layout. It can prepend the name of the logfile to each log message:

$ s4 -wp logfile1 logfile2 logfile3
logfile2:2010/07/21 13:28:52 INFO xxx
logfile2:2010/07/21 13:31:25 INFO xxx
logfile2:2010/07/21 13:31:25 DEBUG xxx
logfile3:2010/07/21 15:28:52 INFO xxx
logfile3:2010/07/21 15:31:25 INFO xxx
logfile3:2010/07/21 15:31:25 DEBUG xxx
logfile1:2010/07/21 19:28:52 INFO xxx
logfile1:2010/07/21 19:31:25 INFO xxx
logfile1:2010/07/21 19:31:25 DEBUG xxx
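
If you do want the header-per-file layout, s4's prefixed output can be fed through the same kind of AWK reformatter shown in the first answer (a sketch, assuming no colons in the filenames):

s4 -wp logfile1 logfile2 logfile3 |
awk '{colon = match($0, ":"); file = substr($0, 1, colon - 1)
      if (file != prevfile) {print "\n" file; prevfile = file}
      print substr($0, colon + 1)}'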
