I would like to compare one text file, line by line, with another text file, to find out how many times each line of file 1 appears in file 2. The problem is that my script runs far too many loop iterations. How do I solve this?
#!/bin/bash
# Read text file
echo "Enter file name"
read fname
# Read text file
echo "Enter file name"
read fcheck
# rm out2.txt
c1=0
for i in $(cat $fname);
do
for j in $(cat $fcheck);
do
if [[ $i == $j ]]
then
let c1=c1+1;
fi
done
echo $c1 # >> out2.txt
c1=0;
done
The problem with your for loops is that they read the files word by word, not line by line, so you get one iteration per word. Instead, do something like this:
while IFS= read -r line_a            # read file 1 one line at a time
do
    while IFS= read -r line_b        # compare against every line of file 2
    do
        if [ "$line_a" = "$line_b" ]
        then
            let c1=c1+1
        fi
    done < "$fcheck"
    echo "$c1"
    c1=0
done < "$fname"
Make it a habit to enclose variables in quotation marks, like "$var", to avoid problems with spaces.
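To see why the quotes matter, here is a minimal sketch (variable name is illustrative) of how an unquoted variable gets split on whitespace:

```shell
# Word splitting on an unquoted variable: "set --" assigns the
# positional parameters, and $# counts how many arguments resulted.
var="two words"
set -- $var      # unquoted: split into two separate arguments
echo "$#"        # prints 2
set -- "$var"    # quoted: stays one argument
echo "$#"        # prints 1
```

The unquoted form is exactly what breaks a comparison like [ $line_a = $line_b ] when a line contains spaces.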
comm is really what you need:
common_lines=$(comm -12 <(sort "$fname") <(sort "$fcheck"))
printf "%d common lines:\n" $(wc -l <<< "$common_lines")
echo "$common_lines"
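To illustrate what comm -12 produces, here is a small example with sample data (file names and contents are made up for the demonstration; this uses bash process substitution, as above):

```shell
# comm compares two sorted files; -12 suppresses columns 1 and 2
# (lines unique to each file), leaving only the lines common to both.
printf 'apple\nbanana\ncherry\n' > a.txt
printf 'banana\ncherry\ndate\n'  > b.txt
comm -12 <(sort a.txt) <(sort b.txt)
# banana
# cherry
```

Note that comm requires its inputs to be sorted, which is why both files are passed through sort first.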
I'd do
fname=file1.txt
fcheck=file2.txt
while IFS= read -r line              # no need for cat; -r keeps backslashes intact
do
    printf '%s\t%s\n' "$(grep -Fc "$line" "$fcheck")" "$line"
done < "$fname"
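One caveat with this approach: counting with -c matches lines that merely contain the pattern as a substring. Since the question asks about whole lines, adding -x (exact whole-line match) is closer to the intent. A small sketch with made-up data:

```shell
# -F  treat the pattern as a fixed string, not a regex
# -c  print the count of matching lines
# -x  require the pattern to match the entire line
printf 'foo\nfoobar\nfoo\n' > check.txt
grep -Fc  'foo' check.txt    # substring matches: 3
grep -Fxc 'foo' check.txt    # whole-line matches: 2
```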