I want to write a bash command that greps all *.txt files in the current folder for a pattern and sends the results to another folder. Should I use find or a for loop? I tried using find but it seems to complicate things.
Edit: I want to copy files with a specific pattern to a different folder. For example:
A.txt
B.txt
C.txt
all have the word "foo" in them. I want grep to remove "foo" and send the result to a different folder under the same file name. I don't want to change the original files in any way.
Using for would probably be a lot easier for this than find. Something like this:
otherdir='your_other_directory'
for file in *.txt; do
    grep -q 'foo' "$file" && grep -v 'foo' < "$file" > "$otherdir/$file"
done
If your grep doesn't understand -q, then:
otherdir='your_other_directory'
for file in *.txt; do
    grep 'foo' "$file" > /dev/null && grep -v 'foo' < "$file" > "$otherdir/$file"
done
In any case, grep returns a true (zero) exit status to the shell if it finds a match, and the X && Y construct executes the Y command only if X returns a true value.
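A quick sketch of that exit-status behaviour (using a temporary directory and made-up file names, not part of the original answer):

```shell
# grep exits 0 on a match and non-zero otherwise, so in
# `grep -q pat file && cmd`, cmd only runs when the pattern is found.
tmp=$(mktemp -d)
printf 'hello foo\n' > "$tmp/has.txt"
printf 'hello bar\n' > "$tmp/lacks.txt"

grep -q foo "$tmp/has.txt"   && echo "match"   # prints: match
grep -q foo "$tmp/lacks.txt" && echo "match"   # prints nothing: grep exited 1

rm -r "$tmp"
```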
UPDATE: The above solution assumes (as noted by Johnsyweb) that you want to remove any lines that contain "foo". If you just want to remove "foo" without removing whole lines, then sed is your friend:
otherdir='your_other_directory'
for file in *.txt; do
    grep -q 'foo' "$file" && sed 's/foo//g' < "$file" > "$otherdir/$file"
done
Or:
otherdir='your_other_directory'
for file in *.txt; do
    grep 'foo' "$file" > /dev/null && sed 's/foo//g' < "$file" > "$otherdir/$file"
done
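To make the difference between the two approaches concrete, here is a small demo (sample file contents are made up): `grep -v` drops whole matching lines, while `sed 's/foo//g'` deletes only the word and keeps the line.

```shell
tmp=$(mktemp -d)
printf 'keep this line\nfoo appears here\n' > "$tmp/A.txt"

grep -v 'foo' "$tmp/A.txt"    # prints only: keep this line
sed 's/foo//g' "$tmp/A.txt"   # prints both lines, with "foo" stripped from the second

rm -r "$tmp"
```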
You could do this with find. (You need the sh -c to get the > redirection to work, and passing the file name as a positional parameter rather than embedding {} in the command string keeps odd file names from breaking it.)
find . -maxdepth 1 -name '*.txt' -exec sh -c 'grep -v foo "$1" > "new/$(basename "$1")"' sh {} \;
Or with a for loop. This will be more robust when handling unusual file names, such as files with spaces.
for FILE in *.txt; do
grep -v foo "$FILE" > "new/$FILE"
done
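A sketch of why the quotes around "$FILE" matter (temporary directory and file name are made up for the demo): a name containing a space passes through the quoted loop intact, whereas an unquoted $FILE would be split into two words.

```shell
tmp=$(mktemp -d)
mkdir "$tmp/new"
printf 'foo\nkeep me\n' > "$tmp/my notes.txt"

# Run the quoted loop in a subshell so the working directory is restored.
( cd "$tmp"
  for FILE in *.txt; do
      grep -v foo "$FILE" > "new/$FILE"
  done )

cat "$tmp/new/my notes.txt"   # prints: keep me
rm -r "$tmp"
```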
If the files are in some other directory old rather than the current directory, use basename to strip off the directory part:
for FILE in old/*.txt; do
grep -v foo "$FILE" > "new/$(basename "$FILE")"
done
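An end-to-end sketch of that loop (directory layout and file contents are invented for the demo): basename turns old/A.txt into A.txt, so the filtered copy lands in new/ under the same name.

```shell
tmp=$(mktemp -d)
mkdir "$tmp/old" "$tmp/new"
printf 'foo\nkeep\n' > "$tmp/old/A.txt"

# Run in a subshell so the working directory is restored afterwards.
( cd "$tmp"
  for FILE in old/*.txt; do
      grep -v foo "$FILE" > "new/$(basename "$FILE")"
  done )

cat "$tmp/new/A.txt"   # prints: keep
rm -r "$tmp"
```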