I have lines like these, and I want to know how many lines I actually have...
09:16:39 AM all 2.00 0.00 4.00 0.00 0.00 0.00 0.00 0.00 94.00
09:16:40 AM all 5.00 0.00 0.00 4.00 0.00 0.00 0.00 0.00 91.00
09:16:41 AM all 0.00 0.00 4.00 0.00 0.00 0.00 0.00 0.00 96.00
09:16:42 AM all 3.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 96.00
09:16:43 AM all 0.00 0.00 1.00 0.00 1.00 0.00 0.00 0.00 98.00
09:16:44 AM all 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
09:16:45 AM all 2.00 0.00 6.00 0.00 0.00 0.00 0.00 0.00 92.00
Is there a way to count them all using Linux commands?
Use wc:

wc -l <filename>

This will output the number of lines in <filename>:
$ wc -l /dir/file.txt
3272485 /dir/file.txt
Or, to omit the <filename> from the result, use wc -l < <filename>:
$ wc -l < /dir/file.txt
3272485
You can also pipe data to wc:
$ cat /dir/file.txt | wc -l
3272485
$ curl yahoo.com --silent | wc -l
63
To count all lines, use:
$ wc -l file
To filter and count only lines containing a pattern, use:
$ grep -w "pattern" -c file
Or use -v to invert the match:
$ grep -w "pattern" -c -v file
See the grep man page for the -e, -i and -x options...
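As a quick sanity check, the matched and inverted counts always add up to the total line count. A minimal sketch (the file name and contents are illustrative):

```shell
# Create a small sample file: three lines, two containing the word "error"
printf 'error: disk full\nok\nerror: timeout\n' > sample.log

matched=$(grep -w "error" -c sample.log)      # lines containing the word
inverted=$(grep -w "error" -c -v sample.log)  # lines NOT containing it
total=$(wc -l < sample.log)

echo "$matched + $inverted = $total"          # 2 + 1 = 3
```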
wc -l <file.txt>
Or
command | wc -l
There are many ways; using wc is one:
wc -l file
Others include:
awk 'END{print NR}' file
sed -n '$=' file
(GNU sed)
grep -c ".*" file
wc -l does not count lines.
Yes, this answer may be a bit late to the party, but I haven't found anyone documenting a more robust solution in the answers yet.
Contrary to popular belief, POSIX does not require files to end with a newline character at all. Yes, the definition of a POSIX 3.206 Line is as follows:
A sequence of zero or more non-<newline> characters plus a terminating <newline> character.
However, what many people are not aware of is that POSIX also defines POSIX 3.195 Incomplete Line as:
A sequence of one or more non-<newline> characters at the end of the file.
Hence, files without a trailing LF are perfectly POSIX-compliant.
If you choose not to support both EOF types, your program is not POSIX-compliant.
As an example, let's have a look at the following file.
1 This is the first line.
2 This is the second line.
No matter the EOF, I'm sure you would agree that there are two lines. You figured that out by looking at how many lines have been started, not at how many lines have been terminated. In other words, as per POSIX, these two files both have the same number of lines:
1 This is the first line.\n
2 This is the second line.\n
1 This is the first line.\n
2 This is the second line.
The man page is relatively clear about wc counting newlines, with a newline just being a 0x0a character:
NAME
wc - print newline, word, and byte counts for each file
Hence, wc doesn't even attempt to count what you might call a "line". Using wc to count lines can very well lead to miscounts, depending on how your input file ends.
POSIX-compliant solution
You can use grep to count lines, just as in the example above. This solution is both more robust and precise, and it supports all the different flavors of what a line in your file could be:
- POSIX 3.75 Blank Line
- POSIX 3.145 Empty Line
- POSIX 3.195 Incomplete Line
- POSIX 3.206 Line
$ grep -c ^ FILE
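A quick demonstration of the difference (the file name is illustrative): the file below ends in an incomplete line, so wc -l reports one line fewer than grep -c ^.

```shell
# Two lines started, but no newline terminating the second one
printf 'first line\nsecond line (incomplete)' > nofinalnewline.txt

wc -l < nofinalnewline.txt    # counts newline characters: 1
grep -c ^ nofinalnewline.txt  # counts started lines: 2
```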
The tool wc is the "word counter" in UNIX and UNIX-like operating systems, but you can also use it to count lines in a file by adding the -l option.
wc -l foo
will count the number of lines in foo. You can also pipe output from a program, like this: ls -l | wc -l, which will tell you how many files are in the current directory (plus one, for the "total" line that ls -l prints first).
If you want to check the total number of lines of all the files in a directory, you can use find and wc:
find . -type f -exec wc -l {} +
Use wc
:
wc -l <filename>
If all you want is the number of lines (and not the line count with the stupid file name tacked on):
wc -l < /filepath/filename.ext
As previously mentioned, these also work (but are inferior for other reasons):
awk 'END{print NR}' file # not on all unixes
sed -n '$=' file # (GNU sed) also not on all unixes
grep -c ".*" file # overkill and probably also slower
Use nl like this:
nl filename
From man nl:
Write each FILE to standard output, with line numbers added. With no FILE, or when FILE is -, read standard input.
I've been using this:
cat myfile.txt | wc -l
I prefer it over the accepted answer because it does not print the filename, and you don't have to use awk to fix that. Accepted answer:
wc -l myfile.txt
But I think the best one is GGB667's answer:
wc -l < myfile.txt
I will probably be using that from now on. It's slightly shorter than my way. I'm leaving my old way up in case anyone prefers it. The output is the same with both methods.
The above are the preferred methods, but the cat command can also help:
cat -n <filename>
This will show you the whole content of the file, with line numbers.
wc -l file_name
e.g.: wc -l file.txt

It will give you the total number of lines in that file.

To get the last line, use tail -1 file_name.
I saw this question while I was looking for a way to count the lines of multiple files, so if you want to count the lines of multiple .txt files you can do this:
cat *.txt | wc -l
It will also run on a single .txt file ;)
cat file.log | wc -l | grep -oE '[0-9]+'

grep -oE '[0-9]+': returns the digits only. (Note that \d is not supported in POSIX extended regular expressions, so use [0-9] instead.)
To count the number of lines and store the result in a variable, use this command:
count=$(wc -l < file.txt)
echo "Number of lines: $count"
wc -l <filename>
This will give you the number of lines and the filename in the output.
Eg.
wc -l 24-11-2019-04-33-01-url_creator.log
Output
63 24-11-2019-04-33-01-url_creator.log
Use

wc -l <filename> | cut -d' ' -f1

to get only the number of lines in the output.
Eg.
wc -l 24-11-2019-04-33-01-url_creator.log | cut -d' ' -f1
Output
63
I tried wc -l to get the number of lines from the file.

To do more filtering, for example to count the number of commented lines in the file, use grep '#' Filename.txt | wc -l:

echo "Number of lines in the file $FILENAME"
wc -l < "$FILENAME"
echo "Total number of commented lines in $FILENAME"
grep '#' "$FILENAME" | wc -l
Just in case: it's also possible to do this with many files, in conjunction with the find command.
find . -name '*.java' | xargs wc -l
wc -l file.txt | cut -f3 -d" "
Returns only the number of lines. (Note that the field index depends on how your wc pads its output.)
Redirecting/piping the file's contents to wc -l should suffice, like the following:

cat /etc/fstab | wc -l

which then provides the number of lines only.
Or count all lines in subdirectories with a file name pattern (e.g. logfiles with timestamps in the file name):
wc -l ./**/*_SuccessLog.csv
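Note that the `**` recursive glob is off by default in bash (enable it with `shopt -s globstar`; zsh has it out of the box). A portable alternative that recurses without shell-specific globbing uses find; a minimal sketch, with an illustrative file-name pattern:

```shell
# Count lines of all matching logfiles in all subdirectories,
# without relying on recursive globbing support in the shell
find . -name '*_SuccessLog.csv' -exec wc -l {} +
```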
This drop-in shell function works like a charm. Just add the following snippet to your .bashrc file (or the equivalent for your shell environment).
# ---------------------------------------------
# Count lines in a file
#
# @1 = path to file
#
# EXAMPLE USAGE: `count_file_lines $HISTFILE`
# ---------------------------------------------
count_file_lines() {
    local subj=$(wc -l "$1")
    subj="${subj//$1/}"
    echo "${subj//[[:space:]]}"
}
This works in bash and zsh; note that local and the ${var//} expansions are extensions, so a strictly POSIX shell (plain sh) may not support it.
Awk saves lives (and lines too):
awk '{c++};END{print c}' < file
If you want to make sure you are not counting empty lines, you can do:
awk '/^./{c++} END{print c}' < file
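A quick comparison of the two counts (the input text is illustrative):

```shell
printf 'a\n\nb\n' | awk '{c++} END{print c}'       # all lines, including empty: 3
printf 'a\n\nb\n' | awk '/^./{c++} END{print c}'   # non-empty lines only: 2
```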
I know this is old, but still: count filtered lines.
My file looks like:
Number of files sent
Company 1 file: foo.pdf OK
Company 1 file: foo.csv OK
Company 1 file: foo.msg OK
Company 2 file: foo.pdf OK
Company 2 file: foo.csv OK
Company 2 file: foo.msg Error
Company 3 file: foo.pdf OK
Company 3 file: foo.csv OK
Company 3 file: foo.msg Error
Company 4 file: foo.pdf OK
Company 4 file: foo.csv OK
Company 4 file: foo.msg Error
If I want to know how many files are sent OK:
grep "OK" <filename> | wc -l
OR
grep -c "OK" filename
As others said, wc -l is the best solution, but for future reference you can use Perl:
perl -lne 'END { print $. }'
$. contains the line number, and the END block executes at the end of the script.
I just made a program to do this (with node):
npm install gimme-lines
gimme-lines verbose --exclude=node_modules,public,vendor --exclude_extensions=html
https://github.com/danschumann/gimme-lines/tree/master
If you're on some sort of BSD-based system like macOS, I'd recommend the GNU version of wc. It doesn't trip up on certain binary files the way BSD wc does, and its performance is still somewhat usable. BSD tail, on the other hand, is slow as ............zzzzzzzzzz...........
As for AWK, there is one minor caveat: since it operates under the default assumption that records end in \n, if your file happens to lack a trailing newline delimiter, AWK will count one more line than either BSD or GNU wc. Also, if you're piping in input with no newlines at all, such as from echo -n, then depending on whether you measure in the END { } section or at FNR==1, NR will be different.
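To see the caveat in action (the input string is illustrative): with no trailing newline, awk's NR counts the incomplete final record, while wc -l does not.

```shell
printf 'no trailing newline' | awk 'END { print NR }'  # 1: awk counts the incomplete record
printf 'no trailing newline' | wc -l                   # 0: no newline characters to count
```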