As the topic suggests: how do I read in information from multiple text files and add each element to an array only once, regardless of how many times it occurs across the different text files?
I have started with this script, which reads in and prints out all elements in the order that they occur in the different documents.
For example, take a look at these 3 different text files containing the following data:
File 1:
2011-01-22 22:12 test1 22 1312 75 13.55 1399
2011-01-23 22:13 test4 22 1112 72 12.55 1499
File 2:
2011-01-24 22:14 test1 21 1322 75 23.55 1599
2011-01-25 22:15 test2 23 2312 77 33.55 1699
File 3:
2011-01-26 22:16 test2 20 1412 79 63.55 1799
2011-01-27 22:17 test5 12 1352 78 43.55 1999
I want to check whether the current element has already been added to the array, but as it stands my script prints out all elements.
{
    BUILDd[NR-1] = $3; len++    # store column 3 of every record, keyed by record number
}
END {
    SUBSYSTEM = substr(FILENAME, 1, length(FILENAME)-7)
    LABEL = "\"" toupper(SUBSYSTEM) "\""
    print "#{"
    print "\"buildnames\": {"
    print " \"label\": \"buildnames\","
    print " \"data\": ["
    for (i = 0; i <= len-1; i++) {
        if (i == len-1) { print " [\"" BUILDd[i] "\"]" }   # no trailing comma on the last entry
        else            { print " [\"" BUILDd[i] "\"]," }
    }
    print " ]"
    print " }"
    print "};"
}
Gives this output
#{
"buildnames": {
"label": "buildnames",
"data": [
["test1"]
["test4"]
["test1"]
["test2"]
["test2"]
["test5"]
]
}
};
But I want it to give the following:
#{
"buildnames": {
"label": "buildnames",
"data": [
["test1"]
["test2"]
["test4"]
["test5"]
]
}
};
1) In other words, first check if the elements are already in the array and, if not, add them
2) Sort the array afterwards, if possible
Thanks =)
Except for the formatting, is this what you are trying to achieve (a, b, c are files that contain your logs)?
$ cut -d" " -f3 a b c | sort | uniq
test1
test2
test4
test5
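As a side note, the `sort | uniq` pair can be collapsed, since POSIX `sort` deduplicates on its own with `-u` (same assumption as above: a, b, c are your log files):

```shell
cut -d" " -f3 a b c | sort -u
```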
Using awk:
{
    BUILDd[$3] = 1    # array keys are unique, so duplicates collapse into one entry
}
END {
    for (i in BUILDd) {
        print i
    }
}
Gives
awk -f a.awk a b c
test1
test2
test4
test5
Note that the correct sorting order here is purely accidental. The order in which items are put into an awk array is not the order in which `for (i in array)` visits them.
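If you want both the duplicate check and a guaranteed sort order, plus the JSON-like wrapper from your original script, one way is to record each distinct value once with the `!seen[$3]++` idiom and sort in the END block. This is only a sketch in portable POSIX awk: it hand-rolls an insertion sort because `asort()` is a GNU extension, and it assumes a, b, c are your log files:

```shell
awk '
# Remember each distinct value of column 3 exactly once, in order of
# first appearance.  seen[] is the "already added?" check from the question.
!seen[$3]++ { names[n++] = $3 }
END {
    # POSIX awk has no built-in array sort, so do a small insertion sort.
    for (i = 1; i < n; i++) {
        v = names[i]
        for (j = i - 1; j >= 0 && names[j] > v; j--)
            names[j + 1] = names[j]
        names[j + 1] = v
    }
    print "#{"
    print "\"buildnames\": {"
    print "  \"label\": \"buildnames\","
    print "  \"data\": ["
    for (i = 0; i < n; i++) {
        if (i < n - 1) print "    [\"" names[i] "\"],"
        else           print "    [\"" names[i] "\"]"
    }
    print "  ]"
    print " }"
    print "};"
}' a b c
```

With your three sample files this prints the four names test1, test2, test4, test5, each once and in sorted order, inside the wrapper.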