Swapping of columns in a file and remove duplicates

I have a file like this:

term1 term2
term3 term4
term2 term1
term5 term3
..... .....

What I need to do is remove duplicates regardless of the order in which the two terms appear; for example:

term1 term2

and

term2 term1

are duplicates to me. It is a really long file, so I'm not sure what would be fastest. Does anyone have an idea on how to do this? awk perhaps?


Ordering the words within each line and then sorting the lines is easy with Perl:

./scriptbelow.pl < datafile.txt | uniq

#!/usr/bin/perl

# Normalize every input line, sort the normalized lines, then print them;
# piping the result through uniq collapses the now-adjacent duplicates.
foreach (sort map { reorder($_) } <>) {
    print;
}

# Put the words of one line into a fixed (alphabetical) order.
sub reorder {
    return join(' ', sort { $a cmp $b } split(/\s+/, $_)) . "\n";
}


In Perl:

while ($t = <>) {
    # Key on the line with its words sorted, so "a b" and "b a" collide.
    @ts = sort split(/\s+/, $t);
    $t1 = join(" ", @ts);
    print $t unless exists $done{$t1};   # keep only the first occurrence
    $done{$t1}++;
}

Or:

cat yourfile | perl -n -e  'print join(" ", sort split) . "\n";' | sort | uniq

I'm not sure which one performs better for huge files: the first one builds a huge Perl hash in memory, while the second one invokes an external sort command...


To preserve original ordering, a simple (but not necessarily fast and/or memory-efficient) solution in awk:

awk '!seen[$1 " " $2] && !seen[$2 " " $1] { seen[$1 " " $2] = 1; print }' file

Edit: a sorting alternative in Ruby:

ruby -n -e 'puts $_.split.sort.join(" ")' | sort | uniq


If you want to remove both "term1 term2" and "term2 term1":

join -v 1 -1 1 <(sort input_file) -v 2 -2 2 <(sort -k 2 input_file) | uniq


awk '($2 FS $1 in _) {                        # the reversed pair was seen earlier
    delete _[$1 FS $2]; delete _[$2 FS $1]    # drop both orderings entirely
    next
}
{ _[$1 FS $2] }                               # remember this pair (value unused)
END { for (i in _) print i }' file

Output:

$ cat file
term1 term2
term3 term4
term2 term1
term5 term3
term3 term5
term6 term7

$ ./shell.sh
term6 term7
term3 term4


If the file is very, very long, you may want to consider writing the program in C/C++. I think this would be the fastest solution, especially if you have to scan the whole file for each line that you read. Processing big files with repetitive operations through shell functions gets very slow.
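As a rough illustration of that idea, here is a minimal C++ sketch (not the original poster's code): it assumes exactly two whitespace-separated terms per line, normalizes each pair alphabetically, and keeps only the first occurrence by tracking the normalized keys in an unordered_set.

#include <iostream>
#include <sstream>
#include <string>
#include <unordered_set>

int main() {
    std::unordered_set<std::string> seen;    // normalized "a b" keys already printed
    std::string line;
    while (std::getline(std::cin, line)) {
        std::istringstream iss(line);
        std::string a, b;
        if (!(iss >> a >> b)) continue;      // skip blank or malformed lines
        // Normalize so "term1 term2" and "term2 term1" map to the same key.
        std::string key = (a < b) ? a + " " + b : b + " " + a;
        if (seen.insert(key).second)         // insert() returns true for new keys
            std::cout << line << '\n';
    }
    return 0;
}

Compile and run it as a filter, e.g. g++ -O2 dedup.cpp -o dedup && ./dedup < datafile.txt (the file names here are only examples). Like the Perl hash solution above, it keeps one key per distinct pair in memory.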


The way I would do it (if you don't need to keep the columns paired) is:

sed 's/ /\n/g' test.txt | sort -u

Here's what the output looks like (ignore my funky prompt):

[~]
==> cat test.txt
term1 term2
term3 term4
term2 term1
term5 term3
[~]
==> sed 's/ /\n/g' test.txt | sort -u
term1
term2
term3
term4
term5
