Column filtering on trace file

I am doing visualization analysis on a trace file generated from ns-2 that traces the packets sent/received/dropped at various times during the simulation.

Here is a sample trace output: http://pastebin.com/aPm3EFax

I want to filter out column 1 after grouping it into s/r/D separately, so that I can sum over each group separately to find the packet delivery fraction.

I am clueless about how to get this done (maybe some awk/Python help?).

UPDATE: Okay, I did this:

cut -d' ' -f1 wireless-out.tr | grep <x> | wc -l

where <x> is either s, r, or D.
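
That runs one pass over the file per event type. For reference, here is a minimal Python sketch that counts all three event types in a single pass; it assumes the trace file is wireless-out.tr as above, that column 1 holds the event type, and that packet delivery fraction means received packets divided by sent packets:

from collections import Counter

# count lines by the event type in column 1 (s = sent, r = received, D = dropped)
counts = Counter()
with open('wireless-out.tr') as f:
    for line in f:
        fields = line.split()
        if fields:
            counts[fields[0]] += 1

print(counts['s'], counts['r'], counts['D'])

# assumed definition: packet delivery fraction = received / sent
if counts['s']:
    print('PDF = %.4f' % (float(counts['r']) / counts['s']))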


Give this a try:

awk '{data[$1]+=$2} END{for (d in data) print d,data[d]}' inputfile

Output:

D 80.1951
r 80.059
s 160.158


import collections

# accumulate the column-2 values, keyed by the event type in column 1
result = collections.defaultdict(list)
with open('data', 'r') as f:
    for line in f:
        line = line.split()
        key = line[0]
        value = float(line[1])
        result[key].append(value)

# sum the collected values per event type
for key, values in result.items():
    print(key, sum(values))

yields:

('s', 160.15817391900003)
('r', 80.058963809000005)
('D', 80.195127232999994)

Is this close to the form you want?
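
If what you actually need is packet counts rather than summed times, a small variation reusing the result dict built above should give the delivery fraction (assuming, as before, that it is the number of r lines divided by the number of s lines):

# count entries per event type instead of summing column 2
for key, values in result.items():
    print(key, len(values))

# assumed definition: packet delivery fraction = received count / sent count
if result['s']:
    print('PDF =', float(len(result['r'])) / len(result['s']))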


import csv
import itertools

# read the space-delimited trace (Python 2 style; 'rb' is the file mode its csv module expects)
data = csv.reader(open('aPm3EFax.txt', 'rb'), delimiter=' ')

# sort the rows by column 1 so groupby sees each event type as one run,
# then sum column 2 within each group
result = [(i, sum(float(k[1]) for k in g))
          for i, g in itertools.groupby(sorted(list(data)), key=lambda x: x[0])]
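
One note on the design: itertools.groupby only merges consecutive rows with the same key, which is why the data is sorted on column 1 first. The result is a list of (event, total) pairs; for quick lookups you could turn it into a dict (totals is just an illustrative name):

totals = dict(result)
print(totals)   # e.g. {'D': 80.19..., 'r': 80.05..., 's': 160.15...}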
