What is the best way to read a file and break out the lines by a delimiter? The data returned should be a list of tuples.
Can this method be beaten? Can this be done faster, or with less memory?
    def readfile(filepath, delim):
        with open(filepath, 'r') as f:
            return [tuple(line.split(delim)) for line in f]
Your posted code reads the entire file and builds an in-memory copy of its contents as a single list of tuples, one tuple per line. Since you ask about using less memory, you may only need a generator function:
    def readfile(filepath, delim):
        with open(filepath, 'r') as f:
            for line in f:
                yield tuple(line.split(delim))
BUT! There is a major caveat! You can only iterate over the tuples returned by readfile once.
    lines_as_tuples = readfile(mydata, ',')
    for linedata in lines_as_tuples:
        # do something
So far this is okay: a generator and a list look the same. But suppose your file contains lots of floating-point numbers, and your iteration through the file computes their overall average. You could use the "# do something" code to accumulate the running sum and count, then compute the average. Now suppose you want to iterate again, this time to find each value's difference from the average. You'd think you could just add another for loop:
    for linedata in lines_as_tuples:
        # do another thing
        # BUT - this loop never does anything, because lines_as_tuples has been consumed!
BAM! This is a big difference between generators and lists. At this point in the code, the generator has been completely consumed, but no exception is raised; the for loop simply does nothing and execution continues on, silently!
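Here is a minimal, runnable sketch of that silent exhaustion, using hypothetical inline data in place of a real file:

    # A generator in the same shape as readfile: it yields tuples lazily.
    def gen():
        for raw in ("1.0,2.0", "3.0,4.0"):
            yield tuple(raw.split(","))

    g = gen()
    first_pass = [t for t in g]    # consumes the generator
    print(first_pass)              # [('1.0', '2.0'), ('3.0', '4.0')]
    second_pass = [t for t in g]   # the generator is already exhausted
    print(second_pass)             # [] - no exception, just silently empty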
In many cases, the list that you would get back is only iterated over once, in which case a conversion of readfile to a generator would be fine. But if what you want is a more persistent list, which you will access multiple times, then just using a generator will give you problems, since you can only iterate over a generator once.
My suggestion? Make readfile a generator, so that in its own little view of the world it just yields each incremental bit of the file, nice and memory-efficient. Put the burden of retaining the data onto the caller: if the caller needs to refer to the returned data multiple times, it can simply build its own list from the generator, easily done in Python using list(readfile('file.dat', ',')).
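For instance, a caller that needs the two passes from the averaging example above could materialize the generator once; the file name and delimiter here are hypothetical:

    rows = list(readfile('file.dat', ','))  # reads the file exactly once

    values = [float(v) for row in rows for v in row]
    mean = sum(values) / len(values)                        # first pass
    diffs = [float(v) - mean for row in rows for v in row]  # second pass works: rows is a list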
Memory use could be reduced by using a generator instead of a list and a list instead of a tuple, so you don't need to read the whole file into memory at once:
    def readfile(path, delim):
        return (ln.split(delim) for ln in open(path, 'r'))
You'll have to rely on the garbage collector to close the file, though. As for returning tuples: don't do it if it's not necessary, since lists are a tiny fraction faster, constructing the tuple has a minute cost, and (importantly) your lines will be split into variable-size sequences, which are conceptually lists.
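If deterministic closing matters, one sketch (assuming Python 3.3+ for yield from) keeps the generator expression but wraps it in a generator function with a with block:

    def readfile(path, delim):
        # The with block closes the file once the generator is exhausted
        # (or when the generator itself is garbage-collected).
        with open(path, 'r') as f:
            yield from (ln.split(delim) for ln in f)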
Speed can be improved only by going down to the C/Cython level, I guess; str.split is hard to beat since it's written in C, and list comprehensions are AFAIK the fastest loop construct in Python.
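If you want to measure the tuple-versus-list cost yourself, a rough timeit sketch (absolute numbers vary by machine and interpreter; the sample line is made up):

    import timeit

    line = "3.14,2.71,1.41,1.61"  # hypothetical sample line

    # split() already returns a list; tuple() adds one extra construction step.
    print(timeit.timeit(lambda: line.split(","), number=1000000))
    print(timeit.timeit(lambda: tuple(line.split(",")), number=1000000))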
More importantly, this is very clear and Pythonic code. I wouldn't try optimizing this apart from the generator bit.