Python MemoryError loading text files

I'm trying to load ~2GB of text files (approx 35K files) in my Python script. I'm getting a MemoryError around a third of the way through, on page.read().

for f in files:
    page = open(f)
    pageContent = page.read().replace('\n', '')
    page.close()

    cFile_list.append(pageContent)

I've never dealt with objects or processes of this size in Python. I checked some of the other Python MemoryError-related threads, but I couldn't find anything that fixed my scenario. Hopefully there is something out there that can help me out.


You are trying to load too much into memory at once. This can be because of the process size limit (especially on a 32-bit OS), or because you don't have enough RAM.

A 64-bit OS (and 64-bit Python) would be able to do this fine given enough RAM, but perhaps you can simply change the way your program works so that not every page is in RAM at once.

What is cFile_list used for? Do you really need all the pages in memory at the same time?
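
If not, here is a minimal sketch of a streaming rewrite (process_page is a hypothetical stand-in for whatever you do with each page's contents):

for f in files:
    with open(f) as page:
        page_content = page.read().replace('\n', '')
    process_page(page_content)  # hypothetical per-page work; only one page is in RAM at a time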


Consider using generators, if possible in your case:

file_list = []
for file_ in files:
    file_list.append(line.replace('\n', '') for line in open(file_))

file_list is now a list of generators, which is more memory-efficient than reading the whole contents of each file into a string. As soon as you need the whole string of a particular file, you can do

string_ = ''.join(file_list[i])

Note, however, that each generator in file_list can only be iterated over once, due to the nature of iterators in Python.
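
One caveat: a generator expression evaluates its outermost iterable immediately, so open(file_) is called for every file as soon as the expression is created, and with ~35K files you can run into the OS limit on open file descriptors. A sketch of a generator function that defers opening each file until it is actually consumed (and closes it afterwards):

def stripped(filename):
    # The file is opened only when iteration starts and is
    # closed as soon as the generator is exhausted.
    with open(filename) as f:
        for line in f:
            yield line.replace('\n', '')

file_list = [stripped(file_) for file_ in files]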

See http://www.python.org/dev/peps/pep-0289/ for more details on generators.


Reading whole files into memory this way is not efficient.

The right way is to use an index.

First, build a dictionary with the start position of each line (the key is the line number, and the value is the cumulative length of all preceding lines, i.e. the byte offset at which the line starts):

t = open(file, 'rb')  # binary mode: offsets are exact bytes, safe to pass to seek()
dict_pos = {}         # maps line number -> byte offset of that line's start

offset = 0
for line_number, line in enumerate(t):
    dict_pos[line_number] = offset
    offset += len(line)

and finally, the lookup function:

def give_line(line_number):
    t.seek(dict_pos[line_number])  # jump straight to the start of the line
    return t.readline().decode()   # read and return just that one line

t.seek(dict_pos[line_number]) moves the file position directly to the start of the target line, so the readline() that follows returns exactly that line. Because this jumps straight to the required position instead of reading through the whole file, it saves a significant amount of time and makes it practical to work with huge files.
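
For example, a self-contained usage sketch reusing the give_line function above (huge.txt is just a placeholder filename):

t = open('huge.txt', 'rb')   # binary mode, so the byte offsets are exact
dict_pos = {}
offset = 0
for line_number, line in enumerate(t):
    dict_pos[line_number] = offset
    offset += len(line)

print(give_line(0))    # first line
print(give_line(500))  # line 500, fetched with one seek instead of a rescan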
