
Python: String counting memory error

https://www.devze.com 2023-04-05 18:22 Source: Internet
Based on the suggestions I received in this forum, I am using the following code (example) to count strings.

phrase_words = ['red car', 'no lake', 'newjersey turnpike']
lines = ['i have a red car which i drove on newjersey', 'turnpike. when i took exit 39 there was no', 'lake. i drove my car on muddy roads which turned my red', 'car into brown. driving on newjersey turnpike can be confusing.']
text = " ".join(lines)
counts = {phrase: text.count(phrase) for phrase in phrase_words}  # avoid shadowing the built-in dict

The desired output and the output of the example code is:

{'newjersey turnpike': 2, 'red car': 2, 'no lake': 1}

This code worked well on a text file smaller than 300 MB. With a file of 500 MB+ I received the following memory error:

    y=' '.join(lines)
MemoryError

How do I overcome this? Thanks for your help!


This algorithm needs only two lines in memory at a time. It assumes that no phrase will span three lines:

from itertools import tee
from collections import defaultdict

def pairwise(iterable): # recipe from the itertools docs
    "s -> (s0,s1), (s1,s2), (s2, s3), ..."
    a, b = tee(iterable)
    next(b, None)
    return zip(a, b)

d = defaultdict(int)
phrase_words = ['red car', 'no lake', 'newjersey turnpike']
lines = ['i have a red car which i drove on newjersey',
         'turnpike. when i took exit 39 there was no',
         'lake. i drove my car on muddy roads which turned my red',
         'car into brown. driving on newjersey turnpike can be confusing.']

for line1, line2 in pairwise(lines):
    both_lines = ' '.join((line1, line2))
    for phrase in phrase_words:
        # counts phrases in first line and those that span to the next
        d[phrase] += both_lines.count(phrase) - line2.count(phrase)
for phrase in phrase_words:
    d[phrase] += line2.count(phrase) # otherwise last line is not searched
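Applied to the original problem, the same pairwise scan can be driven directly by a file object, so the large file is streamed line by line instead of being joined into one giant string. A sketch wrapping the logic above in a reusable function (the filename in the usage line is a placeholder):

```python
from collections import defaultdict

def count_phrases(lines, phrases):
    """Count phrases across an iterable of lines, catching matches
    that span one line break. Holds only two lines at a time."""
    counts = defaultdict(int)
    prev = None
    last = ''
    for line in lines:
        line = line.rstrip('\n')
        if prev is not None:
            both = ' '.join((prev, line))
            for phrase in phrases:
                # subtract the new line's own matches so they are
                # not counted again on the next pair
                counts[phrase] += both.count(phrase) - line.count(phrase)
        prev = line
        last = line
    for phrase in phrases:
        counts[phrase] += last.count(phrase)  # the last line is never a `line2`
    return counts
```

Usage would then be something like `with open('big.txt') as f: counts = count_phrases(f, phrase_words)`, which never materializes the whole file in memory.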


Don't try to do everything at once. Instead of loading the whole file into memory and then parsing it, load a few (hundred) lines at a time and search for your strings within those lines.

As long as your chunks overlap at least max(len(phrase) for phrase in phrase_words) characters, you won't miss any hits.

So you could do something like:

occurs = {phrase: 0 for phrase in phrase_words}
overlap_size = max(len(phrase) for phrase in phrase_words)

text = ''
counted = 0  # length of the already-scanned overlap kept at the start of text
while lines:
    chunk, lines = lines[:1000], lines[1000:]
    text = (text + ' ' if text else '') + ' '.join(chunk)
    for phrase in phrase_words:
        # skip matches lying entirely inside the overlap (already counted),
        # but keep matches that span from the overlap into the new chunk
        occurs[phrase] += text.count(phrase, max(counted - len(phrase) + 1, 0))
    text = text[-overlap_size:]  # keep just enough to catch boundary matches
    counted = len(text)
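The same overlap idea can be packaged so the file is read in fixed-size batches of lines with `itertools.islice`, never holding more than one chunk plus a short overlap in memory. A sketch, with the function name and `chunk_lines` parameter chosen here for illustration:

```python
from itertools import islice

def count_in_file(path, phrases, chunk_lines=1000):
    """Stream a file chunk_lines at a time, keeping only a small
    overlap between chunks so phrases spanning a boundary are found."""
    occurs = {phrase: 0 for phrase in phrases}
    overlap_size = max(len(p) for p in phrases)
    text = ''
    counted = 0  # length of already-scanned overlap at the start of text
    with open(path) as f:
        while True:
            chunk = list(islice(f, chunk_lines))
            if not chunk:
                break
            joined = ' '.join(line.rstrip('\n') for line in chunk)
            text = (text + ' ' if text else '') + joined
            for phrase in phrases:
                # count only matches not entirely inside the old overlap
                occurs[phrase] += text.count(phrase,
                                             max(counted - len(phrase) + 1, 0))
            text = text[-overlap_size:]
            counted = len(text)
    return occurs
```

With `chunk_lines=1000` this reads roughly a thousand lines per iteration regardless of total file size, so a 500 MB file poses no memory problem.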
