Optimize the calculation of a document-term matrix

I have a list of files,

files = ['file_1.txt', 'file_2.txt', 'file_3.txt', ...]

and a list of words,

words = ['def.com', 'abc', 'xyz', 'jkl - rst.com', ...]

so a word may contain a dot, a space, or a hyphen.

For each file I'm looking for the number of occurrences of each word in the list. My code is:

import re
import pandas as pd

df = pd.DataFrame(0, index=files, columns=words)
for file in files:
    with open(file, 'r', encoding='utf-8') as f:
        text = f.read().lower()
    for word in words:
        # re.escape keeps the dot in 'def.com' literal instead of matching any character
        df.loc[file, word] = len(re.findall(r'\b' + re.escape(word) + r'\b', text))

With len(files) = 5000 and len(words) = 500 this becomes very time-consuming. Is there any way to optimize it? Perhaps CountVectorizer() can be made to produce a document-term matrix restricted to the terms in my list, but other approaches are obviously welcome.
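For concreteness, here is a minimal sketch of the kind of speed-up I have in mind (not CountVectorizer itself): compile the whole word list into a single alternation pattern so each file is scanned once rather than once per word. It assumes the words are lowercase, as in the example above, and that they do not overlap inside the text; otherwise the counts can differ from the loop above.

import re
from collections import Counter
import pandas as pd

# One combined pattern: re.escape keeps dots and hyphens literal, and
# sorting longest-first makes the alternation prefer 'jkl - rst.com'
# over any shorter word that happens to be a prefix of it.
pattern = re.compile(
    r'\b(?:' + '|'.join(re.escape(w) for w in sorted(words, key=len, reverse=True)) + r')\b')

rows = []
for file in files:
    with open(file, 'r', encoding='utf-8') as f:
        text = f.read().lower()
    counts = Counter(pattern.findall(text))  # one scan of the text per file
    rows.append([counts.get(w, 0) for w in words])

df = pd.DataFrame(rows, index=files, columns=words)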

Please note that I am interested in, say, 'def.com' but not 'abcdef.com'.
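The \b anchors do make that distinction, since there is no word boundary between 'c' and 'd' inside 'abcdef.com':

import re

pattern = re.compile(r'\b' + re.escape('def.com') + r'\b')
print(bool(pattern.search('visit def.com today')))     # True
print(bool(pattern.search('visit abcdef.com today')))  # False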

Thank you in advance.
