Speed of many regular expressions in Python


I'm writing a python program that deals with a fair amount of strings/files. My problem is that I'm going to be presented with a fairly short piece of text, and I'm going to need to search it for instances of a fairly broad range of words/phrases.

I'm thinking I'll need to compile regular expressions as a way of matching these words/phrases in the text. My concern, however, is that this will take a lot of time.

My question is how fast is the process of repeatedly compiling regular expressions, and then searching through a small body of text to find matches? Would I be better off using some string method?

Edit: So, I guess an example of my question would be: how expensive would it be to compile and search with one regular expression versus, say, iterating 'if "word" in string' five times?


You should try to compile all your regexps into a single one using the | operator. That way, the regexp engine will do most of the optimizations for you. Use the grouping operator () to determine which regexp matched.
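
For example, a minimal sketch of that approach (the word list here is made up for illustration):

import re

words = ["foo", "bar", "baz"]   # hypothetical words/phrases to search for
# One capturing group per alternative so we can tell which one matched.
pattern = re.compile("|".join("(%s)" % re.escape(w) for w in words))

m = pattern.search("some text containing bar somewhere")
if m:
    # m.lastindex is the 1-based number of the group that matched.
    print("matched:", words[m.lastindex - 1])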


If speed is of the essence, you are better off running some tests before you decide how to code your production application.

First of all, you said that you are searching for words, which suggests that you may be able to do this using split() to break the string up on whitespace and then use simple string comparisons to do your search.
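
For example, a minimal sketch of that split-and-compare idea (the text and word list are made up):

text = "a short piece of text to scan"
targets = {"piece", "scan", "missing"}   # hypothetical words to look for

# Split on whitespace and keep the tokens that appear in the target set.
found = [w for w in text.split() if w in targets]
print(found)   # ['piece', 'scan']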

Definitely do compile your regular expressions, and do a timing test comparing that with the plain string functions. Check the documentation for the str type for a full list of its methods.


Your requirement appears to be searching a text for the first occurrence of any one of a collection of strings. Presumably you then wish to restart the search to find the next occurrence, and so on until the searched string is exhausted. Only plain old string comparison is involved.

The classic algorithm for this task is Aho-Corasick for which there is a Python extension (written in C). This should beat the socks off any alternative that's using the re module.
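
A rough sketch, assuming the pyahocorasick package (pip install pyahocorasick) and a made-up word list:

import ahocorasick

words = ["foo", "bar", "baz"]          # hypothetical search terms
A = ahocorasick.Automaton()
for idx, w in enumerate(words):
    A.add_word(w, (idx, w))            # the value can be anything you want back
A.make_automaton()

text = "text with bar and baz in it"
for end_index, (idx, w) in A.iter(text):
    print(w, "ends at index", end_index)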


If you want to know how fast compiling regex patterns is, you need to benchmark it.

Here is how I do that. It compiles each pattern 1 million times.

import re
import time

def taken(f):
    # Decorator that reports how long the wrapped call took.
    def wrap(*args):
        t1 = time.time()
        r = f(*args)
        t2 = time.time()
        print(t2 - t1, "s taken")
        return r
    return wrap

@taken
def regex_compile_test(x):
    # Note: re.compile keeps an internal cache of recently used patterns,
    # so repeated compiles of the same pattern are mostly cache hits.
    for i in range(1000000):
        re.compile(x)
    print("for", x, end=" ")

# sample tests
regex_compile_test("a")
regex_compile_test("[a-z]")
regex_compile_test(r"[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}")

It took around 5 seconds for each pattern on my computer.

for a 4.88999986649 s taken
for [a-z] 4.70300006866 s taken
for [A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4} 4.78200006485 s taken

The real bottleneck is not in compiling patterns; it's in extracting text with re.findall or replacing with re.sub. If you use those against several MB of text, it's quite slow.

If the string you are looking for is fixed, use plain str.find; it's faster than a regex.
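
For instance, a rough timeit sketch comparing the two on a short, fixed string (the text is made up; numbers will vary by machine):

import re
import timeit

text = "a fairly short piece of text to search " * 10
pattern = re.compile("search")

print(timeit.timeit(lambda: text.find("search"), number=100000))
print(timeit.timeit(lambda: pattern.search(text), number=100000))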

Actually, if you give us samples of your text and your regex patterns, we could give you a better idea; there are many great regex and Python people out there.

Hope this helps; sorry if my answer isn't what you need.


When you compile the regexp, it is converted into a state machine representation. Provided the regexp is efficiently expressed, it should still be very fast to match. Compiling the regexp can be expensive though, so you will want to do that up front, and as infrequently as possible. Ultimately though, only you can answer if it is fast enough for your requirements.

There are other string searching approaches, such as the Boyer-Moore algorithm. But I'd wager that searching separately for multiple strings costs much more than a single regexp that can branch on each successive character.


This is a question that can readily be answered by just trying it.

>>> import re
>>> import timeit
>>> find = ['foo', 'bar', 'baz']
>>> pattern = re.compile("|".join(find))
>>> with open('c:\\temp\\words.txt', 'r') as f:
        words = f.readlines()

>>> len(words)
235882
>>> timeit.timeit('r = filter(lambda w: any(s for s in find if w.find(s) >= 0), words)', 'from __main__ import find, words', number=30)
18.404569854548527
>>> timeit.timeit('r = filter(lambda w: any(s for s in find if s in w), words)', 'from __main__ import find, words', number=30)
10.953313759150944
>>> timeit.timeit('r = filter(lambda w: pattern.search(w), words)', 'from __main__ import pattern, words', number=30)
6.8793022576891758

It looks like you can reasonably expect regular expressions to be faster than using find or in. Though if I were you I'd repeat this test with a case that was more like your real data.


If you're just searching for a particular substring, use str.find() instead.


Depending on what you're doing it might be better to use a tokenizer and loop through the tokens to find matches.
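
For example, a minimal sketch of that tokenizer idea, using re.findall as the tokenizer and a made-up set of target words:

import re

targets = {"foo", "bar"}   # hypothetical words to match
tokens = re.findall(r"\w+", "some text with foo, and bar.")
matches = [t for t in tokens if t in targets]
print(matches)   # ['foo', 'bar']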

However, when it comes to short pieces of text, regexes have incredibly good performance. Personally, I only remember running into problems when text sizes became ridiculous, like 100k words or something like that.

Furthermore, if you are worried about the speed of actual regex compilation rather than matching, you might benefit from creating a daemon that compiles all the regexes then goes through all the pieces of text in a big loop or runs as a service. This way you will only have to compile the regexes once.
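
For example, a minimal sketch of compiling once up front and reusing the patterns across many texts (the pattern list and texts are made up):

import re

# Compiled once, e.g. at module import or daemon start-up, then reused for every text.
PATTERNS = [re.compile(p) for p in (r"foo", r"bar\d+", r"baz")]

def scan(text):
    return [p.pattern for p in PATTERNS if p.search(text)]

for text in ["has foo", "has bar42", "nothing here"]:
    print(scan(text))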


In the general case, you can use the "in" keyword:

for line in open("file"):
    if "word" in line:
        print(line.rstrip())

regex is usually not needed when you use Python :)
