Efficient Alternative to "in"

I'm writing a web crawler with the ultimate goal of creating a map of the path the crawler has taken. While I have no idea at what rate other, and most definitely better, crawlers pull down pages, mine clocks about 2,000 pages per minute.

The crawler works on a recursive backtracking algorithm which I have limited to a depth of 15. Furthermore, in order to prevent my crawler from endlessly revisiting pages, it stores the URL of each page it has visited in a list, and checks that list for the next candidate URL.

for href in tempUrl:
    ...
    if href not in urls:
        collect(href, parent, depth + 1)
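
Roughly, the surrounding structure looks like this (a simplified sketch; get_links below is just a placeholder for the actual fetch-and-parse step, not the real helper):

def get_links(url):
    return []                          # placeholder for fetching the page and extracting hrefs

MAX_DEPTH = 15
urls = []                              # every URL visited so far

def collect(url, parent, depth):       # parent is kept for building the path map
    if depth > MAX_DEPTH:
        return
    urls.append(url)
    tempUrl = get_links(url)
    for href in tempUrl:
        if href not in urls:           # linear scan of the whole visited list
            collect(href, url, depth + 1)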

This method seems to become a problem by the time it has pulled down around 300,000 pages. At this point the crawler on average has been clocking 500 pages per minute.

So my question is: what is another way to achieve the same functionality while improving efficiency?

I've thought that decreasing the size of each entry might help, so instead of appending the entire URL, I append the first two and the last two characters of each URL as a string. This, however, hasn't helped.

Is there a way I could do this with sets or something?

Thanks for the help

edit: As a side note, my program is not yet multithreaded. I figured I should resolve this bottleneck before I get into learning about threading.


Perhaps you could use a set instead of a list for the urls that you have seen so far.


Simply replace your "list of crawled URLs" with a "set of crawled URLs". Sets are optimised for membership tests (using the same hashing algorithm that dictionaries use) and they're a heck of a lot faster. A membership lookup on a list is done with a linear search, so it's not particularly fast. You won't need to change the actual code that does the lookup.

Check this out.

In [3]: timeit.timeit("500 in t", "t = list(range(1000))")
Out[3]: 10.020853042602539

In [4]: timeit.timeit("500 in t", "t = set(range(1000))")
Out[4]: 0.1159818172454834
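
Applied to the crawler above, the only edits are the container itself and changing append() to add(); the membership check stays exactly the same (a minimal sketch, with the fetch step stubbed out):

def get_links(url):
    return []                          # stand-in for the real fetch-and-parse step

urls = set()                           # was: urls = []

def collect(url, parent, depth):
    if depth > 15:
        return
    urls.add(url)                      # was: urls.append(url)
    for href in get_links(url):
        if href not in urls:           # same syntax, now an O(1) average-case hash lookup
            collect(href, url, depth + 1)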


I had a similar problem and ended up profiling various methods (list/file/set/sqlite) for memory versus time. See these two posts. In the end sqlite was the best choice. You can also hash the URLs to reduce their size; a sketch of that idea follows the links.

Searching for a string in a large text file - profiling various methods in python

sqlite database design with millions of 'url' strings - slow bulk import from csv
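
For the hash idea, something along these lines (a rough sketch only; the table name and schema here are made up, and MD5 is just one convenient fixed-size digest):

import hashlib
import sqlite3

conn = sqlite3.connect("crawled.db")
conn.execute("CREATE TABLE IF NOT EXISTS seen (h BLOB PRIMARY KEY)")

def url_key(url):
    return hashlib.md5(url.encode("utf-8")).digest()    # 16 bytes instead of the full URL

def is_seen(url):
    row = conn.execute("SELECT 1 FROM seen WHERE h = ?", (url_key(url),)).fetchone()
    return row is not None

def mark_seen(url):
    conn.execute("INSERT OR IGNORE INTO seen (h) VALUES (?)", (url_key(url),))
    conn.commit()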


Use a dict with the urls as keys instead (O(1) access time).
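
For instance (a rough sketch; the value stored against each key is never used):

seen = {}

def check_and_mark(url):
    if url in seen:            # O(1) average-case hash lookup
        return False
    seen[url] = True           # the value doesn't matter, only the key
    return True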

But a set will also work. See

http://wiki.python.org/moin/TimeComplexity
