
Interview question: Honeypots and web crawlers


I was recently reading a book as prep for an interview and came across the following question:

What will you do when your crawler runs into a honey pot that generates an infinite subgraph for you to wander about?

I wanted to get some solutions to this question. Personally, I would use some form of depth-limited search to prevent traversing continuously. Or perhaps use some form of machine learning to detect patterns. Thoughts?


Most commonly, infinite subgraphs are prevented by limiting link depth. You start from an initial set of URLs and traverse from each one only to a finite depth. While limiting the traversal depth you can use heuristics to adjust it dynamically according to webpage characteristics. More information can be found e.g. here.
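For illustration, here is a minimal sketch of depth-limited crawling in Python. `fetch_links()` is a hypothetical placeholder for page fetching and link extraction, and `MAX_DEPTH` is an assumed constant that the heuristics mentioned above could adjust per site.

```python
from collections import deque

MAX_DEPTH = 5  # assumed cutoff; a real crawler would tune this per site

def fetch_links(url):
    # Placeholder: in a real crawler this would download the page
    # and extract its outgoing <a href> targets.
    return []

def crawl(seed_urls):
    visited = set(seed_urls)
    queue = deque((url, 0) for url in seed_urls)
    while queue:
        url, depth = queue.popleft()
        if depth >= MAX_DEPTH:
            continue  # stop descending into a potentially infinite subgraph
        for link in fetch_links(url):
            if link not in visited:
                visited.add(link)
                queue.append((link, depth + 1))
    return visited
```

Because the depth counter is carried with each queued URL, even a honeypot that keeps generating new links can only push the crawler a bounded number of hops away from the seed set.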

Another option would be to try some sort of pattern matching. But depending on the algorithm that produces the subgraph, this can be a very hard task, and it will also be a fairly expensive operation.
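One cheap check along these lines is to look at the URLs themselves: generated traps often repeat path segments or grow the path without bound. The sketch below is only a rough heuristic, and both thresholds are illustrative assumptions rather than tuned values.

```python
from urllib.parse import urlparse
from collections import Counter

MAX_PATH_DEPTH = 16      # assumed limit on path segments
MAX_SEGMENT_REPEATS = 3  # assumed limit on how often a segment may repeat

def looks_like_trap(url):
    segments = [s for s in urlparse(url).path.split("/") if s]
    if len(segments) > MAX_PATH_DEPTH:
        return True
    counts = Counter(segments)
    return any(n > MAX_SEGMENT_REPEATS for n in counts.values())

print(looks_like_trap("https://example.com/a/b/a/b/a/b/a/b/page"))  # True
print(looks_like_trap("https://example.com/docs/intro"))            # False
```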

For the interview question (about detecting infinite loops):

If they ask this question, they probably want to hear a reference to the halting problem:

Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist.


You could limit the number of retrieved pages. Of course there is a problem with this: what if the site is really huge? Is Wikipedia infinite? :)

A better way is to set the threshold based on how many external sites link to the site and how many distinct pages they link to. The larger that number, the bigger your threshold. This could solve the problem of a couple of infinite honeypots that link to each other.
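A minimal sketch of that idea, assuming the crawler already tracks the number of distinct external pages linking into a site; `BASE_BUDGET` and `PER_INLINK` are made-up constants for illustration.

```python
BASE_BUDGET = 100  # pages every site gets, no matter what
PER_INLINK = 10    # extra pages per distinct external inbound link

def crawl_budget(external_inlinks):
    """Per-site page budget that grows with external popularity."""
    return BASE_BUDGET + PER_INLINK * external_inlinks

def should_fetch(pages_fetched_so_far, external_inlinks):
    return pages_fetched_so_far < crawl_budget(external_inlinks)

# A honeypot nobody else links to gets only the base budget,
# while a heavily referenced site like Wikipedia gets a far larger one.
print(crawl_budget(0))     # 100
print(crawl_budget(5000))  # 50100
```

Two honeypots linking to each other don't inflate each other's budgets, because only links from distinct external sites count toward the threshold.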

