I'm building an RSS spider. How do you keep track of the last crawl date?
Right now, what I'm thinking is this:
- Store the last pub_date I have crawled in a control file.
- When the crawl starts, compare that stored pub_date against the feed's new pub_dates. If there are newer items, start crawling them; if not, do nothing (see the sketch below).
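Here's a rough sketch of what I mean in Python. It assumes the `feedparser` library; the control file name and feed URL are just placeholders:

```python
import calendar
import os

import feedparser  # third-party: pip install feedparser

CONTROL_FILE = "last_pub_date.txt"          # hypothetical control file
FEED_URL = "https://example.com/feed.xml"   # placeholder feed URL

def read_last_pub_date():
    """Return the last crawled pub_date as a Unix timestamp, or 0 if none."""
    if not os.path.exists(CONTROL_FILE):
        return 0.0
    with open(CONTROL_FILE) as f:
        return float(f.read().strip() or 0)

def crawl():
    last_seen = read_last_pub_date()
    feed = feedparser.parse(FEED_URL)
    # Keep only entries newer than the stored pub_date
    # (assumes every entry carries a parseable date).
    new_items = [e for e in feed.entries
                 if getattr(e, "published_parsed", None)
                 and calendar.timegm(e.published_parsed) > last_seen]
    if not new_items:
        return  # nothing new, do nothing
    for entry in new_items:
        pass  # ... fetch and process the entry here ...
    # Persist the newest pub_date for the next run.
    newest = max(calendar.timegm(e.published_parsed) for e in new_items)
    with open(CONTROL_FILE, "w") as f:
        f.write(str(newest))
```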
How does everyone else handle this?
I store all the data in the database (including the last crawl date and the post dates) and take whatever dates I need from the database.
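Roughly like this, using SQLite as an example; the table and column names are illustrative, not part of a fixed schema:

```python
import sqlite3
import time

conn = sqlite3.connect("spider.db")  # placeholder database file
conn.execute("""CREATE TABLE IF NOT EXISTS posts (
                    url        TEXT PRIMARY KEY,
                    pub_date   INTEGER,
                    crawled_at INTEGER)""")

def latest_pub_date():
    """Fetch the newest pub_date already stored; 0 if the table is empty."""
    row = conn.execute("SELECT MAX(pub_date) FROM posts").fetchone()
    return row[0] or 0

def record_post(url, pub_date):
    """Insert a post, skipping duplicates by URL."""
    conn.execute(
        "INSERT OR IGNORE INTO posts (url, pub_date, crawled_at) "
        "VALUES (?, ?, ?)",
        (url, pub_date, int(time.time())))
    conn.commit()
```

The nice part is that the "last crawl date" is just a query (`MAX(pub_date)`), so there's no separate control file to keep in sync.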
I store all data in the database as well, and calculate a hash value from each item's data. That way you can look up the hash very quickly and de-duplicate on the fly.
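A minimal sketch of that idea, hashing the item's title and link with SHA-1; exactly which fields you hash is a design choice, not something fixed:

```python
import hashlib
import sqlite3

conn = sqlite3.connect("spider.db")  # placeholder database file
conn.execute("CREATE TABLE IF NOT EXISTS seen (hash TEXT PRIMARY KEY)")

def item_hash(title, link):
    """Derive a stable hash from the fields that identify an item."""
    return hashlib.sha1(f"{title}\n{link}".encode("utf-8")).hexdigest()

def is_new(title, link):
    """Check and record the hash in one step; True only on first sight."""
    h = item_hash(title, link)
    cur = conn.execute("INSERT OR IGNORE INTO seen (hash) VALUES (?)", (h,))
    conn.commit()
    return cur.rowcount == 1  # 1 if inserted, 0 if the hash already existed
```

This also catches items that are re-published with a bumped pub_date but unchanged content, which a date-only check would crawl again.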