I am thinking of writing an application that will loosely track competing websites so that our prices stay competitive. I looked at using the Google Shopping Search API, but it seems to lack flexibility, and not all of our competitors are fully listed there or updated regularly.
My question is: where is a good place to start with a PHP-based web crawler? I obviously want a crawler that is respectful (even to our competitors), so it should obey robots.txt and throttle its requests. (To be fair, I am even considering hosting this on a third-party server and having it crawl our own websites as well, to avoid any bias.) I searched around on Google and couldn't find any mature packages -- only some poorly written SourceForge scripts that haven't been maintained in over a year, despite being labeled as beta or alpha.
Looking for ideas or suggestions. Thanks.
A crawler in itself isn't that complicated: you load a page, then evaluate and follow the links you find.
What you might do in order to be "friendly" is to purpose-build a crawler for each site you plan on crawling. In other words, pick one site and see how it is structured, then code your GET requests and HTML parsing around that structure. Rinse and repeat for the other sites.
If they are using common shopping-cart software (anything is possible here), then obviously you get a bit of reuse.
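As a rough illustration of what a purpose-built fetch-and-parse pair could look like, here is a minimal Python sketch using requests and BeautifulSoup; the CSS selectors (div.product, h2.title, span.price, a.next-page) are hypothetical placeholders you would replace after inspecting each site's markup.

import requests
from bs4 import BeautifulSoup

def fetch(url):
    """Download one page, identifying the bot honestly and failing loudly on errors."""
    resp = requests.get(url, headers={"User-Agent": "price-watch-bot/0.1"}, timeout=30)
    resp.raise_for_status()
    return resp.text

def parse_products(html):
    """Site-specific parsing: pull (name, price) pairs out of a product listing page."""
    soup = BeautifulSoup(html, "html.parser")
    products = []
    for item in soup.select("div.product"):              # hypothetical markup
        name = item.select_one("h2.title").get_text(strip=True)
        price = item.select_one("span.price").get_text(strip=True)
        products.append((name, price))
    return products

def next_pages(html, base_url):
    """Follow only the pagination links, not every link on the page."""
    soup = BeautifulSoup(html, "html.parser")
    return [requests.compat.urljoin(base_url, a["href"])
            for a in soup.select("a.next-page")]          # hypothetical markup

The parsing functions are the part you rewrite per site; the fetch logic stays the same everywhere.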
When crawling, you might want to hit their sites during off-peak hours (this is going to be a guess). Also, don't fire off 500 requests a second; throttle it down quite a bit.
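To tie the throttling and robots.txt points together, here is a minimal "polite fetch" sketch in Python; the bot name, the example robots.txt URL, and the five-second delay are assumptions, and fetch() refers to the sketch above.

import time
import urllib.robotparser

USER_AGENT = "price-watch-bot/0.1"                         # hypothetical bot name

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://competitor.example.com/robots.txt")    # hypothetical site
rp.read()

def polite_fetch(url, delay=5.0):
    """Fetch a URL only if robots.txt allows it, then pause before the next request."""
    if not rp.can_fetch(USER_AGENT, url):
        return None                                        # skip disallowed paths
    html = fetch(url)                                      # fetch() from the sketch above
    time.sleep(delay)                                      # throttle: a few seconds between requests
    return html

A fixed delay per request keeps you well under anything resembling abusive traffic, and the robots.txt check handles the "respectful crawler" requirement from the question.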
One optional thing you might even consider is contacting these other sites to see if they want to participate in some direct data sharing. The ideal would be for everyone to have an RSS feed of their products.
Of course, depending on who you are selling to, this kind of data sharing might be considered price fixing... so proceed with caution.
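If a competitor did agree to share data, consuming an agreed-upon feed is far simpler (and far more polite) than scraping. Here is a minimal sketch using the feedparser library, assuming a plain RSS feed at a hypothetical URL; any price field would need a custom element agreed on with the partner.

import feedparser

def read_product_feed(feed_url):
    """Yield (title, link) pairs from a partner's product feed."""
    feed = feedparser.parse(feed_url)
    for entry in feed.entries:
        # Plain RSS only carries title/link/description; a price would need a
        # custom element agreed on with the partner.
        yield entry.title, entry.link

for title, link in read_product_feed("https://partner.example.com/products.rss"):
    print(title, link)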
If you are just looking for an efficient crawler, you can use this one. It can crawl around 10,000 web pages in 300 seconds on a decent server. It is written in Python; a similar curl-based implementation is also possible in PHP, though keep in mind that PHP doesn't support multithreading, which is an important consideration for an efficient crawler.
#! /usr/bin/env python
# -*- coding: iso-8859-1 -*-
# vi:ts=4:et
# $Id: retriever-multi.py,v 1.29 2005/07/28 11:04:13 mfx Exp $
#
# Usage: python retriever-multi.py <file with URLs to fetch> [<# of
# concurrent connections>]
#
import sys
import pycurl
# We should ignore SIGPIPE when using pycurl.NOSIGNAL - see
# the libcurl tutorial for more info.
try:
    import signal
    from signal import SIGPIPE, SIG_IGN
    signal.signal(signal.SIGPIPE, signal.SIG_IGN)
except ImportError:
    pass

# Get args
num_conn = 10
try:
    if sys.argv[1] == "-":
        urls = sys.stdin.readlines()
    else:
        urls = open(sys.argv[1]).readlines()
    if len(sys.argv) >= 3:
        num_conn = int(sys.argv[2])
except:
    print "Usage: %s <file with URLs to fetch> [<# of concurrent connections>]" % sys.argv[0]
    raise SystemExit

# Make a queue with (url, filename) tuples
queue = []
for url in urls:
    url = url.strip()
    if not url or url[0] == "#":
        continue
    filename = "doc_%03d.dat" % (len(queue) + 1)
    queue.append((url, filename))

# Check args
assert queue, "no URLs given"
num_urls = len(queue)
num_conn = min(num_conn, num_urls)
assert 1 <= num_conn <= 10000, "invalid number of concurrent connections"
print "PycURL %s (compiled against 0x%x)" % (pycurl.version, pycurl.COMPILE_LIBCURL_VERSION_NUM)
print "----- Getting", num_urls, "URLs using", num_conn, "connections -----"

# Pre-allocate a list of curl objects
m = pycurl.CurlMulti()
m.handles = []
for i in range(num_conn):
    c = pycurl.Curl()
    c.fp = None
    c.setopt(pycurl.FOLLOWLOCATION, 1)
    c.setopt(pycurl.MAXREDIRS, 5)
    c.setopt(pycurl.CONNECTTIMEOUT, 30)
    c.setopt(pycurl.TIMEOUT, 300)
    c.setopt(pycurl.NOSIGNAL, 1)
    m.handles.append(c)

# Main loop
freelist = m.handles[:]
num_processed = 0
while num_processed < num_urls:
    # If there is an url to process and a free curl object, add to multi stack
    while queue and freelist:
        url, filename = queue.pop(0)
        c = freelist.pop()
        c.fp = open(filename, "wb")
        c.setopt(pycurl.URL, url)
        c.setopt(pycurl.WRITEDATA, c.fp)
        m.add_handle(c)
        # store some info
        c.filename = filename
        c.url = url
    # Run the internal curl state machine for the multi stack
    while 1:
        ret, num_handles = m.perform()
        if ret != pycurl.E_CALL_MULTI_PERFORM:
            break
    # Check for curl objects which have terminated, and add them to the freelist
    while 1:
        num_q, ok_list, err_list = m.info_read()
        for c in ok_list:
            c.fp.close()
            c.fp = None
            m.remove_handle(c)
            print "Success:", c.filename, c.url, c.getinfo(pycurl.EFFECTIVE_URL)
            freelist.append(c)
        for c, errno, errmsg in err_list:
            c.fp.close()
            c.fp = None
            m.remove_handle(c)
            print "Failed: ", c.filename, c.url, errno, errmsg
            freelist.append(c)
        num_processed = num_processed + len(ok_list) + len(err_list)
        if num_q == 0:
            break
    # Currently no more I/O is pending, could do something in the meantime
    # (display a progress bar, etc.).
    # We just call select() to sleep until some more data is available.
    m.select(1.0)

# Cleanup
for c in m.handles:
    if c.fp is not None:
        c.fp.close()
        c.fp = None
    c.close()
m.close()
If you are looking for a complete price comparison system, you are actually looking for a sophisticated, custom web project. If you find one, please do share a link here; otherwise, if you are interested in having this built as freelance work, you can get in touch with me :)