What language and libraries are suitable for a script to parse and download small numbers of web resources?
For example, some websites publish pseudo-podcasts, but not as proper RSS feeds; they just publish an MP3 file regularly, with a web page containing the playlist. I want to write a script that runs regularly, parses the relevant pages for the link and playlist info, downloads the MP3, and puts the playlist in the MP3 tags so it shows up nicely on my iPod. There are a bunch of similar applications I could write too.
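To make it concrete, here is a rough sketch of the kind of script I mean (Python purely for illustration; the URL, filename, and the requests/mutagen pairing are just assumptions):

```python
# Rough sketch only: URL, filename, and tag text are made up.
import requests
from mutagen.id3 import ID3, ID3NoHeaderError, TIT2, COMM

mp3_url = "http://example.com/audio/latest.mp3"  # would be parsed from the playlist page

# Download the MP3.
with open("latest.mp3", "wb") as f:
    f.write(requests.get(mp3_url).content)

# Write the playlist into the ID3 tags so it shows up on the iPod.
try:
    tags = ID3("latest.mp3")
except ID3NoHeaderError:
    tags = ID3()  # the file had no tags yet
tags.add(TIT2(encoding=3, text="Show 2010-01-15"))
tags.add(COMM(encoding=3, lang="eng", desc="playlist",
              text="1. Track A\n2. Track B"))
tags.save("latest.mp3")
```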
What language would you recommend? I would like the script to run on both Windows and macOS. Here are some alternatives:
- JavaScript. Just so I could use jQuery for the parsing. I don't know if jQuery works outside a browser though.
- Python. Probably good library support for doing what I want. But I don't love Python syntax.
- Ruby. I've done simple stuff (manual parsing) in Ruby before.
- Clojure. Because I want to spend a bit of time with it.
What's your favourite language and libraries for doing this? And why? Are there any nice jQuery-like libraries for other languages?
If you want to spend some time with Clojure (a very good idea IMO!), give Enlive a shot. The GitHub description reads:
a selector-based (à la CSS) templating and transformation system for Clojure
In addition to being useful for templating, it's a capable web-scraping library; see the initial part of this tutorial for some simple scraping examples. (The third one scrapes the New York Times homepage, so it's actually not as simple as all that.)
There are other tutorials available on the Web if you look for them, and Enlive itself comes with some docs and examples. (Plus the code is under 1000 lines in total and very readable, though I suppose it might be less so for someone new to the language.)
A Clojure link dump covering Enlive (which is based on TagSoup) and agents for parallel downloads. (Roundups/link dumps aren't pretty, but I did spend some time searching for different libraries. Spidering/crawling can be very easy or fairly involved depending on the structure of the sites crawled, HTML vs. XHTML, etc.)
http://blog.bestinclass.dk/index.php/2009/10/functional-social-webscraping/
http://nakkaya.com/2009/12/17/mashups-using-clojure/
http://freegeek.in/blog/2009/10/downloading-a-bunch-of-files-in-parallel-using-clojure-agents/
http://blog.maryrosecook.com/post/46601664/Writing-an-mp3-crawler-in-Clojure
http://gnuvince.wordpress.com/2008/11/18/fetching-web-comics-with-clojure-part-2/
http://htmlparser.sourceforge.net/
http://nakkaya.com/2009/11/23/converting-html-to-compojure-dsl/
Apache HTTP client:
http://github.com/rnewman/clj-apache-http
http://github.com/heyZeus/clj-web-crawler
http://japhr.blogspot.com/2009/01/clojure-http-clientclj.html
Beautiful Soup (http://www.crummy.com/software/BeautifulSoup/) is a good Python library for this; it specializes in dealing with malformed markup.
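For instance, a tiny example of it coping with sloppy markup (using the current bs4 package; the unclosed tags and URLs here are made up):

```python
from bs4 import BeautifulSoup

# Deliberately malformed markup: the <li> tags are never closed.
html = "<ul><li><a href='/ep1.mp3'>Episode 1</a><li><a href='/ep2.mp3'>Episode 2</a>"
soup = BeautifulSoup(html, "html.parser")

# Still finds both links despite the broken list structure.
for a in soup.find_all("a", href=True):
    if a["href"].endswith(".mp3"):
        print(a["href"], a.get_text(strip=True))
```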
In Ruby you also have Nokogiri (鋸), an HTML, XML, SAX, and Reader parser. Among its many features is the ability to search documents via XPath or CSS3 selectors.
As Mikael S has mentioned, hpricot is a great Ruby HTML parser. For page retrieval, though, you may want to consider a screen-scraping library like scRUBYt or Mechanize.
I highly recommend using Ruby and the hpricot library.
You should really give Python a shot.
When I design a crawler, I usually reproduce the same pattern.
Each step has a worker that picks its data from a container (usually a queue), and there is a container between each type of worker. After the first connection to the target site, every type of worker can be threaded, so access to these queues has to be synchronized. A condensed sketch follows the list below.
- Connector: the Session object from the requests library is remarkable.
- Loader: with multiple threaded Loaders, many requests can be launched in no time.
- Parser: XPath is used intensively on each element tree built with lxml.
- Validator: a set of assertions and heuristics to check the validity of the parsed data.
- Archiver: depending on what you store, how much, and how fast, NoSQL is often the easiest way to keep the retrieved data; for example, MongoDB via pymongo.
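Here is a condensed sketch of that pipeline (the worker count, seed URL, and XPath are placeholders, and the Validator/Archiver step is reduced to a print):

```python
import threading
from queue import Queue

import requests
from lxml import html as lxml_html

url_queue = Queue()     # seeds -> Loaders
page_queue = Queue()    # Loaders -> Parsers
record_queue = Queue()  # Parsers -> Validator/Archiver

def loader():
    session = requests.Session()  # the Connector: keeps cookies, reuses connections
    while True:
        url = url_queue.get()
        try:
            page_queue.put((url, session.get(url, timeout=10).text))
        except requests.RequestException:
            pass  # a real crawler would log and/or retry here
        finally:
            url_queue.task_done()

def parser():
    while True:
        url, body = page_queue.get()
        try:
            tree = lxml_html.fromstring(body)
            for href in tree.xpath("//a/@href"):  # placeholder XPath
                record_queue.put({"page": url, "link": href})
        finally:
            page_queue.task_done()

for target in (loader, parser):
    for _ in range(4):  # arbitrary worker count
        threading.Thread(target=target, daemon=True).start()

url_queue.put("http://example.com/")  # placeholder seed
url_queue.join()   # wait until every page is fetched...
page_queue.join()  # ...and parsed

while not record_queue.empty():
    print(record_queue.get())  # stands in for validation + storage (e.g. pymongo)
```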
I would probably do this with PHP, cURL, and phpQuery, but there are a lot of different ways.
What do you really want to do? If you want to learn Clojure || Ruby || C, do that. If you just want to get it done, do whatever is fastest for you. And at the very least, when you say Clojure and a library you are also saying Java and a library; there are lots, and some are very good (though I don't know which they are). The same was said for Ruby and Python above. So what do you want to do?
For a jQuery-like CSS selector library in Perl, take a look at pQuery.
Also have a look at this previous SO question for examples of HTML parsing & scraping in many languages.
- Can you provide an example of parsing HTML with your favorite parser?