
downloading full page text from a web domain

https://www.devze.com 2023-01-03 02:31 Source: web

First time here -- thought I'd field a question on behalf of a coworker.

Somebody in my lab is doing a content analysis (e.g. reading an article or transcript line by line and identifying relevant themes) of the web presences of various privatized neuroimaging centers (e.g. http://www.canmagnetic.com/). She's been copy/pasting entire site maps by hand, and I know I could slap something together with Python to follow links and dump full text (with line numbers) for her, but I've never actually done anything quite like this. Any ideas for how I'd get started?

Cheers, -alex


Here is pretty much everything you need to get started. Read the section "Listing 7. Simple Python Web site crawler". The examples are even written in Python.

http://www.ibm.com/developerworks/linux/library/l-spider/

Good luck!
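In case that article ever disappears, here is a minimal stdlib-only sketch of the same idea: extract the links from a fetched page and keep only the ones that stay on the starting domain (the domain below is just the one from the question, used as a placeholder):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collects absolute URLs from <a href="..."> tags."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's URL
                    self.links.append(urljoin(self.base_url, value))

def same_domain_links(html, base_url):
    """Return links found in `html` that stay on base_url's domain."""
    extractor = LinkExtractor(base_url)
    extractor.feed(html)
    domain = urlparse(base_url).netloc
    return [u for u in extractor.links if urlparse(u).netloc == domain]
```

To turn this into a crawler, fetch each page with urllib.request.urlopen, run it through same_domain_links, and keep a visited set so you don't follow the same URL twice.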


A popular web scraping framework for Python is Scrapy. Take a look at its tutorial to get started.


You're looking for "web scraping".

You can Google around and find quite a few different techniques and utilities, such as this one:

http://www.webscrape.com/

More info:

http://blogs.computerworld.com/node/324


Is it necessary to do it in Python? If not, HTTrack could be the perfect solution for you. It can copy entire sites into a hierarchy of HTML files. If you are looking for a Python solution, try Scrapy.


You could use wget with the --spider option.


Last time I had to do something like this, I started with something along these lines:

from BeautifulSoup import BeautifulSoup  # Beautiful Soup 3 (Python 2 era)
import urllib

# Fetch the page and read the raw HTML
html = urllib.urlopen("http://www.someurl.com").read()
# Parse it into a navigable tree
soup = BeautifulSoup(html)

Here is the documentation for Beautiful Soup (http://www.crummy.com/software/BeautifulSoup/documentation.html) and while it may be overkill for your purposes, it is handy to know in my opinion.
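Since the coworker specifically wants full text with line numbers, here is a sketch of that last step using only the standard library's html.parser (so nothing needs installing); the choice to skip script/style contents is my own assumption about what counts as "text":

```python
from html.parser import HTMLParser

class TextDumper(HTMLParser):
    """Collects visible text, ignoring <script> and <style> contents."""

    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skipping = 0  # depth of open script/style tags

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skipping += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skipping:
            self._skipping -= 1

    def handle_data(self, data):
        if not self._skipping and data.strip():
            self.chunks.append(data.strip())

def numbered_text(html):
    """Dump each text fragment on its own numbered line."""
    dumper = TextDumper()
    dumper.feed(html)
    return "\n".join("%4d  %s" % (i, t)
                     for i, t in enumerate(dumper.chunks, 1))
```

Running numbered_text over each fetched page and writing the result to a file per URL would give her exactly the line-numbered dumps she has been producing by hand.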

