Ubuntu: how to mass download a field from multiple websites?

I do have permission to do this.

I've got a website with about 250 pages from which I need to download the 'product descriptions' and 'product images'. How do I do it? I'd like to get the data out into a CSV, so that I can put it in a DB table. Could someone point me to a good tutorial to get started on this? I should be using cURL, right?

So far, I've got this from another Stack Overflow question, "How do I transfer wget output to a file or DB?":

curl somesite.com | grep '<pattern>' | sed -e "s/^\(.*\)/INSERT INTO tableName (columnName) VALUES ('\1');/" | psql dbname
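
If I'm reading that right, for a single page I think it would look something like this (the URL, the description markup and the table/column names are just stand-ins, since I don't know the real markup yet, and I'm ignoring quoting problems in the description text):

# everything below (URL, pattern, table, column) is a made-up placeholder
curl -s "http://somesite.com/product-page.html" \
  | grep -o '<p class="description">[^<]*</p>' \
  | sed -e 's/<[^>]*>//g' \
  | sed -e "s/^\(.*\)/INSERT INTO products (description) VALUES ('\1');/" \
  | psql dbname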

And I created this, which sucks, to get the images:

#!/bin/bash

# Pull the page source, take the 8th double-quote-delimited field,
# keep the jpg paths, and fetch each one
lynx --source "www.site.com" | cut -d\" -f8 | grep jpg | while read -r image
do
    wget "www.site.com/$image"
done

I put that together by watching this video: http://www.youtube.com/watch?v=dMXzoHTTvi0.


You want to do what's called screen scraping.

Here are some links to get you started:

  • http://www.bradino.com/php/screen-scraping/
  • http://www.developertutorials.com/tutorials/php/easy-screen-scraping-in-php-simple-html-dom-library-simplehtmldom-398/
  • http://www.weberdev.com/get_example-4606.html
  • http://www.google.com/search?q=screen+scraping+php
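
Since you're already working from the shell, a minimal sketch of the whole loop in bash might look like the following. The URL scheme, the <p class="description"> markup and the src="...jpg" pattern are assumptions you'll have to adapt to the real site; for anything messier, the Simple HTML DOM library from the second link will be far less fragile than grep/sed against HTML.

#!/bin/bash

# Visit each product page and append one CSV row per page:
# "url","description","image". The URL pattern and the grep/sed
# expressions are placeholders, not the real site's markup.

out=products.csv
echo '"url","description","image"' > "$out"

for i in $(seq 1 250)
do
    url="http://www.site.com/product$i.html"
    page=$(curl -s "$url")

    # first description paragraph and first jpg referenced on the page
    desc=$(echo "$page" | grep -o '<p class="description">[^<]*</p>' | sed 's/<[^>]*>//g' | head -n 1)
    img=$(echo "$page" | grep -o 'src="[^"]*\.jpg"' | sed 's/^src="//; s/"$//' | head -n 1)

    # double up embedded quotes so the CSV field stays valid
    desc=$(echo "$desc" | sed 's/"/""/g')

    echo "\"$url\",\"$desc\",\"$img\"" >> "$out"
done

Once you have the CSV you can load it into Postgres with psql's \copy (for example \copy products FROM 'products.csv' CSV HEADER), or keep generating INSERT statements the way your existing snippet does.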