Currently I'm scraping using PHP cURL and XPath, but it is very slow.
Each website has many URLs, with many subpages driven by JavaScript.
One website would have, say, 30 categories of products, and each category has about 70 subpages with 10 items on each.
I scrape about 150 webpages in total with the above.
One script takes one website and scrapes all the URLs from that site one at a time. At the same time, another script runs doing the same for a different website.
Each script takes one URL, fetches the page into a variable, parses it with XPath, and stores the values in the DB.
Many of the pages use JavaScript with Microsoft ASP.NET ViewState, so many loops need to be executed in order to jump from page 1 to page 2, etc., roughly as in the sketch below.
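Each jump looks something like this; the hidden-field regexes and the "ctl00$pager$next" postback target are made up, since the real names have to be read out of the page's form:

    // Rough sketch of one ViewState page jump with cURL.
    function fetchNextPage($url, $html) {
        // Pull the hidden state fields out of the current page.
        preg_match('/id="__VIEWSTATE" value="([^"]*)"/', $html, $vs);
        preg_match('/id="__EVENTVALIDATION" value="([^"]*)"/', $html, $ev);

        $post = http_build_query(array(
            '__EVENTTARGET'     => 'ctl00$pager$next', // assumed control name
            '__EVENTARGUMENT'   => '',
            '__VIEWSTATE'       => isset($vs[1]) ? $vs[1] : '',
            '__EVENTVALIDATION' => isset($ev[1]) ? $ev[1] : '',
        ));

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $post);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $next = curl_exec($ch);
        curl_close($ch);
        return $next; // HTML of the next page, ready for XPath
    }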
One script may run for about 2 hours getting everything from a single website.
What can be done to speed things up?
I have been thinking about keeping the same approach, but storing each page locally first, and only scraping them once every page from a single website has been saved, something like the sketch below.
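A minimal sketch of that two-phase idea, assuming the URL list is already collected in $urls (paths and names are just illustrative):

    // Fetch phase: just dump the raw HTML to disk.
    $cacheDir = __DIR__ . '/cache';
    foreach ($urls as $i => $url) {
        $html = file_get_contents($url);        // or the cURL/ViewState fetcher
        file_put_contents($cacheDir . "/page-$i.html", $html);
    }

    // Parse phase: no network waits, pure CPU.
    foreach (glob($cacheDir . '/*.html') as $file) {
        $doc = new DOMDocument();
        @$doc->loadHTMLFile($file);             // @ silences bad-HTML warnings
        $xpath = new DOMXPath($doc);
        // ... run the existing XPath queries and insert into the DB
    }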
Does anyone have real experience with this? JavaScript/ViewState has to be taken into consideration, so I can't just wget everything first.
You can use multi-cURL (curl_multi) to fetch multiple pages at once. If you wanted to, you could request all 30 category pages in a single multi-cURL batch. For processing each page, you can use forking (pcntl_fork). Combining those two techniques, your CPU/network becomes the bottleneck instead of the one-request-at-a-time loop.
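A minimal sketch of the curl_multi part, assuming the category URLs are already collected in $categoryUrls; pages that need ViewState postbacks would carry their own POST fields per handle instead:

    $mh = curl_multi_init();
    $handles = array();
    foreach ($categoryUrls as $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_multi_add_handle($mh, $ch);
        $handles[$url] = $ch;
    }

    // Drive every transfer until all handles are done.
    do {
        curl_multi_exec($mh, $running);
        if ($running) {
            curl_multi_select($mh);  // block until something is ready
        }
    } while ($running);

    foreach ($handles as $url => $ch) {
        $html = curl_multi_getcontent($ch);  // page body, ready for XPath
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
        // parse $html here, or hand it off to a pcntl_fork()ed worker
    }
    curl_multi_close($mh);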