Extract paragraphs from Wikipedia API using PHP cURL

https://www.devze.com 2022-12-31 07:11 (source: web)

Here's what I'm trying to do using the Wikipedia (MediaWiki) API - http://en.wikipedia.org/w/api.php

  1. Do a GET on http://en.wikipedia.org/w/api.php?format=xml&action=opensearch&search=[keyword] to retrieve a list of suggested pages for the keyword

  2. Loop through each suggested page using a GET on http://en.wikipedia.org/w/api.php?format=json&action=query&export&titles=[page title]

  3. Extract any paragraphs found on the page into an array

  4. Do something with the array

I'm stuck on #3. I can see a bunch of JSON data that includes "\n\n" between paragraphs, but for some reason the PHP explode() function doesn't work.

Essentially I just want to grab the "meat" of each Wikipedia page (not titles or any formatting, just the content) and break it by paragraph into an array.

Any ideas? Thanks!


In the raw (undecoded) JSON, the \n\n are literally those characters (backslash, n), not linefeeds. Make sure you use single quotes around the string in explode:

$parts = explode('\n\n', $text);

If you choose to use double quotes, you'll have to escape the backslashes like so:

$parts = explode("\\n\\n", $text);
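To make the difference concrete, here is a small sketch (sample strings invented for illustration). Before json_decode() the body contains the literal escape sequences, so you split on a single-quoted '\n\n'; after decoding, the escapes have become real newlines, so you split on a double-quoted "\n\n":

```php
<?php
// Raw (undecoded) JSON fragment: \n\n is four literal characters.
$raw = 'First paragraph.\n\nSecond paragraph.';  // single quotes: literal \n\n
$parts = explode('\n\n', $raw);                   // single quotes here too
// $parts === ['First paragraph.', 'Second paragraph.']

// After json_decode(), the \n escapes become real newline characters,
// so split on "\n\n" in double quotes instead.
$decoded = json_decode('"First paragraph.\n\nSecond paragraph."');
$parts2 = explode("\n\n", $decoded);
// $parts2 === ['First paragraph.', 'Second paragraph.']
```

This is likely why explode() appeared not to work: the quoting style of the delimiter has to match whether the string still holds escape sequences or actual newlines.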

On a side note: Why do you retrieve the data in two different formats? Why not go for only JSON or only XML?
