
Persist items using a POST request within a Pipeline

I want to persist items within a Pipeline by posting them to a URL.

I am using this code within the Pipeline:

from scrapy import log
from scrapy.http import FormRequest

class XPipeline(object):
    def process_item(self, item, spider):
        log.msg('in SpotifylistPipeline', level=log.DEBUG)

        yield FormRequest(url="http://www.example.com/additem",
                          formdata={'title': item['title'], 'link': item['link'],
                                    'description': item['description']})

but it seems it's not making the HTTP request.

  • Is it possible to make an HTTP request from pipelines? If not, do I have to do it in the Spider?
  • Do I need to specify a callback function? If so, which one?
  • If I can make the HTTP call, can I check the response (JSON) and return the item if everything went OK, or discard the item if it didn't get saved?

As a final thing, is there a diagram that explains the flow that Scrapy follows from beginning to end? I am getting slightly lost about what gets called when. For instance, if Pipelines returned items to Spiders, what would Spiders do with those items? What comes after a Pipeline call?

Many thanks in advance

Migsy


You can inherit your pipeline from scrapy.contrib.pipeline.media.MediaPipeline and yield Requests in get_media_requests. Responses are passed into the media_downloaded callback.
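
A minimal sketch of that idea, using the field names from the question (the class name PostItemPipeline is mine, and in newer Scrapy the base class lives at scrapy.pipelines.media.MediaPipeline):

from scrapy.contrib.pipeline.media import MediaPipeline  # scrapy.pipelines.media in newer Scrapy
from scrapy.http import FormRequest

class PostItemPipeline(MediaPipeline):

    def get_media_requests(self, item, info):
        # One POST per item; the response is delivered to media_downloaded.
        return [FormRequest("http://www.example.com/additem",
                            formdata={'title': item['title'], 'link': item['link'],
                                      'description': item['description']})]

    def media_downloaded(self, response, request, info):
        # Whatever is returned here ends up in the results list of item_completed.
        return response.status

    def item_completed(self, results, item, info):
        # results is a list of (success, value) tuples, one per request above.
        return item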


Quote:

This method is called for every item pipeline component and must either return a Item (or any descendant class) object or raise a DropItem exception. Dropped items are no longer processed by further pipeline components.

So, only the spider can yield a request with a callback. Pipelines are used for processing items.
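
For example, the spider itself can yield the POST and decide in the callback whether to keep the item. A sketch only: the spider name, URLs, and the "ok" field in the JSON reply are assumptions, not from the question.

import json

from scrapy.spider import BaseSpider   # `from scrapy import Spider` in newer Scrapy
from scrapy.http import FormRequest

class ExampleSpider(BaseSpider):
    name = "example"
    start_urls = ["http://www.example.com/list"]

    def parse(self, response):
        item = {'title': 't', 'link': 'l', 'description': 'd'}  # build your item here
        # POST the item and pass it along in meta so the callback can return it.
        yield FormRequest("http://www.example.com/additem",
                          formdata={'title': item['title'], 'link': item['link'],
                                    'description': item['description']},
                          callback=self.after_post,
                          meta={'item': item})

    def after_post(self, response):
        # Assumed response format: {"ok": true} when the item was saved.
        if json.loads(response.body).get('ok'):
            yield response.meta['item']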

You'd better describe what you want to achieve.

Quote:

is there a diagram that explains the flow that Scrapy follows from beginning to end

See the Architecture overview page in the Scrapy documentation.

Quote:

For instance, if Pipelines returned items to Spiders

Pipelines do not return items to spiders. The items returned are passed to the next pipeline.
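
For instance, with two pipelines enabled, whatever the first one returns is what the second one receives. A sketch with hypothetical project and pipeline names (note that older Scrapy versions use a plain list for ITEM_PIPELINES instead of a dict with priorities):

# settings.py — lower numbers run first
ITEM_PIPELINES = {
    'myproject.pipelines.CleanPipeline': 100,
    'myproject.pipelines.XPipeline': 300,
}

# pipelines.py
class CleanPipeline(object):
    def process_item(self, item, spider):
        # The item returned here is exactly what XPipeline.process_item receives next.
        item['title'] = item['title'].strip()
        return item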


This can be done easily using the requests library. If you don't want to use another library, look into urllib2.

import requests

from scrapy.exceptions import DropItem

class XPipeline(object):

    def process_item(self, item, spider):
        # POST the item fields to the endpoint; drop the item if the request fails.
        r = requests.post("http://www.example.com/additem",
                          data={'title': item['title'], 'link': item['link'],
                                'description': item['description']})
        if r.status_code == 200:
            return item
        else:
            raise DropItem("Failed to post item with title %s." % item['title'])
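
If you also want to check the JSON body, as asked above, you can extend the same check. A sketch, assuming the endpoint answers with a field like "saved" (that field name is an assumption, not from the question):

import requests

from scrapy.exceptions import DropItem

class XPipeline(object):

    def process_item(self, item, spider):
        r = requests.post("http://www.example.com/additem",
                          data={'title': item['title'], 'link': item['link'],
                                'description': item['description']})
        # Assumed response format: the endpoint returns {"saved": true} on success.
        try:
            saved = r.status_code == 200 and r.json().get('saved')
        except ValueError:  # body was not valid JSON
            saved = False
        if saved:
            return item
        raise DropItem("Failed to post item with title %s." % item['title'])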
