Ruby Mechanize web scraper library returns file instead of page

I have recently been using the Mechanize gem in Ruby to write a scraper. Unfortunately, the URL that I am attempting to scrape returns a Mechanize::File object instead of a Mechanize::Page object upon a GET request.

I can't figure out why. Every other URL I have tried has returned a Mechanize::Page object.

Is there some way to force Mechanize to return a Page object?


Here's what's going on.

When you download a 'normal' web page, its response header will have a field that says something like Content-Type: text/html. When Mechanize sees this, it knows to interpret the page content as HTML and parses it into a Mechanize::Page object, complete with links and forms and whatnot.

But if you've ever clicked on a link that says "download CSV data" or "download PDF" or, in short, anything that's not HTML, you're receiving a response that does not have a Content-Type of text/html. Since Mechanize cannot parse non-HTML into a Mechanize::Page object, it will package up the content into a Mechanize::File object.

What you do with the Mechanize::File object depends on what you're trying to accomplish. For example, if you know that the page you visited was CSV data rather than HTML, you can extract the CSV data like this:

require 'csv'

page = web_agent.get(some_url_that_references_csv_data)
parsed_csv = CSV.parse(page.body)

If you want to be fancy, you can write your own parsers that allow Mechanize to handle non-HTML formats. See the Mechanize docs on PluggableParser if you want to go that route. But you can accomplish plenty by working directly with the Mechanize::File object.
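For instance, a minimal pluggable parser for CSV responses might look like the following sketch (the text/csv Content-Type and the CsvParser class are illustrative, not part of Mechanize; a parser just needs the same constructor signature as Mechanize::File):

require 'mechanize'
require 'csv'

# Illustrative parser: subclassing Mechanize::File gives us the
# (uri, response, body, code) constructor that Mechanize expects.
class CsvParser < Mechanize::File
  attr_reader :rows

  def initialize(uri = nil, response = nil, body = nil, code = nil)
    super
    @rows = CSV.parse(body)
  end
end

agent = Mechanize.new
agent.pluggable_parser['text/csv'] = CsvParser
# agent.get(csv_url).rows now returns the parsed CSV rows.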

Addendum in response to @user741072's comment:

If, on the other hand, the page is HTML and somebody neglected to set its Content-Type to text/html, you can write a method that swaps in the HTML parser for the default parser just long enough to parse the page. This will force parsing as HTML:

def with_html_parser(agent)
  # Remember the current default parser so we can put it back.
  original_parser = agent.pluggable_parser.default
  # Treat every response as HTML, whatever its Content-Type says.
  agent.pluggable_parser.default = agent.pluggable_parser['text/html']
  begin
    yield
  ensure
    # Restore the default even if the block raises.
    agent.pluggable_parser.default = original_parser
  end
end
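
Used like this (the URL is just a placeholder), the block's return value is the parsed page:

agent = Mechanize.new
page = with_html_parser(agent) { agent.get('http://example.com/mislabeled') }
puts page.links.map(&:href)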

Let me know if that does the trick.


When a website doesn't return a Content-Type as part of the response, you can set the Content-Type yourself in a post-connect hook:

require 'mechanize'

agent = Mechanize.new { |a|
  # Runs after every response; if the server omitted the
  # Content-Type, fill in text/html so Mechanize parses it as a page.
  a.post_connect_hooks << lambda { |_, _, response, _|
    if response.content_type.nil? || response.content_type.empty?
      response.content_type = 'text/html'
    end
  }
}
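
With that hook in place, a subsequent GET on the header-less URL should come back as a Mechanize::Page (the URL below is a placeholder):

page = agent.get('http://example.com/no-content-type')
puts page.class   # => Mechanize::Page, now that a Content-Type is set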


Have a look at the Content-Type of the specific URL in the HTTP headers with curl (curl yoururl -i). In your code you might want to check the Content-Type before you get the URL:

require 'net/http'

url = URI.parse('http://www.gesetze-im-internet.de/bundesrecht/bgb/gesamt.pdf')
# A HEAD request fetches only the headers, not the document body.
res = Net::HTTP.start(url.host, url.port) { |http| http.head(url.path) }
puts res['content-type']

#=> application/pdf
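
Putting the two together, here is a sketch that only hands the URL to Mechanize when the HEAD request reports HTML (the URL is a placeholder):

require 'mechanize'
require 'net/http'

url = URI.parse('http://example.com/some/path')
head = Net::HTTP.start(url.host, url.port) { |http| http.head(url.path) }

if head['content-type'].to_s.include?('text/html')
  page = Mechanize.new.get(url)
  puts page.title
else
  puts "Skipping non-HTML resource (#{head['content-type']})"
end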

Or you can just check whether the object Mechanize returned is a Mechanize::Page:

agent = Mechanize.new
unknown_body = agent.get(url)
if unknown_body.is_a?(Mechanize::Page)
  self.body = unknown_body
else
  puts "Discarded binary content!"
end

Be aware that this approach will be much slower, since it downloads the requested resource anyway. But it might be useful if you want to store the file for later use.
