I'm using Ruby+Watir to request pages through Firefox.
I would like to record the headers and content of every http request made through the browser.
Would it be possible to configure a proxy solution to store this information, either in a file or pipe it straight into an application? Could I use something such as Squid or nginx to record header/content information?
PS: Running Ubuntu x64.
If you don't want a proxy, you could just use tcpdump, e.g. tcpdump -i eth0 -n -s 0 -w output.pcap (substitute your network interface for eth0).
You can then retrospectively look at all the traffic in Wireshark etc.
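For example, to skim the captured HTTP traffic straight from the command line afterwards (assuming plain HTTP on port 80):

tcpdump -A -r output.pcap 'tcp port 80'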
The BrowserMob proxy might be a good fit for you (runs as a jar file)
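If you go the BrowserMob route, there is also a browsermob-proxy Ruby gem that wraps its REST API. A rough sketch (assuming the gem's Server/create_proxy/new_har interface and a local install of the browsermob-proxy binary; the path and names here are illustrative):

require 'rubygems'
require 'browsermob/proxy'

# Path to the browsermob-proxy startup script - adjust to your install
server = BrowserMob::Proxy::Server.new("/path/to/browsermob-proxy/bin/browsermob-proxy")
server.start

proxy = server.create_proxy   # starts a proxy instance on a free port
proxy.new_har "capture"       # begin recording a HAR of the traffic

# ... point Firefox/Watir at proxy.host:proxy.port and browse ...

proxy.har.save_to "traffic.har"  # write out everything recorded so far
proxy.close
server.stop

The resulting HAR file contains the request/response headers (and optionally content) and can be opened in most HAR viewers.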
You could also roll your own in Ruby (though this only works for HTTP, not HTTPS), e.g.
require 'rubygems'
require 'webrick/httpproxy'

# Port can be passed as the first argument; defaults to 9090
@proxy_port = (ARGV[0] || 9090).to_i

# Optional flags
@print_headers = false
@print_body = true

server = WEBrick::HTTPProxyServer.new(
  :Port => @proxy_port,
  :AccessLog => [],
  # Called for every request/response pair that passes through the proxy
  :ProxyContentHandler => Proc.new do |req, res|
    puts "-" * 75
    puts ">>> #{req.request_line.chomp}"
    req.header.keys.each { |key| puts "#{key.capitalize}: #{req.header[key]}" if @print_headers }
    puts "<<<" if @print_headers
    puts res.status_line if @print_headers
    res.header.keys.each { |key| puts "#{key.capitalize}: #{res.header[key]}" if @print_headers }
    puts res.body unless res.body.nil? or !@print_body
  end
)

trap("INT") { server.shutdown }
server.start
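To actually capture the Watir traffic, point Firefox at the proxy. A minimal sketch, assuming watir-webdriver and the proxy above listening on localhost:9090:

require 'rubygems'
require 'watir-webdriver'

# Firefox profile that routes plain HTTP through the local logging proxy
profile = Selenium::WebDriver::Firefox::Profile.new
profile.proxy = Selenium::WebDriver::Proxy.new(:http => "localhost:9090")

browser = Watir::Browser.new(:firefox, :profile => profile)
browser.goto "http://example.com"   # headers/body now show up in the proxy's output
browser.close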
For Windows there is a program called Fiddler that does exactly what you need, so I did a Google search for "Fiddler for Linux" and came up with Charles. Looks pretty strong.
I didn't notice the price tag on the non-trial version of Charles. Another app worth looking into is Poster, an add-on for Firefox. It's not clear to me whether it captures all traffic or only shows the response for requests you enter directly, but it could still help with your project.