Is there a way to make HTTPS calls with the Network.Browser package? I'm not seeing it in the documentation on Hackage.
If there isn't a way to do it with browse, is there another way to fetch HTTPS pages? My current test code is:
import Network.HTTP
import Network.URI (parseURI)
import Network.HTTP.Proxy
import Data.Maybe (fromJust)
import Control.Applicative ((<$>))
import Network.Browser
retrieveUrl :: String -> IO String
retrieveUrl url = do
    rsp <- browse $ request (Request (fromJust uri) POST [] "Body")
    return $ snd (rspBody <$> rsp)
  where uri = parseURI url
I've been running nc -l -p 8000 and watching the output, and I can see that the request is not encrypted when I call retrieveUrl "https://localhost:8000".
Also, when I try a real HTTPS site, I get:
Network.Browser.request: Error raised ErrorClosed
*** Exception: user error (Network.Browser.request: Error raised ErrorClosed)
Edit: a Network.Curl solution (for doing a SOAP call):
import Network.Curl (curlGetString)
import Network.Curl.Opts
soapHeader s = CurlHttpHeaders ["Content-Type: text/xml", "SOAPAction: " ++ s]
proxy = CurlProxy "proxy.foo.org"
envelope = "myRequestEnvelope.xml"
headers = readFile envelope >>= (\x -> return [ soapHeader "myAction"
                                              , proxy
                                              , CurlPost True
                                              , CurlPostFields [x] ])
main = headers >>= curlGetString "https://service.endpoint"
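If you only need a plain HTTPS GET (no proxy or SOAP headers), a minimal sketch along the same lines is below; fetchHttps is my own name, and a real caller should check the returned CurlCode rather than ignore it:

import Network.Curl

-- Fetch a URL over HTTPS and return the body.
fetchHttps :: String -> IO String
fetchHttps url = do
    (code, body) <- curlGetString url []  -- code :: CurlCode; check for CurlOK in real use
    return body

main :: IO ()
main = withCurlDo $ fetchHttps "https://www.google.com" >>= putStrLn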
An alternative, and perhaps more "haskelly", solution (as Travis Brown put it) is http-conduit:
To just fetch https pages:
import Network.HTTP.Conduit
import qualified Data.ByteString.Lazy as L
main = simpleHttp "https://www.noisebridge.net/wiki/Noisebridge" >>= L.putStr
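The same call can just as easily write the page to disk instead of stdout; a one-line variant (the file name here is my own choice):

import Network.HTTP.Conduit
import qualified Data.ByteString.Lazy as L

-- Fetch the page over HTTPS and save it to a local file.
main = simpleHttp "https://www.noisebridge.net/wiki/Noisebridge" >>= L.writeFile "page.html"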
The example below shows how to pass url-encoded parameters.
{-# LANGUAGE OverloadedStrings #-}
import Network.HTTP.Conduit
import qualified Data.ByteString.Lazy as L
main = do
    initReq <- parseUrl "https://www.googleapis.com/urlshortener/v1/url"
    let req' = initReq { secure = True } -- Turn on https
    let req = (flip urlEncodedBody) req' $
                [ ("longUrl", "http://www.google.com/")
                -- ,
                ]
    response <- withManager $ httpLbs req
    L.putStr $ responseBody response
You can also set the method, content-type, and request body manually. The API is the same as in http-enumerator; a good example is https://stackoverflow.com/a/5614946.
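For instance, a minimal sketch of a manual POST in the same (older) http-conduit style used above; the endpoint and JSON body here are placeholders of my own, not part of the API:

{-# LANGUAGE OverloadedStrings #-}
import Network.HTTP.Conduit
import qualified Data.ByteString.Lazy as L

main = do
    initReq <- parseUrl "https://example.com/endpoint" -- hypothetical endpoint
    let req = initReq
                { method = "POST"
                , requestHeaders = [("Content-Type", "application/json")]
                , requestBody = RequestBodyLBS "{\"key\": \"value\"}"
                }
    response <- withManager $ httpLbs req
    L.putStr $ responseBody response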
I've wondered about this in the past and have always ended up just using the libcurl bindings. It would be nice to have a more Haskelly solution, but Network.Curl is very convenient.
If all you want to do is fetch a page, Network.HTTP.Wget is the simplest way. Exhibit A:
import Network.HTTP.Wget
main = putStrLn =<< wget "https://www.google.com" [] []