This is a kind of weird problem. I have been using HttpWebRequest for a long time now, but I have never faced this problem before. The site I am scraping is huge; each page is at least 3 MB.
On XP it gives no error but scrapes an incomplete page.
On Windows 7 or Server 2008 it shows this error:
"Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host".
Any help will be very much appreciated.
It could be an issue with the ISP serving the pages filtering HTTP GET requests.
Try adding the following Accept header:
webRequest.Accept = "*/*";
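For context, a minimal sketch of how that fits into a full request (the URL here is a placeholder, not taken from the original question):

    using System;
    using System.IO;
    using System.Net;

    class AcceptHeaderExample
    {
        static void Main()
        {
            // Placeholder URL -- substitute the page you are scraping.
            var webRequest = (HttpWebRequest)WebRequest.Create("http://example.com/page");
            webRequest.Accept = "*/*"; // accept any content type

            using (var response = (HttpWebResponse)webRequest.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                string html = reader.ReadToEnd();
                Console.WriteLine("Read {0} characters", html.Length);
            }
        }
    }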
The webserver, or a man-in-the-middle such as a proxy, is killing your connection. I take it this request works fine in a browser. I'd do a few things:
- It's a lot of data - make sure your timeouts (Timeout and ReadWriteTimeout) are set suitably high.
- Make this request look like it originated from a real browser - use a tool like Fiddler or NetMon to extract and copy the headers from a browser making the same request (UserAgent, Accept, Content-Encoding, etc.). I've seen lots of sites choke when standard headers are missing (see the sketch after this list).
- Cookies may be important (some sites use them for basic DDoS prevention) - again, use Fiddler to observe the real browser interaction.
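Putting those suggestions together, a rough sketch (the URL and header values below are only samples - replace them with whatever Fiddler shows for a real browser hitting the same page):

    using System;
    using System.IO;
    using System.Net;

    class BrowserLikeRequest
    {
        static void Main()
        {
            // Placeholder URL; header values are samples, not the real site's.
            var request = (HttpWebRequest)WebRequest.Create("http://example.com/big-page");

            // Generous timeouts for a 3 MB page (values in milliseconds).
            request.Timeout = 300000;          // waiting for response headers
            request.ReadWriteTimeout = 300000; // reading the response body

            // Standard browser-like headers.
            request.UserAgent = "Mozilla/5.0 (Windows NT 6.1; rv:10.0) Gecko/20100101 Firefox/10.0";
            request.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";

            // Let the framework send Accept-Encoding and decompress the response.
            request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

            // A CookieContainer is required for the request to round-trip cookies.
            request.CookieContainer = new CookieContainer();

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                Console.WriteLine("Read {0} characters", reader.ReadToEnd().Length);
            }
        }
    }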
Let us know how you got on.
After spending 5 days on this, I have come to the conclusion that it is a bug in .NET. In the end I solved the problem using the WebBrowser component. I don't like it very much, because it does not work outside of the main thread, but it is really fast and scrapes those pages like a champ.
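For anyone trying the same workaround, a minimal sketch (the URL is a placeholder; WebBrowser wraps a COM control, so it needs an STA thread running a message loop, which is the "main thread" restriction mentioned above):

    using System;
    using System.Threading;
    using System.Windows.Forms;

    class WebBrowserScrape
    {
        static void Main()
        {
            // WebBrowser only works on an STA thread with a message loop.
            var thread = new Thread(() =>
            {
                var browser = new WebBrowser { ScriptErrorsSuppressed = true };
                browser.DocumentCompleted += (sender, e) =>
                {
                    // The page has finished loading at this point.
                    Console.WriteLine("Read {0} characters", browser.DocumentText.Length);
                    Application.ExitThread(); // stop the message loop
                };
                browser.Navigate("http://example.com/page"); // placeholder URL
                Application.Run(); // pump messages until ExitThread is called
            });
            thread.SetApartmentState(ApartmentState.STA);
            thread.Start();
            thread.Join();
        }
    }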