Downloading a website into a string using C# WebClient or HttpWebRequest

Source: https://www.devze.com, 2023-04-06 23:20
I am trying to download the contents of a website. However, for a certain webpage the returned string contains jumbled data, with many � characters.

Here is the code I was originally using.

HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(url);
req.Method = "GET";
req.UserAgent = "Mozilla/5.0 (Windows; U; MSIE 9.0; WIndows NT 9.0; en-US))";
string source;
using (StreamReader reader = new StreamReader(req.GetResponse().GetResponseStream()))
{
    source = reader.ReadToEnd();
}
HtmlAgilityPack.HtmlDocument doc = new HtmlAgilityPack.HtmlDocument();
doc.LoadHtml(source);

I also tried alternate implementations with WebClient, but still the same result:

HtmlAgilityPack.HtmlDocument doc = new HtmlAgilityPack.HtmlDocument();
using (WebClient client = new WebClient())
using (var read = client.OpenRead(url))
{
    doc.Load(read, true);
}

From searching, I guess this might be an encoding issue, so I tried both of the solutions linked below, but I still cannot get this to work.

  • http://blogs.msdn.com/b/feroze_daud/archive/2004/03/30/104440.aspx
  • http://bytes.com/topic/c-sharp/answers/653250-webclient-encoding
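For what it's worth, before suspecting compression, one way to rule out a plain encoding mismatch is to honor the charset declared in the Content-Type header rather than letting StreamReader assume UTF-8. A minimal sketch along the lines of those links (the helper names here are illustrative, not from the question):

```csharp
// Sketch, assuming the garbling is an encoding mismatch: read the body with
// the charset the server declares instead of StreamReader's default (UTF-8).
using System;
using System.IO;
using System.Net;
using System.Text;

class CharsetAwareDownload
{
    // Map a declared charset name to an Encoding, falling back to UTF-8
    // when the header is absent or names an unknown charset.
    public static Encoding PickEncoding(string charset)
    {
        if (string.IsNullOrEmpty(charset))
            return Encoding.UTF8;
        try { return Encoding.GetEncoding(charset); }
        catch (ArgumentException) { return Encoding.UTF8; } // unknown name
    }

    public static string Download(string url)
    {
        var req = (HttpWebRequest)WebRequest.Create(url);
        using (var resp = (HttpWebResponse)req.GetResponse())
        // HttpWebResponse.CharacterSet comes from the Content-Type header,
        // e.g. "text/html; charset=ISO-8859-1".
        using (var reader = new StreamReader(resp.GetResponseStream(),
                                             PickEncoding(resp.CharacterSet)))
        {
            return reader.ReadToEnd();
        }
    }
}
```

This does not help if the body is compressed, though, which turns out to be the actual culprit here.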

The offending page that I cannot seem to download is the United_States article on the English Wikipedia (en.wikipedia.org/wiki/United_States). I have tried a number of other Wikipedia articles, though, and have not seen this issue.


Using the built-in loader in HtmlAgilityPack worked for me:

HtmlWeb web = new HtmlWeb();
HtmlDocument doc = web.Load("http://en.wikipedia.org/wiki/United_States");
string html = doc.DocumentNode.OuterHtml; // no jumbled data here

Edit:

Using a standard WebClient with your user-agent string results in an HTTP 403 (Forbidden); using this instead worked for me:

using (WebClient wc = new WebClient())
{
    wc.Headers.Add("user-agent", "Mozilla/5.0 (Windows; Windows NT 5.1; rv:1.9.2.4) Gecko/20100611 Firefox/3.6.4");
    string html = wc.DownloadString("http://en.wikipedia.org/wiki/United_States");
    HtmlDocument doc = new HtmlDocument();
    doc.LoadHtml(html);
}

Also see this SO thread: WebClient forbids opening wikipedia page?


The response is gzip-encoded; the stream needs to be decompressed before reading.

UPDATE

Based on the comment by BrokenGlass, setting the following properties should solve your problem (this worked for me):

req.Headers[HttpRequestHeader.AcceptEncoding] = "gzip, deflate";
req.AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip;
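Put together, a minimal self-contained version of the automatic approach might look like this (a sketch; note that when AutomaticDecompression is set, the framework sends the Accept-Encoding header for you, so setting it manually is optional):

```csharp
using System.IO;
using System.Net;

class GzipAwareDownload
{
    public static string Download(string url)
    {
        var req = (HttpWebRequest)WebRequest.Create(url);
        // Let the framework advertise gzip/deflate support and decode
        // the compressed response transparently.
        req.AutomaticDecompression = DecompressionMethods.GZip
                                   | DecompressionMethods.Deflate;
        using (var resp = req.GetResponse())
        using (var reader = new StreamReader(resp.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }
}
```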

Old/Manual solution:

string source;
var response = req.GetResponse();

var stream = response.GetResponseStream();
try
{
    // Check the header directly, so no "using System.Linq;" is needed
    if (response.Headers["Content-Encoding"] != null
        && response.Headers["Content-Encoding"].Contains("gzip"))
    {
        stream = new System.IO.Compression.GZipStream(stream, System.IO.Compression.CompressionMode.Decompress);
    }

    using (StreamReader reader = new StreamReader(stream))
    {
        source = reader.ReadToEnd();
    }
}
finally
{
    if (stream != null)
        stream.Dispose();
}


This is how I usually grab a page into a string (it's VB, but should translate easily):

Dim req As Net.WebRequest = Net.WebRequest.Create("http://www.cnn.com")
Dim resp As Net.HttpWebResponse = CType(req.GetResponse(), Net.HttpWebResponse)
Dim sr As New IO.StreamReader(resp.GetResponseStream())
Dim lcResults As String = sr.ReadToEnd()

and I haven't had the problems you are seeing.
