We have a text file that is generated automatically and placed on a web server. The task is to read the file line by line and insert the records into a database. The following code is in C#:
WebRequest request = WebRequest.Create(url);
WebResponse response = request.GetResponse();
StreamReader r = new StreamReader(response.GetResponseStream());
while (r.Peek() > -1)
{
string s = r.ReadLine().Trim();
//insert string into a db.
}
When I do this I consistently get the entire file, which ranges from 9,000 to 10,000 lines. On the other hand, when I use the following, I sometimes get a truncated file (fewer lines):
WebClient client = new WebClient();
StreamReader r = new StreamReader(client.OpenRead(url));
while (r.Peek() > -1)
{
string s = r.ReadLine().Trim();
//insert string into a db.
}
Can anyone explain the difference? Why would the results differ? I was under the impression that WebClient was just a wrapper around HttpWebRequest.
Using the Peek method, you are not guaranteed to have a complete line ready to be read. I bet you are getting partial lines (actually breaking at the current stream position). In fact, I don't think it's related to either of the two classes, but more to how you read the result.
Have you tried the WebClient.DownloadString() method?
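A sketch of that idea (the URL here is a placeholder): DownloadString fetches the entire response body before any splitting happens, so the loop can never exit early just because the network stream was momentarily empty. Alternatively, keeping the streaming approach but looping with `while ((s = r.ReadLine()) != null)` instead of Peek avoids the same pitfall.

```csharp
using System;
using System.Net;

class Program
{
    static void Main()
    {
        // Placeholder URL for illustration.
        string url = "http://example.com/records.txt";

        using (WebClient client = new WebClient())
        {
            // Download the whole file as one string first,
            // then split it into lines locally.
            string content = client.DownloadString(url);

            foreach (string line in content.Split(
                new[] { "\r\n", "\n" }, StringSplitOptions.RemoveEmptyEntries))
            {
                string s = line.Trim();
                // insert string into a db.
            }
        }
    }
}
```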