Different behaviour when streaming and saving a file from an FTP server using FtpWebRequest on a production ASP.NET machine

There might be some very simple answer to this, but I am really stuck on this one.

I have written some code that fetches quite a large (4GB+) XML file through FTP, reads it as a string, and splits the document into smaller parts. Finally, the smaller files are written to disk.

Everything works perfectly well on my developer machine, but when put into production, the script suddenly ends after reading through only a tenth of the file. No exceptions are thrown, and every single line of code executes as expected; it just ends before going through the whole file. This makes me think that some IIS or web.config setting needs to be adjusted.
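For example, one plausible suspect along those lines (an assumption, not something verified here) is ASP.NET's request execution timeout: it defaults to 110 seconds and is only enforced when compilation debug is false, which is typically the case in production only. A minimal web.config sketch to test with a longer limit:

    <!-- Sketch only: raise the request time limit while testing.
         executionTimeout is in seconds and is enforced only when debug="false". -->
    <configuration>
      <system.web>
        <compilation debug="false" />
        <httpRuntime executionTimeout="14400" />
      </system.web>
    </configuration>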

The code runs inside the Umbraco CMS as a custom user control. The server is a Windows Server 2008 machine running IIS.

Any ideas? This is the code:

FtpWebRequest request = (FtpWebRequest)WebRequest.Create(serverUri);
request.Credentials = new NetworkCredential("anonymous", "x@y.z");
request.Method = WebRequestMethods.Ftp.DownloadFile;
request.Timeout = -1;
request.KeepAlive = true;
request.UsePassive = true;
request.UseBinary = true;
using (FtpWebResponse response = (FtpWebResponse)request.GetResponse())
using (Stream responseStream = response.GetResponseStream())
using (StreamReader sr = new StreamReader(responseStream))
{
    ReadStreamIntoNewRecord(fileName, sr, numberOfRecordsPerBatch);
}
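Note, as a hedged aside worth verifying rather than a confirmed fact: Timeout = -1 only covers establishing the connection and obtaining the response, while reads from the response stream are governed separately by ReadWriteTimeout, which defaults to five minutes. For a multi-gigabyte download on a slower production link it may be worth lifting explicitly:

// Assumption: stream reads are limited by ReadWriteTimeout (default 300,000 ms)
// independently of request.Timeout; disable it for a very long download.
request.ReadWriteTimeout = System.Threading.Timeout.Infinite;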

The ReadStreamIntoNewRecord function looks like this:

private void ReadStreamIntoNewRecord(string fileName, StreamReader sr, int NumberOfRecordsPerBatch)
{
    string line = "";
    string record = "";
    int i = 0;  
    XDocument xdoc = new XDocument(new XElement("collection"));
    while (sr.Peek() >= 0)
    {
        line = sr.ReadLine();
        if (line.Contains("</record>"))
        {
            xdoc.Element("collection").Add(MakeRecordFromString(record + line));
            record = "";
            i++;
            if (i % NumberOfRecordsPerBatch == 0)
            {
                SaveRecordToFile(fileName, xdoc);
                xdoc = new XDocument(new XElement("collection"));
            }
        }
        else
        {
            record = record + line;
        }
    }
    SaveRecordToFile(fileName, xdoc);
}


Wow, loading a 4GB file into a string in memory is a horrible idea. If it's 4GB on disk as UTF-8, then it'll be 8GB in memory, since all .NET strings are UTF-16 internally. Luckily, you're not really doing that; you just said you were in the description.

I believe you should change the while loop a little. As written, it can detect a false end of stream when there is really more data coming: Peek returns -1 not only at the end of the stream, but also when the underlying stream does not support seeking, as is the case for a network stream. Use this instead:

while ((line = sr.ReadLine()) != null)
{
    ...
}

Besides that, you would be much better off using either a simple StreamWriter or an XmlTextWriter to save the file instead of XDocument. XDocument keeps the whole document in memory and is designed for convenient traversal with LINQ to XML; you aren't using that here, so you would benefit from a much lighter-weight class.
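A rough sketch of that approach (my illustration only; SaveBatchToFile, its holder class, and the batch file-naming scheme are hypothetical, since SaveRecordToFile's internals aren't shown): collect each batch's record strings and stream them to disk with an XmlWriter, so only the current batch is ever held in memory.

using System.Collections.Generic;
using System.Xml;

static class BatchSaver // hypothetical holder class for the sketch
{
    // Write one batch per file with a streaming XmlWriter instead of
    // accumulating an in-memory XDocument.
    static void SaveBatchToFile(string fileName, int batchIndex, List<string> records)
    {
        var settings = new XmlWriterSettings { Indent = true };
        using (XmlWriter writer = XmlWriter.Create(fileName + "." + batchIndex + ".xml", settings))
        {
            writer.WriteStartElement("collection");
            foreach (string record in records)
            {
                // WriteRaw assumes each record string is already well-formed XML.
                writer.WriteRaw(record);
            }
            writer.WriteEndElement();
        }
    }
}

Memory use then stays proportional to a single batch of records rather than to a whole XDocument tree.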
