I currently have a .NET HTTP handler that passes files to the browser for download. IE6+, Firefox, Chrome, and Safari all work with this code, but the new IE9 fails to download, and only when the site is served over SSL.
When I click on the link to download the file:
https://rootwebsite/taskmanager/DownloadFiles.ashx?fileId=3b2c7e41-f51a-445d-9627-f4f4481e1425
IE9 opens the save dialog and shows me DownloadFiles_ashx?fileId=3b2c7e41-f51a-445d-9627-f4f4481e1425
as the file name, but refuses to download the file.
If I change my link to http://
then the code works fine, and the file downloads.
What's the difference? What am I missing?
Here is my code:
public void WriteByteArrayToHttp(HttpResponse response, string fileName, string contentType, Stream file, bool downloadFile)
{
    using (file)
    {
        if (downloadFile)
        {
            response.Clear();
            response.ClearHeaders();
            response.ClearContent();
            response.AddHeader("Content-Disposition",
                string.Format("attachment; filename={0}", HttpUtility.UrlEncode(fileName)));
        }

        response.AddHeader("Content-Length", file.Length.ToString());

        // Added at the suggestion of the YSlow Firefox plug-in.
        // Specifies how long the file is valid in cache, in seconds.
        // http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
        // See 14.9 Cache-Control
        // 6 hours
        response.AddHeader("max-age", "21600");

        response.ContentType = contentType;

        // At the time of this writing we are running IIS6, BUT if we decide to go to IIS7
        // there is a 28.6MB limit on content size, both up and down, by default.
        // See http://msdn.microsoft.com/en-us/library/ms689462.aspx
        // That would be a problem for a number of files we serve up with the original method,
        // so this chunking method replaces it.
        // See http://support.microsoft.com/kb/812406 as the basis for this change.
        // Tested with files between 1MB and 3GB.

        // Total bytes to read:
        long dataToRead = file.Length;

        // Buffer to read 10K bytes per chunk:
        byte[] buffer = new Byte[10000];

        try
        {
            while (dataToRead > 0)
            {
                // Verify that the client is connected.
                if (response.IsClientConnected)
                {
                    // Read the next chunk into the buffer.
                    int length = file.Read(buffer, 0, 10000);

                    // Write the data to the current output stream.
                    response.OutputStream.Write(buffer, 0, length);

                    // Flush the data to the HTTP output.
                    response.Flush();

                    buffer = new Byte[10000];
                    dataToRead = dataToRead - length;
                }
                else
                {
                    // Prevent an infinite loop if the user disconnects.
                    dataToRead = -1;
                }
            }
        }
        catch (HttpException hex)
        {
            if (!hex.Message.StartsWith("The remote host closed the connection"))
            {
                throw;
            }
        }
    }
}
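For reference, the ashx entry point isn't shown above. A minimal sketch of what ProcessRequest could look like when calling that method is below; it assumes WriteByteArrayToHttp lives on the handler class, and the LookupFilePath helper and the application/octet-stream content type are placeholders, not the actual code:

using System;
using System.IO;
using System.Web;

// Sketch only: the real DownloadFileHandler presumably resolves fileId against
// a database or file store; LookupFilePath is a hypothetical placeholder.
public class DownloadFileHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return false; }
    }

    public void ProcessRequest(HttpContext context)
    {
        Guid fileId = new Guid(context.Request.QueryString["fileId"]);
        string path = LookupFilePath(fileId);

        // WriteByteArrayToHttp disposes the stream via its using block.
        FileStream stream = File.OpenRead(path);
        WriteByteArrayToHttp(context.Response, Path.GetFileName(path),
            "application/octet-stream", stream, true);
    }

    private string LookupFilePath(Guid fileId)
    {
        // Placeholder for the real fileId-to-path resolution.
        throw new NotImplementedException();
    }
}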
The web.config defines the handler as follows:
<system.webServer>
<validation validateIntegratedModeConfiguration="false"/>
<handlers>
<add name="DownloadFiles.ashx" path="DownloadFiles.ashx" verb="*" type="Russound.Web.HttpHandlers.DownloadFileHandler" resourceType="Unspecified" preCondition="integratedMode"/>
</handlers>
</system.webServer>
</configuration>
This might have to do with
- some cache control headers (e.g. Cache-Control: no-cache; see the linked post below), or
- the Internet Explorer option that prevents SSL-encrypted pages from being cached / stored on disk, or
- both of them...
This gave me some headache yesterday (in my case it was the second one).
If you ask me, the Internet Explorer option "Do not save encrypted pages to disk" should only apply to caching, not to deliberate downloads... but hey, this is Microsoft we're talking about, so it doesn't have to make any kind of sense anyway.
See here: http://blogs.msdn.com/b/ieinternals/archive/2009/10/03/internet-explorer-cannot-download-over-https-when-no-cache.aspx:
Update Feb. 2011: I've modified the file download logic for IE9. IE9 should be able to successfully download a file regardless of HTTPS or Cache-Control headers unless you have the "Do not save encrypted pages to disk" option set.
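That post is about Cache-Control: no-cache / no-store directives breaking HTTPS downloads in older IE versions, so one thing worth trying (a sketch against the handler code above, not a verified fix) is to make sure the download response never carries those directives, and to express the intended 6-hour lifetime as a proper Cache-Control directive rather than a header literally named "max-age":

// Sketch: cache headers that IE will accept for HTTPS downloads.
// "max-age" is a Cache-Control directive, not a header of its own,
// so the 6-hour lifetime belongs inside Cache-Control.
// Avoid no-cache / no-store here; per the IEInternals post, those are
// what break HTTPS downloads in older IE versions.
response.Cache.SetCacheability(HttpCacheability.Private);
response.Cache.SetMaxAge(TimeSpan.FromHours(6));

// Equivalent raw header, if you prefer AddHeader like the rest of the code:
// response.AddHeader("Cache-Control", "private, max-age=21600");

If the headers are already clean, then per the quote above the remaining suspect in IE9 is the client-side "Do not save encrypted pages to disk" option.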