I am using HttpURLConnection to write files, some of which are quite large, to a server.
final HttpURLConnection conn = (HttpURLConnection) url.openConnection();
A while back I had issues writing objects of 1 GB or more. I fixed that by enabling chunked streaming with a more manageable chunk size.
final int bufferSize = 1024 * 1024;
[...]
conn.setChunkedStreamingMode(bufferSize);
It had then been running fine on my laptop, but it was crashing on other machines. On investigation, I found that the cause was an OutOfMemoryError thrown while writing to the output stream.
final OutputStream out = conn.getOutputStream();
final long bytesWritten = IOUtils.copyLarge(in, out);
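For context, the copy itself is a plain buffered loop. This is a minimal sketch of what an IOUtils.copyLarge-style copy does, using the 4096-byte read size mentioned below (the class and method names here are illustrative, not Commons IO itself):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyLargeSketch {

    // Copies all bytes from in to out in 4 KiB reads, returning the total
    // byte count. Each read is written straight through, so the loop itself
    // holds only one 4096-byte buffer at a time.
    static long copyLarge(InputStream in, OutputStream out) throws IOException {
        final byte[] buffer = new byte[4096];
        long total = 0;
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[10_000];
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copyLarge(new ByteArrayInputStream(payload), sink);
        System.out.println(copied); // 10000
    }
}
```

Note that this loop allocates no memory proportional to the stream size; if memory grows with the amount written, the sink (here, the connection's output stream) must be the one accumulating bytes.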
Inside the copyLarge routine I found that it was able to do 262,145 iterations of 4,096 bytes, failing when it tried to cross the 1 GB line. Allocating more memory to the Java application seemed to prevent the crashes, but I thought that should be unnecessary. If it were writing chunks of 1 MB, then it should either fail after far fewer iterations or repeatedly write 1 MB without issue.
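The observed numbers line up exactly with the 1 GiB boundary, which is worth checking explicitly: 262,144 iterations of 4,096 bytes is precisely 2^30 bytes, so the 262,145th write is the first one past 1 GiB.

```java
public class OomArithmetic {
    public static void main(String[] args) {
        long iterations = 262_145L;  // iterations reached before the failure
        long readSize   = 4_096L;    // per-iteration read size
        long written    = iterations * readSize;
        long oneGiB     = 1L << 30;  // 1,073,741,824 bytes

        System.out.println(written - oneGiB); // 4096: exactly one read past 1 GiB
        System.out.println(written > oneGiB); // true
    }
}
```

That the failure lands on such a round power of two suggests a buffer doubling or a hard capacity limit in whatever is accumulating the bytes, rather than anything about the copy loop itself.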
UPDATE: It turns out the line setting the chunked streaming mode wasn't actually being executed on some machines. If you don't set fixed-length or chunked streaming mode, HttpURLConnection simply accumulates everything in a PosterOutputStream (a ByteArrayOutputStream subclass) before sending it, which is what exhausted the heap.
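Putting the lesson together, the ordering matters: setChunkedStreamingMode must be called after openConnection() but before getOutputStream(). A minimal sketch of the correct setup, with a hypothetical upload URL substituted for the real one (the actual write is omitted so the sketch stays network-free):

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class ChunkedUploadSketch {
    public static void main(String[] args) throws IOException {
        // Hypothetical endpoint; substitute the real upload URL.
        final URL url = new URL("http://example.com/upload");

        // openConnection() creates the connection object but does no I/O yet.
        final HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");

        // Must be set BEFORE getOutputStream(); otherwise the connection
        // falls back to buffering the entire request body in memory.
        conn.setChunkedStreamingMode(1024 * 1024); // 1 MiB chunks

        // conn.getOutputStream() would now return a streaming output stream
        // that sends each chunk as it is written.
        System.out.println(conn.getDoOutput()); // true
    }
}
```

Because there is no getter to confirm the streaming mode was applied, a defensive option is to centralize this setup in one helper so the call cannot silently be skipped on some code path.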