I need to transfer a file via sockets:
# sender
require 'socket'

SIZE = 1024 * 1024 * 10

TCPSocket.open('127.0.0.1', 12345) do |socket|
  File.open('c:/input.pdf', 'rb') do |file|
    while chunk = file.read(SIZE)
      socket.write(chunk)
    end
  end
end
# receiver
require 'socket'
require 'benchmark'

SIZE = 1024 * 1024 * 10

server = TCPServer.new("127.0.0.1", 12345)
puts "Server listening..."
client = server.accept

time = Benchmark.realtime do
  File.open('c:/output.pdf', 'wb') do |file|
    while chunk = client.read(SIZE)
      file.write(chunk)
    end
  end
end

file_size = File.size('c:/output.pdf') / 1024 / 1024
puts "Time elapsed: #{time}. Transferred #{file_size} MB. Transfer per second: #{file_size / time} MB" and exit
Using Ruby 1.9 I get a transfer rate of ~16 MB/s (~22 MB/s using 1.8) when transferring an 80 MB PDF file from/to localhost. I'm new to socket programming, but that seems pretty slow compared to just using FileUtils.cp. Is there anything I'm doing wrong?
Well, even with localhost you still have to go through some of the TCP stack, introducing inevitable delays with packet fragmentation and rebuilding. It probably doesn't go out on the wire, where you'd be limited to 100 megabits per second (~12.5 MB/s) or a gigabit (~125 MB/s) theoretical maximum.
None of that overhead exists for a raw disk-to-disk file copy. Keep in mind that even SATA1 gave you 1.5 gigabits/sec, and I'd be surprised if you were still running at that level. On top of that, your OS will undoubtedly be caching a lot of the file data, which isn't possible when pushing it through the TCP stack.
16MB per second doesn't sound too bad to me.
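If you want to see how much of that is the Ruby-level read/write loop rather than the TCP stack itself, one thing worth trying (just a sketch, assuming Ruby 1.9+, where IO.copy_stream is available) is handing the copy to IO.copy_stream and letting Ruby/the OS choose the buffering:
# sender using IO.copy_stream instead of a manual chunk loop (same host, port and file as the question)
require 'socket'

TCPSocket.open('127.0.0.1', 12345) do |socket|
  File.open('c:/input.pdf', 'rb') do |file|
    IO.copy_stream(file, socket)  # buffered copy from the file straight into the socket
  end
end
The receiver loop can stay as it is; comparing the two timings gives you a rough split between interpreter overhead and loopback TCP overhead.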
I know this question is old, but why can't you compress before you send, then decompress on the receiving end?
require 'zlib'

# deflate the raw bytes before writing them to the socket
def compress(input)
  Zlib::Deflate.deflate(input)
end

# inflate the bytes read from the socket back into the original data
def decompress(input)
  Zlib::Inflate.inflate(input)
end
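A minimal sketch of how those helpers could be wired into the loops from the question (same host, port and paths; the 4-byte length prefix is my own addition so the receiver knows where each compressed chunk ends, and whether compression helps at all depends on how compressible the PDF is):
# sender: assumes the compress/decompress helpers above are already defined
require 'socket'

SIZE = 1024 * 1024 * 10

TCPSocket.open('127.0.0.1', 12345) do |socket|
  File.open('c:/input.pdf', 'rb') do |file|
    while chunk = file.read(SIZE)
      data = compress(chunk)
      socket.write([data.bytesize].pack('N'))  # 4-byte big-endian length header
      socket.write(data)                       # followed by the compressed chunk
    end
  end
end

# receiver: read each length header, then exactly that many bytes, then inflate
require 'socket'

server = TCPServer.new('127.0.0.1', 12345)
client = server.accept

File.open('c:/output.pdf', 'wb') do |file|
  while header = client.read(4)
    length = header.unpack('N').first
    file.write(decompress(client.read(length)))
  end
end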
(Shameless plug) AFT (https://github.com/wlib/aft) already does what you're trying to build.