I'm sending a very large string from one application to another on localhost using sockets in Python. Small strings move instantly, but large strings seem to take a while longer (I say large, but I'm talking maybe a MB or two at the very most). Enough that I have to sit and wait a few seconds after I do something in one app before it shows up in the other.
What bottlenecks are involved here? As I understand it, with sockets on 127.0.0.1, all I'm really doing is moving data from one point in memory to another. So transferring even hundreds of MB at a time should be perceptually instant on my workstation.
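For concreteness, a minimal version of this kind of transfer might look like the following (the address is a placeholder and the buffer size is an arbitrary choice). One detail worth checking: recv() returns at most one buffer's worth of bytes per call, so the receiver has to loop.

    import socket

    HOST, PORT = "127.0.0.1", 50007  # placeholder address

    def send_payload(payload: bytes) -> None:
        # Sending side: sendall() loops internally until every byte is written.
        with socket.create_connection((HOST, PORT)) as s:
            s.sendall(payload)

    def recv_payload() -> bytes:
        # Receiving side: accumulate chunks until the sender closes the connection.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                chunks = []
                while True:
                    chunk = conn.recv(65536)  # returns at most 64 KB per call
                    if not chunk:
                        break
                    chunks.append(chunk)
                return b"".join(chunks)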
You are still moving the data through the entire network stack; it just never goes out through the network interface card itself.
There may be shortcuts taken around parts of the network stack for localhost traffic, but that depends on how the stack is implemented on your system. Regardless, shared memory or pipes will be much faster.
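As a sketch of the shared-memory route (this assumes Python 3.8+, which added multiprocessing.shared_memory; the block name and the payload length still have to be passed to the other process through some side channel):

    from multiprocessing import shared_memory

    # Producer: copy the payload into a named shared-memory block.
    payload = b"x" * (2 * 1024 * 1024)  # ~2 MB, roughly the size in the question
    shm = shared_memory.SharedMemory(create=True, size=len(payload))
    shm.buf[:len(payload)] = payload
    print(shm.name)  # the consumer needs this name (and the length)

    # Consumer, in the other process: attach by name and copy the bytes out.
    #   existing = shared_memory.SharedMemory(name=the_name)
    #   data = bytes(existing.buf[:the_length])
    #   existing.close()

    # Producer releases the block once the consumer is done with it.
    shm.close()
    shm.unlink()

No copy of the data goes through the kernel's network code at all; the only cost is the memcpy in and out of the shared block.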
Here is a high-level overview: http://docs.python.org/howto/sockets.html
PS: Not sure if this will work for your case, but the multiprocessing module has several ways of sharing data between processes efficiently.
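For instance, a Pipe between a parent and a child process (note the assumption that one side spawns the other, which may not match two independently started applications):

    from multiprocessing import Process, Pipe

    def worker(conn):
        # Child: receive the (pickled) string and send back its length.
        data = conn.recv()
        conn.send(len(data))
        conn.close()

    if __name__ == "__main__":
        parent_conn, child_conn = Pipe()
        p = Process(target=worker, args=(child_conn,))
        p.start()
        parent_conn.send("x" * (2 * 1024 * 1024))  # ~2 MB string
        print(parent_conn.recv())  # 2097152, echoed back by the child
        p.join()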
PPS: You could try using a UDP socket instead of a TCP socket. This could potentially give you better throughput without drastically changing your method of IPC.
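A rough sketch of the UDP variant (the port and chunk size are illustrative). Keep in mind that each datagram must fit under the ~64 KB UDP limit, and that UDP makes no delivery or ordering guarantees; even on loopback, datagrams can be dropped if the socket buffers fill, so this is only a starting point:

    import socket

    ADDR = ("127.0.0.1", 50008)  # placeholder address
    CHUNK = 60000                # stay safely under the ~64 KB datagram limit

    def send_udp(payload: bytes) -> None:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            for i in range(0, len(payload), CHUNK):
                s.sendto(payload[i:i + CHUNK], ADDR)
            s.sendto(b"", ADDR)  # zero-length datagram as an end marker

    def recv_udp() -> bytes:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.bind(ADDR)
            chunks = []
            while True:
                chunk, _ = s.recvfrom(65536)
                if not chunk:
                    break
                chunks.append(chunk)
            return b"".join(chunks)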