I'm writing a server application for my iPhone app. The section of the server I'm working on is the relay server. This essentially relays messages between iPhones, through a server, using TCP sockets. The server reads the length of the header from the stream, then reads that number of bytes from the stream. It deserializes the header and checks whether the message is to be relayed on to another iPhone (rather than being processed on the server).
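For reference, here's a minimal Swift sketch of that framing step, assuming a 4-byte big-endian length prefix and a blocking `InputStream`; the prefix size, the `readExactly` helper, and the I/O layer are all assumptions and may differ from what the server actually uses:

```swift
import Foundation

// Hypothetical blocking helper: keeps reading until exactly `count` bytes
// have arrived, or returns nil if the stream closes or errors mid-read.
func readExactly(_ count: Int, from stream: InputStream) -> Data? {
    var data = Data()
    var buffer = [UInt8](repeating: 0, count: count)
    while data.count < count {
        let n = stream.read(&buffer, maxLength: count - data.count)
        if n <= 0 { return nil }            // connection closed or errored
        data.append(buffer, count: n)
    }
    return data
}

// Read the 4-byte big-endian header length, then the header bytes themselves.
// Returning nil means the header never fully arrived.
func readHeader(from stream: InputStream) -> Data? {
    guard let prefix = readExactly(4, from: stream) else { return nil }
    let length = prefix.reduce(UInt32(0)) { ($0 << 8) | UInt32($1) }
    return readExactly(Int(length), from: stream)
}
```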
If it has to be relayed, it begins reading bytes from the sender's socket, 1024 bytes at a time. After each 1024 bytes are received, it adds those bytes (as a "packet" of bytes) to the outgoing message queue, which is processed in order.
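To make the chunking concrete, here's a rough sketch of that relay loop, assuming the total body length is known from the already-parsed header; `readExactly` and `enqueue` are stand-ins for whatever blocking read helper and per-destination outgoing queue the server actually uses:

```swift
import Foundation

// Relay the message body in 1024-byte chunks. `bodyLength`, `readExactly`,
// and `enqueue` are assumed names, not part of the original post.
func relayBody(length bodyLength: Int,
               readExactly: (Int) -> Data?,
               enqueue: (Data) -> Void) -> Bool {
    var remaining = bodyLength
    while remaining > 0 {
        let chunkSize = min(1024, remaining)
        guard let packet = readExactly(chunkSize) else {
            return false        // sender vanished mid-message (e.g. went into a tunnel)
        }
        enqueue(packet)
        remaining -= chunkSize
    }
    return true                 // entire body relayed in order
}
```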
This is all fine, but what happens if the sender gets interrupted before it has sent all its bytes (say, out of the 3,000 bytes it had to send, the sending iPhone goes into a tunnel after 2,500 bytes)?
This means that all the other devices are left waiting on the remaining 500 bytes, which never get relayed to them. Then if the sender (or anyone else, for that matter) sends data to these sockets, the receivers treat the start of the new message as the end of the last one, corrupting the data.
Obviously from the description above, I'm using message framing, but I think I'm missing something. From what I can see, message framing only lets the receiver know the exact number of bytes to read from the socket before assembling them into an object. Won't things start to get hairy once a byte or two goes astray at some point, throwing everything out of sync? Is there a standard way of getting back in sync again?
> Won't things start to get hairy once a byte or two goes astray at some point, throwing everything out of sync? Is there a standard way of getting back in sync again?
TCP/IP itself ensures that no bytes go "missing" over a single socket connection.
Things are a bit more complex in your situation, where (if I understand correctly) you're using a server as a sort of multiplexer.
In this case, here are some options off the top of my head:
- Have the server buffer the entire message from point A before sending it to point B.
- Close the B-side sockets if an abnormal close is detected from the A side.
- Change the receiving side of the protocol so that a B-side client can detect and recover from a partial A-stream without tearing down and re-establishing the socket. For example, if the server assigns a unique id to each incoming A-stream, the B client can detect when a different stream starts before the current one has finished. Alternatively (or in addition), include an extra length prefix, so the B client knows both the total length to expect and the length of each individual packet (see the sketch after this list).
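To make the third option concrete, here's a rough Swift sketch of a per-stream envelope plus the B-side reassembly logic. The field layout and the `RelayEnvelope`/`RelayReassembler` names are illustrative assumptions, not an existing protocol:

```swift
import Foundation

// Envelope the server wraps around every relayed chunk so the B-side client
// can detect a truncated A-stream.
struct RelayEnvelope {
    let streamID: UInt32      // unique id the server assigns to each incoming A-stream
    let totalLength: UInt32   // full message size announced by the A-side header
    let chunk: Data           // this packet's slice of the message (<= 1024 bytes)
}

final class RelayReassembler {
    private var currentID: UInt32?
    private var buffer = Data()
    private var expected = 0

    // Returns a complete message when the last chunk arrives, nil otherwise.
    func accept(_ envelope: RelayEnvelope) -> Data? {
        if envelope.streamID != currentID {
            // A new stream started before the old one finished: the previous
            // message was truncated, so drop it and resync on the new stream.
            currentID = envelope.streamID
            buffer = Data()
            expected = Int(envelope.totalLength)
        }
        buffer.append(envelope.chunk)
        if buffer.count >= expected {
            defer { currentID = nil; buffer = Data() }
            return buffer
        }
        return nil
    }
}
```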
Which option you choose depends on what kind of data you're transferring and how easy the different parts are to change.
Regardless of the solution, be sure to include detection of half-open connections.
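One common way to do that is an application-level watchdog: each side periodically sends a small heartbeat frame, and the connection is declared dead if no traffic arrives within a deadline. A minimal sketch, with the names and the 30-second timeout as assumptions:

```swift
import Foundation

// Watchdog for half-open detection: if nothing has been read from a client
// for `timeout` seconds (including replies to the heartbeat frames your
// protocol would need to add), treat the connection as dead and close it.
final class ConnectionWatchdog {
    private let timeout: TimeInterval
    private var lastActivity = Date()
    private let onDead: () -> Void

    init(timeout: TimeInterval = 30, onDead: @escaping () -> Void) {
        self.timeout = timeout
        self.onDead = onDead
    }

    // Call this every time any bytes arrive from the client.
    func noteActivity() { lastActivity = Date() }

    // Call this from a periodic timer on the server.
    func check() {
        if Date().timeIntervalSince(lastActivity) > timeout {
            onDead()    // e.g. close the socket and flush its outgoing queue
        }
    }
}
```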