When an application such as a web server sends HTTP data to a web browser, how does the browser know when it has received all of the data, so that it can begin using it instead of waiting for more? TCP itself doesn't specify anywhere how large the overall message is going to be.
Right now I'm thinking that it's up to the application layer, like HTTP's Content-Length header. But it seems like even that header could be split across a second or third packet.
TCP is a connection-oriented protocol. So when the browser makes an HTTP connection over TCP/IP, the network stack guarantees that the byte stream arrives in the order the sender sent it.
So there is no packet concept when you are dealing with TCP. TCP is an ordered stream of bytes arriving through a socket. There is no need to worry about packets at all. That's the beauty of a protocol stack: each layer does its own work and shields the layer above it from the underlying complications of the problems it solves.
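To make that concrete, here is a minimal sketch of what the stream looks like from the application's side, using Python sockets. The host and request are hypothetical; the point is that `recv()` hands back whatever bytes have arrived so far, with no packet boundaries visible:

```python
import socket

# Hypothetical example: fetch a page and read the raw byte stream.
sock = socket.create_connection(("example.com", 80))
sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

chunks = []
while True:
    data = sock.recv(4096)   # up to 4096 bytes, however TCP delivered them
    if not data:             # b"" means the peer closed the connection (EOF)
        break
    chunks.append(data)
sock.close()

response = b"".join(chunks)  # the bytes arrive complete and in order
```

Each `recv()` call may return a whole response, half a header, or several pieces run together; the stack has already reassembled and ordered everything underneath.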
Content-Length indeed, except in the case where the client reads until it gets an end-of-file indication because the other end closed the connection. Of course, HTTP is a request-response protocol ('RSVP'), so that's not going to happen.
Absent a Content-Length, it has to look for </html> or some other delimiter in the content. The browser doesn't see packets at all. The connection looks like a stream, with no boundaries, and it's up to the two ends to agree on a protocol.
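Here is a sketch of that application-level framing, assuming a simplified HTTP/1.x response; `recv_exact` and `read_http_response` are hypothetical helper names, not part of any library. It reads headers until the blank-line delimiter (which may itself arrive split across any number of `recv()` calls, as the question suspected), then uses Content-Length to know exactly how many body bytes to wait for, falling back to read-until-EOF:

```python
import socket

def recv_exact(sock, n):
    """Keep calling recv() until exactly n bytes have been collected."""
    buf = b""
    while len(buf) < n:
        data = sock.recv(n - len(buf))
        if not data:
            raise ConnectionError("peer closed before full body arrived")
        buf += data
    return buf

def read_http_response(sock):
    # Accumulate bytes until the header/body delimiter appears; it can
    # arrive in one piece or scattered across many recv() calls.
    buf = b""
    while b"\r\n\r\n" not in buf:
        data = sock.recv(4096)
        if not data:
            raise ConnectionError("peer closed before headers finished")
        buf += data
    headers, _, rest = buf.partition(b"\r\n\r\n")

    # Find Content-Length among the header lines (simplified parsing,
    # ignoring chunked encoding and other cases).
    length = None
    for line in headers.split(b"\r\n")[1:]:
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"content-length":
            length = int(value)

    if length is None:
        # No Content-Length: read until EOF, i.e. until the peer closes.
        while True:
            data = sock.recv(4096)
            if not data:
                return headers, rest
            rest += data

    # Content-Length present: wait for exactly that many body bytes.
    body = rest + recv_exact(sock, length - len(rest))
    return headers, body
```

The framing logic lives entirely in the two ends; TCP just delivers the bytes in order.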