TCP socket in Unix - notify server I am done sending

I have a tiny little TCP app where many clients connect to a server, send data to it (via write() - they also send the message size) and then exit. I have the clients send \0\0 to the server when they are done sending - and if the server gets a zero from read(), it knows something went wrong in the client (like a SIGKILL). My question is - is there any programmatic way (some syscall) to notify the server that I am done sending, instead of the server always checking for \0\0? The server uses poll() on the client sockets and the listening socket to detect whether there is something to read or a new connection request, btw.
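Concretely, the client side looks roughly like this (a minimal sketch - the port, address and payload are placeholders, and error handling is trimmed):

    /* Minimal client sketch: connect, send length-prefixed data,
     * then the \0\0 "done" marker, then exit.  The host, port and
     * payload are placeholders, not the real app's values. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(5000);            /* placeholder port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0)
            return 1;

        const char msg[] = "hello";               /* placeholder payload */
        uint32_t len = htonl(sizeof msg - 1);     /* message size first */
        write(fd, &len, sizeof len);
        write(fd, msg, sizeof msg - 1);

        write(fd, "\0\0", 2);                     /* "I am done" marker */
        close(fd);
        return 0;
    }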

Should I send a signal? But then how would I know which descriptor to stop polling?

I read this question, but the answers there are more or less what I use now.


Doing it at the application level (e.g. using \0\0 as you're doing) is the correct way to do it if your protocol is more complex than a single request/response model.

HTTP 1.0, for example, closes the connection straight after a single request/response: the client sends its request command, the server replies with its response and closes the connection.

In protocols where you have a more complex exchange, there are specific commands to indicate the end of a message. SMTP and POP3, for example, are line-delimited. When sending the content of an e-mail via SMTP, you indicate the end of the message with a . on a line by itself (a line in the actual message that starts with . is escaped as ..). You also get commands such as QUIT to indicate you're done.
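To illustrate that dot-stuffing rule, a rough sketch (send_line() and send_body_line() are hypothetical helpers, not part of any real SMTP library):

    /* Sketch of SMTP dot-stuffing: a body line starting with '.' is
     * sent as '..', and the end of the message body is signalled by
     * a line containing only '.'. */
    #include <string.h>
    #include <unistd.h>

    static void send_line(int fd, const char *line)
    {
        write(fd, line, strlen(line));
        write(fd, "\r\n", 2);        /* SMTP lines end with CRLF */
    }

    static void send_body_line(int fd, const char *line)
    {
        if (line[0] == '.')
            write(fd, ".", 1);       /* escape: "." becomes ".." */
        send_line(fd, line);
    }

    static void end_of_message(int fd)
    {
        send_line(fd, ".");          /* lone dot terminates the body */
    }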

In HTTP 1.1, the set of request headers is terminated by an empty line (e.g. GET / HTTP/1.1, then each header on its own line, then an empty line), so the server knows where the end of the request is. Responses in HTTP 1.1 then use either a Content-Length header (to signal where the response body ends) or chunked transfer encoding, which essentially inserts delimiters to indicate whether there's more data coming (it's usually used when the server doesn't know the data size in advance). (Requests that have a body use the same headers to indicate where the request ends.)
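A minimal sketch of how the Content-Length style of framing is consumed - read_exact() here is a hypothetical helper, not a library function, that loops until exactly the announced number of bytes has arrived:

    /* Sketch: read exactly `len` bytes, as a peer would for a body
     * whose Content-Length is known.  Returns 0 on success, -1 on
     * error or premature EOF. */
    #include <stddef.h>
    #include <unistd.h>

    static int read_exact(int fd, char *buf, size_t len)
    {
        size_t got = 0;
        while (got < len) {
            ssize_t n = read(fd, buf + got, len - got);
            if (n <= 0)              /* 0 = peer closed early, <0 = error */
                return -1;
            got += (size_t)n;
        }
        return 0;                    /* whole body received: we are done */
    }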

It's otherwise difficult for the server to know when it's done reading, since it's generally not possible to detect whether a socket is disconnected (or rather, whether it's still connected even though the client isn't sending any data). By sending some delimiter or length indicator at the application level, you avoid this sort of problem (or can at least detect when there's a problem/timeout).
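One common mitigation for the timeout case, sketched below: give poll() a finite timeout and evict clients that stay silent too long. The 30-second limit and the last_active bookkeeping are assumptions for illustration, not something from the question:

    /* Sketch: poll() with a timeout, then drop clients that have been
     * silent too long.  fds, nfds and last_active are assumed to be
     * maintained by the server's accept/read logic. */
    #include <poll.h>
    #include <time.h>
    #include <unistd.h>

    #define IDLE_LIMIT 30            /* seconds; an arbitrary choice */

    void poll_once(struct pollfd *fds, nfds_t nfds, time_t *last_active)
    {
        int ready = poll(fds, nfds, 1000 /* ms */);
        time_t now = time(NULL);

        for (nfds_t i = 0; i < nfds; i++) {
            if (fds[i].fd < 0)
                continue;            /* slot not in use */
            if (ready > 0 && (fds[i].revents & POLLIN)) {
                last_active[i] = now;   /* data pending: client is alive */
                /* ... the actual read happens elsewhere ... */
            } else if (now - last_active[i] > IDLE_LIMIT) {
                close(fds[i].fd);    /* assume the client is gone */
                fds[i].fd = -1;      /* poll() ignores negative fds */
            }
        }
    }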


This is done at the application level. In HTTP it is done by closing the socket after the response. Also, in HTTP, after the server receives two returns (an empty line), it knows the GET request has finished; and if there is a Content-Length header, it knows the client/server is finished sending after that many bytes.

You will need to implement something similar.
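For example, a minimal sketch of the server-side read path combining the two signals discussed here - read() returning 0 (peer closed or died) and your \0\0 end-of-message marker. The names are illustrative, not from the question's actual code:

    /* Sketch: one read on a client socket that poll() reported readable.
     * Returns 1 when the \0\0 end-of-message marker is seen, 0 to keep
     * reading, -1 when the connection dropped without the marker. */
    #include <unistd.h>

    static int handle_readable(int fd)
    {
        char buf[4096];
        ssize_t n = read(fd, buf, sizeof buf);

        if (n == 0)
            return -1;           /* EOF with no \0\0: client died or misbehaved */
        if (n < 0)
            return -1;           /* read error */

        for (ssize_t i = 0; i + 1 < n; i++)
            if (buf[i] == '\0' && buf[i + 1] == '\0')
                return 1;        /* clean "done sending" marker */

        /* NB: a marker split across two read()s needs state kept
         * between calls; omitted here for brevity. */
        return 0;
    }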
