
Serial programming: measuring time between characters


I am sending/receiving data over a serial line in Linux and I would like to find the delay between characters.

Modbus uses a 3.5 character delay to detect message frame boundaries. If there is more than a 1.5 character delay, the message frame is declared incomplete.
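For a sense of scale: Modbus RTU frames each character as 11 bits (start bit, 8 data bits, parity, stop bit), so at 9600 baud one character takes about 1.15 ms, the 1.5-character limit is roughly 1.7 ms, and the 3.5-character gap is roughly 4 ms.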

I'm writing a quick program in C which is basically

fd = open(MODEMDEVICE, O_RDWR | O_NOCTTY | O_NONBLOCK);
// setup newtio
....
tcsetattr(fd, TCSANOW, &newtio);
for(;;) {
    res = read(fd, buf, 1);
    if (res > 0) {
        // store time in milliseconds?
        //do stuff
    }
}

Is there some way of measuring the time here? Or do I need to look at retrieving data from the serial line in a different way?

I've also tried hooking into SIGIO to get a signal whenever there is data, but I seem to receive it 8 bytes at a time.

(yes, I know there exist some Modbus libraries but I want to use this in other applications)


The simple answer is... you cannot (not without writing your own serial driver)!

If you are writing a MODBUS master there is some hope: You can either detect the end of a slave response by waiting any amount of time (provided it's longer than 3.5 characters) without receiving anything (select(2) can help you here), or by parsing the response on the fly, as you read it (the second method wastes much less time). You must also be careful to wait at least 3.5 character times before starting to transmit a new request, after receiving the response to the previous one. "At least" is operative here! Waiting more doesn't matter. Waiting less does.
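For illustration, a minimal sketch of the select(2) route on the master side (the function name read_frame and the gap_us parameter are assumptions for this example, not part of any standard API; pick gap_us to be at least 3.5 character times at your baud rate):

#include <sys/select.h>
#include <unistd.h>

/* Read bytes until the line has been idle for gap_us microseconds,
 * then treat the accumulated bytes as one complete frame. */
ssize_t read_frame(int fd, unsigned char *buf, size_t max, long gap_us)
{
    size_t len = 0;

    for (;;) {
        fd_set rfds;
        struct timeval tv = { 0, gap_us };  /* reset on every pass */

        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);

        int r = select(fd + 1, &rfds, NULL, NULL, &tv);
        if (r < 0)
            return -1;               /* select() failed */
        if (r == 0)
            return (ssize_t)len;     /* gap elapsed: frame complete */

        ssize_t n = read(fd, buf + len, max - len);
        if (n < 0)
            return -1;
        len += (size_t)n;
        if (len == max)
            return (ssize_t)len;     /* buffer full */
    }
}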

If you are writing a MODBUS slave, then you're out of luck. You simply cannot do it reliably from user-space Linux. You have to write your own serial driver.

BTW, this is not Linux's fault. This is due to the unbelievable stupidity of MODBUS's framing method.


Modbus, like a lot of old protocols, really hates modern hardware.

The reason you're getting 8 bytes at a time is that your PC's UART has (at least) a 16-byte hardware FIFO on receive and transmit; the receive interrupt typically fires when the FIFO reaches its trigger level, often half full, hence 8. Most FIFOs are 64 bytes or bigger.

It is possible to tell the UART to time out and issue a receive interrupt after a number of character times.

The trigger level is adjustable, but the low-level driver sets it "smartly" (try low-latency mode using setserial). You can fiddle with the code in the serial driver if you must. Google it (mature content warning); it is not pretty.
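As a rough sketch, the low-latency mode that setserial enables (setserial /dev/ttyS0 low_latency) can also be requested directly from C; set_low_latency below is just an illustrative name, and not every driver honours the flag:

#include <sys/ioctl.h>
#include <linux/serial.h>

/* Ask the driver to push received bytes up immediately rather than
 * letting them accumulate in the FIFO. */
int set_low_latency(int fd)
{
    struct serial_struct ser;

    if (ioctl(fd, TIOCGSERIAL, &ser) < 0)
        return -1;
    ser.flags |= ASYNC_LOW_LATENCY;
    return ioctl(fd, TIOCSSERIAL, &ser);
}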

So the routine, in pseudocode, is:

int actual = read(packet, timeout of 1.5 character times)

Look at the actual number of received bytes.

If it is less than a full packet, the frame has issues; discard it.

Not great.
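A self-contained C rendering of that pseudocode might look like this, with select(2) standing in for the 1.5-character timeout (PACKET_LEN and the 1700 µs figure, roughly 1.5 character times at 9600 baud, are assumptions for illustration):

#include <sys/select.h>
#include <unistd.h>

#define PACKET_LEN 8            /* hypothetical fixed request size */

/* Returns 0 on a full packet, -1 if the read stalled short of one. */
int read_packet(int fd, unsigned char *packet)
{
    size_t got = 0;

    while (got < PACKET_LEN) {
        fd_set rfds;
        struct timeval tv = { 0, 1700 };  /* ~1.5 char times @ 9600 */

        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        if (select(fd + 1, &rfds, NULL, NULL, &tv) <= 0)
            break;              /* timeout (or error): stop reading */

        ssize_t n = read(fd, packet + got, PACKET_LEN - got);
        if (n <= 0)
            break;
        got += (size_t)n;
    }
    return got == PACKET_LEN ? 0 : -1;  /* less than a packet: discard */
}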


You can't use timeouts. At higher baud rates, a 3.5-character timeout means a few milliseconds, or even hundreds of microseconds: at 115200 baud with 11 bits per character, 3.5 characters is roughly 3.5 × 11 / 115200 ≈ 334 µs. Such timeouts can't be handled reliably in Linux user space.

On the client side it isn't a big deal, since Modbus doesn't send asynchronous messages; it's up to you not to send two consecutive messages within the 3.5-character gap.

On the server side, the problem is that if your clients have extremely short response timeouts and Linux is too busy, you can't write a bullet-proof framing solution. There is a chance that the read() function will return more than one packet. Here is a (slightly contrived) example.

  1. The client writes a packet to the server. Its response timeout is, let's say, 20 ms.

  2. Linux is at that moment very busy, so the kernel doesn't wake up your reading thread for the next 50 ms.

  3. After 20 ms the client detects that it didn't receive any response, so it sends another packet to the server (maybe resends the previous one).

  4. When Linux finally wakes up your reading thread after 50 ms, read() can return 2 packets, or even 1 and a half, depending on how many bytes were received by the serial port driver.

In my implementation I use a simple method that tries to parse bytes on the fly: first detect the function code, then try to read all the remaining bytes for that specific function. If I get one and a half packets, I parse just the first one and leave the remaining bytes in the buffer. If more bytes come within a short timeout I add them and try to parse; otherwise I discard them. It's not a perfect solution (for instance, some sub-codes for function 8 don't have a fixed size), but since MODBUS RTU doesn't have any STX/ETX framing characters, it's the best one I was able to figure out.
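A minimal sketch of that parse-on-the-fly idea, assuming the common public function codes (the helper name expected_request_len and its length table are illustrative, not the answer's actual code):

/* Given the bytes received so far, return the full length of the RTU
 * request frame once it can be determined: 0 if more bytes are needed,
 * -1 for function codes whose length is not fixed (e.g. function 8). */
static int expected_request_len(const unsigned char *buf, int have)
{
    if (have < 2)
        return 0;                 /* function code not yet received */
    switch (buf[1]) {
    case 0x01: case 0x02: case 0x03: case 0x04:
    case 0x05: case 0x06:
        return 8;                 /* addr + func + 4 data + 2 CRC */
    case 0x0F: case 0x10:         /* write multiple coils/registers */
        if (have < 7)
            return 0;             /* byte-count field not seen yet */
        return 9 + buf[6];        /* header (7) + data (N) + CRC (2) */
    default:
        return -1;                /* variable or unknown length */
    }
}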


I think you are going about this the wrong way. There is a built-in mechanism for ensuring that characters come in all together.

Basically, you are going to want to use tcgetattr()/tcsetattr() and set the VMIN and VTIME parameters appropriately. In this case, it seems like you'd want VMIN (the minimum number of characters to read) to be 0 and VTIME (a read timeout, in tenths of a second) to be 15.

Some really basic example code:

struct termios t;

if (tcgetattr( fd, &t ) == -1)   /* start from the current settings */
{
    fprintf( stderr, "tcgetattr failed on port %s. Exiting...\n", yourPort );
    exit( 1 );
}
t.c_cc[ VMIN ] = 0;
t.c_cc[ VTIME ] = 15;            /* tenths of a second */
if (tcsetattr( fd, TCSANOW, &t ) == -1)
{
    fprintf( stderr, "tcsetattr failed on port %s. Exiting...\n", yourPort );
    exit( 1 );
}

Do this after your open() but before your read(). (With VMIN set to 0, read() returns as soon as at least one byte is available, or returns 0 once the VTIME timeout expires.) Here are a couple of links that I've found wildly helpful:

Serial Programming Guide

Understanding VMIN and VTIME

I hope that at least helps/points you in the right direction, even if it isn't a perfect answer to your question.
