How are SYNC words chosen?

I'm using a data transmission system which uses a fixed SYNC word (0xD21DB8) at the beginning of every superframe. I'd be curious to know how such SYNC words are chosen, i.e. based on which criteria designers choose the length and the value of such a SYNC word.


In short:

  • high probability of uniqueness

  • high density of transitions

It depends on the underlying "server layer" (in communication terms). If that server layer doesn't provide a means of distinguishing payload data from control signals, then a protocol must be devised. It is common for synchronous bit-stream oriented transport layers to rely on a SYNC pattern to delineate payload units. A good example of this technique is SONET/SDH/OTN, the family of major optical transport technologies.

Usually, the main criterion for choosing a SYNC word is a high probability of uniqueness. Of course, what makes it unique depends on the encoding used for the payload.
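
As a rough illustration, here is a minimal C sketch of how a receiver might hunt for the question's 24-bit SYNC word (0xD21DB8) by shifting the stream through a register one bit at a time; the filler bytes and MSB-first bit order are assumptions made up for the example:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define SYNC_WORD 0xD21DB8u   /* the 24-bit SYNC word from the question */
    #define SYNC_MASK 0xFFFFFFu   /* keep only the low 24 bits */

    int main(void)
    {
        /* Hypothetical stream: two filler bytes, then the SYNC word. */
        const uint8_t stream[] = { 0x5A, 0x0F, 0xD2, 0x1D, 0xB8, 0x42 };
        uint32_t shift = 0;
        int seen = 0;   /* bits consumed so far, to avoid matching too early */

        for (size_t i = 0; i < sizeof stream; i++) {
            for (int b = 7; b >= 0; b--) {          /* MSB-first bit order */
                shift = ((shift << 1) | ((stream[i] >> b) & 1u)) & SYNC_MASK;
                if (++seen >= 24 && shift == SYNC_WORD)
                    printf("SYNC word found ending at byte %zu\n", i);
            }
        }
        return 0;
    }

Any payload sequence that happens to contain the same 24 bits would trigger the same match, which is exactly why the uniqueness property matters.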

Example: in SONET/SDH, once the SYNC word has been found, it is validated over a number of superframes (I don't remember exactly how many) before a valid sync state is declared. This is required because false positives can occur: the encoding of a synchronous bit stream cannot be guaranteed to produce payload patterns orthogonal to the SYNC word.
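
A sync-validation procedure of that kind is usually described as a small state machine. Below is a minimal sketch, assuming made-up confirmation and miss counts rather than the real SONET/SDH figures:

    #include <stdbool.h>

    /* Illustrative values only; the real SONET/SDH counts differ. */
    #define CONFIRM_FRAMES 4   /* consecutive matches before declaring sync  */
    #define MISS_FRAMES    3   /* consecutive misses tolerated while in sync */

    enum sync_state { HUNT, PRESYNC, IN_SYNC };

    struct framer {
        enum sync_state state;
        int count;
    };

    /* Call once per superframe with "was the SYNC word seen where expected?". */
    void framer_step(struct framer *f, bool match)
    {
        switch (f->state) {
        case HUNT:                       /* searching bit by bit */
            if (match) { f->state = PRESYNC; f->count = 1; }
            break;
        case PRESYNC:                    /* candidate found, validating */
            if (!match) { f->state = HUNT; break; }
            if (++f->count >= CONFIRM_FRAMES) { f->state = IN_SYNC; f->count = 0; }
            break;
        case IN_SYNC:                    /* locked; tolerate a few misses */
            if (match) f->count = 0;
            else if (++f->count >= MISS_FRAMES) f->state = HUNT;
            break;
        }
    }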

There is another criterion: a high density of transitions. Sometimes the server layer carries clock and data combined on a single signal (i.e. they are not separate). In that case, for the receiver to be able to delineate symbols in the stream, it is critical to ensure a large number of 0->1 and 1->0 transitions so that the clock signal can be recovered.
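
A candidate SYNC word can be scored on this criterion with a few lines of C; the all-ones word 0xFFFFFF, which has no transitions at all, is included just for contrast:

    #include <stdint.h>
    #include <stdio.h>

    /* Count 0->1 and 1->0 transitions between adjacent bits of an
     * n-bit word (MSB first). More transitions ease clock recovery. */
    static int bit_transitions(uint32_t word, int nbits)
    {
        int count = 0;
        for (int i = nbits - 1; i > 0; i--) {
            int a = (word >> i) & 1;
            int b = (word >> (i - 1)) & 1;
            if (a != b)
                count++;
        }
        return count;
    }

    int main(void)
    {
        printf("0xD21DB8: %d transitions\n", bit_transitions(0xD21DB8, 24));
        printf("0xFFFFFF: %d transitions\n", bit_transitions(0xFFFFFF, 24));
        return 0;
    }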

Hope this helps.

Updated: these presentations might be of interest too.


At the physical layer, another consideration (besides those mentioned in jldupont's answer) is that a sync word may be used to synchronise the receiver's communication clock to that of the sender. Synchronisation may only require zeroing the receiver's clock, but it may also involve changing the frequency of the clock to match the sender's more closely.

For a typical asynchronous protocol, the sender and receiver are required to have clocks running at the same nominal rate. In reality, of course, the clocks are never precisely the same, so a maximum error is normally specified. (For a standard 8N1 frame, for instance, the last data bit is sampled about 9.5 bit times after the start edge, so the combined clock error must stay below roughly 0.5/9.5 ≈ 5% for that sample to land in the correct bit.)

Some protocols don't require the receiver to adjust its clock rate, but instead tolerate the error by oversampling or some other method. For example, a typical UART copes with the error by zeroing its timing on the first edge of the start bit, and thereafter taking samples at the points where it expects the middle of each bit to be. In this case, the sync word is just the start bit, which guarantees a transition at the start of the message.
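
A sketch of that mid-bit sampling logic, with hypothetical rx_line() and wait_ticks() hooks standing in for the real 16x-oversampling hardware:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical hardware hooks, assumed for the sketch. */
    extern bool rx_line(void);      /* current level of the RX pin       */
    extern void wait_ticks(int n);  /* wait n ticks of the 16x clock     */

    /* Classic 16x-oversampling UART receive: lock onto the falling edge
     * of the start bit, then sample each data bit at its expected middle. */
    uint8_t uart_rx_byte(void)
    {
        while (rx_line())       /* idle line is high; wait for start edge */
            ;
        wait_ticks(8);          /* middle of the start bit                */

        uint8_t byte = 0;
        for (int i = 0; i < 8; i++) {   /* 8 data bits, LSB first */
            wait_ticks(16);             /* middle of the next bit */
            if (rx_line())
                byte |= (uint8_t)(1u << i);
        }
        wait_ticks(16);         /* stop bit (not checked in this sketch)  */
        return byte;
    }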

In the HART industrial protocol, the sync word is 0xFF, plus a zero parity bit, repeated a number of times. This is represented as an analogue waveform, encoded using FSK, and appears as 8 periods (equal to 8 bit times) of a 1200 Hz sinusoidal wave, followed by one bit time at 2200 Hz. This pattern allows the receiver to detect that there is a valid signal, and then synchronise to the start of a byte by detecting the transition from 2200 Hz back to 1200 Hz. If required, the receiver can also use this waveform to adjust its clock.
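
As an illustration only, the following sketch generates the waveform described above as continuous-phase FSK samples; the 48 kHz sample rate is an arbitrary assumption, and real HART framing (start bit, repeated preamble bytes) is omitted:

    #include <math.h>
    #include <stdio.h>

    #define PI          3.14159265358979323846
    #define SAMPLE_RATE 48000.0   /* arbitrary choice for the sketch */
    #define BIT_RATE    1200.0    /* HART bit rate                   */

    int main(void)
    {
        /* 8 bit times at 1200 Hz (the 0xFF bits), then 1 at 2200 Hz
         * (the zero parity bit). */
        const double freqs[9] = { 1200, 1200, 1200, 1200,
                                  1200, 1200, 1200, 1200, 2200 };
        const int samples_per_bit = (int)(SAMPLE_RATE / BIT_RATE);
        double phase = 0.0;

        for (int bit = 0; bit < 9; bit++)
            for (int s = 0; s < samples_per_bit; s++) {
                phase += 2.0 * PI * freqs[bit] / SAMPLE_RATE; /* continuous phase */
                printf("%f\n", sin(phase));
            }
        return 0;
    }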
