I’m using a data transmission system that uses a fixed SYNC word (0xD21DB8) at the beginning of every superframe. I’d be curious to know how such SYNC words are chosen, i.e. what criteria designers use to pick the length and the value of such a SYNC word.
In short:

- high probability of uniqueness
- high density of transitions
It depends on the underlying “server layer” (in communication terms). If that server layer doesn’t provide a means of distinguishing payload data from control signals, then a protocol must be devised. It is common in synchronous, bit-stream-oriented transport layers to rely on a SYNC pattern in order to delineate payload units. A good example of this technique is SONET/SDH/OTN, the major optical transport technologies.
Usually, the main criterion for choosing a SYNC word is high probability of uniqueness. Of course, what makes it unique depends on the encoding used for the payload.
Example: in SONET/SDH, once the SYNC word has been found, it is validated over a number of consecutive superframes (I don’t remember exactly how many) before a valid sync state is declared. This is required because false positives can occur: payload encoding on a synchronous bit stream cannot be guaranteed to avoid reproducing the SYNC pattern within the data.
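To make the validation idea concrete, here is a minimal sketch in C of a bit-level frame aligner: hunt for the pattern bit by bit, then require it to reappear at the expected position for a few consecutive superframes before declaring sync. The superframe length and the number of confirmations are hypothetical placeholders, not the actual SONET/SDH values.

```c
#include <stdint.h>
#include <stdio.h>

#define SYNC_WORD     0xD21DB8u  /* 24-bit SYNC pattern from the question        */
#define SYNC_BITS     24
#define SYNC_MASK     0xFFFFFFu
#define FRAME_BITS    1024       /* hypothetical superframe length, in bits      */
#define CONFIRMATIONS 3          /* hypothetical number of frames needed to lock */

enum state { HUNT, PRESYNC, LOCKED };

struct aligner {
    uint32_t   shiftreg;  /* last 24 received bits                         */
    enum state st;
    unsigned   bit_cnt;   /* bits received since the last sync-word match  */
    unsigned   confirms;  /* consecutive frames in which the sync recurred */
};

/* Feed one received bit; returns 1 once frame alignment is declared. */
static int aligner_push(struct aligner *a, unsigned bit)
{
    a->shiftreg = ((a->shiftreg << 1) | (bit & 1u)) & SYNC_MASK;
    a->bit_cnt++;

    switch (a->st) {
    case HUNT:
        /* Bit-by-bit correlation of the stream against the fixed pattern. */
        if (a->shiftreg == SYNC_WORD) {
            a->st = PRESYNC;
            a->bit_cnt = 0;
            a->confirms = 0;
        }
        break;
    case PRESYNC:
    case LOCKED:
        /* Only test where the sync word is *expected*: exactly one
           superframe after the previous occurrence.                 */
        if (a->bit_cnt == FRAME_BITS) {
            a->bit_cnt = 0;
            if (a->shiftreg == SYNC_WORD) {
                if (++a->confirms >= CONFIRMATIONS)
                    a->st = LOCKED;  /* a payload alias is now very unlikely */
            } else {
                a->st = HUNT;        /* candidate was a false positive       */
            }
        }
        break;
    }
    return a->st == LOCKED;
}

int main(void)
{
    struct aligner a = { 0, HUNT, 0, 0 };
    uint32_t lfsr = 0xACE1u;  /* toy 16-bit LFSR standing in for payload bits */

    /* Synthesize a stream of superframes: sync word followed by payload. */
    for (int frame = 0; frame < 5; frame++) {
        for (int i = SYNC_BITS - 1; i >= 0; i--)
            aligner_push(&a, (SYNC_WORD >> i) & 1u);
        for (int i = 0; i < FRAME_BITS - SYNC_BITS; i++) {
            lfsr = (lfsr >> 1) ^ (-(lfsr & 1u) & 0xB400u);
            if (aligner_push(&a, lfsr & 1u)) {
                printf("locked during frame %d\n", frame);
                return 0;
            }
        }
    }
    printf("never locked\n");
    return 0;
}
```

Real aligners typically add hysteresis in the other direction too: once locked, a single corrupted sync word does not immediately drop the receiver back to the hunt state.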
There is another criterion: high density of transitions. Sometimes the server layer carries clock and data on the same signal (i.e. they are not separate lines). In this case, for the receiver to be able to delineate symbols from the stream, it is critical to ensure a high number of 0->1 and 1->0 transitions in order to extract the clock signal.
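As a quick illustration of the transition-density criterion, this sketch counts the bit transitions inside a 24-bit candidate word by XOR-ing the word with itself shifted by one bit; the comparison word 0xFFF000 is just a made-up low-transition example.

```c
#include <stdint.h>
#include <stdio.h>

/* Count 0->1 and 1->0 transitions between adjacent bits of a 24-bit
   word: w ^ (w >> 1) sets one bit per transition, so popcount the
   result over the 23 adjacent bit pairs.                            */
static int transitions24(uint32_t w)
{
    uint32_t t = (w ^ (w >> 1)) & 0x7FFFFFu;  /* 23 pairs -> 23 bits */
    int n = 0;
    while (t) {
        n += (int)(t & 1u);
        t >>= 1;
    }
    return n;
}

int main(void)
{
    /* 0xD21DB8 = 1101 0010 0001 1101 1011 1000 in binary */
    printf("0xD21DB8: %d of 23 possible transitions\n", transitions24(0xD21DB8u));
    /* A deliberately poor candidate: long runs, a single transition. */
    printf("0xFFF000: %d of 23 possible transitions\n", transitions24(0xFFF000u));
    return 0;
}
```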
Hope this helps.
Update: these presentations might be of interest too.