I'm looking for a forward error-correcting code that is relatively easy/fast to encode on a microcontroller; decode will be done on a PC so it can be more complicated.
I don't know that much about error-correcting codes, and apart from simple Hamming codes they all seem to be more complicated than I can handle.
Any recommendations?
edit: I'm going to cut things short and accept Carl's answer... I guess there were two things I didn't mention:
(1) I don't strictly need the error correction; it's just advantageous for me, and I figured there might be an error-correction algorithm that gave a reasonable benefit for minimal cost. Hamming codes are probably about the right fit, and even they seem like they might be too costly for my encoding application (rough sketch below).
(2) The greater advantage than the error correction itself is the ability to resync properly to packets that follow an error (if I get out of sync for a long time, that's bad). So I think it's just better if I keep things simple.
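For reference, here's roughly what the encode side of a Hamming(7,4) code looks like; just a sketch, a handful of XORs per nibble (the bit layout here is arbitrary; anything consistent with the PC-side decoder works):

    #include <stdint.h>

    /* Hamming(7,4): encode a 4-bit nibble into a 7-bit codeword.
       Layout (LSB first): d0 d1 d2 d3 p0 p1 p2 -- one of several
       common layouts; any layout works as long as the decoder
       uses the same one. */
    static uint8_t hamming74_encode(uint8_t nibble)
    {
        uint8_t d0 = (nibble >> 0) & 1;
        uint8_t d1 = (nibble >> 1) & 1;
        uint8_t d2 = (nibble >> 2) & 1;
        uint8_t d3 = (nibble >> 3) & 1;

        uint8_t p0 = d0 ^ d1 ^ d3;   /* parity over data bits 0, 1, 3 */
        uint8_t p1 = d0 ^ d2 ^ d3;   /* parity over data bits 0, 2, 3 */
        uint8_t p2 = d1 ^ d2 ^ d3;   /* parity over data bits 1, 2, 3 */

        return (uint8_t)(nibble | (p0 << 4) | (p1 << 5) | (p2 << 6));
    }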
I haven't quite gotten straight how much overhead you can afford. In your comment you say a 16-bit error detection/correction code is about right, but you don't specify how large a block you're thinking of attaching that to. To be meaningful, you should probably express the allowable overhead as a percentage: 16 bits of error correction for 64 bits of data is a lot different from 16 bits of error correction for a kilobyte of data.
If you can afford something like 15-20% overhead, you can probably use a convolutional code with a Viterbi decoder. This is highly asymmetric -- the convolutional encoder is quite simple (basically a shift register, with output taps leading to XORs). A really large one might use a 16-bit register with a half dozen or so XORs.
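To give a sense of how small that encoder side is, here's a rough C sketch of a rate-1/2 convolutional encoder. The K=7 tap polynomials (171/133 octal) are just a familiar example, not a recommendation, and bit-ordering conventions vary between descriptions -- use whatever your decoder expects:

    #include <stdint.h>

    /* Rate-1/2 convolutional encoder sketch.  The generator
       polynomials below are the common K=7 pair, written here with
       the newest input bit at the LSB of the state register. */
    #define G1 0x79  /* 171 octal */
    #define G2 0x5B  /* 133 octal */

    static uint8_t parity(uint8_t x)      /* XOR of all set bits */
    {
        x ^= x >> 4;
        x ^= x >> 2;
        x ^= x >> 1;
        return x & 1;
    }

    /* Shift one input bit into the register and emit two coded bits. */
    static void conv_encode_bit(uint8_t *state, uint8_t in_bit,
                                uint8_t *out0, uint8_t *out1)
    {
        *state = (uint8_t)((*state << 1) | (in_bit & 1));
        *out0  = parity(*state & G1);
        *out1  = parity(*state & G2);
    }

Each input bit costs one shift and two masked parities, which is why the encoder fits comfortably on a microcontroller.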
Fortunately you have a heavier-duty computer to handle the decoding, because a Viterbi decoder can be a fearsome beast. Especially as you use a larger encoder to reduce the overhead, the size of the decoder explodes: the number of decoder states grows exponentially with the encoder's constraint length (the length of that shift register).
Turbo codes were mentioned. These can make better use of available bandwidth than convolutional codes with Viterbi decoders -- but they use a considerably more complex encoder -- a minimum of two convolutional coders of a specific type (recursive systematic convolutional encoders). As such, they don't seem to fit your specification as well.
The problem with error-correcting codes is that they'll let you recover from single-bit or maybe two-bit errors, but usually not detect or patch up major damage.
Thus, my recommendation would be instead to divide your data streams up into blocks (1 KB, 10 KB, ... 1 MB at most) and calculate a checksum for each block. Then, when the data arrives on the other CPU, you can ascertain whether it is correct and request retransmission of that block if not. So the receiving computer would either acknowledge and wait for the next block, or negative-acknowledge and expect a re-send.
Yes, we're implementing a subset of TCP/IP here. But there's a reason this protocol was so successful: It works!
For a checksum, I'd recommend CRC-32. It requires a table of 256 32-bit values and some reasonably simple computation (array indexing, shifts, and XOR, mostly), so it's fairly easy for a "dumb" CPU to compute.
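For reference, a sketch of the usual table-driven CRC-32 (the reflected zlib/Ethernet polynomial, 0xEDB88320); the per-byte work is one table lookup, a shift, and a couple of XORs:

    #include <stddef.h>
    #include <stdint.h>

    static uint32_t crc_table[256];

    /* Build the 256-entry table once at startup. */
    static void crc32_init(void)
    {
        for (uint32_t i = 0; i < 256; i++) {
            uint32_t c = i;
            for (int k = 0; k < 8; k++)
                c = (c & 1) ? (c >> 1) ^ 0xEDB88320u : c >> 1;
            crc_table[i] = c;
        }
    }

    /* CRC of a buffer: one lookup, one shift, two XORs per byte. */
    static uint32_t crc32(const uint8_t *buf, size_t len)
    {
        uint32_t c = 0xFFFFFFFFu;
        while (len--)
            c = crc_table[(c ^ *buf++) & 0xFF] ^ (c >> 8);
        return c ^ 0xFFFFFFFFu;
    }

If the 1 KB table is too big for the microcontroller, the same CRC can be computed bit-by-bit with no table at all, at the cost of eight shifts per byte.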
I'd suggest using a packet-based form of forward error correction. If you have to send six equal-length packets, send each of them with enough information to identify it as "packet 1 of 6", "2 of 6", etc., along with one more packet whose first payload byte is the XOR of the first payload byte of packets 1-6, whose second payload byte is the XOR of the second byte of packets 1-6, and so on. Code which receives any six packets out of the seven total will be able to reconstruct the missing one. As a slight enhancement, use one "parity" packet for the even-numbered packets and another for the odd ones. If you do that, the system will be able to recover from any burst error which is no longer than a packet.
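A rough sketch of the single-parity-packet version of that scheme (payload length and group size here are placeholders, not part of the suggestion above):

    #include <stdint.h>
    #include <string.h>

    #define PAYLOAD_LEN 64   /* placeholder payload size */
    #define GROUP_SIZE   6   /* data packets per parity packet */

    /* Parity packet: byte i is the XOR of byte i of every data
       packet in the group. */
    static void build_parity(const uint8_t data[GROUP_SIZE][PAYLOAD_LEN],
                             uint8_t parity[PAYLOAD_LEN])
    {
        memset(parity, 0, PAYLOAD_LEN);
        for (int p = 0; p < GROUP_SIZE; p++)
            for (int i = 0; i < PAYLOAD_LEN; i++)
                parity[i] ^= data[p][i];
    }

    /* Rebuild one missing data packet by XORing the parity packet
       with every packet that did arrive. */
    static void recover_missing(const uint8_t data[GROUP_SIZE][PAYLOAD_LEN],
                                const uint8_t parity[PAYLOAD_LEN],
                                int missing_index,
                                uint8_t out[PAYLOAD_LEN])
    {
        memcpy(out, parity, PAYLOAD_LEN);
        for (int p = 0; p < GROUP_SIZE; p++)
            if (p != missing_index)
                for (int i = 0; i < PAYLOAD_LEN; i++)
                    out[i] ^= data[p][i];
    }

The odd/even enhancement is the same idea run twice, once over the even-numbered packets and once over the odd ones.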