[mod.protocols] Erroneous posting on CRC-16

jr@CC5.BBN.COM (John Robinson) (09/08/86)

As numerous people have pointed out, I was a little too pessimistic in
my rebuttal to the MNP interview about the ability of 16-bit CRCs
to detect errors.  Yes, any pattern of errors that is 16 bits or less
in extent is detected; the failing patterns I referred to were all 17
bits in length and contained only 4 errors.
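To make this concrete, here is a small sketch of my own (not from the
original discussion), assuming the common CCITT generator polynomial
x^16 + x^12 + x^5 + 1 as the example: flipping the 4 message bits that
correspond to the generator's own terms -- a pattern exactly 17 bits in
extent -- leaves the CRC unchanged, since any error pattern divisible by
the generator is undetected.

```python
def crc16_ccitt(data: bytes, crc: int = 0x0000) -> int:
    """Bitwise CRC-16 using the CCITT polynomial x^16 + x^12 + x^5 + 1
    (0x1021), shifted MSB-first; zero initial value assumed for
    illustration (the shortest-undetected-pattern argument holds for
    any initial value)."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def flip_bit(buf: bytearray, i: int) -> None:
    """Flip bit i of the message, numbering bits MSB-first from 0."""
    buf[i // 8] ^= 0x80 >> (i % 8)

msg = bytearray(b"\x12\x34\x56\x78")
bad = bytearray(msg)
# Flip the 4 bits matching the generator's terms (x^16, x^12, x^5, x^0):
# offsets 0, 4, 11, 16 within a 17-bit span, here starting at bit 3.
for off in (0, 4, 11, 16):
    flip_bit(bad, 3 + off)

assert bad != msg                                          # 4 bits differ
assert crc16_ccitt(bytes(msg)) == crc16_ccitt(bytes(bad))  # CRC unchanged
```

The generator polynomial itself has 4 nonzero terms spanning 17 bit
positions, and every undetected pattern is a multiple of it, so this is
the shortest possible undetected error -- which is exactly the
4-out-of-17 case discussed below.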

Commenting on his examples of 4-out-of-17-bit errors that went
undetected, Marc Kaufman <kaufman@shasta.stanford.edu> wrote
(reproduced here for the benefit of the telecomm list):

    Note that my example was 17 bits ... The problem is, that this represents
    only three (3) symbols in the newer RF modems (64- or 256- QAM, etc.).  In
    addition, a symbol error in a differentially encoded modem will cause
    an error in the next symbol (as the phase is corrected).  With trellis
    coding, the error may propagate to two or more symbols before the codes
    get back to normal.  Just possibly (I have not looked in detail) the
    generated errors may just cancel the CRC error in some cases.  In any
    event, short polynomials become less useful as the number of bits in
    a symbol increases.  Either the makers of modems should select their
    transition rules to preserve effectiveness of the CRC, or we should
    look for better error checking polynomials.

I second this concern.  Each protocol may look fine on its own, yet the
combination may be far less bullet-proof than assumed, because errors
made at one level may have a far lower than expected probability of
detection at the next.  Modem vendors (Microcom included) probably
should pay some heed; likewise, a similar concern applies to the choice
of redundancy check applied at the Transport layer in (ISO or DoD) IP.
There should be a nice article or two in here somewhere.

/jr