don@allegra.UUCP (D. Mitchell) (03/29/84)
Subject: error detecting codes

Here are the results of an experiment to test the quality of various error detecting codes. Numbers below are failures to detect errors out of 131072 tests on 128-bit messages. All error codes are 16 bits except the PUP, which is 32.

                       Number of Bits Incorrect
    Protocol      1     2    3    5    8   13   26   Method
    X.25          0     0    0    0    5    0    3   CRC 16
    TCP/IP        0  3699  337   94   19    6    5   1's comp. sum
    PUP-I         0  2925  256   62   17    0    0   sum and rotate
    Brand X       0  7327    0    0  100    0    5   16 parities
    uucp       7687  3353  913  141   40    4    3   Ad Hoc (hash)
    (hash)        0     0    9    2    2    0    0   modulo 65521
    (crypto)      4     1    3    4    1    3    2   DES CBC

Comments: A good error code means more than a good hashing algorithm. DES is a nearly perfect hash, but fails to detect 1-bit errors. CRC, in addition to being a good hash, detects ALL small-number-of-bits errors. UUCP is terrible! Its error code was tested as a hashing function, but not as an error code, from what I have heard. TCP and PUP error codes are fast to compute. Mod 65521 and DES are too expensive to really use. CRC can be done faster than many people realize. Note that mod 65521 is mathematically related to CRC, and is almost as good.
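For reference, here is a minimal sketch of the ones-complement sum that TCP/IP uses (the "1's comp. sum" row above). The function name and interface are mine, not from any posted implementation. The assertions after it illustrate why the 2-bit column is so bad: the sum is commutative, so swapping two 16-bit words leaves the checksum unchanged.

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of a 16-bit ones-complement checksum over a byte buffer.
 * Bytes are paired big-endian into 16-bit words; an odd trailing
 * byte is padded with zero.  Carries are folded back in
 * (end-around carry), and the result is complemented. */
uint16_t ones_comp_sum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;
    size_t i;

    for (i = 0; i + 1 < len; i += 2)
        sum += (uint32_t)((data[i] << 8) | data[i + 1]);
    if (len & 1)                        /* odd trailing byte, pad with 0 */
        sum += (uint32_t)(data[len - 1] << 8);
    while (sum >> 16)                   /* end-around carry fold */
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t)~sum;              /* complement of the folded sum */
}
```

Because addition commutes, any reordering of whole 16-bit words (a particular kind of multi-bit error) is invisible to this check, while every 1-bit error is caught.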
gwyn@brl-vgr.ARPA (Doug Gwyn) (03/31/84)
That was very interesting. You mean people really implement important communication protocols with such lousy ECCs? I thought everyone in that corner of the business had at least read Hamming's book on the subject. Tsk, tsk. Richard Lary told a funny story about the Chinese mathematician DEC kept around and let out of the closet just long enough to design an ECC for them whenever they needed one. The Digital Storage Architecture is supposed to have a phenomenally good ECC as a result.
rpw3@fortune.UUCP (04/04/84)
#R:allegra:-237000:fortune:26600005:000:1525 fortune!rpw3 Apr 4 00:42:00 1984

NOT FAIR! (to either PUP or TCP/IP)

Look, the PUP and TCP/IP (and the XNS/"IP") software checksums are NOT intended for catching data transmission errors. They expect to be used inside a link-level protocol which has a "reasonable" CRC. The stated purpose of the software checksum is to detect errors WITHIN a system (like a stuck bit on a bus), not between systems (although it helps there too, see below).

Experimental Ethernet (3 Mb/s) used a 16-bit CRC (not particularly swift). But what is the probability of BOTH a CRC-16 and a ones-complement-and-rotate checksum agreeing that a bad packet is good? (And besides having an error which is missed by both the CRC and the soft-check, oh yes, it must have a good destination network, host, socket, and protocol number. Right.)

Standard Ethernet (10 Mbit/s, DEC/Intel/Xerox) uses a CRC-32 at the link level, plus the usual 16-bit 1's-comp-add+rotate software checksum at the IP level. What is the probability of an undetected error THERE? (Same rules about address fields.)

The CRC-32 they use is supposed to be a very well studied polynomial developed for Autodin-II. It is initialized to all 1's so it doesn't have the old blanking (large bursts of zeros) problem.

Even if TCP/IP is running on top of plain old HDLC, you still have two very different nested error checks (like 3 Mb/s Ethernet).

Rob Warnock
UUCP:  {ihnp4,ucbvax!amd70,hpda,harpo,sri-unix,allegra}!fortune!rpw3
DDD:   (415)595-8444
USPS:  Fortune Systems Corp, 101 Twin Dolphin Drive, Redwood City, CA 94065
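The Autodin-II CRC-32 mentioned above can be sketched bit-at-a-time as below. This uses the reflected form of the polynomial (0xEDB88320), the all-ones preset that cures the leading-zeros blanking problem, and a final complement; the function name is my own, and a real link-level implementation would of course do it in hardware or with a byte table rather than a bit loop.

```c
#include <stdint.h>
#include <stddef.h>

/* Bit-at-a-time CRC-32 with the Autodin-II polynomial in reflected
 * form (0xEDB88320).  The shift register is preset to all ones, so
 * a run of leading zero bits still changes the result, and the
 * final value is complemented. */
uint32_t crc32_autodin(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;     /* all-ones preset */
    size_t i;
    int b;

    for (i = 0; i < len; i++) {
        crc ^= data[i];             /* fold next byte into low bits */
        for (b = 0; b < 8; b++) {
            if (crc & 1)
                crc = (crc >> 1) ^ 0xEDB88320u;
            else
                crc >>= 1;
        }
    }
    return ~crc;                    /* final complement */
}
```

The standard check value for this CRC over the ASCII string "123456789" is 0xCBF43926, which makes a handy self-test.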