simoni@strat.Stanford.EDU (Richard Simoni) (06/24/91)
In article <1991Jun24.175014.4049@waikato.ac.nz>, ldo@waikato.ac.nz
(Lawrence D'Oliveiro, Waikato University) writes:

> Let's see, typical error rates for computer storage are one incorrect
> bit read out of 10 ** 12.

If this were true I'd probably want parity.  Even if my computer only read
a million bytes per second (not a fast computer), I'd see an error every
1e12 / 8e6 = 1.25e5 seconds = 34.7 hours.

According to the TI 1989 MOS Memory Data Book, each memory chip (depending
on density) exhibits a typical soft error rate of .001 to .0035 per 1000
hours.  Assuming .003, a system with 64 memory chips (e.g., eight 1 MB
SIMMs) will flip a bit somewhere in memory every 7 months, on average.  At
this rate, most people probably don't need parity, but some do.

> Adding a parity bit detects half of these errors, so the number
> of undetected errors drops to one in 2 * 10 ** 12.

Parity detects all single-bit errors, which will obviously make up almost
all of the errors, so far more than half are detected.

Rich Simoni
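The arithmetic above can be checked with a quick back-of-the-envelope
script.  All constants are taken from the post itself (the claimed read
error rate and the TI data book soft error rate); this is a sketch, not a
measurement:

```python
# Sanity check of the figures quoted above.  Constants come from the
# post (claimed 1-in-10**12 read error rate, TI data book soft error
# rate of .003 per chip per 1000 hours), not from new measurements.

ERROR_RATE = 1.0 / 1e12        # one bad bit per 10**12 bits read (claimed)
READ_RATE = 1e6 * 8            # 1 MB/s expressed in bits/s

seconds_per_error = 1.0 / (ERROR_RATE * READ_RATE)
print(seconds_per_error)             # 125000.0 seconds
print(seconds_per_error / 3600)      # ~34.7 hours

SOFT_RATE_PER_CHIP = 0.003 / 1000    # soft errors per chip, per hour
CHIPS = 64                           # e.g. eight 8-chip 1 MB SIMMs

hours_per_flip = 1.0 / (SOFT_RATE_PER_CHIP * CHIPS)
print(hours_per_flip / (24 * 30))    # ~7.2 months between bit flips
```

Both results match the post: about 35 hours between read errors at 1 MB/s,
and about 7 months between soft-error bit flips across 64 chips.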
gbrown@nntp-server.caltech.edu (Glenn Christopher Brown) (06/24/91)
ldo@waikato.ac.nz (Lawrence D'Oliveiro, Waikato University) writes:

>Let's see, typical error rates for computer storage are one incorrect
>bit read out of 10 ** 12.

>Adding a parity bit detects half of these errors, so the number
>of undetected errors drops to one in 2 * 10 ** 12.

>Now, considering that you're using 12.5% more chips to achieve this
>(plus interface circuitry and, of course, the error detection
>support), is it worth it?

Parity checking in 8-bit systems detects all 1-bit errors within a byte.
Parity checking is only fooled when there are an even number of errors in
a byte (including the parity bit).  This means the chance that parity
checking will be fooled is on the order of 9*8 chances in (10^12)^2, or
_approximately_ 1 chance in 10^22.  (This is the probability that 2 errors
will occur at distinct locations in the same 9-bit byte.)  That means
errors are about 10 billion times less likely to go undetected, not just
2 times less likely.

Whether knowing when 9,999,999,999 out of 10,000,000,000 errors occur is
actually worth the cost of the extra chips is another question entirely.
(Note: the cost increase is more than just 12.5% more chips: much extra
glue logic must be implemented in order to take advantage of the extra
bit... especially since 9-bit TTL doesn't exist!  (to my knowledge))

I personally don't think it's worth it: how many computer companies do
you know of who implement parity checking in memory storage?

--Glenn Brown
  gbrown@tybalt.caltech.edu
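The even/odd argument above can be sketched with a toy even-parity check.
The function and bit layout are illustrative only; real hardware computes
parity in the memory controller, not in software:

```python
# Toy even-parity check over a 9-bit stored byte (8 data bits + 1 parity
# bit).  Illustrative sketch of why parity catches every odd number of
# flipped bits but is fooled by every even number.

def even_parity(bits):
    """Return 0 if the number of 1 bits is even, else 1."""
    return sum(bits) % 2

data = [1, 0, 1, 1, 0, 0, 1, 0]
stored = data + [even_parity(data)]   # parity bit makes the total even

# Any single flipped bit (data or parity) makes the total odd: detected.
one_flip = stored[:]
one_flip[3] ^= 1
print(even_parity(one_flip))          # 1 -> error detected

# A second flip makes the total even again: parity is fooled.
two_flips = one_flip[:]
two_flips[7] ^= 1
print(even_parity(two_flips))         # 0 -> error goes undetected
```

Since two independent errors must land in the same 9-bit group to go
undetected, the undetected-error rate falls from ~10^-12 to ~10^-22, the
factor of ten billion claimed in the post.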
ldo@waikato.ac.nz (Lawrence D'Oliveiro, Waikato University) (06/25/91)
Let's see, typical error rates for computer storage are one incorrect
bit read out of 10 ** 12.

Adding a parity bit detects half of these errors, so the number
of undetected errors drops to one in 2 * 10 ** 12.

Now, considering that you're using 12.5% more chips to achieve this
(plus interface circuitry and, of course, the error detection
support), is it worth it?

Lawrence D'Oliveiro                        fone: +64-71-562-889
Computer Services Dept                      fax: +64-71-384-066
University of Waikato             electric mail: ldo@waikato.ac.nz
Hamilton, New Zealand     37^ 47' 26" S, 175^ 19' 7" E, GMT+12:00

To someone with a hammer and a screwdriver, every problem looks
like a nail with threads.
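The 12.5% figure comes from adding one parity bit (hence one extra chip in
a by-1-organized bank) per eight data bits; a minimal arithmetic sketch:

```python
# Where the 12.5% chip overhead figure comes from: one parity bit per
# 8 data bits, so one extra by-1 chip per eight in each memory bank.

DATA_BITS = 8
PARITY_BITS = 1
overhead = PARITY_BITS / DATA_BITS
print(overhead * 100)            # 12.5 (percent more chips)

# The ratio is the same for a wider bus: a 32-bit word needs 4 parity
# bits, one per byte.
print((4 / 32) * 100)            # 12.5
```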
nagle@well.sf.ca.us (John Nagle) (06/26/91)
gbrown@nntp-server.caltech.edu (Glenn Christopher Brown) writes:

>ldo@waikato.ac.nz (Lawrence D'Oliveiro, Waikato University) writes:
>>Let's see, typical error rates for computer storage are one incorrect
>>bit read out of 10 ** 12.

Gee, if it's that bad, my IIci is getting a bad bit every two hours or so,
assuming 3 MIPS and two 32-bit memory accesses (one instruction, one data)
per instruction.

> I personally don't think it's worth it: How many computer companies
>do you know of who implement parity checking in memory storage?

IBM, Compaq, ... All those guys who are bigger than Apple.  Why do you
think SIMMs are designed for 9 chips?

John Nagle
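The "every two hours or so" estimate checks out under the poster's own
stated assumptions (the MIPS rating and access pattern are his, not
measured):

```python
# Back-of-the-envelope check of the IIci estimate above, using the
# poster's assumptions: 3 MIPS and two 32-bit accesses per instruction.

INSTRUCTIONS_PER_SEC = 3e6
ACCESSES_PER_INSTRUCTION = 2     # one instruction fetch, one data access
BITS_PER_ACCESS = 32

bits_per_sec = INSTRUCTIONS_PER_SEC * ACCESSES_PER_INSTRUCTION * BITS_PER_ACCESS
seconds_per_error = 1e12 / bits_per_sec
print(seconds_per_error / 3600)  # ~1.45 hours, i.e. "every two hours or so"
```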