rcd@ico.isc.com (Dick Dunn) (02/13/90)
ejp@bohra.cpg.oz (Esmond Pitt) writes:
> markh@attctc.Dallas.TX.US (Mark Harrison) writes:
> >...storing numeric values with 18-digit precision, ala COBOL and
> >the IBM mainframe.  This can be accomplished in 64 bits, and is probably
> >the reason "they" chose 18 digits as their maximum precision.
>
> According to a fellow who had been on the original IBM project in the
> fifties, the 18 digits came about because of using BCD (4-bit decimal)
> representation, in two 36-bit words.  (Hmmm...what about the sign?)

This is quite a digression, but...

The question of "why 18 digits?" came up at the History of Programming
Languages conference some years back.  I can't remember whether it was
Grace Hopper or Jean Sammet who answered, but the real answer was
specifically NON-machine-oriented: it was large enough to deal with
anticipated needs (the national debt?) and it didn't give any particular
advantage for any particular hardware.  That is, it was larger than the
"convenient" sizes for hardware of those days.
-- 
Dick Dunn    rcd@ico.isc.com    uucp: {ncar,nbires}!ico!rcd    (303)449-2870
   ...Mr. Natural says, "Use the right tool for the job."
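[Editor's aside, not part of the original post: the arithmetic behind both claims in the thread can be checked directly. The sketch below (Python, all values my own) verifies that 18 decimal digits fit in a signed 64-bit word, and that 18 BCD digits at 4 bits each exactly fill two 36-bit words, leaving no nibble for a sign.]

```python
# Claim 1: 18-digit precision "can be accomplished in 64 bits".
max_18_digits = 10**18 - 1           # largest 18-digit decimal number
assert max_18_digits < 2**63         # fits in a signed 64-bit integer
assert 10**19 - 1 >= 2**63           # 19 digits would overflow it

# Claim 2: 18 BCD digits (4 bits each) fill two 36-bit words exactly,
# which is why the sign question ("Hmmm...") arises.
bcd_bits = 18 * 4                    # 4 bits per packed BCD digit
assert bcd_bits == 2 * 36            # no bits left over for a sign

print(max_18_digits)                 # 999999999999999999
print(max_18_digits.bit_length())    # 60 bits of magnitude needed
```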