mccalpin@loligo.fsu.edu (John McCalpin) (01/02/89)
In article <10452@obiwan.mips.COM> mark@mips.COM (Mark G. Johnson) writes:
>
>The MIPS instruction set includes opcodes for manipulating IEEE-standard
>80-bit and 128-bit floating point numbers.  As I recall, the IEEE
>standard calls them double-extended (80b) and quad (128b).
>
> -- Mark Johnson
> MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086

What format is used for the IEEE 128-bit numbers?  All I have read is
the original proposed draft standard, and I don't recall this length.
Specifically, what size exponent does the 128-bit format use?

There is some hesitancy in the supercomputer community to switch to the
IEEE format because the exponent range of 64-bit numbers is so much
smaller than the range currently provided by Cray and CDC/ETA formats.
The IEEE 64-bit format allows a range of about 1.0e-308 to 1.0e+308,
while the Cray and CDC/ETA machines allow a range of about 1.0e-4000 to
1.0e+4000.  I do not believe that the 80-bit format increases the
exponent range.  It might help if the 128-bit format did allow this...

John D. McCalpin
mccalpin@masig1.ocean.fsu.edu
mccalpin@nu.cs.fsu.edu
mccalpin@fsu (BITNET or MFENET)
cjosta@taux01.UUCP (Jonathan Sweedler) (01/02/89)
In article <325@loligo.fsu.edu> mccalpin@masig1.ocean.fsu.edu (John D. McCalpin) writes:
>In article <10452@obiwan.mips.COM> mark@mips.COM (Mark G. Johnson) writes:
>>
>>The MIPS instruction set includes opcodes for manipulating IEEE-standard
>>80-bit and 128-bit floating point numbers.  As I recall, the IEEE
>>standard calls them double-extended (80b) and quad (128b).
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^          see below
>
>What format is used for the IEEE 128-bit numbers?  All I have read
>is the original proposed draft standard, and I don't recall this
>length.  Specifically, what size exponent does the 128-bit format use?
>
>There is some hesitancy in the supercomputer community to switch to the
>IEEE format because the exponent range of 64-bit numbers is so much
>smaller than the range currently provided by Cray and CDC/ETA formats.
>The IEEE 64-bit allows a range of about 1.0e-308 to 1.0e+308, while the
>Cray and CDC/ETA machines allow a range of about 1.0e-4000 to 1.0e+4000.
                                                 ^^^^^^^^^^^^^^^^^^^^^^  see below

The IEEE Standard 754 defines single extended and double extended
precision numbers (I'm not sure about 854, but I *guess* it's the same).
A single extended number is defined to have at least 43 bits in all,
with an exponent field of at least 11 bits and a fraction field of at
least 31 bits.  A double extended number is defined to have at least 79
bits in all, with an exponent field of at least 15 bits and a fraction
field of at least 63 bits.  Note that the EXACT size of these precisions
is not given.

With this definition, double extended precision numbers have an exponent
range of AT LEAST 1.0e-4931 to 1.0e+4931.  The Intel 80x87 line and the
Motorola 68881 line use a 15-bit exponent field (the minimum required
for double extended precision) and a 64-bit fraction field internally
(of course these particular widths are not REQUIRED by the IEEE
standard).
So not only does the IEEE standard allow formats that give the range of
current supercomputers, but some implementations even support it!  :-)

--
Jonathan Sweedler  ===  National Semiconductor Israel
UUCP:    ...!{amdahl,hplabs,decwrl}!nsc!taux01!cjosta
Domain:  cjosta@taux01.nsc.com
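The exponent ranges traded back and forth in this exchange follow directly from the exponent field widths. A rough Python sketch of that arithmetic, assuming the usual IEEE convention that the all-ones exponent value is reserved for infinities and NaNs:

```python
import math

def ieee_range(exp_bits):
    """Approximate decimal exponent range of an IEEE-style binary format
    with the given exponent field width (all-ones exponent reserved)."""
    emax = 2 ** (exp_bits - 1) - 1          # the bias, and the max exponent
    emin = 1 - emax                         # smallest normal exponent
    max_dec = (emax + 1) * math.log10(2)    # largest finite ~= 2**(emax+1)
    min_dec = emin * math.log10(2)          # smallest normal = 2**emin
    return math.floor(min_dec), math.floor(max_dec)

for bits in (8, 11, 15):
    print(bits, ieee_range(bits))
```

A 15-bit exponent field gives roughly 1.0e-4932 to 1.0e+4932, which comfortably covers the ~1.0e4000 Cray/CDC range mentioned earlier in the thread.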
PLS@cup.portal.com (Paul L Schauble) (01/03/89)
mccalpin@masig1.ocean.fsu.edu writes:
>There is some hesitancy in the supercomputer community to switch to the
>IEEE format because the exponent range of 64-bit numbers is so much
>smaller than the range currently provided by Cray and CDC/ETA formats.
>The IEEE 64-bit allows a range of about 1.0e-308 to 1.0e+308, while the
>Cray and CDC/ETA machines allow a range of about 1.0e-4000 to 1.0e+4000.

Now that's a curious answer.  A few weeks ago I asked this group about
the usage of the IEEE standard.  According to the responses, there
hasn't been a new design in several years that used anything else.  The
previous comment seems to contradict that answer.

My searches did turn up several variations on the IEEE standard, usually
in exponent length.  One common system provided:

- 32-bit real with 8-bit exponent, range 10**38
- 64-bit real with 11-bit exponent, range 10**308
- 128-bit real with 16-bit exponent, range 10**4000

I am left with several questions.  Is the IEEE standard really the thing
to use in a new design?  If not, what is?  Is the IEEE standard widely
used in Europe?  Does it have official standing?  Where can I get a
copy?

  ++PLS
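The 8- and 11-bit exponent fields in the first two entries of the list above can be inspected directly with Python's `struct` module (Python has no 128-bit float, so the last entry can't be checked the same way; the helper name here is my own):

```python
import struct

def exponent_field(value, fmt, exp_bits, total_bits):
    """Extract the biased exponent field from a big-endian IEEE encoding.
    fmt is a struct format string: '>f' for 32-bit, '>d' for 64-bit."""
    raw = int.from_bytes(struct.pack(fmt, value), "big")
    frac_bits = total_bits - 1 - exp_bits        # one sign bit up front
    return (raw >> frac_bits) & ((1 << exp_bits) - 1)

# 1.0 has unbiased exponent 0, so the field holds exactly the bias.
print(exponent_field(1.0, ">f", 8, 32))   # 127  = 2**7  - 1
print(exponent_field(1.0, ">d", 11, 64))  # 1023 = 2**10 - 1
```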
mccalpin@loligo.fsu.edu (John McCalpin) (01/04/89)
In article <13142@cup.portal.com> PLS@cup.portal.com (Paul L Schauble) writes:
>mccalpin@masig1.ocean.fsu.edu writes:
>
>>There is some hesitancy in the supercomputer community to switch to the
>>IEEE format because the exponent range of 64-bit numbers is so much
>>smaller than the range currently provided by Cray and CDC/ETA formats.
>
>Now that's a curious answer.  A few weeks ago I asked this group about
>the usage of the IEEE standard.  According to the responses, there
>hasn't been a new design in several years that used anything else.  The
>previous comment seems to contradict that answer.
>
>Is the IEEE standard really the thing to use in a new design?  If not,
>what is?

I think that the IEEE standard is a very good choice in a new design.
It has the advantage of being vendor-independent, and it is much more
reliable than most other floating-point formats.  It is rather expensive
to implement correctly, and this often translates into slower speed at a
fixed price.  But for a multi-million-dollar machine, the incremental
cost of getting the same speed with the IEEE format should not be
prohibitive.

Almost all workstations from independent vendors use the IEEE FP
formats, though they don't always handle all the exceptions correctly.
The large vendors (IBM, DEC, CDC) seem to have too much vested interest
in old binaries to want to change their machines.  I do hear from
occasionally reliable sources at CDC that the next ETA machine will use
the IEEE formats.

Of course, having the same FP format is only part of portability.  The
next level of headache involves incompatibility of the record types in
the files containing these binary numbers...

John D. McCalpin
mccalpin@masig1.ocean.fsu.edu
mccalpin@fsu (BITNET or MFENET)
khb%chiba@Sun.COM (Keith Bierman - Sun Tactical Engineering) (01/04/89)
In article <345@loligo.fsu.edu> mccalpin@loligo.cc.fsu.edu (John McCalpin) writes:
>I think that the IEEE standard is a very good choice in a new design.
>It has the advantage of being vendor-independent, and is much more
>reliable than most other floating-point formats.  It is rather expensive
>to do correctly, and this often translates into slower speed at a fixed
>price.  But for a multi-million dollar machine, the incremental cost of
>getting the same speed with the IEEE format should not be prohibitive.

Gradual underflow and divide are the sticking points for very high
(cost-is-no-object) performance.  They are both worth having (gradual
underflow especially for 32-bit machines), but they will cause the
biggest, "baddest" machines to be slower.  Seymour's machines do very
low-quality divides, but they are very fast.  If speed is the name of
the game (and in supercomputing it has been), then a non-IEEE divide is
a fact of life.

Until recently I was on the other side (the user side), and as a
provider of scientific software (Kalman filtering) I much preferred to
limit my concerns to the algorithm being designed and/or the problem
being solved.  Bad arithmetic looks just like certain types of
mismodelling, poor observability, etc.  Good arithmetic is worth having.
Missing the target is much worse than having to sweat to make the code
run fast enough.

Keith H. Bierman
It's Not My Fault ---- I Voted for Bill & Opus
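Gradual underflow, mentioned above, is what preserves the property that x - y == 0 exactly when x == y, even at the bottom of the exponent range. A small sketch in Python (whose floats are IEEE doubles):

```python
import sys

# Two distinct tiny doubles just above the underflow threshold.
a = 1.5 * 2.0 ** -1022
b = 1.0 * 2.0 ** -1022        # == sys.float_info.min, smallest normal

diff = a - b                  # exactly 2**-1023: subnormal, but nonzero

assert a != b
assert diff != 0.0            # gradual underflow: a != b  =>  a - b != 0
assert diff < sys.float_info.min
print(diff)
```

On flush-to-zero hardware the same subtraction would return 0.0 even though a != b, which is exactly the kind of silent surprise the IEEE rules were designed to prevent.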
mccalpin@loligo.fsu.edu (John McCalpin) (01/04/89)
In article <83596@sun.uucp> khb@sun.UUCP (Keith Bierman) writes:
>In article <345@loligo.fsu.edu> (John McCalpin) writes:
>
>>I think that the IEEE standard is a very good choice in a new design.
>>It is rather expensive to do correctly, and this often translates into
>>slower speed at a fixed price.
>
>Gradual underflow and divide are sticking points for very high (cost
>is no object) performance.  ...they will cause the biggest "baddest"
>machines to be slower.  Seymour's machines do very low quality
>divides, but they are very fast.  If speed is the name of the game
>(and in supercomputing it has been) then a non-IEEE divide is a fact
>of life.
>
>Keith H. Bierman

An option that seems to have been taken by many vendors is to adopt the
IEEE *format* without adopting all of the *rules*.  Admittedly, this is
a dangerous choice, but it does aid portability.  A particular example
is to simply not calculate the guard, round, and sticky bits in divides.
The user should try to replace divides with multiplies, which retain
full IEEE precision.  If the divide can't be replaced, then you just get
a less accurate answer (as on the Crays).  I don't know how to handle
gradual underflow, though I agree it is important...

The advantage of using the IEEE *formats* is that there is at least hope
that binary data files could be read on the front end.  Many, many hours
of CPU time on supercomputers are wasted on scalar data
analysis/graphics programs that should be run on a more cost-effective
front end, which is often where the files are actually stored anyway.

I recently ran the PARANOIA floating-point validation test on a wide
variety of machines and found precisely ONE that passed all the tests.
I later found two more, out of about 15 machines/vendors tested.  The
Sun workstations, HP workstations, and MIPS machines passed OK.  All the
supercomputers failed, of course!  :-)

John D. McCalpin
Supercomputer Computations Research Institute
mccalpin@masig1.ocean.fsu.edu
mccalpin@fsu (BITNET or MFENET)
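The divide-versus-multiply trade-off described above is visible even on a fully IEEE machine: a divide rounds once, while multiplying by a precomputed reciprocal rounds twice. A small Python sketch (Python floats are IEEE doubles; the integer ranges are arbitrary illustrative choices) counting pairs where the two disagree:

```python
# a / b rounds once; a * (1.0 / b) rounds twice (reciprocal, then
# product), so the two can land on different doubles.
mismatches = [(a, b)
              for a in range(1, 50)
              for b in range(1, 50)
              if a * (1.0 / b) != a / b]

print(len(mismatches), mismatches[:3])
```

The count is nonzero, so code that silently substitutes reciprocal-multiplies for divides is already giving up correctly rounded results, before any hardware shortcuts enter the picture.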
khb%chiba@Sun.COM (Keith Bierman - Sun Tactical Engineering) (01/05/89)
In article <350@loligo.fsu.edu> mccalpin@loligo.UUCP (John McCalpin) writes:
>
>An option that seems to be taken by many vendors is to adopt the IEEE
>*format* without adopting all of the *rules*.  Admittedly, this is a
>dangerous choice, but it does aid portability.  A particular example is
>to simply not calculate guard, round, and sticky bits in divides.
>
>The advantage of using the IEEE *formats* is that there is at least hope
>that binary data files could be read on the front end.  Many, many hours
>of CPU time on supercomputers are wasted on scalar data analysis/graphics
>programs that should be run on a more cost-effective front-end, which is
>often where the files are actually stored anyway.

If there is a 32-bit mode (and it's being used), I would be very worried
about the quality of a non-IEEE "algorithm" using IEEE-formatted
numbers.  For 64-bit machines the problem is not as severe... at least
in my experience the "real world" is 32-bit (the resolution of measuring
devices, combined with reasonable units), so the fact that the last
several bits are "corrupt" is not really a problem.

But in 32-bit mode all of the IEEE stuff really is a must... this is why
so much work is done in DP... 32 bits of "crummy" arithmetic just wasn't
good enough.  The sad thing is that many researchers insist on using
their nice new IEEE machines in 64-bit mode only...

Keith H. Bierman
It's Not My Fault ---- I Voted for Bill & Opus
mccalpin@loligo.fsu.edu (John McCalpin) (01/05/89)
In article <83722@sun.uucp> khb@sun.UUCP (Keith Bierman - Sun Tactical Engineering) writes:
>In article <350@loligo.fsu.edu> mccalpin@loligo.UUCP (John McCalpin) writes:
>>
>>An option that seems to be taken by many vendors is to adopt the IEEE
>>*format* without adopting all of the *rules*.  Admittedly, this is a
>>dangerous choice, but it does aid portability.
>
>If there is a 32-bit mode (and it's being used) I would be very worried
>about the quality of a non-IEEE "algorithm" using IEEE-formatted
>numbers.  For 64-bit machines the problem is not as severe...
>
>Keith H. Bierman

I certainly agree with that fear, and I insist on testing my 32-bit
codes pretty thoroughly before trusting them.

An interesting example (which I referred to in my paper in
SUPERCOMPUTING a few months back) is the 1000x1000 LINPACK benchmark
test.  The matrix is fairly poorly conditioned, so it requires some care
in 32-bit mode.  I include a sample of my results below.  The numbers
scale so that the absolute value of the exponent of the RMS error is
about the number of significant digits of accuracy in the solution.

machine               precision   RMS error in solution (*)
---------------------------------------------------------------------
Cyber 205 / ETA-10     32-bit     2.22521256e-01
IBM 3081               32-bit     2.37465184e-03
IEEE standard          32-bit     2.82104476e-04
---------------------------------------------------------------------
Cyber 205 / ETA-10     64-bit     1.32111221e-08
Cray X/MP              64-bit     2.47078473e-11
IEEE standard          64-bit     2.27274978e-13
---------------------------------------------------------------------
Cyber 205 / ETA-10    128-bit     1.60755733e-22
Cray X/MP             128-bit     4.15861230e-26
---------------------------------------------------------------------
NOTES:
(*) The solution vector consists of 1000 identical elements = 1.0
(1) The IEEE standard results were run on a Sun 3/280, which had passed
    the PARANOIA benchmark test.
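The precision gap in the table can be reproduced in miniature. The Python sketch below is not the LINPACK code: it solves a small ill-conditioned Hilbert system (an illustrative stand-in for the 1000x1000 matrix) by Gaussian elimination, once in native IEEE double and once with every arithmetic result rounded to 32-bit single via `struct`:

```python
import struct

def f32(x):
    """Round an IEEE double to the nearest IEEE single (32-bit) value."""
    return struct.unpack(">f", struct.pack(">f", x))[0]

def solve(a, b, rnd):
    """Gaussian elimination with partial pivoting; every arithmetic
    result passes through rnd() to simulate a working precision."""
    n = len(b)
    a = [[rnd(v) for v in row] for row in a]
    b = [rnd(v) for v in b]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = rnd(a[i][k] / a[k][k])
            for j in range(k, n):
                a[i][j] = rnd(a[i][j] - rnd(m * a[k][j]))
            b[i] = rnd(b[i] - rnd(m * b[k]))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i]
        for j in range(i + 1, n):
            s = rnd(s - rnd(a[i][j] * x[j]))
        x[i] = rnd(s / a[i][i])
    return x

# 5x5 Hilbert matrix: ill-conditioned, with exact solution of all ones
# when the right-hand side is the vector of row sums.
n = 5
hil = [[1.0 / (i + j + 1) for j in range(n)] for i in range(n)]
rhs = [sum(row) for row in hil]

def rms_error(x):
    return (sum((v - 1.0) ** 2 for v in x) / len(x)) ** 0.5

err32 = rms_error(solve(hil, rhs, f32))          # simulated 32-bit
err64 = rms_error(solve(hil, rhs, lambda v: v))  # native IEEE double
print(err32, err64)
```

The 32-bit error comes out several orders of magnitude larger than the 64-bit error, mirroring the pattern in the table above; the conditioning of the matrix, not the algorithm, sets how many of the available digits survive.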
lamaster@ames.arc.nasa.gov (Hugh LaMaster) (01/06/89)
In article <325@loligo.fsu.edu> mccalpin@masig1.ocean.fsu.edu (John D. McCalpin) writes:
>There is some hesitancy in the supercomputer community to switch to the
>IEEE format because the exponent range of 64-bit numbers is so much
>smaller than the range currently provided by Cray and CDC/ETA formats.
>The IEEE 64-bit allows a range of about 1.0e-308 to 1.0e+308, while the
>Cray and CDC/ETA machines allow a range of about 1.0e-4000 to 1.0e+4000.

I have never heard of any such hesitation about the IEEE exponent size
on the part of users, although there may be some.  Most users that I
know of would welcome having the same format on their supercomputer as
on their graphics engine (often IEEE), and the IEEE format is considered
among the best available by numerical analysts.  ("Welcome" seems a
little weak in retrospect.  Some people would kill for it.  Others
wouldn't care much, because they have resigned themselves to the
annoyance and performance penalties of frequent data conversion.)

The only hesitations I have heard have been from hardware designers who
don't like handling the IEEE underflow and exception requirements in
deeply pipelined machines.  However, it seems that if you accept a big
penalty for the usual unusual special cases, you can make it work just
fine.

--
Hugh LaMaster, m/s 233-9,     UUCP:  ames!lamaster
NASA Ames Research Center     ARPA:  lamaster@ames.arc.nasa.gov
Moffett Field, CA 94035       Phone: (415) 694-6117