[comp.arch] IBM 360 floating point round off problems

johnl@ima.ISC.COM (John R. Levine) (01/21/88)

In article <1069@cpocd2.UUCP> howard@cpocd2.UUCP (Howard A. Landman) writes:
>In article <4404@ecsvax.UUCP> hes@ecsvax.UUCP (Henry Schaffer) writes:
>>This tradeoff goes back to the original 360 - and seems to have originated
>>with the desire to have a floating point word fit in 4 bytes, and so the
>>exponent was in 1 byte.  In order to have an acceptable range of
>>magnitude, the exponent had to shift more than binary, and HEX was chosen.
>
>Given that hex rounding effectively trashes three bits of mantissa accuracy,
>it would have been just as good to use an 11-bit exponent and a 21-bit
>mantissa with normal rounding.  

The IBM Systems Journal published an article in 1964 discussing the design of
the 360's floating point system. Evidently they neglected to ask the advice of
numerical analysts, because they made some horrendous mistakes. Hex
normalization was perceived to expand exponent range and speed normalization
(which it certainly does), but they incorrectly assumed that the leading hex
digits of numbers would be uniformly rather than logarithmically distributed,
so their analysis of the number of significant bits retained was plain wrong.
They didn't know about the now-standard hidden leading bit trick, which works
with binary but not hex formats. And they truncate rather than round results,
again for speed, again not having understood the precision implications. The
original 360s didn't keep any guard digits when doing the arithmetic, which
caused results so awful that in 1965 they recalled all of the machines and
retrofitted guard digits, though by then it was too late to do anything about
the format. The 360/85 added a band-aid of quadruple precision that gives you
lots of bits if you're willing to pay the speed and space penalty.
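
To put a number on that wobble, here's a little C sketch of my own (nothing
like it appears in the Systems Journal article) that counts how many of the
24 fraction bits actually carry information for each possible leading hex
digit.  A leading digit of 1 gives you Howard's three lost bits; only leading
digits of 8 through 15 deliver the full 24.  And since leading digits of real
data cluster toward the small values, the wasteful cases are the common ones:

    #include <stdio.h>

    /* S/360 single precision keeps a 24-bit fraction, normalized so that
       the leading HEX digit is nonzero.  When that digit is small its top
       bits are zero and carry no information, so the effective precision
       wobbles between 21 and 24 bits. */
    static int effective_bits(unsigned lead_digit)   /* 1..15 */
    {
        int wasted = 0;
        while (!(lead_digit & 0x8)) {   /* count leading zero bits of the nibble */
            wasted++;
            lead_digit <<= 1;
        }
        return 24 - wasted;
    }

    int main(void)
    {
        unsigned d;
        for (d = 1; d <= 15; d++)
            printf("leading hex digit %2u: %d significant bits\n",
                   d, effective_bits(d));
        return 0;
    }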

I've always been baffled by the 360's awful floating point. IBM had, and has,
one of the world's great mathematics departments, and it's astounding that
nobody who knew anything about numerical analysis seems to have had anything
to do with the design of what they knew would be their main scientific line of
computers for many years. Other work from IBM has been first rate, such as a
report in the IBM Journal of R&D about a year ago describing the new Fortran
library, with breathtaking error analyses that would make any IEEE aficionado
proud.

>It was precisely the awfulness of IBM single precision that led Kernighan and
>Ritchie to make it a required feature of the C language that all floating
>point computations be done in double precision.

Well, actually, no.  It was a lot easier to generate floating point code for
the PDP-11 if you left the FPU in double precision mode than to flip it from
single to double mode and back, so that ended up being the way C did
arithmetic.  There was no 360 C compiler until some years after the 11's
floating point wartiness was enshrined in the C tradition.
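
For what it's worth, here's a toy C program (my own illustration, not from
K&R or any compiler's source) showing what the everything-in-double rule buys
you.  2^24 + 1 isn't representable in a 24-bit binary float, so a compiler
that does the addition in single precision rounds it away before you ever see
it, while the old rule keeps it exact until you store back into a float:

    #include <stdio.h>

    int main(void)
    {
        float big = 16777216.0;    /* 2^24: last point where a 24-bit     */
        float one = 1.0;           /* binary fraction can still step by 1 */

        /* Under the K&R rule both operands are widened to double and the
           sum 16777217.0 is exact, so it survives into a double target.
           A compiler that evaluates float expressions in single precision
           rounds back to 16777216.0 immediately, so what gets printed
           depends on how your compiler evaluates float arithmetic. */
        double kept = big + one;
        float  lost = big + one;   /* narrowed to float: 16777216.0 either way */

        printf("double target: %.1f\n", kept);
        printf("float  target: %.1f\n", (double)lost);
        return 0;
    }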
-- 
John R. Levine, IECC, PO Box 349, Cambridge MA 02238-0349, +1 617 492 3869
{ ihnp4 | decvax | cbosgd | harvard | yale }!ima!johnl, Levine@YALE.something
Gary Hart for President -- Let's win one for the zipper.