[comp.arch] IBM S/360 FP

johnl@esegue.segue.boston.ma.us (John R. Levine) (03/15/90)

In article <8370@hubcap.clemson.edu> mark@hubcap.clemson.edu (Mark Smotherman) writes:
>  [report of 1983 IBM justification of the 360's floating point]

There was an article in the IBM Systems Journal in 1964 explaining the design
of the FP system.  They did indeed get a big speed increase by normalizing in
hex rather than binary, but they didn't understand the precision hit they were
taking.  In particular, their analysis assumed that leading digits of FP
fractions are uniformly distributed, but actually they're logarithmically
distributed.  This means that they thought there would be an average of one
leading zero bit, but actually there are closer to two.
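A quick C sketch of the two models for the top hex digit shows the size of
the gap.  This is my own back-of-the-envelope version, not the 1964
analysis: it puts the expected loss at roughly 0.7 bits under the uniform
assumption versus 1.5 under the logarithmic model, with a worst case of
three.

#include <stdio.h>
#include <math.h>

int main(void) {
    double uni = 0.0, logd = 0.0;
    for (int d = 1; d < 16; d++) {
        /* zero bits above the leading 1 of hex digit d */
        int lz = 3 - (int)floor(log2((double)d));
        uni  += lz / 15.0;                 /* digits 1..15 equally likely */
        logd += lz * log((d + 1.0) / d) / log(16.0);  /* Benford-type model */
    }
    printf("uniform assumption: %.2f leading zero bits\n", uni);   /* 0.73 */
    printf("logarithmic model:  %.2f leading zero bits\n", logd);  /* 1.50 */
    return 0;
}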

Also, the hidden bit trick common in binary FP systems doesn't work in larger
bases.  (In fairness, the PDP-6 floating point, which was designed at about
the same time, didn't use a hidden bit either, but it did let you use
fixed-point compare instructions to compare normalized floating point
numbers, a significant simplification in the CPU design.)  This lost
another bit.
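For what it's worth, the ordering trick survives in IEEE 754: in a
sign/magnitude format with the exponent field above the fraction,
normalized numbers order the same way as their underlying integer bits,
give or take a sign fix-up.  A C sketch (assumes IEEE doubles and 64-bit
integers, NaNs excluded; the PDP-6 avoided even the fix-up by storing
negatives two's-complemented):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Map a double's bit pattern to an unsigned key whose integer order
   matches the floating point order.  Non-negative values already
   compare correctly as raw bits; negatives compare backwards, so
   flip them. */
static uint64_t fp_key(double x) {
    uint64_t u;
    memcpy(&u, &x, sizeof u);
    return (u >> 63) ? ~u : (u | 0x8000000000000000ULL);
}

int main(void) {
    double v[] = { -2.5, -0.0, 0.0, 1.0, 1.5, 1e30 };
    for (int i = 0; i + 1 < 6; i++)
        printf("%g < %g: fixed-point compare says %d\n",
               v[i], v[i + 1], fp_key(v[i]) < fp_key(v[i + 1]));
    return 0;
}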

As another speed hack, they truncate rather than round FP results, which
loses another bit.  In aggregate, these three decisions lost three bits,
almost a full digit, compared to a binary FP format like the PDP-11's.
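The chopping penalty is easy to reproduce today, since C99 lets you switch
the hardware into truncating (round-toward-zero) arithmetic.  Chopping
always errs in the same direction, so the error accumulates instead of
partly cancelling.  A sketch (compile without aggressive FP optimizations
so the rounding mode is honored):

#include <stdio.h>
#include <fenv.h>
#pragma STDC FENV_ACCESS ON

static float sum(void) {
    float s = 0.0f;
    for (int i = 0; i < 10000000; i++)
        s += 0.1f;          /* exact answer would be 1,000,000 */
    return s;
}

int main(void) {
    fesetround(FE_TONEAREST);
    printf("rounded: %f\n", sum());
    fesetround(FE_TOWARDZERO);
    printf("chopped: %f\n", sum());   /* comes out noticeably lower */
    fesetround(FE_TONEAREST);
    return 0;
}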

>  In System/360, the loss of precision was quite acceptable for the long 
>format. ...

Uh huh.  When people moved their numerical programs from the 7094, which
had a binary 36-bit FP format, to the 360, in most cases they had to change
all of their REAL variables to double precision, at a severe cost in
memory, which was very expensive, and in performance, since they didn't
really need double precision, just something better than the 360's cruddy
single precision.  The name of the program they used to do that, SIFT, has
since become a generic term for source-to-source translators.

Furthermore, the original floating point units didn't even have guard digits!
The numerical analysis crowd screamed so loudly that IBM quickly added guard
digits to the FP units and field upgraded all of the existing machines for
free.
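For anyone who hasn't seen the guard digit effect: the classic toy case is
base 10 with 3-digit fractions, computing 1.00e1 - 9.93e0 (true answer
0.07).  Aligning the smaller operand shifts it right one digit, and without
a guard digit the shifted-out 3 is chopped off before the subtract even
happens.  A few lines of C mimicking the digit-by-digit arithmetic:

#include <stdio.h>

int main(void) {
    double a = 1.00e1;
    double b_no_guard  = 0.99  * 10.0;  /* 9.93 aligned, chopped to 0.99e1 */
    double b_one_guard = 0.993 * 10.0;  /* one guard digit keeps the 3 */
    printf("without guard digit: %.2f (should be 0.07)\n", a - b_no_guard);
    printf("with guard digit:    %.2f\n", a - b_one_guard);
    return 0;
}

The no-guard answer, 0.10, is off by over 40 percent from a single
subtraction.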

I have never understood why the 360's floating point was botched so badly.
As far back as the 1940s, and perhaps even before, IBM had excellent
numerical analysts on staff, and it seems strange to me that none of them
seems to have been involved in the 360 design.
-- 
John R. Levine, Segue Software, POB 349, Cambridge MA 02238, +1 617 864 9650
johnl@esegue.segue.boston.ma.us, {ima|lotus|spdcc}!esegue!johnl
"Now, we are all jelly doughnuts."

des@dtg.nsc.com (Desmond Young) (03/21/90)

In article <8370@hubcap.clemson.edu>, mark@hubcap.clemson.edu (Mark Smotherman) writes:
> From article <8888@boring.cwi.nl>, by dik@cwi.nl (Dik T. Winter):
> > complete (although one still wonders why they ever chose for hex arithmetic).
>   The Bendix G20 (1961) pioneered the use of a higher power of 2 as base
> by selecting base 8.  In the design of System/360, ..., we were using a

Actually, I wonder if that is the case.  Burroughs have used an octal base
since their first stack machines; the B5000(?) was circa 1960, so I suspect
their use may predate the Bendix.
  Anyway, a higher radix does lose some precision: when a number is
normalized, the leading digit may (or may not) still contain leading zero
bits, so significant bits can be lost.
There was a paper
  (pause while he digs through old boxes)...
  Yes:
      Richard P. Brent, "On the Precision Attainable with Various
      Floating-Point Number Systems", IEEE Transactions on Computers,
      Vol. C-22, No. 6, June 1973.

  My reading was that binary with an implicit leading bit is of course the
best, and that anything up to base 4 was acceptable.  However, there does
seem to be a benefit in choosing even base 8 over base 16; a rough sketch
of why follows.
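Under the usual logarithmic model of operand magnitudes, the expected
number of leading zero bits in the top digit of a normalized base-2^k
fraction grows with k.  A C sketch (mine, and much cruder than Brent's RMS
analysis):

#include <stdio.h>
#include <math.h>

int main(void) {
    for (int k = 1; k <= 4; k++) {            /* base = 2, 4, 8, 16 */
        int base = 1 << k;
        double expect = 0.0;
        for (int d = 1; d < base; d++) {
            /* Benford-type probability of leading digit d */
            double p = log((d + 1.0) / d) / log((double)base);
            /* zero bits above the leading 1 of digit d */
            int lz = (k - 1) - (int)floor(log2((double)d));
            expect += p * lz;
        }
        printf("base %2d: average leading zero bits = %.2f\n", base, expect);
    }
    return 0;
}

It prints 0.00, 0.50, 1.00, and 1.50 bits for bases 2, 4, 8, and 16, so
with the same fraction width base 8 keeps about half a bit more
significance than base 16 on average.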

  On another tack, Burroughs have traditionally also made decimal
machines.  The financial community loved them; I think they had 24
decimal digits of precision.  I do not need to add any more to a
previous posting about all those fractions of a cent adding up..
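The cent problem is worth making concrete: a binary machine cannot
represent 0.01 exactly, so a little residue joins every addition, while
decimal (or scaled integer) arithmetic is exact.  A C sketch:

#include <stdio.h>

int main(void) {
    double d = 0.0;
    long cents = 0;
    for (int i = 0; i < 1000000; i++) {
        d += 0.01;    /* 0.01 is inexact in binary: residue accumulates */
        cents += 1;   /* integer cents: exact */
    }
    printf("double sum: %.10f\n", d);   /* not exactly 10000 */
    printf("cents sum:  %ld.%02ld\n", cents / 100, cents % 100);
    return 0;
}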

Des.
   des@dtg.nsc.com

huck@aspen.IAG.HP.COM (Jerry Huck) (03/22/90)

In a response, lindsay@MATHOM.GANDALF.CS.CMU.EDU (Donald Lindsay)
writes:
>In article <3060@wtkatz.oakhill.UUCP> chinds@oakhill.UUCP 
>	(Chris Hinds) writes:
>>In last week's COMPCON presentation of the RS/6000, the
>>IBM speaker was asked if the new, fast fp multiply/
>>accumulate instruction was IEEE 754 compatible.  The
>>reply was 'no.'
>
>The machine doesn't have a multiply instruction, nor does it have an
>add instruction. It has a four-argument multiply-and-add instruction.
>One does multiplies, or adds, by the judicious use of nil arguments.

What are nil arguments?  One approach might be to feed the constants one
and zero into the multiplier and adder, but that has problems with handling
signed zeros.  How does the HW know for which operation to simply "pass"
an argument?  Sub-op bits?  A reserved register spec?

Thanks,
Jerry (huck@iag.hp.com)