[comp.arch] Inherent imprecision of floating point variables

leo@ehviea.ine.philips.nl (Leo de Wit) (06/28/90)

In article <1990Jun26.052624.16953@cs.umn.edu> thornley@cs.umn.edu (David H. Thornley) writes:
|In article <3806@memqa.uucp> r91400@memqa.uucp (Michael C. Grant) writes:
   [stuff left out]
|                                                        I remember a
|>VAX FORTRAN that provided a quad-precision floating point variable
|>(which even on a fully-loaded system was faster than my Z80's single
|>precision implementation)!
|
|Ah yes, the Z80.  I remember trying to write efficient arithmetic routines
|for it.  Rather difficult for a machine whose most sophisticated arithmetic
|instruction is a 16-bit add with carry!

But the Z80 had BCD; you could even rotate nibbles through A and (HL)
(a 3 nibble rotate, something like RRD and RLD if I remember
correctly).

B.T.W. I think whether you can write efficient arithmetic routines has
little to do with the sophistication of the arithmetic instructions.
Remember that fancy CISC instructions are sometimes sped up by writing
them out as sequences of more basic ones; also think of the RISC
philosophy.

For sentimental reasons, I redirected follow-ups to comp.arch.

    Leo.

baxter@ics.uci.edu (Ira Baxter) (06/28/90)

In <811@ehviea.ine.philips.nl> leo@ehviea.ine.philips.nl (Leo de Wit) writes:

[another thread on efficiency of implementation of arithmetic]

>But the Z80 had BCD; you could even rotate nibbles through A and (HL)
>(a 3 nibble rotate, something like RRD and RLD if I remember
>correctly).

The 8-bit Motorola 6800 had BCD add and subtract.
I made the mistake of implementing a floating
point system using them, on the grounds that it would make
conversion easier, and that for the commercial applications in which it
would sometimes be used, you didn't lose fractions of dollars.

What a mistake.  The BCD instructions turned out to be a total
loss when it came to multiplication and division.
The problem turned out to be converting each pair of nibbles into
a binary number so you could do binary multiplies
and divides.  (If you've ever tried doing BCD multiplies
on a machine without a BCD multiply instruction, you'll understand this.)
We got a 40% performance gain by switching from a two-nibble
representation to a base-100 representation, one "digit" per byte,
without losing the decimal flavor of the result.

I finally concluded that no binary machine had any excuse
for BCD instructions whatsoever.

--
Ira Baxter