[comp.arch] base 10 float hardware

kruger@16bits.dec.com (Hart for CCCP chief in '88) (01/15/88)

By definition, base 10 hardware must waste some circuitry, because you are
consciously deciding not to store the binary values 10-15, which you could
otherwise do. So you lose range, but gain precision for decimal calculations.

But this doesn't save you from precision errors; it only saves you from
DECIMAL precision errors. What about (x/3)*3 ?
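
To make that concrete, here is a rough simulation (my own sketch, assuming
a 7-significant-digit decimal significand and ignoring exponents entirely)
of what (x/3)*3 does on such a machine:

    /* Toy model: force every intermediate result to 7 significant decimal
     * digits by rounding through a decimal string.  (1.0/3.0)*3.0 comes
     * back as 0.9999999, so a decimal base is no help with thirds.
     */
    #include <stdio.h>
    #include <stdlib.h>

    static double round7(double x)      /* round to 7 significant digits */
    {
        char buf[32];
        sprintf(buf, "%.6e", x);        /* 1 digit + 6 after the point */
        return atof(buf);
    }

    int main(void)
    {
        double q = round7(1.0 / 3.0);   /* 0.3333333 */
        double r = round7(q * 3.0);     /* 0.9999999, not 1 */
        printf("%.7f\n", r);
        return 0;
    }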

Incidentally, I suggest that the example (x*2)/2 should probably have been
(x/2)*2.

dov

schmitz@FAS.RI.CMU.EDU (Donald Schmitz) (01/15/88)

>By definition, base 10 hardware must waste some circuitry, because you are
>consciously deciding not to store the binary values 10-15, which you could
>otherwise do. So you lose range, but gain precision for decimal calculations.

This assumes representing the mantissa and/or exponent in BCD, which I don't
think is what is meant by base 10 floating point.  Rather, the mantissa and
exponent would still be unsigned binary integers; the fp number would simply
represent mantissa * 10^exponent.  From a hardware standpoint, this would
complicate pre- and post-justification: rather than simply using a barrel
shifter, a multiplier/divider would be needed.
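
To illustrate, here is a minimal sketch (mine, not any particular design) of
the operand-alignment step for an add, assuming a binary integer significand
in both cases:

    /* Align the smaller operand before an add.  With a base-2 exponent this
     * is a shift; with a base-10 exponent each step of the exponent
     * difference costs a divide by ten (or a multiply by ten the other way),
     * hence the extra hardware.  expdiff is assumed to be >= 0.
     */
    #include <stdio.h>
    #include <stdint.h>

    static uint64_t align_base2(uint64_t sig, int expdiff)
    {
        return sig >> expdiff;          /* barrel shifter suffices */
    }

    static uint64_t align_base10(uint64_t sig, int expdiff)
    {
        while (expdiff-- > 0)
            sig /= 10;                  /* needs a divider/multiplier */
        return sig;
    }

    int main(void)
    {
        printf("%llu %llu\n",
               (unsigned long long)align_base2(5000000, 3),
               (unsigned long long)align_base10(5000000, 3));
        return 0;
    }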

If the same number of bits is used for the exponent as in a base 2
implementation, the range will increase.  Correspondingly, precision will
decrease somewhat given the same number of mantissa bits (this may not be
obvious, but no more unique fp values can be represented in the same number
of bits, so the expanded range must be spanned by a more spread-out set of
values).  I'm not sure what the real win is, since (as mentioned) there are
still numbers which can't be represented exactly, the gate count of a
hardware implementation is almost sure to increase, and the format will be
incompatible with the rest of the world.
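
A back-of-envelope comparison, assuming 23 significand bits and an 8-bit
exponent in both bases (exact figures depend on biasing and hidden-bit
conventions):

    /* Same bit budget, different base: the count of distinct significands
     * is fixed at 2^23, but the base-10 exponent spreads them over a far
     * larger range.
     */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double values  = pow(2.0, 23);      /* distinct significands */
        double digits  = log10(values);     /* about 6.9 decimal digits */
        double range2  = 127 * log10(2.0);  /* base 2 tops out near 1e38 */
        double range10 = 127;               /* base 10 tops out near 1e127 */
        printf("%.0f significand values (~%.1f digits)\n", values, digits);
        printf("top of range: base 2 ~1e%.0f, base 10 ~1e%.0f\n",
               range2, range10);
        return 0;
    }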

Donald.Schmitz	schmitz@fas.ri.cmu.edu

haynes@ucscc.UCSC.EDU.ucsc.edu (99700000) (01/15/88)

But you can do some interesting things with the binary values
10-15.  I believe most of the recent decimal floating point hardware
has allowed variable-length mantissas.  The Fairchild SYMBOL architecture
used one of the values 10-15 to indicate "exactly".  At about the same
time I played mentally with the complement of this, using one of the
values 10-15 to indicate 'fuzz', and worked out arithmetic rules for
handling fuzz so that it serves as an indicator of precision loss.

That is, if you have approximate data you can input a fuzz digit
following the last good digit.  Or if the hardware has to round off
or truncate, it can furnish a fuzz digit following the last good
digit.  The fuzz digits participate in arithmetic just like good
digits, except the result of doing any operation on fuzz is fuzz
(except multiplying fuzz by zero).  Everything a fuzz digit touches
turns to fuzz.
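
A toy rendering of those rules (my own sketch; single digits only, with
carries and the actual SYMBOL encoding ignored):

    /* Digits 0-9 are good; FUZZ, one of the otherwise unused BCD codes,
     * marks the first doubtful position.  Anything fuzz touches turns to
     * fuzz, except that multiplying fuzz by zero gives an exact zero.
     */
    #include <stdio.h>

    #define FUZZ 15

    static int fuzz_add(int a, int b)
    {
        if (a == FUZZ || b == FUZZ)
            return FUZZ;
        return (a + b) % 10;            /* low digit; carry ignored here */
    }

    static int fuzz_mul(int a, int b)
    {
        if ((a == FUZZ && b == 0) || (b == FUZZ && a == 0))
            return 0;                   /* the one exception: 0 * fuzz = 0 */
        if (a == FUZZ || b == FUZZ)
            return FUZZ;
        return (a * b) % 10;            /* low digit of the product */
    }

    int main(void)
    {
        printf("%d %d %d\n",
               fuzz_add(3, FUZZ), fuzz_mul(7, FUZZ), fuzz_mul(0, FUZZ));
        return 0;
    }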

Another idea we were discussing years ago in Harry Huskey's class
was to represent decimal data in base 100 in 7 bits, or in base 1000
in 10 bits.  Both of these are more efficient than base 10 in 4 bits;
but at the time the hardware requirements were pretty formidable.
Maybe they wouldn't look so bad in today's technologies.
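
The packing itself is simple enough; this sketch (mine) just shows the two
encodings and the resulting bits per digit:

    /* Two decimal digits fit in 7 bits (0-99 < 128) and three fit in
     * 10 bits (0-999 < 1024), versus 4 bits per digit for plain BCD.
     */
    #include <stdio.h>

    static unsigned pack100(unsigned hi, unsigned lo)
    {
        return hi * 10 + lo;                /* 0..99, 7 bits */
    }

    static unsigned pack1000(unsigned a, unsigned b, unsigned c)
    {
        return (a * 10 + b) * 10 + c;       /* 0..999, 10 bits */
    }

    int main(void)
    {
        printf("BCD:       %.2f bits/digit\n", 4.0);
        printf("base 100:  %.2f bits/digit (max %u)\n", 7.0 / 2, pack100(9, 9));
        printf("base 1000: %.2f bits/digit (max %u)\n", 10.0 / 3, pack1000(9, 9, 9));
        return 0;
    }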

haynes@ucscc.ucsc.edu
haynes@ucscc.bitnet
..ucbvax!ucscc!haynes

ok@quintus.UUCP (Richard A. O'Keefe) (01/15/88)

In article <8801141342.AA15537@decwrl.dec.com>, kruger@16bits.dec.com (Hart for CCCP chief in '88) writes:
> By definition, base 10 hardware must waste some circuitry, because you are
> consciously deciding not to store the binary values 10-15, which you could
> otherwise do. So you lose range, but gain precision for decimal calculations.
You misunderstood.  I explicitly said that the exponent and significand
were represented in BINARY.  The idea is that you have
	Sign x Significand x 10**Exponent
where Sign is 1 or -1, Significand is a binary integer, and Exponent is a
binary integer.  The precision loss when scaling is intermediate between
that of base-8 (Burroughs) and base-16 (IBM) floating-point.  There is no
range loss (except that you can't use a "hidden bit", so ok, one bit loss).
With 32 bits, you'd get 1 bit for sign, 8 bits for exponent (10**-128 to
10**127, maybe use 10**-128 for NaNs), and 23 bits for significand (not the
24 you get from binary-with-hidden-bit) or roughly 7 decimal digits.
In any case, it isn't circuitry you'd be paying in.  (I believe the original
design was for a micro-coded floating-point unit.)  And the point of the
original article was that decimal scaling was very little harder than
binary scaling.
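
For concreteness, one way such a 32-bit word might be packed and decoded
(the field order and the bias of 128 are my guesses, not part of the design
described above):

    /* 1 sign bit, 8-bit exponent of TEN (biased by 128), 23-bit binary
     * integer significand; value = sign * significand * 10^exponent.
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <math.h>

    static uint32_t pack(int sign, int exp10, uint32_t sig)
    {
        return ((uint32_t)(sign < 0) << 31)
             | ((uint32_t)(exp10 + 128) << 23)
             | (sig & 0x7FFFFF);
    }

    static double value(uint32_t w)
    {
        int    sign  = (w >> 31) ? -1 : 1;
        int    exp10 = (int)((w >> 23) & 0xFF) - 128;
        double sig   = (double)(w & 0x7FFFFF);
        return sign * sig * pow(10.0, exp10);
    }

    int main(void)
    {
        uint32_t w = pack(+1, -1, 23);      /* 23 * 10**-1, i.e. 2.3 exactly */
        printf("%.1f\n", value(w));
        return 0;
    }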

> But this doesn't save you from precision errors; it only saves you from
> DECIMAL precision errors. What about (x/3)*3 ?
Decimal representation error is all the method is intended to save you from.
That matters because we WRITE numbers in base 10, not base 3.  The number
you write in your program or read from a file is EXACTLY the number
represented by the bits, in this scheme.  This was important in the BASIC
application I mentioned:  beginning students were very confused when they
would say	PRINT 2.3
and get		2.29999
back.  You are spared one roundoff error in input and one in output.
If you can avoid that without inefficiency, why not avoid it?
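
A small illustration of the difference, using binary single precision for
contrast (the decimal-scaled encoding of 2.3 is just significand 23 with
exponent -1, so there is nothing to round):

    /* The PRINT 2.3 problem in miniature: a binary float can only
     * approximate 2.3, so printing enough digits exposes the round-off.
     */
    #include <stdio.h>

    int main(void)
    {
        float x = 2.3f;
        printf("binary float 2.3   = %.9f\n", x);   /* 2.299999952 */
        printf("decimal-scaled 2.3 = 23 x 10^-1 (exact)\n");
        return 0;
    }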