[comp.arch] Decimal Arithmetic

greg@sce.carleton.ca (Greg Franks) (03/15/90)

In article <76700176@p.cs.uiuc.edu> gillies@p.cs.uiuc.edu writes:
>
>I'd like to know the exact reason for decimal arithmetic too.  

I believe that the COBOL world uses decimal arithmetic because
*floating point* does not retain enough precision.  On modern
machines, this issue may be moot. 
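
A quick illustration of the precision point, as a throwaway C sketch
(the loop count and amounts are picked arbitrarily): accumulating ten
cents in binary floating point drifts, while a scaled integer count of
cents stays exact.

#include <stdio.h>

int main(void)
{
    double d = 0.0;
    long   cents = 0;
    int    i;

    for (i = 0; i < 100000; i++) {
        d += 0.10;          /* 0.10 has no exact binary representation */
        cents += 10;        /* exact: ten cents per iteration */
    }
    printf("float total: %.10f\n", d);   /* slightly off from 10000 */
    printf("exact total: %ld.%02ld\n", cents / 100, cents % 100);
    return 0;
}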


-- 
Greg Franks   (613) 788-5726     Carleton University,             
uunet!mitel!sce!greg (uucp)  	 Ottawa, Ontario, Canada  K1S 5B6.
greg@sce.carleton.ca (bitnet)	 (we're on the internet too. (finally))
Overwhelm them with the small bugs so that they don't see the big ones.

gillies@p.cs.uiuc.edu (03/20/90)

Even today, maybe we don't know the full intellectual costs of using
binary numbers in computers.  I imagine Von Neumann weighed the
technical advantages, and ignored the human factors.  I imagine he &
his group did not foresee these costs:

(1) Professional programmers must be fluent in 3-4 bases
    (binary, octal/hex, decimal), boolean logic, and arithmetic systems
    (sign-magnitude, one's complement, two's complement).  This is no
    trouble to Von Neumann, but try to teach most high-school kids this.
(2) Numerical algorithms must deal with binary roundoff.
(3) Accounting algorithms must hassle with representing .1 in binary
(4) The industry struggled for years with 6/7-bit characters (too
    little), and finally settled on 8-bit characters (too much).  
    In decimal, 2-digit characters would have made sense on day 1.
(5) The necessity to learn / program conversion algorithms.

Can anyone think of other human costs of binary numbers?
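
To be concrete about point (5), this is the sort of conversion routine I
mean -- a throwaway C sketch, repeated division by the radix, with the
test values picked arbitrarily:

#include <stdio.h>

/* print n in the given base (2..16); digits come out least significant
   first by repeated division, then are emitted in reverse */
static void print_in_base(unsigned long n, unsigned base)
{
    const char digits[] = "0123456789abcdef";
    char buf[64];
    int  i = 0;

    do {
        buf[i++] = digits[n % base];
        n /= base;
    } while (n != 0);

    while (i-- > 0)
        putchar(buf[i]);
    putchar('\n');
}

int main(void)
{
    print_in_base(1990, 2);     /* 11111000110 */
    print_in_base(1990, 8);     /* 3706 */
    print_in_base(1990, 16);    /* 7c6 */
    return 0;
}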

dave@fps.com (Dave Smith) (03/20/90)

In article <76700180@p.cs.uiuc.edu> gillies@p.cs.uiuc.edu writes:
>
>Even today, maybe we don't know the full intellectual costs of using
>binary numbers in computers.  I imagine Von Neumann weighed the
>technical advantages, and ignored the human factors.  I imagine he &
>his group did not foresee these costs:
>
>(1) Professional programmers must be fluent in 3-4 bases
>    (binary, octal/hex, decimal), boolean logic, and arithmetic systems
>    (sign-magnitude, one's complement, two's complement).  This is no
>    trouble to Von Neumann, but try to teach most high-school kids this.

Base switching can be confusing, especially when someone has an octal number
printed out with no indication that it's in octal.  Boolean logic is the
basis of the system, however.  How would using a decimal system make
it different?  We'd still be and'ing and or'ing things, and doing and's and
or's in decimal is decidedly non-trivial to do in your head.
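
To make that concrete: mask-and-shift code like the fragment below reads
naturally in hex or binary and is painful to reason about in decimal.
(The status-word layout here is made up purely for illustration.)

#include <stdio.h>

int main(void)
{
    unsigned status = 0x5a3c;                    /* imaginary device status word */
    unsigned error_code = (status >> 8) & 0x0f;  /* bits 8-11 */
    unsigned ready      = status & 0x0001;       /* bit 0 */

    printf("error code = %u, ready = %u\n", error_code, ready);
    return 0;
}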

The system almost has to be binary at the base levels, unless you want to
run multiple voltage levels to represent the various numbers.  That's more
of a nightmare than I would like to think of.

Trying to teach most high-school kids their ABC's is a major problem.  
Having a different number base for computers wouldn't make them any easier
to understand.

>(2) Numerical algorithms must deal with binary roundoff.
That's the machine's problem.  You'd have to deal with decimal round-off
otherwise.  Once you know the why's and the algorithms, it's not tough to 
cope with.

>(3) Accounting algorithms must hassle with representing .1 in binary

They have to hassle with representing 1/3 in any format.

>(4) The industry struggled for years with 6/7-bit characters (too
>    little), and finally settled on 8-bit characters (too much).  
>    In decimal, 2-digit characters would have made sense on day 1.

8-bit characters are still too few for many languages.

>(5) The necessity to learn / program conversion algorithms.
>
>Can anyone think of other human costs of binary numbers?

I can think of one advantage.  Since I know octal, I write all the
PIN's for my cards that I don't use often in octal on the cards.  I can
do the conversion quickly but anyone who steals my cards will be quite
confused.
--
David L. Smith
FPS Computing, San Diego
ucsd!celerity!dave or dave@fps.com
"We are a bigger musical genius than any Bob Dylan" - Milli Vanilli

cik@l.cc.purdue.edu (Herman Rubin) (03/20/90)

In article <76700180@p.cs.uiuc.edu>, gillies@p.cs.uiuc.edu writes:
> 
> Even today, maybe we don't know the full intellectual costs of using
> binary numbers in computers.  I imagine Von Neumann weighed the
> technical advantages, and ignored the human factors.  I imagine he &
> his group did not foresee these costs:
> 
> (1) Professional programmers must be fluent in 3-4 bases
>     (binary, octal/hex, decimal), boolean logic, and arithmetic systems
>     (sign-magnitude, one's complement, two's complement).  This is no
>     trouble to Von Neumann, but try to teach most high-school kids this.

I would rather try to teach this to elementary school kids than to 
programmers.  It is trivial compared to the programming languages.

> (2) Numerical algorithms must deal with binary roundoff.

I can see no way that binary roundoff is any worse than decimal roundoff.
It is usually a lot better.  Besides, factors of 2 and 4 are very common
in numerical procedures, and factors of 10 are rare.  Also, such common
functions as square root and the elementary transcendental functions are
easier to deal with in binary.  The CORDIC algorithms are much easier in
binary than decimal.
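
For anyone who hasn't run into it, here is a bare-bones CORDIC rotation
sketch in C; the point is that every multiply is by 2^-i, which is just
a shift in binary.  Doubles keep the sketch short -- a real unit works
in fixed point, with the arctangents and the gain held in a small table.

#include <stdio.h>
#include <math.h>

#define ITERS 32

void cordic_sincos(double theta, double *s, double *c)
{
    double x = 1.0, y = 0.0, z = theta, K = 1.0;
    int i;

    for (i = 0; i < ITERS; i++) {
        double t = ldexp(1.0, -i);     /* 2^-i: a shift in binary hardware */
        double a = atan(t);            /* table entry in hardware */
        double xn;

        if (z >= 0) { xn = x - y * t;  y = y + x * t;  z -= a; }
        else        { xn = x + y * t;  y = y - x * t;  z += a; }
        x = xn;
        K *= 1.0 / sqrt(1.0 + t * t);  /* accumulated gain; a constant in practice */
    }
    *c = x * K;
    *s = y * K;
}

int main(void)
{
    double s, c;
    cordic_sincos(0.5, &s, &c);
    printf("sin 0.5 = %.9f (libm %.9f)\n", s, sin(0.5));
    printf("cos 0.5 = %.9f (libm %.9f)\n", c, cos(0.5));
    return 0;
}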

> (3) Accounting algorithms must hassle with representing .1 in binary

There are many ways around this problem.  Even accounting roundoff is
binary.

> (4) The industry struggled for years with 6/7-bit characters (too
>     little), and finally settled on 8-bit characters (too much).  
>     In decimal, 2-digit characters would have made sense on day 1.

Are 8-bit characters too much?  Unless someone is willing to use the
usual typewriter characters only, 8 bits are too little.

> (5) The necessity to learn / program conversion algorithms.

A.  It is easy.

B.  It is no more necessary for the programmer to learn than to learn to
program elementary transcendental functions.

> Can anyone think of other human costs of binary numbers?

Other than making humans think occasionally, no.  But there are many costs
of not using binary numbers.  Bit vectors are extremely useful, as are other
binary devices which I suspect many programmers and programming gurus are
not aware of.  I know of far too many uses of binary arithmetic.  In fact,
I would prefer to have more use of binary; I find the use of decimal numbers
for register designations very annoying, and even the default that numbers
are decimal, especially floating point, a hindrance to efficient coding.
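
A trivial instance of the bit-vector point, sketched in C (the sizes are
arbitrary): one bit per flag, thirty-two to a word, with the indexing done
entirely by shifts and masks.

#include <stdio.h>

#define NBITS 1000
#define WORDS ((NBITS + 31) / 32)

static unsigned long vec[WORDS];

void set_bit(int i)   { vec[i / 32] |=  (1UL << (i % 32)); }
void clear_bit(int i) { vec[i / 32] &= ~(1UL << (i % 32)); }
int  test_bit(int i)  { return (int)((vec[i / 32] >> (i % 32)) & 1); }

int main(void)
{
    set_bit(37);
    set_bit(999);
    printf("%d %d %d\n", test_bit(37), test_bit(38), test_bit(999));  /* 1 0 1 */
    return 0;
}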

Why is it necessary to design computers (and anything else) so that they
cannot be used efficiently by those capable of rising above the trivial?
-- 
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet, UUCP)

hrich@emdeng.Dayton.NCR.COM (George.H.Harry.Rich) (03/20/90)

In article <76700180@p.cs.uiuc.edu> gillies@p.cs.uiuc.edu writes:
>
>Even today, maybe we don't know the full intellectual costs of using
>binary numbers in computers.  I imagine Von Neumann weighed the
>technical advantages, and ignored the human factors.  I imagine he &
>his group did not foresee these costs:

I don't think that Von Neumann was the real decision maker in this case.
A number of early computers had a fairly heavy decimal orientation
in their instruction sets, if not in their basic data representation.
I believe that the added complexity of the decimal orientation cost more
in human terms than the familiarity of the representation and the ease
of conversion saved, and this is the real reason why today so few processors
bother with decimal representation.

>
>(1) Professional programmers must be fluent in 3-4 bases
>    (binary, octal/hex, decimal), boolean logic, and arithmetic systems
>    (sign-magnitude, one's complement, two's complement).  This is no
>    trouble to Von Neumann, but try to teach most high-school kids this.

I don't think most professional programmers have to be particularly fluent
in any base.  If they're doing numeric work they do have to understand the
general principles of arithmetic in a base independent manner.  Tools for
conversion and arithmetic in multiple bases have been around for a long time.

>(2) Numerical algorithms must deal with binary roundoff.

Why is this harder to deal with than decimal roundoff?

>(3) Accounting algorithms must hassle with representing .1 in binary

Although it is a slight hassle, doing all your work in pennies is a very
simple adjustment.  The problem is not as much with binary as with binary
integers which don't have enough digits.
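
A sketch of the pennies approach, and of the digit-count problem, in C
(assuming 32-bit integers; the prices are invented):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    long price_cents = 1999;               /* $19.99, represented exactly */
    long qty         = 3;
    long total       = price_cents * qty;  /* 5997 -- no rounding anywhere */

    printf("total = $%ld.%02ld\n", total / 100, total % 100);

    /* the catch: 32-bit signed cents run out around $21 million */
    printf("32-bit cents overflow near $%d.%02d\n", INT_MAX / 100, INT_MAX % 100);
    return 0;
}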

>(4) The industry struggled for years with 6/7-bit characters (too
>    little), and finally settled on 8-bit characters (too much).  

I believe that the 8-bit character is going to prove far from adequate.
If you look at the character set on the IBM PC you will see it wipes out
8 bits with great ease.  The Japanese have adopted 16-bit standards for
characters and will use them up eventually (there are in excess of 50,000
Chinese characters).  I suggest that if you don't know how big the character
set you use really is, you look in the appendix of an unabridged
dictionary.

>    In decimal, 2-digit characters would have made sense on day 1.

The decimal machine I first worked on used 1-1/2 digit characters, with
a decimal digit represented in 4 bits.  Something about space on the card.

>(5) The necessity to learn / program conversion algorithms.
>
>Can anyone think of other human costs of binary numbers?


I've worked on several machines which had extensive decimal capability.
My experience was that in general these instruction sets got you into
considerable complexity (to ameliorate the effects of the relatively
inefficient decimal arithmetic), with such things as special stop
symbols, or other indicators of the actual length of the decimal integer.

We have yet to find a more efficient method of representing a decimal
digit electronically than as four binary digits, and until we do, at some
level there are going to be human costs associated with the basically
binary representation.  However, I think these costs are relatively
small compared with the other human factors associated with computing.
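
For concreteness, the four-bits-per-digit representation (packed BCD)
looks roughly like this -- a throwaway C sketch, not taken from any
particular machine:

#include <stdio.h>

#define NDIGITS 8

/* pack n into NDIGITS decimal digits, least significant first, 4 bits each */
void to_bcd(unsigned long n, unsigned char bcd[NDIGITS / 2])
{
    int i;
    for (i = 0; i < NDIGITS / 2; i++) {
        bcd[i]  = (unsigned char)(n % 10);         n /= 10;  /* low nibble  */
        bcd[i] |= (unsigned char)((n % 10) << 4);  n /= 10;  /* high nibble */
    }
}

void print_bcd(const unsigned char bcd[NDIGITS / 2])
{
    int i;
    for (i = NDIGITS / 2 - 1; i >= 0; i--)
        printf("%d%d", bcd[i] >> 4, bcd[i] & 0x0f);
    printf("\n");
}

int main(void)
{
    unsigned char b[NDIGITS / 2];
    to_bcd(19901990UL, b);
    print_bcd(b);                /* prints 19901990 */
    return 0;
}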

Regards,

	Harry Rich

ddb@ns.network.com (David Dyer-Bennet) (03/21/90)

In article <76700180@p.cs.uiuc.edu> gillies@p.cs.uiuc.edu writes:
:Can anyone think of other human costs of binary numbers?

Bit ordering wars and confusion.  Although the same problems could
occur in decimal, the symptoms would be more easily recognizable, so
the confusion caused would still be less.
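
One face of that confusion, byte order, in a throwaway C fragment
(assumes only a 32-bit int): the same value is laid out one way in
memory on a big-endian machine and the other way on a little-endian one.

#include <stdio.h>

int main(void)
{
    unsigned int word = 0x01020304U;
    unsigned char *p = (unsigned char *) &word;
    int i;

    for (i = 0; i < (int) sizeof word; i++)
        printf("%02x ", p[i]);   /* 01 02 03 04 big-endian, 04 03 02 01 little-endian */
    printf("\n");
    return 0;
}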

My first two computers WERE decimal oriented -- the IBM 1620 and then
the 1401.  I didn't make the big jump to binary until I got access to
a PDP 8 several years later.


-- 
David Dyer-Bennet, ddb@terrabit.fidonet.org
or ddb@network.com
or Fidonet 1:282/341.0, (612) 721-8967 9600hst/2400/1200/300
or terrabit!ddb@Lynx.MN.Org, ...{amdahl,hpda}!bungia!viper!terrabit!ddb