[net.arch] Complement Arithmetic

hopkins@burdvax.UUCP (Bill Hopkins) (01/26/84)

I can recall three main arguments for two's complement arithmetic:

-  There is a unique representation for each integer in the range
represented, [-(2**(n-1))..2**(n-1)-1]. One's complement representation has
the infamous and mathematically troublesome "positive and negative zero"
problem, which has prematurely aged too many compiler writers.

-  The arithmetic is the same as for unsigned integers; only the
interpretation of carry and overflow changes.  In one's complement the two
require separate ALU operations.

-  Multiple precision addition and subtraction are simply done by propagating
carries (borrows) through status bits.  It's virtually impossible to simulate
a 32-bit integer operation with a 16-bit ALU in one's complement (consider
the end-around carry).
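
A minimal C sketch of the two's complement case (the names and the
16/32-bit split are mine): the low halves add as plain unsigned
integers, and the carry bit is the only interface between the halves.

    #include <stdint.h>

    /* A 32-bit two's complement add built from two 16-bit adds: no
     * sign tests, no end-around carry, just a carry bit propagated
     * from the low half to the high half. */
    uint32_t add32_via_16(uint32_t a, uint32_t b)
    {
        uint16_t alo = (uint16_t)a, ahi = (uint16_t)(a >> 16);
        uint16_t blo = (uint16_t)b, bhi = (uint16_t)(b >> 16);
        uint32_t lo = (uint32_t)alo + blo;      /* carry lands in bit 16 */
        uint16_t hi = (uint16_t)(ahi + bhi + (lo >> 16));
        return ((uint32_t)hi << 16) | (uint16_t)lo;
    }

The same routine is correct whether the operands are read as signed or
unsigned, which restates the second point above.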

My only argument in favor of one's complement is the symmetry of the range,
which avoids the possibility of overflow on complementing the most negative
number.  (There may be others; let's hear from the one's complement freaks
out there.)

(N.b.: we're only talking integers (really fixed point) because that's how
n's-complement arithmetic is defined; floating point representations are made
up out of an integer exponent and a fixed point mantissa.  The arguments for
integers apply to these unchanged.  Similarly, multiplication and division
are built up out of addition and subtraction.)

In the folklore department:  The CDC 6000/Cyber-whatever series designed by
Seymour Cray use one's complement arithmetic, presumably because he could
make it go faster back in the early 60's.  The Cray-1 uses two's complement;
the folk tale is that Seymour finally figured out how to do it fast.  Any
confirmation or correction to this tale would be appreciated.

	Bill Hopkins		{presby,psuvax,sdcrdcf}!burdvax!hopkins
	SDC-Burroughs		(215) 648-7578
	P.O. Box 517
	Paoli, PA  19335

And may all your complements be computable.

nather@utastro.UUCP (Ed Nather) (01/27/84)

There's a famous shin-barking problem in astronomy that ones complement
arithmetic can fix.  We still use the Babylonian designation for angles
as degrees, minutes, and seconds for the declination direction (pole to
pole) and things go very funny for declinations near the equator, where
the values come out like "0 degrees, 12 minutes, 24 seconds", usually
written as "00 12 24".  This one is OK, but how about "-00 12 24"?  The
programmer must propagate the minus sign to the integers 12 and 24, and
any testing must test all three before the sign of the result is chosen.

The problem would be much simplified if we could test for negative zero!
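
The shape of the problem in a C sketch (struct and field names are mine):
with the sign stored beside the magnitudes, every conversion and every
test has to remember to consult it.

    /* Declination in Babylonian fields.  "-00 12 24" has no place to
     * keep its minus sign in the zero degrees field, so the sign must
     * be a separate flag on the whole angle. */
    struct declination {
        int south;              /* 1 if negative (southern), else 0 */
        int deg, min, sec;      /* all stored non-negative */
    };

    double decl_degrees(struct declination d)
    {
        double mag = d.deg + d.min / 60.0 + d.sec / 3600.0;
        return d.south ? -mag : mag;    /* omit this test and -00 12 24
                                           silently becomes +00 12 24 */
    }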

tjt@kobold.UUCP (01/28/84)

As I recall, the biggest problem with "positive and negative zero" in
one's complement arithmetic is the natural tendency to regard the
"positive" zero as the true zero, whereas any additive or subtractive
operation would naturally produce the "negative" zero.  Some machines
attempted to automagically convert this to "positive" zero, but usually
failed for some case or other.  I don't remember the details, but I
think it had something to do with a direct implementation of
subtracting a positive number from a negative number, when the
subtraction is implemented differently from complementing the second
operand and adding.
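
To see why additive operations land on the "negative" zero, here is a
one's complement add simulated in C (8-bit, and the simulation is mine):

    #include <stdint.h>

    /* One's complement negation is bitwise complement; addition folds
     * the carry-out back into bit 0 (the end-around carry). */
    uint8_t oc_neg(uint8_t x) { return (uint8_t)~x; }

    uint8_t oc_add(uint8_t a, uint8_t b)
    {
        unsigned sum = (unsigned)a + b;
        return (uint8_t)((sum & 0xFF) + (sum >> 8));
    }
    /* For every x, oc_add(x, oc_neg(x)) comes out 0xFF -- the -0
     * pattern -- never the +0 pattern 0x00. */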

While we're at it, what about sign-magnitude representations?  These
have the same "positive and negative zero" problem, although we seem to
be able to tolerate it for floating point.
-- 
	Tom Teixeira,  Massachusetts Computer Corporation.  Westford MA
	...!{ihnp4,harpo,decvax}!masscomp!tjt   (617) 692-6200 x275

jeff@heurikon.UUCP (01/29/84)

I like the notion that ones complement has the "advantage" of not
getting an overflow from complementing the "most negative number".
To get this advantage, however, they've had to sacrifice the most
negative twos complement number.  So what have you gained?  You've
just made it impossible to *produce* the error by reducing the range.

By using the same logic we could cure all those people on net.* who
like to drive fast.  We simply make it a non-problem by removing
their ability to drive.  We could chop off their heads.  :-)
-- 
/"""\	Jeffrey Mattox, Heurikon Corp, Madison, WI
|O.O|	{harpo, hao, philabs}!seismo!uwvax!heurikon!jeff  (news & mail)
\_=_/				     ihnp4!heurikon!jeff  (mail)

roger@minn-ua.UUCP (01/30/84)



-----

Regarding the ones vs. twos complement arithmetic discussion:

Having worked on several compilers on both styles of computers, I
would rather work with ones complement arithmetic.  There are many
tricks one can use to generate functional results (e.g. min, max functions)
without using conditional operators (read: tests and jumps).  This
leads to some rather fast code especially on computers like CDC Cybers.
Most of the people who don't like negative zero have either not worked on
a ones complement computer or worked on brain-damaged hardware that generated
negative zero.  The only way to 'generate' a negative zero on a CDC Cyber
is to complement a positive zero, which can be easily avoided (unfortunately,
the hardware will propagate negative zeros but it can't do that if you
don't give it one to start with).  Note that the more pipe-lined the hardware
is the more reason you have to avoid conditional operators.  This trend
is becoming more prevalent as people build faster computers.
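
The Cyber tricks lean on one's complement identities, but the flavor
carries over; a two's complement rendering in C (a sketch only -- the
widened subtraction is mine, to dodge overflow):

    #include <stdint.h>

    /* Branchless min and max: diff >> 63 is all ones exactly when
     * x < y, all zeros otherwise.  (C leaves right shifts of negative
     * values implementation-defined; two's complement machines shift
     * arithmetically in practice.) */
    int32_t min32(int32_t x, int32_t y)
    {
        int64_t diff = (int64_t)x - y;
        int32_t mask = (int32_t)(diff >> 63);   /* -1 if x < y, else 0 */
        return (x & mask) | (y & ~mask);
    }

    int32_t max32(int32_t x, int32_t y)
    {
        int64_t diff = (int64_t)x - y;
        int32_t mask = (int32_t)(diff >> 63);
        return (y & mask) | (x & ~mask);
    }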

A major fault of most twos complement computers is the lack of the required
number of shifts to properly handle sign extension and other such items.
I seem to recall that the number is eight, but I may be mistaken about
that.  A ones complement computer on the other hand only needs two shifts,
a sign-extension right shift (which does work as a divide by 2**n even for
those negative numbers) and a circular shift which is typically implemented
as a left shift.
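
For what it's worth, the usual two's complement workaround is a shift
pair; a C sketch (the unsigned-to-signed conversion and the arithmetic
right shift are implementation-defined in C, but universal on two's
complement machines):

    #include <stdint.h>

    /* Sign-extend a 16-bit field in the low half of a word: shift left
     * so the field's sign bit becomes the word's sign bit, then shift
     * right arithmetically to smear it back down.  Note this right
     * shift rounds toward minus infinity, so unlike the ones complement
     * shift described above, (-1) >> 1 stays -1 rather than becoming 0. */
    int32_t sign_extend16(uint32_t w)
    {
        return (int32_t)(w << 16) >> 16;
    }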

I agree with the statements that it is difficult to generate multiple
word results with ones complement hardware.  However, the most popular
ones complement computers around (those designed by Seymour Cray) have
60 bit words which are usually adequate all by themselves.  The same
arguments can be used for unsigned arithmetic.

As to why the Cray 1 has twos complement arithmetic, it's because Seymour
couldn't make a ones complement arithmetic section fast enough.  Twos
complement arithmetic is faster which is why it became so prevalent.
The other designers couldn't figure out how to make ones complement
arithmetic sections for their computers which were fast enough.

----

	Roger L. Gulbranson
	University of Minnesota Computer Center
	...ihnp4!stolaf!umn-cs!minn-ua!roger

faiman@uiuccsb.UUCP (01/31/84)


I was both amused at, and sympathetic towards Bill Hopkins' remarks about
one's complement arithmetic and his difficulty in coming up with a good
argument for using it other than symmetry of range, which property,
incidentally, is also possessed by signed magnitude.  I have taught ye
goode olde standarde digitale designe course off and on for quite a few
years and have got used to the fact that most textbook authors on the
subject are content to list the common forms of number representation
without giving any reasons why a designer might want to choose one over
another in a given application.  The principal virtues of two's complement,
biassed, and signed-magnitude are, of course, well known, but for many
years I could find nothing good to say about one's complement, being
unimpressed by the "ease of implementation" argument that used to be
fashionable around, maybe, 1950.  However, consider the problem of a
poor soul who wants to build a (fixed point, of course) signed-magnitude
adder.  A simple way of thinking about this, and not a bad way to
implement it either, is first to convert from SM to 1C, a trivial and
fast operation; next, add with end-around carry; and, finally, convert
back to SM, an operation identical to the first.  That's a pretty far-
fetched reason, I hear you say.  Well, perhaps someone from the frozen
wastes of Minneapolis can produce some better ones.
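
For concreteness, that recipe in C for an 8-bit word (sign in bit 7,
magnitude in bits 6..0; the helper names are mine):

    #include <stdint.h>

    /* SM -> 1C: a negative sign-magnitude number becomes its one's
     * complement by flipping the magnitude bits.  The conversion is
     * its own inverse, so it also serves as 1C -> SM. */
    static uint8_t sm_to_oc(uint8_t x)
    {
        return (x & 0x80) ? (x ^ 0x7F) : x;
    }

    uint8_t sm_add(uint8_t a, uint8_t b)
    {
        unsigned sum = (unsigned)sm_to_oc(a) + sm_to_oc(b);
        sum = (sum & 0xFF) + (sum >> 8);    /* add with end-around carry */
        return sm_to_oc((uint8_t)sum);      /* convert back to SM */
    }

For example, (+3) + (-5): 0x03 and 0x85 convert to 0x03 and 0xFA, sum
with end-around carry to 0xFD (one's complement -2), and convert back
to 0x82, which is sign-magnitude -2.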

(From the frozen wastes of Urbana) - Mike Faiman

andree@uokvax.UUCP (01/31/84)


My thanks to all those who replied to my query on ones complement vs.
twos complement, whether by mail or by news.

Most everybody pointed out the infamous extra zero problem. This I know
about, and will discuss later in the article.

Several people pointed out that it takes extra hardware to do ones complement.
I hadn't known this. Considering the earlier discussion on RISC machines,
it would seem that this is no longer a problem, but a design consideration.

The problem with extended precision arithmetic is also new to me. It looks
like this is the only really good reason not to use ones complement.

Now, about that extra zero. I think that that's an advantage, not a problem.
My inspiration is the PDP-11 floating point. In this, as in many floating
point systems, there are a LOT of zeros. The people at DEC did something
bright with them. Anything zero except the one with a zero exponent is
an illegal value, and can cause a trap.  It seems to me that this would be a
good thing to do with that extra zero in ones complement.  Of course, this
would involve extra hardware - checking for that on input to arithmetic
ops, turning it into the other zero on output, and maybe more.
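
In software dress the idea looks something like this C sketch (the names
are mine; the hardware would do the check on operand fetch):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MINUS_ZERO 0xFFFFu  /* ones complement -0 in a 16-bit word */

    /* Uninitialized storage is pre-filled with -0; every arithmetic
     * input is checked for it, the way the proposed hardware would be. */
    uint16_t checked_fetch(const uint16_t *cell, const char *name)
    {
        if (*cell == MINUS_ZERO) {
            fprintf(stderr, "trap: %s used before being set\n", name);
            exit(EXIT_FAILURE);
        }
        return *cell;
    }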

As a software person, this idea appeals to me. Compilers that can check
for uninitialized variables at run time? With no bogus values to trip
over, and no extra cost? Sounds good to me.

Would someone comment on how much extra pain this would be in hardware?
I'll go away and think about the extended precision problem.

	<mike

tjt@kobold.UUCP (02/01/84)

Mike Andree (..!uokvax!andree) has suggested using "-0" on a
one's-complement machine to represent an illegal value for
uninitialized variables.  He was hoping that this would result in no
extra (software) cost.

Unfortunately, TANSTAAFL: somebody has to initialize those
uninitialized variables with the illegal value.
-- 
	Tom Teixeira,  Massachusetts Computer Corporation.  Westford MA
	...!{ihnp4,harpo,decvax}!masscomp!tjt   (617) 692-6200 x275

ags@pucc-i (Seaman) (02/01/84)

Believe it or not, the following quotes come from the same article:

>  Having worked on several compilers on both styles of computers, I
>  would rather work with ones complement arithmetic.  There are many
>  tricks one can use to generate functional results (e.g. min, max functions)
>  without using conditional operators (read: tests and jumps).  This
>  leads to some rather fast code especially on computers like CDC Cybers.
>  . . .
>  As to why the Cray 1 has twos complement arithmetic, it's because Seymour
>  couldn't make a ones complement arithmetic section fast enough.  Twos
>  complement arithmetic is faster which is why it became so prevalent.
>  The other designers couldn't figure out how to make ones complement
>  arithmetic sections for their computers which were fast enough.

Apparently, the author likes ones complement because it is faster, even
though twos complement is faster.

It seems that the argument for ones complement has nothing to do with speed --
only with giving dedicated bit-twiddlers an opportunity to demonstrate their
prowess.  There is no problem in computing max and min functions without
conditional operators on the CDC Cyber 205 (a twos complement machine)...
You can find the max or min of a vector up to length 65,535 with exactly
ONE machine instruction.

-- 

Dave Seaman
..!pur-ee!pucc-k:ags

"Against people who give vent to their loquacity 
by extraneous bombastic circumlocution."

wallach@parsec.UUCP (02/04/84)


i did not realize that an off-the-cuff remark would generate this type
of response.  some comments on the subject:

1) dividing by 2 on 2's complement can be made to work.  the right-shift
algorithm is: a) test for negative; b) if negative, take the 2's complement,
right shift, and then take the 2's complement of the answer.  thus dividing
binary (1111 1111) (i.e., -1 base 10) by 2 gives 0, as contrasted to
1111 1111 from a simple signed right shift.  (a sketch of this appears
after point 5.)

2) the relative speed of all these operations is a function of the
number representation used.  adds and subtracts are nice using
2's complement while multiply and divide are nice using sign and
magnitude.

3) the two zero representations of sign and magnitude and 1's can
however produce curious side effects.  if an add of 1 to -1
produces -0 as opposed to +0, and the compare or whatever instructions
do not anticipate this, then bugs will occur.  this happened to me
using an algol compiler on the 7040 way back when.  if one had the
statement "if n = 0" and n started negative and was being incremented,
unusual side effects occurred.  the fix was to replace the if with
"if n+1 = 1".

4) the extra negative representation of 2's really screws things up when
you do fix to float and float to fix, since float is sign and
magnitude.

5) leaving with this observation: why is ieee float sign and magnitude
while fix is 2's?  why shouldn't the float standard use the same number
representation as the standard (be it de facto) fixed point
representation?
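
the algorithm from point 1, rendered as a small c sketch (names mine; a
real machine would do this as a short instruction sequence):

    #include <stdint.h>

    /* divide by 2 on a two's complement machine, truncating toward zero:
     * negate, shift, negate back when the operand is negative.  a plain
     * arithmetic shift rounds toward minus infinity ((-1) >> 1 == -1);
     * this gives (-1)/2 == 0 as point 1 says.  caveat: -n overflows for
     * the most negative number, the very value the symmetry argument
     * earlier in this thread is about. */
    int32_t div2_truncating(int32_t n)
    {
        if (n >= 0)
            return n >> 1;
        return -((-n) >> 1);
    }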

tjt@kobold.UUCP (02/06/84)

parsec!wallach wants to know why the IEEE floating point standard uses a
sign-magnitude representation while representing integers in
2's-complement notation.

I can't speak for the IEEE specifiers, but I always thought the basic
floating point representation of:

			----------------------
			| S | EXP | Mantissa |
			----------------------
where S is a sign bit and the mantissa gets multiplied by B**(EXP-offset),
used the offset notation of the exponent so that the machine integer
comparison instructions would also work for floating point.

I didn't study the IEEE specification closely enough to know if this
still works, especially with unnormalized numbers.

It is also true that using integer comparison instructions would not
trap "illegal" values (e.g. -0).


-- 
	Tom Teixeira,  Massachusetts Computer Corporation.  Westford MA
	...!{ihnp4,harpo,decvax}!masscomp!tjt   (617) 692-6200 x275

andree@uokvax.UUCP (02/08/84)


/***** uokvax:net.arch / kobold!tjt /  5:43 pm  Feb  3, 1984 */
Mike Andree (..!uokvax!andree) has suggested using "-0" on a
one's-complement machine to represent an illegal value for
uninitialized variables.  He was hoping that this would result in no
extra (software) cost.

Unfortunately, TANSTAAFL: somebody has to initialize those
uninitialized variables with the illegal value.
-- 
	Tom Teixeira,  Massachusetts Computer Corporation.  Westford MA
	...!{ihnp4,harpo,decvax}!masscomp!tjt   (617) 692-6200 x275
/* ---------- */

Somebody, somewhere had to initialize them anyway :-).  I was actually
referring to runtime costs - you don't have to generate code to test for
the illegal value before each statement. At least one compiler (the
Waterloo Fortran V) does this now. It's a good idea, but the cost!
[If you're interested, the value is 2 * '    '.]

	<mike

jdd@allegra.UUCP (John DeTreville) (02/08/84)

    Somebody, somewhere had to initialize them anyway :-).  I was actually
    referring to runtime costs - you don't have to generate code to test
    for the illegal value before each statement. At least one compiler
    (the Waterloo Fortran V) does this now. It's a good idea, but the
    cost!  [If you're interested, the value is 2 * '    '.]

Ah, this brings back fond memories of WATFOR on the IBM 7040/7044.  These
machines had 36-bit words with an extra parity bit, but IBM let you twiddle
the parity, and WATFOR used this "feature" (which was not documented in the
Principles of Operation).  Variables were initialized to contain bad parity,
and normal-parity loads, adds, etc. would trap if done prior to a normal-
parity store.

Another trick was that the ASSIGN statement stored reversed parity into the
variable named, and ASSIGNed GOTO's used a reversed-parity load to access
the contents; this kept you from storing an arbitrary value into a variable
and then using it in an ASSIGNed GOTO.

Cheers,
John ("Bad Parity") DeTreville
Bell Labs, Murray Hill

minow@decvax.UUCP (Martin Minow) (02/10/84)

The Fortran II run-time library on the 7090 converted blank
input fields to -0.  The 7090 had a 3-way branch instruction
(that's where Fortran's 3-way IF came from), so you could
write

	IF (I) 10, 30, 40
C ... blank field or negative value input
10	CONTINUE
	IF (I .EQ. 0) GOTO 20
C ... negative value input
C ...
C ... blank field input
20	CONTINUE

(I think that's right -- but am too lazy to go upstairs and look
at some of my old listings.)

PS: for your programmer superstition file -- never throw a listing
away; you'll need it someday.

Martin Minow
decvax!minow