[net.arch] One's complement machines and C logic

sdo@u1100a.UUCP (Scott Orshan) (01/26/84)

We run the UNIX system on Sperry 1100 mainframes.  These machines
use one's complement arithmetic.  The wordsize is 36 bits.
Such architectures have two representations for zero - a word with
all zeros and a word with all ones (known as negative zero).

Some arithmetic operations result in positive zero, and some in
negative zero.  Both zeros are equal arithmetically.  Both pass
a tz (test zero) instruction.

The problem arises when C has to deal with a word as a boolean
quantity.  If I say "if(a - b == 0)" I don't care whether a
neg. or pos. zero is the result of the subtraction - it will
test equal to zero.  A shorter form of this is "if(a-b)" which
implicitly tests for zero (Boolean FALSE).

The matter for discussion is whether both zeros, or just +0
should represent a FALSE.  The programmer should
be able to say "while(a-b)" and expect that to be the
same as "while(a-b != 0)" since the language is defined this way.
Therefore, if "a-b" should happen to result in a -0, the result
should still look false.

What if we write "if(0777777777777)"?  Well, a word with all
ones looks about as true as one can get, but this number is
really zero.  Suppose I'm testing whether any bits are set in a
word, either by testing it against zero or by saying "i & 0777777777777".
Well, a word of all ones will fail this test.  Another way
to get a word of all ones is to use "~0".  Some programs
use this to mean -1.  The only way I see around this is to
redefine C to make Boolean a separate type.  A conditional
statement would require a boolean operand.  It would no
longer be valid to test an arithmetic value as a logical one.

I realize that this will not happen, nor do I like the idea
of a new data type.  It means that programmers must be careful
on a one's complement machine when testing for bits rather than
value.  About the cleanest way to check for bits set is to say:
	"if((i & 0777777777770) || (i & 07))"
so that a word of all ones will test TRUE.

Constructive comments are welcome.

	Scott Orshan
	Central Services Org., Piscataway
	201-981-3064
	{ihnp4,pyuxww,abnjh}!u1100a!sdo

smh@mit-eddie.UUCP (Steven M. Haflich) (01/28/84)

Back in 1959 a small, young computer company released a new machine, the
Programmed Data Processor 1 (PDP1).  It is (at least one is still
running!) an 18-bit, ones-complement machine.  It also had hardware
multiply and divide, though these may have been an option and may have
appeared later than the original machine.

The instruction manual clearly states that the *only* arithmetic
operation yielding minus zero is the addition: (+0)+(-0) ==> (-0).
(This supports the notion that addition of (+0) is as much an identity
operation as an arithmetic operation.)  Even in the dark ages, when
gates were expensively built of discrete transistors, it was still
possible to do things right.

Next item:  The problem of multiple representations for zero is not
unique to ones-complement.  Most floating point representations employ
a separate sign bit, i.e., sign-magnitude.  It is possible to write (-0)
as a constant, but floating instructions generally will not produce it.
Indeed, the PDP11 provides hardware support for trapping when the FPU
fetches (-0).  This allows cost-free execution-time checking for
computations on uninitialized variables.

Steve Haflich, MIT Experimental Music Studio