[net.arch] Complement Arithmetic, -0 as a tag

aaw@pyuxss.UUCP (Aaron Werman) (02/01/84)

Why should checking for -0 be significantly faster than checking for the
two's complement tag -2**(word size - 1)?  This would gain the arithmetic
op tag, solve the complement problem, and add only slightly more software
overhead when emulating larger-wordsize fixed point arithmetic.
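
In C terms, a software version of the proposed check might look roughly
like the sketch below; this is an illustration only, since the whole point
is that the test would really be done in hardware or microcode:

    #include <limits.h>
    #include <stdlib.h>

    #define UNINIT_TAG INT_MIN   /* -2**(word size - 1): the one asymmetric value */

    /* Per-use software check for the reserved "uninitialized" pattern. */
    int checked_use(int x)
    {
        if (x == UNINIT_TAG)
            abort();              /* stands in for a hardware/microcode trap */
        return x;
    }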

The only rational way to do either tag test would be in hardware (as
opposed to interpreter horizontal microcode), in parallel, which would
further clog a big bottleneck, the register-ALU pathways; that is probably
why few of the ones complement architectures have -0 propagation traps.
			{harpo,houxm,ihnp4}!pyuxss!aaw
			Aaron Werman

guy@rlgvax.UUCP (Guy Harris) (02/02/84)

If -(2^(word size - 1)) were treated as an "uninitialized variable" tag on a
twos-complement machine, that would mean that you would need signed and
unsigned add/subtract instructions; the bit pattern (1 shifted_left word_size-1)
is a valid unsigned number in the middle of the valid range of unsigned
numbers, so it shouldn't cause a trap.  In fact, it could even be nasty on
a ones-complement machine, because (1 shifted_left word_size-1) is a valid
bit pattern even if you want to declare it an invalid number.  This is OK
with floating point because one very rarely treats a bit pattern labelled as
a floating point number as other than a floating point number; however, most
machines manipulate short bit strings and numbers with the same instructions.
On a register machine, for instance, you'd need a "load signed number"
instruction and a "load unsigned number/bit string/pointer/..." instruction.
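
To make the point concrete, here is the same bit pattern read both ways
(a rough illustration, assuming a 16-bit word on a two's-complement
machine):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint16_t bits = 0x8000;              /* 1 shifted_left (16 - 1) */

        /* The same 16-bit pattern read two ways (two's-complement machine
         * assumed): as a signed number it is the most negative value,
         * -32768; as an unsigned number it is 32768, squarely inside
         * the valid range 0..65535. */
        printf("signed:   %d\n", (int)(int16_t)bits);
        printf("unsigned: %u\n", (unsigned)bits);
        return 0;
    }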

There ain't no such thing as a free lunch.  To have special "invalid values"
for a data type (henceforth referred to as NaN's, for "Not a Number", from
the IEEE floating point standard), you need to declare some subset of the
set of all 2^word_size values not to be usable.  Since a bit string can,
by definition, contain all members of this set, you can't catch an
uninitialized bit string.  The same applies to any other type where there are
no "obvious" candidate bit patterns for exclusion.  The only solution to
this is to enlarge the set of valid bit patterns and exclude some of the new
patterns; the way the set is usually enlarged is by the addition of a tag bit
or bits...
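
One crude way to picture that "enlarge the set" approach, sketched in C
rather than in hardware (where the tag would really live in extra bits of
the word or in a parallel memory):

    #include <stdlib.h>

    /* A "word" widened by one tag bit.  Every one of the 2^word_size value
     * bit patterns stays legal; validity is carried by the extra bit. */
    struct tagged_word {
        unsigned valid : 1;     /* the added tag bit */
        int      value;
    };

    int use(struct tagged_word w)
    {
        if (!w.valid)
            abort();            /* stands in for an uninitialized-use trap */
        return w.value;
    }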

	Guy Harris
	{seismo,ihnp4,allegra}!rlgvax!guy

drew@pyuxbb.UUCP (RD Davis) (02/03/84)

The PL/C compiler is a PL/I implementation intended for quick compiles
and maximum diagnostic assistance at run-time, even though the price of this
was some sacrifice in run-time performance.  (However, it was a genuine
compiler, not an interpreter like the PL/I Checkout "Compiler".)

An implementation detail I recall from PL/C that is relevant to this discussion
of 2's complement vs. 1's complement:

PL/C arranged for all fixed binary variables to be initialized to the most
negative integer.  Unless a compile time option was specified to turn
off checking for uninitialized variables, the compiler generated an
extra "load complement" instruction for every use of such a variable.
The overhead of this extra instruction was very small.  However, since PL/C
programs ran on a 2's-complement machine, complementing the most
negative number caused an overflow to occur.  Such overflows were trapped,
and the trap routine diagnosed exactly what was wrong to give a message
about a reference to an uninitialized variable (cleanly naming the variable
& statement number and even "fixing" the problem by initializing
the variable for you [to zero? or was it to 1?]).
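
Roughly, the compiled-in check amounted to the following (a sketch in C,
not the generated code; the real check was a single "load complement"
instruction whose fixed-point overflow was trapped, and since negating the
most negative int is not defined in C, the sketch compares explicitly
instead):

    #include <limits.h>

    #define UNINIT INT_MIN      /* PL/C's initial value: the most negative integer */

    /* Per-use check for fixed binary variables.  Complementing the most
     * negative number overflows on a two's-complement machine, and the
     * overflow trap did the diagnosing. */
    int checked_fetch(int *var)
    {
        if (*var == UNINIT)
            *var = 0;           /* trap routine would diagnose, then "fix" it
                                 * (to zero? or was it to 1?) */
        return *var;
    }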

The point is that here is an example of exploiting the asymmetry of the 2's
complement domain.  This trick gave low overhead for valid references,
and still gave run-time checking for uninitialized integer variables.

By the way, the PL/I declaration for fixed binary variables seems to be
implicitly assuming "sign/magnitude" representation.  For example,
on the IBM 370, with 32 bit registers, the maximum precision allowed by the
PL/I implementation is BIN(31).  Handily for the PL/C implementors,
this constraint meant that the most negative integer was not an acceptable
value for a PL/I program to store in such a variable.  That is, an
operation that yielded the most negative integer as a result could
properly be diagnosed as having exceeded the bounds permitted by the language.
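
A quick illustration of that asymmetry, assuming a 32-bit int as on the
370:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* BIN(31) language range vs. what a 32-bit two's-complement
         * register can hold; the single leftover pattern, -2**31, is
         * free to serve as the uninitialized tag. */
        printf("BIN(31) range:  %d .. %d\n", -INT_MAX, INT_MAX);
        printf("register range: %d .. %d\n",  INT_MIN, INT_MAX);
        return 0;
    }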

R. Drew Davis   AT&T Bell Laboratories  pyuxbb!drew