djones@megatest.UUCP (Dave Jones) (03/09/90)
From article <1710@tws8.cs.tcd.ie>, by emcmanus@cs.tcd.ie (Eamonn McManus):
...
> An interesting point about rounding to minus infinity is that it allows
> (sign-extending) right shifts to be used for integer division by powers of
> two, when using twos-complement representation.

Lots of operations work better on a two's complement machine when you use
the 'mathematical' rules.  Operations distribute correctly.  Multiplications
and divisions by powers of two can be done with shifts.  Modulus by a power
of two can be done with an &-operator.  Multiplications by other numbers can
be done with combinations of shifts and adds.  It all works nicely because
there is a nice consistent theory behind it.  In fact, except for
sign-extension, the operations are exactly the same for signed and unsigned
numbers!

How come?  Imagine this: for some positive number M, we take the integers,
coil them up into a helix, then squash the helix into a circle of
circumference M.  We can consistently identify the points on that circle
with any interval of length M, but two are obvious winners: if we choose
the interval 0..(M-1), we get unsigned arithmetic; if we choose
-(M/2)..(M/2)-1, we get signed numbers.  If M happens to be 2**n, where n is
the word size of the machine, the mod-M coiling happens automatically as the
bits are shifted out and lost.  Sign-extension is a way of 'recovering' the
bits by shifting them in.  See why it works so well?

Look at the mess the Sun-3 C compiler generates in order to come up with the
conventional, but mathematically incorrect, answer to i/2:

                                | Annotated for those lucky enough not to
                                | know already...

        tstl    d0              | Perform various tests on it.
        jge     L2000000        | Jump if i is greater than or equal to zero.
        addql   #1,d0           | No?  Add one to it.  (What a bother.)
L2000000:
        asrl    #1,d0           | Now, we can do the shift.

There's that ubiquitous test for non-negative, staring us in the face.
Smug little bugger.

Somebody recently said that this situation we are so often stuck with is an
artifact of the system FORTRAN was first implemented on, which used
sign-bit coded integers.  The division routine (incorrectly) did arithmetic
on the low bits and then negated the result if the sign bits of the operands
differed.  And the rest is histrionics.
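
To make the shift-and-mask point above concrete, here's a small C sketch.
It's my own illustration, not anything a compiler produced, and it assumes a
twos-complement machine where '>>' on a negative int sign-extends (the
language leaves that implementation-defined, but the Sun-3 behaves this way):

        #include <stdio.h>

        /* Division by 2 that rounds toward minus infinity: one shift,
         * no test, no branch -- given an arithmetic (sign-extending) '>>'.
         */
        int div2_floor(int x) { return x >> 1; }

        /* x mod 8, always in 0..7, consistent with floor-style division. */
        int mod8_floor(int x) { return x & 7; }

        int main(void)
        {
            int i;
            for (i = -5; i <= 5; i++)
                printf("i = %2d   i/2 = %2d   i>>1 = %2d   i&7 = %d\n",
                       i, i / 2, div2_floor(i), mod8_floor(i));
            return 0;
        }

On the negative odd values the i/2 column and the shift column disagree by
one, and that is exactly the case the tstl/jge/addql sequence above exists
to patch up.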
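
And here's a sketch of the coiled-up-integers picture, again my own and
again assuming 32-bit twos-complement ints: -3 and 2**32 - 3 are the same
point on the circle, and +, -, and * produce identical bit patterns whether
you call the operands signed or unsigned.

        #include <stdio.h>

        int main(void)
        {
            int          a  = -3, b = 41;
            unsigned int ua = (unsigned int) a;  /* same point: 2**32 - 3 */
            unsigned int ub = (unsigned int) b;

            printf("-3 read as unsigned:  %u  (hex %08x)\n", ua, ua);
            printf("signed   a*b - a    = %08x\n", (unsigned int) (a * b - a));
            printf("unsigned ua*ub - ua = %08x\n", ua * ub - ua);
            return 0;
        }

Comparisons and right shifts are where the two readings part company, which
is precisely where sign-extension comes in.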