[net.arch] Right shift vs. divide

ark@alice.UucP (Andrew Koenig) (01/04/86)

> Now that we know that right shifting is not the same as division, has 
> anybody ever come up with a plausible reason for computers to have 
> arithmetic shift instructions on twos complement machines (other than that 
> they're easy to implement and poorly educated engineers might have thought 
> they were the same as division)?  For positive numbers, they give the same 
> result as unsigned shifts, and for negative numbers they give the wrong 
> answer.  Followups to net.arch, please.  

I can think of two reasons:  (1) sometimes you want to divide with
truncation toward -infinity and an arithmetic right shift gives you
that;  (2) on some machines, an arithmetic left shift lets you multiply
by a power of 2 with overflow checking.
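
[Illustration -- a minimal C sketch of point (1), with a hypothetical main();
it assumes a two's-complement machine where >> on a signed operand is an
arithmetic shift (implementation-defined in C) and / truncates toward zero:]

        #include <stdio.h>

        int main(void)
        {
            int x = -7;

            /* the arithmetic shift rounds toward minus infinity ... */
            printf("-7 >> 1 = %d\n", x >> 1);     /* typically -4 */
            /* ... while the divide rounds toward zero             */
            printf("-7 /  2 = %d\n", x / 2);      /* typically -3 */
            return 0;
        }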

ken@turtlevax.UUCP (Ken Turkowski) (01/06/86)

In article <4772@alice.UUCP> ark@alice.UucP (Andrew Koenig) writes:
>> Now that we know that right shifting is not the same as division, has 
>> anybody ever come up with a plausible reason for computers to have 
>> arithmetic shift instructions on twos complement machines (other than that 
>> they're easy to implement and poorly educated engineers might have thought 
>> they were the same as division)?  For positive numbers, they give the same 
>> result as unsigned shifts, and for negative numbers they give the wrong 
>> answer.  Followups to net.arch, please.  
>
>I can think of two reasons:  (1) sometimes you want to divide with
>truncation toward -infinity and an arithmetic right shift gives you
>that;  (2) on some machines, an arithmetic left shift lets you multiply
>by a power of 2 with overflow checking.

Several more reasons are: (3) arithmetic right shift is an
approximation to division by a power of two that is only one LSB off.
Sometimes you want to divide a signed number by a power of two and can
tolerate a small error, but not excessive computation time; hence the
popularity of the arithmetic right shift.  (4) Arithmetic right
shift is necessary to implement multiplication without a multiplier
array.  (5) Arithmetic left shift is necessary to implement division
and square root.
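
[Illustration of point (3) -- a small sketch, assuming an arithmetic right
shift on signed values; for a negative odd operand the shifted result and
the truncating quotient differ by exactly one LSB:]

        int a = -7 >> 1;   /* -4 with an arithmetic shift (floor of -3.5) */
        int b = -7 / 2;    /* -3 where division truncates toward zero     */
        /* a and b differ by one least-significant bit's worth.           */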


-- 
Ken Turkowski @ CIMLINC, Menlo Park, CA
UUCP: {amd,decwrl,hplabs,seismo,spar}!turtlevax!ken
ARPA: turtlevax!ken@DECWRL.DEC.COM

franka@mmintl.UUCP (Frank Adams) (01/06/86)

In article <4772@alice.UUCP> ark@alice.UucP (Andrew Koenig) writes:
>> Now that we know that right shifting is not the same as division, has 
>> anybody ever come up with a plausible reason for computers to have 
>> arithmetic shift instructions on twos complement machines (other than that 
>> they're easy to implement and poorly educated engineers might have thought 
>> they were the same as division)?  For positive numbers, they give the same 
>> result as unsigned shifts, and for negative numbers they give the wrong 
>> answer.  Followups to net.arch, please.  
>
>I can think of two reasons:  (1) sometimes you want to divide with
>truncation toward -infinity and an arithmetic right shift gives you
>that;  (2) on some machines, an arithmetic left shift lets you multiply
>by a power of 2 with overflow checking.

Also, (3) sometimes you know that the dividend is divisible by the power
of two you are dividing it by.  And (4), a conditional add followed by a
shift is equivalent to a divide, and is often much faster.  Finally, (5),
if you regard the number as a fraction, with the decimal point at the left,
what happens to the low order bit is of no great consequence.
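
[One reading of point (4), as a hedged C sketch (the function name is ours):
bias a negative operand by 2^k - 1 before an arithmetic right shift by k,
and the floor-style shift becomes a truncate-toward-zero divide by 2^k.]

        /* Divide x by 2 to the k, rounding toward zero.  Assumes an
           arithmetic >> on signed values and that 1 << k does not
           overflow.                                                  */
        int div_by_pow2(int x, int k)
        {
            if (x < 0)
                x += (1 << k) - 1;   /* conditional add for negatives */
            return x >> k;           /* arithmetic shift              */
        }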

It does seem, however, that an arithmetic right shift which *is* equivalent
to a division would be more useful.  There are relatively few cases
where one wishes to round towards minus infinity, and those cases are not
especially likely to be divisions by powers of two.

Frank Adams                           ihpn4!philabs!pwa-b!mmintl!franka
Multimate International    52 Oakland Ave North    E. Hartford, CT 06108

mcgrath@uicsrd.CSRD.UIUC.EDU (01/06/86)

Such shifting is handy for bit fiddlin' with masks or addresses.  E.g.,
'I want to build a bit pattern in bits 15-8' is easy to do by computing
the mask and shifting left.  (Sure you can do it other ways, too.)

wallach@convexs.UUCP (01/07/86)

arithmetic right shifting is equivalent(numerically) to division if the
algorithm is implemented correctly.  just because most machines do it
wrong doesn't mean it doesn't work.  the correct algorithm for right
shifting for fixed point is:

1)if the operand is positive right shift.

2)if the operand is negative, negate the operand before shifting, then
right shift, and then negate the result.

if you want, try the infamous right shift all 1's (-1).

the division is round toward zero (ala fortran).

the MV series from data general perform arithmetic right shifting
correctly
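
[The recipe above as a hedged C sketch (the function name is ours); it
assumes the operand can be negated without overflow:]

        /* Truncate-toward-zero divide by 2 to the k, via
           negate, shift, negate.                          */
        int asr_div(int x, int k)
        {
            if (x >= 0)
                return x >> k;
            return -((-x) >> k);    /* e.g. x = -1, k = 1 gives 0 */
        }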

ark@alice.UucP (Andrew Koenig) (01/09/86)

> arithmetic right shifting is equivalent(numerically) to division if the
> algorithm is implemented correctly.  just because most machines do it
> wrong doesn't mean it doesn't work.  the correct algorithm for right
> shifting for fixed point is:
>
> 1)if the operand is positive right shift.
>
> 2)if the operand is negative, negate the operand before shifting, then
> right shift, and then negate the result.
> 
> if you want, try the infamous right shift all 1's (-1).
> 
> the division is round toward zero (ala fortran).
> 
> the MV series from data general perform arithmetic right shifting
> correctly

Still more evidence that the problem is harder than it looks:
the above algorithm fails if the dividend is the most negative number.
Consider, for example, a 32-bit machine.  Let's divide
-2147483648 by 2.  The answer should be -1073741824.

	-2147483648 is 80000000 hex.  Negate it and you
	get integer overflow.  If this is masked, most machines
	will give you 80000000 hex back.

	Shift 80000000 right 1 bit, giving C0000000.

	Negate this again, giving 40000000.

Thus, the sign gets lost.

radford@calgary.UUCP (Radford Neal) (01/09/86)

Rather than get rid of the arithmetic right shift, one could instead
change the divide instruction so it does what mathematicians have
always thought division meant.

Specifically:

      Quotient of -3 divided by 2 should be -2.
      Remainder of -3 divided by 2 should be 1.

With this definition, a right shift divides by two and masking off
the lower bit gives you the remainder.
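
[Illustration -- with that definition the quotient and remainder for a
divisor of 2 are just a shift and a mask.  A sketch, with names of our own,
assuming an arithmetic >> on signed values:]

        /* quotient and remainder for a divisor of 2, under the proposed rule */
        int quo2(int x) { return x >> 1; }   /* e.g. quo2(-3) is -2 */
        int rem2(int x) { return x & 1; }    /* e.g. rem2(-3) is  1 */
        /* and 2*quo2(-3) + rem2(-3) == -3, as required             */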

Does anyone know any reason, other than inertia, why divides shouldn't
be changed to work this way?

      Radford Neal
      The University of Calgary

wallach@convexs.UUCP (01/10/86)

the statement in the previous note is correct.  however, within the
internal logic of the processor, an extra bit of precision, namely the
carry bit, is used to handle the most negative number.

one must remember not to confuse the implementation with the algorithm.
the algorithm works, but requires an n+1 bit implementation.
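
[A hedged sketch of the "extra bit" idea (the function name is ours): carry
the intermediate value in a wider type, here long long as a stand-in for the
machine's n+1-bit datapath, so even the most negative operand can be negated:]

        int asr_div_wide(int x, int k)
        {
            long long t = x;            /* widen before negating */
            if (t < 0)
                return (int)(-((-t) >> k));
            return (int)(t >> k);
        }
        /* asr_div_wide(-2147483647 - 1, 1) is -1073741824, as desired. */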

tim@ism780c.UUCP (Tim Smith) (01/11/86)

In article <3000002@convexs> wallach@convexs.UUCP writes:
>
>arithmetic right shifting is equivalent(numerically) to division if the
>algorithm is implemented correctly.  just because most machines do it
>wrong doesn't mean it doesn't work.  the correct algorithm for right
>shifting for fixed point is:
>
.
.
>the MV series from data general perform arithmetic right shifting
>correctly

Argghhh!  It is division that is wrong, not right shift!  -1/2 should
be -1, not 0.  Don't hardware people ever take math courses?
-- 
Tim Smith       sdcrdcf!ism780c!tim || ima!ism780!tim || ihnp4!cithep!tim

ken@turtlevax.UUCP (Ken Turkowski) (01/11/86)

In article <4782@alice.UUCP> ark@alice.UucP (Andrew Koenig) writes:
>	-2147483648 is 80000000 hex.  Negate it and you
>	get integer overflow.  If this is masked, most machines
>	will give you 80000000 hex back.
>
>	Shift 80000000 right 1 bit, giving C0000000.
>
>	Negate this again, giving 40000000.
>
>Thus, the sign gets lost.

Wrongo.  After you negate the original negative number, it becomes an
unsigned number without overflow.  When you shift, you use an unsigned
shift.  Then,

	Shift 80000000 right by 1 bit, giving 40000000.

	Negate this again, giving C0000000 (-1073741824).

This is the correct result.
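
[The same recipe as a hedged C sketch (the function name is ours): negate
into an unsigned type, where there is no overflow, shift with the unsigned
shift, and negate back.  The final conversion to int assumes a
two's-complement machine:]

        int asr_div_unsigned(int x, int k)
        {
            unsigned u;

            if (x >= 0)
                return (int)((unsigned)x >> k);
            u = 0u - (unsigned)x;    /* |x|, with no signed overflow */
            u >>= k;                 /* unsigned (logical) shift     */
            return (int)(0u - u);    /* negate back; wraps to the    */
                                     /* expected negative value      */
        }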
-- 
Ken Turkowski @ CIMLINC, Menlo Park, CA
UUCP: {amd,decwrl,hplabs,seismo,spar}!turtlevax!ken
ARPA: turtlevax!ken@DECWRL.DEC.COM

doon@sdcrdcf.UUCP (Harry W. Reed) (01/11/86)

In article <231@ism780c.UUCP> tim@ism780c.UUCP (Tim Smith) writes:
>In article <3000002@convexs> wallach@convexs.UUCP writes:
>>
>>arithmetic right shifting is equivalent(numerically) to division if the
>>algorithm is implemented correctly.  just because most machines do it
>>wrong doesn't mean it doesn't work.  the correct algorithm for right
>>shifting for fixed point is:
>>
>.
>.
>>the MV series from data general perform arithmetic right shifting
>>correctly
>
>Argghhh!  It is division that is wrong, not right shift!  -1/2 should
>be -1, not 0.  Don't hardware people ever take math courses?
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Where I went to school, us hardware people had to take MUCH more math
than the software people!!   (EE Vs. CS) So math is not the problem ...

hansen@mips.UUCP (Craig Hansen) (01/11/86)

> Rather than get rid of the arithmetic right shift, one could instead
> change the divide instruction so it does what mathematicians have
> always thought division meant.

So now we've seen every possible solution: ignore it,
fix arithmetic right shift, and now fix divide!

There are indeed two consistent definitions of divide,
both of which satisfy the relation:

	x = (x / y) * y + (x % y)

The two definitions basically amount to evaluating x/y
by (1) rounding toward -infinity and (2) rounding toward zero.
Each has its own advantages:

    a)	1) the sign of the remainder = the sign of the divisor
	2) the sign of the remainder = the sign of the dividend

    b)	1)  x  >=  (x/y)*y
	2) |x| >= |(x/y)*y|

(2) is the most commonly implemented form of divide.
(1) looks like the most commonly implemented arithmetic right shift.

(2) is useful for signal processing applications because
of the (b) property.  (1) is useful for applications involving
residue arithmetic because of the (a) property, i.e.
the remainder is a proper residue.

Ideally, a machine would have both (or neither).
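
[A hedged C sketch of the two forms (function names are ours), both
satisfying x == q*y + r; it assumes a / and % that round toward zero, from
which form (1) is built by a conditional adjustment:]

        /* (2) round toward zero -- what most divide hardware gives */
        int quo_trunc(int x, int y) { return x / y; }
        int rem_trunc(int x, int y) { return x % y; }

        /* (1) round toward minus infinity, built on top of (2) */
        int quo_floor(int x, int y)
        {
            int q = x / y;
            if ((x % y != 0) && ((x < 0) != (y < 0)))
                q -= 1;              /* adjust when the signs differ */
            return q;
        }
        int rem_floor(int x, int y) { return x - quo_floor(x, y) * y; }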

Craig Hansen
MIPS Computer Systems

ark@alice.UucP (Andrew Koenig) (01/12/86)

> Also, (3) sometimes you know that the dividend is divisible by the power
> of two you are dividing it by.  And (4), a conditional add followed by a
> shift is equivalent to a divide, and is often much faster.  Finally, (5),
> if you regard the number as a fraction, with the decimal point at the left,
> what happens to the low order bit is of no great consequence.

To which I add (6): sometimes you don't care if a division rounds
up or down, as long as it delivers one of the neighbors of the
exact result.  Consider a binary search.
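
[Illustration of point (6) -- a minimal binary search sketch (names are
ours): the midpoint only needs to be one of the two neighbours of the exact
average, so a right shift does fine:]

        /* Return the index of key in sorted a[0..n-1], or -1. */
        int find(const int *a, int n, int key)
        {
            int lo = 0, hi = n - 1;

            while (lo <= hi) {
                int mid = lo + ((hi - lo) >> 1);  /* either neighbour of the
                                                     exact midpoint will do */
                if (a[mid] == key)
                    return mid;
                else if (a[mid] < key)
                    lo = mid + 1;
                else
                    hi = mid - 1;
            }
            return -1;
        }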

mouse@mcgill-vision.UUCP (der Mouse) (01/13/86)

Radford Neal (radford@calgary), U of Calgary, writes (in <32@calgary.UUCP>):
> Rather than get rid of the arithmetic right shift, one could instead
> change the divide instruction so it does what mathematicians have
> always thought division meant.
> [ ie, truncate towards negative infinity rather than zero ]
> [ example: -3/2 versus 3/2 ]
> With this definition, a right shift divides by two and masking off
> the lower bit gives you the remainder.
> Does anyone know any reason, other than inertia, why divides shouldn't
> be changed to work this way?

Gee....and I always thought mathematicians thought -3/2 was -1.5 (;-).

[
 Note: By x%y I mean x mod y (remainder when x is divided by y)
          x>>y       x right-shifted y bits
          x^y        x raised to the power y
 No flames please, some people out there probably don't know
  C (I can hear it now:  "Doesn't >> mean `much greater than'?").
]

     Only reason I can see for a mathematician -- for anyone -- to want
it to work this way is that then x-(y*(x/y)) is always >=0.
Unfortunately, doing this breaks other nice properties, in particular
that x/y equals -((-x)/y).  It therefore means that hardware can't
handle negative numbers by simply negating before and after the
operation.  I don't see why you aren't satisfied with -3/2=-1 and
-3%2=-1.  You still have that x=(x%y)+(y*(x/y)), which is really all
mathematicians demand, as far as I know (flame retardant:  my knowledge
of number theory is limited to undergrad level).

     What do you want done with -3/-2 or 3/-2?

     There are some algorithms which are easier to implement with your
sort of division, but I would be surprised if it were not the case that
in most of them, the divisor *is* a power of two and hence right shifts
are appropriate.  You seem to want x/(2^y) = (x>>y).

     Inertia is a powerful reason.  There has been much discussion on
groups like net.lang.c and net.unix about portability; the general idea
is that a programmer should assume as little as possible.
Unfortunately, programmers tend to assume that things like the laws of
arithmetic will not change and hence assumptions about them can safely
be wired into the code.  I feel certain there is a lot of code out there
which would break on any such machine.  I know I would consider such a
machine broken (out of incompatibility with the rest of the world and
with existing code -- yes, I know that's what you mean by inertia).
-- 
					der Mouse

USA: {ihnp4,decvax,akgua,etc}!utcsri!mcgill-vision!mouse
     philabs!micomvax!musocs!mcgill-vision!mouse
Europe: mcvax!decvax!utcsri!mcgill-vision!mouse
        mcvax!seismo!cmcl2!philabs!micomvax!musocs!mcgill-vision!mouse

Hacker: One who accidentally destroys /
Wizard: One who recovers it afterward

franka@mmintl.UUCP (Frank Adams) (01/14/86)

In article <285@mips.UUCP> hansen@mips.UUCP (Craig Hansen) writes:
>So now we've seen every possible solution: ignore it,
>fix arithmetic right shift, and now fix divide!
>
>There are indeed two consistent definitions of divide,
>both of which satisfy the relation:
>
>	x = (x / y) * y + (x % y)
>
>The two definitions basically amount to evaluating x/y
>by (1) rounding toward -infinity and (2) rounding toward zero.
>Each has its own advantages:

There is another important definition, which is to round to the nearest
integer.  (This is subdivided by what one does with a fractional part
of exactly one half.)  Uses for the rounded quotient should be quite
obvious.

The remainder in this case is the remainder with the smallest possible
absolute value.  This is useful in a variety of number-theoretic
applications; as a simple example, Euclid's algorithm will terminate
faster using the least absolute remainder.
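
[Illustration -- a hedged sketch (name is ours), assuming nonnegative
arguments: forcing each remainder down to at most b/2 in magnitude makes
Euclid's algorithm take fewer steps:]

        unsigned gcd_lar(unsigned a, unsigned b)
        {
            while (b != 0) {
                unsigned r = a % b;
                if (r > b / 2)
                    r = b - r;    /* take the least absolute remainder */
                a = b;
                b = r;
            }
            return a;
        }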


I will note also that I fairly often want to do a division and round
up instead of down.  This is especially common when dealing with
arrays whose first element is 1, not 0.
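
[The round-up case as a one-line sketch (name is ours), assuming positive
operands and no overflow in the addition:]

        /* how many size-b pieces are needed to cover a items */
        int div_round_up(int a, int b)
        {
            return (a + b - 1) / b;
        }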

Frank Adams                           ihpn4!philabs!pwa-b!mmintl!franka
Multimate International    52 Oakland Ave North    E. Hartford, CT 06108

tim@ism780c.UUCP (Tim Smith) (01/15/86)

>It does seem, however, that an arithmetic right shift which *is* equivalent
>to a division would be more useful.  There are relatively few cases
>where one wishes to round towards minus infinity, and those cases are not
>especially likely to be divisions by powers of two.

There are two things that seem reasonable to compute for a/b:
(1) int(a/b), and (2) sign(a/b)*int(abs(a)/abs(b)), where int(x) is the
greatest integer not exceeding x.  When b = 2, arithmetic right shift
computes (1).  Division computes (2).
[This is assuming a "normal" computer, whatever that means!]
Because division is (2), some properties one would normally
expect division to have do not hold.  For example, if a,b, and n
are integers, b != 0, I would expect (a + bn) / b to be the same
as a/b + n.  This would hold if division was (1), but not when it
is (2).

The only reason I have heard for using (2) is that -(a/b) = (-a)/b.
This is a nice property, but I think that having (a+bn)/b = a/b + n,
and having the mod function have a range of b instead of 2b-1 are
more important.  Hence, it is division that should be changed to
agree with shift.
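
[A small check of the property in question, hedged as usual: it assumes an
arithmetic >> on signed values and a / that truncates toward zero.]

        #include <assert.h>

        int main(void)
        {
            int a = -1, b = 2, n = 1;

            /* floor-style quotient (shift): the identity holds, 0 == 0 */
            assert(((a + b*n) >> 1) == (a >> 1) + n);
            /* truncating division: (a + b*n)/b is 0, but a/b + n is 1, */
            /* so the identity fails on this input                      */
            assert((a + b*n) / b != a / b + n);
            return 0;
        }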

--
Tim Smith       sdcrdcf!ism780c!tim || ima!ism780!tim || ihnp4!cithep!tim

ron@brl-smoke.ARPA (Ron Natalie <ron>) (01/15/86)

I don't know how this discussion managed to leak over from net.lang.c,
but it is not a C issue.  C makes no guarantee about whether integer division
of negative operands truncates towards zero or -inf.  If both operands are
positive, truncation is toward zero.

C only guarantees that the integer divide and the modulo function are
consistent with respect to each other, i.e.

	(a/b)*a + a%b equals a
-Ron

ron@brl-smoke.ARPA (Ron Natalie <ron>) (01/15/86)

> 	(a/b)*a + a%b equals a

Whoops...that should be

	(a/b)*b + a%b equals a

tim@ism780c.UUCP (Tim Smith) (01/16/86)

In article <345@mcgill-vision.UUCP> mouse@mcgill-vision.UUCP (der Mouse) writes:

>  C (I can hear it now:  "Doesn't >> mean `much greater than'?").

I thought it was an input operator.. :-)

>     Only reason I can see for a mathematician -- for anyone -- to want
>it to work this way is that then x-(y*(x/y)) is always >=0.

Actually, it is not that we want it to be >= 0, we want it to have only |y|
different values, rather than 2|y|-1 values. In fact, if y < 0, then we
want it to be <= 0.

> I don't see why you aren't satisfied with -3/2=-1 and -3%2=-1.
> You still have that x=(x%y)+(y*(x/y)), which is really all
> mathematicians demand, as far as I know (flame retardant: my
> knowledge of number theory is limited to undergrad level).

I want (x+ny)/y to be the same as x/y + n.  This only works if we
always round toward negative infinity, or always round toward positive
infinity.

>     What do you want done with -3/-2 or 3/-2?
1 and -2.
-- 
Tim Smith       sdcrdcf!ism780c!tim || ima!ism780!tim || ihnp4!cithep!tim

radford@calgary.UUCP (Radford Neal) (01/17/86)

> Only reason I can see for a mathematician -- for anyone -- to want
> it to work this way is that then x-(y*(x/y)) is always >=0.
> Unfortunately, doing this breaks other nice properties, in particular
> that x/y equals -((-x)/y).  It therefore means that hardware can't
> handle negative numbers by simply negating before and after the
> operation.  I don't see why you aren't satisfied with -3/2=-1 and
> -3%2=-1.  You still have that x=(x%y)+(y*(x/y)), which is really all
> mathematicians demand, as far as I know (flame retardant:  my knowledge
> of number theory is limited to undergrad level).

In applications where number theory is relevant there is a powerful
demand that

      x%y = (x+y)%y

which isn't true with truncate toward zero division.

>      What do you want done with -3/-2 or 3/-2?

Beats me.  I'm rather confused as to when people actually use negative
divisors and what should be done about them. But as a first cut I might
go for:

    7/3 = 2     7%3 = 1
   -7/3 = -3   -7%3 = 2
    7/-3 = -2   7%-3 = 1
   -7/-3 = 3   -7%-3 = 2

This preserves the properties that

   x = (x/y)*y + x%y   and   0 <= x%y < |y|
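
[A hedged C sketch matching the table above (function names are ours),
assuming a / and % that round toward zero:]

        int quo(int x, int y)
        {
            int q = x / y;
            if (x % y < 0)
                q -= (y > 0) ? 1 : -1;    /* push the remainder into range */
            return q;
        }

        int rem(int x, int y)
        {
            int r = x % y;
            if (r < 0)
                r += (y > 0) ? y : -y;    /* now 0 <= r < |y| */
            return r;
        }
        /* e.g. quo(-7, 3) is -3 and rem(-7, 3) is 2, as in the table. */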

      Radford Neal
      University of Calgary

rentsch@unc.UUCP (01/18/86)

All this argument over whether divide == right shift for negative
arguments is pretty irrelevant.  As every good systems hacker knows,
[computer] division is only defined when both the operands are
positive!   :-)

If that isn't enough for you, try looking at rules for fixed point
division in PL/I.  Now answer this simple question:

	What is  25 + 1/3  ?  (Hint:  same as   25 + (1/3) ).

[One final thing:  no flames about this not belonging on newsgroup
X, and answers to /dev/null, please.  And add my vote for ending the
discussion of right shift versus divide right now!]

asw@rlvd.UUCP (Antony Williams) (01/21/86)

In article <32@calgary.UUCP> radford@calgary.UUCP (Radford Neal) writes:
>Rather than get rid of the arithmetic right shift, one could instead
>change the divide instruction so it does what mathematicians have
>always thought division meant.
	this looks like opinion to me.
>
>Specifically:
>
>      Quotient of -3 divided by 2 should be -2.
>      Remainder of -3 divided by 2 should be 1.
>
>With this definition, a right shift divides by two and masking off
>the lower bit gives you the remainder.
>
>Does anyone know any reason, other than inertia, why divides shouldn't
>be changed to work this way?
  Unfortunately, this breaks one of the equivalences that programmers
tend to expect on intuitive grounds, namely:

    - ( a div b ) == ( -a div b ) == a div (-b)

by analogy with real arithmetic.



-- 
---------------------------------------------------------------------------
Tony Williams					|Informatics Division
UK JANET:	asw@uk.ac.rl.vd			|Rutherford Appleton Lab
Usenet:		{... | mcvax}!ukc!rlvd!asw	|Chilton, Didcot
ARPAnet:	asw%rl.vd@ucl-cs.arpa		|Oxon OX11 0QX, UK

crandell@ut-sally.UUCP (Jim Crandell) (01/22/86)

>> arithmetic right shifting is equivalent(numerically) to division if the
>> algorithm is implemented correctly.  just because most machines do it
>> wrong doesn't mean it doesn't work.
>>
>> if you want, try the infamous right shift all 1's (-1).
>> 
>Still more evidence that the problem is harder than it looks:
>the above algorithm fails if the dividend is the most negative number....

Seymour Cray would really get a bang out of this discussion.  Anybody want
to send him a copy of it?
-- 

    Jim Crandell, C. S. Dept., The University of Texas at Austin
               {ihnp4,seismo,ctvax}!ut-sally!crandell

fnf@unisoft.UUCP (Fred Fish) (01/24/86)

>>Argghhh!  It is division that is wrong, not right shift!  -1/2 should
>>be -1, not 0.  Don't hardware people ever take math courses?
>		^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Sure, just ask any math professor and he will tell you that -1/2 should
be -1/2,  NOT -1 or 0    :-)

-Fred