flee@shire.cs.psu.edu (Felix Lee) (02/19/90)
Kent Paul Dolan <xanthian@ads.com> wrote:
>Yep. It is utterly amazing how many modulus implementations exist
>where (-1) mod m isn't m-1.

There are three things that most people would like:
	quotient = -5 div 3 = -(5 div 3) = -1
	remainder = -5 mod 3 = (-5 + 6) mod 3 = 1
	3*quotient + remainder = -5

Unfortunately, they're mutually contradictory. You must choose one of
them to violate. Someone will be unhappy with your decision.
--
Felix Lee	flee@shire.cs.psu.edu	*!psuvax1!flee
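Felix's three desiderata can be checked directly against C's operators (a minimal sketch; C99 and later mandate truncation toward zero, so these hypothetical wrappers just expose / and %):

```c
#include <assert.h>

/* C's native integer division and remainder (C99: truncate toward zero). */
int c_div(int a, int b) { return a / b; }
int c_rem(int a, int b) { return a % b; }
```

Here c_div(-5, 3) == -1 and c_rem(-5, 3) == -2: properties 1 and 3 hold, so property 2 (a remainder of 1) is the one C gives up.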
xanthian@saturn.ADS.COM (Metafont Consultant Account) (02/19/90)
In article <E1wt*4@cs.psu.edu> flee@shire.cs.psu.edu (Felix Lee) writes:
>Kent Paul Dolan <xanthian@ads.com> wrote:
>>Yep. It is utterly amazing how many modulus implementations exist
>>where (-1) mod m isn't m-1.
>
>There are three things that most people would like:
>	quotient = -5 div 3 = -(5 div 3) = -1
>	remainder = -5 mod 3 = (-5 + 6) mod 3 = 1
>	3*quotient + remainder = -5
>
>Unfortunately, they're mutually contradictory. You must choose one of
>them to violate. Someone will be unhappy with your decision.

Absolutely! One may choose to truncate integer divisions' quotients
towards minus infinity, a very unnatural feeling operation; to
dissociate the modulus function from the remainder of integer
division; or to violate the grade school concept of reconstructing the
dividend simply from the divisor, the quotient, and the remainder.

I see the idea of a modulus function that doesn't change if the axis
is shifted left or right by the mod base, rather than one that suffers
an ugly reflection at the origin, as inherently beautiful, in the math
sense. It is also inherently more useful, in the graphics programming
sense.

Among your three items, the one whose loss arouses the least religious
fervor is the second; I see no particular reason to identify the mod
function with the remainder of integer division. They are very
comfortable as independent concepts, and, I think, were identified
with one another on computers by the historical accident that the
remainder falls out for free from the most common shift-and-subtract
implementation of division, which leaves the remainder in one half of
the dividend double register while the quotient is built in the other
half, and by the arithmetical coincidence that the two are identical
when all the operands are positive.
--
xanthian@ads.com xanthian@well.sf.ca.us (Kent Paul Dolan)
Again, my opinions, not the account furnishers'.
aipdc@castle.ed.ac.uk (Paul D. Crowley) (02/20/90)
In article <E1wt*4@cs.psu.edu> flee@shire.cs.psu.edu (Felix Lee) writes:
>There are three things that most people would like:
>	quotient = -5 div 3 = -(5 div 3) = -1
>	remainder = -5 mod 3 = (-5 + 6) mod 3 = 1
>	3*quotient + remainder = -5
>
>Unfortunately, they're mutually contradictory. You must choose one of
>them to violate. Someone will be unhappy with your decision.

I'd _hate_ a computer that obeyed the first of these. To me, -5 div 3
is -2:

	-5 div 3 = int(-5/3) = int(-1.666) = -2

Then again, I think month and day numbers should start at 0.
--
\/ o\	"I say we grease him right away. No offense." - Aliens.
/\__/	Paul D Crowley   aipdc@uk.ac.ed.castle
desnoyer@apple.com (Peter Desnoyers) (02/20/90)
In article <10946@saturn.ADS.COM> xanthian@saturn.ADS.COM (Metafont Consultant Account) writes:
> I see the idea of a modulus function that doesn't change if the axis
> is shifted left or right by the mod base, rather than one that suffers
> an ugly reflection at the origin, as inherently beautiful, in the math
> sense. It is also inherently more useful, in the graphics programming
> sense.

It is more than just pretty. The integers mod M has M elements,
usually represented as 0..M-1. A modulo operator that results in
{integers mod M} having 2M-1 elements is either incorrectly named or
incorrectly implemented.

Peter Desnoyers
Apple ATG
desnoyer@apple.com
xanthian@saturn.ADS.COM (Metafont Consultant Account) (02/20/90)
In article <6790@internal.Apple.COM> desnoyer@apple.com (Peter Desnoyers) writes:
>In article <10946@saturn.ADS.COM> xanthian@saturn.ADS.COM (Metafont
>Consultant Account) writes:
>> I see the idea of a modulus function that doesn't change if the axis
>> is shifted left or right by the mod base, rather than one that suffers
>> an ugly reflection at the origin, as inherently beautiful, in the math
>> sense. It is also inherently more useful, in the graphics programming
>> sense.
>
>It is more than just pretty. The integers mod M has M elements, usually
>represented as 0..M-1. A modulo operator that results in {integers mod M}
>having 2M-1 elements is either incorrectly named or incorrectly implemented.

[I knew I couldn't get away without drawing a picture; my words have
never been that clear...]

Peter,

It isn't the case, for the implementations I have seen that I
considered flawed, that the x mod 4 mapping looks like:

	                                  3           3
	                               2           2
	                            1           1
	 0           0           0           0           0
	         -1          -1
	      -2          -2
	   -3          -3
	-8 -7 -6 -5 -4 -3 -2 -1  0  1  2  3  4  5  6  7  8

but instead it looks like:

	    3           3                 3           3
	       2           2           2           2
	          1           1     1           1
	 0           0           0           0           0
	-8 -7 -6 -5 -4 -3 -2 -1  0  1  2  3  4  5  6  7  8

when I need it to look like:

	          3           3           3           3
	       2           2           2           2
	    1           1           1           1
	 0           0           0           0           0
	-8 -7 -6 -5 -4 -3 -2 -1  0  1  2  3  4  5  6  7  8

which latter is invariant under translations of the axis by multiples
of 4, the modulus base, a very desirable property for mapping a
rectangle onto a torus.
--
xanthian@ads.com xanthian@well.sf.ca.us (Kent Paul Dolan)
Again, my opinions, not the account furnishers'.
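Kent's third graph is what a translation-invariant mod delivers. A common C idiom for it (a sketch assuming a positive base m; C's % may return a negative remainder for a negative x, so one fix-up is needed) is:

```c
#include <assert.h>

/* Translation-invariant mod: result always in 0..m-1 for m > 0,
   so wrap(x + m, m) == wrap(x, m) for every x -- the property
   wanted for wrapping a coordinate onto a torus. */
int wrap(int x, int m) {
    int r = x % m;          /* may be negative when x is negative */
    return r < 0 ? r + m : r;
}
```

The asserts below reproduce the y = 3 row of the third graph: x = -5, -1, 3, 7 all map to 3 under mod 4.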
chl@cs.man.ac.uk (Charles Lindsey) (02/20/90)
xanthian@saturn.ADS.COM (Metafont Consultant Account) writes:
>In article <E1wt*4@cs.psu.edu> flee@shire.cs.psu.edu (Felix Lee) writes:
>>There are three things that most people would like:
>>	quotient = -5 div 3 = -(5 div 3) = -1
>>	remainder = -5 mod 3 = (-5 + 6) mod 3 = 1
>>	3*quotient + remainder = -5

>Absolutely! One may either choose to truncate integer divisions'
>quotients towards minus infinity, a very unnatural feeling operation,

On the contrary, one must start by realising that it is the usual
definition of integer division in programming languages that is WRONG,
and is the real root of the problem.

Also, aipdc@castle.ed.ac.uk (Paul D. Crowley) writes:
>I'd _hate_ a computer that obeyed the first of these. To me, -5 div 3 is
>-2. -5 div 3 = int(-5/3) = int(-1.666) = -2

So I am not alone in my view. Here is a nice example taken from an
article by Lambert Meertens in ALGOL Bulletin #39. I have transcribed
it into ADA,

	TYPE intarray IS ARRAY(integer RANGE <>) OF integer;

	FUNCTION mid(a: intarray) RETURN integer IS
	-- the "middle" value of the array a
	BEGIN
	   RETURN a( (a'first + a'last)/2 );
	END;

	FUNCTION slide(a: intarray; newbound: integer) RETURN intarray IS
	-- a copy of a, slid so that its 'first = newbound
	-- ("at" would be the natural name, but it is reserved in Ada)
	   newa: intarray(newbound .. a'last - a'first + newbound) := a;
	BEGIN
	   RETURN newa;
	END;

	somearray: intarray(1..4) := (1,2,3,4);
	-- so mid(somearray) gives '2'

	FOR i IN REVERSE -4..+4 LOOP
	   put(mid(slide(somearray, i)));
	END LOOP;
	-- prints "2 2 2 2 2 2 3 3 3"

How did we get into this mess? Once upon a time (c. 1950) some
machines (mostly in Europe) did integer division "right" and some
(mostly in America) did it "wrong". The difference was essentially
between machines built by mathematicians and machines built by
engineers (note that, if you carefully follow a textbook on Algebra,
you will see how mathematicians treat it). Not surprisingly, the
"wrong" answer got built into FORTRAN.
The last chance to get it right was in ALGOL 60, when the Europeans
were only interested in REAL division. The Americans wanted a separate
operator for INTEGER division, so the Europeans asked the Americans to
provide a definition for it.

I tried in 1968 to get it right in ALGOL 68, but all I achieved was to
get the MOD correct (and not the same as remainder). I tried to
persuade ADA to get it right, but all I achieved was to get a REM
operator in addition to MOD. I tried to get the PASCAL standardizers
to get it right, but although they admitted the merit of my case, they
got cold feet because, in the meantime, all hardware was now being
built the way languages were now defining it.

So here is my plea to our descendants who may someday colonize a
planet in Alpha Centauri. When you get there, and set up new standards
to be applied in that corner of the galaxy, please remember that the
people on Terra got integer division WRONG, and do not let the mistake
propagate any further.
djones@megatest.UUCP (Dave Jones) (02/21/90)
In article <E1wt*4@cs.psu.edu> flee@shire.cs.psu.edu (Felix Lee) writes:
> There are three things that most people would like:
>	quotient = -5 div 3 = -(5 div 3) = -1
>	remainder = -5 mod 3 = (-5 + 6) mod 3 = 1
>	3*quotient + remainder = -5
>
>Unfortunately, they're mutually contradictory. You must choose one of
>them to violate. Someone will be unhappy with your decision.

I differ. The first of these, -5 div 3 == -1, is wrong. -5 div 3 is
-2. Fix that, and everything is cool.

Draw both versions of the 'div-3' graph, and you'll see why this makes
sense. The -5 div 3 == -1 graph has an ugliness in it around zero that
translates into an extra test in algorithms.
flee@shire.cs.psu.edu (Felix Lee) (02/21/90)
Dave Jones <djones@megatest.UUCP> wrote:
>I differ. The first of these, -5 div 3 == -1 is wrong.
The contrary view is that abs(a/b) != abs(-a/b) is strange. It
depends on what you're doing. I have no problems with either stance.
--
Felix Lee flee@shire.cs.psu.edu *!psuvax1!flee
djones@megatest.UUCP (Dave Jones) (02/22/90)
From article <Er$cfg@cs.psu.edu>, by flee@shire.cs.psu.edu (Felix Lee):
> Dave Jones <djones@megatest.UUCP> wrote:
>>I differ. The first of these, -5 div 3 == -1 is wrong.
>
> The contrary view is that abs(a/b) != abs(-a/b) is strange. It
> depends on what you're doing. I have no problems with either stance.

You've hit the nail on the head. The contrary 'view' is only that: a
view. Many beautiful and consistent mathematical concepts seem
'strange' when understood only incompletely. If you didn't sell it
back to the bookstore for tuition, get your old group-theory textbook
out of the garage and read the first couple of chapters. Then tell me
what 'abs' has to do with the price of tea in China.

Several others have told me that it 'depends on what you're doing',
but so far nobody has come up with an example where the contrary view
is a win.
firth@sei.cmu.edu (Robert Firth) (02/22/90)
In article <12099@goofy.megatest.UUCP> djones@megatest.UUCP (Dave Jones) writes:

[the 'contrary' view is that integer division should truncate towards zero]

>Several others have told me that it 'depends on what you're doing', but so
>far nobody has come up with an example where the contrary view is a win.

That surprises me, since there is a standard example. You have a
picture drawn on a Cartesian plane, symmetrical about the origin. You
need to scale this picture by some factor - for example, you want to
reduce its size by 50% linear.

The obvious coordinate transformation is (x',y') = (x/2,y/2). If
division truncates towards zero, fine; otherwise, the scaled picture
is no longer symmetrical (and typically the viewer can perceive the
asymmetry).
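Firth's example rests on the point symmetry of truncation: (-x)/2 == -(x/2). A minimal sketch, using C's native truncating division:

```c
#include <assert.h>

/* Scale a coordinate by 1/2. With truncation toward zero,
   scale_half(-x) == -scale_half(x), so a picture symmetric about
   the origin stays symmetric after scaling. */
int scale_half(int x) { return x / 2; }
```

For instance, scale_half(3) == 1 and scale_half(-3) == -1; a floor-based division would send -3 to -2 and visibly break the symmetry.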
flee@shire.cs.psu.edu (Felix Lee) (02/22/90)
On the issue of

	-a div b = -(a div b)		[1]
versus
	a div b = floor(a/b)		[2]

I do prefer [2] over [1]. The reason I'm mostly tolerant of either
case is because I remember writing expressions like

	abs(a) div b * sgn(a)

to simulate [1], but I can't remember the context; it could probably
have been better done differently anyway.
--
Felix Lee	flee@shire.cs.psu.edu	*!psuvax1!flee
bs@linus.UUCP (Robert D. Silverman) (02/22/90)
In article <12099@goofy.megatest.UUCP> djones@megatest.UUCP (Dave Jones) writes:
:From article <Er$cfg@cs.psu.edu>, by flee@shire.cs.psu.edu (Felix Lee):
:> Dave Jones <djones@megatest.UUCP> wrote:
:>>I differ. The first of these, -5 div 3 == -1 is wrong.
:>
:> The contrary view is that abs(a/b) != abs(-a/b) is strange. It
:> depends on what you're doing. I have no problems with either stance.
:
Sigh. Don't people learn about functions in school anymore?
The mapping 5 div 3 -- > 1 is a function. It is the floor function
of the quotient. Why on earth SHOULD one expect that for some function f
abs(f(a,b)) = abs(f(-a,b))?????
It would be the same as expecting f(a,b) = -f(-a,b). This simply is not
true of all functions.
It is not strange. It is simply people who know very little mathematics
shooting their mouth off.
--
Bob Silverman
#include <std.disclaimer>
Internet: bs@linus.mitre.org; UUCP: {decvax,philabs}!linus!bs
Mitre Corporation, Bedford, MA 01730
postpischil@alien.enet.dec.com (Always mount a scratch monkey.) (02/22/90)
In article <6197@bd.sei.cmu.edu>, firth@sei.cmu.edu (Robert Firth) writes...
> The obvious coordinate transformation is (x',y') = (x/2,y/2). If
> division truncates towards zero, fine; otherwise, the scaled picture
> is no longer symmetrical (and typically the viewer can perceive
> the asymmetry).

I'm not sure why you are doing geometry with integers, but it's a real
shame that the drawing, which consisted of two identical components,
one centered about the origin and one off to the side a bit, no longer
consists of two identical components.

The difference between the two divisions is that one preserves
reflectional symmetry and one preserves translational symmetry. Can
you make an argument that one symmetry is more aesthetic than another?

Neither form of division has a greater claim to purity or grace. Each
should be used when appropriate. Both should be available in a general
programming environment.
--
edp (Eric Postpischil)
"Always mount a scratch monkey."
postpischil@alien.enet.dec.com
peter@ficc.uu.net (Peter da Silva) (02/23/90)
I've got a secret for you. Engineers like a continuous div/rem as
well. The folks who got it wrong were the physicists, who never deal
with integers nohow, and crystallography weenies, ditto.
--
 _--_|\  Peter da Silva. +1 713 274 5180. <peter@ficc.uu.net>.
/      \
\_.--._/ Xenix Support -- it's not just a job, it's an adventure!
   v     "Have you hugged your wolf today?"  `-_-'
djones@megatest.UUCP (Dave Jones) (02/23/90)
From article <981@m1.cs.man.ac.uk>, by chl@cs.man.ac.uk (Charles Lindsey):
...
> I tried to get the PASCAL standardizers to get it right, but although they
> admitted the merit of my case, they got cold feet because, in the meantime,
> all hardware was now being built the way languages were now defining it.

I too haggled with the Pascal standardizers, but I was under the
impression that we WON. I just now went back to an old draft copy, and
to _Standard_Pascal_ by Doug Cooper, and I discover that dammit, they
only did MOD right, not DIV.

@#$%^&!!!

MOD is stipulated to be correct, but -- get this! -- DIV is stipulated
to be wrong! You can't even do it right if you want to. Not unless you
want to (shudder) violate the standard. Oooooo. They say,

	abs(i) - abs(j) < abs((i div j)*j) <= abs(i)

There's them damnable abses again. What the **** has abs got to do
with anything? Abses in definitions translate into tests for > 0 at
runtime. Who needs the aggravation?

And then they confess, 'NOTE: Only for i >=0 and j > 0 does the
relation (i div j) * h + i mod j = i hold.'

	***

Just now I did a little experimenting. On Sun-3, release 4.3. In both
C and Pascal, division is correct, but mod is wrong.

In other words, they have it exactly backwards from the ANSI Pascal
Standard. Not right mathematically. Not right according to the
standard. You got to wonder, how come?
flee@shire.cs.psu.edu (Felix Lee) (02/23/90)
Bob Silverman <bs@linus.mitre.org> wrote:
>The mapping 5 div 3 -- > 1 is a function. It is the floor function
>of the quotient. Why on earth SHOULD one expect that for some function f
>abs(f(a,b)) = abs(f(-a,b))?????
>It would be the same as expecting f(a,b) = -f(-a,b). This simply is not
>true of all functions.

I find this statement a little strange. Consider:

	The mapping 2 plus 3 -> 5 is a function. Why on earth SHOULD
	one expect that for some function f, f(a,b) = f(b,a)??

I'm afraid my statement about "div" was a little unclear. The issue is
whether integer division is the floor of the quotient or not.
--
Felix Lee	flee@shire.cs.psu.edu	*!psuvax1!flee
djones@megatest.UUCP (Dave Jones) (02/23/90)
From article <6197@bd.sei.cmu.edu>, by firth@sei.cmu.edu (Robert Firth):
> In article <12099@goofy.megatest.UUCP> djones@megatest.UUCP (Dave Jones) writes:
>
> [the 'contrary' view is that integer division should truncate towards zero]
>
>>Several others have told me that it 'depends on what you're doing', but so
>>far nobody has come up with an example where the contrary view is a win.
>
> That surprises me, since there is a standard example.

Several people posted candidates, but each was shot down in turn.

> You have a
> picture drawn on a Cartesian plane, symmetrical about the origin.
> You need to scale this picture by some factor - for example, you
> want to reduce its size by 50% linear.
>
> The obvious coordinate transformation is (x',y') = (x/2,y/2). If
> division truncates towards zero, fine; otherwise, the scaled picture
> is no longer symmetrical (and typically the viewer can perceive
> the asymmetry).

I presume that since we are using integer division, the image is some
kind of a bitmap, right? This has the ring of truth to it, although it
seems somewhat contrived. (How often do you have a *symmetrical*
digitized picture lying about that you want to scale by 1/2?) It has
the ring of truth, because the problem with the 'contrary' view is
precisely that its graph is symmetrical about the y-axis, which makes
translating it sideways impossible. And here we have a picture which
is symmetrical about the y-axis. If there is a case where the
'contrary view' prevails, it will be something like this.

But I'm still not convinced. A picture undergoing this transformation
will be 'pinched' along both the x and y axes, whether it is
symmetrical or not. This may or may not be obvious, depending on the
picture. But I would think that the 'creases' down the middle would
often be noticeable, and would be a direct consequence of the
nastiness around zero.

(It may not matter, but is this really a valid way to scale down a
bitmap image? Seems bogus to me, because any two-by-two rectangle that
has even one dot in it will get transformed into a dot.)
bs@linus.UUCP (Robert D. Silverman) (02/23/90)
In article <E5$4$m@cs.psu.edu> flee@shire.cs.psu.edu (Felix Lee) writes:
>Bob Silverman <bs@linus.mitre.org> wrote:
>>The mapping 5 div 3 -- > 1 is a function. It is the floor function
>>of the quotient. Why on earth SHOULD one expect that for some function f
>>abs(f(a,b)) = abs(f(-a,b))?????
>>It would be the same as expecting f(a,b) = -f(-a,b). This simply is not
>>true of all functions.
>
>I find this statement a little strange. Consider:
>	The mapping 2 plus 3 -> 5 is a function. Why on earth SHOULD
>	one expect that for some function f, f(a,b) = f(b,a)??

Huh? This statement has absolutely nothing to do with what I said.

Oh, for blanks sake. When are you CS weenies going to learn some math?
I said for *some* function f. Not for *all* functions f, and this
clown then picks out an example of a linear function for which
f(a,b) = f(b,a), which has absolutely nothing to do with the issue at
hand. It works for this particular function, but won't work for all.

Will everyone PLEASE go look up the difference between a ring and a
field? The integers are not a field and division is not well defined
for every pair of integers. Choosing

	a,b \in Z --> a/b = floor[a/b]

is the best, mathematically consistent way to handle integer division.
This means -5/3 = -2. This is absolutely necessary. The integers form
a Euclidean domain. We therefore MUST have

	3*(-5/3) + (-5 mod 3) = -5.
--
Bob Silverman
#include <std.disclaimer>
Internet: bs@linus.mitre.org; UUCP: {decvax,philabs}!linus!bs
Mitre Corporation, Bedford, MA 01730
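Silverman's floor-based div and mod can be sketched on top of C's truncating operators (assuming a positive divisor b; the quotient needs one fix-up when the dividend is negative and the remainder is nonzero):

```c
#include <assert.h>

/* Floor division for b > 0: C's / truncates toward zero, so step
   the quotient down by one when a < 0 and the division is inexact. */
int floor_div(int a, int b) {
    int q = a / b, r = a % b;
    return (r != 0 && a < 0) ? q - 1 : q;
}

/* The matching mod, defined so that a == b*floor_div(a,b) + floor_mod(a,b). */
int floor_mod(int a, int b) {
    return a - b * floor_div(a, b);
}
```

With these, -5/3 gives quotient -2 and remainder 1, and the Euclidean identity 3*(-2) + 1 == -5 holds.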
jgk@osc.COM (Joe Keane) (02/23/90)
The main problem is that you're trying to assign an operator to one of
many possible different functions. I have my favorites, but someone
will always want a different one from what you pick.

Let's say we just want to convert a floating point number to an
integer. There are a number of rounding possibilities: up, down,
toward zero, away from zero, and to nearest. And if you're rounding to
nearest, what do you do if you're halfway in between? Again you can go
up, down, toward zero, away from zero, or, my favorite, pick the even
one.

Just when you think you've got it down, some weird case shows up; for
example, should 1 mod -2 be 1 or -1? I won't even get into 0 vs. -0 or
infinities and not-a-numbers. Good thing IEEE and ANSI define the way
these things should work. Trouble is, they don't always agree with
each other.

Even in the BASIC on my HP-71 there are four functions to convert a
floating point number to an integer, and three functions to take
remainders in different ways. I just looked in the Unix manual section
RINT(3M) and there are seven functions provided to convert a floating
point number to an integer.

If you think this is all kid's stuff, try getting the branch cuts
right in your complex inverse hyperbolic functions. Fun stuff.
cet1@cl.cam.ac.uk (C.E. Thompson) (02/23/90)
In article <12115@goofy.megatest.UUCP> djones@megatest.UUCP (Dave Jones) writes: >From article <981@m1.cs.man.ac.uk>, by chl@cs.man.ac.uk (Charles Lindsey): >... >> >> I tried to get the PASCAL standardizers to get it right, but although they >> admitted the merit of my case, they got cold feet because, in the meantime, >> all hardware was now being built the way languages were now defining it. >> > >I too haggled with the Pascal standardizers, but I was under the impression >that we WON. I just now went back to an old draft copy, and to >_Standard_Pascal_ by Doug Cooper, and I discover that dammit, they >only did MOD right, not DIV. > >@#$%^&!!! > >MOD is stipulated to be correct, but -- get this! -- DIV is stipulated >to be wrong! You can't even do it right if you want to. Not unless you >want to (shudder) violate the standard. Oooooo. They say, > > abs(i) - abs(j) < abs((i div j)*j) <= abs(i) > >There's them damnable abses again. What the **** has abs got to do with >anything? Abses in definitions translate into tests for > 0 at runtime. >Who needs the agravation? > >And then they confess, 'NOTE: Only for i >=0 and j > 0 does the relation >(i div j) * j + i mod j = i hold.' (Slight correction to quotation made there) Why the big suprise? I thought that this <%rudeword> in Pascal was NOTORIOUS. And you have had eight years to find out about it :-) If I have it right, the critical change happened between the 3rd and 5th working drafts (i.e. between Oct 1978 and Nov 1979). In the 3rd, i div j was defined for i>=0, j>0 to be what everyone would expect; to be an error if j=0, and to be 'implementation defined' in other cases; however i mod j was defined to be i - j*(i div j) (by implication, for all i,j). By the 5th these definitions, which at least allow the 'right' implementation, had been replaced by the mess that made it into the 1982 ISO standard. > > *** > >Just now I did a little experimenting. On Sun-3, release 4.3. 
In both C >and Pascal, division is correct, but mod is wrong. > >In other words, they have it exactly backwards from the ANSI Pascal >Standard. Not right mathematically. Not right according to the standard. >You got to wonder, how come? The *ANSI* standard... the fact that the ANSI standard is not the same as the ISO standard is another <%rudeword>. Luckily, it doesn't affect the div/mod discussion. Chris Thompson JANET: cet1@uk.ac.cam.phx Internet: cet1%phx.cam.ac.uk@nsfnet-relay.ac.uk
callahan@mordor.endor.cs.psu.edu (Paul B. Callahan) (02/24/90)
In article <98717@linus.UUCP> bs@linus.UUCP (Robert D. Silverman) writes:
>
>Oh, for blanks sake. When are you CS weenies going to learn some math?
 . . .
>Choosing a,b \in Z --> a/b = floor[a/b] is the best,
>mathematically consistent way to handle integer division.

[this gives the correct definition: a mod b = a - b*floor[a/b]]

Here's at least one CS weenie that agrees with you. On at least one
occasion I've gone crazy trying to track down errors related to the
incorrect implementation of the % (so called modulus) operator in C.
I have this funny notion in my head that "a mod b" ought to be in the
range 0 to b-1. I remember once thinking after a core dump "hmmm...
maybe '%' is returning a negative value... naaah, they couldn't have
been that dumb; it must be my fault," and wasting another half hour
trying to track down a non-existent bug.

When I first learned that floor[x] was the largest integer less than
or equal to x (a long time ago, in high school) it did strike me as
strange, since my intuitive notion of turning a real into an integer
involved throwing away all the digits to the right of the decimal
point. After seeing enough mathematics, I was ultimately convinced
that my intuition was at fault. That is, the idea of digit truncation
is an _artifact_ of the way we _represent_ numbers, and is not
elegantly defined in terms of the numbers themselves (as many have
pointed out, you have to throw in these nasty absolute values to make
it work).

On the other hand, as a computer weenie, I like to have direct access
to the operations the machine is actually performing. If we assume
that floating point arithmetic is done using a sign-magnitude
representation, then it is reasonable to conclude that when the
machine coerces a float to an integer, it does so by throwing away
the digits to the right of the decimal point. This is somewhat easier
to implement, though it gives an incorrect value for floor. Assuming
the machine is giving out this value for free, I would rather have
the *option* of using it than be *forced* to use a version of floor
or mod which is correct but requires a software test. When I say I
would like to have the option, this is not to say that I would be
inclined to exercise the option often. Normally, I would be quite
happy to pay a time penalty if it meant I could remember exactly what
the function is supposed to return.

A nice and easy way to get around this problem is to define your own
"defensive" mod procedure, which assumes that mod only works for
positive numbers. The obvious way to do this involves an extra
software test. If you're careful, it can be made completely portable.
Incidentally, the way I got around this problem in practice was
simply to add a big multiple of b to a when computing "a mod b."
That is,

	a mod b = (kb + a) mod b

So, if we choose k large enough to ensure kb+a is always positive,
then we get the correct value for mod. As long as b is constant, we
can precompute kb. Admittedly, this is a kludge, but it is probably
faster than sticking in an extra "if."

Paul Callahan
callahan@crabcake.cs.jhu.edu  callahan@psuvax1.cs.psu.edu
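Callahan's kludge can be sketched for a fixed base, say b == 7; the bound k = 1000 is purely illustrative, and the trick is only valid while a > -1000*7:

```c
#include <assert.h>

/* Precompute k*b once; adding it before % keeps the operand positive,
   so no branch is needed to fix up a negative remainder. */
enum { BIAS = 1000 * 7 };    /* k*b with k = 1000, b = 7 */

int mod7(int a) { return (a + BIAS) % 7; }
```

For example, mod7(-5) is 2, the value in 0..6, where C's raw -5 % 7 would give -5.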
karl@haddock.ima.isc.com (Karl Heuer) (02/24/90)
In article <0:X1-+Fxds13@ficc.uu.net> peter@ficc.uu.net (Peter da Silva) writes:
>I've got a secret for you. Engineers like a continuous div/rem as well. The
>folks who got it wrong were the physicists, who never deal with integers
>nohow, and crystallography weenies, ditto.

As you may know, C allows the / % operator-pair to be implemented in
either of two$ ways: the reflection-invariant definition that rounds
toward zero, or the translation-invariant one that rounds toward
negative infinity. It seems that there were some voices that didn't
like that indeterminacy, and the compromise position was to add the
div() function whose result-pair would be predictable.

For some reason--possibly to interface with other "broken"
languages--the X3J11 Committee chose to make div() round toward zero.
I think they should have also provided a mathdiv() that rounds toward
minus infinity; since they didn't, I keep such a function in my own
library.

Karl W. Z. Heuer (karl@ima.ima.isc.com or harvard!ima!karl), The Walking Lint
________
$ Strictly speaking, I suppose the implementation-defined rule could be
something silly like "round toward negative infinity if the denominator
is a power of two, else round toward zero", but since the choice must be
documented, I doubt that any serious vendor will try that.
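A mathdiv() of the kind Karl describes can be built on top of the standard div(); the name and exact behavior here are inferred from his description, not taken from any library:

```c
#include <stdlib.h>

/* Like div(), but the quotient rounds toward minus infinity, so the
   remainder (when nonzero) takes the divisor's sign. */
div_t mathdiv(int numer, int denom) {
    div_t d = div(numer, denom);   /* X3J11's div() rounds toward zero */
    if (d.rem != 0 && (d.rem < 0) != (denom < 0)) {
        d.quot -= 1;
        d.rem  += denom;
    }
    return d;
}
```

The invariant quot*denom + rem == numer still holds, but e.g. mathdiv(-5, 3) yields quotient -2 and remainder 1 instead of div()'s -1 and -2.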
karl@haddock.ima.isc.com (Karl Heuer) (02/24/90)
In article <E*er-q@cs.psu.edu> callahan@mordor.endor.cs.psu.edu (Paul B. Callahan) writes:
>After seeing enough mathematics, I was ultimately convinced that my intuition
>was at fault.

A similarly confusing point is that the negative of a negative number
is positive. Ah, the memories of being a TA...

>If we assume that floating point arithmetic is done using a sign-magnitude
>representation [then truncate is probably easier than floor to implement].

But, given that most implementations use two's complement for
integers, and even for floating-point exponents, why *is*
sign-magnitude the standard for floating point significands?

Karl W. Z. Heuer (karl@ima.ima.isc.com or harvard!ima!karl), The Walking Lint
dik@cwi.nl (Dik T. Winter) (02/25/90)
In article <16014@haddock.ima.isc.com> karl@haddock.ima.isc.com (Karl Heuer) writes:
> In article <E*er-q@cs.psu.edu> callahan@mordor.endor.cs.psu.edu (Paul B. Callahan) writes:
> >If we assume that floating point arithmetic is done using a sign-magnitude
> >representation [then truncate is probably easier than floor to implement].
>
> But, given that most implementations use two's complement for integers, and
> even for floating-point exponents, why *is* sign-magnitude the standard for
> floating point significands?

There is a good reason to use two's complement for integers: on an
add, you can simply ignore the carry. This reason is much less
compelling when doing floating-point arithmetic, where you cannot
simply ignore the carry. Note also that the exponent is not strictly
two's complement, but has the sign bit inverted. The reason is that
compare logic is simplified.

Further, IEEE specifies four rounding modes, so using a two's
complement mantissa because of simple truncation is no good reason. It
is even stronger: when doing a round to nearest, IEEE specifies that
if the exact result is halfway between two representable numbers, the
number with a low order zero bit has to be chosen (*). With a two's
complement mantissa a similar requirement might be posed, but it would
look more complex, and would be more complex in the logic.
--
(*) See [I think; this is from memory] IEEE Computer, Vol. 14, Issue 1.
This describes draft 8.0 of the IEEE standard and has a number of
accompanying articles. The current standard is based on draft 10.0.
There are some differences, but they are not shocking. I do not have a
reference for that standard.
--
dik t. winter, cwi, amsterdam, nederland
dik@cwi.nl
flee@shire.cs.psu.edu (Felix Lee) (02/25/90)
Robert D. Silverman <bs@linus.UUCP> wrote:
>Huh? This statement has absolutely nothing to do with what I said.
Grr, semantic difficulties. I think I agree with your position, but
not with your phrasing.
Anyway, more silliness: ADA has both "rem" and "mod", and its integer
division rounds towards zero. (e.g., (-A)/B = -(A/B) = A/(-B))
--
Felix Lee flee@shire.cs.psu.edu *!psuvax1!flee
moss@ibis.cs.umass.edu (Eliot Moss) (02/26/90)
(Please take this as appropriately tongue-in-cheek, though there is a
little bit of a message, too! :-)

Ok, you theoretical weenies, the *real* reason why -5 DIV 3 comes out
as -1 is that some of the first computers (including some of the first
ones on which FORTRAN was implemented) were sign-magnitude machines
... so they implemented the MOD, DIV, etc. functions on the absolute
values (magnitudes) and then propagated the signs according to
(slightly arbitrary) rules. The FORTRAN standards were established in
this distant era, have propagated to the present day, and continue to
lead machine designers to implement the "FORTRAN" (sign/magnitude)
version of these operations in hardware, even when language designers
(such as myself) and algorithm designers probably prefer the purer
"mathematical" form. The sign/magnitude form probably *is* a little
easier to implement, but if it's not what people want, it doesn't
matter. Of course, what people want depends on which people you ask ....

Eliot
--
J. Eliot B. Moss, Assistant Professor
Department of Computer and Information Science
Lederle Graduate Research Center
University of Massachusetts
Amherst, MA 01003
(413) 545-4206; Moss@cs.umass.edu
ccplumb@lion.waterloo.edu (Colin Plumb) (03/01/90)
In article <16014@haddock.ima.isc.com> karl@haddock.ima.isc.com (Karl Heuer) writes:
>But, given that most implementations use two's complement for integers, and
>even for floating-point exponents, why *is* sign-magnitude the standard for
>floating point significands?

Because the multiply/add ratio is much higher for floating point code, and
signed-magnitude is better for multiply at the expense of add.
--
	-Colin
sjs@spectral.ctt.bellcore.com (Stan Switzer) (03/01/90)
In article <Er$cfg@cs.psu.edu> flee@shire.cs.psu.edu (Felix Lee) writes:
> Dave Jones <djones@megatest.UUCP> wrote:
> >I differ. The first of these, -5 div 3 == -1 is wrong.
>
> The contrary view is that abs(a/b) != abs(-a/b) is strange. It
> depends on what you're doing. I have no problems with either stance.

Put up or shut up. I have often found that the anomalous behavior of
"modulus" at zero has caused me extra work to make it do the "right" thing:

	-5 div 3 == -2
	-5 mod 3 == 1

This has occurred in graphics applications, in communications handlers
(computing sliding windows using a wrap-around counter), and others.

Please, somebody, show me a single example where the other behavior is
actually USEFUL! We are not concerned with whether your grade-school
education [ -a/b == -(a/b) ] applies to computers, but with what actually
is useful in integer computation in actual algorithms.

Stan Switzer  sjs@bellcore.com
db@lfcs.ed.ac.uk (Dave Berry) (03/02/90)
In article <12115@goofy.megatest.UUCP> djones@megatest.UUCP (Dave Jones) writes:
>Just now I did a little experimenting. On Sun-3, release 4.3. In both C
>and Pascal, division is correct, but mod is wrong.
>
>In other words, they have it exactly backwards from the ANSI Pascal
>Standard. Not right mathematically. Not right according to the standard.
>You got to wonder, how come?

Almost certainly because that's how the hardware does it. Almost all
processors implement integer division and modulus that way.

Questions to comp.arch readers (who I've just tuned in):

1. Is the "normal" method for calculating integer division and modulus
   intrinsically faster than the mathematically expected definition (that
   rounds towards minus infinity), or is it just convention that it's
   implemented this way?

2. If everyone was suddenly to agree that the mathematically expected
   definition should be adopted as a standard, could it be done quicker in
   hardware than with software over the existing hardware?

There is some weight to arguments that languages should support the
round-towards-zero versions. Standard ML defines div and mod the
mathematically expected way, and this has produced criticism from some
users and implementers that use of these operators on positive arguments
is unnecessarily penalised, since they can't use the versions provided by
the hardware. It is possible that the definition might be amended to
support both definitions.

I note that the ISO draft proposal for standard computer arithmetic
(published in SIGPLAN Notices, Jan. 1990) allows either or both versions
of div and mod.

For reference, the Definition of Standard ML defines div and mod as
follows:

	i mod d, i div d return integers r, q determined by the equation
	d * q + r = i, where either 0 <= r < d or d < r <= 0. Thus r has
	the same sign as d. [An exception is raised] if d = 0.

[nb. the operators used in this definition are those of arithmetic, not
of the language.]
What sign i mod d should have when d < 0 (in an arbitrary language, not ML)
was discussed in several other articles recently.

Dave Berry, LFCS, Edinburgh Uni.	db%lfcs.ed.ac.uk@nsfnet-relay.ac.uk
"The thought of someone sharing one's own preferences without also sharing
 one's aversions strikes most people as utterly inconceivable." - C. A. Tripp.
cet1@cl.cam.ac.uk (C.E. Thompson) (03/03/90)
In article <2573@castle.ed.ac.uk> db@lfcs.ed.ac.uk (Dave Berry) writes:
>
>Almost certainly becasue that's how the hardware does it. Almost all
>processors implements integer division and modulus that way.o

*Very* few processors produce a "natural" div and rem/mod which are not
related by (i div j)*j + (i rem j) = i. Therefore no Pascal implementation
that passes the validation tests can be as naive as you suggest.

Chris Thompson
JANET:    cet1@uk.ac.cam.phx
Internet: cet1%phx.cam.ac.uk@nsfnet-relay.ac.uk
emcmanus@cs.tcd.ie (Eamonn McManus) (03/07/90)
db@lfcs.ed.ac.uk (Dave Berry) writes:
>1. Is the "normal" method for calculating integer division and modulus
>   intrinsically faster than the mathematically expected definition (that
>   rounds towards minus infinity), or is it just convention that it's
>   implemented this way?

An interesting point about rounding to minus infinity is that it allows
(sign-extending) right shifts to be used for integer division by powers of
two, when using twos-complement representation. For example, -7/2 is -3 by
the `normal' definition but -4 by the `mathematically expected' definition.
Its twos-complement representation is 1...1001, which when shifted right
produces 1...100, or -4 as required. If a programming language supported
this definition of negative division, its compilers could optimize
divisions by powers of two into shifts without having to know whether the
quantity being divided was negative. On many (most?) processors shifts are
faster than divisions even by powers of two. [Unsupported assertion.]
--
Eamonn McManus <emcmanus@cs.tcd.ie> <emcmanus%cs.tcd.ie@cunyvm.cuny.edu>
One of the 0% of Americans who are not Americans.
djones@megatest.UUCP (Dave Jones) (03/09/90)
From article <1710@tws8.cs.tcd.ie>, by emcmanus@cs.tcd.ie (Eamonn McManus):
...
>
> An interesting point about rounding to minus infinity is that it allows
> (sign-extending) right shifts to be used for integer division by powers of
> two, when using twos-complement representation.

Lots of operations work better on a two's complement machine, when you use
the 'mathematical' rules. Operations distribute correctly. Multiplications
and divisions by powers of two can be done with shifts. Modulus by a power
of two can be done with an &-operator. Multiplications by other numbers can
be done with combinations of shifts and adds. It all works nicely because
there is a nice consistent theory behind it. In fact, except for
sign-extension, the operations are exactly the same for signed and unsigned
numbers!

How come? Imagine this: For some positive number M, we take the integers,
coil them up into a helix, then squash the helix into a circle of
circumference M. We can consistently identify the points on that circle
with any interval of length M, but two are obvious winners: If we choose
the interval 0..(M-1), we get unsigned arithmetic. If we choose
-(M/2)..(M/2)-1, we get signed numbers. If M happens to be 2**n, where n is
the word-size of the machine, the mod-M coiling happens automatically as
the bits are shifted out and lost. Sign-extension is a way of 'recovering'
the bits by shifting them in. See why it works so well?

Look at the mess the Sun-3 C compiler generates, in order to come up with
the conventional, but mathematically incorrect, answer to i/2:

			| Annotated for those lucky enough not to know already...
	tstl d0		| Perform various tests on it.
	jge L2000000	| Jump if i is greater than or equal to zero.
	addql #1,d0	| No? Add one to it. (What a bother.)
L2000000:
	asrl #1,d0	| Now, we can do the shift.

There's that ubiquitous test for non-negative, staring us in the face.
Smug little bugger.
Somebody recently said that this situation we are so often stuck with is an artifact of the system FORTRAN was first implemented on, which used sign-bit coded integers. The division routine (incorrectly) did arithmetic on the low bits and then negated the result if the sign bits of the operands differed. And the rest is histrionics.
rpw3@rigden.wpd.sgi.com (Rob Warnock) (03/09/90)
In article <1710@tws8.cs.tcd.ie> emcmanus@cs.tcd.ie (Eamonn McManus) writes:
+---------------
| ...If a programming language supported
| this definition of negative division, its compilers could optimize
| divisions by powers of two into shifts without having to know whether the
| quantity being divided was negative. On many (most?) processors shifts are
| faster than divisions even by powers of two. [Unsupported assertion.]
+---------------

This keeps coming up (comp.arch.topics.perennial anyone?). (The first time
I ran into it was in the early 1970's when the PDP-10 Fortran compiler got
the obvious bug fixed.) These days, all good compilers know how to optimize
this anyway, with a pre- or post-shift correction step that makes this work
"right" even for negative dividends. In C, with post-shift correction:

	#define DIVISOR	(1 << SHIFT)
	#define MASK	(DIVISOR - 1)

	result = (dividend >> SHIFT);
	if (dividend < 0 && (dividend & MASK) != 0)
		result += 1;

or with pre-shift correction:

	if (dividend < 0)
		dividend += MASK;
	result = (dividend >> SHIFT);

The really cute trick is that for most machines this can be done in just a
few extra cycles *without* a branch (and therefore without a pipeline break)
by computing "MASK" from the dividend's sign bit. For example, on the
Am29000, to divide register "a0" by a power-of-2 constant giving "v0" costs
four cycles total (most other architectures have similar short sequences):

	sra	v0, a0, SHIFT - 1	; replicate the sign bit by the shift-width
	srl	v0, v0, 32 - SHIFT	; right-justify: v0 = (a0 < 0)? MASK : 0
	add	v0, v0, a0		; correct the dividend
	sra	v0, v0, SHIFT		; do the division

-Rob
-----
Rob Warnock, MS-9U/510		rpw3@sgi.com		rpw3@pei.com
Silicon Graphics, Inc.		(415)335-1673		Protocol Engines, Inc.
2011 N. Shoreline Blvd.
Mountain View, CA  94039-7311
toma@tekgvs.LABS.TEK.COM (Tom Almy) (03/10/90)
In article <1710@tws8.cs.tcd.ie> emcmanus@cs.tcd.ie (Eamonn McManus) writes:
>db@lfcs.ed.ac.uk (Dave Berry) writes:
>>1. Is the "normal" method for calculating integer division and modulus
>>   intrinsically faster than the mathematically expected definition (that
>>   rounds towards minus infinity), or is it just convention that it's
>>   implemented this way?
>An interesting point about rounding to minus infinity is that it allows
>(sign-extending) right shifts to be used for integer division by powers of
>two, when using twos-complement representation.
[example deleted]
>If a programming language supported
>this definition of negative division, its compilers could optimize
>divisions by powers of two into shifts without having to know whether the
>quantity being divided was negative.

Well, Forth supports this type of division. It does so because in control
applications you don't want the discontinuity around zero that you get with
truncating division. Yes, that does make it possible to divide by powers of
two with simple shift instructions. But now division is messed up because
hardware divide operations truncate rather than floor. So you get your
choice -- fixup code for divisions by powers of two, or fixup code for
divisions by signed variables or negative constants.

BTW there is considerable support in the Forth community to supply both
styles of division. Smalltalk already supplies both: // and \\ for floored
integer division and remainder, and quo: and rem: for truncating operations.
Other languages don't seem so generous.

Tom Almy
toma@tekgvs.labs.tek.com
Standard Disclaimers Apply