[net.arch] Architecture of Integer Remainder

jimc@ucla-cs.UUCP (02/24/86)

   When a negative integer is divided, what should be the sign of the
remainder?  Should -14/5 = -3 remainder +1, or -2 remainder -4?  The draft
standard for "C" language (X3J11/85-138 sect. C.3.5 line 434) states
that the sign is "implementation defined", as different hardware produces
different results, and to make all conform would be inefficient.  Many "C" 
users desire a specific standard, and debate has churned in the last few weeks
over what should be done.
   One person points out that if you sell 9 things for $10.00 the unit price
is $1.11, so if you buy 9 for $-10.00 the price should be $-1.11.  In
this case the remainder should have the sign of the dividend.  (Integers are
assumed, Cobol style.)
   Many have pointed out systems applications that demand a positive remainder.
Examples are converting a signed time offset to hours and minutes; a signed
byte offset in a file to bytes, sectors and tracks; a hash table when the key
is signed; a ring cache when jumps are long and bidirectional; and so on.
Range reduction of periodic functions is a related example in floating point.
And the positive remainder is clearly what the theory of rings demands.
   We have also been pointedly reminded that Fortran, Cobol, Basic and Ada
all make the remainder sign match the dividend, and against such forces we 
cannot expect any kind of rational choice to prevail.  
   The real issue is rather: should users, compilers or hardware produce the
required remainder sign?  Users are capable of doing (where % means remainder):
     r = a % b; if (r < 0) r = r + b;
or   t = a; if (t < 0) t = t - (b-1); q = t/b; r = a - b*q;
But users feel the irregular remainder sign is a blemish in a well-constructed
language and a significant error exposure if forgotten.  Compilers are capable
of providing the same code automatically, but compiler writers know that 90%
of all divisions are of positive quantities and they have to produce the code
for all divisions, a waste.  Hardware divide algorithms, at least those I know
of, cannot handle a negative dividend so that an explicit sign-correction step
is necessary, again slowing down the machine.
   "C" users: Do we have a consensus that there should be a specific standard,
and that we want the remainder sign positive -- if the efficiency issue can
be resolved?  Do users want a positive remainder even if it usually means
wasted code?  How do users of other languages feel?
   Compiler writers: Is it feasible to recognize when a dividend is in fact
always positive, so the extra code can be omitted?  If you declare the dividend
explicitly unsigned, on some machines you must produce bizarrely contorted
code to handle the N'th bit; is it feasible to recognize when these contortions
are unnecessary?
   Hardware engineers:  It's you who got us into this, so please help us get
out by, in the next generation of machines, handling division the way the
users want -- don't tell us to take our discussion elsewhere.  Is there any
major hardware that does not make the remainder sign match the dividend?  Is
there any efficient divide algorithm that works equally well on dividends of
either sign?  Is it a significant burden on hardware to adjust the remainder
when it is negative?
   To avoid clogging the newsgroups for another few weeks, please mail me your
comments and I will summarize them.  Closing date 3 March.  Thank you.

-- 
James F. Carter            (213) 206-1306
UCLA-SEASnet; 2567 Boelter Hall; 405 Hilgard Ave.; Los Angeles, CA 90024
UUCP:...!{ihnp4,ucbvax,{hao!cepu}}!ucla-cs!jimc  ARPA:jimc@locus.UCLA.EDU