[comp.arch] Modular arithmetic, was Re: Trachtenberg System of Math

johnl@ima.ima.isc.com (John R. Levine) (10/28/88)

In article <16310@prls.UUCP> gert@prls.UUCP (Gert Slavenburg) writes:
>The technique I am referring to is called 'Residue Arithmetic' or 'Chinese
>Remaindering'. ...

Modular arithmetic is discussed at some length in Knuth's second volume,
"Seminumerical Algorithms" in sections 4.3.2 and 4.3.3.  He points out that
it can be a convenient way to represent large integers, particularly in
problems that involve exact rational answers to sets of linear equations.
Since you add, subtract, and multiply numbers by adding, subtracting, or
multiplying the sets of remainders component-wise without any sort of carry,
you can solve the equations independently for each component, then (slowly)
convert the components back into the large integers.  This is obviously
useful for parallel processors, and also handy for conventional machines
since the entries in the component arrays can be moderate-sized integers
which are easier to manipulate in languages like Fortran than are multiple-
precision integers.
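
For what it's worth, here is a small sketch in C (my own illustration,
not from Knuth; the moduli and test values are made up) of the
component-wise arithmetic:

    /* Represent an integer by its remainders modulo a few pairwise
     * relatively prime moduli; add, subtract, or multiply by operating
     * on each component independently -- no carries between components. */
    #include <stdio.h>

    #define K 3
    static const int m[K] = { 99, 98, 97 };   /* pairwise relatively prime */

    typedef struct { int r[K]; } residue;

    residue to_residue(long x)
    { residue a; int i;
      for (i = 0; i < K; i++) a.r[i] = (int)(x % m[i]);
      return a; }

    residue mul(residue a, residue b)
    { residue c; int i;
      for (i = 0; i < K; i++) c.r[i] = a.r[i] * b.r[i] % m[i];
      return c; }

    int main(void)
    { residue c = mul(to_residue(1234), to_residue(567));
      int i;                                  /* 1234 * 567 = 699678 */
      for (i = 0; i < K; i++)
          printf("component %d: %d (check: %ld)\n", i, c.r[i], 699678L % m[i]);
      return 0; }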

As noted elsewhere, the idea of modular arithmetic has been known for
centuries, and was proposed for computers in 1955.  Apparently there has
been a lot of discussion of hardware implementation but I've never seen
anything.  Reports from hardware designers are invited.
-- 
John R. Levine, IECC, PO Box 349, Cambridge MA 02238-0349, +1 617 492 3869
{ bbn | think | decvax | harvard | yale }!ima!johnl, Levine@YALE.something
Rome fell, Babylon fell, Scarsdale will have its turn.  -G. B. Shaw

shankar@src.honeywell.COM (Subash Shankar) (10/29/88)

In article <2817@ima.ima.isc.com> johnl@ima.UUCP (John R. Levine) writes:
>In article <16310@prls.UUCP> gert@prls.UUCP (Gert Slavenburg) writes:
>>The technique I am referring to is called 'Residue Arithmetic' or 'Chinese
>>Remaindering'. ...
>As noted elsewhere, the idea of modular arithmetic has been known for
>centuries, and was proposed for computers in 1955.  Apparently there has
>been a lot of discussion of hardware implementation but I've never seen
>anything.  Reports from hardware designers are invited.

Residue arithmetic representations store a number as its remainders
modulo a set of predefined, pairwise relatively prime moduli.

I thought one of the problems with residue arithmetic is that overflow
detection isn't possible.  For example, assume you use moduli 2, 3, and
5, so the representable range is 0..29.
If x1=4, x2=19, y=2, they would be represented as
  x1   (0,1,4)
  x2   (1,1,4)
  y    (0,2,2)
Multiplying x1 and y results in (0,2,3) = 8, which is correct.
Multiplying x2 and y also results in (0,2,3) = 8, but the true product
is 8 + 2*3*5 = 38, so this should be an overflow.
Overflow detection is difficult (impossible?) since you can't tell the
magnitude of a number by observing any of its "digits".  Anybody out there
know if this is correct, or has this problem been solved?
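
The wrap-around is easy to show mechanically.  A small sketch in C
(mine, decoding by brute force just for illustration):

    /* The example above with moduli 2, 3, 5.  The range is 2*3*5 = 30,
     * so any result >= 30 silently wraps around, and nothing in the
     * residue digits flags that it happened. */
    #include <stdio.h>

    static const int m[3] = { 2, 3, 5 };

    /* Chinese-remainder reconstruction by exhaustive search; fine for
     * a range of 30, useless for realistic moduli. */
    int decode(const int r[3])
    { int x;
      for (x = 0; x < 30; x++)
          if (x % 2 == r[0] && x % 3 == r[1] && x % 5 == r[2]) return x;
      return -1; }

    int main(void)
    { int x1 = 4, x2 = 19, y = 2, i, p1[3], p2[3];
      for (i = 0; i < 3; i++) {
          p1[i] = (x1 % m[i]) * (y % m[i]) % m[i];  /* residues of x1*y */
          p2[i] = (x2 % m[i]) * (y % m[i]) % m[i];  /* residues of x2*y */
      }
      printf("x1*y decodes to %d (true value %d)\n", decode(p1), x1 * y);
      printf("x2*y decodes to %d (true value %d)\n", decode(p2), x2 * y);
      return 0; }

Both products come out as (0,2,3); the second one is really 38, but it
decodes to 8 and nothing in the representation says so.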

webber@athos.rutgers.edu (Bob Webber) (10/30/88)

In article <11010@srcsip.UUCP>, shankar@src.honeywell.COM (Subash Shankar) writes:
> In article <2817@ima.ima.isc.com> johnl@ima.UUCP (John R. Levine) writes:
> >As noted elsewhere, the idea of modular arithmetic has been known for
> >centuries, and was proposed for computers in 1955.  Apparently there has
> >been a lot of discussion of hardware implementation but I've never seen
> >anything.  Reports from hardware designers are invited.
> ...
> I thought one of the problems with residue arithmetic is that overflow
> detection isn't possible.  ...
> Overflow detection is difficult (impossible?) since you can't tell the
> magnitude of a number by observing any of its "digits".  Anybody out there
> know if this is correct, or has this problem been solved?

There is a nice book on residue arithmetic by Szabo and Tanaka from the
1960's.  Multiplication and addition work real sweet.  Division
gets tricky.  Comparison and sign determination seem to essentially
require a pseudo-conversion to a positional (mixed-radix) form, which
isn't pretty.  Since you can't test for overflow, you need to choose a
rather large range (big moduli) if you have a lot of arithmetic to do
(e.g., matrix multiplications).  If you have to do comparisons or sign
tests on these numbers frequently, as far as I know you are going to
have real problems.  I haven't heard of anything theoretical that
improves on Szabo and Tanaka's results.  However, from the hardware
point of view, people have discussed this at some of the Optical
Computing conferences.
Somewhere I have a copy of a paper that shows a rather nice scheme for
binary encoding of residue numbers (most authors tend to think of them
as a sequence of integers).  The encoding is recursive, representing
values in one modulus as a sequence of values in smaller moduli.  The
``standard'' way to use residue arithmetic on ``normal'' architectures
seems to be to pick a handful of relatively prime moduli that are
near the machine's word size, so that you can take maximum advantage of
the built-in arithmetic hardware while avoiding carry
propagation between the words.  Should work nicely on VLIW architectures
too [just to tie in with a separate stream in this group :-) ].
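
For the comparison/sign problem, the conversion I have in mind goes from
residues to mixed-radix digits, which *are* positional and can be compared
most-significant digit first.  A rough sketch in C (mine, loosely after
Szabo and Tanaka, with tiny moduli and brute-force modular inverses just
to show the shape of it):

    /* Convert residues to mixed-radix digits a[0..K-1], where
     *   x = a[0] + a[1]*m[0] + a[2]*m[0]*m[1] + ...,  0 <= a[i] < m[i].
     * Two numbers can then be compared by scanning digits from a[K-1]
     * down to a[0]. */
    #include <stdio.h>

    #define K 3
    static const int m[K] = { 2, 3, 5 };

    /* brute-force modular inverse -- fine for a sketch with tiny moduli */
    int inv(int a, int mod)
    { int t;
      for (t = 1; t < mod; t++) if (a * t % mod == 1) return t;
      return 0; }

    void to_mixed_radix(const int r_in[K], int a[K])
    { int r[K], i, j, d;
      for (i = 0; i < K; i++) r[i] = r_in[i];
      for (i = 0; i < K; i++) {
          a[i] = r[i];
          for (j = i + 1; j < K; j++) {
              d = (r[j] - a[i]) % m[j];
              if (d < 0) d += m[j];
              r[j] = d * inv(m[i] % m[j], m[j]) % m[j];
          }
      }
    }

    int main(void)
    { int r19[K] = { 1, 1, 4 };   /* residues of 19 mod (2,3,5) */
      int r4[K]  = { 0, 1, 4 };   /* residues of  4 mod (2,3,5) */
      int a[K], i;
      to_mixed_radix(r19, a);     /* expect 1 0 3:  19 = 1 + 0*2 + 3*6 */
      for (i = 0; i < K; i++) printf("%d ", a[i]);
      printf(" <- digits of 19\n");
      to_mixed_radix(r4, a);      /* expect 0 2 0:   4 = 0 + 2*2 + 0*6 */
      for (i = 0; i < K; i++) printf("%d ", a[i]);
      printf(" <- digits of 4\n");
      return 0; }

The conversion is sequential in the moduli, which is exactly why it is
the expensive step compared with the carry-free add and multiply.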

--- BOB (webber@athos.rutgers.edu ; rutgers!athos.rutgers.edu!webber)