[net.arch] Representations of Real Numbers

edwards@houxa.UUCP (D.LEWAN) (02/06/86)

       I am currently considering the real numbers, some of their
       constructive aspects and specific uses, and different
       encodings of them for digital computers.  This article is a
       request for encodings that may not be well known (but are
       indeed useful), as well as for particular uses of those
       encodings.  Such a use is probably one for which

         1.  that encoding is convenient and

         2.  another encoding would be difficult to use.

       A few philosophical notes should be made before I go too
       far.

       The act of encoding is that of assigning, to a *meaningful*
       object, a name from which the object (or its meaning,
       anyway) can be recreated.  For example, to the object
       "fifteen" we may assign the name "15".  Now, from "15" we
       can reconstruct "fifteen" by interpreting it as "10 + 5",
       and the objects named by "10", "+" and "5" are all well
       defined and understood.  I cannot name all the real numbers
       since there are only a countable number of names available,
       and there are an uncountable number of reals.  But I can
       select subsets of the reals tailored to my particular needs
       and for some of those subsets I can find worthwhile
       encodings.
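
       As a trivial illustration of that reconstruction step, the
       little C function below (mine, not part of any proposal
       here) rebuilds the integer named by a decimal digit string
       in exactly that "10 + 5" fashion:

       /* Reconstruct the integer named by a string of decimal
          digits, e.g. "15" -> 1*10 + 5 -> fifteen. */
       long decode_decimal(const char *name)
       {
           long value = 0;

           while (*name >= '0' && *name <= '9')
               value = value * 10 + (*name++ - '0');
           return value;
       }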

       The standard exponent/mantissa approach to real number
       representation is a style that in some sense has the best
       and worst of all worlds.  Its precision is generally good
       enough for most arithmetic applications, but absolute
       precision wanes for large numbers, where the type integer
       might in fact be more precise.  It also doesn't guarantee,
       for example, that n-th roots, when raised to the n-th power,
       yield the original numbers, i.e. that

                            (q^(1/n))^n  =  q

       in all cases.  (Imagine moving your PC's best approximation
       of the square root of 2 to a CRAY II and then performing
       arithmetic.)
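
       To make that concrete, here is a small test in C.  The
       choice of q = 2 and n = 3 is arbitrary, and whether the
       final equality holds depends on your machine and math
       library; the point is only that nothing in the
       representation guarantees it.

       #include <math.h>
       #include <stdio.h>

       int main(void)
       {
           double q = 2.0;
           int    n = 3;

           double root = pow(q, 1.0 / n);  /* machine's best n-th root  */
           double back = pow(root, n);     /* raise it back to the n-th */

           /* back need not equal q exactly; when it differs, it is
              usually only in the last bit or two. */
           printf("q = %.17g  (q^(1/n))^n = %.17g  equal: %d\n",
                  q, back, back == q);
           return 0;
       }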

       There are representations that do preserve such algebraic
       properties; unfortunately, they lose meaningful precision.
       If one only knows that the square of a certain number always
       yields 2, one does not necessarily know whether it is
       greater than or less than 0.  (High school math conventions
       do no good here.)
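
       One way to get such exactness is to store the number purely
       symbolically, say as a root of an integer polynomial.  The
       little C sketch below is only my illustration of the idea
       (the struct and its fields are invented here, not taken from
       any particular scheme): without the extra sign field, "a
       root of x^2 - 2" is ambiguous between the positive and
       negative square roots of 2, which is exactly the ambiguity
       described above.

       /* A real algebraic number, stored exactly as a root of an
          integer polynomial plus enough extra information to say
          which root is meant.  (Polynomials with several roots of
          the same sign would need an isolating interval instead.) */
       struct algebraic {
           int  degree;      /* degree of the defining polynomial   */
           long coeff[8];    /* coefficients, lowest degree first   */
           int  sign;        /* +1 or -1: which real root is meant  */
       };

       /* sqrt(2), i.e. "the positive root of x^2 - 2 = 0": */
       static const struct algebraic sqrt2 = { 2, { -2, 0, 1 }, +1 };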

       Now, I know that preservation of algebraic properties is
       just one of many properties that might be of interest to a
       particular application.

       I am sending this plea to the net to find a wealth of
       applications that might run better with less well known
       encodings for specific real numbers.

hofbauer@utcsri.UUCP (John Hofbauer) (02/09/86)

Mother of mercy, if you thought the debate on integer division
got out of hand, you ain't seen nothing yet! That is not the kind
of constructive comment the poster was looking for, but as a
Numerical Analyst I'm not going to touch the representation of
real numbers with a ten-foot pole. Sorry.

aglew@ccvaxa.UUCP (02/12/86)

I saw a paper in the not-too-distant past about a real number representation
that had three fields (four if you count the sign): fraction, exponent, and 
meta-exponent. I am not sure that I remember exactly what the meta-exponent 
was - it may have been something like the number of tens in 10**10**10**...
exponent - but it was used to greatly increase range. I believe that you
could express the loss in precision of large numbers as a fraction of the log
of the number?

Anybody remember this paper? Unfortunately, I have changed cities, and I 
remember where to find this paper only by the position in the stacks.

herndon@umn-cs.UUCP (02/13/86)

  There've been some interesting representations suggested
for "real" numbers (I shouldn't say floating point, but they
aren't real either).  One article which appeared in the Journal
of the ACM a while back suggested a very odd scheme involving
a mantissa and another field which was (if I recall aright)
the number of times e should be raised to itself before
multiplying by the mantissa.  Very useful for LARGE(!) and
SMALL(!) numbers (we're talking numbers you couldn't even write
down in scientific notation, they have so many zeroes).  The
article mentioned lots of useful properties of these things,
but computational efficiency wasn't one of them.  It appeared
a year or two ago.
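
  If I've reconstructed the scheme correctly (and this is only a
guess from memory, not the paper's actual definition), decoding
such a number would go roughly like the C fragment below.  An
ordinary double overflows after only a few levels, which is
exactly why these numbers can't be written in scientific notation;
a real implementation obviously could not build the tower this way.

    #include <math.h>

    /* value = mantissa * (e ^ e ^ ... ^ e), 'level' exponentials
       deep.  Illustration only: the double holding the tower
       overflows once level reaches 4. */
    double decode_level(double mantissa, int level)
    {
        double tower = 1.0;            /* zero levels: multiply by 1 */

        for (int i = 0; i < level; i++)
            tower = exp(tower);        /* e, e^e, e^(e^e), ...       */
        return mantissa * tower;
    }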
  There was an article in byte a while back suggesting an alternate
representation for floating point numbers which might be useful
on a micro... unfortunately I've forgotten all the details.  I
think the idea was to keep the logarithm of the number, and then
do multiplies and divisions by adding and subtracting... mildly
interesting, anyway.
  I heard something interesting from one of the hardware designers
where I used to work (Phoenix, AZ) about an FP representation under
consideration for large arrays of processing elements.  I seem to
recall that it involved non-unique representations for numbers,
allowing arithmetic operations to be performed much faster, since
the representation put a finite bound on the number of carries that
ever had to be propagated (something like Gray codes for FP numbers).
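
  That description sounds like redundant signed-digit arithmetic,
so purely as an illustration of how redundancy bounds carry
propagation (this is my own sketch, not necessarily what that
design actually was), here is a radix-2 signed-digit adder in C in
which a carry never moves more than one position:

    #include <stdio.h>

    #define N 8                  /* digit positions, arbitrary here */

    /* Add two radix-2 signed-digit numbers (digits -1, 0, +1, least
       significant digit first).  The transfer out of position i
       depends only on positions i and i-1, so a carry never travels
       more than one place, no matter how long the numbers are. */
    void sd_add(const int a[N], const int b[N], int z[N + 1])
    {
        int w[N], t[N + 1];

        t[0] = 0;
        for (int i = 0; i < N; i++) {
            int p  = a[i] + b[i];                       /* -2 .. +2 */
            int pl = (i > 0) ? a[i - 1] + b[i - 1] : 0;

            if      (p ==  2) { t[i + 1] =  1; w[i] = 0; }
            else if (p == -2) { t[i + 1] = -1; w[i] = 0; }
            else if (p ==  1) {
                if (pl >= 0) { t[i + 1] = 1;  w[i] = -1; }
                else         { t[i + 1] = 0;  w[i] =  1; }
            }
            else if (p == -1) {
                if (pl >= 0) { t[i + 1] =  0; w[i] = -1; }
                else         { t[i + 1] = -1; w[i] =  1; }
            }
            else              { t[i + 1] = 0; w[i] = 0; }
        }
        for (int i = 0; i < N; i++)
            z[i] = w[i] + t[i];        /* stays within -1, 0, +1 */
        z[N] = t[N];
    }

    long value(const int d[], int n)   /* conventional value, to check */
    {
        long v = 0;
        for (int i = n - 1; i >= 0; i--)
            v = 2 * v + d[i];
        return v;
    }

    int main(void)
    {
        int a[N] = { 1, -1, 0,  1, 0, 0, 1, 0 };   /* 71 */
        int b[N] = { 1,  1, 1, -1, 0, 1, 0, 0 };   /* 31 */
        int z[N + 1];

        sd_add(a, b, z);
        printf("%ld + %ld = %ld\n", value(a, N), value(b, N), value(z, N + 1));
        return 0;
    }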
  Sorry I don't have more details, but maybe these notes will
prompt somebody to mention more about them.

ken@rochester.UUCP (Ipse dixit) (02/16/86)

In article <17200002@umn-cs.UUCP> herndon@umn-cs.UUCP writes:
>  There was an article in byte a while back suggesting an alternate
>representation for floating point numbers which might be useful
>on a micro... unfortunately I've forgotten all the details.  I
>think the idea was to keep the logarithm of the number, and then
>do multiplies and divisions by adding and subtracting... mildly
>interesting, anyway.

In 1979, August I think, an article appeared in CACM describing the
above scheme.  It was called FOCUS.  The rationale was that data
from real-world sensors (e.g. light and sound) follow logarithmic
scales, so there were advantages to being able to multiply and
divide quickly on cheap micros, and the lack of a true zero didn't
matter much, the smallest representable number being essentially
below thermal noise levels.
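
The basic trick is tiny.  Here it is in C, with doubles standing in
for whatever short fixed-point log field FOCUS actually used (I
don't have the article in front of me), and with addition, which
needs a table or an approximation in any logarithmic system, left
out:

    #include <math.h>
    #include <stdio.h>

    /* Keep log2(x) instead of x itself; multiplication and division
       of the underlying values then become addition and subtraction
       of the stored logs.  Zero has no finite log, hence the "lack
       of a true zero" mentioned above. */
    typedef double lognum;               /* stored value is log2(x) */

    lognum ln_enc(double x)           { return log2(x); }
    double ln_dec(lognum l)           { return exp2(l); }
    lognum ln_mul(lognum a, lognum b) { return a + b; }   /* x*y */
    lognum ln_div(lognum a, lognum b) { return a - b; }   /* x/y */

    int main(void)
    {
        lognum a = ln_enc(12.5), b = ln_enc(3.0);

        printf("12.5 * 3.0 = %g\n", ln_dec(ln_mul(a, b)));
        printf("12.5 / 3.0 = %g\n", ln_dec(ln_div(a, b)));
        return 0;
    }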

I am curious to know if anybody actually used this scheme.

	Ken
-- 
UUCP: ..!{allegra,decvax,seismo}!rochester!ken ARPA: ken@rochester.arpa
Snail: Comp. of Disp. Sci., U. of Roch., NY 14627. Voice: Ken!