[net.micro] Arctan approximation for generating pi

milne@icse.uci.edu (Alastair Milne) (12/28/85)

   I must agree with the observations about the speed of the power series
   suggested so far.  Two key points in my understanding of numeric
   approximation:

   1.  For numeric approximations which people are actually going to have to
       use in their work, avoid power series if possible.  Deadly slow, even
       if, as some will, they converge sufficiently within 5 terms or fewer.

   2.  Don't raise values to exponents unless it's unavoidable.  The
       implementations used in most languages use EXP and LN, which are
       themselves approximations, resulting in representation error,
       round-off error, and amplification of these in the
       multiplications which produce the final result.  Wirth knew what
       he was doing when he refused to put exponentiation into Pascal.
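
   To make point 2 concrete: when the exponent really is just a small
   integer, plain repeated multiplication avoids the EXP/LN round trip
   altogether.  A minimal sketch in Pascal (the name IntPower is mine,
   not any particular library's):

      (* Raise x to a non-negative integer power by repeated
         multiplication, avoiding the exp(n*ln(x)) round trip and the
         approximation error it carries. *)
      function IntPower(x: real; n: integer): real;
      var
         i: integer;
         p: real;
      begin
         p := 1.0;
         for i := 1 to n do
            p := p * x;
         IntPower := p
      end;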


   The best way I know of to get around both of these is to use
   continued fractions.  And most fortunately, there is a very nice
   continued fraction form for arctan which requires perhaps 3 floating
   point divisions, and 2 floating-point additions.  Furthermore, it
   gives a very pleasant number of significant digits.  Incredibly few
   operations for a very good result.  Naturally, I am describing it
   like this because I can't at the moment recall its exact form
   (sorry!).  It is among some of my old numeric analysis notes.  When
   (and if) I find it, I'll send it in.  I think the people who need
   this will be quite impressed with it.  I certainly was when it was
   first shown to me, by a teacher who made sure we first endured
   coding power series evaluations.
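
   In the meantime, for concreteness, one standard continued-fraction form
   for arctan (the Gauss/Lambert fraction; whether it is the exact form
   from my old notes is another question) is

      arctan(x) = x / (1 + x^2/(3 + 4x^2/(5 + 9x^2/(7 + ...))))

   where the k-th partial numerator is (k*x)^2 and the partial denominators
   are the odd integers.  Evaluated bottom up at a fixed depth, a minimal
   Pascal sketch (name and fixed-depth treatment are my own choices) might
   look like this:

      (* Bottom-up evaluation of the Gauss/Lambert continued fraction
         for arctan, truncated at a fixed depth.  For small x, such as
         1/5 or 1/239, a handful of levels is already very accurate. *)
      function CFArctan(x: real; depth: integer): real;
      var
         n: integer;
         acc, xsq: real;
      begin
         xsq := x * x;
         acc := 2 * depth + 1;     (* innermost truncated denominator *)
         for n := depth downto 1 do
            acc := (2 * n - 1) + (n * n * xsq) / acc;
         CFArctan := x / acc
      end;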


   Alastair Milne

   PS.  Like Sheldon Meth, I am very intrigued by the apparently special
	relationship between atan(1/5) and atan(1/239).  But is it really true
	that pi, a transcendental number, is EQUAL to a linear combination of
	these two, or does it only lie within an acceptable tolerance?

RCONN@simtel20.arpa (Rick Conn) (12/28/85)

For those who are interested, the math libraries (MLIB.SRC and MLIB2.SRC)
in the Ada repository (see pd:<ADA.MATH>) contain many routines, including
SIN, COS, EXP, LOG, etc, which are implemented using continued fractions
rather than power series.  I have run some brief tests of these routines
(using a DEC Ada compiler) against the DEC standard math library under VMS,
and the results agreed completely in over 99% of my tests; the remaining
tests differed only by a few hundred-thousandths.  Since most of the Ada
code is simple calculation, you can probably read the code easily whether
you know Ada or not.

	Rick
-------

hofbauer@utcsri.UUCP (John Hofbauer) (12/29/85)

>    1.  For numeric approximations which people are actually going to have to
>        use in their work, avoid power series if possible.  Deadly slow, even
>        if, as some will, they converge sufficiently within 5 terms or fewer.
> 
>    The best way I know of to get around both of these is to use
>    continued fractions.  And most fortunately, there is a very nice
>    continued fraction form for arctan which requires perhaps 3 floating
>    point divisions, and 2 floating-point additions.  Furthermore, it
>    gives a very pleasant number of significant digits.  Incredibly few
>    operations for a very good result.

Of course there are many better types of approximations than power series:
continued fractions (as mentioned), rational functions, Chebyshev series,
minimax, to name a few. But which one do you choose? It depends on what
your objectives are. Most frequently you want to compute an approximation
to some desired accuracy. This is the case for builtin functions where you
want sin, for example, to single or double precision. Since you know what
the desired accuracy is at the time you construct the algorithm you usually
pick the type of approximation which minimizes the number of operations
and best preserves numerical stability. It's fascinating to look at the
construction of the builtin function library which comes with every compiler.
The variety of approximations used is astounding; the double-precision version
of the exponential, say, may use an entirely different approximation from the
single-precision version simply to save one add or multiply. The problem
is that if the accuracy requirement is changed a new approximation must
be derived from scratch. So, if you want to play around with
generating pi to various numbers of digits you've got problems,
since unlike with power series it is usually not possible
to generate the next term in the approximation dynamically. Therefore
the power series approximation is not all that bad provided you expand
about a point very close to zero, as some people suggested.
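
For playing around in exactly that way, the arctan series can be summed to a
requested tolerance, generating each term from the previous one.  A sketch of
the general idea in Pascal (a toy routine of my own, not anything out of a
real library):

   (* Sum arctan(x) = x - x^3/3 + x^5/5 - ... until the last term added
      is below a requested tolerance.  Each term is built from the one
      before it by multiplying by -x*x, so no exponentiation is needed;
      for small x (1/5, 1/239) the loop exits after a few iterations. *)
   function SeriesArctan(x, tol: real): real;
   var
      sum, power, term: real;
      k: integer;
   begin
      sum := x;
      power := x;                  (* signed odd power x^(2k-1) *)
      k := 1;
      repeat
         power := -power * x * x;  (* next odd power, alternating sign *)
         term := power / (2 * k + 1);
         sum := sum + term;
         k := k + 1
      until abs(term) < tol;
      SeriesArctan := sum
   end;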

>    2.  Don't raise values to exponents unless it's unavoidable.  The
>        implementations used in most languages use EXP and LN, which are
>        themselves approximations, resulting in representation error,
>        round-off error, and amplification of these in the
>        multiplications which produce the final result.  Wirth knew what

It is true that a**x will be computed as exp(x*ln a) for real x [the well-known
formula], but a**n will usually be implemented as a*a*...*a (n-1 multiplies)
for integer n; at least that's how IBM Fortran compilers do it. But then
in generating the next term in a power series the previous term is
multiplied by x and, usually, divided by some constant, so the exponentiation
problem should never arise in practice. Incidentally, I once heard a
great horror story about how a**x = exp(x*ln a) almost sank an engineering
Ph.D. thesis here at UofT. It happened nearly 20 years ago, before the
builtin functions we all rely on got as good as they are now.

>        Wirth knew what he was doing when he refused to put exponentiation
>	into Pascal.

Wirth probably didn't include exponentiation in Pascal because of
the weird arithmetic on the CDC6600 computer, which was the machine
on which Pascal was first implemented. This is also why Pascal has
packed arrays. The 6600 was a number cruncher and so did not have
byte addressing. Wirth published a paper in Software: Practice and
Experience in 1971 which goes into the problems of implementing
Pascal on the 6600, among other things. It makes your hair stand on
end. Talk about lousy architecture! Under certain circumstances you
could multiply a number by one and change the last few bits!

>    PS.  Like Sheldon Meth, I am very intrigued by the apparently special
> 	relationship between atan(1/5) and atan(1/239).  But is it really true
> 	that pi, a transcendental number, is EQUAL to a linear combination of
> 	these two, or does it only lie within an acceptable tolerance?

What's so unusual about a transcendental number being the sum of two other
transcendental numbers?
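
For the record, the relation in question, Machin's formula of 1706, is an
exact identity and not just agreement to within some tolerance: since
(5+i)^4 = 476+480i = (2+2i)(239+i), comparing arguments (all the angles
involved lie in the first quadrant) gives 4*atan(1/5) = pi/4 + atan(1/239),
i.e. pi = 16*atan(1/5) - 4*atan(1/239).  A quick numerical check against the
builtin arctan, in Pascal purely as an illustration:

   (* Compare Machin's formula against the builtin arctan; the two
      values should agree to within the machine's floating-point
      precision. *)
   program CheckMachin(output);
   var
      machin, reference: real;
   begin
      machin := 16.0 * arctan(1.0/5.0) - 4.0 * arctan(1.0/239.0);
      reference := 4.0 * arctan(1.0);
      writeln('machin    = ', machin);
      writeln('4*atan(1) = ', reference);
      writeln('diff      = ', machin - reference)
   end.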

milne@icse.uci.edu (Alastair Milne) (12/31/85)

>  Of course there are many better types of approximations than power series:
>  ....
>  about a point very close to zero, as some people suggested.

    I quite agree.  I did not, however, want to clog the net with discussions
    of numeric analysis considerations: it is clogged enough already
    with other things.  The question at hand was how to approximate
    arctan, and the suggestions I had seen so far, though correct,
    would have been costly both to their implementors and their users.
    I simply wanted to point out why, and suggest the best alternative
    I know of, specifically for arctan.  Certainly you could (for
    instance) apply Chebyshev economisation to the power series, and
    with x as small as 1/5 and 1/239, the economised series should
    converge very rapidly.  But it still won't compete with the
    continued fraction.  (Notice that I didn't even mention Horner's
    method.)
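
    For completeness, Horner's method simply evaluates whatever truncated
    polynomial you end up with, economised or not, at one multiply and one
    add per coefficient.  A sketch in Pascal (the names are mine), with the
    coefficients left to the caller, plain Taylor or Chebyshev-economised:

       const
          MaxCoef = 10;
       type
          CoefArray = array[1..MaxCoef] of real;

       (* Evaluate the odd polynomial
            c[1]*x + c[2]*x^3 + ... + c[m]*x^(2m-1)
          by Horner's rule in x*x: one multiply and one add per
          coefficient, and no exponentiation anywhere. *)
       function HornerOdd(var c: CoefArray; m: integer; x: real): real;
       var
          i: integer;
          xsq, acc: real;
       begin
          xsq := x * x;
          acc := c[m];
          for i := m - 1 downto 1 do
             acc := acc * xsq + c[i];
          HornerOdd := acc * x
       end;

    Even so, for the same accuracy the continued fraction gets there in
    fewer operations.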

    I grant I hadn't thought about dynamically changing accuracy, which
    is one advantage that a plain power series does have.  I don't
    recall just now whether that can be done in any way with continued 
    fractions.
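
    On reflection, it can be done: evaluate the fraction forward, through
    the standard recurrence for its convergents,

       h(n) = b(n)*h(n-1) + a(n)*h(n-2),    k(n) = b(n)*k(n-1) + a(n)*k(n-2),

    and another level can be added whenever more accuracy is wanted, much
    as with a power series.  A sketch in Pascal (again, names are my own),
    stopping when successive convergents agree to a requested tolerance;
    the rescaling of h and k needed to avoid overflow on very long runs
    is left out:

       (* Forward evaluation of the Gauss/Lambert continued fraction for
          arctan via the convergent recurrence, so that levels can be
          added dynamically until the requested tolerance is met. *)
       function CFArctanDyn(x, tol: real): real;
       var
          hPrev, hCurr, kPrev, kCurr, hNext, kNext: real;
          a, b, xsq, cPrev, cCurr: real;
          n: integer;
       begin
          xsq := x * x;
          hPrev := x;        kPrev := 1.0;       cPrev := x;   (* x/1 *)
          hCurr := 3.0 * x;  kCurr := 3.0 + xsq;               (* 3x/(3+x^2) *)
          cCurr := hCurr / kCurr;
          n := 3;
          while abs(cCurr - cPrev) >= tol do
          begin
             b := 2 * n - 1;                 (* denominators 5, 7, 9, ... *)
             a := (n - 1) * (n - 1) * xsq;   (* numerators (2x)^2, (3x)^2, ... *)
             hNext := b * hCurr + a * hPrev;
             kNext := b * kCurr + a * kPrev;
             hPrev := hCurr;  kPrev := kCurr;
             hCurr := hNext;  kCurr := kNext;
             cPrev := cCurr;  cCurr := hCurr / kCurr;
             n := n + 1
          end;
          CFArctanDyn := cCurr
       end;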

>  It is true that a**x will be computed as exp(x*ln a) for real x [the well-known
>  formula], but a**n will usually be implemented as a*a*...*a (n-1 multiplies)
>  for integer n; at least that's how IBM Fortran compilers do it. But then
>  in generating the next term in a power series the previous term is
>  multiplied by x and, usually, divided by some constant, so the exponentiation
>  problem should never arise in practice. Incidentally, I once heard a
>  great horror story about how a**x = exp(x*ln a) almost sank an engineering
>  Ph.D. thesis here at UofT. It happened nearly 20 years ago, before the
>  builtin functions we all rely on got as good as they are now.

     I'm not sure whether the practices of IBM Fortran compilers
     constitute recommendations.  Many would say just the opposite.
     Either way, I'm not sure increasingly long series of
     floating-point multiplications are much to be preferred: greater
     accuracy, but still expensive to compute.  And, though the next
     term of a power series SHOULD be generated the way you say, the
     average programmer might not implement it that way on seeing its
     arithmetic description.  Furthermore, I'm sure none of us wants to
     encourage programming techniques based on knowledge of how our
     favourite compiler implements feature x.

     Horror stories in numeric analysis!  Yes, I imagine we could start a
     whole new distribution list for them alone, if it hasn't actually been
     done already.  There was a 100x100 sparse matrix that had to be inverted
     ... well, never mind.  Let's hope that we're generating them a little
     less frequently these days.  

>  Wirth probably didn't include exponentiation in Pascal because of
>  the weird arithmetic on the CDC6600 computer, ...
>  multiply a number by one and change the last few bits!

   Wirth's design considerations for Pascal were oriented more toward the
   human than the machine (though you have made it much clearer to me why
   the routines PACK and UNPACK were included).  Specifically,
   concerning exponentiation, Wirth said that it was misused in most of
   the places where it occurred, and that he was not going to include
   something in his new language which he was then going to have to
   teach his students not to use.  Teaching was, after all, his primary
   aim in creating Pascal.


Thanks for the reply,
Alastair Milne

hofbauer@utcsri.UUCP (John Hofbauer) (01/02/86)

> 
> >  Of course there are many better types of approximations than power series:
> >  ....
> >  about a point very close to zero, as some people suggested.
> 
>     I quite agree.  I did not, however, want to clog the net with discussions
>     of numeric analysis considerations:

I mildly disagree with you here. As a former numerical analyst I am constantly
amused and appalled by the naive use of formulas from mathematics texts.
George Forsythe wrote a delightful paper titled "Pitfalls in Computation or
Why a Math book isn't Enough" on this subject. It appeared in the American
Mathematical Monthly in November 1971. While I agree with you that we shouldn't
clog net.micro with discussions of numerical analysis, people who read it
do compute and so should be made aware of pitfalls they may inadvertently
fall into, and evaluating approximating formulas with floating-point numbers
is fraught with them.

>      I'm not sure whether the practices of IBM Fortran compilers
>      constitute recommendations. ...
>                   ...  Furthermore, I'm sure none of us wants to
>      encourage programming techniques based on knowledge of how our
>      favourite compiler implements feature x.
> 
Regrettable though it may be, it is wise to know as much as possible about what
compilers do to your programs, if for no other reason than self-preservation!
I cited the IBM Fortran compiler as an example of a good implementation
but that, of course, says nothing about other compilers.

> >  Wirth probably didn't include exponentiation in Pascal because of
> 
>    Wirth's design considerations for Pascal were oriented more toward the
>    human than the machine.

I'm now inclined to agree with you that the exclusion of exponentiation
from Pascal was not specifically due to funny arithmetic on the CDC6600
but rather to the unpredictable, and unsafe, way exponentiation may be
implemented in different compilers. At least this way the user must use
the appropriate formula explicitly; unfortunately it doesn't follow that
he will be any more aware of the pitfalls. On the other hand,
he will be forced to discover that formal exponentiation is often
unnecessary.