mleech@bcarh342.bnr.ca (Marcus Leech) (08/02/90)
I recently received a complaint from a user about the CPU-hungriness of
the pow(3m) function in the UNIX C library.  I'm wondering about the
efficiency of "standard" implementations of this (and other) math
library functions for C.  Does pow(3m) conventionally use a logarithm
table, or is it (ugh!) iterative, or some B.O.T?
-----------------
Marcus Leech, 4Y11          Bell-Northern Research   |opinions expressed
mleech@bnr.ca               P.O. Box 3511, Stn. C    |are my own, and not
VE3MDL@VE3JF.ON.CAN.NA      Ottawa, ON, CANADA       |necessarily BNRs
gwyn@smoke.BRL.MIL (Doug Gwyn) (08/04/90)
In article <1990Aug2.120706.25713@bnrgate.bnr.ca> mleech@bcarh342.bnr.ca (Marcus Leech) writes:
>I'm wondering about the efficiency of "standard" implementations of this
> (and other) math library functions for C.  Does pow(3m) conventionally
> use a logarithm table, or is it (ugh!) iterative, or some B.O.T?

All common implementations of pow(x,y) basically perform exp(log(x)*y),
but they add some twists designed to avoid unnecessary overflow, or in
the case of 4.3BSD, to implement the IEEE FP funny numbers.  Thus there
is often a considerable amount of additional bookkeeping computation
beyond the log() and exp() evaluations that one would expect.
gvr@cs.brown.edu (George V. Reilly) (08/04/90)
In article <13474@smoke.BRL.MIL> gwyn@smoke.BRL.MIL (Doug Gwyn) writes:
>In article <1990Aug2.120706.25713@bnrgate.bnr.ca> mleech@bcarh342.bnr.ca (Marcus Leech) writes:
>>I'm wondering about the efficiency of "standard" implementations of this
>> (and other) math library functions for C.  Does pow(3m) conventionally
>> use a logarithm table, or is it (ugh!) iterative, or some B.O.T?
>
>All common implementations of pow(x,y) basically perform exp(log(x)*y),
>but they add some twists designed to avoid unnecessary overflow, or in
>the case of 4.3BSD, to implement the IEEE FP funny numbers.

There are many common special cases where calling pow is unwise:
when y = 0.5, use sqrt instead; when y is a small positive or negative
integer, just do the multiplications by hand; when y is of the form
1/k (i.e., you're trying to find the k-th root of x), use the
Newton-Raphson method instead.

The following routine can be used to calculate x^n, where n is a
non-negative integer.  Instead of requiring the n-1 multiplications
that the naive algorithm would use, it requires only O(lg n)
multiplications.

	double power(double x, unsigned n)
	{
		double t = 1.0;

		while (n > 0)
			if (n & 1) {	/* n odd */
				t *= x;
				n--;
			} else {	/* n even */
				x *= x;
				n >>= 1;
			}
		return t;
	}

Caveat: I would expect any decent implementation of pow to do this,
though this power function should be slightly faster.

If n is known in advance, just code the multiplications by hand, using
the above algorithm; e.g., to calculate x^6:

	temp = x*x;	/* x^2 */
	temp *= x;	/* x^3 */
	temp *= temp;	/* x^6 */

The Newton-Raphson method can be used to find the roots of an equation
f(x) = 0 in an iterative fashion.
Given x_i, the approximation to the root at the i-th iteration, we compute

	x_{i+1} = x_i - f(x_i) / f'(x_i)

It's important to make a good guess for x_0, or this may fail to
converge, or take a large number of iterations to converge---see any
good calculus book or numerical analysis text for further details.

To calculate n^(1/k), the k-th root of n, we use f(x) = x^k - n,
which yields f'(x) = k * x^(k-1), so

	x_{i+1} = x_i - f(x_i) / f'(x_i)
	        = x_i - (x_i^k - n) / (k * x_i^(k-1))
	        = ((k-1)/k) * x_i  +  (n/k) * (1 / x_i^(k-1))

For example, when k = 5, this becomes

	x_{i+1} = 0.8 * x_i + n' / (x_i^4)

where n' = 0.2 * n and x_i^4 should be calculated as (x_i^2)^2.

A reasonably good value for x_0, the initial guess, is n/k when
|n| > 1.0, and min(n*k, 1.0) otherwise.

Caveat: you should find that the sqrt library function is faster than a
general-purpose Newton-Raphson root-finding routine.
________________
George V. Reilly			gvr@cs.brown.edu
uunet!brunix!gvr	gvr@browncs.bitnet	Box 1910, Brown U, Prov, RI 02912
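[Editorial sketch: the iteration above translates almost directly into C.  `kth_root` is an invented name, not a library function; this assumes n > 0 and k >= 1, and uses the update formula and initial guess just described, with x^(k-1) computed by the squaring-and-multiplying trick from earlier in the post.]

```c
#include <math.h>

/* Newton-Raphson k-th root of n (sketch; assumes n > 0, k >= 1).
 * Uses the update  x' = ((k-1)/k)*x + (n/k) / x^(k-1). */
double kth_root(double n, unsigned k)
{
    /* initial guess: n/k when n > 1, min(n*k, 1.0) otherwise */
    double x = (n > 1.0) ? n / k : (n * k < 1.0 ? n * k : 1.0);
    double prev;

    do {
        double p = 1.0, b = x;      /* compute x^(k-1) into p */
        unsigned m = k - 1;
        while (m > 0)
            if (m & 1) { p *= b; m--; }
            else       { b *= b; m >>= 1; }
        prev = x;
        x = ((double)(k - 1) / k) * x + (n / k) / p;
    } while (fabs(x - prev) > 1e-13 * fabs(x));
    return x;
}
```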
ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (08/04/90)
In article <46584@brunix.UUCP>, gvr@cs.brown.edu (George V. Reilly) writes:
> There are many common special cases where calling pow is unwise ...
> The following routine can be used to calculate x^n, where n is a non-negative
> integer.  Instead of requiring the n-1 multiplications that the naive
> algorithm would use, it requires lg n multiplications.
> Caveat: I would expect any decent implementation of pow to do this, though
> this power function should be slightly faster.

UNIX implementations of pow _do_ do that.  However, consider

	x**(n-eps)	=> exp(log(x)*(n-eps))
	x**n		=> squaring-and-multiplying used
	x**(n+eps)	=> exp(log(x)*(n+eps))

The code implementing the exp-and-log method had better be _very_ good,
or pow(x,y) may fail to be monotone in y.  That can do nasty things to
some iterative algorithms.  [There _are_ some very good algorithms now;
would you believe 0.54 ULP worst case for exp() using IEEE?]

Consider also that for n sufficiently large, and for exp() and log()
sufficiently well implemented, the exp() and log() method may be _more_
accurate than the squaring-and-multiplying method.
-- 
Distinguishing between a work written in Hebrew and one written in Aramaic
when we have only a Latin version made from a Greek translation is not easy.
(D.J.Harrington, discussing pseudo-Philo)
ark@alice.UUCP (Andrew Koenig) (08/04/90)
In article <46584@brunix.UUCP>, gvr@cs.brown.edu (George V. Reilly) writes:
> There are many common special cases where calling pow is unwise:
> when y = 0.5, use sqrt instead; when y is a small positive or negative
> integer, just do the multiplications by hand; when y is of the form
> 1/k (i.e., you're trying to find the k-th root of x), use the
> Newton-Raphson method instead.

A lot depends on what you're trying to do.

Getting an accurate pow() function is very, very difficult.  Even if
your exp() and log() return results that are the most accurate they can
possibly be, I believe that evaluating exp(y*log(x)) for pow(x,y) can
lose up to 12 low-order bits of precision.

I have seen pow() functions that, in the interest of speed, check for
various special values of the second argument (0.5, integers, and so on)
and use fast algorithms for those cases.  If such an implementation uses
exp and log for the ordinary cases, it will almost surely not be
monotonic.  That is, it will be possible to find positive x, y, and z
such that y > z and pow(x,y) < pow(x,z).

I believe that if I were to set out to write a good pow() function from
scratch, it would take me at least several months.
-- 
				--Andrew Koenig
				  ark@europa.att.com
volpe@underdog.crd.ge.com (Christopher R Volpe) (08/06/90)
In article <11143@alice.UUCP>, ark@alice.UUCP (Andrew Koenig) writes:
|>If such an
|>implementation uses exp and log for the ordinary cases, it will
|>almost surely not be monotonic.  That is, it will be possible to
|>find positive x, y, and z such that y > z and pow(x,y) < pow(x,z).

I sure hope so, since there are trivial real-life cases where this is
true.  E.g.: x=0.5, y=3, z=2.  pow(x,y)=.125 < pow(x,z)=.25

Am I missing something????

Chris Volpe
GE Corporate R&D
volpecr@crd.ge.com
ark@alice.UUCP (Andrew Koenig) (08/07/90)
In article <10803@crdgw1.crd.ge.com>, volpe@underdog.crd.ge.com (Christopher R Volpe) writes:
> In article <11143@alice.UUCP>, ark@alice.UUCP (Andrew Koenig) writes:
> |>If such an
> |>implementation uses exp and log for the ordinary cases, it will
> |>almost surely not be monotonic.  That is, it will be possible to
> |>find positive x, y, and z such that y > z and pow(x,y) < pow(x,z).
> I sure hope so, since there are trivial real-life cases where this is
> true.  E.g.: x=0.5, y=3, z=2.  pow(x,y)=.125 < pow(x,z)=.25

Sorry about that.  I meant x>1, y>z, z>0.
-- 
				--Andrew Koenig
				  ark@europa.att.com
bright@Data-IO.COM (Walter Bright) (08/07/90)
In article <11143@alice.UUCP> ark@alice.UUCP (Andrew Koenig) writes:
<Getting an accurate pow() function is very, very difficult.
<I believe that if I were to set out to write a good pow() function
<from scratch, it would take me at least several months.
What do you think of the algorithm described in Cody & Waite's
"Software Manual For The Elementary Functions"?
engbert@cs.vu.nl (Engbert Gerrit IJff) (08/08/90)
In article <10803@crdgw1.crd.ge.com>, volpe@underdog.crd.ge.com (Christopher R Volpe) writes:
) In article <11143@alice.UUCP>, ark@alice.UUCP (Andrew Koenig) writes:
) |>If such an
) |>implementation uses exp and log for the ordinary cases, it will
) |>almost surely not be monotonic.  That is, it will be possible to
) |>find positive x, y, and z such that y > z and pow(x,y) < pow(x,z).
)
) I sure hope so, since there are trivial real-life cases where this is
) true.  E.g.: x=0.5, y=3, z=2.  pow(x,y)=.125 < pow(x,z)=.25
)
) Am I missing something????

Well, although Andrew's story was right, the example might not have been
sufficient as an explanation.  There are, in fact, two cases, both of
which shouldn't occur, but could if the implementation uses several
speed-optimizing strategies.  One could probably find:

	some y > z > 0 and     x > 1	where pow(x,y) <= pow(x,z)
	some y > z > 0 and 0 < x < 1	where pow(x,y) >= pow(x,z)

A correct implementation should give:

	/ if     x > 1 then pow(x,y) > pow(x,z)
	{					for all y > z > 0
	\ if 0 < x < 1 then pow(x,y) < pow(x,z)

as we expect it to be.

Bert IJff
Free University Amsterdam
-------------------------------------
"The power of a math library function is not always in its speed."
						Bert
ark@alice.UUCP (Andrew Koenig) (08/08/90)
In article <2621@dataio.Data-IO.COM>, bright@Data-IO.COM (Walter Bright) writes:
> What do you think of the algorithm described in Cody & Waite's
> "Software Manual For The Elementary Functions"?

It's not bad, but if I remember correctly it could be significantly
improved upon.  I think I recall an article in the IBM Journal of R&D
about a power algorithm that gives no more than 1 ULP error.
-- 
				--Andrew Koenig
				  ark@europa.att.com
karl@haddock.ima.isc.com (Karl Heuer) (08/09/90)
In article <3511@goanna.cs.rmit.oz.au> ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) writes:
>UNIX implementations of pow _do_ [special-case integer exponents].

*Some* Unix implementations do.  I have observed others in which
roundoff error causes (int)pow(2.0, 3.0) to return 7; this is an
excellent reason not to use pow() when you want integer exponentiation.
(I myself would prefer a new operator, `*^'.  I wrote an essay on this
a couple of years ago.)

Karl W. Z. Heuer (karl@kelp.ima.isc.com or ima!kelp!karl), The Walking Lint
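[Editorial sketch of the defensive idiom Karl's example suggests: sidestep pow() entirely with an integer routine.  `ipow` is an invented name; it is the same squaring-and-multiplying scheme as George Reilly's power() above, but done in integer arithmetic, so (int) truncation can never land on 7.]

```c
/* Integer exponentiation without pow(): exact whenever the result
 * fits in a long.  Squaring-and-multiplying in integer arithmetic,
 * stopping before a final unneeded squaring of the base. */
long ipow(long base, unsigned n)
{
    long result = 1;

    while (n > 1) {
        if (n & 1)
            result *= base;     /* fold in the odd bit */
        base *= base;
        n >>= 1;
    }
    return (n == 1) ? result * base : result;
}
```

When pow() must be used and the exponent is a non-negative integer, rounding rather than truncating -- (long)(pow(2.0, 3.0) + 0.5) -- also avoids the off-by-one result for positive values.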
aduncan@rhea.trl.oz.au (Allan Duncan) (08/10/90)
From article <17301@haddock.ima.isc.com>, by karl@haddock.ima.isc.com (Karl Heuer):
> *Some* Unix implementations do.  I have observed others in which roundoff
> error causes (int)pow(2.0, 3.0) to return 7; this is an excellent reason not
> to use pow() when you want integer exponentiation.  (I myself would prefer a
> new operator, `*^'.  I wrote an essay on this a couple of years ago.)

Ahh!  Shades of Fortran!  This is one of the reasons that ancient
language is still alive and well.  I find C's handling of mixed-mode
arithmetic a little less than satisfactory, and the need to write
everything as functions obscures the mathematical form of an expression.
I might be forced to look at C++ if I am to throw over Fortran.

Allan Duncan	ACSnet	a.duncan@trl.oz			(03) 541 6708
		ARPA	a.duncan%trl.oz.au@uunet.uu.net
		UUCP	{uunet,hplabs,ukc}!munnari!trl.oz.au!a.duncan
Telecom Research Labs, PO Box 249, Clayton, Victoria, 3168, Australia.