[sci.math] square roots by hand or computer

amc4919@cec_ss1.wustl.edu (Adam M. Costello) (03/01/91)

I have seen several programs posted that implement the square-root-by-hand-
one-digit-at-a-time algorithm.  This very same algorithm becomes much, MUCH
simpler in base 2, both for a computer and on paper.  Try it.
Since base-conversions are easy, I'd say this is the way to go.
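
For integers, each base-2 step needs only a compare and a subtract, where
the base-10 version needs trial multiplications.  A minimal C sketch of
the idea (my reconstruction, assuming 32-bit unsigned longs; no code
accompanied the original article):

#include <stdio.h>

/* One-bit-at-a-time square root: the base-2 pencil-and-paper method. */
unsigned long isqrt(unsigned long n)
{
    unsigned long root = 0;
    unsigned long bit = 1UL << 30;  /* largest power of 4 in 32 bits */

    while (bit > n)                 /* align with the leading bit pair */
        bit >>= 2;
    while (bit != 0) {
        if (n >= root + bit) {      /* does this bit of the root fit? */
            n -= root + bit;
            root = (root >> 1) + bit;
        } else
            root >>= 1;
        bit >>= 2;
    }
    return root;                    /* floor(sqrt(n)) */
}

int main(void)
{
    printf("%lu\n", isqrt(2000000000UL));   /* prints 44721 */
    return 0;
}
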
AMC

00lhramer@bsu-ucs.uucp (Leslie Ramer) (03/14/91)

In article <1991Mar1.001103.6341@cec1.wustl.edu>, amc4919@cec_ss1.wustl.edu (Adam M. Costello) writes:
> I have seen several programs posted that implement the square-root-by-hand-
> one-digit-at-a-time algorithm.  This very same algorithm becomes much, MUCH
> simpler in base 2, both for a computer and on paper.  Try it.
> Since base-conversions are easy, I'd say this is the way to go.
> AMC

I would much rather use Newton's method to find the square root or any other
root for that matter.  Newton's method finds the zeros or roots of a function.
Therefore a function that has a zero at the square root of a number, say A,
is desired.

i.e.
 2
X  = A
        2
f(X) = X - A

The general form of Newton's method is as follows:

             f( X  )
                 n
X    = X  - ---------
 n+1    n    f'( X )
                  n

f'(X) = 2X

Plugging this in:
              2
             X  - A
              n
X    = X  - --------
 n+1    n     2 X                     Form 1)
                 n

         2
        X  + A
         n
X    = --------                       Form 2)
 n+1     2 X
            n

        1           A
X   =  --- ( X  + ----- )             Form 3)
 n+1    2     n     X
                     n


Now the only problem is a starting value.  A decent starting value is 1.
If one were to choose zero, the sequence would bomb out because of the
divide by zero required to determine the next approximation.

Form 3) is ancient: the Babylonians used it well before Christ's birth, and
it is usually attributed to Heron of Alexandria (first century A.D.).


X  = 1
 0

Suppose you wished to know the square root of two (A = 2).  Plugging that
into any of the forms, the sequence progresses as follows:

n)  0   1   2    3     4        ...

        3   17   577   665857
    1 , - , -- , --- , ------ , ...
        2   12   408   470832

I really don't care to take it out any further.  According to my calculator,
the square root of 2 is approximately 1.414213562.

The approximations, represented in decimal form, are:

 n      decimal equivalent
---    --------------------
 0      1.0
 1      1.5  
 2      1.416666667
 3      1.414215686
 4      1.414213562

The fourth approximation has already surpassed the accuracy of my calculator.
Newton's method converges remarkably quickly.  If you were to program this
into a computer, you would probably generate the sequence with floating point
numbers.  When two successive approximations are "close enough", the program
can stop and report the approximated square root.
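
In C, such a loop might look like this (a minimal sketch with a relative
tolerance as the "close enough" test; the real library sqrt() is of
course more careful):

#include <stdio.h>
#include <math.h>

double newton_sqrt(double a)
{
    double x = 1.0, xnew;                /* X0 = 1, as suggested above */

    if (a <= 0.0)
        return 0.0;                      /* ignore negatives for brevity */
    for (;;) {
        xnew = 0.5 * (x + a / x);        /* Form 3) */
        if (fabs(xnew - x) <= 1e-15 * xnew)
            return xnew;                 /* successive values agree */
        x = xnew;
    }
}

int main(void)
{
    printf("%.9f\n", newton_sqrt(2.0));  /* prints 1.414213562 */
    return 0;
}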

If you are advanced enough to do a base conversion, you can more than likely
apply this method easily.  The choice is yours; moreover, it should hinge
more upon the application than anything else.  I usually use pre-packaged
routines in my programming, and while they can't always be trusted, I have
found the packages on my computer and on most others very reliable.

Leslie Ramer,
Ball State University, Muncie, IN

-- 

"No one runs so fast as he that is chased."
           ._
|  .-..-.  | | .-,.-.-..-..-. 00LHRAMER@bsu-ucs.bsu.edu
|  +-'`-,  |_' .-+| | |+-'|   00LHRAMER@bsu-ucs.UUCP
|__`-'`-'  | \ `.|| | |`-'|   00LHRAMER@bsuvax1.bitnet

rthomson@mesa.dsd.es.com (Rich Thomson) (03/15/91)

In article <1991Mar14.113902.97@bsu-ucs.uucp>
	00lhramer@bsu-ucs.uucp (Leslie Ramer) writes:
>I would much rather use Newton's method to find the square root or any other
>root for that matter. [...]

Well, Newton's method has some interesting properties of its own (as
I'm sure many readers of comp.theory.dynamic-sys are aware).

>Now the only problem is a starting value.  A decent starting value is 1.

If you play with Newton's method long enough, you come to realize that
picking a starting value isn't always obvious.  In general, you pick a
starting value that is "close" to the desired root of the function.  If
you have no idea where the roots of the function lie, then you may be in
trouble: Newton's method may fail to converge.  In fact, many interesting
fractal images have been computed for simple functions like z^3 - 1 = 0,
with z being a complex number.

You find that if you pick points close to any one of the three zeros,
they converge as expected.  The wacky stuff happens at the boundaries
between the basins of attraction for these three roots.  You might
expect the basins of attraction for the three roots to carve the
complex plane into 3 simple regions; in fact the regions are far from
simple and the boundary often has fractal qualities (whether or not
it is a true fractal, i.e. has a Hausdorff dimension different from its
covering dimension, I don't know).  Along this boundary region you can
find points arbitrarily close to each other that converge to different
roots.
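
Here is a quick-and-dirty C sketch (mine, not from any posted code) that
iterates Newton's method for z^3 - 1 = 0 over a grid of starting points
and prints which of the three roots each point reaches, as a crude
character plot of the basins:

#include <stdio.h>

int main(void)
{
    int row, col, i;
    double zr, zi, t2r, t2i, tr, ti, den;
    char c;

    for (row = 0; row < 24; row++) {
        for (col = 0; col < 72; col++) {
            zr = -2.0 + 4.0 * col / 71.0;       /* starting point */
            zi =  1.5 - 3.0 * row / 23.0;
            c = '.';                            /* '.': no root reached */
            for (i = 0; i < 40; i++) {
                t2r = zr * zr - zi * zi;        /* z^2 */
                t2i = 2.0 * zr * zi;
                den = 3.0 * (t2r * t2r + t2i * t2i);
                if (den < 1e-12)                /* derivative ~ 0; give up */
                    break;
                tr = 2.0 * (zr * t2r - zi * t2i) + 1.0;  /* 2 z^3 + 1 */
                ti = 2.0 * (zr * t2i + zi * t2r);
                zr = (tr * t2r + ti * t2i) / den;  /* z = (2z^3+1)/(3z^2) */
                zi = (ti * t2r - tr * t2i) / den;
                if ((zr - 1.0) * (zr - 1.0) + zi * zi < 1e-6)
                    { c = '1'; break; }
                if ((zr + 0.5) * (zr + 0.5)
                      + (zi - 0.8660254) * (zi - 0.8660254) < 1e-6)
                    { c = '2'; break; }
                if ((zr + 0.5) * (zr + 0.5)
                      + (zi + 0.8660254) * (zi + 0.8660254) < 1e-6)
                    { c = '3'; break; }
            }
            putchar(c);
        }
        putchar('\n');
    }
    return 0;
}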

A different way to compute square roots (and other rational or
non-rational numbers) is through the methods of continued fractions.
If I remember correctly, the continued fraction expansion of Sqrt[2]
is [1,2,2,2,....], where the notation [a0, a1, a2, ...] represents a
continued fraction expansion:

    a0 +         1
	 ------------------
	 a1 +       1
	      -------------
	      a2 +     1
		   --------
		   a3 + ...

The first few terms of the continued fraction expansion are:

[1]		1	1.0
[1,2]		3/2	1.5
[1,2,2]		7/5	1.4
[1,2,2,2]	17/12	1.4166667
[1,2,2,2,2]	41/29	1.4137931
[1,2,2,2,2,2]	99/70	1.4142857

This doesn't converge as quickly as the sequence you give, but it
is still interesting.
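
The convergents are easy to generate: with p[-1] = 1, q[-1] = 0, p[0] = a0,
q[0] = 1, each convergent is p[n] = a[n]*p[n-1] + p[n-2], and likewise for
q[n].  A small C check of the table above (a sketch, not posted code):

#include <stdio.h>

int main(void)
{
    long pprev = 1, qprev = 0;          /* p[-1], q[-1] */
    long p = 1, q = 1;                  /* p[0]/q[0] = a0 = 1 */
    long t;
    int n;

    printf("%ld/%ld = %.7f\n", p, q, (double)p / q);
    for (n = 1; n <= 5; n++) {          /* a[n] = 2 for sqrt(2) */
        t = 2 * p + pprev;  pprev = p;  p = t;
        t = 2 * q + qprev;  qprev = q;  q = t;
        printf("%ld/%ld = %.7f\n", p, q, (double)p / q);
    }
    return 0;
}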

I think it turns out that non-rational numbers have a continued
fraction expansion that is eventually periodic (i.e. after a finite
number of coefficients in the expansion, a block of coefficients
repeats infinitely), while all rational numbers have a finite
continued fraction expansion.

							-- Rich
-- 
  ``Read my MIPS -- no new VAXes!!'' -- George Bush after sniffing freon
	    Disclaimer: I speak for myself, except as noted.
UUCP: ...!uunet!dsd.es.com!rthomson		Rich Thomson
ARPA: rthomson@dsd.es.com			PEXt Programmer

jroth@allvax.enet.dec.com (Jim Roth) (03/15/91)

In article <1991Mar14.113902.97@bsu-ucs.uucp>, 00lhramer@bsu-ucs.uucp (Leslie Ramer) writes...
>In article <1991Mar1.001103.6341@cec1.wustl.edu>, amc4919@cec_ss1.wustl.edu (Adam M. Costello) writes:
>> ... pencil and paper method
> ... newtons method

If you just want to see a decimal expansion to many significant figures
a simple way to do it is with a spigot algorithm.

Approximate the root of n with a convenient fraction p/q such that
n = p^2/q^2*(1+s/p^2), where s/p^2 is small.

For example sqrt(5) is approximately 9/4 and 5 = 81/16*(1-1/81).

Then the root can be approximated by a binomial expansion

    sqrt(n) = p/q (1 + s/p^2)^(1/2)

            = 1/q*(p + s/(2*p^2)*(p - s/(4*p^2)*(p - 3*s/(6*p^2)*(p - ...

            = 1/q*(a[0] + b[1]/c[1]*(a[1] - b[2]/c[2]*(a[2] - b[3]/c[3]*(a[3]...

where the a[k]'s are initially set to p.

Now, multiply all the a[k]'s by your radix (a power of 10) and reduce them
modulo the denominators c[k], passing each quotient times its numerator
b[k] down to the neighboring a[k-1].

At the end of each pass over the series, a new group of digits pops out
at a[0].

Here is a simple program for sqrt(5); for roots of other numbers
initialize p, q, and s accordingly.

Practical?  Of course not!  But it does generate lots of digits easily :-)

- Jim

/*
 *  Spigot algorithm for sqrt(n)
 */

#include <stdio.h>
#include <stdlib.h>	/* for malloc() */

int main(void)
{
    long
	*a,
	nterms, ndigits,
	p, q, r, s, t, d,
	i, k, tprev;

    nterms = 210;	/* use this many terms in the binomial expansion */
    ndigits = 100;	/* generate 100*4 digits */
    r = 10000;		/* 4 digits at a time */

    a = (long *)malloc(sizeof(long)*(nterms+1));

    p = 9;		/* sqrt(5):  5 = (9/4)^2 * (1 - 1/81), so s = -1 */
    q = 4;
    s = -1;

    for (i = 0; i <= nterms; i++)
	a[i] = p;

    tprev = 0;
    for (i = 0; i < ndigits; i++) {
	/* one pass: multiply the series by the radix, reduce each term
	   modulo its denominator, and pass quotient*numerator toward a[0] */
	for (d = 0, k = nterms; k; k--) {
	    a[k] = a[k]*r - d;
	    t = a[k]/(2*k*p*p);
	    a[k] -= t*(2*k*p*p);
	    d = t*(2*k-3)*s;
	}
	a[0] = a[0]*r - d;
	t = a[0]/q;
	a[0] = a[0] - t*q;
	if (i) {		/* adjust the group about to be printed */
	    while (t < 0) {	/* borrow from the previous group */
		t += r;
		tprev--;
	    }
	    while (t >= r) {	/* carry into the previous group */
		t -= r;
		tprev++;
	    }
	    printf("%04ld ", tprev);	/* zero-pad: each group is 4 digits */
	}
	tprev = t;
	if (i && !(i % 8))
	    printf("\n");
    }
    printf("%04ld\n", tprev);	/* don't forget the last group */
    free(a);
    return 0;
}

vsnyder@jato.jpl.nasa.gov (Van Snyder) (03/16/91)

In article <1991Mar14.113902.97@bsu-ucs.uucp>
	00lhramer@bsu-ucs.uucp (Leslie Ramer) writes:
>I would much rather use Newton's method to find the square root or any other
>root for that matter. [...]
>
>Now the only problem is a starting value.  A decent starting value is 1.
>
At least for square and cube roots, there are good rational approximations
for starting values in "Computer Approximations" by Hart et al.  The usual
practice in library code for a typical (base 2) computer is to separate
the exponent and fraction of the floating point number.  Then (for square
roots), if the exponent is odd, divide the fraction by 2 (shift right one) and
add one to the exponent.  The square root is then the number whose floating
point representation has 1/2 the exponent as revised above, and a fraction
given by the square root of the revised fraction.  Since the revised fraction
is between 1/4 and 1, it's possible to get a reasonably close initial estimate
from a rational approximation.  Since Newton's method doubles the number of
correct digits on each iteration (when it's converging), getting 8 bits from
the initial approximation and then doing two iterations of Newton's method is
adequate.
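
In C, that scheme might look like the sketch below; frexp() and ldexp() do
the representation surgery.  (The linear initial guess here is only a rough
stand-in for the rational approximations tabulated in Hart et al., so I
take a couple of extra Newton steps to be safe.)

#include <math.h>
#include <stdio.h>

double my_sqrt(double x)
{
    int e;
    double f, y;

    if (x <= 0.0)
        return 0.0;              /* ignore errors/negatives for brevity */
    f = frexp(x, &e);            /* x = f * 2^e,  f in [1/2, 1) */
    if (e & 1) {                 /* odd exponent: shift the fraction   */
        f *= 0.5;                /* right one and add one to the       */
        e++;                     /* exponent, so that e becomes even   */
    }
    y = 0.41731 + 0.59016 * f;   /* crude guess at sqrt(f), f in [1/4,1) */
    y = 0.5 * (y + f / y);       /* each Newton step roughly doubles */
    y = 0.5 * (y + f / y);       /* the number of correct bits       */
    y = 0.5 * (y + f / y);
    y = 0.5 * (y + f / y);
    return ldexp(y, e / 2);      /* reattach half the (even) exponent */
}

int main(void)
{
    printf("%.15f\n", my_sqrt(2.0));     /* 1.414213562373095 */
    return 0;
}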

But the one-bit-at-a-time algorithms that are similar to division algorithms
(but a little simpler) are what are used inside co-processor chips:  you
get a square root for the cost of (approximately) one division, instead of 3
divisions, some multiplications, some adds (all floating point), and some
jerking around of the floating point representation.


-- 
vsnyder@jato.Jpl.Nasa.Gov
ames!elroy!jato!vsnyder
vsnyder@jato.uucp

rbr4@uhura.cc.rochester.edu (Roland Roberts) (03/16/91)

In article <1991Mar15.203346.9770@jato.jpl.nasa.gov>
vsnyder@jato.Jpl.Nasa.Gov (Van Snyder) writes: 

    But the one-bit-at-a time algorithms that are similar to
    division algorithms (but a little simpler) are what are used
    inside co-processor chips: You get a square root for the cost
    of one division (approximately), instead of 3 divisions, some
    multiplications, some adds (all floating point), and some
    jerking around of the floating point representation.

    vsnyder@jato.Jpl.Nasa.Gov
    ames!elroy!jato!vsnyder
    vsnyder@jato.uucp

If anyone is interested in looking over a piece of code that does this,
I have a routine in C that I wrote to take square roots of 64- and
128-bit integers.

roland

-- 
Roland Roberts,  University of Rochester   BITNET: roberts@uornsrl
  Nuclear Structure Research Lab         INTERNET: rbr4@uhura.cc.rochester.edu
  271 East River Road                        UUCP: rochester!ur-cc!rbr4
  Rochester, NY  14267                       AT&T: (716) 275-8962

bumby@math.rutgers.edu (Richard Bumby) (03/16/91)

In article <1991Mar15.063023.6605@dsd.es.com> rthomson@mesa.dsd.es.com
(Rich Thomson) writes:

> . . .
> 
> If you play with Newton's method long enough you come to realize that,
> in general, picking a starting value isn't always obvious. . . .

Some cases are reasonably robust, however.  For example, the largest
root of a polynomial with all roots real will be approached by
Newton's method from any starting value larger than that root.

> . . .
>
> A different way to compute square roots (and other rational or
> non-rational numbers) is through the methods of continued fractions.
> If I remember correctly, the continued fraction expansion of Sqrt[2]
> is [1,2,2,2,....], . . .

It is.

> . . .
> 
> This doesn't converge as quickly as the sequence you give, but it
> still interesting.
> 

The rate of convergence is easily worked out: the error after taking n
terms is of the form c^n.  This is fine for ordinary machine precision,
but wasteful if you want millions of digits.

> I think it turns out that non-rational numbers have a continued
> fraction expansion that is eventually periodic . . .

It is easy (after Euler) to show that such numbers must be quadratic
irrationals.  Lagrange showed that all quadratic irrationals have
continued fractions which are eventually periodic, and conditions for
pure periodicity have been worked out.  Perron's book gives Galois as
the source for this work, as well as for determining the effect of
reversing the period.  The analytic theory allows the "digits" (called
"partial quotients") of some other numbers (for example e) to be
determined, but the expansion of pi or the cube root of 2 seems to be
random.
-- 

--R. T. Bumby ** Math ** Rutgers ** New Brunswick ** NJ08903 ** USA --
  above postal address abbreviated by internet to bumby@math.rutgers.edu
  voice: 908-932-0277 (but only when I am in my office)