[net.math] Determinant Evaluation

stchakra@uokvax.UUCP (02/21/85)

Give a method to find the determinant of a 1000 by 1000 matrix
using recursion.

gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) (02/23/85)

> Give a method to find the determinant of a 1000 by 1000 matrix
> using recursion.

Is this supposed to be a good method?  Why specify recursion?

egs@epsilon.UUCP (Ed Sheppard) (02/24/85)

> Give a method to find the determinant of a 1000 by 1000 matrix
> using recursion.

Forgive the Fortranishness of this.

/*
 * Determinant by recursive Gaussian elimination, no pivoting: a zero
 * pivot stops the elimination and returns 0, which can misreport a
 * matrix that merely needs its rows reordered.  "stride" is the row
 * length of the full matrix, so the trailing (n-1) by (n-1) submatrix
 * can be passed down in place; call as determ(n, n, a) for an n by n
 * row-major matrix.  The matrix is destroyed in the process.
 */
float	determ(n, stride, a)
int	n, stride;
float	*a;
{
	int	i, j;
	float	factor;

	if(n==1 || a[0]==0)
	{
		return(a[0]);
	}
	for(i=1; i<n; ++i)
	{
		factor = a[i*stride]/a[0];
		for(j=1; j<n; ++j)
		{
			a[i*stride+j] -= factor*a[j];
		}
	}
	return(a[0]*determ(n-1, stride, a+stride+1));
}

You'll note the recursion is totally unneeded. Are there any faster
methods that use recursion to better effect?
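
For comparison, here is a sketch of what a plain iterative version with
partial pivoting might look like, so a zero pivot no longer gets
misreported as a zero determinant (the name idet and the row-major
float* calling convention are just illustrative):

#include <math.h>	/* fabs */

/*
 * Iterative Gaussian elimination with partial pivoting.  Each row
 * swap flips the sign of the determinant; if no nonzero pivot can be
 * found in a column, the determinant really is zero.  Destroys its
 * argument, an n by n matrix stored row-major.
 */
float	idet(n, a)
int	n;
float	*a;
{
	int	i, j, k, p;
	float	det, factor, t;

	det = 1;
	for(k=0; k<n; ++k)
	{
		p = k;			/* pick the largest pivot in column k */
		for(i=k+1; i<n; ++i)
			if(fabs(a[i*n+k]) > fabs(a[p*n+k]))
				p = i;
		if(a[p*n+k] == 0)
			return(0);
		if(p != k)		/* swap rows k and p, negate det */
		{
			for(j=k; j<n; ++j)
			{
				t = a[k*n+j];
				a[k*n+j] = a[p*n+j];
				a[p*n+j] = t;
			}
			det = -det;
		}
		det *= a[k*n+k];
		for(i=k+1; i<n; ++i)	/* eliminate below the pivot */
		{
			factor = a[i*n+k]/a[k*n+k];
			for(j=k+1; j<n; ++j)
				a[i*n+j] -= factor*a[k*n+j];
		}
	}
	return(det);
}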

						Ed Sheppard
						Bellcore

bs@faron.UUCP (Robert D. Silverman) (02/27/85)

> > Give a method to find the determinant of a 1000 by 1000 matrix
> > using recursion.
> 
> 	[Gaussian elimination code deleted]
> 
> You'll note the recursion is totally unneeded. Are there any faster
> methods that use recursion to better effect?

	The fast methods here are divide-and-conquer approaches built on
Strassen's identity for multiplying 2 by 2 matrices: the conventional
method takes 8 multiplications (2^3), but the identity gets by with 7.
Applied recursively to the blocks of a larger matrix, Strassen's identity
alone gives an O(N^2.81) algorithm for multiplication (and hence for
inversion and determinants), and the Coppersmith-Winograd refinements
bring this down to O(N^(2.5+epsilon)), while Gaussian elimination, as
shown above, is an O(N^3) algorithm. You thus save a factor of about
sqrt(N); for N near 1000 this means a speedup of about 30 times. There
is considerable overhead in the method and it does not become effective
until N reaches several hundred (i.e. N=1000 is large enough). See
for example Knuth, Volume 2, 2nd edition, pp. 481-483.
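
For the curious, the 7-multiplication identity itself (Strassen's) looks
like this when written out for a plain 2 by 2 product; in the fast
algorithms the entries are N/2 by N/2 blocks and the same trick is
applied to each of the 7 products recursively:

/*
 * Strassen's identity: the product c = a*b of two 2 by 2 matrices
 * using 7 multiplications instead of 8, at the cost of extra
 * additions.  c must not overlap a or b.
 */
void	strassen2(a, b, c)
float	a[2][2], b[2][2], c[2][2];
{
	float	m1, m2, m3, m4, m5, m6, m7;

	m1 = (a[0][0] + a[1][1]) * (b[0][0] + b[1][1]);
	m2 = (a[1][0] + a[1][1]) * b[0][0];
	m3 = a[0][0] * (b[0][1] - b[1][1]);
	m4 = a[1][1] * (b[1][0] - b[0][0]);
	m5 = (a[0][0] + a[0][1]) * b[1][1];
	m6 = (a[1][0] - a[0][0]) * (b[0][0] + b[0][1]);
	m7 = (a[0][1] - a[1][1]) * (b[1][0] + b[1][1]);

	c[0][0] = m1 + m4 - m5 + m7;
	c[0][1] = m3 + m5;
	c[1][0] = m2 + m4;
	c[1][1] = m1 - m2 + m3 + m6;
}

The determinant then comes out of a recursive block factorization: when
the leading block A11 is nonsingular, det(A) = det(A11) * det(A22 -
A21*inverse(A11)*A12), with the block products and inverses done by the
fast multiplication.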

