[comp.lang.fortran] SVD Troubles

ajayshah@alhena.usc.edu (Ajay Shah) (06/26/91)

I get different results when computing an SVD using Numerical Recipes
and using LINPACK.

The diagonal elements of W using LINPACK:

    8546.9933049439    499.76058704715    237.98494198145    176.87448423245
    168.17932502747    164.22538937388    155.90921231184    154.22581082297
    1.1581996038494D-06    7.5680111867615D-07

Using Numerical Recipes:

    8546.9933049439    499.76058704715    237.98494198145    176.87448423245
    168.17932502747    154.22581082297    155.90921231184    164.22538937388
    7.5680116056573D-07    1.1581996113596D-06

The program using LINPACK:

	program main
	implicit none

	integer MATSIZE
	parameter (MATSIZE = 10)

	character*200 infname
	double precision A(10, MATSIZE), ACopy(10, MATSIZE), cond_num
	double precision V(10, MATSIZE)
	integer iargc, N, row, col, i, info
	double precision W(10), U(MATSIZE, MATSIZE)
	double precision E(10), JUNK(MATSIZE)

	if (iargc() .ne. 1) then
		write(6, *) 'Usage: test inputfilename'
		stop
	endif

	call getarg(1, infname)
	open(unit=12, file=infname, form='formatted')
	read(12, *) N
	do row = 1, N
		read(12, *) (A(row, col), col = 1, N)
		do col = 1, N
			ACopy(row, col) = A(row, col)
		end do
	end do

*	DSVDC(X, LDX, N, P, S, E, U, LDU, V, LDV, WORK, JOB, INFO):
*	A is overwritten, W receives the singular values, E and JUNK
*	are scratch space, and JOB = 11 requests both U and V.
	call DSVDC(A, 10, 10, 10, W, E, U, 10, V, 10, JUNK, 11, info)
	if (info .ne. 0) write(6, *) 'DSVDC: info =', info
	write(6, *) (W(i), i = 1, 10)

	stop
	end

The program using Numerical Recipes:

Just replace the "call DSVDC" with this call (the NR argument order is
a, m, n, mp, np, w, v: the logical dimensions m and n, then the
physical dimensions mp and np of the arrays):

	call svdcmp(A, 10, 10, 10, 10, W, V)

The matrix:

 6.636747497519518e+02  5.008953942699227e+02  4.991617147615034e+02  4.948647902994671e+02  5.188879374177632e+02  4.980781537241928e+02  1.016966089668349e+03  5.078873159404829e+02  5.134782219220833e+02  1.021365541078674e+03 
 5.008953942699227e+02  6.723039431493032e+02  5.109286501458118e+02  4.952147347142381e+02  5.221728614843887e+02  5.056299650521632e+02  1.027802823571663e+03  5.095200723085509e+02  5.130013465427219e+02  1.022521418424905e+03 
 4.991617147615034e+02  5.109286501458118e+02  6.673655592833904e+02  4.887271277871769e+02  5.158123899764032e+02  4.945279396016267e+02  1.010340327918646e+03  4.991340394292638e+02  5.091609623446893e+02  1.008295002892672e+03 
 4.948647902994671e+02  4.952147347142381e+02  4.887271277871769e+02  6.506183351017976e+02  5.015532147443155e+02  4.950677723007248e+02  9.966209867528814e+02  4.928878141644745e+02  4.949013404400230e+02  9.877891572079971e+02 
 5.188879374177632e+02  5.221728614843887e+02  5.158123899764032e+02  5.015532147443155e+02  6.901277989031132e+02  5.138790599255799e+02  1.204006852804248e+03  5.132902485212762e+02  5.224282790935458e+02  1.035718529371021e+03 
 4.980781537241928e+02  5.056299650521632e+02  4.945279396016267e+02  4.950677723007248e+02  5.138790599255799e+02  6.626712063819948e+02  1.176550270146403e+03  4.981592662382463e+02  5.078192064621735e+02  1.005978474314339e+03 
 1.016966089668349e+03  1.027802823571663e+03  1.010340327918646e+03  9.966209867528814e+02  1.204006852804248e+03  1.176550270146403e+03  2.380557118887831e+03  1.011449517950008e+03  1.030247484954856e+03  2.041697003701003e+03 
 5.078873159404829e+02  5.095200723085509e+02  4.991340394292638e+02  4.928878141644745e+02  5.132902485212762e+02  4.981592662382463e+02  1.011449517950008e+03  6.652074363115371e+02  5.104872775080257e+02  1.175694707787698e+03 
 5.134782219220833e+02  5.130013465427219e+02  5.091609623446893e+02  4.949013404400230e+02  5.224282790935458e+02  5.078192064621735e+02  1.030247484954856e+03  5.104872775080257e+02  6.827224609884938e+02  1.193209741323968e+03 
 1.021365541078674e+03  1.022521418424905e+03  1.008295002892672e+03  9.877891572079971e+02  1.035718529371021e+03  1.005978474314339e+03  2.041697003701003e+03  1.175694707787698e+03  1.193209741323968e+03  2.368904446580258e+03 

-------------------------                            ------------------------- 
Ajay Shah, ajayshah@usc.edu                              ajayshah%rcc%rand.org
Apt: (213) 734-3930                     RAND Corporation: (213) 393-0411 x7281


FC03@NS.CC.LEHIGH.EDU (Frederick W. Chapman) (06/28/91)

>Newsgroups: sci.math.num-analysis,comp.lang.fortran
>Subject: SVD Troubles
>From: ajayshah@alhena.usc.edu (Ajay Shah)
>Date: 26 Jun 91 13:14:40 GMT
>
>
>I get different results when computing an SVD using Numerical Recipes
>and using LINPACK.
>
>The diagonal elements of W using LINPACK:
>
>    8546.9933049439    499.76058704715    237.98494198145    176.87448423245
>    168.17932502747    164.22538937388    155.90921231184    154.22581082297
>    1.1581996038494D-06    7.5680111867615D-07
>
>Using Numerical Recipes:
>
>    8546.9933049439    499.76058704715    237.98494198145    176.87448423245
>    168.17932502747    154.22581082297    155.90921231184    164.22538937388
>    7.5680116056573D-07    1.1581996113596D-06


[LINPACK and Numerical Recipes versions of FORTRAN program deleted.]

[Matrix deleted.]


After carefully reading what the "Numerical Recipes" book has to say (in
section 9 of chapter 2) about Singular Value Decomposition (SVD), I have come
to the conclusion that there is no real discrepancy between the results you
obtained with LINPACK and the results you obtained with the Numerical Recipes
(NR) subroutine.

Suppose that a matrix A has an SVD given by U*W*V', where W is a diagonal
matrix with non-negative entries (the singular values of A), the matrix U is
column-orthogonal, and the square matrix V is orthogonal.  Even when all of
the singular values (SVs) of A are distinct (as they are in your example),
the SVD of A is *not* unique, since permutations of the SVs are permitted (as
long as the corresponding columns of the matrices U and V are permuted in
the same way).  LINPACK was nice enough to list the SVs in decreasing order,
whereas NR did not list them in order.  However, both routines gave you
essentially the same *set* of SVs; the only real question concerns the number
of significant figures to which the corresponding SVs agree.
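
To compare the two lists term by term, one can sort the NR singular
values into decreasing order, carrying the columns of U and V along in
the same permutation.  A minimal sketch (the routine name and the plain
selection sort are illustrative, not from either package; np is the
declared leading dimension, which is 10 everywhere above, and NR's
svdcmp returns U in place of A):

	subroutine svsort(n, np, w, u, v)
*	Sort the singular values w(1..n) into decreasing order by
*	selection sort, swapping the corresponding columns of u and v
*	so that the product u*diag(w)*v' is unchanged.
	integer n, np, i, j, k, row
	double precision w(n), u(np, n), v(np, n), t
	do i = 1, n - 1
		k = i
		do j = i + 1, n
			if (w(j) .gt. w(k)) k = j
		end do
		if (k .ne. i) then
			t = w(i)
			w(i) = w(k)
			w(k) = t
			do row = 1, n
				t = u(row, i)
				u(row, i) = u(row, k)
				u(row, k) = t
				t = v(row, i)
				v(row, i) = v(row, k)
				v(row, k) = t
			end do
		endif
	end do
	return
	end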

All of the SVs which are greater than 1 agree to all 14 significant figures.
You can't ask for better agreement than that!  The SVs which are less than 1
(the smallest two SVs) agree to only 7 or 8 significant figures.  Since you
are using double precision, this appears to be cause for concern, at first.
It is not really cause for concern if we understand how particularly small
SVs should be treated in the SVD.  It may be desirable to simply set such SVs
equal to *zero*, in which case, agreement to only 7 or 8 out of 14
significant figures becomes a moot point.  Before we will know whether we
should do this, we must first answer the question, "What constitutes
'particularly small'?"

Compare the largest SV to the smallest SV; the ratio of largest to smallest
is approximately 10**10 (the "condition number" of the matrix A).  Since
double precision is giving you about 14 significant figures, I would estimate
the machine's floating-point precision (machine epsilon) to be about
10**(-13).  If the condition number of A were greater than the reciprocal of
the machine epsilon, 10**13, the matrix A would be considered "ill-
conditioned" (i.e., poorly suited to some numerical computing techniques,
especially to the solving of linear systems by certain algorithms).  You have
about 3 orders of magnitude to go before your matrix A would be considered
ill-conditioned.
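
Rather than estimating the machine epsilon from the number of digits
printed, one can compute it directly and compare the condition number
against its reciprocal.  A sketch (the two constants are the extreme
singular values quoted above; the halving loop is a standard trick, not
part of either package):

	program conchk
	implicit none
	double precision eps, wmax, wmin, cond
*	Halve eps until 1 + eps/2 is indistinguishable from 1; the
*	surviving eps approximates the machine epsilon.
	eps = 1.0d0
10	if (1.0d0 + 0.5d0*eps .gt. 1.0d0) then
		eps = 0.5d0*eps
		goto 10
	endif
*	Largest and smallest singular values from the LINPACK run:
	wmax = 8546.9933049439d0
	wmin = 7.5680111867615d-07
	cond = wmax/wmin
	write(6, *) 'machine epsilon :', eps
	write(6, *) 'condition number:', cond
	write(6, *) '1/epsilon       :', 1.0d0/eps
	if (cond .gt. 1.0d0/eps) write(6, *) 'ill-conditioned'
	end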

It seems to me that the closer the matrix is to being ill-conditioned, the
more one ought to consider the smallest SV to be "particularly small".  Given
a machine epsilon of 10**(-13), a difference of 10 orders of magnitude
between the largest and smallest SVs is significant.  (If the machine epsilon
were 10**(-100), a difference of 10 orders of magnitude would not seem like
much.) In light of the condition number and the machine epsilon, the smallest
two SVs are small enough that you should probably set them equal to zero
before using the results of the SVD in further computations.  With this
adjustment, LINPACK and NR give *exactly* (to *all* reported significant
figures) the same set of SVs.
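
In code, that adjustment is a one-line test against a threshold.  A
common choice of threshold is the largest singular value times a small
multiple of the machine epsilon, though the exact factor is a judgment
call; this sketch of a helper routine is illustrative, not from either
package:

	subroutine svzero(n, w, thresh)
*	Set to zero every singular value smaller than thresh, so the
*	corresponding components drop out of later computations.
	integer n, i
	double precision w(n), thresh
	do i = 1, n
		if (w(i) .lt. thresh) w(i) = 0.0d0
	end do
	return
	end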

I do not know how you intend to use the computed SVD of your matrix A, but if
you plan to solve linear systems such as Ax=b, where the vector 'b' is given
and the vector 'x' is unknown, then setting the smallest two SVs to zero may
well give *more* accurate solutions to Ax=b than if you use all of the SVs as
given.  (See the "Numerical Recipes" book, section 9, chapter 2, regarding
this claim.) Here, we are talking about solving Ax=b in the "least squares
sense", which means to find a vector 'x' which minimizes the 2-norm
(Euclidean length) of r=Ax-b (the residual vector).
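
The back-substitution step shows why zeroing helps: a tiny SV would
otherwise contribute an enormous, noise-dominated 1/w(j) term to the
solution.  A sketch in the spirit of the NR routine svbksb (the routine
name and the fixed scratch size are mine; it computes
x = V * diag(1/w) * U' * b, skipping zeroed SVs):

	subroutine svsolv(n, np, u, w, v, b, x)
*	Least-squares solution of A*x = b from the SVD A = U*W*V'.
*	For singular values that were set to zero, 1/w(j) is taken to
*	be zero, which drops those components entirely.
	integer n, np, i, j
	double precision u(np, n), w(n), v(np, n), b(n), x(n)
	double precision tmp(100), s
*	tmp = diag(1/w) * U' * b   (scratch assumes n <= 100)
	do j = 1, n
		s = 0.0d0
		if (w(j) .ne. 0.0d0) then
			do i = 1, n
				s = s + u(i, j)*b(i)
			end do
			s = s/w(j)
		endif
		tmp(j) = s
	end do
*	x = V * tmp
	do i = 1, n
		s = 0.0d0
		do j = 1, n
			s = s + v(i, j)*tmp(j)
		end do
		x(i) = s
	end do
	return
	end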

DISCLAIMER: I am not a full-time numerical analyst, so if a more knowledgeable
and experienced reader would like to let me know (via e-mail) whether they
agree/disagree with this treatment of the question, I would appreciate the
feedback.  Corrections of any blatantly incorrect statements should probably
be posted directly to the news group.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Frederick W. Chapman (N3IJC)          Campus Phone:  (215) 758-3218
User Services Group                   Network Server UserID:  FC03
Computing Center                      Internet:  FC03@NS.CC.LEHIGH.EDU
Lehigh University                     Bitnet:  FC03@LEHIGH