[net.ai] NP-completeness and parallelism

DRogers@SUMEX-AIM.ARPA@sri-unix.UUCP (08/06/83)

From:  David Rogers <DRogers@SUMEX-AIM.ARPA>

        In AIList V#32 Fred comments that "NP-completeness cannot be 
gotten around in general by building bigger or faster computers".  My
guess would be that parallelism may offer a way to reduce the order of
an algorithm, perhaps even to a polynomial order (using a machine with
"infinite parallel capacity", closely related to Turing's machine with
"infinite memory"). For example, I have heard of work developing 
sorting algorithms for parallel machines which have a lower order than
any known sequential algorithm.
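
A minimal sketch of that split-sort-merge idea, written in Python purely
as an illustration (the chunk count and helper names are assumptions, not
anyone's actual system): each of k workers sorts an n/k-sized piece
concurrently, and the sorted runs are then merged.  The sorting phase runs
roughly k times faster, though the final merge here is still sequential,
so the overall speedup is less than k.

# Sketch: sort n items with k parallel workers by splitting the input,
# sorting each chunk concurrently, then merging the sorted runs.
import heapq
from concurrent.futures import ProcessPoolExecutor

def parallel_sort(data, k=4):
    chunks = [data[i::k] for i in range(k)]             # split into k pieces
    with ProcessPoolExecutor(max_workers=k) as pool:
        sorted_chunks = list(pool.map(sorted, chunks))  # chunks sorted in parallel
    return list(heapq.merge(*sorted_chunks))            # merge the k sorted runs

if __name__ == "__main__":
    import random
    xs = [random.randint(0, 999) for _ in range(20)]
    print(parallel_sort(xs) == sorted(xs))              # True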

        Perhaps more powerful machines are truly the answer to some of
our problems, especially in vision analysis and data base searching.  
Has anyone heard of a good book discussing parallel algorithms and 
reduction in problem order?

David Rogers

DRogers@SUMEX-AIM.ARPA

ech@pyuxll.UUCP (Ned Horvath) (08/07/83)

A couple of clarifications are in order here:

1. NP-completeness of a problem means, among other things, that the
	best known algorithm for that problem has exponential
	worst-case running time on a serial processor.  That is not
	intended as a technical definition, just an operational one.
	Moreover, all NP-complete problems are related by the fact
	that if a polynomial-time algorithm is ever discovered for
	any of them, then there is a polynomial-time algorithm for
	all, so a (highly oversimplified!) working definition of
	NP-complete, as of this date, is "intrinsically exponential."

2. Perhaps obvious, but I will say so anyway: n processors yoked in
	parallel can give at most an n-fold speedup over a single
	serial processor.  For some problems (e.g. sorting), the
	achievable speedup is less than that.

The bottom line is that, for an exponential-time algorithm, the size of
the "biggest tractable problem" grows only with the log of the computing
power at your disposal; whether you increase that power by speeding up a
serial processor or by multiplying the number of processors is of small
consequence.
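
To put a number on that, here is a back-of-the-envelope sketch in Python
(the time budget and step rates are made up): for an algorithm taking 2**n
steps, the largest n that fits in a fixed time budget grows only as the log
of the steps per second available, however those steps are obtained.

# Largest problem size n solvable by a 2**n-step algorithm on a machine
# doing `steps_per_sec` steps per second for `seconds` seconds.
# Doubling the computing power only adds 1 to the feasible n.
import math

def biggest_tractable_n(steps_per_sec, seconds):
    budget = steps_per_sec * seconds
    return int(math.floor(math.log2(budget)))   # largest n with 2**n <= budget

for power in [1e6, 1e9, 1e12]:                  # thousandfold jumps in speed
    print(power, biggest_tractable_n(power, 3600))
# 1e6 -> 31, 1e9 -> 41, 1e12 -> 51: each thousandfold speedup buys only ~10 more.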

Now for the silver lining.  NP-complete problems often can be tweaked
slightly to yield "easy" problems; if you have an NP-complete problem
on your hands, go back and see if you can restrict it to a more readily
soluble problem.
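
One concrete instance of that advice, sketched in Python (the example
problem is mine, not one from the discussion above): graph coloring is
NP-complete in general, but if the application only ever needs two colors,
the restriction is a bipartiteness check that breadth-first search settles
in linear time.

# General graph coloring is NP-complete, but the restriction to two
# colors is easy: a breadth-first 2-coloring / bipartiteness check.
from collections import deque

def two_color(adj):
    """adj: dict mapping vertex -> list of neighbours.
    Returns a 0/1 color assignment, or None if no 2-coloring exists."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None                 # odd cycle: not 2-colorable
    return color

print(two_color({1: [2], 2: [1, 3], 3: [2]}))        # path: 2-colorable
print(two_color({1: [2, 3], 2: [1, 3], 3: [1, 2]}))  # triangle: None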

Also, one can often restrict to a subproblem which, while still
NP-complete, admits a heuristic whose solutions aren't too far from
optimal.  An example of this is the Travelling Salesman
Problem.  Several years ago Lewis, Rosencrantz, and Stearns at
GE Research described a heuristic that yielded solutions that were
no worse than twice the optimum if the graph obeyed the triangle
inequality (i.e. getting from A to C costs no more than going from
A to B, then B to C), a perfectly reasonable constraint.  It seems
to me that the heuristic ran in O(n-squared) or O(n log n), but
my memory may be faulty; low-order polynomial in any case.
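
Here is a sketch, in Python, of *a* triangle-inequality heuristic with the
factor-of-two guarantee: the minimum-spanning-tree "double tree" shortcut,
which runs in O(n**2).  It is not necessarily the heuristic from the paper
cited above; it just shows the flavor of the result.  The test points are
made up.

# Build a minimum spanning tree, walk it in preorder, and shortcut
# repeated vertices.  Tour length <= 2 * optimum when the distance
# matrix obeys the triangle inequality.  O(n**2) overall.
from math import dist

def approx_tsp_tour(D):
    n = len(D)
    in_tree = [False] * n
    parent = [0] * n
    best = [float("inf")] * n
    best[0] = 0.0
    children = [[] for _ in range(n)]
    for _ in range(n):                       # Prim's algorithm, O(n**2)
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if u != 0:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and D[u][v] < best[v]:
                best[v], parent[v] = D[u][v], u
    tour, stack = [], [0]
    while stack:                             # preorder walk of the MST
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour + [0]                        # shortcut back to the start

pts = [(0, 0), (0, 3), (4, 3), (4, 0), (2, 1)]       # Euclidean, so metric
D = [[dist(p, q) for q in pts] for p in pts]
tour = approx_tsp_tour(D)
print(tour, round(sum(D[a][b] for a, b in zip(tour, tour[1:])), 2))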

So: "think parallel" may or may not help.  "Think heuristic" may
help a lot!

=Ned=

sts@ssc-vax.UUCP (Stanley T Shebs) (08/08/83)

Parallel machines buy you speed improvements at the cost of space.
As an example, a reduction machine of the style being worked on by
Gyula Mago (at either unc or ncsu - don't remember) using some
variation on the Backus FP language would be very fast.  It could
multiply two NxN matrices in log N time - but would require N**4
processors (=space) to achieve this performance!  This is not really
so bad; it's certainly better than having to build machines that use
X-rays for the system clock :-).  It does mean that algorithm
analysis is always going to be useful, no matter how good the
hardware is.
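
To see where the log N figure comes from, here is a sequential Python
simulation of the parallel schedule (it does not model the FP reduction
machine itself, and the processor count needed depends on the machine
model; the point is the log-depth time, not the exact polynomial in
space): all N**3 scalar products are formed in one step, and the N
products for each output entry are then summed pairwise in a balanced
tree, taking ceil(log2 N) further rounds.

# Sequential simulation of a log-depth parallel matrix multiply:
# round 0 forms all N**3 scalar products "at once"; each later round
# halves the number of partial sums per output entry, so the parallel
# depth is 1 + ceil(log2 N) steps.
def parallel_matmul_depth(A, B):
    n = len(A)
    # One product per (i, j, k): what the parallel machine does in one step.
    partial = [[[A[i][k] * B[k][j] for k in range(n)]
                for j in range(n)] for i in range(n)]
    rounds = 0
    while any(len(partial[i][j]) > 1 for i in range(n) for j in range(n)):
        for i in range(n):
            for j in range(n):
                terms = partial[i][j]
                # Pairwise additions, all independent: one parallel round.
                partial[i][j] = [terms[t] + terms[t + 1] if t + 1 < len(terms)
                                 else terms[t] for t in range(0, len(terms), 2)]
        rounds += 1
    C = [[partial[i][j][0] for j in range(n)] for i in range(n)]
    return C, rounds

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(parallel_matmul_depth(A, B))   # ([[19, 22], [43, 50]], 1)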

					stan the lep hacker
					ssc-vax!sts (soon utah-cs)