[sci.math] What philosophical problems does complexity theory yield?

biep@klipper.UUCP (06/03/87)

Suppose P != NP. Then some things will take a long time to compute.
But so what?

Suppose someone finds out that not all problems can be solved in constant
time. Now that comes as a philosophical shock, of course, and it has
lots of implications.

But once one has overcome that shock, finding that some problems cannot
be solved in linear time may be annoying; but since the possibility
of constant time has already been destroyed, it's no great news.

As, one by one, all sorts of upper bounds on exponents prove false, and
finally even polynomial time seems not to be enough, one even gets bored
by all those variations on the same theme, no? So what exactly is so
exciting about that polynomial limit?


About constant time solutions: Seemingly linear-time solutions can often be
turned into constant-time solutions by applying parallelism. This is the
way the universe is able to simulate itself, however big it (the object) may
be or grow. I don't know what complexity the collapse of a wave-function
would have, supposing that all "time-space points" of it worked on it in
parallel.
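
A toy sketch of what I mean, assuming an idealized shared-memory machine (a
CRCW PRAM) on which any number of processors may write the same value into
one shared cell in a single step; the loop below only simulates what the n
processors would do simultaneously, so the n bits get OR-ed together in
constant parallel time.

    def parallel_or(bits):
        # One shared memory cell; each of the n "processors" looks at its
        # own bit and, if it is a 1, writes a 1 into the shared cell.  On a
        # CRCW PRAM all of that happens in a single parallel step.
        shared = 0
        for bit in bits:        # sequential stand-in for n parallel processors
            if bit:
                shared = 1
        return shared

    print(parallel_or([0, 0, 1, 0]))    # -> 1
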
But isn't anything which cannot be turned into a constant-time process
philosophically annoying? Why all the fuss about non-polynomial-time
solutions in particular? Am I missing something? (Shouldn't I have asked
that at the beginning of this article? :-))
-- 
						Biep.  (biep@cs.vu.nl via mcvax)
Popper tells us how science ought to be done - Kuhn tells us how it *is* done.
	And I ... I read neither.

ramesh@cvl.UUCP (06/04/87)

In article <789@klipper.cs.vu.nl> biep@cs.vu.nl (J. A. "Biep" Durieux) writes:

>But isn't anything which cannot be turned into a constant-time process
>philosophically annoying? Why all the fuss about non-polynomial-time
>solutions in particular? Am I missing something? (Shouldn't I have asked
>that at the beginning of this article? :-))
>-- 

Yes, you are missing the point !!

The difference between a polynomial and non-polynomial solution for 
a problem is the difference between structure and a complete lack
of it. If P != NP, we would have shown that some problems can be 
solved only by something similar to a dumb exhaustive search over
the solution space, i.e. there is not enough structure in the 
problem to constrain its solutions.
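
To see what a dumb exhaustive search over the solution space looks like in
practice, here is a toy sketch for one NP-complete problem (Subset Sum,
chosen purely as an illustration): with no structure to exploit, one simply
tries all 2^n subsets.

    from itertools import combinations

    def subset_sum_bruteforce(nums, target):
        # Nothing to exploit, so enumerate all 2^n subsets of the input.
        for r in range(len(nums) + 1):
            for subset in combinations(nums, r):
                if sum(subset) == target:
                    return subset
        return None

    print(subset_sum_bruteforce([3, 34, 4, 12, 5, 2], 9))   # -> (4, 5)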

Graph theorists have found Eulerian circuits very interesting, 
and there have been very strong theorems proved about graphs
with this property. However, the seemingly similar problem 
of Hamiltonian circuits has almost no characterisation, in spite
of diligent efforts for the past 100 years or so. The theory of
NP-completeness explains this anomaly: the Eulerian circuit problem can be
solved in linear time while the Hamiltonian circuit problem is NP-complete !!!

				Ramesh

(Defn:
      Eulerian ckt: A circuit passing through all edges of a graph
	without repeating an edge.
      Hamiltonian ckt: A circuit passing through all the vertices
	of a graph without repeating a vertex.
)
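
To make the contrast concrete, here is a toy sketch (assuming an undirected,
connected graph given as a dict from each vertex to the set of its
neighbours): the Eulerian test just applies Euler's even-degree theorem and
runs in linear time, while the Hamiltonian test has nothing better to fall
back on than trying every ordering of the vertices.

    from itertools import permutations

    def has_eulerian_circuit(adj):
        # Euler's theorem: a connected graph has an Eulerian circuit iff
        # every vertex has even degree -- a linear-time check.
        return all(len(nbrs) % 2 == 0 for nbrs in adj.values())

    def has_hamiltonian_circuit(adj):
        # No comparable characterisation is known, so fall back on
        # exhaustive search over all orderings of the vertices.
        verts = list(adj)
        first, rest = verts[0], verts[1:]
        for perm in permutations(rest):
            cycle = [first] + list(perm) + [first]
            if all(b in adj[a] for a, b in zip(cycle, cycle[1:])):
                return True
        return False

    square = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}   # a 4-cycle
    print(has_eulerian_circuit(square), has_hamiltonian_circuit(square))  # True True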
 

moll@umn-cs.UUCP (06/06/87)

In article <789@klipper.cs.vu.nl> biep@cs.vu.nl (J. A. "Biep" Durieux) writes:
>Suppose P != NP. Then some things will take a long time to compute.
>But so what?
>As, one by one, all sorts of upper bounds on exponents prove false, and[...]

You seem to be implying that if P=NP then *all* problems can be solved in
polynomial time.  This is certainly not so.  Given any computable function
f(x), one can construct (by diagonalization) a problem which can be solved,
but cannot be solved in time f(n) on a Turing machine.  I believe
the same proof would work for parallel machines. 
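
Roughly, the diagonal problem looks like the sketch below. It is only a
sketch: simulate(i, x, steps) is a hypothetical helper that runs the i-th
Turing machine on input x for at most that many steps and reports whether it
accepted within the budget, and I am ignoring the simulation overhead and
constant factors that the real time-hierarchy argument has to handle.

    def diagonal(i, f, simulate):
        # Accept index i exactly when machine number i, run on its own
        # index, does NOT accept within f(|i|) steps.  Any machine deciding
        # this language within f(n) steps would have to disagree with
        # itself on its own index -- the usual diagonal contradiction.
        budget = f(len(i))
        return not simulate(i, i, budget)
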

>About constant time solutions: Seemingly linear-time solutions can often be
>turned into constant-time solutions by applying parallelism.  [...]

Note that P = NP is stated as a problem about Turing machines (or sometimes
single-processor random access machines).  Any problem in the class NP
can definitely be solved in polynomial time if one is allowed to use an
arbitrary number (varying with the size of the problem instance) of
processors.
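
For instance (only a sketch, with the exponentially many processors simulated
by an ordinary loop, and with check standing for whatever polynomial-time
verifier the NP problem comes with): hand each processor its own candidate
certificate, let each one run the verifier, and OR the results. The parallel
time is then just the cost of a single check, which is polynomial, but the
number of processors grows like 2^n.

    from itertools import product

    def solve_np_in_parallel(check, n):
        # check(certificate) is the polynomial-time verifier; a certificate
        # here is an n-bit tuple.  Each iteration of this loop stands in for
        # one of the 2^n processors doing its single check.
        return any(check(bits) for bits in product((0, 1), repeat=n))

    # e.g. satisfiability of (x0 or x1) and (not x0 or x2), with n = 3 variables
    print(solve_np_in_parallel(lambda b: (b[0] or b[1]) and (not b[0] or b[2]), 3))   # True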

biep@klipper.UUCP (06/10/87)

In article <789@klipper.cs.vu.nl> biep@cs.vu.nl (J. A. "Biep" Durieux) writes:
>But isn't anything which cannot be turned into a constant-time process
>philosophically annoying? Why all the fuss about non-polynomial-time
>solutions in particular? Am I missing something?

In article <2258@cvl.umd.edu> ramesh@cvl.UUCP (Ramesh Sitaraman) writes:
>Yes, you are missing the point !!
>
>The difference between a polynomial and non-polynomial solution for 
>a problem is the difference between structure and a complete lack
>of it.

Thanks a lot, this sounds much more relevant than just computation time.
But isn't finding the smallest element of a set also solvable only by "dumb
exhaustive search"? Do people have that much trouble with such a
linear algorithm too?
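
(What I mean by that search: just one look at each of the n elements, as in
the little sketch below; "exhaustive", but over n candidates rather than over
a solution space of size 2^n or n!.)

    def smallest(xs):
        # One look at each element: n steps in all.
        best = xs[0]
        for x in xs[1:]:
            if x < best:
                best = x
        return best

    print(smallest([7, 3, 9, 5]))   # -> 3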

Also, thanks for including the defs! Such things make the net a whole lot
more readable.

But: please don't put your mail address on the "Follow-up-to: " line.
I'm having a terrible time getting this article out!

Inews
feeding
time