[sci.philosophy.tech] Complexity and Philosophy

tedrick@ernie.Berkeley.EDU.UUCP (05/28/87)

>Lately I've been chatting informally to a philosopher/friend about
>common interests in our work.  He was unfamiliar with the concept of the
>TIME TO COMPUTE consequences of facts.  Furthermore, the ramifications of
>intractability (i.e. if P != NP is, as we all suspect, true) seemed to
>be new to my friend.  The absolute consequences are hard to get across
>to a non-computer scientist; they always say "but computers are getting
>faster all the time...".
>
>I'm digging around for references in AI on these ideas. This isn't my area.
>Can anyone suggest some?

I believe the philosophical consequences of complexity theory are
enormous and that the field is wide open for someone with the
ambition to pursue it.

obnoxio@BRAHMS.BERKELEY.EDU.UUCP (05/29/87)

In article <19071@ucbvax.BERKELEY.EDU>, tedrick@ernie (Tom Tedrick) writes:
>I believe the philosophical consequences of complexity theory are
>enormous and that the field is wide open for someone with the
>ambition to pursue it.

Any suggestions?

All I can think of is some strictly AI "anthropic principle" constraints
on how the mind "works".  Thus, our mind presumably follows some polynomial
time algorithm when doing various tasks.  If the ideal version of these
tasks involves an NP-hard problem, then we would expect any P-based model
to inevitably err, much as in real life.  (Assuming P!=NP, of course.)
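
To make that concrete, here is a minimal sketch (my own toy example, not
anything from the posts): a polynomial-time greedy heuristic for the
NP-hard PARTITION problem, compared against exponential brute force on a
small instance where the fast heuristic gets the wrong answer.

from itertools import combinations

def greedy_partition(nums):
    # polynomial time: drop each number into the currently lighter bin
    a = b = 0
    for x in sorted(nums, reverse=True):
        if a <= b:
            a += x
        else:
            b += x
    return abs(a - b)               # imbalance achieved by the heuristic

def best_partition(nums):
    # exponential time: try every subset and keep the best split
    total = sum(nums)
    return min(abs(total - 2 * sum(c))
               for r in range(len(nums) + 1)
               for c in combinations(nums, r))

nums = [3, 3, 2, 2, 2]
print(greedy_partition(nums))   # 2: the polynomial heuristic errs here
print(best_partition(nums))     # 0: the exact (slow) answer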

Now, beyond this--I'm stymied.  An example: even though physics might be
boiled down to automata theory some day, I fail to see how "time" would
mean anything to such low-level calculations.  If wave-function collapse
or what have you is an exponentially hard, or even non-recursive computa-
tion--that seems perfectly reasonable to me.  We need never notice, be-
cause of renormalization.

Or are you proposing a computational complexity implementation of Occam's
razor?

ucbvax!brahms!weemba	Matthew P Wiener/Brahms Gang/Berkeley CA 94720

litow@uwm-cs.UUCP (Dr. B. Litow) (05/30/87)

>  An example: even though physics might be
> boiled down to automata theory some day, I fail to see how "time" would
> mean anything to such low-level calculations.  If wave-function collapse
> or what have you is an exponentially hard, or even non-recursive computa-
> tion--that seems perfectly reasonable to me.  We need never notice, be-
> cause of renormalization.
> 
> Or are you proposing a computational complexity implementation of Occam's
> razor?
> 
> ucbvax!brahms!weemba	Matthew P Wiener/Brahms Gang/Berkeley CA 94720

I doubt very much that physics is reducible to automaton theory because I
do not believe that automaton theory is reducible to itself. By that I 
only mean that we cannot understand automata behaviors in a purely
combinatorial way. The wave function collapsing mentioned in a previous
posting (included above in part) is apparently widely believed but in
'secret'. Because of the way in which I define computer science, this notion
is disturbing to me. I should say that I 'sort of' believe it. CS is the
working out, largely mathematically as in physics, of the consequences of
assuming Church's Thesis. If some cooperative quantum effect, e.g. spin
glasses, can really be used to solve problems in, say, PTIME which have been
shown (some day) not to be in PTIME, then CT is called into question. Now
one objects that these 'damn fast' computations hold only over a finite range
of data sizes, but I think that there may be theorems of CS which could
extend 'damn fast' computations over all sizes. 

I know this is vague and I am not sure that even I would agree with
everything written here. But it is important to grasp how little is
known about computing. I would suggest also that recent work in
dynamical systems points to a severe problem in the computability of
approximations to D.E.s. It would seem that even qualitative features, i.e.
'...there is a gap in the spectrum in two places though we can't compute
the bands...', may not be reliably predictable before the fact because
there can be (under CT) no computable a priori estimation of the error.
Inability to use D.E.s to do such prediction runs exactly counter to
Newton's (and before him Galileo, Descartes, Kepler) mathematization of Nature.

webber@brandx.rutgers.edu (Webber) (05/31/87)

[Two replies, the first is to the following from Matthew P Wiener 
(ucbvax!brahms!weemba)]

>  An example: even though physics might be
> boiled down to automata theory some day, I fail to see how "time" would
> mean anything to such low-level calculations.  If wave-function collapse
> or what have you is an exponentially hard, or even non-recursive computa-
> tion--that seems perfectly reasonable to me.  We need never notice, be-
> cause of renormalization.

If one thinks of the universe as a state description, then the
automaton that models it would be defined:

    Let S be the set of all state descriptions.
    Introduce a directed graph that has elements of S as vertices and
     each vertex s1 has one outward pointing arc indicating the state
     description s2 that follows s1 after one time unit.
    Distinguish a start state, s0, as the `beginning of the universe'.
    Introduce an `interpretation rule' that a computation consists of
     moving from one state to the next.

Such a model gives us a perfectly reasonable infinite automaton
representation of the universe.  Given that we have a way of
determining what `state' the universe is in and how many time units
have passed, it even gives us a prediction as to what `state' the
universe should change to.
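
In code the construction is tiny.  The sketch below is my own toy
rendering of it (the particular states and transition rule are made up,
not part of the model above): states are arbitrary tuples, and the single
outward arc from each state is given by a successor function.

def successor(s):
    # the one outward arc from state s; any rule over the whole state will do
    return tuple((x + 1) % 5 for x in s)

def run(s0, steps):
    # a `computation' is just following the arcs, one time unit per step
    s = s0
    for _ in range(steps):
        s = successor(s)
    return s

print(run((0, 3, 1), 4))   # the state four time units after s0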

However, such a description of the behavior of the universe is a bit
large to keep in one's pocket, so we impose some structure and hope
things will work out.  Let p(n,s) denote the nth piece of the state
descriptor s.  We then make a locality assumption and claim that for
each n, there exists a set N' such that p(n,s1) is a function of
p(N',s0).  From this, we introduce the notion of a parallel automaton
computing the universe.  At each step, each part of the state vector
is computed simultaneously.  Someone in a universe that could be
represented this way would probably not directly perceive this
structure.  However, someone who lived in a universe that could not be
represented this way might conceivably notice that predictions based
on this model never worked out.  How much along these lines they could
observe would depend on the universe in question.

However, this description of the universe is still too large to fit in
one's pocket.  Next we introduce a finiteness constraint, i.e., for
each n, the associated N' is finite.  Now we can carry around in our
pocket the description of one small piece of the universe; however,
the whole universe could still elude us.  Still, we have gained the
ability to observe a finite portion of the universe and then predict
what state some other finite portion of the universe will assume.

What we need is a way of determining N' for any given n.  Also, you
might have already noticed that once we have N', we will need a way of
determining p(N',s0).  To have a `way of determining' something
formalizes into the notion that there is a computable function that
calculates that something.  Loosely, the function connecting n with N'
defines what we think of as the geometry of a universe and the
function that simulates p corresponds to the physical laws of that
universe.
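
A minimal sketch of that split, under the locality and finiteness
assumptions above (the names and the particular choices of a ring
geometry and a parity rule are mine, not the poster's): the function
taking n to N' plays the role of the geometry, and the local rule plays
the role of the physical law, with every piece of the state vector
updated `simultaneously' from the old state.

SIZE = 16

def neighborhood(n):
    # N' for piece n: a ring geometry with nearest neighbours
    return [(n - 1) % SIZE, n, (n + 1) % SIZE]

def local_rule(values):
    # p(n,s1) as a function of p(N',s0); here, a parity rule
    return sum(values) % 2

def step(s0):
    # every piece of the new state is computed from the old one "in parallel"
    return [local_rule([s0[m] for m in neighborhood(n)])
            for n in range(SIZE)]

s = [0] * SIZE
s[SIZE // 2] = 1               # a one-bit `beginning of the universe'
for _ in range(5):
    s = step(s)
print(s)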

One aspect of computability is that the person doing the computation
will exist in the universe being simulated.  Thus these functions must
define a system within which they themselves can be defined (under
suitable mappings).  From this we observe that in order to understand
the universe, it will be useful to figure out what functions can be
implemented within the universe.

A second observation is that it really doesn't matter if a description
of the universe can fit in one's pocket unless one can `understand' it in time
to make some use of it (such as verifying a prediction).  This
introduces the question of the complexity of the relevant computable
functions.  Complexity is not a direct measure of the rules of a
universe, but rather an investigation of how the rules could be
evaluated by an observer within a system that follows those rules.

> Or are you proposing a computational complexity implementation of Occam's
> razor?

Well, given that whatever set of rules you choose to use, you will
have to evaluate them in order to check them, one might as well begin
with the easy ones.  Of course, there is no guarantee that one of the
easy ones will work, just as there is no guarantee that the universe
has any of this structure.  However, it would appear even less
productive to assume that the universe is not totally understandable.

[and then a reply to the reply to the above discussed message
from Matthew Wiener, sent by B. Litow (litow@uwm-cs.UUCP)]

> I doubt very much that physics is reducible to automaton theory because I
> do not believe that automaton theory is reducible to itself. By that I 
> only mean that we cannot understand automata behaviors in a purely
> combinatorial way. 

Here we are moving yet another step further and saying that there are
actually very few automata models that we know how to analyze from a
global point of view.  However, at least these models give us clear
predictions at a local level.  We have not really been working on the
problem of global analysis long enough to know whether it is a
`real' difficulty or just a temporary lack of insight.

>The wave function collapsing mentioned in a previous
> posting (included above in part) is apparently widely believed but in
> 'secret'. 

This is a fascinating secret.  It has been revealed and yet I still
don't know what it is.

>Because of the way in which I define computer science, this notion
> is disturbing to me. I should say that I 'sort of' believe it. CS is the
> working out, largely mathematically as in physics, of the consequences of
> assuming Church's Thesis. ...

Computer science (perhaps we should call it Computability Science) is
not restricted to studying the ramifications of Church's Thesis.
Church's Thesis is just the currently most productive hypothesis being
investigated.  There are many other possibilities, such as stochastic
automata.  However, it will probably be days before we know the
answers to all the questions of the universe.

---------------------------- BOB (webber@aramis.rutgers.edu)

     I never metaphysics I didn't like.
                                         Anonymous

kyukawa@watdragon.UUCP (06/03/87)

Check out this paper:

Computational Complexity and the Universal Acceptance of Logic,
Christopher Cherniak,
The Journal of Philosophy, No. 12, 1984.

As I haven't read it carefully, I refrain from making comments.

Kei Yukawa
kyukawa@waterloo.csnet

moll@umn-cs.UUCP (06/07/87)

In article <634@uwm-cs.UUCP> litow@uwm-cs.UUCP (Dr. B. Litow) writes:
>is disturbing to me. I should say that I 'sort of' believe it. CS is the
>working out, largely mathematically as in physics, of the consequences of
>assuming Church's Thesis. If some cooperative quantum effect, e.g. spin
>glasses, can really be used to solve problems in, say, PTIME which have been
>shown (some day) not to be in PTIME, then CT is called into question.

Church's Thesis (also known, I believe, as the Church-Turing thesis) is
the assertion that every "effectively computable" function is computable
by Turing machine.  It says only that anything you can compute in *finite*
time on any machine can be done in *finite* time on a Turing machine.
The TM implementation may take much longer.  The existence of a faster
machine doesn't invalidate the thesis; disproving it would require the
existence of a machine which could compute something in finite time that
no TM can compute in finite time.
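
A throwaway illustration (mine, not from the post): both programs below
compute the same function, one in exponential time and one in linear
time.  Church's thesis is equally content with either, since it speaks
only of what can be computed at all, not of how long it takes.

def fib_slow(n):
    # exponential-time recursion
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

def fib_fast(n):
    # linear-time iteration; the same function on a much faster "machine"
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_slow(20), fib_fast(20))   # both 6765; speed is beside the point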

Note: There are many equivalent ways of stating Church's thesis.
Rather than Turing machines, for example, one can use recursive functions
or Lambda calculus.  These forms are, however, provably equivalent.  These
are the only statements I have ever seen referred to as "Church's thesis".
Is there some stronger statement, perhaps referring to speed of computation,
which is also referred to as "Church's thesis"?
 
>one objects that these 'damn fast' computations hold only over a finite range
>of data sizes, but I think that there may be theorems of CS which could
>extend 'damn fast' computations over all sizes. 

If I understand what you're saying, then there is no such theorem.  Any
finite segment of any function can be computed "damn fast" simply by
coding a table of values into the program.  There are, however, functions
for which the best machine to compute the *whole* function is "damn slow".
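
A small sketch of that observation (my own toy code, not from the post):
tabulating a finite segment of an expensive function makes that segment
"damn fast", but the whole function is no easier than before.

from itertools import combinations

def slow(n):
    # exponential-time brute force: count subsets of {1..n} that sum to n
    return sum(1 for r in range(n + 1)
                 for c in combinations(range(1, n + 1), r)
                 if sum(c) == n)

TABLE = {n: slow(n) for n in range(15)}   # one finite segment, built once

def fast(n):
    # constant time on the tabulated segment; useless beyond it
    return TABLE[n]

print(fast(12), slow(12))   # same value; only one of these scales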

Reference: _Introduction to Automata Theory, Languages and Computation_
by Hopcroft and Ullman.