[net.philosophy] Godel and Turing

throopw@rtp47.UUCP (Wayne Throop) (10/16/85)

> My understanding is that Godel's incompleteness theorems prove
> (assuming the consistency of Arithmetic) that no Turing machine
> can possibly simulate the human mind.
>
> This is because for any particular Turing machine there are certain
> statements that the human mind can recognize as true (again with
> the consistency assumption), that the machine cannot recognize
> as true.
>
> Does anyone dispute this?
>
>   tedrick@ucbernie.ARPA

I do.  For several reasons.

First, there is no reason for the simulation to be consistent, unless
the claim is made that Humans are consistent, which I find hard to
believe.

Second, assuming that physical laws are consistent (in about the same
way that formal systems are consistent), there is reason to think that
Humans are "simulated" or "implemented" using consistent, formal,
physical laws.  Thus, Humans are arguably *already* implemented in
a "consistent formal system", so I see no reason that they couldn't
be simulated to any degree of accuracy you please in another formal
system.

Thirdly, there is no particular reason to believe that there is no
"Godel Sentence" for Humanity: a statement that no human (or perhaps
that some particular human) can "see to be true", but that some other
being *can* so see.

I also note in passing (as I understand things) that nobody has ever
actually found a Godel sentence for the formal system of "mathematics"
or "logic".  It has simply been proven that such sentences exist.
So the claim that "there are certain statements that the human mind
can recognize as true [that a machine cannot]" is quite suspect.
In fact, I suspect that no unaided human could find a Godel sentence
for even the most limited and basic (but serious) attempt at
"artificial intelligence".
-- 
Wayne Throop at Data General, RTP, NC
<the-known-world>!mcnc!rti-sel!rtp47!throopw

throopw@rtp47.UUCP (Wayne Throop) (10/18/85)

> I claim that if we make the consistency assumption, and assume
> that the mind is equivalent to a Turing machine, we get a
> contradiction in that there are true statements recognizable
> by the mind which are not recognizable by the machine.
> Maybe I'm wrong but if I am I hope someone can explain to
> me why I am wrong.
>                       -Tom    tedrick@ucbernie.ARPA

I can see three ways out of the dilemma.  I've mentioned them before,
implicitly, but I'll try to make them more explicit, and state exactly
why each scenario is an escape from contradiction.

One is to note that "Turing machines" need not implement consistent
formal systems, thus contradicting one of the assumptions.  This is the
position that minds are not consistent, so Godel's theorem doesn't
apply.
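
A trivial sketch of the point, in Python (with a made-up one-atom
language, so take it as illustration only): an inconsistent formal
system is still perfectly computable.

    # A deliberately inconsistent "formal system": its axioms are P and
    # ~P, so by the rule of explosion *every* well-formed sentence is a
    # theorem.  A Turing machine enumerates the theorems effortlessly.
    def theorems():
        """Yield every sentence over the single atom P: P, ~P, ~~P, ..."""
        s = "P"
        while True:
            yield s
            s = "~" + s

    gen = theorems()
    for _ in range(4):
        print(next(gen))    # prints P, ~P, ~~P, ~~~P: all "theorems"

Consistency is a property of the formal system a machine implements,
not of the machine's ability to implement it.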

Another is to note that (if minds are consistent) Godel's theorem
applies to the entire simulation, including the underlying Turing
machine.  That is, assuming that "the mind" is a consistent formal
system, there will be statements that the particular mind in question
cannot recognize as true, but which indeed are true.  The fact that some
other mind can see the truth of these statements is irrelevant.  The
fact that "the mind" is "running on" a Turing machine rather than on a
"neural network" is likewise irrelevant.  This is the position that the
Turing simulation is not subject to scrutiny by the mind it is
simulating... that they are one and the same thing.
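
Symbolically (a sketch, writing \vdash for provability): if mind M is
a consistent formal system of sufficient power, Godel gives a true
sentence G_M with

    M \nvdash G_M ,

and while some other mind M' may well have M' \vdash G_M, that M' is
in turn stuck on its own G_{M'}.  No contradiction arises, because the
simulation and the simulated mind are one system, M.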

Yet another is to note that human minds have finite capacity.  Assume
that the Turing simulation is a formal system subject to scrutiny by the
mind it is simulating.  Nevertheless, it might be far beyond the
capacity of that mind to discover the Godel sentence of the formal
system upon which it is simulated.
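
For a rough sense of scale (on numbers I am inventing purely for
illustration): the canonical Godel sentence contains a description of
the system's proof-checking machinery, so its length satisfies roughly

    |G_T| \gtrsim |\text{description of } T| .

If describing a brain-scale system took, say, one bit per synapse
(about 10^{14} of them), the sentence would run to some 10^{13}
characters, millions of volumes' worth; no unaided human could even
read it, much less verify it.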


All of these call into question the claim that "there are true
statements recognizable by the mind which are not recognizable by the
machine." The first says that the conditions under which this is true
are not met.  The second says that there is no separate "machine" and
"mind" such that one can do something the other cannot.  And the third
says that the capacity of the simulated mind will simply be limited by
the complexity of the formal system.

I'm rather partial to the first refutation myself.  The examples of
"minds" that I've seen don't seem very consistant to me.  (And, alas,
the "Turing machine" I'm writing this note on seems inconsistant all too
often... :-)
-- 
Wayne Throop at Data General, RTP, NC
<the-known-world>!mcnc!rti-sel!rtp47!throopw

throopw@rtp47.UUCP (Wayne Throop) (10/20/85)

Some points raised by Tom Tedrick with regard to Godel's incompleteness
theorem seem a little incorrect to me.  In particular:

> The issue is that humans seem to recognize that certain formal systems
> are consistent, but that this consistency cannot be proved within the
> system. This mysterious ability to recognize such things being something
> lacking in deterministic machines, I claim there is a distinction
> between the human mind and any Turing machine.

First, Godel's theorem didn't have anything at all to do with recognizing
consistent formal systems.  It just states some properties that
consistent formal systems of "sufficient power" must have, in particular,
incompleteness.  This "mysterious ability" is something of your own
invention.
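
For reference, here is roughly what the theorem *does* say (my own
paraphrase, with the formal part in LaTeX notation): if F is a
consistent, effectively axiomatized system strong enough to encode
basic arithmetic, then there is a sentence G_F with

    F \nvdash G_F \quad\text{and}\quad F \nvdash \neg G_F .

Nothing in that statement mentions anyone's ability to *recognize*
consistency.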

Second, even allowing humans this "mysterious ability", I don't see why
machines "lack" it.  Do you have evidence or proof that all machines
must "lack" this "mysterious ability" (or is it just that the ones you
are currently familiar with seem to lack it?)

> Exhibit the turing machine that is claimed to be equivalent to the human
> mind, and the human mind can reason about the system in ways impossible
> within the system.  Thus we contradict the assumption that the machine
> was equivalent to the mind.

This doesn't follow at all.  Your statement that "the human mind can
reason about the system in ways impossible within the system" is a
simple assertion, with no backing (certainly not by Godel's
incompleteness theorem).

If you are going to make the assumption "human mind H is equivalent to
Turing machine T", then one possibility is that H (if consistent) *indeed
cannot* know certain things about machine T.  (Or are you asserting that
humans *must be* capable of perfect self-knowledge?)

In any event, the key here is that you have simply made a set of
contradictory assumptions, namely, 1) T is equivalent to H, 2) T is a
consistent formal system, and 3) H is complete.  You can throw out any
of these assumptions... Godel doesn't help you choose which one to throw
out.
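
To spell the contradiction out as a sketch: from (2), Godel gives a
true sentence G_T with

    T \nvdash G_T ;

from (3), H recognizes the truth of G_T; and from (1), H is equivalent
to T, so

    T \vdash G_T .

Contradiction; hence at least one of the three assumptions must go.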

The most suspicious assumption I see there is "H is complete".  By
this assumption (that "humans" (aside: I'm not sure if you mean "all
humans", or "any human" or "some human", but let that pass) can discover
a Godel sentence for any given formal system (in this case T)), you are
asserting (in essence) that "humans" can *always* tell truth from
falsehood in formal mathematical systems.  This doesn't seem like a
tenable position to take.

> I don't believe human beings are deterministic. I also don't accept the
> laws of physics as absolute. I accept them as an absolutely brilliant
> model but not as complete truth. I don't accept the notion that the
> human being is just a very complex machine.

These, however, are *assumptions*.  They are not *proven* by anybody I
am aware of.

By the way, I'd be interested in knowing *why* you "don't accept the
notion that the human being is just a very complex machine." Do you also
"not accept" the notion that "the human being is just a primate", or
that "the human being is just a mammal"?  How about "insects are just
complex machines"?  "Reptiles are just complex machines"?  "Mammals are
just complex machines"?  "The (other) primates are just complex
machines"?

In other words, just what *is* "machine-like" and what is not, and why
do you draw the dividing line where you do?  I hope you don't think I'm
being nasty here; I'd really like to know.  I myself don't see any
definite boundaries here to point to as the reason for definitely
segregating humans from "machine-like things" (that is, things that
"merely" follow the "laws of physics").

> *IS THERE ANYONE THAT AGREES WITH ME THAT THE HUMAN MIND IS PROVABLY
>  NOT EQUIVALENT TO A TURING MACHINE?*

It would help if you said what proof you are talking about.  If you mean
"Is Godel's incompleteness theorem such a proof?", the answer is
"definitely not".

If anybody *does* agree with you that the human mind is *PROVABLY* "more
powerful than" a general recursive formal system, I'd be interested in
hearing what they think the proof is.  (In my opinion, God Himself is no
more powerful than a general recursive formal system :-)

>           tedrick@ucbernie.ARPA
-- 
Wayne Throop at Data General, RTP, NC
<the-known-world>!mcnc!rti-sel!rtp47!throopw

tmoody@sjuvax.UUCP (T. Moody) (10/20/85)

In article <220@rtp47.UUCP> throopw@rtp47.UUCP (Wayne Throop) writes:
>
>I also note in passing (as I understand things) that nobody has ever
>actually found a Godel sentence for the formal system of "mathematics"
>or "logic".  It has simply been proven that such sentences exist.
>-- 
>Wayne Throop at Data General, RTP, NC

This is not quite correct.  Goedel's proof is a *constructive* proof; it
produces a Goedel sentence whose interpretation is "this sentence is not
provable in this formal system."
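
More precisely, the construction goes through the diagonal lemma:
writing Prov_F for the provability predicate of the system F, one
exhibits a concrete sentence G with

    F \vdash \bigl( G \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner) \bigr) ,

so that G in effect asserts its own unprovability in F.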


Todd Moody                 |  {allegra|astrovax|bpa|burdvax}!sjuvax!tmoody
Philosophy Department      |
St. Joseph's U.            |         "I couldn't fail to
Philadelphia, PA   19131   |          disagree with you less."