[net.math] Science Attack

tedrick@ucbernie.BERKELEY.EDU (Tom Tedrick) (10/16/85)

In article <272@umich.UUCP> torek@umich.UUCP (Paul V. Torek ) writes:
>In article <10642@ucbvax.ARPA> tedrick@ucbernie.UUCP (Tom Tedrick) writes:
>>...for any particular Turing machine there are certain
>>statements that the human mind can recognize as true (again with
>>the consistency assumption), that the machine cannot recognize
>>as true.
>>
>>Does anyone dispute this?
>
>Yes.  If the human brain is essentially a Turing machine, then for any
>particular human (or group of them) there is at least one statement that
>he (they) cannot recognize as true.  Not very earthshattering, given that
>there are probably lots of complex mathematical theorems which are true
>but which no human will ever recognize as true.
>
>--Paul V Torek						torek@umich

I don't understand your argument. I claim that the human mind
cannot be essentially a Turing machine. If we assume that a
particular mind is equivalent to a particular Turing machine,
then we immediately get a contradiction, namely there exists
a statement recognizable as true by that human mind which is
not recognizable as true by that Turing machine.

Can anyone explain to me what if anything is wrong with my
reasoning?

Thanks very much,

     -Tom
      tedrick@ucbernie.ARPA

torek@umich.UUCP (Paul V. Torek ) (10/17/85)

In article <10671@ucbvax.ARPA> tedrick (Tom Tedrick) writes:

>>there are probably lots of complex mathematical theorems which are true
>>but which no human will ever recognize as true.
>
>I don't understand your argument. I claim that the human mind
>cannot be essentially a Turing machine. If we assume that a
>particular mind is equivalent to a particular Turing machine,
>then we immediately get a contradiction, namely there exists
>a statement recognizable as true by that human mind which is
>not recognizable as true by that Turing machine.

Which one?  The statement that is not recognizable by the Turing
machine may be *extremely* complex -- what makes you so damn sure
you could recognize it as true?  Tell me, Tom, is it true that every
even number greater than two is the sum of two primes?  What, you don't
know?  Then you get the point -- I hope.
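
[A minimal sketch in Python -- an editorial illustration, not part of
Paul's post; the function names are invented.  Each even instance can
be checked mechanically, yet no finite run of such checks recognizes
the general statement as true:]

    def is_prime(n):
        """Trial division; fine for the small n used here."""
        if n < 2:
            return False
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                return False
        return True

    def goldbach_pair(n):
        """Return primes (p, q) with p + q == n, for even n > 2."""
        for p in range(2, n // 2 + 1):
            if is_prime(p) and is_prime(n - p):
                return (p, n - p)
        return None   # a None here would be a counterexample

    for n in range(4, 30, 2):
        p, q = goldbach_pair(n)
        print(f"{n} = {p} + {q}")   # evidence, not proof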

--Paul V Torek, making flames the old-fashioned way -- earning them.

lambert@boring.UUCP (10/18/85)

(I have missed most of the discussion, since American net philosophy does
not make it to this side of the Atlantic.)

> I don't understand your argument. I claim that the human mind
> cannot be essentially a Turing machine. If we assume that a
> particular mind is equivalent to a particular Turing machine,
> then we immediately get a contradiction, namely there exists
> a statement recognizable as true by that human mind which is
> not recognizable as true by that Turing machine.

> Can anyone explain to me what if anything is wrong with my
> reasoning?

The following attempt uses a device that is, unless I am mistaken, due to
Quine.

Consider texts (some of which represent statements, such as: "Two times two
equals four" and "`Two times two equals four' is a true statement about
natural numbers", and some of which do not, like "Who? Me?"  and "Don't
`Aw, mom' me".).  Some of these texts contain *internal* quoted texts.  If
T is a text, then let Q(T), or, in words, T *quoted*, stand for another
text, consisting of T put between the quotes "`" and "'". So if T is

    "Two times two equals for",

Q(T) is

    "`Two times two equals for'".

Let SQ(T), or T *self*quoted, mean: Q(T) followed by T.

So if T is

    " contains no digits"

then T, selfquoted, is

    "` contains no digits' contains no digits"

(which is a true statement).

Now consider the text S =

    "`, selfquoted, is not recognizable as true by the mind of Tom',
     selfquoted, is not recognizable as true by the mind of Tom".

S is a statement, and states that some text T, selfquoted, is not
recognizable as true by the mind of Tom.

So can Tom (or his mind) recognize SQ(T) as true, and is SQ(T) true in the
first place?

If Tom can recognize SQ(T) as true, then S is apparently false.  But note
that T is the text

    ", selfquoted, is not recognizable as true by the mind of Tom",

so SQ(T) = S.  So Tom would have recognized a false statement as true.  If
we collectively assume that Tom would never do such a thing, then all of us
non-Toms can now recognize S as true, something Tom cannot.
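
[The device is mechanical enough to run.  A minimal sketch in Python --
an editorial illustration, not part of Lambert's post; Q and SQ follow
his definitions above:]

    def Q(t):
        """T quoted: T put between the quotes ` and '."""
        return "`" + t + "'"

    def SQ(t):
        """T selfquoted: Q(T) followed by T."""
        return Q(t) + t

    # The warm-up example, a selfquoted text that happens to be true:
    t = " contains no digits"
    print(SQ(t))        # ` contains no digits' contains no digits
    assert not any(c.isdigit() for c in Q(t))   # its claim holds

    # The main construction:
    T = ", selfquoted, is not recognizable as true by the mind of Tom"
    S = SQ(T)
    print(S)
    # The subject of S is "T, selfquoted", i.e. SQ(T) -- but SQ(T) is
    # S itself, so S asserts its own unrecognizability to Tom.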

If "Tom" is consistently replaced by "human being", then the argument still
goes through.  Neither I, nor you, or anyone else, can recognize that
statement as true without showing its falsehood (and human fallibility).
We would have to wait for some non-human intelligence telling us it is
true, but although we might believe it, we still could not recognize it as
being true.  (Now we might think that it is false, which may or may not be
quite true, but than it follows again that not all humans can be
infallible.)

This may all seem shallow.  But for me (to take an arbitrary example:-) to
assert that the mind of a fellow human being can recognize something as
true, with the same level of certainty as in mathematical proofs, requires
a rather total understanding of that mind that, at least for me, is still
lacking.  More so if I would also have to recognize the infallibility of
that mind (which is an implicit assumption throughout).  With my own mind,
I have thus far not succeeded.  I guess it is the same for other people.

What the original reasoning really shows is that if we were, somehow, to
construct a Turing-machine description of the workings of our own mind, we
could not with mathematical certainty recognize it as being that.  Neither
can a Turing machine do this for its own construction; or if it can, then
it is either fallible or has glaring defects in its logical power.  Applied
to human beings, the conclusion is not a big surprise.  It does not follow
that human minds are not Turing machines (although their memory tapes seem
not to be infinite:-).
-- 

     Lambert Meertens
     ...!{seismo,okstate,garfield,decvax,philabs}!lambert@mcvax.UUCP
     CWI (Centre for Mathematics and Computer Science), Amsterdam

tedrick@ucbernie.BERKELEY.EDU (Tom Tedrick) (10/18/85)

In article <299@umich.UUCP> torek@umich.UUCP (Paul V. Torek ) writes:
>In article <10671@ucbvax.ARPA> tedrick (Tom Tedrick) writes:
>
>>>there are probably lots of complex mathematical theorems which are true
>>>but which no human will ever recognize as true.
>>
>>I don't understand your argument. I claim that the human mind
>>cannot be essentially a Turing machine. If we assume that a
>>particular mind is equivalent to a particular Turing machine,
>>then we immediately get a contradiction, namely there exists
>>a statement recognizable as true by that human mind which is
>>not recognizable as true by that Turing machine.
>
>Which one?  The statement that is not recognizable by the Turing
>machine may be *extremely* complex -- what makes you so damn sure
>you could recognize it as true?  Tell me, Tom, is it true that every
>even number greater than two is the sum of two primes?  What, you don't
>know?  Then you get the point -- I hope.

No, I don't get the point. The complexity of the statement is not
the issue. The issue is that humans seem to recognize that certain
formal systems are consistent, even though this consistency cannot
be proved within the system. This mysterious ability to recognize
such things seems to be lacking in deterministic machines, so
I claim there is a distinction between the human mind and any
Turing machine.

Of course, I may be wrong in believing that these formal
systems are consistent. 

jwl@ucbvax.ARPA (James Wilbur Lewis) (10/18/85)

In article <10699@ucbvax.ARPA> tedrick@ucbernie.UUCP (Tom Tedrick) writes:
>
>No, I don't get the point. The complexity of the statement is not
>the issue. The issue is that humans seem to recognize that certain
>formal systems are consistent, even though this consistency cannot
>be proved within the system. This mysterious ability to recognize
>such things seems to be lacking in deterministic machines, so
>I claim there is a distinction between the human mind and any
>Turing machine.
>
>Of course, I may be wrong in believing that these formal
>systems are consistent. 

Your points seem to be:

(1) Humans can recognize consistency of certain formal systems, and
    machines lack this ability.
(2) There is something mysterious about this ability, and nondeterminism
    has something to do with it; therefore
(3) no Turing machine can be equivalent to a human mind.

You are confusing two issues: reasoning *within* a formal system, and
reasoning *about* a formal system. What is so mysterious about the
latter kind of reasoning? All one needs to do is define a more powerful
system, and then by reasoning within the new system you can show the
incompleteness/inconsistency/whatever of the weaker system.

Of course, the formal system for any given Turing machine is fixed, and
that machine will be unable to 'jump out of the system' to reason
about its own properties.  But we can always design a more powerful 
machine which *will* be able to reason about the weaker one.
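
[A concrete toy along these lines -- an editorial sketch in Python, not
from Jim's post, borrowing the MU puzzle from the Hofstadter book he
recommends below.  Deriving *within* the MIU system means blindly
applying its four rules; reasoning *about* it means spotting an
invariant the system itself never states:]

    from collections import deque

    def successors(s):
        """Apply the four MIU rules to theorem s."""
        out = set()
        if s.endswith("I"):
            out.add(s + "U")                      # rule 1: xI -> xIU
        if s.startswith("M"):
            out.add("M" + s[1:] * 2)              # rule 2: Mx -> Mxx
        for i in range(len(s) - 2):
            if s[i:i + 3] == "III":
                out.add(s[:i] + "U" + s[i + 3:])  # rule 3: xIIIy -> xUy
        for i in range(len(s) - 1):
            if s[i:i + 2] == "UU":
                out.add(s[:i] + s[i + 2:])        # rule 4: xUUy -> xy
        return out

    # Within the system: exhaustive derivation from the axiom "MI"
    # (capped at length 10 so the search terminates) never yields "MU".
    seen, queue = {"MI"}, deque(["MI"])
    while queue:
        for t in successors(queue.popleft()):
            if len(t) <= 10 and t not in seen:
                seen.add(t)
                queue.append(t)
    print("MU" in seen)   # False
    # About the system: the I-count starts at 1, is only ever doubled
    # (rule 2) or reduced by 3 (rule 3), so it is never divisible by 3.
    # "MU" has zero I's, so no length cap would change the answer.
    # That argument lives outside the MIU system itself.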

Humans are subject to these constraints, too. Consider:
"Tom Tedrick cannot consistently assert this proposition."

I can prove it, but you can't do so and remain consistent.  Does that
make my mind more powerful than yours?  Of course not, because
you can exhibit the obvious proposition which *you* can prove but
*I* can't (assuming I'm consistent! :-) 

Your mention of determinism is irrelevant; humans are just as deterministic
as machines. Unpredictable, perhaps...since we are orders of magnitude more
complex than any machines we know how to build....but subject to the same
laws of physics.

For a fascinating presentation of this and many other topics, check out
any book by Douglas Hofstadter, especially "Godel, Escher, Bach: An 
Eternal Golden Braid".

Cheers,

-- Jim 'down with human chauvinism' Lewis
   U. C. Berkeley
   ...!ucbvax!jwl        jwl@ucbernie.BERKELEY.EDU

"Lately it occurs to me
  What a long, strange trip it's been..."

tedrick@ucbernie.BERKELEY.EDU (Tom Tedrick) (10/18/85)

>>I claim there is a distinction between the human mind and any
>>Turing machine.

>Your points seem to be:

>(1) Humans can recognize consistency of certain formal systems, and
>    machines lack this ability.
>(2) There is something mysterious about this ability, and nondeterminism
>    has something to do with it; therefore

[I didn't say anything about nondeterminism, just that the 
Turing machines I am talking about are deterministic.]

>(3) no Turing machine can be equivalent to a human mind.

>You are confusing two issues: reasoning *within* a formal system, and
>reasoning *about* a formal system.

[I don't think that is what I am confused about.]

>What is so mysterious about the
>latter kind of reasoning? All one needs to do is define a more powerful
>system, and then by reasoning within the new system you can show the
>incompleteness/inconsistency/whatever of the weaker system.
>Of course, the formal system for any given Turing machine is fixed, and
>that machine will be unable to 'jump out of the system' to reason
>about its own properties.  But we can always design a more powerful 
>machine which *will* be able to reason about the weaker one.

[Yes, this is exactly the point. Exhibit the Turing machine that
is claimed to be equivalent to the human mind, and the human mind
can reason about the system in ways impossible within the system.
Thus we contradict the assumption that the machine was equivalent
to the mind.]

>Your mention of determinism is irrelevant; humans are just as deterministic
>as machines. Unpredictable, perhaps...since we are orders of magnitude more
>complex than any machines we know how to build....but subject to the same
>laws of physics.

OK, we at least have a clear point of disagreement. I don't believe
human beings are deterministic. I also don't accept the laws of
physics as absolute. I accept them as an absolutely brilliant
model but not as complete truth. I don't accept the notion that
the human being is just a very complex machine. 

I originally asked whether anyone disputed my claim that the human
mind is not equivalent to a Turing machine. After all the negative
response, I would like to change my question to:

*IS THERE ANYONE THAT AGREES WITH ME THAT THE HUMAN MIND IS PROVABLY
 NOT EQUIVALENT TO A TURING MACHINE?*

"Help, I'm trapped in a machine  :-)"

    -Beleaguered and besieged on all fronts by the upholders of
     the dignity of Turing machines, I remain

     -Tom the Human
      tedrick@ucbernie.ARPA

jwl@ucbvax.ARPA (James Wilbur Lewis) (10/18/85)

In article <10702@ucbvax.ARPA> tedrick@ucbernie.UUCP (Tom Tedrick) writes:
>
>>What is so mysterious about the
>>latter kind of reasoning? All one needs to do is define a more powerful
>>system, and then by reasoning within the new system you can show the
>>incompleteness/inconsistency/whatever of the weaker system.
>>Of course, the formal system for any given Turing machine is fixed, and
>>that machine will be unable to 'jump out of the system' to reason
>>about its own properties.  But we can always design a more powerful 
>>machine which *will* be able to reason about the weaker one.
>
>[Yes, this is exactly the point. Exhibit the Turing machine that
>is claimed to be equivalent to the human mind, and the human mind
>can reason about the system in ways impossible within the system.
>Thus we contradict the assumption that the machine was equivalent
>to the mind.]

Foo! By reasoning about an equivalent Turing machine, the human mind
is *also* constrained to operate within the system. No fair jumping
out of the system here.  I ask again: what is your basis for claiming
that human reasoning can't be duplicated by a 'mere' machine, at least
in principle? Are you saying that machines are incapable of the kind of
reasoning involved in, say, the proof of Godel's Incompleteness Theorem?

>
>OK, we at least have a clear point of disagreement. I don't believe
>human beings are deterministic. I also don't accept the laws of
>physics as absolute. I accept them as an absolutely brilliant
>model but not as complete truth. I don't accept the notion that
>the human being is just a very complex machine. 
>

I'm not sure why this is relevant. Are you saying the laws of physics
are incomplete (because we don't know them all yet)?  Or that certain 
phenomena are inherently inexplicable by ANY laws of physics, a la
religious arguments?  Whatever those laws of physics are, humans and
machines both must obey them. 

>
>     -Tom the Human
>      tedrick@ucbernie.ARPA

-- Jim Lewis, a Lean Mean Computing Machine!
   U. C. Berkeley
   ...!ucbvax!jwl     jwl@ucbernie.BERKELEY.EDU

mj@myrias.UUCP (Michal Jaegermann) (10/19/85)

I am afraid that a lot of confusion comes from a simple mix-up (which took
quite a while for logicians to sort out :-) ). When somebody speaks
about Turing machines, the Goedel theorem and things of that sort, truth and
provability are understood >>within the confines of a given FORMAL system<<.
You may always give an answer to some "unanswerable" questions if you
get out and look from "outside" (meta-reasoning). In everyday use of
logic and truth we freely mix different meta-levels - which
creates a lot of interesting and often funny paradoxes. Which probably
indicates that formal logic and Turing machines are only quite simple
MODELS of our reasoning and that a human brain is not a Turing machine
(this goes far beyond mathematics, so I had better stop). If you find
"Goedel, Escher, Bach" too wordy and muddy, though funny and inspiring,
and you do not want to wade through monographs on formal mathematical
logic, then find the book by R. Smullyan named "What Is the Name of
This Book?" to find a lot of answers and questions related to the problem.
So what is that book really called?
				     Michal Jaegermann
				     Myrias Research Corporation
				     ....ihnp4!alberta!myrias!mj

rlr@pyuxd.UUCP (Rich Rosen) (10/20/85)

>>Your points seem to be:
>>(1) Humans can recognize consistency of certain formal systems, and
>>    machines lack this ability.
>>(2) There is something mysterious about this ability, and nondeterminism
>>    has something to do with it; therefore
>>(3) no Turing machine can be equivalent to a human mind.
>>You are confusing two issues: reasoning *within* a formal system, and
>>reasoning *about* a formal system.

> [I don't think that is what I am confused about.]  [TEDRICK]

If you understand what you're confused about, you're not confused about it. 

> [Yes, this is exactly the point. Exhibit the Turing machine that
> is claimed to be equivalent to the human mind, and the human mind
> can reason about the system in ways impossible within the system.
> Thus we contradict the assumption that the machine was equivalent
> to the mind.]
> OK, we at least have a clear point of disagreement. I don't believe
> human beings are deterministic. I also don't accept the laws of
> physics as absolute. I accept them as an absolutely brilliant
> model but not as complete truth. I don't accept the notion that
> the human being is just a very complex machine. 

The only reasons for doing so would be that you either have some evidence
that this is not so, or you simply refuse to believe it because you don't
like that conclusion.  The first possibility (which I doubt is true) would
be reasonable.  The second (which is engaged in by a large number of people
in this very newsgroup) is fallacious.

> I originally asked whether anyone disputed my claim that the human
> mind is not equivalent to a Turing machine. After all the negative
> response, I would like to change my question to:
> 
> *IS THERE ANYONE THAT AGREES WITH ME THAT THE HUMAN MIND IS PROVABLY
>  NOT EQUIVALENT TO A TURING MACHINE?*

I couldn't care less about the exact type of machine that the human mind really
is, but I have no disagreement with the notion that the mind and brain can be
represented as some sort of machine.

To throw yet another bone into this mix, I will quote from the oft-misquoted
(at least here) John Searle, from his "Minds, Brains, and Programs":

	    I want to try and state some of the general philosophical points
	implicit in the argument.  For clarity I will try to do it in a
	question and answer fashion, and I begin with that old chestnut of
	a question:

		"Could a machine think?"

	    The answer is, obviously, yes.  We are precisely such machines.

		"Yes, but could an artifact, a man-made machine, think?"

	    Assuming it is possible to produce artificially a machine with
	a nervous system, neurons, axons, and dendrites, and all the rest
	of it, sufficiently like ours, again the answer to the question seems
	to be, obviously, yes.  If you can exactly duplicate the causes, you
	could duplicate the effects.  And indeed it might be possible to
	produce consciousness, intentionality, and all the rest of it using
	some other sorts of chemical principles than those human beings use.

			[ALL THIS, MIND YOU, FROM A "CRITIC" OF AI!]

		"OK, but could a digital computer think?"

	    If by "digital computer" we mean anything at all that has a level
	of description where it can be correctly described as the instantiation
	of a computer program, then again the answer is, of course, yes, since
	we are the instantiations of any number of computer programs, and we
	can think.

		"But could something think, understand, and so on *solely*
		 in virtue of being a computer with the right sort of program?
		 Could instantiating a program, the right program of course,
		 by itself be a sufficient condition of understanding?"

	    This I think is the right question to ask, though it is usually
	confused with one of the earlier questions, and the answer to it is no.

		"Why not?"

	    Because the formal symbol manipulations themselves don't have
	any intentionality...

I think at this point Searle destroys his own argument.  By saying that these
things have "no intentionality", he is denying the premise made by the person
asking the question, that we are talking about "the right program".  Moreover,
Hofstadter and Dennett both agreed (!!!!) that Searle's argument is flawed.
"He merely asserts that some systems have intentionality by virtue of their
'causal powers' and that some don't.  Sometimes it seems that the brain is
composed of 'the right stuff', but other times it seems to be something else.
It is whatever is convenient at the moment."  (Sound like any other conversers
in this newsgroup?)  "Minds exist in brains and may come to exist in programmed
machines.  If and when such machines come about, their causal powers will
derive not from the substances they are made of, *but* *from* *their* *design*
*and* *the* *programs* *that* *run* *in* *them*.  [ITALICS MINE]  And the way
we will know they have those causal powers is by talking to them and listening
carefully to what they have to say."  Readers of this newsgroup should
take note of how a non-presumptive position is built, and of how someone
quoted right and left in this newsgroup doesn't even agree halfheartedly with
the notions of those quoting him.
-- 
Anything's possible, but only a few things actually happen.
					Rich Rosen    pyuxd!rlr

laura@l5.uucp (Laura Creighton) (10/20/85)

In article <10702@ucbvax.ARPA> tedrick@ucbernie.UUCP (Tom Tedrick) writes:
>*IS THERE ANYONE THAT AGREES WITH ME THAT THE HUMAN MIND IS PROVABLY
> NOT EQUIVALENT TO A TURING MACHINE?*

Me.  But not for the reasons that you give.  Aristotle proposed the definition
``man is a rational animal''.  In recent years we have worked very hard on
the ``rational'' part but not very hard on the ``animal'' part.  I think that
the concept of ``living'' is very important to the concept of ``mind''.

This does not mean that it is impossible to construct a living Turing machine,
but this is not where the efforts in AI have been spent so far.  I fear that
intelligence may be the easy part, and that it is AL (artificial life) which
is the tough one.

Laura Creighton
l5!laura

what's life to an immortal?

-- 
Laura Creighton		
sun!l5!laura		(that is ell-five, not fifteen)
l5!laura@lll-crg.arpa

matt@oddjob.UUCP (Matt Crawford) (10/22/85)

In article <10702@ucbvax.ARPA> tedrick@ucbernie.UUCP (Tom Tedrick) writes:
>
>*IS THERE ANYONE THAT AGREES WITH ME THAT THE HUMAN MIND IS PROVABLY
> NOT EQUIVALENT TO A TURING MACHINE?*

Sure, I agree with you.  A Turing machine has unlimited memory.
_____________________________________________________
Matt		University	crawford@anl-mcs.arpa
Crawford	of Chicago	ihnp4!oddjob!matt

mcewan@uiucdcs.CS.UIUC.EDU (10/24/85)

> [Yes, this is exactly the point. Exhibit the Turing machine that
> is claimed to be equivalent to the human mind, and the human mind
> can reason about the system in ways impossible within the system.
> Thus we contradict the assumption that the machine was equivalent
> to the mind.]

As far as I can see, your complete argument is:

assumption: The human mind can do things that no machine can do.

conclusion: The human mind can do things that no machine can do.

I can't argue with your reasoning, but I can't say that I'm impressed.



			Scott McEwan
			{ihnp4,pur-ee}!uiucdcs!mcewan

"There are good guys and there are bad guys. The job of the good guys is to
 kill the bad guys."

dim@whuxlm.UUCP (McCooey David I) (10/25/85)

> In article <10702@ucbvax.ARPA> tedrick@ucbernie.UUCP (Tom Tedrick) writes:
> >
> >*IS THERE ANYONE THAT AGREES WITH ME THAT THE HUMAN MIND IS PROVABLY
> > NOT EQUIVALENT TO A TURING MACHINE?*
> 
> Sure, I agree with you.  A Turing machine has unlimited memory.
> _____________________________________________________
> Matt		University	crawford@anl-mcs.arpa
> Crawford	of Chicago	ihnp4!oddjob!matt

Matt's reply goes along with my line of thought.  Consider the situation
realistically:  The human mind has a finite number of neurons and therefore
a finite number of states.  So I propose that the human mind is equivalent
to a finite state machine, not a Turing machine.  (I agree with Tom, but
for the opposite reasons).  Note that my comparison does not belittle the
human mind at all.  Finite can still mean very, very large.  The operation
of a finite state machine with a very large number of states is, for humans,
indistinguishable from that of a Turing machine.
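
[To put a number on "very, very large" -- a back-of-envelope sketch in
Python; the neuron count and the crude on/off simplification are
editorial assumptions, not Dave's claims:]

    import math

    neurons = 10 ** 11   # rough order of the human neuron count
    # Treating each neuron as binary gives 2**neurons states.  The
    # number itself is unprintable; its decimal length is not:
    digits = int(neurons * math.log10(2)) + 1
    print(f"2**(10**11) has {digits:,} decimal digits")  # ~3.0e10 digits
    # No physically realizable process enumerates such a state space,
    # which is why the finite machine looks unbounded in practice.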

				Dave McCooey
				AT&T Bell Labs, Whippany, NJ
				ihnp4!whuxlm!dim or ...!whlmos!dim

cleary@calgary.UUCP (John Cleary) (10/26/85)

> > [Yes, this is exactly the point. Exhibit the Turing machine that
> > is claimed to be equivalent to the human mind, and the human mind
> > can reason about the system in ways impossible within the system.
> > Thus we contradict the assumption that the machine was equivalent
> > to the mind.]
This is a very crucial point in this discussion, I think.  This is only true
IF we assume that the human mind that is doing the reasoning is not itself
part of the Turing machine being exhibited.  The problem is that the
physical boundary around a human is most unclear.  The wiggling of an electron
on Alpha Centauri might, via changes in gravitation, affect the firing of one of
my neurons and so alter my behaviour.  From this (extreme) example we have to
include the whole universe in the description of the human.  That is, anything
which can affect us (and so is observable by us) must be included in a complete
description of our behaviour.  The set of all things observable by us (or
potentially observable by us) can validly be called the whole universe.
Unfortunately the whole universe includes all entities that can observe us
and hence reason about us (remember Heisenberg: if it can observe you then
it can affect you).

The interesting thing about digital computers is that we confuse two things:
the actual physical machine and its abstract description.  The physical machine,
just like a human, needs the whole universe included in its description.
The abstraction (what is described in the manuals) is an approximation only.
It is probably unclear from the abstract description what happens when a high
energy gamma ray passes through the CPU chip.  So I agree with those who
say a digital computer AS DESCRIBED BY A FORMAL SYSTEM cannot have the same
status as a human.  However, there is no reason we know of at the moment why
a physical system cannot; indeed, as the description of the physical computer
includes the whole universe and the humans in it, it already has the same
status as the human.

This then raises some fascinating questions:

	1) Church's thesis, that all computers are equivalent to a Turing
	   machine: this is actually a PHYSICAL law (like the law of
	   gravitation), potentially subject to a physical experiment.  It is
	   conceivable, for example, that some of the peculiar effects of
	   quantum mechanics could allow calculations faster than any
	   possible Turing machine.

	2) Is the entire universe a Turing machine?

	3) Is it conceivable that anything that is part of the universe
	   could verify or refute 2)?

I am also struck by the similarity to the conclusions of some philosophers
from the Eastern tradition, that we are all intimately connected
with the whole universe.


> > I originally asked whether anyone disputed my claim that the human
> > mind is not equivalent to a Turing machine. After all the negative
> > response, I would like to change my question to:
> > 
> > *IS THERE ANYONE THAT AGREES WITH ME THAT THE HUMAN MIND IS PROVABLY
> >  NOT EQUIVALENT TO A TURING MACHINE?*
See above.  I think this is a question for the physicists, and potentially
subject to physical experiment.
> 

> 		"OK, but could a digital computer think?"
> 
> 	    If by "digital computer" we mean anything at all that has a level
> 	of description where it can be correctly described as the instantiation
> 	of a computer program, then again the answer is, of course, yes, since
> 	we are the instantiations of any number of computer programs, and we
> 	can think.

No, I disagree; here he talks about the abstract machine.

> 
> 		"But could something think, understand, and so on *solely*
> 		 in virtue of being a computer with the right sort of program?
> 		 Could instantiating a program, the right program of course,
> 		 by itself be a sufficient condition of understanding?"
> 
> 	    This I think is the right question to ask, though it is usually
> 	confused with one of the earlier questions, and the answer to it is no.
> 
> 		"Why not?"
> 
> 	    Because the formal symbol manipulations themselves don't have
> 	any intentionality...
I agree.
> ...  If and when such machines come about, their causal powers will
> derive not from the substances they are made of, *but* *from* *their* *design*
> *and* *the* *programs* *that* *run* *in* *them*.  [ITALICS MINE]  And the way
> we will know they have those causal powers is by talking to them and listening
> carefully to what they have to say."

This is a fascinating argument, but incorrect I think.  Certainly in humans much
of their abilities come from their experience of the world: learning and
adaptation.  That is, much of their state and behaviour is a result of their
experience, not their genes.  I suspect any really interesting computer will be
similar. Much of its behaviour will be a result not of its original programming
but of its subsequent experience of the world.  Unfortunately, again, to describe
the machines that result we must describe not only their original programming
but all their later possible experiences.  But they can potentially be
affected by anything in the universe.

The problem with the current state of computing, robotics and AI is that most
computers have little or no interaction with the real world.  They have no 
bodies.  Hence they can to a very good approximation be described by some formal
system.  Thus many people have a gut feeling that computers are fundamentally
different from humans.  In their guise as formal systems I think this is indeed
true.

I think there is also a practical lesson for AI here.  To get really 
interesting behaviour we need open machines which get a lot of experience of 
the real world.  Unfortunately we aren't going to be able to formalize or
predict the result. But it will be interesting.

Sorry about the length of this, but the question seemed too fascinating to
leave alone.

John G. Cleary, Dept. Computer Science, The University of Calgary, 
2500 University Dr., N.W. Calgary, Alberta, CANADA T2N 1N4.
Ph. (403)220-6087
Usenet: ...{ubc-vision,ihnp4}!alberta!calgary!cleary
        ...nrl-css!calgary!cleary
CRNET (Canadian Research Net): cleary@calgary
ARPA:  cleary.calgary.ubc@csnet-relay

tedrick@ernie.BERKELEY.EDU (Tom Tedrick) (10/26/85)

Thanks very much for the responses about the mind-turing machine
problem. They were very interesting and educational. The most
interesting was from our distinguished mathematical colleague
from Amsterdam. I have the highest respect for the Amsterdam
mathematicians (having gone through some of the Lenstras' papers
and heard their talks, for example) so I will defer to his superior
knowledge, and only thank him for taking the time to reply.

I suspect some of the responses were from people not sufficiently
familiar with the subject to have an informed opinion, but most
were quite good. I didn't appreciate the responses that treated
the problem as a joke, or subjected me to personal ridicule.
For lack of time I am unable to respond to all the messages I received.

I should mention that I saw a film in which Godel said something
to the effect that either mathematics was inconsistent, or
there was some mysterious, not formally explainable process
going on in the human mind. Anyway, that was my understanding
of what he said ...

  -Tom

jwl@ucbvax.BERKELEY.EDU (James Wilbur Lewis) (10/27/85)

In article <859@whuxlm.UUCP> dim@whuxlm.UUCP (McCooey David I) writes:
>> In article <10702@ucbvax.ARPA> tedrick@ucbernie.UUCP (Tom Tedrick) writes:
>> >
>> >*IS THERE ANYONE THAT AGREES WITH ME THAT THE HUMAN MIND IS PROVABLY
>> > NOT EQUIVALENT TO A TURING MACHINE?*
>> 
>> Sure, I agree with you.  A Turing machine has unlimited memory.
>> _____________________________________________________
>> Matt		University	crawford@anl-mcs.arpa
>> Crawford	of Chicago	ihnp4!oddjob!matt
>
>Matt's reply goes along with my line of thought.  Consider the situation
>realistically:  The human mind has a finite number of neurons and therefore
>a finite number of states.  So I propose that the human mind is equivalent
>to a finite state machine, not a Turing machine.  (I agree with Tom, but
>for the opposite reasons).  Note that my comparison does not belittle the
>human mind at all.  Finite can still mean very, very large.  The operation
>of a finite state machine with a very large number of states is, for humans,
>indistinguishable from that of a Turing machine.

Not at all! I see two problems with your line of reasoning.  First, your
assertion that a finite number of neurons --> a finite state machine. This
assumes that neurons have discrete states; however when you consider the
continuous, analog nature of activation thresholds, this argument breaks
down.

A second, *major* flaw is the notion that humans must rely on their brains 
alone for 'storage'.  Ever since the invention of writing, this hasn't been
true; literature can be viewed as a Turing machine tape for humans!

I stand by my claim that minds and Turing machines are equivalent.

-- Jim Lewis
   U.C. Berkeley
   ...!ucbvax!jwl      jwl@ucbernie.BERKELEY.EDU

marv@ISM780.UUCP (10/29/85)

>Not at all! I see two problems with your line of reasoning.  First, your
>assertion that a finite number of neurons --> a finite state machine. This
>assumes that neurons have discrete states; however when you consider the
>continuous, analog nature of activation thresholds, this argument breaks
>down.

>A second, *major* flaw is the notion that humans must rely on their brains
>alone for 'storage'.  Ever since the invention of writing, this hasn't been
>true; literature can be viewed as a Turing machine tape for humans!

>I stand by my claim that minds and Turing machines are equivalent.

>-- Jim Lewis
>   U.C. Berkeley

I claim that a finite sized human (not all information processing is done in
the brain) *does* imply a finite state machine.  I think that human
information processing involves chemical reactions (a finite number of atoms)
and energy transformations (a finite number of photons) and therefore only a
finite (albeit very large) number of states.

And surely you don't mean to imply that the amount of information stored in
a finite sized set of libraries is infinite.

I conclude that human processing is *not* equivalent to a Turing machine.
Humans are clearly physically realizable.  Turing machines, being infinite,
are not physically realizable.  Therefore, I think a more reasonable question
to ask is: can a physically realizable machine be built that can mimic human
information processing?  I am not aware of any law of physics that disallows
the construction of such a machine. It seems to me this is an open question
to be answered (hopefully) in the future.

	  Marv Rubinstein -- Interactive Systems.

creedy@cca.UUCP (Christopher Reedy) (11/15/85)

In article <> marv@ISM780.UUCP writes:
>
>I claim that a finite sized human (not all information processing is done in
>the brain) *does* imply a finite state machine.  I think that human
>information processing involves chemical reactions (a finite number of atoms)
>and energy transformations (a finite number of photons) and therefore only a
>finite (albeit very large) number of states.
>
Unfortunately, at this level of interaction, quantum mechanics applies;
that is, the results of interactions are non-deterministic.  I am not enough of
a theoretical physicist to know whether the simulation is still possible
using probability distributions.  However, it seems like a more
sophisticated argument is needed here.
>
>And surely you don't mean to imply that the amount of information stored in
>a finite sized set of libraries is infinite.
>
A Turing machine does not have infinite memory in the sense you imply.
The amount of memory that is in use at any point in time by a Turing
machine is finite, even though it can grow without bound over the life
of the computation.  I am not convinced that this is any different from
the memory that is available to a person who has the capability to
search for any information that is available in any library anywhere.
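
[A minimal sketch in Python of that reading of the tape -- an editorial
illustration: unbounded in extent, yet only finitely inscribed at any
step of the computation:]

    class Tape:
        """Stores only the cells actually written; the rest are blank."""
        def __init__(self, blank="_"):
            self.blank = blank
            self.cells = {}                 # index -> non-blank symbol

        def read(self, i):
            return self.cells.get(i, self.blank)

        def write(self, i, symbol):
            if symbol == self.blank:
                self.cells.pop(i, None)     # blanking a cell frees it
            else:
                self.cells[i] = symbol

        def in_use(self):
            return len(self.cells)          # finite at every step

    tape = Tape()
    for i in range(1000):
        tape.write(i, "1")
    print(tape.read(10 ** 9), tape.in_use())   # _ 1000: grows only as used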

Chris Reedy