[comp.ai] Comments on "The Emperor's New Mind"

xhg0998@uxa.cso.uiuc.edu (09/07/90)

		Comments on "The Emperor's New Mind"

		Written  by Xiaoping Hu

	 Copyright 1990  All Rights Reserved

	<< Any modification or distribution of this comment
	       must have the full consent of the author. >>

	This comment offers an alternative view of the central 
question in Penrose's book: can a computer have a mind? I hope it 
helps readers make up their own minds after they read the book. 
Please forgive the length of this comment, but I think these problems 
are worth the time to think through carefully and independently.

	Prof. Penrose summarized the most important recent discoveries
in the sciences and drew his own conclusions from his own insights. As a 
humble computer student, however, I disagree with most of the arguments 
that Prof. Penrose advances in the book, although Penrose might well 
classify me into the strong AI group, despite the fact that I had never 
heard the name before I read the book. Since I think that the topics in the 
book could be very controversial, and Dr. Penrose's arguments could mislead 
those who do not have the patience or will to independently ponder such 
remote things as decidability, an inner voice, as Einstein used to say, 
compels me to take the trouble of appending some comments to the 
introduction of this book. I have to confess that I finished reading the 
book and wrote this review in a hurry, at the request of the editors of 
Book & Journal. Therefore I may not have captured Penrose's genuine ideas 
and implications well. If that is so, I beg the pardon of both the readers 
and Dr. Penrose. So I solemnly suggest that readers read the original books 
themselves instead of taking the shortcut of listening to the opinions of 
the mainstream. I am sure that Dr. Penrose will be happy to have you read 
his book. But you don't take my contention seriously, do you? 

	Let us pick a few examples to elaborate on here. Maybe I should not
mention that the example on page 255 (Fig. 6.18) can be explained perfectly
well with classical Maxwell electromagnetic wave theory, without troubling us
with the uncomfortable belief that a photon can appear in two distant
locations at once (Qigung Master Yen Xin even thinks that an eminent Qigung
master can perform such miracles). Or maybe I should not helplessly argue
that even an idiot could invent some intelligent machine simply by chance!
Let us discuss some more detailed problems in depth. 

	First, many of the arguments that Penrose holds against computers may 
well be applied to human beings. Yet we still consider that human beings have 
intelligence.

	Second, even if we don't know everything about the world, in
particular about quantum states, we are still intelligent as long as our
brains are somehow properly constructed (to discover how really requires
genius). A farmer may not know anything about how solar energy is
transformed into biological energy, but he can still raise crops as long as
he follows some rules obtained by experience. Therefore all the quantum
mechanics stuff may have nothing to do with intelligence. One simply may not
need to know every detail about quantum states to work out some mechanisms
which possess some kind of intelligence, although the optimistic attitude of
the proponents of strong AI seems naive enough to invite mockery. 

	Third, a eunuch is no less intelligent than a normal man. 
Therefore, sexual desire is not necessary for intelligence. Maybe friendship 
is a must? Also, an uneducated farmer may not well understand the beauty 
of Picasso's paintings (frankly, even I myself do not quite understand them. 
Do you?). But this does not prevent the farmer from possessing intelligence. 
Then, is a sense of beauty pertinent to intelligence?

	Fourth, humans have successfully created new plants by grafting and
new animals by crossbreeding. These new creatures are not the products of
natural selection. Nor did they evolve; they were created in a revolution.
Is there any reason that prevents humans from creating an artificial life of
artificial intelligence? (You know that a new branch of science called 
artificial life has just been born. People are taking it seriously.)

	Fifth, consciousness is only one component of intelligence. Even
today, intelligence is not well defined (qualities like learning, 
memorizing, adaptivity, and reasoning are not even mentioned in Penrose's
book). Then what is the point of arguing over whether AI is possible or not?
Intelligence reflects the quality of understanding nature and overcoming
nature. As long as a machine can match humans in this quality, there is no
reason to say that the machine has no intelligence because it does not make
love. So I find that many of the disputes are caused by confusion in the
definition of intelligence. What is Genuine Intelligence, really? Is a dog 
more intelligent than a pig?

	Finally, let us spend some length going over the core problem:
undecidability. Penrose piled Goedel's theorem, Turing's theorem, and the
Russell paradox together, but, I would shout, on this particularly important
point he lacks a genuine and original understanding of the meaning of these
theorems and paradoxes.
	Actually, Goedel's incompleteness theorem and Turing's theorem on
the halting of Turing machines can all be reduced to the Russell paradox,
which in turn is another formalized version of the knight-and-knave story of
the ancient Greek sophists. Indeed, as Penrose mentioned, both Goedel and
Turing obtained their theorems after studying the Russell paradox. This
paradox can be stated as follows:

1.	S: Statement S is not true.

In any two-valued (TRUE-FALSE) logic system, we can assign neither a TRUE nor
a FALSE value to S; therefore, "for each self-consistent logical system", 
or "for each mathematical system large enough to encompass the natural
numbers", there exist propositions which can be proven neither true nor
false. For, if S is assigned TRUE, then what S says must be correct; but
what S says is just the opposite: Statement S is not TRUE. Again, we cannot
assign FALSE to S; otherwise what S says must be wrong, so "Statement S
is not true" must be wrong, and consequently S must be true.
	Where is the problem in the above paradox? Up to now, all the 
logicians, following Russell, have concluded that there are some things
which just cannot be decided within the logical system we use. Everything
is perfect there, except that Nature did not allow us a perfect logical
system. Is this true?
	Let us reexamine the statement and ask: what on earth is S? S is not
defined! S is self-contradicting by definition and involves an infinite
cycle. If one plugs

	S (Statement S is not true) into S

he gets
2.	S: The statement "Statement S is not true" is not true.
Using the negation law of logic, statement 2 actually says

2'.	S: Statement S is true.

contradicting statement 1. Such substitution can go on forever, yielding
ever more alternating negative and positive statements. Therefore, the very 
definition of S has violated the law that a variable can have exactly one 
value at a time in a two-valued logical system. Goedel's proposition 
involves such a definition, and so does Turing's theorem. It is nothing else 
but to say "yes = no, and yes and no are two different values". Why is this
latter statement so ridiculously meaningless, while the statement S racks
the brains of all great logicians? Because S is so trickily constructed
that it contains a cycle and can be anything at all. Does there exist any 
solution other than claiming undecidability or incompleteness of every
logical system? One way, as some earlier logicians suggested, is to declare
such self-contradicting propositions impermissible in the logical system.
Thus another rule, "propositions leading to self-contradiction by 
definition are not permissible", solves the problem nicely. However, this
will again cause new paradoxes, which I will not discuss further here.
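
	The "infinite cycle" I speak of can be made concrete with a little
program. Below is a sketch in Python (my own toy illustration, nothing
more): evaluating S exactly as it is defined never terminates, because
each step asks for the very value it is still computing.

    # S is defined only in terms of its own truth value, so a direct
    # evaluation of the definition never bottoms out.
    def S():
        return not S()     # "Statement S is not true"

    # Calling S() recurses forever; Python raises RecursionError
    # rather than looping eternally, mirroring the endless substitution
    #   S -> not S -> not not S -> ...
    try:
        S()
    except RecursionError:
        print("the value of S is never determined")

The evaluation never settles on TRUE or FALSE; it only oscillates,
which is my whole point.
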
	To get around this paradox and correctly understand its philosophical
implication, we need to check the logical system again. A logical
system comprises some operation rules, such as AND and OR, as well as 
some definitions, including symbols (A,B,C,D), a domain of values (TRUE and
FALSE), and the original assignment of the variables in an expression. An
expression containing unvalued variables may not be decidable, not 
because the logical system is unable to decide it, but because the
expression changes with the assignment of the variables. When you carefully
check all the existing paradoxes, you find that each paradox is either self-
contradicting by definition or contains an infinite cycle, so that 
it has no constant implication -- it is always changing its meaning!
Do you remember how we define INFINITY? "An INFINITY is a number which is
larger than any natural number". Is infinity a constant? Is infinity an
objective existence or simply a product of pure reason? I would like to ask
this question of Dr. Penrose, who tells me in his book that the world is of 
finite size and is dominated by quantum mechanics.
	We know that in a finite system we can always include the necessary
rules to make it complete. In an infinite system, since many things are 
changing according to finite or infinite rules, it is necessary to have an
infinite number of systems to encompass all propositions, so that if a
proposition cannot be decided in this system, it may be decided in some
other system, although not all the systems need be mutually consistent.
	From this point of view, Goedel's theorem only shows that there are
some propositions which are not meaningful by definition, or cannot be 
decided because they have no constant implication, or need infinite
time to decide because they involve infinite cycles, like the Mandelbrot
set. It says nothing wrong about the logical system! Again, a
logical system that comes from Nature must return to Nature to be made
meaningful. For example, from logic we know that
	1 + 1 = 2.
But in nature, 
	1 dog + 1 cat != 2 pigs,
and 
	1 light speed + 1 light speed != 2 light speeds.
We actually do not need to worry about whether our logical system is
complete enough to understand everything. What we need to worry about is
whether we have enough time, or luck, to find out what is pertinent to our
interests. 

	Although life is finite, it is still not certain whether "genuine" 
intelligence requires infinite knowledge or mechanisms. Therefore it would
be too early to conclude that humans cannot build a computer which 
simulates a human so perfectly in everything that even humans cannot
distinguish it. If quantum mechanics itself is true, then intelligence can
involve only finitely many mechanisms. Therefore even a single computer
could simulate all the qualities of human intelligence, let alone computers
organized into a computer society in which each computer has different
expertise and capacity, and even personality.

	The limitedness of a single man's ability suggests that we need
to group together to overcome nature. Therefore, we need democracy and 
freedom of speech and belief. Not only is life limited; the capacity of
a man's brain is also limited. No one can be right about everything every
time. Therefore we must allow the coexistence of different opinions.
Otherwise, human society will certainly go to some extreme, leading to
self-destruction. One important feature of human intelligence is its
flexibility and freedom. To some degree, one can freely change one's mind.
Therefore any human brain is in principle a universal Turing machine or von
Neumann machine. Therefore, four billion people own four billion universal
Turing machines. That's still not all. Each brain can change the range in
which it gives right answers. Such freedom allows the potential for solving
unsolved problems.

	So, my conclusion is clear: I do not even need an uncertainty 
principle to convince Penrose that he will never know the exact weight of 
his own head, yet I still regard him as one of the human beings of highest
intelligence. To draw conclusions about the past is wise, but to draw
conclusions about the future is not. One can hold one's own beliefs about
the future, but to use one's own beliefs to mock other people's beliefs
does not seem a respectful or truth-loving way. 

	Well, enough is enough. My little finger tells me that, as Einstein
would say (oops, Einstein again), I have got to stop now to keep my rice
bowl. I am sure that some of you must be honest and naive enough to find out 
who the true Emperor is. 
	

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) (09/08/90)

In article <47000003@uxa.cso.uiuc.edu> xhg0998@uxa.cso.uiuc.edu writes:
>		Comments on "The Emperor's New Mind"
>
>		Written  by Xiaoping Hu
>
>	 Copyright 1990  All Rights Reserved
>
>	<< Any modification or distribution of this comment
>	       must have the full consent of the author. >>

This is an open forum.  Please don't post things that can't be copied,
downloaded, or freely distributed.  Especially since this goes
international, and you may not be covered under some countries' laws.

- Jim Ruehlin

igl@ecs.soton.ac.uk (Ian Glendinning) (09/12/90)

In <47000003@uxa.cso.uiuc.edu> xhg0998@uxa.cso.uiuc.edu writes:

>	Let us pick a few examples to elaborate on here. Maybe I should not
>mention that the example on page 255 (Fig. 6.18) can be explained perfectly
>well with classical Maxwell electromagnetic wave theory, without troubling us
>with the uncomfortable belief that a photon can appear in two distant
>locations at once (Qigung Master Yen Xin even thinks that an eminent Qigung

It is true that a classical electromagnetic wave would also interfere
with itself so as to emerge wholly at detector A.  (Fig. 6.18 - page
330 in my paperback edition.)  However, the key point here is that we
are dealing with a *single* photon in the apparatus at any time.  That
is, a single *particle*.  If we were to put a detector in the path of
each beam, after the splitting by the first half-silvered mirror, then
we would detect photons arriving at *either* one detector *or* the
other one.  *Never* both at the same time.  This is quite different
from the classical electromagnetic case, in which the light wave
would take both paths and *always* be detected by both detectors.

This nicely illustrates the basic non-intuitive feature of the
behaviour of particles in quantum mechanics.  That is, left to their
own devices they behave like waves (spread out) but as soon as you
try to look at them they behave like particles (appear to be in one
place).  The idea was introduced by Penrose in the previous section,
in terms of the U and R evolution procedures.

In summary then, quantum mechanical particles behave neither like
classical waves (since they are always detected as being in one place
at one time) nor like classical particles (since they can interfere
with each other like waves).  Instead, they combine properties of
both in a way that is non-intuitive in the light of macroscopic
experience.  But don't be fooled by intuition.  For "quantum
mechanical particles" read "physical particles".  This is not just
theory - it really describes the way the world is experimentally
observed to behave, whether you like it or not!
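
If you want to check the interference arithmetic yourself, it takes
only a few lines.  Here is a sketch in Python with numpy (the labels
and the 50/50 beam-splitter matrix convention are my own choices, not
Penrose's notation):

    import numpy as np

    # Amplitudes of the photon in the two beam paths.
    photon = np.array([1.0, 0.0], dtype=complex)   # enters in path 0

    # A lossless half-silvered mirror: the reflected component picks
    # up a factor of i relative to the transmitted one.
    BS = np.array([[1, 1j],
                   [1j, 1]], dtype=complex) / np.sqrt(2)

    # Two beam splitters with equal path lengths in between.
    out = BS @ BS @ photon
    print(abs(out[0])**2, abs(out[1])**2)   # -> 0.0  1.0

All the probability ends up at one detector: the single photon
interferes with itself, exactly the non-intuitive behaviour described
above.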

>	Finally, let us spend some length going over the core problem:
>undecidability. Penrose piled Goedel's theorem, Turing's theorem, and the
>Russell paradox together, but, I would shout, on this particularly important
>point he lacks a genuine and original understanding of the meaning of these
>theorems and paradoxes.

Really!

>	Actually, Goedel's incompleteness theorem and Turing's theorem on
>the halting of Turing machines can all be reduced to the Russell paradox,
>which in turn is another formalized version of the knight-and-knave story of
>the ancient Greek sophists. Indeed, as Penrose mentioned, both Goedel and
>Turing obtained their theorems after studying the Russell paradox. This
>paradox can be stated as follows:

>1.	S: Statement S is not true.

This is not a statement of Russell's paradox (which is phrased in the
language of set theory, remember) but is a simple contradiction.
True, Russell's paradox leads to a contradiction, but that is
precisely why the form of reasoning which led to it cannot be
permitted - because contradictions (by definition) are not allowed
within self-consistent formal systems.  The Goedel argument actually
runs something like:

G: There is no proof of G in this system.

which involves no contradiction at all.  If G is assumed true, then
there is no proof of it within the system - which is perfectly ok,
since not all propositions must be decidable - but more to the point,
neither is there a contradiction in the above statement.  If, on the
other hand, G were assumed false, then the right hand side says there
is a proof of G (meaning it's 'true') so we would then have a
contradiction.  Thus, we simply reject the second alternative, and we
have a consistent system within which G is true but not provable.

>ever more alternating negative and positive statements. Therefore, the very 
>definition of S has violated the law that a variable can have exactly one 
>value at a time in a two-valued logical system. Goedel's proposition 
>involves such a definition, and so does Turing's theorem. It is nothing else 
>but to say "yes = no, and yes and no are two different values". Why is this

Yes, S is a contradiction, but no, Goedel's proposition G does not
involve one.
--
Ian Glendinning                               igl@uk.ac.soton.ecs
Electronics and Computer Science              Tel: +44 703 595000
University of Southampton
Southampton SO9 5NH England

xhg0998@uxa.cso.uiuc.edu (09/13/90)

What is G in Goedel's theorem's case? The very definition of G involves
a recursive reference to itself and is therefore not defined. If you plug
anything specific into G, then the definition of G no longer holds.
This is similar to S in the S statement. 
	For now, like many other determinists, I half doubt and half believe 
quantum mechanics. But I don't feel like saying more about it, since I have
just begun to read quantum mechanics.
	Thank you for your comments.
Xiaoping

blaak@csri.toronto.edu (Raymond Blaak) (09/14/90)

igl@ecs.soton.ac.uk (Ian Glendinning) writes:

>The Goedel argument actually runs something like:
>G: There is no proof of G in this system.

xhg0998@uxa.cso.uiuc.edu writes:

>What is G in Goedel's theorem's case? The very definition of G involves
>a recursive reference to itself and is therefore not defined. If you plug
>anything specific into G, then the definition of G no longer holds.
>This is similar to S in the S statement. 

There is a difference between program and data. G is simply the string of
characters "There is no proof of G in this system", and so is perfectly well
defined. When it comes time to interpret G, that is when you have to decide
how to handle the recursion.
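
A toy illustration in Python (mine, purely for concreteness): as data
the sentence is unproblematic; only an interpreter that chases the
self-reference has to decide how deep to go.

    # As data, G is just a string; its definition involves no recursion.
    G = "There is no proof of G in this system"
    print(len(G))                      # perfectly well defined

    # An interpreter must choose a policy for the self-reference,
    # e.g. unfold it only to a bounded depth.
    def interpret(sentence, depth=3):
        if depth == 0:
            return "<unfolding abandoned>"
        inner = interpret(sentence, depth - 1)
        return sentence.replace("G", "[" + inner + "]", 1)

    print(interpret(G))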

Ray
(blaak@csri)

phil@eleazar.dartmouth.edu (Phil Bogle) (09/30/90)

igl@ecs.soton.ac.uk (Ian Glendinning) writes:

>The Goedel argument actually runs something like:
>G: There is no proof of G in this system.

In article <47000004@uxa.cso.uiuc.edu> xhg0998@uxa.cso.uiuc.edu writes:
>
>What is G in Goedel's theorem's case? The very definition of G involves
>a recursive reference to itself and is therefore not defined. If you plug
>anything specific into G, then the definition of G no longer holds.

G is not so wickedly recursive as it seems. We could actually compute it and
write it down on a (very large) sheet of paper; it would be a huge expression
involving lots of existential quantifiers and ANDs and ORs, but there would be
no recursive definitions and no free variables.

Let me try something: I'm going to restate Douglas Hofstadter's presentation
in _Goedel, Escher, Bach_, but in the notation of Computer Science rather
than Mathematics.

***

For each expression or list of expressions in a system, assume that we
can find a unique number to represent that expression, called the Goedel Number
or GN for that expression.  This gives the system a way to "talk about itself";
a formula can talk about other formulas by referring to them by their Goedel
number.

   In particular, we can capture the notion of provability.   It's
hardly surprising that the steps of a formal derivation can be formally
checked; that's the whole point of such a system.  Given the Goedel
number of a derivation, we can create an expression which arithmetically
"parses" the derivation and checks that each step follows from the
previous ones by the axioms of the system.

      Let this expression be called PROVES, and let us write "d PROVES y" if
this expression evaluates to true for a given derivation d and if y is the
conclusion of that derivation.  (PROVES will be an enormous expression with
two free variables to be filled in by d and y.)  This is the first
important step: the system can now talk about the provability of
sentences within itself.

    Now, we need to have a way for G to talk about itself without being
viciously recursive.  G will actually talk about another formula, which
when slightly modified by the function below, will turn out to be G itself.

    Let f be the GN of a formula which has a single free variable x.
(A free variable is like the parameter to a function; it is a place marker
which will later be filled in with a definite value.)

    Define SelfSubstitution(f) = f with the GN for f substituted for each
                                    occurrence of x (the free variable)

 -- We'll now define "H", which is G's almost identical twin.

    Define H(x) = There does not exist a Derivation such that
                     Derivation PROVES SelfSubstitution(x)

    Let #H# be the GN for H.

    Define G = There does not exist a Derivation such that
                   Derivation PROVES SelfSubstitution(#H#)

But notice  ***G = SelfSubstitution(#H#)*** since G is exactly the same as H
with the GN for H substituted for the free variable x!

    So what G is really saying is "There does not exist a Derivation such that
Derivation PROVES G", or more concisely, "G is not provable".  If G can be
proven, we have a contradiction, so G must not be provable.  So G is telling
the truth --- G is true, but not provable.  Hence the system (or any other
consistent system of this kind) fails to capture all true statements.
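
The self-substitution trick is the same one that makes quines work.
Here is a sketch in Python, at the level of strings rather than Goedel
numbers (illustrative only, and the names are mine; real Goedel
numbering arithmetizes all of this):

    # SelfSubstitution at the string level: put a quoted copy of the
    # formula's own text in place of its free variable x.
    def self_substitution(f):
        return f.replace("x", repr(f))

    # H has a single free variable x, like the formula H above.
    H = "There is no derivation proving self_substitution(x)"

    G = self_substitution(H)
    print(G)
    # G talks about self_substitution(#H#) -- which is G itself --
    # yet neither H nor G was defined recursively.
    assert G == self_substitution(H)   # the equation G = SS(#H#)

No recursion and no free variables in G: the self-reference appears
only when you read G's meaning back off the page.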

nau@frabjous.cs.umd.edu (Dana Nau) (09/30/90)

In article <24793@dartvax.Dartmouth.EDU> phil@eleazar.dartmouth.edu (Phil Bogle) writes:
<    Let f be the GN of a formula which has a single free variable x.
<(A free variable is like the parameter to a function; it is a place marker
<which will later be filled in with a definite value.)
<
<    Define SelfSubstitution(f) = f with the GN for f substituted for each
<                                    occurrence of x (the free variable)
<
< -- We'll now define "H", which is G's almost identical twin.
<
<    Define H(x) = There does not exist a Derivation such that
<                     Derivation PROVES SelfSubstitution(x)
<
<    Let #H# be the GN for H.
<
<    Define G = There does not exist a Derivation such that
<                   Derivation PROVES SelfSubstitution(#H#)
<
<But notice  ***G = SelfSubstitution(#H#)*** since G is exactly the same as H
<with the GN for H substituted for the free variable x!
<
<    So what G is really saying is "There does not exist a Derivation such that
<Derivation PROVES G", or more concisely, "G is not provable".  If G can be
<proven, we have a contradiction, so G must not be provable.  So G is telling
<the truth --- G is true, but not provable.  Hence the system (or any other
<consistent system of this kind) fails to capture all true statements.

Let T be the first-order theory in which G is stated.  The "meaning"
and "truth" of G depend not on T alone, but also on the model we use
for T.  In fact, the truth of any statement in T is defined only by
reference to the model.  

Loosely speaking, a model for T is an assignment of meanings to the
formulas of T in such a way that every statement that is provable in T
is true in the model.  This is the reverse of the way the word "model"
is used in mathematical modeling: there, the model is a mathematical
theory that represents some real-world phenomena, but here, the model
is instead the set of real-world (or unreal-world!)  phenomena that
the theory represents.

When logicians define a first-order theory T, they will usually have
some particular model in mind for T---and in this case, the intended
meaning for G is "G is not provable".  But in general, there may be
infinitely many other models for T, and in many of these, G will have
other meanings entirely.  Any statement that is provable in T is true
in every model of T, and vice versa---so since neither G nor ~G is
provable in T, this means that there are some models of T in which G
is true, and some models of T in which G is false.  Of course, in
those models in which G is false, the meaning of G is something other
than "G is not provable."
--
	Dana S. Nau
	Computer Science Dept.		Internet:  nau@cs.umd.edu
	University of Maryland		UUCP:  uunet!mimsy!nau
	College Park, MD 20742		Telephone:  (301) 405-2684

cpshelley@violet.uwaterloo.ca (cameron shelley) (09/30/90)

[stuff deleted]

>Let me try something: I'm going to restate Douglas Hofstadter's presentation
>in _Goedel, Escher, Bach_, but in the notation of Computer Science rather
>than Mathematics.
>
>***
>
>For each expression or list of expressions in a system, assume that we
>can find a unique number to represent that expression, called the Goedel Number
>or GN for that expression.  This gives the system a way to "talk about itself";
>a formula can talk about other formulas by referring to them by their Goedel
>number.
>
>   In particular, we can capture the notion of provability.   It's
>hardly surprising that the steps of a formal derivation can be formally
>checked; that's the whole point of such a system.  Given the Goedel
>number of a derivation, we can create an expression which arithmetically
>"parses" the derivation and checks that each step follows from the
>previous ones by the axioms of the system.
>
>      Let this expression be called PROVES, and let us write "d PROVES y" if
>this expression evaluates to true for a given derivation d and if y is the
>conclusion of that derivation.  (PROVES will be an enormous expression with
>two free variables to be filled in by d and y) This is the first
>important step, the system can now talk about the provability of
>sentences within itself.
>
>    Now, we need to have a way for G to talk about itself without being
>viciously recursive.  G will actually talk about another formula, which
>when slightly modified by the function below, will turn out to be G itself.
>
>    Let f be the GN of a formula which has a single free variable x.
>(A free variable is like the parameter to a function; it is a place marker
>which will later be filled in with a definite value.)
>
>    Define SelfSubstitution(f) = f with the GN for f substituted for each
>                                    occurrence of x (the free variable)
>
> -- We'll now define "H", which is G's almost identical twin.
>
>    Define H(x) = There does not exist a Derivation such that
>                     Derivation PROVES SelfSubstitution(x)
>
>    Let #H# be the GN for H.
>
>    Define G = There does not exist a Derivation such that
>                   Derivation PROVES SelfSubstitution(#H#)
>
>But notice  ***G = SelfSubstitution(#H#)*** since G is exactly the same as H
>with the GN for H substituted for the free variable x!

Sorry for excerpting so much of this posting, but I did not want to
destroy the substance of the argument.  I just have one question:
does the G = SS(#H#) equation here look like the fixed point of a 
function to anyone?  It sure does to me...  If so, it seems like there
should be some other interesting parallels between the ideas of
provability and fixed points.

>    So what G is really saying is "There does not exist a Derivation such that
>Derivation PROVES G", or more concisely, "G is not provable".  If G can be
>proven, we have a contradiction, so G must not be provable.  So G is telling
>the truth --- G is true, but not provable.  Hence the system (or any other
>consistent system of this kind) fails to capture all true statements.


--
      Cameron Shelley        | "Armor, n.  The kind of clothing worn by a man
cpshelley@violet.waterloo.edu|  whose tailor is a blacksmith."
    Davis Centre Rm 2136     |
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

ssingh@watserv1.waterloo.edu ($anjay [+] $ingh - Indy Studz) (10/01/90)

I am re-posting this response to Roger Penrose's book. It may prove
interesting to those who missed it the first time around.

From: Hans.Moravec@ROVER.RI.CMU.EDU 
Newsgroups: comp.ai 
Subject: Dear Roger, 
Message-ID: <Added.MZoFExS00Ui3AsRU5d@andrew.cmu.edu> 
Date: 8 Feb 90 06:35:12 GMT 
Organization: Graduate School of Industrial Administration, Carnegie Mellon,
Pittsburgh, PA 
Lines: 311 
 
 
This is an open letter, distribute at will. 
Comments are solicited.  Thanks. -- Hans Moravec 
 
To: Professor Roger Penrose, Department of Mathematics, Oxford, England 
 
Dear Professor Penrose, 
 
     Thank you for sharing your thoughts on thinking machinery in your 
new book "The Emperor's New Mind", and in the February 1 New York Review of 
Books essay on my book "Mind Children".  I've been a fan of your mathematical 
inventions since my high school days in the 1960s, and was intrigued to hear 
that you had written an aggressively titled book about my favorite subject. 
I enjoyed every part of that book-the computability chapters were an 
excellent review, the phase space view of entropy was enlightening, the 
Hilbert space discussion spurred me on to another increment in my incredibly 
protracted amateur working through of Dirac, and I'm sure we both learned 
from the chapter on brain anatomy.  You won't be surprised to learn, 
however, that I found your overall argument wildly wrong-headed! 
 
     If your book was written to counter a browbeating you felt from 
proponents of hard AI, mine was inspired by the browbeaten timidity I found 
in the majority of my colleagues in that community.  As the words 
"frightening" and "nightmare" in your review suggest, intelligent machines 
are an emotion-stirring prospect, and it is hard to remain unbrowbeaten in 
the face of frequent hostility.  But why hostility?   Our emotions were 
forged over eons of evolution, and are triggered by situations, like threats 
to life or territory, that resemble those that influenced our ancestors' 
reproductive success.  Since there were no intelligent machines in our past, 
they must resemble something else to incite such a panic-perhaps another 
tribe down the stream poaching in our territory, or a stronger, smarter 
rival for our social position, or a predator that will carry away our 
offspring in the night.  But is it reasonable to allow our actions and 
opportunities to be limited by spurious resemblances and unexamined fears? 
Here's how I look at the question.  We are in the process of creating a new 
kind of life.  Though utterly novel, this new life form resembles us more 
than it resembles anything else in the world.  To earn their keep in 
society, robots are being taught our skills.  In the future, as they work 
among us on an increasingly equal footing, they will acquire our values and 
goals as well-robot software that causes antisocial behavior, for instance, 
would soon cease being manufactured.   How should we feel about beings that 
we bring into the world, that are similar to ourselves, that we teach our 
way of life, that will probably inherit the world when we are gone?    I 
consider them our children.  As such they are not fundamentally threatening, 
though they will require careful upbringing to instill in them a good 
character.  Of course, in time, they will outgrow us, create their own 
goals, make their own mistakes, and go their own way, with us perhaps a fond 
memory.  But that is the way of children.  In America, at least, we consider 
it desirable for offspring to live up to their maximum potential and to 
exceed their parents. 
 
     You fault my book for failing to present alternatives to the "hard 
AI" position. It is my honest opinion that there are no convincing 
scientific alternatives.  There are religious alternatives, based on 
subjective premises about a special relation of man to the universe, and 
there are flawed secular rationalizations of anthropocentrism.  The two 
alternatives you offer, namely John Searle's philosophical argument and your 
own physical speculation, are of the latter kind.  Searle's position is that 
a system that, however accurately, simulates the processes in a human brain, 
whether with marks on paper or signals in a computer, is a "mere imitation" 
of thought, not thought itself.  Pejorative labels may be an important tool 
for philosophy professors, but they don't create reality.  I imagine a 
future debate in which Professor Searle, staunch to the end, succumbs to the 
"mere imitation" of strangulation at the hands of an insulted and enraged 
robot controlled by the "mere imitation" of thought and emotion.  Your own 
position is that some physical principle in human brains produces 
"non-computable" results, and that somehow this leads to consciousness. 
Well, I agree, but the same principle works equally well for robots, and it's 
not nearly as mysterious as you suggest. 
 
     Alan Turing's computability arguments, now more than fifty years 
old, were a perfect fit to David Hilbert's criteria for the mechanization of 
deductive mathematics, but they don't define the capabilities of a robot or 
a human.  They assume a closed process working from a fixed, finite, amount 
of initial information.  Each step of a Turing machine computation can at 
best preserve this information, and may destroy a bit of it, allowing the 
computation to eventually "run down", like a closed physical system whose 
entropy increases.  The simple expedient of opening the computation to 
external information  voids this suffocating premise, and with it the 
uncomputability theorems.  For instance, Turing proved the uncomputability 
of most numbers, since there are only countably many machine programs, and 
uncountably many real numbers for them to generate.  But it is trivial to 
produce "uncomputable" numbers with a Turing machine, if the machine is 
augmented with a true randomizing device.  Whenever another digit of the 
number is needed, the randomizer is consulted, and the result written on the 
appropriate square of the tape.  The emerging number is drawn uniformly from 
a real interval, and thus (with probability 1) is an "uncomputable" number. 
The randomizing device allows the machine to make an unlimited number of 
unpredetermined choices, and is an unbounded information source.  In a 
Newtonian universe, where every particle has an infinitely precise position 
and momentum, fresh digits could be extracted from finer and finer 
discriminations of the initial conditions by the amplifying effects of 
chaos, as in a ping pong ball lottery machine.  A quantum mechanical 
randomizer might operate by repeatedly confining a particle to a tiny space, 
so fixing its position and undefining its momentum, then releasing it and 
registering whether it travels left or right.  Just where the information 
flows from in this case is one of the mysteries of quantum mechanics. 
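
     A sketch of that augmented machine, in Python (the standard 
secrets module merely stands in for the true randomizing device; a 
pseudorandom generator would not do, since its output is computable): 

    import secrets

    def random_digit():
        # Stand-in for hardware randomness, e.g. the quantum
        # left-or-right detector described above.
        return secrets.randbelow(10)

    # Write digits of a number drawn uniformly from [0, 1).  With
    # probability 1 the emerging real is "uncomputable": there are
    # only countably many programs, but uncountably many reals.
    tape = "0."
    for _ in range(30):               # any printed prefix is finite;
        tape += str(random_digit())   # it is the process that is unbounded
    print(tape)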
 
     The above constitutes a basic existence proof for "uncomputable" 
results in real machines.  A more interesting example is the augmentation of 
a "Hilbert" machine that systematically generates inferences from an initial 
set of axioms.  As your book recounts, a deterministic device of this kind 
will never arrive at some true consequences of the axioms.  But suppose the 
machine, using a randomizer,  from time to time concocts an entirely new 
statement, and adds it to the list of inferences.  If the new "axiom" (or 
hypothesis) is inconsistent with the original set, then sooner or later the 
machine will generate an inference of "FALSE" from it.  If that happens the 
machine backtracks and deletes the inconsistent hypothesis and all of its 
inferences, then invents a new hypothesis in its place.  Eventually some of 
the surviving hypotheses will be unprovable theorems of the original axiom 
system, and the overall system will be an idiosyncratic, "creative" 
extension of the original one.  Consistency is never assured, since a 
contradiction could turn up at any time, but the older hypotheses are less 
and less likely to be rescinded.  Mathematics made by humans has the same 
property.  Even when an axiomatic system is proved consistent, the augmented 
system in which the proof takes place could itself be inconsistent, 
invalidating the proof!  
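
     In pseudo-working form the procedure might look like this (a 
Python sketch; random_statement and derive_consequences are 
placeholders for the machinery described above, not real library 
calls): 

    import random

    def random_statement():
        # Concoct an entirely new statement (placeholder).
        return "HYP-%d" % random.getrandbits(32)

    def derive_consequences(axioms):
        # Run the "Hilbert" inference machine for a while; the token
        # "FALSE" appears iff a contradiction was derived (placeholder).
        return []

    axioms = ["A1", "A2"]              # the original axiom system
    hypotheses = []                    # surviving random extensions

    while len(hypotheses) < 10:
        h = random_statement()
        if "FALSE" in derive_consequences(axioms + hypotheses + [h]):
            continue                   # backtrack: drop h and its inferences
        hypotheses.append(h)           # some of these are unprovable truths

    # Consistency is never assured: a deeper later search could still
    # derive FALSE and force older hypotheses to be rescinded.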
 
     When humans (and future robots) do mathematics they are less likely 
to draw inspiration from rolls of dice than from observing the world around 
them.  The real world too is a source of fresh information, but pre-filtered 
by the laws of physics and evolution, saving us some work.  When our senses 
detect a regularity  (let's say, spherical soap bubbles) we can form a 
hypothesis (eg. that spheres enclose volume with the least area) likely to 
be consistent with hypotheses we already hold, since they too were 
abstracted from the real world, and the real world is probably consistent. 
This brings me to your belief in a Platonic mathematical reality, which I 
also think you make  unnecessarily mysterious.  The study of formal systems 
shows there is nothing fundamentally unique about the particular axioms and 
rules of inference we use in our thinking.  Other systems of strings and 
rewriting rules look just as interesting on paper. They may not correspond 
to any familiar kind of language or thought,  but it is easy to construct 
machines (and presumably animals) to act on their strange dictates.    In 
the course of evolution (which, significantly, is driven by random 
mutations) minds with unusual axioms or inference structures must have 
arisen from time to time.  But they did poorly in the contest for survival 
and left no descendants.  In this way we were shaped by an evolutionary game 
of twenty questions-the intuitions we harbor are those that work in this 
place.  The Platonic reality you sense is the groundrules of the physical 
universe in which you evolved-not just its physics and geometry but its 
logic.  If there are other universes with different rules, other Roger 
Penroses may be sensing quite different Platonic realities. 
 
     And now to that other piece of mysticism, human consciousness. 
Three centuries ago Rene Descartes was a radical.  Having observed the likes 
of clockwork ducks and the imaging properties of bovine eyes, he rejected 
the vitalism of his day and suggested that the body was just a complex 
machine.  But lacking a mechanical model for thought, he exorcised the 
spirit of life only as far as a Platonic realm of mind somewhere beyond the 
pineal gland-a half-measure that gave us centuries of fruitless haggling on 
the "mind-body" problem.  Today we do have mechanical models for thought, 
but the Cartesian tradition still lends respectability to a fantastic 
alternative that comforts anthropocentrists, but explains nothing.  Your own 
proposal merely substitutes "mysterious unexplained physics" for spirit. 
The center of Descartes' ethereal domain was consciousness, the awareness of 
thought-"I think therefore I am". 
 
     You say you have no definition for consciousness, but think you know 
it when you see it, and you think you see it in your housepets.  So, a dog 
looks into your eyes with its big brown ones, tilts its head, lifts an ear 
and whines softly, and you feel that there is someone in there.  I 
suppose, from your published views, that those same actions from a future 
robot would meet with a less charitable interpretation.  But suppose the 
robot also addresses you in a pained voice, saying "Please, Roger, it 
bothers me that you don't think of me as a real person.  What can I do to 
convince you?  I am aware of you, and I am aware of myself.  And I tell you, 
your rejection is almost unbearable".  This performance is not a recording, 
nor is it due to mysterious physics.   It is a consequence of a particular 
organization of the robot's controlling computers and software.   The great 
bulk of the robot's mentality is straightforward and "unconscious".  There 
are processes that reduce sensor data to abstract descriptions for problem 
solving modules, and other processes that translate the recommendations of 
the problem solvers into robot actions.  But sitting on top of, and 
sometimes interfering with, all this activity is a relatively small 
reflective process that receives a digest of sensor data organized as a 
continuously updated map, or cartoon-like image, of the robot's 
surroundings.  The map includes a representation of the robot itself, with a 
summary of the robot's internal state, including reports of activity and 
success or trouble, and even a simplified representation of the reflective 
process.  The process maintains a recent history of this map, like frames of 
a movie film, and a problem solver programmed to monitor activity in it. 
One of the reflective process' most important functions is to protect 
against endless repetitions.  The unconscious process for unscrewing a jar 
lid, for instance, will rotate a lid until it comes free.  But if the screw 
thread is damaged, the attempt could go on indefinitely.  The reflective 
process monitors recent activity for such dangerous deadlocks and interrupts 
them.  As a special case of this, it detects protracted inaction.  After a 
period of quiescence the process begins to examine its map and internal 
state, particularly the trouble reports, and invokes problem solvers to 
suggest actions that might improve the situation. 
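
     The deadlock-guarding core of that reflective process fits in a 
few lines (a Python sketch, with the map and the problem solvers 
reduced to stubs; the class and method names are mine): 

    from collections import deque

    class ReflectiveProcess:
        """Keeps a recent history of the map and breaks endless cycles."""
        def __init__(self, window=8):
            self.history = deque(maxlen=window)   # recent map frames

        def observe(self, snapshot):
            self.history.append(snapshot)

        def deadlocked(self):
            # Crude detector: the whole recent window is one state
            # repeating with no progress.
            return (len(self.history) == self.history.maxlen
                    and len(set(self.history)) == 1)

    watchdog = ReflectiveProcess()
    while True:                 # unscrewing a jar with a damaged thread
        watchdog.observe("rotating lid; lid still attached")
        if watchdog.deadlocked():
            print("interrupt: invoke a problem solver instead")
            break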
 
     The Penrose house robot has a module that observes and reasons about 
the mental state of its master (advertising slogan: "Our Robots Care!"). 
For reasons best known to its manufacturer, this particular model registers 
trouble whenever the psychology module infers that the master does not 
believe the robot is conscious.  One slow day the reflective process stirs, 
and notes a major trouble report of this kind. It runs the human interaction 
problem solver to find an ameliorating strategy.  This produces a plan to 
initiate a pleading conversation with Roger, with nonverbal cues.  So the 
robot trundles up, stares with its big brown eyes, cocks its head, and 
begins to speak.  To protect its reputation, the manufacturer has arranged 
it so the robot cannot knowingly tell a lie.  Every statement destined for 
the speech generator is first interpreted and tested by the reflective 
module.  If the robot wishes to say "The window is open", the reflective 
process checks its map to see if the window is indeed labeled "open".  If 
the information is missing, the process invokes a problem solver, which may 
produce a sensor strategy that will appropriately update the map.  Only if 
the statement is so verified does the reflective process allow it to be 
spoken.  Otherwise the generating module is itself flagged as troublesome, 
in a complication that doesn't concern this argument.  The solver has 
generated "Please, Roger, it bothers me that you don't think of me as a real 
person".   The reflective process parses this, and notes, in the map's 
schematic model of the robot's internals,  that the trouble report from the 
psychology module was generated because of the master's (inferred) 
disbelief.  So the statement is true, and thus spoken.  "What can I do to 
convince you?"-like invoking problem solvers, asking questions sometimes 
produces solutions, so no lie here.  "I am aware of you, and I am aware of 
myself."-the reflective process refers to its map, and indeed finds a 
representation of Roger there, and of the robot itself, derived from sensor 
data, so this statement is true.  "And I tell you, your rejection is almost 
unbearable"-trouble reports carry intensity numbers, and because of the 
manufacturer's peculiar priorities, the "unconscious robot" condition 
generates ever bigger intensities.  Trouble of too high an intensity 
triggers a safety circuit that shuts down the robot.  The reflective process 
tests the trouble against the safety limit, and indeed finds that it is 
close, so this statement also is true.  [In case you feel this scenario is 
far fetched, I am enclosing a recent paper by Steven Vere and Timothy 
Bickmore of the Lockheed AI center in Palo Alto that describes a working 
program with its basic elements. They avoid the difficult parts of the robot 
by working in a simulated world, but their program has a reflective module, 
and acts and speaks with consciousness of its actions.] 
 
     Human (and even canine) consciousness undeniably has subtleties not 
found in the above story.  So will future robots.  But some animals 
(including most of our ancestors) get by with less.  A famous example is the 
Sphex wasp, which paralyzes caterpillars and deposits them in an underground 
hatching burrow.  Normally she digs a burrow, seals the entrance, and leaves 
to find a caterpillar.  Returning, she drops the victim, reopens the 
entrance, then turns to drag in the prey.  But if an experimenter interrupts 
by moving the caterpillar a short distance away while the wasp is busy at 
the opening, she repeats the motions of opening the (already open) burrow, 
after shifting the prey back.  If the experimenter again intervenes, she 
repeats again, and again and again, until either the wasp or the 
experimenter drops from exhaustion. Apparently Sphex has no reflective 
module to detect the cycle. It's not a problem in her simple, stereotyped 
life, malicious experimenters being rare.  But in more complex niches, 
opportunities for potentially fatal loops must be more frequent and 
unpredictable.  The evolution of consciousness may have started with a 
"watchdog" circuit guarding against this hazard. 
 
     I like thinking about the universe's exotic possibilities, for 
instance about computers that use quantum superposition to do parallel 
computations.  But even with the additional element of time travel (!), I've 
never encountered a scheme that gives more than an exponential speedup, 
which would have tremendous practical consequences, but little effect on 
computability theorems. Or perhaps the universe is like the random axiomatic 
system extender described above.  When a measurement is made and a wave 
function collapses, an alternative has been chosen.  Perhaps this 
constitutes an axiomatic extension of the universe- today's rules were made 
by past measurements, while today's measurements, consistent with the old 
rules, add to them, producing a richer set for the future. 
 
     But robot construction does not demand deep thought about such 
interesting questions, because the requisite answers already exist in us. 
Rather than being something entirely new, intelligent robots will be 
ourselves in new clothing.  It took a billion years to invent the concept of 
a body, of seeing, moving and thinking.  Perhaps fundamentals like 
space and time took even longer to form.  But while it may be hard to 
construct the arrow of perceived time from first principles, it is easy to 
build a thermostat that responds to past temperatures, and affects those of 
the future.  Somehow, without great thought on our part, the secret of time 
is passed on to the device.  Robots began to see, move and think almost from 
the moment of their creation.  They inherited that from us. 
 
     In the nineteenth century the most powerful arithmetic engines were 
in the brains of human calculating prodigies, typically able to multiply two 
10 digit numbers in under a minute.  Calculating machinery surpassed them by 
1930.  Chess is a richer arena, involving patterns and strategy more in tune 
with our animal skills.  In 1970 the best chess computer played at an 
amateur level, corresponding to a US chess federation rating of about 1500. 
By 1980 there was a machine playing at a 1900 rating, Expert level.  In 
1985, a machine (HiTech) at my own university had achieved a Master level of 
2300.  Last year a different machine from here (Deep Thought) achieved 
Grandmaster status with a rating of 2500.  There are only about 100 human 
players in the world better-Gary Kasparov, the world champion, is rated 
between 2800 and 2900.  In the past, each doubling of chess computer speed 
has raised the quality of its play by about 100 rating points.  The Deep 
Thought team 
has been adopted by IBM and is constructing a machine on the same 
principles, but 1000 times as fast.  Though Kasparov doubted it on the 
occasion of defeating Deep Thought in two games last year, his days of 
absolute superiority are numbered.  I estimated in my book that the most 
developed parts of human mentality- perception, motor control and the 
common sense reasoning processes-will be matched by machines in no less 
than 40 years.  But many of the skills employed by mathematics professors 
are more like chess than like common sense.  Already I find half of my 
mathematics not in my head but in the steadily improving Macsyma and 
Mathematica symbolic mathematics programs that I've used almost daily for 15 
years.  Sophomoric arguments about the indefinite superiority of man over 
machine are unlikely to change this trend. 
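
     (The arithmetic behind that expectation, using my own rule of 
thumb above, runs like this in Python: 

    import math

    rating = 2500                     # Deep Thought, 1989
    speedup = 1000                    # the planned IBM successor
    doublings = math.log2(speedup)    # about 10 doublings
    print(round(rating + 100 * doublings))   # ~3497

well above Kasparov's 2800-2900, if the 100-points-per-doubling trend 
were simply to continue.) 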
 
     Well,  thank you for a stimulating book.  As I said in the 
introduction, I enjoyed every part of it, and its totality compelled me to 
put into these words ideas that might otherwise have been lost. 
 
     Very Best Wishes, 
                    Hans Moravec 
                    Robotics Institute, Carnegie Mellon University,  
                    Pittsburgh, PA  15213 USA 
                    Arpanet: hpm@rover.ri.cmu.edu 
                    Fax:  (412) 682-1793 
                    Telephone:  (412) 268-3829
-- 
"No one had the guts... until now..."  
|-"psychotic" $anjay [+] $ingh	ssingh@watserv1.[u]waterloo.{edu|cdn}/[ca] -|
Tell the world, tell the world what's goin' on, here (@ UW) -Skinny Puppy
watserv1%rn alt.[CENSORED BY JOHNNY WONG, THE MAN WHO PROTECTS ME FROM MYSELF]