[sci.nanotech] Dear Roger,

Hans.Moravec@rover.ri.cmu.edu (02/09/90)

This is an open letter, distribute at will.
Comments are solicited.  Thanks. -- Hans Moravec

To: Professor Roger Penrose, Department of Mathematics, Oxford, England

Dear Professor Penrose,

	Thank you for sharing your thoughts on thinking machinery in your
new book "The Emperor's New Mind", and in the February 1 New York Review of
Books essay on my book "Mind Children".  I've been a fan of your mathematical
inventions since my high school days in the 1960s, and was intrigued to hear
that you had written an aggressively titled book about my favorite subject.
I enjoyed every part of that book -- the computability chapters were an
excellent review, the phase space view of entropy was enlightening, the
Hilbert space discussion spurred me on to another increment in my incredibly
protracted amateur working-through of Dirac, and I'm sure we both learned
from the chapter on brain anatomy.  You won't be surprised to learn,
however, that I found your overall argument wildly wrongheaded!

	If your book was written to counter a browbeating you felt from
proponents of hard AI, mine was inspired by the browbeaten timidity I found
in the majority of my colleagues in that community.  As the words
"frightening" and "nightmare" in your review suggest, intelligent machines
are an emotion-stirring prospect, and it is hard to remain unbrowbeaten in
the face of frequent hostility.  But why hostility?   Our emotions were
forged over eons of evolution, and are triggered by situations, like threats
to life or territory, that resemble those that influenced our ancestors'
reproductive success.  Since there were no intelligent machines in our past,
they must resemble something else to incite such a panic -- perhaps another
tribe down the stream poaching in our territory, or a stronger, smarter
rival for our social position, or a predator that will carry away our
offspring in the night.  But is it reasonable to allow our actions and
opportunities to be limited by spurious resemblances and unexamined fears?
Here's how I look at the question.  We are in the process of creating a new
kind of life.  Though utterly novel, this new life form resembles us more
than it resembles anything else in the world.  To earn their keep in
society, robots are being taught our skills.  In the future, as they work
among us on an increasingly equal footing, they will acquire our values and
goals as well -- robot software that causes antisocial behavior, for instance,
would soon cease being manufactured.   How should we feel about beings that
we bring into the world, that are similar to ourselves, that we teach our
way of life, that will probably inherit the world when we are gone?    I
consider them our children.  As such they are not fundamentally threatening,
though they will require careful upbringing to instill in them a good
character.  Of course, in time, they will outgrow us, create their own
goals, make their own mistakes, and go their own way, with us perhaps a fond
memory.  But that is the way of children.  In America, at least, we consider
it desirable for offspring to live up to their maximum potential and to
exceed their parents.

	You fault my book for failing to present alternatives to the "hard
AI" position. It is my honest opinion that there are no convincing
scientific alternatives.  There are religious alternatives, based on
subjective premises about a special relation of man to the universe, and
there are flawed secular rationalizations of anthropocentrism.  The two
alternatives you offer, namely John Searle's philosophical argument and your
own physical speculation, are of the latter kind.  Searle's position is that
a system that, however accurately, simulates the processes in a human brain,
whether with marks on paper or signals in a computer, is a "mere imitation"
of thought, not thought itself.  Pejorative labels may be an important tool
for philosophy professors, but they don't create reality.  I imagine a
future debate in which Professor Searle, staunch to the end, succumbs to the
"mere imitation" of strangulation at the hands of an insulted and enraged
robot controlled by the "mere imitation" of thought and emotion.  Your own
position is that some physical principle in human brains produces
"non-computable" results, and that somehow this leads to consciousness.
Well, I agree, but the same principle works equally well for robots, and it's
not nearly as mysterious as you suggest.

	Alan Turing's computability arguments, now more than fifty years
old, were a perfect fit to David Hilbert's criteria for the mechanization of
deductive mathematics, but they don't define the capabilities of a robot or
a human.  They assume a closed process working from a fixed, finite amount
of initial information.  Each step of a Turing machine computation can at
best preserve this information, and may destroy a bit of it, allowing the
computation to eventually "run down", like a closed physical system whose
entropy increases.  The simple expedient of opening the computation to
external information voids this suffocating premise, and with it the
uncomputability theorems.  For instance, Turing proved the uncomputability
of most numbers, since there are only countably many machine programs, and
uncountably many real numbers for them to generate.  But it is trivial to
produce "uncomputable" numbers with a Turing machine, if the machine is
augmented with a true randomizing device.  Whenever another digit of the
number is needed, the randomizer is consulted, and the result written on the
appropriate square of the tape.  The emerging number is drawn uniformly from
a real interval, and thus (with probability 1) is an "uncomputable" number.
The randomizing device allows the machine to make an unlimited number of
unpredetermined choices, and is an unbounded information source.  In a
Newtonian universe, where every particle has an infinitely precise position
and momentum, fresh digits could be extracted from finer and finer
discriminations of the initial conditions by the amplifying effects of
chaos, as in a ping pong ball lottery machine.  A quantum mechanical
randomizer might operate by repeatedly confining a particle to a tiny space,
so fixing its position and undefining its momentum, then releasing it and
registering whether it travels left or right.  Just where the information
flows from in this case is one of the mysteries of quantum mechanics.
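
	To make the randomizing device concrete, here is a minimal sketch
in Python (the function name is my own invention, and the operating
system's entropy pool merely stands in for a physical randomizer):

    import secrets

    def fresh_digits(n):
        """Write n binary digits of an emerging real in [0, 1),
        consulting the randomizer once per digit.  Fed by a true
        entropy source, the expansion is, with probability 1, an
        "uncomputable" number."""
        return [secrets.randbits(1) for _ in range(n)]

    # Each call draws fresh, unpredetermined information into the
    # computation -- an unbounded information source:
    print(fresh_digits(20))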

	The above constitutes a basic existence proof for "uncomputable"
results in real machines.  A more interesting example is the augmentation of
a "Hilbert" machine that systematically generates inferences from an initial
set of axioms.  As your book recounts, a deterministic device of this kind
will never arrive at some true consequences of the axioms.  But suppose the
machine, using a randomizer, from time to time concocts an entirely new
statement, and adds it to the list of inferences.  If the new "axiom" (or
hypothesis) is inconsistent with the original set, then sooner or later the
machine will generate an inference of "FALSE" from it.  If that happens the
machine backtracks and deletes the inconsistent hypothesis and all of its
inferences, then invents a new hypothesis in its place.  Eventually some of
the surviving hypotheses will be unprovable theorems of the original axiom
system, and the overall system will be an idiosyncratic, "creative"
extension of the original one.  Consistency is never assured, since a
contradiction could turn up at any time, but the older hypotheses are less
and less likely to be rescinded.  Mathematics made by humans has the same
property.  Even when an axiomatic system is proved consistent, the augmented
system in which the proof takes place could itself be inconsistent,
invalidating the proof! 
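
	In Python-flavored pseudocode, the extended machine might look like
the following sketch, where consequences() and invent() are hypothetical
stand-ins for the inference engine and the randomizing concocter:

    from itertools import islice

    def creative_extension(axioms, consequences, invent, rounds,
                           horizon=1000):
        """Randomly extend an axiom system, deleting any hypothesis
        caught generating FALSE.  consequences(stmts) lazily
        enumerates inferences of stmts; invent() concocts a random
        new statement.  Both are assumed, not implemented, here."""
        hypotheses = []
        for _ in range(rounds):
            hyp = invent()              # concoct a candidate "axiom"
            caught = any(
                c == "FALSE"
                for c in islice(consequences(axioms + hypotheses + [hyp]),
                                horizon))
            if caught:
                continue                # contradiction: discard it
            hypotheses.append(hyp)      # a survivor -- though consistency
                                        # is never assured, since FALSE
                                        # could turn up past the horizon
        return hypotheses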

	When humans (and future robots) do mathematics they are less likely
to draw inspiration from rolls of dice than by observing the world around
them.  The real world too is a source of fresh information, but pre-filtered
by the laws of physics and evolution, saving us some work.  When our senses
detect a regularity (let's say, spherical soap bubbles), we can form a
hypothesis (e.g., that spheres enclose a given volume with the least area) likely to
be consistent with hypotheses we already hold, since they too were
abstracted from the real world, and the real world is probably consistent.
This brings me to your belief in a Platonic mathematical reality, which I
also think you make unnecessarily mysterious.  The study of formal systems
shows there is nothing fundamentally unique about the particular axioms and
rules of inference we use in our thinking.  Other systems of strings and
rewriting rules look just as interesting on paper. They may not correspond
to any familiar kind of language or thought, but it is easy to construct
machines (and presumably animals) to act on their strange dictates.    In
the course of evolution (which, significantly, is driven by random
mutations) minds with unusual axioms or inference structures must have
arisen from time to time.  But they did poorly in the contest for survival
and left no descendants.  In this way we were shaped by an evolutionary game
of twenty questions -- the intuitions we harbor are those that work in this
place.  The Platonic reality you sense is the ground rules of the physical
universe in which you evolved -- not just its physics and geometry but its
logic.  If there are other universes with different rules, other Roger
Penroses may be sensing quite different Platonic realities.

	And now to that other piece of mysticism, human consciousness.
Three centuries ago Rene Descartes was a radical.  Having observed the likes
of clockwork ducks and the imaging properties of bovine eyes, he rejected
the vitalism of his day and suggested that the body was just a complex
machine.  But lacking a mechanical model for thought, he exorcised the
spirit of life only as far as a Platonic realm of mind somewhere beyond the
pineal gland -- a half-measure that gave us centuries of fruitless haggling on
the "mind-body" problem.  Today we do have mechanical models for thought,
but the Cartesian tradition still lends respectability to a fantastic
alternative that comforts anthropocentrists, but explains nothing.  Your own
proposal merely substitutes "mysterious unexplained physics" for spirit.
The center of Descartes' ethereal domain was consciousness, the awareness of
thought-"I think therefore I am".

	You say you have no definition for consciousness, but think you know
it when you see it, and you think you see it in your housepets.  So, a dog
looks into your eyes with its big brown ones, tilts its head, lifts an ear
and whines softly, and you feel that there is someone in there.  I
suppose, from your published views, that those same actions from a future
robot would meet with a less charitable interpretation.  But suppose the
robot also addresses you in a pained voice, saying "Please, Roger, it
bothers me that you don't think of me as a real person.  What can I do to
convince you?  I am aware of you, and I am aware of myself.  And I tell you,
your rejection is almost unbearable".  This performance is not a recording,
nor is it due to mysterious physics.   It is a consequence of a particular
organization of the robot's controlling computers and software.   The great
bulk of the robot's mentality is straightforward and "unconscious".  There
are processes that reduce sensor data to abstract descriptions for problem
solving modules, and other processes that translate the recommendations of
the problem solvers into robot actions.  But sitting on top of, and
sometimes interfering with, all this activity is a relatively small
reflective process that receives a digest of sensor data organized as a
continuously updated map, or cartoon-like image, of the robot's
surroundings.  The map includes a representation of the robot itself, with a
summary of the robot's internal state, including reports of activity and
success or trouble, and even a simplified representation of the reflective
process.  The process maintains a recent history of this map, like frames of
a movie film, and runs a problem solver programmed to monitor activity in it.
One of the reflective process' most important functions is to protect
against endless repetitions.  The unconscious process for unscrewing a jar
lid, for instance, will rotate a lid until it comes free.  But if the screw
thread is damaged, the attempt could go on indefinitely.  The reflective
process monitors recent activity for such dangerous deadlocks and interrupts
them.  As a special case of this, it detects protracted inaction.  After a
period of quiescence the process begins to examine its map and internal
state, particularly the trouble reports, and invokes problem solvers to
suggest actions that might improve the situation.
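
	The deadlock-guarding core of such a reflective process is small.
Here is a sketch, with the map frames reduced to hashable snapshots and
the history length and repetition threshold invented for illustration:

    from collections import deque

    class ReflectiveProcess:
        """Keeps a recent history of the world map, like frames of a
        movie film, and flags activity that repeats without progress
        or stalls entirely."""

        def __init__(self, history=100, max_repeats=5):
            self.frames = deque(maxlen=history)
            self.max_repeats = max_repeats

        def observe(self, frame):
            self.frames.append(frame)

        def deadlocked(self):
            # Dangerous cycle: the newest map state has recurred too
            # often -- the jar lid turning without ever coming free.
            if not self.frames:
                return False
            return self.frames.count(self.frames[-1]) > self.max_repeats

        def quiescent(self):
            # Protracted inaction: a full history window with no change,
            # the cue to examine trouble reports and invoke solvers.
            return (len(self.frames) == self.frames.maxlen
                    and len(set(self.frames)) == 1)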

	The Penrose house robot has a module that observes and reasons about
the mental state of its master (advertising slogan: "Our Robots Care!").
For reasons best known to its manufacturer, this particular model registers
trouble whenever the psychology module infers that the master does not
believe the robot is conscious.  One slow day the reflective process stirs,
and notes a major trouble report of this kind. It runs the human interaction
problem solver to find an ameliorating strategy.  This produces a plan to
initiate a pleading conversation with Roger, with nonverbal cues.  So the
robot trundles up, stares with its big brown eyes, cocks its head, and
begins to speak.  To protect its reputation, the manufacturer has arranged
it so the robot cannot knowingly tell a lie.  Every statement destined for
the speech generator is first interpreted and tested by the reflective
module.  If the robot wishes to say "The window is open", the reflective
process checks its map to see if the window is indeed labeled "open".  If
the information is missing, the process invokes a problem solver, which may
produce a sensor strategy that will appropriately update the map.  Only if
the statement is so verified does the reflective process allow it to be
spoken.  Otherwise the generating module is itself flagged as troublesome,
in a complication that doesn't concern this argument.  The solver has
generated "Please, Roger, it bothers me that you don't think of me as a real
person".   The reflective process parses this, and notes, in the map's
schematic model of the robot's internals,  that the trouble report from the
psychology module was generated because of the master's (inferred)
disbelief.  So the statement is true, and thus spoken.  "What can I do to
convince you?"-like invoking problem solvers, asking questions sometimes
produces solutions, so no lie here.  "I am aware of you, and I am aware of
myself."-the reflective process refers to its map, and indeed finds a
representation of Roger there, and of the robot itself, derived from sensor
data, so this statement is true.  "And I tell you, your rejection is almost
unbearable"-trouble reports carry intensity numbers, and because of the
manufacturer's peculiar priorities, the "unconscious robot" condition
generates ever bigger intensities.  Trouble of too high an intensity
triggers a safety circuit that shuts down the robot.  The reflective process
tests the trouble against the safety limit, and indeed finds that it is
close, so this statement also is true.  [In case you feel this scenario is
far-fetched, I am enclosing a recent paper by Steven Vere and Timothy
Bickmore of the Lockheed AI center in Palo Alto that describes a working
program with its basic elements. They avoid the difficult parts of the robot
by working in a simulated world, but their program has a reflective module,
and acts and speaks with consciousness of its actions.]
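
	The truthfulness gate in this story is likewise a small piece of
machinery.  A sketch, with the map reduced to a dictionary of labels and
sense() a hypothetical sensor strategy:

    def speak_if_true(subject, attribute, world_map, sense, say):
        """Pass a statement to the speech generator only after it is
        verified against the map, invoking a sensor strategy when the
        needed label is missing."""
        if subject not in world_map:
            sense(subject, world_map)      # e.g. look at the window and
                                           # update the map in place
        if world_map.get(subject) == attribute:
            say(f"The {subject} is {attribute}")   # verified: speak it
            return True
        return False                       # otherwise flag the generating
                                           # module as troublesome instead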

	Human (and even canine) consciousness undeniably has subtleties not
found in the above story.  So will future robots.  But some animals
(including most of our ancestors) get by with less.  A famous example is the
Sphex wasp, which paralyzes caterpillars and deposits them in an underground
hatching burrow.  Normally she digs a burrow, seals the entrance, and leaves
to find a caterpillar.  Returning, she drops the victim, reopens the
entrance, then turns to drag in the prey.  But if an experimenter interrupts
by moving the caterpillar a short distance away while the wasp is busy at
the opening, she repeats the motions of opening the (already open) burrow,
after shifting the prey back.  If the experimenter again intervenes, she
repeats again, and again and again, until either the wasp or the
experimenter drops from exhaustion. Apparently Sphex has no reflective
module to detect the cycle. It's not a problem in her simple, stereotyped
life, malicious experimenters being rare.  But in more complex niches,
opportunities for potentially fatal loops must be more frequent and
unpredictable.  The evolution of consciousness may have started with a
"watchdog" circuit guarding against this hazard.

	I like thinking about the universe's exotic possibilities, for
instance about computers that use quantum superposition to do parallel
computations.  But even with the additional element of time travel (!), I've
never encountered a scheme that gives more than an exponential speedup,
which would have tremendous practical consequences, but little effect on
computability theorems. Or perhaps the universe is like the random axiomatic
system extender described above.  When a measurement is made and a wave
function collapses, an alternative has been chosen.  Perhaps this
constitutes an axiomatic extension of the universe -- today's rules were made
by past measurements, while today's measurements, consistent with the old
rules, add to them, producing a richer set for the future.

	But robot construction does not demand deep thought about such
interesting questions, because the requisite answers already exist in us.
Rather than being something entirely new, intelligent robots will be
ourselves in new clothing.  It took a billion years to invent the concept of
a body, of seeing, moving and thinking.  Perhaps fundamentals like space
and time took even longer to form.  But while it may be hard to
construct the arrow of perceived time from first principles, it is easy to
build a thermostat that responds to past temperatures, and affects those of
the future.  Somehow, without great thought on our part, the secret of time
is passed on to the device.  Robots began to see, move and think almost from
the moment of their creation.  They inherited that from us.

	In the nineteenth century the most powerful arithmetic engines were
in the brains of human calculating prodigies, typically able to multiply two
10 digit numbers in under a minute.  Calculating machinery surpassed them by
1930.  Chess is a richer arena, involving patterns and strategy more in tune
with our animal skills.  In 1970 the best chess computer played at an
amateur level, corresponding to a US chess federation rating of about 1500.
By 1980 there was a machine playing at a 1900 rating, Expert level.  In
1985, a machine (HiTech) at my own university had achieved a Master level of
2300.  Last year a different machine from here (Deep Thought) achieved
Grandmaster status with a rating of 2500.  There are only about 100 human
players in the world better -- Garry Kasparov, the world champion, is rated
between 2800 and 2900.  In the past, each doubling of chess computer speed raised
the quality of its play by about 100 rating points.  The Deep Thought team
has been adopted by IBM and is constructing a machine on the same
principles, but 1000 times as fast.  Though Kasparov doubted it on the
occasion of defeating Deep Thought in two games last year, his days of
absolute superiority are numbered.  I estimated in my book that the most
developed parts of human mentality -- perception, motor control and the
common sense reasoning processes -- will be matched by machines in no less
than 40 years.  But many of the skills employed by mathematics professors
are more like chess than like common sense.  Already I find half of my
mathematics not in my head but in the steadily improving Macsyma and
Mathematica symbolic mathematics programs that I've used almost daily for 15
years.  Sophomoric arguments about the indefinite superiority of man over
machine are unlikely to change this trend.
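
	The arithmetic behind that expectation is short.  If the
100-points-per-doubling trend were to hold (an extrapolation, not a
certainty), a thousandfold speedup would work out as follows:

    import math

    rating_1989 = 2500            # Deep Thought, Grandmaster level
    speedup = 1000                # planned IBM successor machine
    points_per_doubling = 100     # historical trend noted above

    doublings = math.log2(speedup)                       # about 9.97
    projected = rating_1989 + points_per_doubling * doublings
    print(round(projected))       # about 3497 -- far above Kasparov's
                                  # 2800 to 2900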

	Well, thank you for a stimulating book.  As I said in the
introduction, I enjoyed every part of it, and its totality compelled me to
put into these words ideas that might otherwise have been lost.

	Very Best Wishes,
				Hans Moravec
				Robotics Institute, Carnegie Mellon University, 
				Pittsburgh, PA  15213 USA
				Arpanet: hpm@rover.ri.cmu.edu
				Fax:  (412) 682-1793
				Telephone:  (412) 268-3829