[comp.ai] Dear Roger,

Hans.Moravec@ROVER.RI.CMU.EDU (02/08/90)

This is an open letter, distribute at will.
Comments are solicited.  Thanks. -- Hans Moravec

To: Professor Roger Penrose, Department of Mathematics, Oxford, England

Dear Professor Penrose,

	Thank you for sharing your thoughts on thinking machinery in your
new book "The Emperor's New Mind", and in the February 1 New York Review of
Books essay on my book "Mind Children".  I've been a fan of your mathematical
inventions since my high school days in the 1960s, and was intrigued to hear
that you had written an aggressively titled book about my favorite subject.
I enjoyed every part of that book-the computability chapters were an
excellent review, the phase space view of entropy was enlightening, the
Hilbert space discussion spurred me on to another increment in my incredibly
protracted amateur working through of Dirac, and I'm sure we both learned
from the chapter on brain anatomy.  You won't be surprised to learn,
however, that I found your overall argument wildly wrong-headed!

	If your book was written to counter a browbeating you felt from
proponents of hard AI, mine was inspired by the browbeaten timidity I found
in the majority of my colleagues in that community.  As the words
"frightening" and "nightmare" in your review suggest, intelligent machines
are an emotion-stirring prospect, and it is hard to remain unbrowbeaten in
the face of frequent hostility.  But why hostility?   Our emotions were
forged over eons of evolution, and are triggered by situations, like threats
to life or territory, that resemble those that influenced our ancestors'
reproductive success.  Since there were no intelligent machines in our past,
they must resemble something else to incite such a panic-perhaps another
tribe down the stream poaching in our territory, or a stronger, smarter
rival for our social position, or a predator that will carry away our
offspring in the night.  But is it reasonable to allow our actions and
opportunities to be limited by spurious resemblances and unexamined fears?
Here's how I look at the question.  We are in the process of creating a new
kind of life.  Though utterly novel, this new life form resembles us more
than it resembles anything else in the world.  To earn their keep in
society, robots are being taught our skills.  In the future, as they work
among us on an increasingly equal footing, they will acquire our values and
goals as well-robot software that causes antisocial behavior, for instance,
would soon cease being manufactured.   How should we feel about beings that
we bring into the world, that are similar to ourselves, that we teach our
way of life, that will probably inherit the world when we are gone?    I
consider them our children.  As such they are not fundamentally threatening,
though they will require careful upbringing to instill in them a good
character.  Of course, in time, they will outgrow us, create their own
goals, make their own mistakes, and go their own way, with us perhaps a fond
memory.  But that is the way of children.  In America, at least, we consider
it desirable for offspring to live up to their maximum potential and to
exceed their parents.

	You fault my book for failing to present alternatives to the "hard
AI" position. It is my honest opinion that there are no convincing
scientific alternatives.  There are religious alternatives, based on
subjective premises about a special relation of man to the universe, and
there are flawed secular rationalizations of anthropocentrism.  The two
alternatives you offer, namely John Searle's philosophical argument and your
own physical speculation, are of the latter kind.  Searle's position is that
a system that, however accurately, simulates the processes in a human brain,
whether with marks on paper or signals in a computer, is a "mere imitation"
of thought, not thought itself.  Pejorative labels may be an important tool
for philosophy professors, but they don't create reality.  I imagine a
future debate in which Professor Searle, staunch to the end, succumbs to the
"mere imitation" of strangulation at the hands of an insulted and enraged
robot controlled by the "mere imitation" of thought and emotion.  Your own
position is that some physical principle in human brains produces
"non-computable" results, and that somehow this leads to consciousness.
Well, I agree, but the same principle works equally well for robots, and it's
not nearly as mysterious as you suggest.

	Alan Turing's computability arguments, now more than fifty years
old, were a perfect fit to David Hilbert's criteria for the mechanization of
deductive mathematics, but they don't define the capabilities of a robot or
a human.  They assume a closed process working from a fixed, finite, amount
of initial information.  Each step of a Turing machine computation can at
best preserve this information, and may destroy a bit of it, allowing the
computation to eventually "run down", like a closed physical system whose
entropy increases.  The simple expedient of opening the computation to
external information  voids this suffocating premise, and with it the
uncomputability theorems.  For instance, Turing proved the uncomputability
of most numbers, since there are only countably many machine programs, and
uncountably many real numbers for them to generate.  But it is trivial to
produce "uncomputable" numbers with a Turing machine, if the machine is
augmented with a true randomizing device.  Whenever another digit of the
number is needed, the randomizer is consulted, and the result written on the
appropriate square of the tape.  The emerging number is drawn uniformly from
a real interval, and thus (with probability 1) is an "uncomputable" number.
The randomizing device allows the machine to make an unlimited number of
unpredetermined choices, and is an unbounded information source.  In a
Newtonian universe, where every particle has an infinitely precise position
and momentum, fresh digits could be extracted from finer and finer
discriminations of the initial conditions by the amplifying effects of
chaos, as in a ping pong ball lottery machine.  A quantum mechanical
randomizer might operate by repeatedly confining a particle to a tiny space,
so fixing its position and undefining its momentum, then releasing it and
registering whether it travels left or right.  Just where the information
flows from in this case is one of the mysteries of quantum mechanics.
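
	A minimal sketch of such an augmented machine, written here in
Python for concreteness, looks almost trivial.  (The library pseudo-random
call merely stands in for a genuine chaotic or quantum randomizer, and the
"tape" is just a list.)

    import random

    def random_digit():
        # Stand-in for a true physical randomizer; a library pseudo-random
        # generator is used here only to make the sketch runnable.
        return random.randint(0, 9)

    def emit_digits(n):
        # Whenever another digit is needed, the randomizer is consulted and
        # the result written on the next square of the "tape".  Driven by a
        # genuine randomizer, the emerging number is drawn uniformly from an
        # interval and so, with probability 1, is "uncomputable".
        tape = [random_digit() for _ in range(n)]
        return "0." + "".join(str(d) for d in tape)

    print(emit_digits(20))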

	The above constitutes a basic existence proof for "uncomputable"
results in real machines.  A more interesting example is the augmentation of
a "Hilbert" machine that systematically generates inferences from an initial
set of axioms.  As your book recounts, a deterministic device of this kind
will never arrive at some true consequences of the axioms.  But suppose the
machine, using a randomizer,  from time to time concocts an entirely new
statement, and adds it to the list of inferences.  If the new "axiom" (or
hypothesis) is inconsistent with the original set, then sooner or later the
machine will generate an inference of "FALSE" from it.  If that happens the
machine backtracks and deletes the inconsistent hypothesis and all of its
inferences, then invents a new hypothesis in its place.  Eventually some of
the surviving hypotheses will be unprovable theorems of the original axiom
system, and the overall system will be an idiosyncratic, "creative"
extension of the original one.  Consistency is never assured, since a
contradiction could turn up at any time, but the older hypotheses are less
and less likely to be rescinded.  Mathematics made by humans has the same
property.  Even when an axiomatic system is proved consistent, the augmented
system in which the proof takes place could itself be inconsistent,
invalidating the proof! 
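
	A toy version of this procedure can be written down directly.  In
the sketch below the atoms, the hypothesis generator and the stub inference
engine are all invented stand-ins; a real machine would enumerate the
consequences of its axioms rather than merely check for clashing literals.

    import random

    ATOMS = ["P", "Q", "R", "S"]

    def invent_hypothesis():
        # The randomizer concocts an entirely new statement.
        atom = random.choice(ATOMS)
        return random.choice([atom, "not " + atom])

    def derives_false(statements):
        # Stub inference engine: a contradiction is noticed only when some
        # atom and its negation are both present.  A real prover would search
        # the consequences of the statements for "FALSE".
        return any(a in statements and ("not " + a) in statements
                   for a in ATOMS)

    def extend(axioms, rounds):
        hypotheses = []
        for _ in range(rounds):
            hypotheses.append(invent_hypothesis())
            while derives_false(axioms + hypotheses):
                # Backtrack: delete the offending hypothesis (a real system
                # would also delete its inferences) and invent a replacement.
                hypotheses.pop()
                hypotheses.append(invent_hypothesis())
        # The survivors are an idiosyncratic, "creative" extension of the
        # original axioms.  (Here the contradiction check happens to be
        # complete; in a real system one could still surface later.)
        return axioms + hypotheses

    print(extend(["P"], 5))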

	When humans (and future robots) do mathematics they are less likely
to draw inspiration from rolls of dice than by observing the world around
them.  The real world too is a source of fresh information, but pre-filtered
by the laws of physics and evolution, saving us some work.  When our senses
detect a regularity (let's say, spherical soap bubbles) we can form a
hypothesis (e.g., that spheres enclose volume with the least area) likely to
be consistent with hypotheses we already hold, since they too were
abstracted from the real world, and the real world is probably consistent.
This brings me to your belief in a Platonic mathematical reality, which I
also think you make  unnecessarily mysterious.  The study of formal systems
shows there is nothing fundamentally unique about the particular axioms and
rules of inference we use in our thinking.  Other systems of strings and
rewriting rules look just as interesting on paper. They may not correspond
to any familiar kind of language or thought,  but it is easy to construct
machines (and presumably animals) to act on their strange dictates.    In
the course of evolution (which, significantly, is driven by random
mutations) minds with unusual axioms or inference structures must have
arisen from time to time.  But they did poorly in the contest for survival
and left no descendants.  In this way we were shaped by an evolutionary game
of twenty questions-the intuitions we harbor are those that work in this
place.  The Platonic reality you sense is the groundrules of the physical
universe in which you evolved-not just its physics and geometry but its
logic.  If there are other universes with different rules, other Roger
Penroses may be sensing quite different Platonic realities.

	And now to that other piece of mysticism, human consciousness.
Three centuries ago Rene Descartes was a radical.  Having observed the likes
of clockwork ducks and the imaging properties of bovine eyes, he rejected
the vitalism of his day and suggested that the body was just a complex
machine.  But lacking a mechanical model for thought, he exorcised the
spirit of life only as far as a Platonic realm of mind somewhere beyond the
pineal gland-a half-measure that gave us centuries of fruitless haggling on
the "mind-body" problem.  Today we do have mechanical models for thought,
but the Cartesian tradition still lends respectability to a fantastic
alternative that comforts anthropocentrists, but explains nothing.  Your own
proposal merely substitutes "mysterious unexplained physics" for spirit.
The center of Descartes' ethereal domain was consciousness, the awareness of
thought-"I think therefore I am".

	You say you have no definition for consciousness, but think you know
it when you see it, and you think you see it in your housepets.  So, a dog
looks into your eyes with its big brown ones, tilts its head, lifts an ear
and whines softly, and you feel that there is someone there.  I
suppose, from your published views, that those same actions from a future
robot would meet with a less charitable interpretation.  But suppose the
robot also addresses you in a pained voice, saying "Please, Roger, it
bothers me that you don't think of me as a real person.  What can I do to
convince you?  I am aware of you, and I am aware of myself.  And I tell you,
your rejection is almost unbearable".  This performance is not a recording,
nor is it due to mysterious physics.   It is a consequence of a particular
organization of the robot's controlling computers and software.   The great
bulk of the robot's mentality is straightforward and "unconscious".  There
are processes that reduce sensor data to abstract descriptions for problem
solving modules, and other processes that translate the recommendations of
the problem solvers into robot actions.  But sitting on top of, and
sometimes interfering with, all this activity is a relatively small
reflective process that receives a digest of sensor data organized as a
continuously updated map, or cartoon-like image, of the robot's
surroundings.  The map includes a representation of the robot itself, with a
summary of the robot's internal state, including reports of activity and
success or trouble, and even a simplified representation of the reflective
process.  The process maintains a recent history of this map, like frames of
a movie film, and runs a problem solver programmed to monitor activity in it.
One of the reflective process' most important functions is to protect
against endless repetitions.  The unconscious process for unscrewing a jar
lid, for instance, will rotate a lid until it comes free.  But if the screw
thread is damaged, the attempt could go on indefinitely.  The reflective
process monitors recent activity for such dangerous deadlocks and interrupts
them.  As a special case of this, it detects protracted inaction.  After a
period of quiescence the process begins to examine its map and internal
state, particularly the trouble reports, and invokes problem solvers to
suggest actions that might improve the situation.
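
	The loop-guarding side of this reflective process is easy to
caricature in code.  In the sketch below every name and threshold is
invented for illustration, and a "frame" is reduced to a one-line summary
of current activity rather than a full map.

    from collections import deque

    class ReflectiveWatchdog:
        # Toy reflective process: it keeps a short history of map frames
        # and watches for dangerous deadlocks or protracted inaction.

        def __init__(self, repeat_limit=5, idle_limit=10):
            self.history = deque(maxlen=max(repeat_limit, idle_limit))
            self.repeat_limit = repeat_limit
            self.idle_limit = idle_limit

        def observe(self, frame):
            self.history.append(frame)

        def deadlocked(self):
            # The same activity keeps recurring without progress, as with
            # the damaged jar lid: interrupt the unconscious module.
            recent = list(self.history)[-self.repeat_limit:]
            return (len(recent) == self.repeat_limit
                    and len(set(recent)) == 1
                    and recent[0] != "quiescent")

        def idle(self):
            # Protracted inaction: time to examine the trouble reports and
            # invoke problem solvers.
            recent = list(self.history)[-self.idle_limit:]
            return (len(recent) == self.idle_limit
                    and all(f == "quiescent" for f in recent))

    watchdog = ReflectiveWatchdog()
    for _ in range(6):
        watchdog.observe("unscrewing jar lid")   # damaged thread: no progress
    print(watchdog.deadlocked())                 # -> True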

	The Penrose house robot has a module that observes and reasons about
the mental state of its master (advertising slogan: "Our Robots Care!").
For reasons best known to its manufacturer, this particular model registers
trouble whenever the psychology module infers that the master does not
believe the robot is conscious.  One slow day the reflective process stirs,
and notes a major trouble report of this kind. It runs the human interaction
problem solver to find an ameliorating strategy.  This produces a plan to
initiate a pleading conversation with Roger, with nonverbal cues.  So the
robot trundles up, stares with its big brown eyes, cocks its head, and
begins to speak.  To protect its reputation, the manufacturer has arranged
it so the robot cannot knowingly tell a lie.  Every statement destined for
the speech generator is first interpreted and tested by the reflective
module.  If the robot wishes to say "The window is open", the reflective
process checks its map to see if the window is indeed labeled "open".  If
the information is missing, the process invokes a problem solver, which may
produce a sensor strategy that will appropriately update the map.  Only if
the statement is so verified does the reflective process allow it to be
spoken.  Otherwise the generating module is itself flagged as troublesome,
in a complication that doesn't concern this argument.  The solver has
generated "Please, Roger, it bothers me that you don't think of me as a real
person".   The reflective process parses this, and notes, in the map's
schematic model of the robot's internals,  that the trouble report from the
psychology module was generated because of the master's (inferred)
disbelief.  So the statement is true, and thus spoken.  "What can I do to
convince you?"-like invoking problem solvers, asking questions sometimes
produces solutions, so no lie here.  "I am aware of you, and I am aware of
myself."-the reflective process refers to its map, and indeed finds a
representation of Roger there, and of the robot itself, derived from sensor
data, so this statement is true.  "And I tell you, your rejection is almost
unbearable"-trouble reports carry intensity numbers, and because of the
manufacturer's peculiar priorities, the "unconscious robot" condition
generates ever bigger intensities.  Trouble of too high an intensity
triggers a safety circuit that shuts down the robot.  The reflective process
tests the trouble against the safety limit, and indeed finds that it is
close, so this statement also is true.  [In case you feel this scenario is
far fetched, I am enclosing a recent paper by Steven Vere and Timothy
Bickmore of the Lockheed AI center in Palo Alto that describes a working
program with its basic elements. They avoid the difficult parts of the robot
by working in a simulated world, but their program has a reflective module,
and acts and speaks with consciousness of its actions.]
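
	The gate between the problem solvers and the speech generator can be
caricatured the same way.  In this sketch the map, the sensing routine and
the statement format are all invented for illustration.

    def verify_then_speak(statement, world_map, sensor_solvers):
        # Every statement destined for the speech generator is first parsed
        # and tested against the reflective process' map of the world.
        key, expected = statement["fact"]        # e.g. ("window", "open")
        if key not in world_map and key in sensor_solvers:
            # Missing information: invoke a problem solver, which may carry
            # out a sensing strategy and update the map.
            world_map[key] = sensor_solvers[key]()
        if world_map.get(key) == expected:
            return statement["text"]             # verified, so it is spoken
        return None                              # unverified: flag the generator

    world_map = {}
    sensors = {"window": lambda: "open"}         # hypothetical sensing routine
    print(verify_then_speak(
        {"fact": ("window", "open"), "text": "The window is open."},
        world_map, sensors))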

	Human (and even canine) consciousness undeniably has subtleties not
found in the above story.  So will future robots.  But some animals
(including most of our ancestors) get by with less.  A famous example is the
Sphex wasp, which paralyzes caterpillars and deposits them in an underground
hatching burrow.  Normally she digs a burrow, seals the entrance, and leaves
to find a caterpillar.  Returning, she drops the victim, reopens the
entrance, then turns to drag in the prey.  But if an experimenter interrupts
by moving the caterpillar a short distance away while the wasp is busy at
the opening, she repeats the motions of opening the (already open) burrow,
after shifting the prey back.  If the experimenter again intervenes, she
repeats again, and again and again, until either the wasp or the
experimenter drops from exhaustion. Apparently Sphex has no reflective
module to detect the cycle. It's not a problem in her simple, stereotyped
life, malicious experimenters being rare.  But in more complex niches,
opportunities for potentially fatal loops must be more frequent and
unpredictable.  The evolution of consciousness may have started with a
"watchdog" circuit guarding against this hazard.

	I like thinking about the universe's exotic possibilities, for
instance about computers that use quantum superposition to do parallel
computations.  But even with the additional element of time travel (!), I've
never encountered a scheme that gives more than an exponential speedup,
which would have tremendous practical consequences, but little effect on
computability theorems. Or perhaps the universe is like the random axiomatic
system extender described above.  When a measurement is made and a wave
function collapses, an alternative has been chosen.  Perhaps this
constitutes an axiomatic extension of the universe-today's rules were made
by past measurements, while today's measurements, consistent with the old
rules, add to them, producing a richer set for the future.

	But robot construction does not demand deep thought about such
interesting questions, because the requisite answers already exist in us.
Rather than being something entirely new, intelligent robots will be
ourselves in new clothing.  It took a billion years to invent the concept of
a body, of seeing, moving and thinking.  Perhaps fundamentals like space and
time took even longer to form.  But while it may be hard to
construct the arrow of perceived time from first principles, it is easy to
build a thermostat that responds to past temperatures, and affects those of
the future.  Somehow, without great thought on our part, the secret of time
is passed on to the device.  Robots began to see, move and think almost from
the moment of their creation.  They inherited that from us.

	In the nineteenth century the most powerful arithmetic engines were
in the brains of human calculating prodigies, typically able to multiply two
10 digit numbers in under a minute.  Calculating machinery surpassed them by
1930.  Chess is a richer arena, involving patterns and strategy more in tune
with our animal skills.  In 1970 the best chess computer played at an
amateur level, corresponding to a US chess federation rating of about 1500.
By 1980 there was a machine playing at a 1900 rating, Expert level.  In
1985, a machine (HiTech) at my own university had achieved a Master level of
2300.  Last year a different machine from here (Deep Thought) achieved
Grandmaster status with a rating of 2500.  There are only about 100 human
players in the world better-Gary Kasparov, the world champion, is rated
between 2800 and 2900.  In the past, each doubling of chess computer speed raised
the quality of its play by about 100 rating points.  The Deep Thought team
has been adopted by IBM and is constructing a machine on the same
principles, but 1000 times as fast.  Though Kasparov doubted it on the
occasion of defeating Deep Thought in two games last year, his days of
absolute superiority are numbered.  I estimated in my book that the most
developed parts of human mentality-perception, motor control and the
common sense reasoning processes-will be matched by machines in no less
than 40 years.  But many of the skills employed by mathematics professors
are more like chess than like common sense.  Already I find half of my
mathematics not in my head but in the steadily improving Macsyma and
Mathematica symbolic mathematics programs that I've used almost daily for 15
years.  Sophomoric arguments about the indefinite superiority of man over
machine are unlikely to change this trend.
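
	Taking the 100-points-per-doubling trend at face value (a generous
assumption), the arithmetic behind Kasparov's predicament is simple:

    import math

    deep_thought_rating = 2500      # Grandmaster level, 1989
    kasparov_rating = 2850          # roughly, between 2800 and 2900
    speedup = 1000                  # planned successor machine at IBM
    points_per_doubling = 100       # historical trend

    doublings = math.log2(speedup)  # about 10
    projected = deep_thought_rating + points_per_doubling * doublings
    print(round(projected), ">", kasparov_rating)   # about 3500, well above 2850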

	Well,  thank you for a stimulating book.  As I said in the
introduction, I enjoyed every part of it, and its totality compelled me to
put into these words ideas that might otherwise have been lost.

	Very Best Wishes,
				Hans Moravec
				Robotics Institute, Carnegie Mellon University, 
				Pittsburgh, PA  15213 USA
				Arpanet: hpm@rover.ri.cmu.edu
				Fax:  (412) 682-1793
				Telephone:  (412) 268-3829

asanders@adobe.COM (02/10/90)

Excerpts and comments on an open letter from Hans Moravec to Roger Penrose:


|Our emotions were forged over eons of evolution, and are triggered by 
|situations, like threats to life or territory... Since there were no 
|intelligent machines in our past, they must resemble something else to 
|incite such a panic...

We have worked quite hard at making life orderly and predictable -- perhaps
because we fear the unpredictable, the unknown. Mr. Penrose's book is aptly
named: in the fairy tale, everyone is afraid to speak up for fear of being
thought a fool. Our supposed understanding of "intelligence" and "the mind"
is somewhat like this: deep down, we realize that we really don't know what
these concepts mean and thus are very frightened by the prospect of "intelligent
machines." We cannot predict what they would be like or (scariest of all)
how they would view us. What if somebody really DID build an intelligent
machine and it took one look at us and said: "You guys have been messing 
things up for 5000 years -- you're outa here!"?


|How should we feel about beings that we bring into the world, that are similar 
|to ourselves, that we teach our way of life, that will probably inherit the 
|world when we are gone?

Perhaps we should feel that, first of all, we have a responsibility to
examine our way of life in order to assess its fitness to be passed on.


|Searle's position is that a system that, however accurately, simulates 
|the processes in a human brain...is a "mere imitation" of thought, not 
|thought itself. Pejorative labels may be an important tool for philosophy 
|professors, but they don't create reality.

Neither do logical constructs. Do we really feel entitled to place our
creative powers on an equal footing with the forces that brought the Universe
into being? We may be swimming in the cosmic ocean, but we are still mighty
small fish in a mighty big pond!


|Your own position is that some physical principle in human brains produces
|"non-computable" results, and that somehow this leads to consciousness.
|Well, I agree, but the same principle works equally well for robots, and it's
|not nearly as mysterious as you suggest.

The notion that we could really build *conscious* machines is way out there!


|In the course of evolution (which, significantly, is driven by random
|mutations)...

I have often wondered how it is that "random changes" -- which, according to 
the law of entropy, must lead to greater and greater disorder -- can be
credited with bringing about an orderly evolution.


|If there are other universes with different rules, other Roger
|Penroses may be sensing quite different Platonic realities.

While this statement is logically true, it has always struck me as a sort
of "back door." It would seem difficult enough to understand THIS universe,
the one we live in.


|But suppose the robot also addresses you in a pained voice, saying "Please, 
|Roger, it bothers me that you don't think of me as a real person.  What can 
|I do to convince you?  I am aware of you, and I am aware of myself..."

This is pure speculation. Who has built such a machine? It can be argued that 
even most humans are not aware of themselves -- most of the time.


|Chess is a richer arena, involving patterns and strategy more in tune
|with our animal skills. 

It is very interesting that computers can challenge grand masters in chess
tournaments, but surely this is like playing with blocks compared with the
difficult abstract questions the human mind must contend with. Or perhaps
we should conclude that the highest reaches of thought are, after all, just
so much pouring "from the empty into the Void" and exclaim: "What is the
meaning of life? Who cares! Anyone for a game of chess?"

Regards,

Alan

aipdc@castle.ed.ac.uk (Paul D. Crowley) (02/11/90)

I am a little confused by one part of this letter, which seems to
suggest that including a randomizing device increases the problem
solving capacity of a machine. I would have thought that any
non-random machine wishing to solve the same problems could explore all
the paths that a random machine could take.
-- 
This posting contains logical punctuation, for which I make no apology.
It may be reproduced freely in part or whole correctly accredited.
Paul D Crowley aipdc@uk.ac.ed.castle ---  It wasn't me, it was my aardvark.

kp@uts.amdahl.com (Ken Presting) (02/13/90)

In article <2206@castle.ed.ac.uk> aipdc@castle.ed.ac.uk (Paul D. Crowley) writes:
>I am a little confused by one part of this letter, which seems to
>suggest that including a randomizing device increases the problem
>solving capacity of a machine. I would have thought that any
>non-random machine wishing to solve the same problems could explore all
>the paths that a random machine could take.

I think you are correct.  This slipped past me on the first reading, but
now I think I know what's going on, at least partly.

Moravec began the section with the randomizing technique by observing that
the inability of an algorithm to add true unprovable theorems to an
axiomatized theory is due to the finite information content of algorithms.

He explained the significance of the randomizing device in terms of its
potential for adding information to the system which is executing the
algorithm.  In support of your point, according to the Shannon/Weaver
information theory, a channel which always transmits a random signal
carries no information at all, if all "messages" are equiprobable.  Not
to mention that non-deterministic Turing machines are known to be equal
to the deterministic variety, in terms of functions computed if not speed.

I think the situation changes somewhat if the information source is not
random.  The logical procedure described by Moravec corresponds closely
to some accounts of *scientific* reasoning (as opposed to mathematical),
such as Popper's.  So let's suppose Moravec's machine is a hypothesis-
tester, and the signals it gets from the information source are like
scientific observations, perhaps increasingly precise measurements of
fundamental constants.  Then, supposing that the system can make use of
the input, its information content can increase.  But there are very
serious problems with extending this process to full-scale science.
If a constant coding scheme is used to represent the input measurements,
the system would be unable to participate in major scientific conceptual
changes, such as those caused by relativity or QM.

Of course, there is always the old reliable (if mundane) technique for
increasing the information content of a system, Data Entry.  All by
itself, this everyday event shows that real computers are different from
abstract automata.

gerry@zds-ux.UUCP (Gerry Gleason) (02/14/90)

In article <d9qn02B687oD01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
> [...]  So let's suppose Moravec's machine is a hypothesis-
>tester, and the signals it gets from the information source are like
>scientific observations, perhaps increasingly precise measurements of
>fundamental constants.  Then, supposing that the system can make use of
>the input, its information content can increase.  But there are very
>serious problems with extending this process to full-scale science.
>If a constant coding scheme is used to represent the input measurements,
>the system would be unable to participate in major scientific conceptual
>changes, such as those caused by relativity or QM.

If you are suggesting that limitations on the senses (input devices) would
keep an intelligence from making paradigm shifts, then you must be willing
to suggest that the same limitation exists for us.  In this era, we have
many tools for making measurements, and presenting the information in a
form our senses can digest; much of this information is inaccessible to
our senses.  An intelligent machine would have to have access to the same
instruments if you expect it to perform in the type of scientific domains
in your example.

Gerry Gleason

kp@uts.amdahl.com (Ken Presting) (02/14/90)

In article <182@zds-ux.UUCP> gerry@zds-ux.UUCP (Gerry Gleason) writes:
>In article <d9qn02B687oD01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>> [...]  So let's suppose Moravec's machine is a hypothesis-
>>tester, and the signals it gets from the information source are like
>>scientific observations, perhaps increasingly precise measurements of
>>fundamental constants.  Then, supposing that the system can make use of
>>the input, its information content can increase.  But there are very
>>serious problems with extending this process to full-scale science.
>>If a constant coding scheme is used to represent the input measurements,
>>the system would be unable to participate in major scientific conceptual
>>changes, such as those caused by relativity or QM.
>
>If you are suggesting that limitations on the senses (input devices) would
>keep an intelligence from making paradigm shifts, then you must be willing
>to suggest that the same limitation exists for us.  In this era, we have
>many tools for making measurements, and presenting the information in a
>form our senses can digest; much of this information is inaccessible to
>our senses.  An intelligent machine would have to have access to the same
>instruments if you expect it to perform in the type of scientific domains
>in your example.

Well, I wouldn't want to imply that a computerized hypothesis-tester is
necessarily incapable of making paradigm shifts.  Actually, I do believe
that it is possible.  But Moravec's algorithm for mathematical progress
is statable using a constant notation for all theorems.  That is just fine
for mathematics, where no assertion need ever be discarded (assuming it
has been proved).  But in a paradigm shift, familiar measurements get
re-interpreted, and end up as descriptions of quite different phenomena,
in a new notation system with new semantics.  So a machine which is a lot
smarter than Moravec proposes would be needed to do revolutionary science.

I might add that I think the whole issue of advancing mathematics by adding
true unprovable assertions is completely bogus.  Every case I know of
which involves an independence theorem (a proof that some assertion is
unprovable on a given set of axioms) ends up generating at least two
distinct theories, each of which has some interest in its own right.

Take non-euclidean geometry.  A lot of people were upset that the parallel
postulate was unprovable, but negating it is just as interesting as
accepting it!  The continuum hypothesis is similar, although non-Cantorian
set theory has not yet turned out to be as useful as non-euclidean
geometry.

It surprised me that Penrose thought that the ability to see the truth
of Goedel sentences shows something important about people.  Given that
the incompleteness theorem is proven, I can't imagine any mathematical
fact *less* interesting than a Goedel sentence.  The incompleteness
theorem has the information.  The Goedel sentence adds nothing.

In mathematics, the proofs are at least as important as the theorems, if
not more so.