[sci.nanotech] "The Emperor's New Mind," a book review

merkle@parc.xerox.com (01/05/90)

Roger Penrose's recent book, "The Emperor's New Mind" (Oxford University
Press), discusses the age-old questions of mind, thought, consciousness,
etc.  It's selling quite well: most of the local bookstores that I called
were sold out.  Penrose displays a good grasp of physics; his grasp of
computer science, the nature of algorithms, and complexity theory is quite
poor.  This weakness fatally cripples his central ideas about the mind and
consciousness.  He is at least intellectually honest: he clearly and
bluntly acknowledges some weaknesses in his position that others might
gloss over, and most of his unsupported speculations are clearly labeled as
such.

As he says in his introduction, "I should make clear that my point of view
is an unconventional one among physicists and is consequently one which is
unlikely to be adopted, at present, by computer scientists or
physiologists.  Most physicists would claim that the fundamental laws
operative at the scale of a human brain are indeed all perfectly well
known."  "Nevertheless, I shall argue that there is another vast unknown in
our physical understanding ... as could indeed be relevant to the operation
of human thought and consciousness..."  He tries to persuade.  He fails.
Worse, his speculations are uninteresting (except, perhaps, to True
Believers who are determined to "prove" that humans are superior to "mere
computers.")

His position is perhaps best summarized by his own conclusion:

"...science seems to have driven us to accept that we are all merely small
parts of a world governed in full detail (even if perhaps ultimately just
probabilistically) by very precise mathematical laws.  Our brains
themselves, which seem to control all our actions are also ruled by these
same precise laws.  The picture has emerged that all this precise physical
activity is, in effect, nothing more than the acting out of some vast
(perhaps probabilistic) computation -- and, hence our brains and our minds
are to be understood solely in terms of such computations.  Perhaps when
computations become extraordinarily complicated they can begin to take on
the more poetic or subjective qualities that we associate with the term
'mind'.  Yet it is hard to avoid an uncomfortable feeling that there must
always be something missing from such a picture."

Ultimately, it is this "uncomfortable feeling" that drives Penrose, yet he
fails utterly to support his position with anything more substantive than
repeated denigrations of software, algorithms, and formal systems.  They
are "mere", "nothing more than", etc.

Unlike some others, Penrose acknowledges up front that the first victim of
his "uncomfortable feeling" will be quantum mechanics.  (In the following,
I use capital letters for words that were italicized in the original).

"There is, in fact, at least one clear place where action at the single
quantum level can have importance for neural activity, and this is in the
RETINA."  "...  cells with single-photon sensitivity do appear to be
present in the human retina."

"Since there ARE neurons in the human body that can be triggered by single
quantum events, is it not reasonable to ask whether cells of this kind
might be found somewhere in the main part of the human brain?  As far as I
am aware, there is no evidence for this.  The types of cell that have been
examined all require a threshold to be reached, and a very large number of
quanta needed in order that the cell will fire.  One might speculate,
however, that somewhere deep in the brain, cells are to be found of single
quantum sensitivity.  If this proves to be the case, then quantum mechanics
will be significantly involved in brain activity."

"Even this does not yet look very USEFULLY quantum-mechanical, since the
quantum is being used merely as a means of triggering a signal.  No
characteristic quantum interference effects have been obtained.  It seems
that, at best, all that we shall get from this will be an uncertainty as to
whether or not a neuron will fire, and it is hard to see how this will be
of much use to us."

Penrose can speculate that neurons sensitive to single quantum events exist
deep within the brain, but there is no evidence to support this speculation
despite much research on the behavior of neurons.  So why bother?

Even if this speculation is true, it can still be modeled by a sufficiently
powerful computer.  Recognizing this, Penrose moves into high gear with a
speculative re-write of quantum mechanics to explain what hasn't been
observed.  Such speculations, unsupported by any evidence, can only elicit
yawns.  Before we care about his speculations, we must have at least some
evidence that current theories are wrong.  Penrose offers nothing except
his own discomfort -- a discomfort not shared by the vast majority of
neuroscientists, computer scientists, biochemists, etc.  Mercifully, he
labels this set of speculations as such.

Throughout his discussions, there is an undercurrent of misunderstood
computer science.  In one section, he tries to explain how profound and
important it is that the brain, unlike a computer, can re-work its own
wiring diagram.  The synaptic connections can be changed.  Unfortunately,
modeling a dynamically changing circuit on a computer with fixed circuitry
is a trivial exercise.  Any circuit simulation program shows how to do it.
But after all, isn't this wonderfully complicated?  And the brain is
influenced by neurochemicals, too!  "The whole question of neurochemistry
is complicated, and it is difficult to see how to provide a reliable
detailed computer simulation of everything that might be relevant."
Difficult.  An interesting weasel word, subject to a few different
interpretations.  Going to the moon is difficult.  This is not to say it is
impossible or even improbable.  Finding the computer program that is
equivalent to a particular human being is difficult, even if you think such
a program exists.  Does the difficulty of finding the program suggest that
it does not exist?  No.
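The point is easy to make concrete.  A minimal sketch (the class and
method names are my own invention, not anything from the book): the
"wiring diagram" is just mutable data, so a computer with fixed circuitry
"rewires" the simulated circuit by editing that data.

```python
# A minimal sketch, with invented names: the "wiring diagram" of a
# circuit stored as plain mutable data on fixed hardware.

class Network:
    def __init__(self):
        # connection strengths keyed by (source, target) node names
        self.weights = {}

    def rewire(self, src, dst, weight):
        # Changing a "synaptic connection" is just a dictionary update;
        # no physical circuitry changes.
        self.weights[(src, dst)] = weight

    def fire(self, src, inputs):
        # Propagate a weighted signal from src to every node it feeds.
        out = {}
        for (s, d), w in self.weights.items():
            if s == src:
                out[d] = out.get(d, 0.0) + w * inputs.get(s, 0.0)
        return out

net = Network()
net.rewire("a", "b", 0.5)
net.rewire("a", "c", 2.0)
print(net.fire("a", {"a": 1.0}))   # {'b': 0.5, 'c': 2.0}
net.rewire("a", "b", -1.0)         # the circuit "rewires itself"
print(net.fire("a", {"a": 1.0}))   # {'b': -1.0, 'c': 2.0}
```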

Of course, Penrose does recognize one well known fact: "...there is no
difference IN PRINCIPLE between a parallel and a serial computer."

Unfortunately, he doesn't stop at that: "...in my own opinion, parallel
classical computation is very unlikely to hold the key to what is going on
with our CONSCIOUS thinking.  A characteristic feature of conscious thought
... is its 'oneness' -- as opposed to a great many independent activities
going on at once."  Penrose goes on to explain that parallel computation
can't provide consciousness because parallel computers can't provide
'oneness'.  Penrose, whose brain is composed of between 100 billion and a
trillion nerve cells, is arguing with a straight face that parallel
computations cannot be 'one'.
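Penrose's own concession settles the technical question, and the
simulation is routine.  A hedged sketch (the update rule is an arbitrary
toy of mine): a serial loop reproduces a simultaneous parallel update
exactly by reading only the old state while writing the new one.

```python
# Sketch: one "parallel" step -- every cell updates simultaneously --
# computed on a strictly serial machine.  Reading only the old list
# while building the new one gives exactly the simultaneous result.

def parallel_step(state):
    # toy update rule: each cell becomes the sum of its two
    # neighbours, with wrap-around at the ends
    n = len(state)
    return [state[(i - 1) % n] + state[(i + 1) % n] for i in range(n)]

print(parallel_step([1, 0, 0, 1]))  # [1, 1, 1, 1]
```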

In one paragraph, Penrose suggests that "consciousness" is needed to arrive
at profound mathematical truths, while the mere act of checking them might
be algorithmic.  To quote Penrose:

"Enough information is in principle available for the relevant judgement to
be made, but the process of formulating the appropriate judgement, by
extracting what is needed from the morass of data, may be something for
which no clear algorithmic process exists -- or even where there is one, it
may not be a practical one.  Perhaps we have a situation where once the
judgement IS made, it may be more of an algorithmic process (or perhaps
just an easier one) to CHECK that the judgement is an accurate one than it
is to form that judgement in the first place.  I am guessing that
consciousness would, under such circumstances, come into its own as a means
of conjuring up the appropriate judgements."

This is a reasonably clear description of the class of problems in NP.  And
it has indeed been said (by more than one person) that should we find an
efficient method of solving NP-complete problems, then AI would become
practical.  However, the idea that "consciousness" arises because humans
have some hidden ability to solve NP-complete problems is (a) utterly
unsupported by any evidence and (b) does not elevate humans above computers
(though it would make them remarkably efficient computers by today's
standards).  Penrose seems merely to have misunderstood what software can
actually accomplish.
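The find-hard/check-easy asymmetry Penrose gestures at is essentially the
definition of NP.  A toy illustration with subset-sum (the function names
and numbers are mine, purely illustrative): checking a proposed answer is
one addition, while the obvious way to find one tries exponentially many
subsets.

```python
# Toy subset-sum example: checking a proposed judgement is one
# addition; forming it may take exponential search.
from itertools import combinations

def check(subset, target):
    # verification: polynomial (here, trivial) work
    return sum(subset) == target

def find(nums, target):
    # discovery: in the worst case, try every one of the 2^n subsets
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

solution = find([3, 34, 4, 12, 5, 2], 9)
print(solution, check(solution, 9))  # [4, 5] True
```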

Again, Penrose argues that natural selection on mere algorithms could not
produce such wondrous beings as us.  After all, computer programs are
complicated, and random changes to computer programs would produce garbage,
not something useful!  Indeed, experiments verify that conventional
low-level representations of algorithms simply don't produce anything
interesting when they are randomly mutated.  On the other hand, random
changes to high-level representations can produce some of the most
fascinating results.  Penrose's argument is not an argument against mere
algorithms, but an argument against human thought processes being
fundamentally based on random changes in something like IBM 360 assembler.
This latter conclusion is indeed plausible, but fails to support any of
Penrose's more sweeping claims.
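The contrast is easy to demonstrate.  In the sketch below (my own toy,
not an experiment from the literature), random mutation of a program's
raw text often yields something that won't even parse, while mutation of
a higher-level representation -- here, just the coefficients of a linear
function -- always yields another valid program.

```python
# My own toy contrast between mutating a low-level and a high-level
# representation of the same trivial program.
import random

random.seed(0)

# Low-level representation: raw program text.  A random character
# substitution often produces something that will not even parse.
source = "def f(x):\n    return x * 2 + 1\n"

def mutate_text(s):
    i = random.randrange(len(s))
    return s[:i] + chr(random.randrange(32, 127)) + s[i + 1:]

broken = 0
for _ in range(100):
    try:
        compile(mutate_text(source), "<mutant>", "exec")
    except SyntaxError:
        broken += 1
print(broken, "of 100 text-level mutations failed to parse")

# High-level representation: the same program as structured data
# (slope and intercept).  Every mutation is another valid program.
genome = [2, 1]

def mutate_genome(g):
    g = list(g)
    g[random.randrange(len(g))] += random.choice([-1, 1])
    return g

child = mutate_genome(genome)
f = lambda x: child[0] * x + child[1]
print(child, f(10))   # always a well-formed function
```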

Finally, Penrose gives the most gloriously loopy misinterpretation of
Godel's unsolvability results to show that human consciousness is superior
to mere machines.  The logic is comparable to the well known claim that the
second law of thermodynamics proves that evolution is impossible.  The
quick summary of Penrose's summary of Godel's argument is the creation of
the following statement within a proof system P:

"If P proves this statement is true, then it is false; if P proves this
statement false, then it is true; if P can neither prove nor disprove this
statement, then it is true."

Of course, it is impossible to assert either the truth or falsity of this
statement within the proof system P, as long as P is consistent.  This, of
course, means the statement must be true (if P is consistent).
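For readers who want the standard form that Penrose's three-clause
statement paraphrases: by the diagonal lemma one constructs a sentence
asserting its own unprovability (a sketch in modern notation; Prov_P is
P's provability predicate):

```latex
% Goedel sentence for a formal system P, via the diagonal lemma:
% G is provably equivalent, within P, to the claim that P cannot prove G.
G \;\leftrightarrow\; \neg\,\mathrm{Prov}_P(\ulcorner G \urcorner)
```

If P is consistent, P proves neither G nor its negation, so G -- which
asserts exactly that unprovability -- is true.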

Penrose correctly realizes that this does not prove that humans are
superior to computers, because (a) we don't know that the proof system is
consistent, and (b) a Turing machine could produce the same result, because
the construction is entirely algorithmic.  Penrose admits as much, but then
proceeds on a new tack.  "A mathematical argument that convinces one
mathematician -- provided that it contains no error -- will also convince
another, as soon as the argument has been fully grasped."  From this Axiom
of Universal Truth among Mathematicians, Penrose then argues that if
mathematicians use an algorithm to determine truth, then all mathematicians
must use the same algorithm because all mathematicians agree on what is
true (I'm not making this up!).  Of course, this is a contradiction because
if a single algorithm were used by all mathematicians then there would be
true propositions that mathematicians could not prove (by Godel's theorem).
Oh, yes, did I mention that all true propositions can be proved by the
collective wisdom of all mathematicians?  He doesn't state this explicitly,
but it's needed to create the contradiction that he wants.

I'm afraid Penrose did not add little warnings before this section that it
was speculative (and wrong).

What is consciousness?  I won't venture a definition, but unless we are
prepared to throw away the known laws of physics (even though there is no
evidence that these laws are incorrect as applied to the human brain) the
behavior of the human brain can be modelled on a sufficiently large
computer.  Is such a model conscious?  Well, it would certainly pass any
objective test that we could possibly devise, by definition.  And even
Penrose agrees that consciousness, whatever it might be, has objective
behavioral correlates.  Which means that such a model would satisfy Penrose
(though he would no doubt grumble about it).

Penrose is claiming that no such computer program exists.  He is claiming
that the presently known laws of physics are fundamentally wrong when
applied to the human brain.  He presents not a shred of evidence in
support of these propositions, and his arguments against the generally
accepted philosophy of the mind are fatally flawed.

At root, Penrose wants the reassurance that humans are inherently superior
to computers.  Thanks to Copernicus, we are no longer at the center of the
universe.  Thanks to Darwin, we are descended from apes.  Now, we must face
the reality that nothing fundamental separates us from computers.

What, then, prevents computers from becoming our superiors and taking over?
If we admit that computers can be conscious, haven't we also admitted our
own doom?  Some quite competent scientists have argued that this is our
fate -- to be replaced by vastly superior beings designed by our own hands.

Yet facing reality is usually safer than hiding from it.  If we pretend
that computers will never equal the wonderful stuff of which our minds are
made, then we are running away from the fundamental problem and increasing
our peril.  The strategy of the ostrich does not appear prudent.

Centuries ago, we built machines that were mechanically more powerful than
humans.  This did not result in the doom of the human species.  Today, we
have the technical ability to destroy the planet.  We haven't done it.  In
the future, we will build computers more intelligent than humans.  Again,
this does not mean the doom of the human species.

It does mean that we'd better think about where we're going and what we
want.  The problem is not that we'll try to build intelligent computers and
fail (though there have been many failed attempts, and no doubt will be
more).  The problem is that one day we're going to succeed.

gerry@uunet.uu.net (01/13/90)

In article <Jan.4.15.04.08.1990.20432@athos.rutgers.edu> merkle@parc.xerox.com writes:
>His position is perhaps best summarized by his own conclusion:

>"...science seems to have driven us to accept that we are all merely small
>parts of a world governed in full detail (even if perhaps ultimately just
>probabilistically) by very precise mathematical laws.  Our brains
>themselves, which seem to control all our actions are also ruled by these
>same precise laws.  The picture has emerged that all this precise physical
>activity is, in effect, nothing more than the acting out of some vast
>(perhaps probabilistic) computation -- and, hence our brains and our minds
>are to be understood solely in terms of such computations.  Perhaps when
>computations become extraordinarily complicated they can begin to take on
>the more poetic or subjective qualities that we associate with the term
>'mind'.  Yet it is hard to avoid an uncomfortable feeling that there must
>always be something missing from such a picture."

I just read _Cosmic_Blueprint_ by Paul Davies (I think that's right, but
I don't have the book here), which covers some of the same area, and
references Penrose in a few places.  From your review of Penrose's book,
Penrose may be more driven by the "uncomfortable feeling" than Davies is,
and Davies is very careful to distinguish between stronger claims that
he can ground and speculations or more mystical interpretations, etc.
He demonstrates in a number of ways why the extreme reductionism (as
in the first quoted sentence above) cannot be correct.

In essence he is taking exception to the idea that "nothing can come
from nothing," which he points out is the basis for a number of
philosophies that must have the universe created in a state of order
or perfection and have it degenerating from there.  Note that this
is at odds with Big Bang theories where the initial universe has a
high degree of symmetry, a property of thermodynamic equilibrium.  He
claims that the emergence of order in successively higher levels of
organization is a second "creative" arrow of time running in the
opposite direction from the thermodynamic one.  Part of this idea is
that each level of order can have its own systemic laws that are
not derivable from the principles of lower levels of order.  In other
words, the other sciences are not reducible to physics even with
exact initial conditions and infinite computer power to simulate it,
or that biology is not enough to predict the behavior of organisms
that have minds.

>"Since there ARE neurons in the human body that can be triggered by single
>quantum events, is it not reasonable to ask whether cells of this kind
>might be found somewhere in the main part of the human brain?  As far as I
>am aware, there is no evidence for this.  The types of cell that have been
>examined all require a threshold to be reached, and a very large number of
>quanta needed in order that the cell will fire.  One might speculate,
>however, that somewhere deep in the brain, cells are to be found of single
>quantum sensitivy.  If this proves to be the case, then quantum mechanics
>will be significantly involved in brain activity."

I don't understand the necessity of linking quantum events to the
higher-level phenomena.  Even very simple chaotic systems have enough sensitivity
to initial conditions to make them essentially unpredictable, so all you
need is some kind of "synchronistic" effects to maintain the unity of
higher level events.
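The sensitivity point can be seen in one line of dynamics.  A sketch with
the logistic map at r = 4 (a standard chaotic toy; the starting values
are arbitrary): two deterministic trajectories beginning one part in a
million apart become effectively unrelated within a few dozen steps.

```python
# Logistic map at r = 4, a standard chaotic toy.  A tiny difference in
# initial conditions is amplified until the trajectories decorrelate.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.400000, 0.400001      # initial gap: 1e-6
gap = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    gap = max(gap, abs(a - b))

print(gap)   # many orders of magnitude larger than the 1e-6 start
```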

>What is consciousness?  I won't venture a definition, but unless we are
>prepared to throw away the known laws of physics (even though there is no
>evidence that these laws are incorrect as applied to the human brain) the
>behavior of the human brain can be modelled on a sufficiently large
>computer.  Is such a model conscious?  Well, it would certainly pass any
>objective test that we could possibly devise, by definition.  And even
>Penrose agrees that consciousness, whatever it might be, has objective
>behavioral correlates.  Which means that such a model would satisfy Penrose
>(though he would no doubt grumble about it).

No, simulation of the brain may be possible, but not on the level of
physics, and probably not on the level of neurons either.

It is interesting to see how this type of discussion is related to the
question of whether computers can think.  The reductionists claim that
everything is physics, and therefore we can simulate it; some philosophers
complain that there is something special about life and/or man's consciousness
that makes this impossible.  Various approaches are used, but there are
always some links in the logic that are questionable.  Davies' claim that
there are "emergent" phenomena leaves open the possibility of new types
of organization, for example a thinking computer, or products of
nano-technology that could truly take on a life of their own.

>Penrose is claiming that no such computer program exists.  He is claiming
>that the presently known laws of physics are fundamentally wrong when
>applied to the human brain.  He presents no single shred of evidence in
>support of these propositions, and his arguments against the generally
>accepted philosophy of the mind are fatally flawed.

This is what I refer to above in "some philosophers complain. . .", and
as I said there are often questionable lines of reasoning.

>At root, Penrose wants the reassurance that humans are inherently superior
>to computers.  Thanks to Copernicus, we are no longer at the center of the
>universe.  Thanks to Darwin, we are descended from apes.  Now, we must face
>the reality that nothing fundamental separates us from computers.

Yes, I think this is another move away from a me-centered or us-centered
cosmology.  But I also suspect this movement is basic to the "maturing
order" of the world.  It can be observed in the maturing of a child, why
not also in the maturing of a culture.


>What, then, prevents computers from becoming our superiors and taking over?
>If we admit that computers can be conscious, haven't we also admitted our
>own doom?  Some quite competent scientists have argued that this is our
>fate -- to be replaced by vastly superior beings designed by our own hands.

I suppose, but it's likely to be a long way off.  I just don't think we
have a very good handle on the nature of the complexity of minds or
living organisms.  It should be close, because we learn more at an
ever-increasing rate, but there is a vast uncharted abyss between here
and there.

And yes, this fear of possible futures can motivate denial.  For me, the
fear is that we won't get wise fast enough to control what we already
know.  Futures like "grey goo", life-less in the sense of uninteresting,
but active enough to take over the environment, or alternatively the
first intelligent machines are built for a "defense net" as in The
Terminator that has no other purpose but to kill humans until they are
gone.  Maybe grey goo is not so bad, more like going back to the single
cell era; potential for "higher" forms to develop again.

>Yet facing reality is usually safer than hiding from it.  If we pretend
>that computers will never equal the wonderful stuff of which our minds are
>made, then we are running away from the fundamental problem and increasing
>our peril.  The strategy of the ostrich does not appear prudent.

Yes, this is also Drexler's position on nano-technology.  Actually more
like, "if we don't do it openly, someone (military?) will do it in
secret, and then where will we be?"

>It does mean that we'd better think about where we're going and what we
>want.

This is true of present technologies, but more so with stuff like AI and
nano-tech.

Gerry Gleason