[sci.nanotech] Review of Penrose's review of Moravec

josh@cs.rutgers.edu (01/23/90)

There is a review in the Feb 1 New York Review of Books by Roger Penrose
of Hans Moravec's Mind Children.  What follows is a review of that
review, and my two cents on the subject of the "AI controversy".

	*	*	*	*	*	*	*

It used to be an item of the conventional wisdom, not so long ago, that
there were tribes of savages living in various unexplored regions, who
took great exception to being photographed.  They believed, or so the
story went, that the camera would capture their souls, and that various
dire consequences would result.

Thus far have we advanced:  Penrose appears to have taken great alarm
over the prospect of the far more sophisticated recording and duplication
process Moravec advocates, precisely because he is afraid that it *won't*
capture his soul.

Penrose claims that there are three cases to consider, first, the
"Strong AI" position with which he labels Moravec; secondly, the
anti-AI position of Searle et al, and finally his own hypothesis that
consciousness consists of the brain's performing some non-computable
calculation with the aid of some unexplained phenomenon of quantum
mechanics.

Penrose takes Moravec to task for failing to consider the question of
AI in his book; Moravec does essentially assume his position and build
from there.  However, it seems a bit thick for Penrose to object, given
the extremely shallow coverage of substantive recent AI work in his own
book, where that was putatively the point at issue.

What is the "Strong AI" position and why should Penrose be worried
about it?  (Moravec apparently isn't worried about it.)  There are
various formulations, and it is usually part of a position that is taken
in objection to expostulations by an AI researcher.  However, this
aspect needn't concern us here, because the distinction implied by the
phrase "Strong AI" is only significant if the AI researchers succeed!

Suppose they do succeed, and write a program that talks and, if in
control of a robot, acts as if it had feelings and intentions and
free will and consciousness.  Strong AI is the contention that it 
then has feelings, intentions, will, and consciousness by definition.
If no such "Turing test" program appears, the question is moot.

Searle illustrates his claim that "Strong AI" is false by the famous
Chinese Room example.  Searle, in a gedanken experiment, is placed
in a sealed room with an instruction book and scratch paper.  Through
a slot come sheets with odd symbols; Searle follows the instructions
through a lengthy and apparently meaningless calculation and issues 
other symbols through the slot.  It turns out that he has been asked 
a question in Chinese and has produced an appropriate answer in the 
same language.  

How does this refute "Strong AI"?  Searle claims (a) he doesn't
understand Chinese, (b) books, scratch paper, and rooms don't 
understand Chinese, and therefore (c) no understanding of Chinese
has taken place.  And, presumably, if a mere symbol manipulation 
system isn't understanding, it surely isn't feeling or intending
or any of those things.

Searle is using a classical technique of argumentation known as
"begging the question".  He is trying to establish that mere symbol
manipulating systems cannot "truly understand" but only appear to.  He
does this by exhibiting such a system that in common experience is not
thought of as capable of understanding, but of course in common
experience is also incapable of the question-answering task he 
ascribes to it.  By this device Searle attempts to induce us to 
assume what he has set out to prove: that the symbol-manipulation
system cannot "truly understand".

What is the difference between a mountain and a molehill?  Is it 
qualitative or merely quantitative?  One can argue that both are 
elevations of the surface of the Earth, varying merely in size;
but one can also argue that mountains can be ascribed properties--
grandeur in the seeing, hazard in the crossing-- that molehills
simply do not have in any degree.  Let us call this kind of 
quantitative difference that makes a qualitative one, a *quantum*
difference.  Let us note that, as in the case of mountains and
molehills, a quantum difference is particularly to be noted where 
the varying property, in this case size, is moved completely 
across the commonly encountered human scale.

There is a quantum difference between the kind of systems we are
used to seeing implemented by people using pencil and paper, and
a symbol manipulation system of the scale necessary to claim
"understanding" of Chinese or any other human tongue.  Using
Moravec's figures, some personal experimentation, and assuming
that only one-tenth of the brain is used in understanding language,
we can estimate that it would take Searle over 60,000 years to
answer a simple question like "Which way to the men's room?"
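
(For anyone who wants to fiddle with the assumptions, here is the
back-of-the-envelope arithmetic as a short Python fragment.  The 10^13
operations per second for the whole brain is Moravec's order-of-magnitude
figure; the single pencil-and-paper step per second and the couple of
seconds of "brain time" per question are illustrative guesses, so take
the result as order-of-magnitude only.)

    # Back-of-envelope version of the 60,000-year figure.  The brain rate
    # is Moravec's order-of-magnitude estimate; the other numbers are
    # illustrative assumptions, not values taken from the book.

    brain_ops_per_sec = 1e13      # Moravec's rough figure for the whole brain
    language_fraction = 0.1       # assume one-tenth of the brain does language
    brain_time_sec    = 2.0       # assumed "brain time" to parse and answer
    hand_ops_per_sec  = 1.0       # Searle with pencil and paper, one step/sec

    ops_needed   = brain_ops_per_sec * language_fraction * brain_time_sec
    searle_sec   = ops_needed / hand_ops_per_sec
    searle_years = searle_sec / 3.15e7          # seconds in a year

    print("%.0f years" % searle_years)          # roughly 63,000 years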

Thus Searle's example, whose intent is to show us like processes
and have us adduce like properties, would have us equate
phenomena whose difference in size is that between a grain of sand
and the sun.

I believe that Penrose himself has gotten hung up on the same 
"quantum" difference, though at a more reasonable level.  The more
you study computers, the more they don't look like minds.  Indeed,
if Moravec is right, a Sun-4 should bear the same resemblance to 
the human mind that a molehill does to Mount Everest.  Penrose appears
unsettled by the prospect of "surrendering our superiority" to more-
than-human robots;  one feels silly bowing to molehills, but there 
is no discredit in being awed by a mountain.

Penrose paints two pictures, one he believes horrific in which 
Searle is correct, but AI succeeds, and we all upload into machines
which are not conscious.  The machines continue to act in grotesque
parody of real people, but true consciousness has died.

He doesn't actually believe this, however, but rather that because
(as he contends in tENM) consciousness depends on non-computational
properties of QM, "computers will never be able to achieve genuine
understanding, insight, or intelligence, [and that] human beings
will [continue to] supply the guidance, the motivation, and the 
'being' of society."

This I can live with.  Searle's position poses no terror for me
as a candidate uploader; it is too obviously the rationalization 
of an anthropocentric epistemology.  Penrose, on the other hand,
has simply posed a testable hypothesis:  if we can build 'em smart,
he's wrong.

--JoSH

alan@oz.nm.paradyne.com (Alan Lovejoy) (01/25/90)

In article <Jan.22.17.03.36.1990.9997@athos.rutgers.edu> josh@cs.rutgers.edu writes:
>Penrose claims that there are three cases to consider, first, the
>"Strong AI" position with which he labels Moravec; secondly, the
>anti-AI position of Searle et al, and finally his own hypothesis that
>consciousness consists of the brain's performing some non-computable
>calculation with the aid of some unexplained phenomenon of quantum
>mechanics.

>Penrose paints two pictures, one he believes horrific in which 
>Searle is correct, but AI succeeds, and we all upload into machines
>which are not conscious.  The machines continue to act in grotesque
>parody of real people, but true consciousness has died.
>
>He doesn't actually believe this, however, but rather that because
>(as he contends in tENM) consciousness depends on non-computational
>properties of QM, "computers will never be able to achieve genuine
>understanding, insight, or intelligence, [and that] human beings
>will [continue to] supply the guidance, the motivation, and the 
>'being' of society."

Why is the assumption made that natural brains either might, or must, have
recourse to some device or physical phenomenon that artificial brains cannot
use?  The fact that the Motorolas of the world have yet to construct an
artificial brain which works like a natural brain does not prove that
natural brains rely on qualitatively different physical principles than
do artificial ones.  And it certainly does not prove that artificial brains
could not be constructed using the same physical phenomena, processes,
principles and devices used by natural brains.  Since natural brains can be
constructed, what prevents the construction of artificial brains using the
same mechanisms employed in the construction of natural ones? What law of
nature permits the design (by evolution) of natural brains but prevents the
design (by application of the scientific method) of artificial brains?

It must be conceded that we don't know, or don't know that we know, all 
the significant facts about how natural brains work.  Our best artificial
brains cannot match many of the capabilities of even rather modest natural
brains.  However, artificial brains can and do match, even overmatch, many
of the capabilities of natural brains.  And every generation of artificial
brains is more capable than the preceding generation.  Also, research is
continuing into the inner workings of natural brains, and it is resulting
in an explosion of new knowledge which is being used to construct
ever-better artificial brains.  So yes, there is still a qualitative
difference between natural brains and artificial brains.  But there is no
basis for therefore concluding that all future attempts to fully emulate
the operation of a natural brain are doomed to failure simply because it
has not been done up to now; and as long as progress continues, *THERE WILL
BE NO SUCH BASIS*.
 
There are many things that we don't know.  But that does not mean that they
are forever unknowable.  There are things "known" to be unknowable (although
that is subject to change), but there is no indication (let alone proof) 
that the mechanisms of consciousness are such.

Some people seem to have a deep emotional need to believe that their brain
is a magical instrument which cannot be duplicated by human engineers.  This
need drives them to conjure up theories and hunt for facts in support of
their belief, instead of letting experimental evidence and common sense
lead them to correct conclusions.  They already know what they believe;
their understanding of reality is filtered through their bias.

The real fear, which is just badly stated, is that it will be possible
to construct artificial brains which emulate the observed behavior of natural 
brains without bestowing true consciousness on the artificial brains.  Since
our only test for consciousness is observable behavior (unless the Japanese
or the University of Utah discover telepathy :-) :-)), we could easily fail 
to detect this crucial deficiency.  It is NOT clear that identical behavior
proves identical internal phenomena.  Whether it is possible at all to bestow 
consciousness on artificial brains is really a separate issue.

>This I can live with.  Searle's position poses no terror for me
>as a candidate uploader; it is too obviously the rationalization 
>of an anthropocentric epistemology.  Penrose, on the other hand,
>has simply posed a testable hypothesis:  if we can build 'em smart,
>he's wrong.

Behavior is testable.  But is consciousness?  We can't even DEFINE it, let
alone test for it.  It reminds one of the old saw about intelligence tests:
intelligence is whatever intelligence tests measure.  Ah, but what do
intelligence tests *REALLY* measure?!

But then, the same issue could be raised in other contexts.  How do I know
that all of you are conscious? I *KNOW* I'm conscious; but lacking telepathy,
the only way I can test *YOU* for consciousness is to observe your behavior.
Or is it?  I can also note the fact that you and I are *VERY* similar.
Similarly, artificial brains which behave like natural ones AND WHICH OPERATE
ON THE SAME PRINCIPLES, USING THE SAME MECHANISMS, STRUCTURES AND PROCESSES
as natural brains, almost certainly produce the same internal phenomena, such
as consciousness.


____"Congress shall have the power to prohibit speech offensive to Congress"____
Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.
Disclaimer: I do not speak for AT&T Paradyne.  They do not speak for me. 
Mottos:  << Many are cold, but few are frozen. >>     << Frigido, ergo sum. >>

[(a) Penrose does not claim that it would be impossible to build 
     artificial minds using his unknown phenomenon.  Indeed he explicitly
     allows that that might happen when and if the phenomenon were
     actually discovered.  He simply believes that such a difference
     does exist between the kind of computers we have now and ones
     that would be capable of consciousness.
 (b) I think the remarks about some people's need to believe that
     the brain is magical therefore apply more to Searle than to 
     Penrose.  However, there definitely are such people.
 (c) As regards testability, remember Penrose said "computers will 
     never be able to achieve genuine understanding, insight, or 
     intelligence...".  That is subject to objective determination
     no matter whether they achieve consciousness or not.
 --JoSH]

Daniel.Mocsny@uc.edu (daniel mocsny) (02/08/90)

In article <Jan.24.20.08.56.1990.12247@athos.rutgers.edu> alan@oz.nm.paradyne.com (Alan Lovejoy) writes:
>However, artificial brains can and do match, even overmatch, many
>of the capabilities of natural brains.  And every generation of artificial
>brains is more capable than the preceding generation.  Also, research is
>continuing into the inner workings of natural brains, and it is resulting
>in an explosion of new knowledge which is being used to construct
>ever-better artificial brains.

The first and second sentences above lead me to conclude that by
"artificial brain" you refer to (conventional) digital computers.
The third sentence must refer to the ongoing work in connectionist
AI. While I am partially aware of the work of Grossberg, Carver Mead,
Hopfield, et al., where has any insight into the structure of the
human brain fed back into conventional microprocessor design? My 
impression was that the theory of digital computation has proceeded
quite independently of any knowledge of the human brain, and that
most connectionism is an application of digital computers, not a
method for designing better ones.

I expect that to change, especially with the hardware-based work of
people like Mead. A near-term application might be to interface
hardware neural networks as real-world data filters and classifiers
for conventional computers. For example:

Most of the work people do to support digital computers appears
to revolve around organizing sloppy, noisy data into the neat,
orthogonal forms that digital computers can handle. For example,
consider the general problem of technical publishing. A scientist
can rapidly fill notepads with rough sketches, scribbled equations,
etc. But translating the information in the scribbles into, say,
LaTeX commands is a tedious and slow process. WYSIWYG editors can
help, but they merely shift the problem rather than solve it---i.e.,
they still require humans to translate their natural expression into
some rather stilted and artificial sequence of logical operations.
Now imagine a neural network that implemented the mapping between my
scribblings and syntactically valid LaTeX codes. That would put me
one step closer to being able to work as fast as I can think...
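
(To make the idea concrete, here is a toy sketch in Python of the kind of
mapping I mean: a small feedforward net that classifies a feature vector
extracted from a pen stroke into one of a handful of LaTeX tokens.  The
feature size, the token list, and the training data are all invented for
illustration; a real system would need real stroke data and a far larger
symbol set.)

    # Toy sketch: map stroke-feature vectors to LaTeX tokens with a tiny
    # feedforward net.  All dimensions, tokens, and data are made up.
    import numpy as np

    rng = np.random.default_rng(0)

    TOKENS = [r"\alpha", r"\beta", r"\sum", r"\int"]   # hypothetical symbols
    N_FEATURES = 16                                    # hypothetical stroke features

    # Fake training set: random feature vectors with labels derived from
    # the features, just so there is something learnable.
    X = rng.normal(size=(200, N_FEATURES))
    y = X[:, :len(TOKENS)].argmax(axis=1)

    # One hidden layer, trained by plain gradient descent on a softmax loss.
    W1 = rng.normal(scale=0.1, size=(N_FEATURES, 32))
    b1 = np.zeros(32)
    W2 = rng.normal(scale=0.1, size=(32, len(TOKENS)))
    b2 = np.zeros(len(TOKENS))

    def forward(x):
        h = np.tanh(x @ W1 + b1)
        logits = h @ W2 + b2
        p = np.exp(logits - logits.max(axis=-1, keepdims=True))
        return h, p / p.sum(axis=-1, keepdims=True)

    lr = 0.1
    for _ in range(200):
        h, p = forward(X)
        grad = p.copy()
        grad[np.arange(len(y)), y] -= 1        # d(loss)/d(logits) for softmax
        grad /= len(y)
        dW2 = h.T @ grad
        db2 = grad.sum(axis=0)
        dh  = grad @ W2.T * (1 - h**2)         # backprop through tanh
        dW1 = X.T @ dh
        db1 = dh.sum(axis=0)
        W1 -= lr * dW1
        b1 -= lr * db1
        W2 -= lr * dW2
        b2 -= lr * db2

    _, p = forward(X[:1])
    print("predicted token:", TOKENS[int(p.argmax())])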

Well---I'm on the verge of digressing horribly, so I'll cut it here.
But I'm still curious---have digital computers gotten anything from
brain studies yet, or was the above passage merely a bit of literary
license?

Dan Mocsny
dmocsny@uceng.uc.edu

[I think the term "artificial brain" has to be understood as including
 both hardware and software.  No one on either side of the question 
 claims that a sufficiently powerful processor, new from the factory
 without any program at all, will be an "artificial mind".  If your
 position is "well of course they could simply simulate the brain 
 on a computer, but that would be cheating", you're actually well
 on the AI side of the debate.
 --JoSH]