[comp.ai] Chinese Rooms

djones@megatest.UUCP (Dave Jones) (07/25/90)

Hi.

I just now peeked into this group for the first time. Coincidentally,
only last night I skimmed Penrose's new book, "The Emperor's New Mind,"
in which he trots out the Chinese Room early on. Lo and behold if
a Chinese Room thread isn't raging here now. (Did the Penrose book
start it?) 

I know almost nothing about AI, and as I said, I've only skimmed the book,
but on first reading, it looks to my naive eyes like a load of pure-dee
crap. I'll explain how I got that opinion. But first, a short cut: If you
want to skip to the Chinese Room, look for the three splats, "***".

Still with me, eh? Okay.

To be fair, I should not review the book, having not read it all. But then
I don't think I'm ever going to read it all, so sue me. I never claimed
that I'm always fair.

To be even fairer, I should say that it has some interesting stuff
in it. (Penrose can probably explain quantum weirdness to us mere
laymen better than most lesser genius/physicists can.) But I kept asking
myself, "Why is he telling me all this?"   You see, early
on he promises to clue us in on his startling insight that AI is
a load of shit, and that computers are fundamentally incapable of
doing the sort of thing that a mind does. Great. Tell all. I'm listening.
But then he goes into long discussions of all the trendy techno-topics:
complexity, undecidability, recursion, Godel, Escher, Bach, black holes,
Mandelbrot sets, the two-slit experiment, the Turing test, you know ...
that lot. One of the first chapters comprises lengthy, rambling, how-to
lessons on the programming of Turing machines. It is complete with example
programs, including a universal Turing machine coded as a single decimal
number that covers a page! (I trusted him on that one. I've never done a
code-walkthrough of a Turing machine program, and I'm not about to
start now.)
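
(For flavor, here's about what those lessons boil down to: a table of
rules and a loop that applies them. The toy machine below, a unary
incrementer, is mine and not one of Penrose's, and the C encoding is
just whatever was handy.)

/* A bare-bones Turing machine stepper.  The machine appends a 1 to
 * a string of 1s: unary increment.  My toy, not Penrose's.
 */
#include <stdio.h>
#include <string.h>

#define TAPE_LEN 32
#define HALT     -1

struct rule { int next; char write; int move; };  /* move: -1 left, +1 right */

/* rules[state][symbol]: symbol 0 = '0' (blank), 1 = '1' */
struct rule rules[2][2] = {
    /* state 0: scan right over the 1s, write a 1 on the first blank */
    { { 1, '1', +1 },     /* saw '0': write 1, move right, go to state 1 */
      { 0, '1', +1 } },   /* saw '1': keep moving right */
    /* state 1: halt on whatever we see */
    { { HALT, '0', 0 },
      { HALT, '1', 0 } }
};

int main(void)
{
    char tape[TAPE_LEN];
    int head = 0, state = 0;

    memset(tape, '0', TAPE_LEN);
    memcpy(tape, "111", 3);            /* input: the number 3, in unary */

    while (state != HALT && head >= 0 && head < TAPE_LEN) {
        struct rule *r = &rules[state][tape[head] - '0'];
        tape[head] = r->write;
        head += r->move;
        state = r->next;
    }
    printf("%.*s\n", TAPE_LEN, tape);  /* 1111 then blanks: 4 in unary */
    return 0;
}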

As I skimmed each chapter, I kept thinking, if he really ties this all
together to reach his conclusion, I'll be astounded. A few hours into the
reading I was becoming more and more annoyed. Except for the Chinese
Room argument early on, none of it seemed like it could contribute even
remotely to the promised debunking. What the hell was he even *talking*
about? He kept arguing about "consciousness" as though the word meant
something -- (he rebuked straw men for their poor definitions of the word)
-- but if he offered any hint of a definition himself, I missed it. And
why was he so coy about revealing the gist of his argument?

Finally, I skipped to the end, exactly as I do when I get impatient with a
murder mystery because I no longer trust the author to keep all the clues
consistent with the intended solution.  Well, as nearly as I could tell,
it turns out that his argument hinges on unpredictable quantum effects
influencing brain-function. Apparently he never heard of soft memory errors.
Besides, if you wanted to make a computer more mind-like in that particular
way, you could just hook it up to a Geiger counter and program it to
randomize itself. La di da. Thanks for nuttin'.
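
If you doubt the recipe, here's a sketch. Everything about the hookup
is invented (the device name especially): read click bytes off a
serial port, fold them into a seed, and voila, your computer is as
quantum-unpredictable as anybody's neurons.

/* Sketch of the Geiger-counter gag.  I'm pretending the counter
 * sits on a serial port and spits one byte per click; "/dev/geiger"
 * is made up.  The point is only that quantum unpredictability is
 * a peripheral, not a miracle.
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

unsigned long quantum_seed(const char *dev)
{
    unsigned char click;
    unsigned long seed = 0;
    int i, fd = open(dev, O_RDONLY);

    if (fd < 0)
        return 12345;           /* no counter? fall back to determinism */

    for (i = 0; i < 32; i++) {  /* fold click bytes into the seed */
        if (read(fd, &click, 1) != 1)
            break;
        seed = (seed << 1) ^ click;
    }
    close(fd);
    return seed;
}

int main(void)
{
    srand((unsigned) quantum_seed("/dev/geiger"));
    printf("unpredictable, the way Penrose likes it: %d\n", rand());
    return 0;
}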

***

Indeed his mistake all along is to consider the I/O of the human as
being integral to the working of the brain (as indeed it is), but
to consider the computer only as a calculating device devoid of I/O
(which it is not). In his rendition of the Chinese Room, the Room is
asked to answer a question about how people react in the presence of
good and bad hamburgers. Penrose then asks if the Room really has a
concept of a hamburger the way a mind does. Of course the answer is,
"probably not."

The difference is due largely to the manner in which hamburger-
data was collected by the mind. The mind employed external sniffers, and
a system of internal rewards and punishments (for identifying and eating
food), to name only two mechanisms. The process of data-collection then
presumably created a model of hamburgerness, and related it to other things
which smell this way or that, and which variously relieve hunger or induce
nausea, etc. Then it associated the hamburger model with appropriate
tendencies to act. Or so I would guess.
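
To show the guess isn't magic, here's a toy version in C. The smell
"features" and all the numbers are invented; the point is just that a
one-line reward rule builds a model out of experience, instead of
having hamburgerness programmed in by hand.

/* A vector of smell "features" (my labels, not biology's), a weight
 * per feature, and a reward-driven update.  Eat something, get
 * rewarded or punished, nudge the weights.  Crude, but it is a
 * model built from experience, not a canned string-matcher.
 */
#include <stdio.h>

#define NFEAT 4   /* grill-smoke, grease, char, rancid */

double w[NFEAT];  /* the "hamburgerness" model: learned, not coded in */

double score(const double smell[NFEAT])
{
    double s = 0.0;
    int i;
    for (i = 0; i < NFEAT; i++)
        s += w[i] * smell[i];
    return s;
}

/* reward > 0: tasty; reward < 0: nausea. */
void learn(const double smell[NFEAT], double reward)
{
    int i;
    for (i = 0; i < NFEAT; i++)
        w[i] += 0.1 * reward * smell[i];
}

int main(void)
{
    double good_burger[NFEAT] = { 0.9, 0.7, 0.5, 0.0 };
    double bad_burger[NFEAT]  = { 0.4, 0.8, 0.2, 0.9 };
    int i;

    for (i = 0; i < 20; i++) {         /* experience, with consequences */
        learn(good_burger, +1.0);
        learn(bad_burger,  -1.0);
    }
    printf("good burger: %.2f   bad burger: %.2f\n",
           score(good_burger), score(bad_burger));
    return 0;
}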

But the way the Chinese Room was set up, there was no model of
hamburgerness along with other potential food models. There
was only a sequential program for recognizing hamburgerness
in an abstract (Chinese) input string. That only reflects Penrose's
(Searle's?) choice of how to program the Room! Computers don't have to
be programmed that way. Computers too can build models.

Dr. Penrose might be interested to hear about a device I programmed
back in 1980. It was a machine that actually had a sniffer! Not
for hamburgers, but for noxious fumes. More precisely,
it had a gas mass spectrometer. The purpose of the device was to blow
a whistle and empty bottles of breathable air when the atmosphere got
too nasty. (It was for use in nuclear power plants. Makes me shudder even
to think about it today!)  The way it was programmed was to let it sniff
various things (experience), while telling it how nasty those things
were (very roughly analogous to reward/punishment). It then formed a
priority scheme for testing for the various gases and reacting
appropriately.
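
The 1980 code is long gone, so what follows is a from-memory sketch
with invented names and numbers, but the scheme was about this
simple: train on (sniff, nastiness) pairs, then test for the gases
in order of nastiness, so the whistle blows soonest for the worst
stuff.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct gas {
    char   name[16];
    double signature;   /* stand-in for a spectrometer peak pattern */
    double nastiness;   /* learned during training */
};

struct gas known[8];
int ngases = 0;

/* training: let it sniff, and tell it how nasty the sniff was */
void train(const char *name, double sig, double nastiness)
{
    struct gas *g = &known[ngases++];
    strcpy(g->name, name);
    g->signature = sig;
    g->nastiness = nastiness;
}

/* sort nastiest-first: that's the priority scheme */
int by_nastiness(const void *a, const void *b)
{
    const struct gas *ga = a, *gb = b;
    return (gb->nastiness > ga->nastiness) - (gb->nastiness < ga->nastiness);
}

void sniff(double reading)
{
    int i;
    for (i = 0; i < ngases; i++)       /* nastiest gets checked first */
        if (reading > known[i].signature) {
            printf("TWEEEET! %s detected: open the air bottles\n",
                   known[i].name);
            return;
        }
    printf("air's fine\n");
}

int main(void)
{
    train("chlorine", 0.6, 9.0);       /* values invented */
    train("ozone",    0.3, 4.0);
    train("coffee",   0.1, 0.5);
    qsort(known, ngases, sizeof known[0], by_nastiness);
    sniff(0.7);                        /* TWEEEET! chlorine */
    sniff(0.05);                       /* air's fine */
    return 0;
}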

True, that machine did not form nearly as elaborate a model of the gases
as we probably do for hamburger smells, but by golly it did form concepts
of gases, and it was not built out of meat.

Furthermore, the computer per se did not have a "direct" concept of smell.
It received the smell info through an RS-232 data link, analogous to the
way the smells from our noses get coded as nerve impulses. If you are not
"in on the joke" regarding smell-encoding, those impulses might as well
be Chinese.
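
Here is roughly what that link traffic looked like, with the framing
invented for the occasion. Without the codebook, the bytes in the
middle mean exactly nothing.

/* Raw "smell" as it arrived over the wire: mass/intensity pairs
 * between STX and ETX.  The frame format is made up here, but the
 * moral is real: gibberish, until you're in on the encoding.
 */
#include <stdio.h>

int main(void)
{
    unsigned char frame[] = { 0x02, 28, 130, 32, 45, 44, 210, 0x03 };
    int i;

    /* skip the STX, stop before the ETX, read pairs */
    for (i = 1; i + 1 < (int) sizeof frame - 1; i += 2)
        printf("mass %u: intensity %u\n", frame[i], frame[i + 1]);
    return 0;
}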