[comp.ai] No more Chinese rooms, please?

park@usceast.cs.scarolina.edu (Kihong Park) (06/13/90)

I think the following points out the fallacy committed by Searle in a clear
and simple fashion without need for "subjective" discussions:

Searle's claim:

	Let M be a person who speaks only English. Let M reside in a room
which has a Chinese (specialized) keyboard as well as a terminal which can
display Chinese characters. Let M have access to a "rule" book B written in
English which dictates what M is to type on the keyboard (answer) given some
display of Chinese characters (question) on the terminal. Finally, let M
follow the instructions of the book completely, without reference to other
extraneous sources and factors.
	Assuming book B exists based upon which the answers forwarded by M to
questions in compliance with the above scenario pass the Turing test, it can
be said that M with the help of B passes the Turing test without having
understood the content of the discourse at all. 

Argument's fault:

	Searle is right in pointing out that in the above scenario M doesn't
"understand" the content of the questions and answers at all. But this is
precisely the principle by which modern-day computers are built. General-purpose
computers are physical realizations of Universal Turing machines. Universal
Turing machines are devices which can simulate the behavior of other Turing
machines. What a UTM basically does is perform sophisticated "book-keeping"
operations, just as M does above. In doing so, the complexity of a problem
solving procedure is separated from the necessary overhead needed for its
implementation. The former is captured in what we call "programs" whereas the
latter is hardwired in the hardware of the control unit of a "computer".
	But as every Computer Science student should know, programs can
always be hardwired into the circuitry of a semiconductor device. From a
theoretical point of view, there is no essential distinction between software
and hardware with respect to the final capability of the end product. This
is precisely where Searle makes his mistake.
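
To make this concrete, here is a toy sketch (written here in Python, with two
made-up entries standing in for a real rule book - nothing remotely like a
Turing-test-passing program): the clerk M is a blind lookup procedure over the
book B, and the very same input/output behavior can be obtained with B
"hardwired" into the control flow.

# Toy illustration only: B is pure data, M is a lookup procedure that
# shuffles symbols it does not understand.
RULE_BOOK_B = {
    "question-1": "answer-1",   # stand-ins for Chinese question/answer pairs
    "question-2": "answer-2",
}

def clerk_M(question):
    """Follow the book blindly; no understanding is involved."""
    return RULE_BOOK_B.get(question, "default-answer")

# The same behavior with B "hardwired" into the control flow:
def hardwired_T(question):
    if question == "question-1":
        return "answer-1"
    if question == "question-2":
        return "answer-2"
    return "default-answer"

# clerk_M (control unit + program) and hardwired_T (circuitry) compute the
# same function, so nothing theoretical hangs on the software/hardware split.
assert all(clerk_M(q) == hardwired_T(q) for q in RULE_BOOK_B)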

It's surprising that Searle should have aroused such controversy over a
simple mistake, or lack of understanding of a basic result dating back to
1936. If we believe that continuity and nondeterminism are nonessential aspects
in the design of intelligent systems, then, yes, intelligent computers can
in principle be built. The question is how to build one. If some people
advocate that continuity and nondeterminism are absolutely essential
properties of an intelligent system, well, then, it's a no-win situation.
But that's not the topic of contention in the Chinese room argument.

frank@bruce.cs.monash.OZ.AU (Frank Breen) (06/13/90)

From article <3285@usceast.UUCP>, by park@usceast.cs.scarolina.edu (Kihong Park):
< I think the following points out the fallacy committed by Searle in a clear
< and simple fashion without need for "subjective" discussions:
< 
< Searle's claim:
< 
< 	Let M be a person who speaks only English. Let M reside in a room
< which has a Chinese (specialized) keyboard as well as a terminal which can
< display Chinese characters. Let M have access to a "rule" book B written in
< English which dictates what M is to type on the keyboard (answer) given some
< display of Chinese characters (question) on the terminal. Finally, let M
< follow the instructions of the book completely, without reference to other
< extraneous sources and factors.
< 	Assuming book B exists based upon which the answers forwarded by M to
< questions in compliance with the above scenario pass the Turing test, it can
< be said that M with the help of B passes the Turing test without having
< understood the content of the discourse at all. 
< 
< Argument's fault:
< 
Here's what I think is wrong with Searle's argument.

Of course M doesn't understand Chinese any more than someone with half
the speech centre of their brain missing would understand English.
M is only a small part of the system - all the knowledge is stored in
B and together M+B does understand Chinese.  To put it another way
- no single neuron in your brain understands anything - it's only
when you put them all together that there is any understanding,
and likewise no single part of the Chinese room understands Chinese -
it's only when you put it all together that it understands anything.

It seems kind of obvious to me (now that I've thought of it) but
I only caught the end of the argument so I may be missing something.
Tell me if I'm right.

Frank Breen

park@usceast.UUCP (Kihong Park) (06/14/90)

In article <2410@bruce.cs.monash.OZ.AU> frank@bruce.cs.monash.OZ.AU (Frank Breen) writes:
>Here's what I think is wrong with Searle's argument.
>
>Of course M doesn't understand Chinese any more than someone with half
>the speech centre of their brain missing would understand English.
>M is only a small part of the system - all the knowledge is stored in
>B and together M+B does understand Chinese.  To put it another way
>- no single neuron in your brain understands anything - it's only
>when you put them all together that there is any understanding,
>and likewise no single part of the Chinese room understands Chinese -
>it's only when you put it all together that it understands anything.
>
>It seems kind of obvious to me (now that I've thought of it) but
>I only caught the end of the argument so I may be missing something.
>Tell me if I'm right.
>
>Frank Breen

Yes, you are basically right. But Searle is aware of this precise argument,
in fact, it is listed in his 1990 Sci. Am. article, yet he still dismisses it
as incorrect. What you're doing above is basically falling into a trap
whereby you are engaging in a discussion as to the validity of the statement
that there is some fundamental difference between biological information
processing systems such as the brain and any other artificial, mechanical
counterpart. This is a statement which can't be proven or disproven at the
present time. Either you accept it as a postulate or you don't.

But he is making a different mistake in formulating his Chinese room argument,
one which everybody can agree is faulty. Namely, his main point is that
the person in the room (M), since he is essentially performing a table-lookup
operation, does not understand the content of the questions and answers. This is
true. But from a theoretical point of view, given book B (program) and person
M (control unit), there exists an equivalent Turing machine T which has B
hardwired in its circuitry, and hence it is no longer possible to point the
finger at the book-keeping entity M. If you read his original articles, you will
see that his Chinese room argument rests entirely on being able to point to
M as the culprit. But his example is incorrect for the above reason. It's
just a consequence of the existence of Universal Turing machines.
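
In code (a hypothetical Python sketch, not anything Searle discusses):
specializing a general interpreter to a fixed book B yields a single machine T
with B built in, input/output equivalent to M following B.

from functools import partial

def interpreter_M(rule_book, question):
    # General-purpose control unit: follows whatever book it is handed.
    return rule_book.get(question, "?")

B = {"q1": "a1", "q2": "a2"}        # made-up stand-in for the real book

T = partial(interpreter_M, B)       # "B hardwired into the circuitry"

# T is input/output equivalent to M-following-B, yet there is no longer a
# separable book-keeping entity M to point a finger at.
assert all(T(q) == interpreter_M(B, q) for q in ("q1", "q2", "q3"))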

He would probably still like to carry on with his core conviction that there
is a fundamental difference between machines and brains, but he has to
find another argument; Chinese rooms are neutral w.r.t. supporting his
position.

reynolds@bucasd.bu.edu (John Reynolds) (06/16/90)

In article <2410@bruce.cs.monash.OZ.AU> frank@bruce.cs.monash.OZ.AU 
(Frank Breen) wrote:

Of course M [Searle] doesn't understand Chinese any more than someone
with half the speech centre of their brain missing would understand
English.  M is only a small part of the system - all the knowledge is
stored in B [the room] and together M+B does understand Chinese.
[I]t's only when you put it all together that it understands anything.

Tell me if I'm right.

park@usceast.UUCP (Kihong Park) replied:

Yes, you are basically right. But ... [w]hat you're doing above is
basically falling into a trap whereby you are engaging in a discussion
as to the validity of the statement that there is some fundamental
difference between biological information processing systems such as
the brain and any other artificial, mechanical counterpart.

reynolds@bucasd.bu.edu asks:

Am I missing something?  That's not the trap he's falling into at all.
The idea that the components of a system acting in isolation may be
unable to carry out some functions they can achieve when working
together doesn't depend in any way on whether the components of that
system are biological or not.

park@usceast.UUCP (Kihong Park) went on to add:

But he is making a different mistake in formulating his Chinese room argument,
one which everybody can agree is faulty. Namely, his main point is that
the person in the room (M), since he is essentially performing a table-lookup
operation, does not understand the content of the questions and answers. This is
true. But from a theoretical point of view, given book B (program) and person
M (control unit), there exists an equivalent Turing machine T which has B
hardwired in its circuitry, and hence it is no longer possible to point the
finger at the book-keeping entity M. If you read his original articles, you will
see that his Chinese room argument rests entirely on being able to point to
M as the culprit. But his example is incorrect for the above reason. It's
just a consequence of the existence of Universal Turing machines.

reynolds@bucasd.bu.edu looks puzzled and types:

I don't see your point.  So by removing Searle, who you say
doesn't understand the content of the questions and answers, and replacing
him and his book with circuitry, you put intelligence into the system?
And in what way is M the culprit?

cam@aipna.ed.ac.uk (Chris Malcolm) (06/27/90)

In article <36453@shemp.CS.UCLA.EDU> martin@oahu.cs.ucla.edu (david l. martin) writes:
>In article <965@idunno.Princeton.EDU> markv@gauss.Princeton.EDU (Mark VandeWettering) writes:

>>	Searle's language is CRIMINALLY loose.  Concepts such as understanding,
>>	causal powers, the distinction between syntax and semantics are 
>>	not ever defined in any paper of his that I have read.
>>	his recent Scientific American article was not a "proof": he merely
>>	assumed that his conclusion was correct and proceeded.

>Note that to the extent
>that the above 2 claims were in fact made by AI researchers, they were the
>ones who initiated the "loose" usage of concepts such as understanding, etc.

Hold on a minute! "Understanding" etc. were being used "loosely" (in the
sense of lacking precise definitions) by English speakers, psychologists,
and philosophers long before AI was thought of, as were their cognates in
other languages, such as Latin and Greek, long before the English
language was invented. And lacking a precise definition is not
necessarily a failure in a term: for example, despite centuries of
wrangling, there is still no satisfactorily agreed definition of
mathematics.

-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

forbis@milton.u.washington.edu (Gary Forbis) (06/27/90)

In article <25457@cs.yale.edu> blenko-tom@CS.YALE.EDU (Tom Blenko) writes:
>Consider
>again the artificial city, and suppose that someone does succeed in
>constructing such a thing.  So the artificial city replicates
>externally, as closely as anyone can tell, just the behaviors that a
>real city would.

I'm not sure you believe this.  I am pretty sure you do not.  Please
look again at your quote which follows.

>Now, the question is, does the artificial city have "civic pride"?  But
>the architects of the artificial city are only concerned with inputs
>and outputs, and when they deliver the desired transfer function, they
>suppose, using your view, that they are finished.  So there's no reason
>for anyone to suppose that it's meaningful to talk about the civic
>pride of an artificial city.

It does not make sense to say a city has been replicated and then say it
has not.  If an observer can tell the difference between the real and
the artificial, then as far as this city goes it has failed the Turing
Test.  If the architects are concerned with making the replica indistinguishable
from the real, then if civic pride is important it must be replicated.

>
>Searle takes consciousness
>and emotional states to be properties of the mind.  His claim (indeed
>his solution to the mind-body problem) is that these intentional
>properties are identically the states of the underlying processor...

Is Searle really a functionalist?  I don't understand how he could be
and still dispute the claims of Strong AI.

--gary forbis@milton.u.washington.edu

llama@eleazar.dartmouth.edu (Joseph A. Francis) (06/28/90)

In article <25457@cs.yale.edu> blenko-tom@CS.YALE.EDU (Tom Blenko) writes:
>Just as civic pride is a property of a city, Searle takes consciousness
>and emotional states to be properties of the mind.  His claim (indeed
>his solution to the mind-body problem) is that these intentional
>properties are identically the states of the underlying processor --
>that when one reports that someone is hungry, for example, one is
>saying nothing other than that the current state of her neurons is one
>element of the subset of all neuron states that has been labelled
>"hungry".  Therefore, he claims, the information-transducing properties
>of an intelligent artificial entity do not suffice -- the artificial
>entity must also reproduce the relationship between physical states and
>mental states.

Ahh.  And here is the crux of the whole matter (to me at least).  While
others cite a variety of assumptions Searle makes in CR as the straw 
that breaks CR's back, I believe the following is a more telling problem:

I don't think the CR can function as per CR WITHOUT having mental states.
For instance, we would not say something passes the Turing test if it
can never remember the last question you asked it - so clearly the innards
of CR do not have static contents - the man in CR must not be just reading
symbols - applying rules from the book - and outputting symbols; he must
also be WRITING things (in the book or on scratch paper or somewhere).
Also, this system as a whole must be able to learn new Chinese words, learn
how to play crazy eights and tic-tac-toe, and formulate opinions on the
validity of Searle's CR argument, etc.  The innards of CR are an
extremely active, complicated, and evolving place.

I claim (without any support - just a claim) that doing all this 
necessitates mental states, self-awareness, thought, and so on.  So in
essence, if Searle insists his CR has no mental states, I assert CR is
impossible.  If Searle doesn't mind attributing mental states to CR,
then fine, but now he'll have to grant CR the property of intelligence.

-Joe

blenko-tom@CS.YALE.EDU (Tom Blenko) (06/28/90)

In article <4490@milton.u.washington.edu> forbis@milton.u.washington.edu (Gary Forbis) writes:
|It does not make sense to say a city has been replicated and then say it
|has not.  If an observer can tell the difference between the real and
|the artificial, then as far as this city goes it has failed the Turing
|Test.  If the architects are concerned with making the replica indistinguishable
|from the real, then if civic pride is important it must be replicated.

A city has extensional properties (resources it consumes, products it
produces) and intentional properties (I suggest civic pride as an
example).  The analogy is made to the mind.  Searle says input/output
relations do not suffice to reproduce the mind because they capture
extensional properties while neglecting intentional properties (of
which hunger might be an example).  And he takes hoping, fearing,
loving, hungering, and so forth, which are not objectively observable,
to be essential and intentional states of any mind.

|>Searle takes consciousness
|>and emotional states to be properties of the mind.  His claim (indeed
|>his solution to the mind-body problem) is that these intentional
|>properties are identically the states of the underlying processor...
|
|Is Searle really a functionalist?  I don't understand how he could be
|and still dispute the claims of Strong AI.

I don't know what you mean by "functionalist". Certainly he is a
physicalist.  And (one of) his arguments against strong AI is that
intentional properties arise not just from the program but from the
processor, as I've outlined previously.

	Tom

kenp@ntpdvp1.UUCP (Ken Presting) (06/28/90)

In article <965@idunno.Princeton.EDU>, markv@gauss.Princeton.EDU (Mark VandeWettering) writes:
> In article <593@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:
> >
> >Searle gets a lot of heat for using loose language, but there are 
> >important cases where he says just what he means, in no uncertain
> >terms.  
> 
> 	Searle's language is CRIMINALLY loose.  Concepts such as understanding,
> 	causal powers, the distinction between syntax and semantics are 
> 	not ever defined in any paper of his that I have read.  In particular,
> 	his recent Scientific American article was not a "proof": he merely
> 	assumed that his conclusion was correct and proceeded.

Don't bother with the places where his language is loose.  If we are serious
about wanting to defeat this argument, we need to look at those few places
where he is clear.  

We are all familiar with the process of "desk debugging" - simulating a
program by hand to trace its operation.  Searle is not in a vacuum at
UC Berkeley.  He has talked to *plenty* of programmers, and knows about
hand simulation.  

The CR does NOT get its staying power from the vagueness.  This tar baby
is sticky because it's based on everyday common sense.  A programmer can
run through the steps of his program just as well (if not as fast) as
a chip.  And once you've "learned" a foreign language, you can do a lot
more than just generate replies to written notes.

But we need a stronger solvent than common sense.  Let's talk logic:

> >For example, he says that the CR example itself is presented only
> >to show that semantics is not reducible to syntax.  
> 	
> 	... which has NOT been shown at all ...

Searle doesn't really need to argue for this - he just wants a compelling
example.  Tarski has already proved it, and it is quite beyond debate.
Tarski's theorem states that no predicate P can satisfy the following
criterion for all sentences S:

		P('S') is provable if and only if 'S' is true. 

Note that this criterion does not involve the provability of S, but
only the provability of a sentence *about* S.  Truth is a semantic 
property, while any "P" in the theorem is a syntactic property (because
it takes a quoted sentence as its operand), so Tarski shows that truth
cannot be reduced to any syntactic property.
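
For the record, one standard modern form of the theorem (my paraphrase; here T
is any consistent theory able to encode its own syntax, e.g. a consistent
extension of Robinson arithmetic, and #S is the Goedel code of the sentence S)
is that there is no formula P(x) such that

	T \vdash P(\#S) \leftrightarrow S \quad \text{for every sentence } S.

The proof is the usual diagonal argument: the diagonal lemma yields a sentence
L with  T \vdash L \leftrightarrow \neg P(\#L),  which together with the
displayed criterion would make T inconsistent.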

Pat Hayes has correctly pointed out that programs are more than syntax,
which is very important.  A single floppy disk with my program on it
is physically different from the same disk with your program on it, and 
running the different programs will produce physically distinguishable
output.  This issue is important for the question of whether programmed
computers have any specific "causal powers," but is independent of the 
question of understanding.

>  Searle is trying to prove the following:
> 
> >		For any program P whatsoever, and for any machine M whatsoever,
> >		the following inference is always invalid:
> 
> >		Machine M runs Program P, therefore Machine M understands.
> 
> > . . .             Searle is *not* trying to show that no program can
> >think, or that no machine can think.  He is too clever for that.  He
> >is not attacking the *goal* of Strong AI.  
> 
> Interesting distinction, but why would [one] ever use the fact that program A 
> causes "understanding"?  The only really valid test for understanding is 
> demonstrating it.
> 

Suppose you want to sell your latest "Conversational Chinese" program.
Wouldn't you like to claim that running your program will make the 
computer speak Chinese?  It's a question of whether the way to make an
AI is by writing programs or by building machines (or some combination).

Strong AI says (according to Searle, and I think he is not far off) that
given the right program, any machine that is big enough to run it will
actually be intelligent, while the program is running.  I think this
claim is true, but Searle does not.  He thinks something else must be  
said about the machine before we can conclude that it understands.  Perhaps
you agree with Searle?

> >He is attacking the *argument* behind Strong AI.  This is much easier  
> >to do, but almost as devastating.  Write any program you want, and
> >run it any way you want, on any hardware, parallel, serial, or cerebral.
> >But if you want to claim that the system is thinking, you'll need a
> >better reason than "It's running my program".  
> 
> Searle's Chinese Room was designed to attack the concept of the Turing test
> as a valid test for intelligence.  It simply fails.  To Searle, it makes
> no sense to say that the Chinese Room _understands_ Chinese.  

This is simply handwaving.  Look - when I'm in my office, my office could
pass the Turing test in English, but it still makes no sense to say that
my office understands English.  If you want to claim that Searle defies
common sense in *his* argument, then you better not defy common sense in
your *own*!

Go back to the logic of the problem.  Take any programmed Turing machine
that can pass the Turing test.  This machine can do nothing more than 
implement a syntactic algorithm, since the input tape can never contain
anything other than a string of symbols.  Therefore, by Tarski's theorem,
(and Church's thesis) the machine could represent a syntactic predicate
but not a semantic predicate, such as truth.  

Now, *whatever* it is that constitutes understanding, it must have something
to do with knowing what words mean, and that certainly requires semantics.
We have just seen that no Turing machine can represent the concept of
"truth", so there is at least one word which no Turing machine can understand.
This is a different conclusion than Searle wanted, but that's because I have
used slightly different premises (getting Searle's original conclusion
would require a more general concept of "semantic information").

I hope we can put an end to the vague language of both sides, in this
thread.


Ken Presting  ("Let us calculate")

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) (06/29/90)

In article <25457@cs.yale.edu> blenko-tom@CS.YALE.EDU (Tom Blenko) writes:
 
>Well, this is exactly one of the things Searle is disputing. Consider
>again the artificial city, and suppose that someone does succeed in
>constructing such a thing.  So the artificial city replicates
>externally, as closely as anyone can tell, just the behaviors that a
>real city would.
 
>If we go to a real city, we can pretty well arrive at an opinion about
>how much "civic pride" it has. It is reflected in various tangible
>elements of the city (parks, libraries, services) and less tangibly in
>the attitudes and dispositions of its human inhabitants.
 
>Now, the question is, does the artificial city have "civic pride"?  But
>the architects of the artificial city are only concerned with inputs
>and outputs, and when they deliver the desired transfer function, they
>suppose, using your view, that they are finished.  So there's no reason
>for anyone to suppose that it's meaningful to talk about the civic
>pride of an artificial city.

You may be carrying the metaphor of the city a little too far.  What's 
the correlation between "civic pride" and something that happens in
neurons (it's something intangible, I assume, but what?).  And simply
because cities have "civic pride" is no reason to assume that neurons
have a corresponding phenomenon.

In any case, we can linguistically define "civic pride".  It has certain
effects and, presumably, causes.  It manifests itself in a certain way.
As long as we can define a property like this, we can simulate it or
duplicate it.

- Jim Ruehlin

blenko-tom@CS.YALE.EDU (Tom Blenko) (06/30/90)

In article <3431@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:
|You may be carrying the metaphor of the city a little too far.  What's 
|the correlation between "civic pride" and something that happens in
|neurons (it's something intangible, I assume, but what?).  And simply
|because cities have "civic pride" is no reason to assume that neurons
|have a corresponding phenomenon.

The point about civic pride is that it ultimately is just a disposition
shared by the inhabitants of a city. Concrete manifestations (e.g.
parks) may be taken as evidence for civic pride, but they can arise in
the absence of civic pride, and they need not arise in the presence of
civic pride.

Searle's mind/brain hypothesis is that mental states (e.g.
consciousness, hunger) are simply labels for collections of neural
states.  So, just as it is difficult to talk about the civic pride of a
"city" lacking nearly-human inhabitants, it is difficult to talk about
the "hunger" of a system defined by a program running on an arbitrary
processor.  Mental states are taken as a necessary property of a
mind, artificial or otherwise.

|In any case, we can linguistically define "civic pride".  It has certain
|effects and, presumably, causes.  It manifests itself in a certain way.
|As long as we can define a property like this, we can simulate it or
|duplicate it.

Searle thinks this view represents a major (and commonplace)
misunderstanding (so do I).  Let's accept that you can linguistically
define "civic pride".  Now, how do you duplicate it without using
humans as elements of the system duplicating it?

There are lots of other examples. You can simulate the aerodynamic
properties of an aircraft design -- but there are always details
missing, and some of them may prove critical to the aircraft's
performance. Similarly, if you can provide an accurate simulation of
the economy, which simply represents the aggregate behavior of a group
of more-or-less independent actors, you can easily become the first
billionaire on your block.

	Tom

Victoria_Beth_Berdon@cup.portal.com (07/02/90)

(Note: this article is being posted for Ken Presting)

> In article <593@ntpdvp1.UUCP>, kenp@ntpdvp1.UUCP (Ken Presting) writes:
> > [...stuff deleted...]
> > Searle is *not* trying to show that no program can
> > think, or that no machine can think.  He is too clever for that.  He
> > is not attacking the *goal* of Strong AI.  
> > 
> > He is attacking the *argument* behind Strong AI.  This is much easier  
> > to do, but almost as devastating.  Write any program you want, and
> > run it any way you want, on any hardware, parallel, serial, or cerebral.
> > But if you want to claim that the system is thinking, you'll need a
> > better reason than "It's running my program".
> 
> Ken, the only article I have read by Searle was the one in the January
> 1990 Scientific American. In that article, it sure seems to me that
> Searle is claiming that the Strong AI position is provably wrong, not
> only that arguments in its favor are incorrect. 

This is a very subtle point, and well worth getting straight.  In the 
second paragraph of the Sci.Am. article, Searle says:

        The question that has been posed ... is, Could a machine
        think just by virtue of implementing a computer program?

On page 27, his Conclusion 1 is:

        Programs are neither constitutive of nor sufficient for
        minds.

Finally, later on the same page he says:

        Third, strong AI's thesis is not that, for all we know, computers
        with the right programs might be thinking, that they might have
        some as yet undetected psychological properties; rather, it is 
        that they must be thinking because that is all there is to 
        thinking.

So you are right that Searle thinks that Strong AI is provably wrong.  But
I want to emphasize that for Searle, "strong AI" is *not* the straightforward
claim that computers can think (someday).  It is not even the slightly
more sophisticated claim that the right program will make it possible for
computers to think.  Searle is attacking the claim that "running the right
program, by itself, will CERTAINLY make computers think."

When he uses phrases like "just by virtue of ..." or "that's all there is
to ...", Searle is saying two things:

        The thesis of Strong AI is not just an atomic sentence, such as
        "Computers can Think", it is actually an inference, such as,
        "Programs can make computers think, BECAUSE <you name it>".

        The inference of Strong AI depends on exactly one premise, that
        the machine is running a certain program.  No other premises are
        allowed.


> . . .     His Chinese room
> argument seemed to be an attempt to prove that *no* program that only
> uses symbolic manipulation can ever be said to understand. You are
> certainly right that Searle did not go so far as to claim that no
> machine can understand, but he sure seemed to be claiming that any
> such hypothetical machine must be doing more than symbol manipulation.

I guess we pretty much agree on this whole issue, but I thought I
should beat it into the ground ...

> 
> Searle's argument seemed to boil down to "if the man doing the
> symbol-manipulation doesn't understand, then the Chinese room (man +
> rules) doesn't understand", which is equivalent for computers to "if
> the cpu doesn't understand, then the cpu running a program doesn't
> understand". That claim, if you buy it (I don't) seems to me to
> completely rule out the possibility of a computer understanding
> anything.
> 
> Daryl McCullough

Searle does say (p.27) that he "has not tried to prove that 'a computer
cannot think'", so I would say that if you are reading his argument in 
a way that commits him to the stronger position, you may want to look
again.

What makes Searle's weaker point (that mind cannot be inferred from
programming) interesting is that he does not need to say that Strong AI
will fail - but he is saying that there cannot be any reason to believe that
it will succeed.  That is, I'm not claiming that Searle is trying to 
undermine any arguments in favor of the goal of Strong AI being possible.
What Searle is attacking can be viewed as the practical arguments behind
strong AI as a research program.  

"We want a smart computer, so let's write a program to make computers smart."
Searle is saying that you can program from now till doomsday, and then
Totally Turing Test The resulTs for a Thousand lifeTimes.  But that will
*not* entitle us to conclude that the computer is smart.  So in addition
to writing programs, we should be doing something else, presumably related
to "causal powers". 


> In article <593@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:
> |precise.  Searle is trying to prove the following:
> |
> |       For any program P whatsoever, and for any machine M whatsoever,
> |       the following inference is always invalid:
> |
> |       Machine M runs Program P, therefore Machine M understands.
>
> In article <25422@cs.yale.edu>, blenko-tom@CS.YALE.EDU (Tom Blenko) writes:> 
> This is much too strong, and you are arguing against yourself.  Searle
> claims that functional equivalence does not suffice, that intelligence
> is an intensional property.  This is presented as a counter to the
> "machine-independent" property he ascribes to strong AI advocates.

Why do you think this is too strong, or that I'm arguing against myself?

> 
> I believe he makes the claim about biological versus silicon
> implementations in his first paper, and I've certainly heard him make
> that claim in person.

On p.27 of the Sci. Am. article, Searle says:

        Second, I have not tried to show that only biologically
        based systems like our brains can think.  Right now, those 
        are the only systems that we know for a fact can think, but we
        might find other systems in the universe that can produce 
        conscious thoughts, and we might even be able to create 
        thinking systems artificially.

I think this is a clear statement.


Ken Presting  ("Burn AFTER reading")

blenko-tom@CS.YALE.EDU (Tom Blenko) (07/03/90)

In article <31329@cup.portal.com> Victoria_Beth_Berdon@cup.portal.com writes:
|(Note: this article is being posted for Ken Presting)
|...
|> In article <593@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:
|> |precise.  Searle is trying to prove the following:
|> |
|> |       For any program P whatsoever, and for any machine M whatsoever,
|> |       the following inference is always invalid:
|> |
|> |       Machine M runs Program P, therefore Machine M understands.
|>
|> In article <25422@cs.yale.edu>, blenko-tom@CS.YALE.EDU (Tom Blenko) writes:> 
|> This is much too strong, and you are arguing against yourself...
|
|Why do you think this is too strong, or that I'm arguing against myself?
|

It reads as

	FORALL P FORALL M  NOT(M(P) ==> M understands)

which says that no program running on any machine  results in a machine
that "understands" (should be system that understands).  Searle's claim
is closer to saying there is no universal intelligent program, i.e.,

	NOT(EXISTS P FORALL M  M(P) ==> M(P) is intelligent)

which is logically equivalent to the much weaker (than yours) assertion

	FORALL P EXISTS M  NOT(M(P) ==> M(P) is intelligent)
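
For completeness, the equivalence is just pushing the negation inward through
the quantifiers; writing phi(P,M) for "M(P) ==> M(P) is intelligent", in LaTeX
notation:

	\neg\exists P\,\forall M\;\varphi(P,M)
	  \;\equiv\; \forall P\,\neg\forall M\;\varphi(P,M)
	  \;\equiv\; \forall P\,\exists M\,\neg\varphi(P,M)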

I say that you are arguing against yourself because you attribute this
claim to Searle (and the informal one it is intended to capture, saying
that Searle denies the "relevance" of programs), yet it is at odds with
your acknowledgement that Searle is not arguing against the possibility
of an intelligent, artificial entity.

|> I believe he makes the claim about biological versus silicon
|> implementations in his first paper, and I've certainly heard him make
|> that claim in person.
|
|On p.27 of the Sci. Am. article, Searle says:
|
|        Second, I have not tried to show that only biologically
|        based systems like our brains can think.  Right now, those 
|        are the only systems that we know for a fact can think, but we
|        might find other systems in the universe that can produce 
|        conscious thoughts, and we might even be able to create 
|        thinking systems artificially.
|
|I think this is a clear statement.

My point was that Searle believes that not only the program but also the
implementing processor contributes essential properties to the
resulting entity. Therefore it is relevant whether the implementing
processor/system consists of hardware, software, or wetware.

	Tom

daryl@oravax.UUCP (Steven Daryl McCullough) (07/03/90)

In article <593@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:
>       Searle is trying to prove the following:
>
>       For any program P whatsoever, and for any machine M whatsoever,
>       the following inference is always invalid:
>
>       Machine M runs Program P, therefore Machine M understands.
>

If Searle were only trying to show that the inference above is
invalid, then I would have no further argument with him; he would be
correct. Furthermore, his Chinese Room argument would indeed be a
convincing argument: If Machine M is the man in the Chinese room, then
for any program P, the man could run program P and still not
understand Chinese.

However, the validity of the above inference is not claimed by Strong
AI (or if it is, then they are just speaking loosely). The more
precise claim would be that, for the right program P, one can infer

       Machine M runs Program P, therefore the system (Machine M running
       Program P) understands.

This is closer to the strong AI position, and it seems that Searle has
no good argument against it. For the Chinese room to count as an
argument against this claim, it would be necessary to establish that
the system (man + rules + room) does not understand Chinese. And
Searle cannot establish this without offering *some* definition of
what it means for a system to understand. (Comment: Searle's variant
of having the man memorize the rules does not change anything; there
would still be two systems: the man "acting himself" and the man
following the rules. Establishing that one system does not understand
does not automatically establish that the other doesn't.)

Daryl McCullough

kenp@ntpdvp1.UUCP (Ken Presting) (07/10/90)

> |> In article <593@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:
> |> |. . .     Searle is trying to prove the following:
> |> |
> |> |       For any program P whatsoever, and for any machine M whatsoever,
> |> |       the following inference is always invalid:
> |> |
> |> |       Machine M runs Program P, therefore Machine M understands.
>
> Tom Blenko writes: 
> This is much too strong, and you are arguing against yourself...
> . . .  
> It reads as
> 
>	 FORALL P FORALL M  NOT(M(P) ==> M understands)
> 
> which says that no program running on any machine  results in a machine
> that "understands".  . . . 

Tom, you are mistaken.  You have overlooked the distinction between 
"valid inferences" and conditional assertions.  In standard symbols,
my version of Searle's thesis would read:

	(P) (M) - ( M runs P |= M understands ) 

The "|=" symbol denotes the logical relation called "entailment".  The
simple conditional form which you use here ignores Searle's repeated
use of locutions like "must" and "simply by virtue of", which indicate
a *necessary* relation between the antecedent and consequent.  (I have
neglected the object- vs. meta-language issue in my formula, but that
should not lead to much confusion.  I have also avoided the standard modal 
interpretation of "necessity", which should positively reduce confusion.)

Since entailment is a stronger relation than implication, the negation
of an entailment is weaker than the negation of an implication, and
my version of Searle's claim has similar truth conditions to the version
you propose below.  Since Searle is claiming (on my reading of him) that  
the running of any program will not *necessitate* the presence of 
understanding in any machine, he can proceed in two steps, the first of
which is identical to your proposal:

> . . . Searle's position 
> is closer to saying there is no universal intelligent program, i.e.,
> 
>	 NOT(EXISTS P FORALL M  M(P) ==> M(P) is intelligent)
> 
> which is logically equivalent to the much weaker (than yours) assertion
> 
>	 FORALL P EXISTS M  NOT(M(P) ==> M(P) is intelligent)
> 

Notice that for Searle to support this last claim, he needs to demonstrate
the existence of a single Machine such that no matter what Program it is
running, it will not understand Chinese.  He thinks he has done so with
the Chinese Room.  Perhaps he has not, but that is another question.  If
the CR example is successful, then he has his first step.

Perhaps the difference between your reading of Searle and mine comes to this: 
You have formulated the conditions which he tries to meet with the CR example
itself, while I am attempting to formulate the general conclusion for the 
argument of which the CR example is a part.

The second step requires the application of a rule of inference which is
analogous to "Universal Generalization" in natural deduction systems.  If 
Searle is granted the assumption that there is no relevant difference between 
the case of a computer running a program and himself running the same    
program, then he can conclude for all machines that there is no necessary
connection between the program it runs and its understanding.  Searle thinks
this assumption follows trivially from "Axiom 1: Programs are purely formal".
Pat Hayes denies the assumption (with some justice, I think, but the issue
is not simple).


> I say that you are arguing against yourself because you attribute this
> claim to Searle (and the informal one it is intended to capture, saying
> that Searle denies the "relevance" of programs), yet it is at odds with
> your acknowledgement that Searle is not arguing against the possibility
> of an intelligent, artificial entity.
> 

Even on your own formulation of my reading, this does not follow.  From

	(P)(M) - ( Runs(M,P) -> Understands(M) )

it does not follow that 

	(M) - ( Understands(M) ).

All that follows is that running a certain program is not a sufficient
condition for understanding.  

Now, you may object that if the question "What program is that machine
running?" is not enough to decide the issue of the machine's intelligence,
then no amount of additional information could ever establish that a
general-purpose computer is intelligent.  Many people do believe this
(the Churchlands seem to), and propose that Connectionism is the only 
hope of AI.  

Whatever the status of that issue, the Chinese Room does not, by itself,
establish that no programmed general purpose computer can understand.  If 
it establishes anything, it establishes *only* that we must know more 
about a computer than what program it is running, before we draw any 
conclusions about its intelligence. 

> Tom
> 

Thanks for your comments.  I especially appreciate the formal direction
you have given to this thread.  If we can keep this up, we may get 
somewhere.

Ken Presting  ("Metastasis Before Modality")

dave@cogsci.indiana.edu (David Chalmers) (07/13/90)

In article <597@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:
>
>> Tom Blenko writes: 
>
>>	 FORALL P EXISTS M  NOT(M(P) ==> M(P) is intelligent) 
>
>Notice that for Searle to support this last claim, he needs to demonstrate
>the existence of a single Machine such that no matter what Program it is
>running, it will not understand Chinese.

Just for the record, this is fallacious.  Such a strategy would be
sufficient to support the claim, but not necessary.  Take another look
at the order of the quantifiers.

Talk of "machines" tends only to confuse the issue, anyway.  All we need
is the notion of *program* (a formal object), and *implementation of program*
(a physical system).  It's not clear that all implementations will be
describable as running on pre-existing machines.

In this framework, the strong AI claim becomes:

  EXISTS P (program) such that FORALL S (physical system):
     S is an implementation of P  =>  S is intelligent.

Actually, even this may be too strong.  Some might like to say "S
produces intelligence" rather than "S is intelligent" -- the question of
the "ownership" of the intelligence is somewhat vague.  e.g. is your *brain*
intelligent?; is your *body*?;  such technical questions don't need to be
answered to deal with Searle's argument.

Anyway, with this in place, Searle needs to show
  
  FORALL P, EXISTS S such that S is an implementation of P but S does not
  produce intelligence,

which is what the Chinese Room purports to show.  Of course it doesn't show
that, but that's another story.  Suffice it to reiterate the often-made point
that the fact that the pre-existing machine (i.e. the person in the room)
that implements the program fails to understand is quite irrelevant.
Implementing machines aren't what counts: implemented systems are.


>program, then he can conclude for all machines that there is no necessary
>connection between the program it runs and its understanding.  Searle thinks
>this assumption follows trivially from "Axiom 1: Programs are purely formal".
>Pat Hayes denies the assumption (with some justice, I think, but the issue
>is not simple).

Actually, I think that programs are indeed purely formal (or purely
syntactic, or whatever you like).  However, *implementations of programs*
certainly aren't.  They're concrete physical systems with all kinds of
interesting internal causal structure.  The fallacy of the "programs are purely
syntactic, minds are semantic, syntax isn't sufficient for semantics; 
therefore implementing an appropriate program cannot be sufficient to
produce a mind" argument is best brought out by a corresponding argument:

(1) Recipes are completely syntactic.

(2) Cakes are tasty (or crumbly, or heavy, or...)

(3) Syntax is not sufficient for tastiness (or crumbliness, or heaviness...)

(4) Implementing the appropriate recipe cannot be sufficient to produce a cake.

I hope that even Searle would see the fallacy here.  Recipes are syntactic,
but *implemented recipes* are not.  Of course, one needs a meaningful
interpretation procedure to go from the recipe (formal specification) to
the cake (physical implementation).  But one has such a procedure (it's hanging
around in the head of (good) cooks, and could presumably be mechanized.)
Exactly the same goes for programs.  Programs are syntactic, implemented
programs are not.  Implemented programs are physical systems, derived from
formal programs through an interpretation procedure (either a compiler
or an interpreter, in practice, or both).  The role of the
compiler/interpreter is precisely analogous to the role of the chef.

>Now, you may object that if the question "What program is that machine
>running?" is not enough to decide the issue of the machine's intelligence,
>then no amount of additional information could ever establish that a
>general-purpose computer is intelligent.  Many people do believe this
>(the Churchlands seem to), and propose that Connectionism is the only 
>hope of AI.  

This statement seriously misconstrues the nature of connectionism.
The issue of Connectionism vs. Traditional AI is quite orthogonal to the
issue of Strong AI vs. Searle.  Personally, I'm a dyed-in-the-wool
connectionist (or, more generally, a subsymbolic computationalist), but
I'm also a dyed-in-the-wool Strong AI supporter.  The two positions are
quite compatible.  Most connectionists believe that implementing the right
program is enough to give you intelligence -- they just happen to believe
that the program you need will be of a particular kind, compatible with
the principles of connectionism.

The notion that connectionism rejects, say, the Turing notion of computation
is quite prevalent in some circles, and can even be found in print from
time to time.  It's quite fallacious, though.  Personally, I think that
the Turing notion of computation is the greatest thing since sliced bread.
It's just that people in traditional AI placed far too heavy a restriction
on the kind of computations they allowed in (by making a deep prior commitment
about the ways in which computational states could carry semantics).
Connectionism advocates removing this heavy semantic commitment (note: it
doesn't advocate removing semantics, it just remains silent about the level
at which the semantics might lie), and thus returning to the full-fledged,
unrestricted class of computations that Turing allowed.

Most connectionists believe in Strong AI, without a doubt.  Only the
*class* of sufficient programs is in dispute.


Sorry about this... and I had vowed "never again".  Chinese-Room withdrawal
symptoms, I guess.  One of these days I'm going to write a paper called
"Everything You Wanted to Know About the Chinese Room but Were Afraid to Ask".
Searle's arguments are deeply fallacious, but they raise an enormous number of
interesting issues.

--
Dave Chalmers     (dave@cogsci.indiana.edu)      
Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable"

kenp@ntpdvp1.UUCP (Ken Presting) (07/18/90)

In article <50741@iuvax.cs.indiana.edu>, dave@cogsci.indiana.edu (David Chalmers) writes:
> 
> Talk of "machines" tends only to confuse the issue, anyway.  All we need
> is the notion of *program* (a formal object), and *implementation of program*
> (a physical system).  It's not clear that all implementations will be
> describable as running on pre-existing machines.
> 
> In this framework, the strong AI claim becomes:
> 
>   EXISTS P (program) such that FORALL S (physical system):
>      S is an implementation of P  =>  S is intelligent.
> 

This version of the Strong AI claim is anticipated by Searle, on p.29 of
the Sci.Am. article:

         The thesis of Strong AI is that any system whatsoever ... not
      only might have thoughts and feelings, but _must_ have thoughts
      and feelings, provided only that it implements the right program,
      with the right inputs and outputs.

Notice two (small) differences:

1) Searle is insistent on the issue of programs "constituting" minds, and
physical objects "causing" thought.  So the arrow above must not be
read as simple material implication - it must be necessary implication,
or entailment, or some other counterfactual relation (e.g. causal).

2) As I read him, Searle is lumping together all automata which compute
the same function, independently of the algorithm they use.

If I follow Daryl McCullough's last article, he for one would not accept
this version of Strong AI.  Probably few would.  I myself balk at (2),
because I believe a thinking thing must contain a representation of the
concept of "truth", which cannot be finitely defined in I/O terms.
 
Searle is perfectly willing to face Strong AI defined in terms of
"implemented systems", but there is probably a difference between his
concept of implementation and Dave's.  The usual software engineering
concept of "implemented system" involves:

a) an independently pre-existing "machine" which can run most any "program"
b) a "machine-readable" copy of a program
c) a mechanical process for "loading" the program into the machine
d) a user-initiated process of "running" the program.

Searle seems to view "implementation" in this fashion, which I'll call
GOFR, for "Good Old Fashioned Running" (apologies to John Haugeland).
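
As a throwaway sketch of GOFR (Python, with names invented here - this is
nobody's actual proposal), items (a)-(d) come out roughly as:

class Machine:                            # (a) pre-existing general machine
    def __init__(self):
        self.program = None

    def load(self, machine_readable):     # (c) mechanical loading process
        self.program = machine_readable

    def run(self, user_input):            # (d) user-initiated run
        state = user_input
        for step in self.program:         # follow the stored steps blindly
            state = step(state)
        return state

program_copy = [str.strip, str.upper]     # (b) a machine-readable copy of a program

m = Machine()
m.load(program_copy)
print(m.run("  ni hao  "))                # prints "NI HAO"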

Note that Hilary Putnam claims to have shown that every physical
system is an *instantiation* of every finite automaton.  Unless Dave's
concept of "implemented system" can be distinguished from Putnam's
"instantiated automata", Searle could object that Dave's version of
Strong AI implies panpsychism.  "Go ahead," Searle would say, "write
your magic program.  Now find a system that *doesn't* implement it, or
else explain why all these implementations lying around on the ground
still act so stupid."

GOFR is not subject to Putnam's argument, because to identify an object
as a "machine" requires the concurrent specification of the processes of
loading and running the program, and a coding scheme for machine-
readable copy.  (See _Representation and Reality_ for the argument.)

This is the first problem with the Systems Reply - the high powered
abstract concept of implementation reduces Strong AI to an absurdity,
while the traditional GOFR concept does not neatly excise the
"pre-existing machine" and its gripes about not understanding its data.

> 
> Anyway, with this in place, Searle needs to show
>   
>   FORALL P, EXISTS S such that S is an implementation of P but S does not
>   produce intelligence,
> 
> which is what the Chinese Room purports to show.  Of course it doesn't show
> that, but that's another story.  Suffice to reiterate the often-made point
> that the fact that the pre-existing machine (i.e. the person in the room)
> that implements the program fails to understand is quite irrelevant.
> Implementing machines aren't what counts: implemented systems are.

Let me try to give a formal analogue of this objection, in the forlorn
hope of clarifying the issue once and for all.  Let U(n,m) be the
function computed by a Universal Turing Machine, where 'n' is the Goedel
number of an arbitrary TM, and 'm' is the Goedel number of an arbitrary
starting configuration of an input tape.  Let C(m) be the function
computed by a TM that, when implemented, can pass the Turing Test in
Chinese.  Let 'c' be the Goedel number of some TM that computes C(m).
Finally, suppose for a moment that we have made sense of the concept of
"implementation", and let S and R be implements.

Note first that on any acceptable concept of "implementation":

      For any S,
         S implements U(c,m) if and only if S implements C(m).
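
A throwaway illustration of why (Python, with a two-program "Goedel numbering"
invented for the occasion, and string reversal standing in - absurdly - for
Chinese competence): freezing the first argument of U at c just *is* C.

c = 1                                     # invented program number for "Chinese"

def U(n, m):
    """Toy universal function: run program number n on input m."""
    if n == 0:
        return m                          # program 0: echo
    if n == c:
        return m[::-1]                    # program c: the "Chinese" program
    raise ValueError("unknown program number")

def C(m):
    """The function computed by the 'Chinese' TM (here, string reversal)."""
    return m[::-1]

# With n frozen at c, U simply is C, so anything that implements U(c, .)
# thereby implements C(.), and vice versa.
assert all(U(c, m) == C(m) for m in ("abc", "ni hao ma"))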

Now, according to Systems Repliers, all that Searle shows is:

      There are S, R such that
         S implements U(n,m) and R implements U(c,m)
         and S does not think.

And of course, this is irrelevant.  We only care whether R thinks.

If I finally got it straight, this is the gist of Bob Kohout's "Viola"
(:-) article last spring.  NOBODY, NOT EVEN SEARLE, IS THAT STUPID.

The Putnam-based objection I gave above is never raised by Searle,
because he has a more straightforward counter on p. 30 of the
Sci.Am. article:

      The point of the original argument was that symbol shuffling by
      itself does not give any access to the meanings of the symbols.
      But this is as much true of the whole room as it is of the person
      inside.

Searle's point is that there is nothing a Universal TM can do with a
program that he cannot do just as well himself.  When he says "But I
still don't understand Chinese", he is not just reporting a subjective
state of ignorance.  He is correctly emphasizing that the syntactically
specified operations he is performing on the symbols are unrelated to
their semantics.  Everyone agrees that Searle-without-books not
understanding is completely irrelevant.  The controversial issues are:

Problem 1:  How much do we have to add to Searle in order to get an
            "implemented system", and what "causal powers" will that
            system have?

Problem 2:  Once we have an "implemented system", what connection, if
            any, is there between the operations of the system and the
            semantics of the symbols?

There are (at least) two distinct threads in the CR debate.  One is not an issue
for AI at all - Problem 1.  General Computer Science should be able to
handle that issue, but not until the semantics of programs is understood
and the concept of implementation is cleared up.  Problem 2 is specific
to AI, but it can only be studied if we make some general assumptions
about how symbols get their meanings.  IMO, we can get these assumptions
from Quine and Davidson, so there's no need to think in a vacuum.



Ken Presting   ("Anybody else abhor a vacuum?")