[comp.ai.philosophy] Searle's Chinese Room

cw2k+@andrew.cmu.edu (Christopher L. Welles) (11/08/90)

I've just finished a paper on Searle's "Minds, Brains, and Programs." 
Just thought I'd post it to see what other people thought.  Anyone agree
with me??

Brains, the Magic Vessel of Intentionality


	In his "Minds, Brains, and Programs", John Searle's whole 
conclusion is based on a bad assumption.  The responses to his paper do bring 
up some things worth considering, but this occurs only because Searle's 
statements were misinterpreted.  His ideas, themselves, represent 
something that is obviously, and completely wrong, not worthy of 
consideration.  His whole paper represents a lack of  understanding of how 
formal systems work, what "intentionality" is, and the difference between 
hardware and software.
	Beginning with his first example, Searle makes a tragically false 
assumption.  He claims that since the English-speaking man, who 
performs all the formal operations, does not understand Chinese, there is no 
understanding involved.  The obvious problem with this is the Systems 
Reply:  the system as a whole understands.  The man is only a part of the 
system, so it is no surprise that he himself does not understand.  This was 
understandably misinterpreted by Searle.  As he understood it, it implied 
that simply internalizing the system (i.e. memorizing the rules, script, 
etc.) would mean that the English-speaking man must therefore understand 
Chinese, something that would obviously not be true.
	The point of the Systems Reply, however, has nothing to do 
with the physical representation of the system.  For all it cares, the 
components of the system could be spread all over the world.  It depends 
solely upon the logical structure of the system and how its components 
interact.  Stated more clearly, the English-speaking man was only playing 
the part of a machine executing a program.  It is the result which 
understands, not the machine itself.  In order for the English-speaking 
man to perceive understanding in the system, he would have to see it from 
the system's viewpoint, not just perform the actions that bring the 
system about.
	An analogy to this would be a neuron in the brain.  It merely 
reacts according to specified rules, a program.  Even if it were capable, it 
would have no understanding of what its actions represented to the whole 
system, i.e. the mind.  Pulses are just received and sent according to a 
set of rules.  You can apply this to the whole brain, for that matter.  The 
brain itself does not understand; only the result of its execution, the 
mind, does.
	To say otherwise, you would have to say that somehow, if the 
brain stopped executing its instructions, there would still be 
understanding.  After all, if the understanding were inherent in there 
simply being a brain, why bother acting according to fixed rules?  Do you 
decide to keep following those fixed rules so you will continue thinking???
	The idea that following formal principles will cause you to 
understand is absurd.  Do you feel that you keep following a set of rules 
so that you keep understanding?  You are, however, the result of formal 
rules being followed.  Your brain acts according to physical principles, a 
set of rules.
	This distinction between mind and brain is exactly one of the 
things Searle argues against:

"The distinction between the program and its realization in hardware 
seems to be parallel to the distinction between the level of mental 
operations and the level of brain operations. . . .  But the equation "mind is 

to brain as program is to hardware" breaks down. . . . the distinction 
between program and realization has the consequence that the same 
program could have all sorts of crazy realizations. . . .  Weizenbaum, for 
example, shows in detail how to construct a computer using a role of toilet 
paper and a pile of small stones.    Similarly, the Chinese. . . program
can be 
programmed into a sequence of water pipes, a set of wind machines, or a 
monoligual English speaker, none of which thereby acquires 
understanding of Chinese."(Perry, p. 400-1)

	All this means, the only thing he demonstrates throughout his 
whole paper, is that performing a series of rules cannot result in the 
performer understanding.  However, it does not mean that no understanding 
will result, just that the one executing the program won't be the one doing 
the understanding.  There is nothing against this idea anywhere in AI.  No 
one expects the circuits to begin understanding just because they are 
following a certain program.  A program in the state of being executed, 
however, might be said to understand.
	It seems Searle never got this distinction straight, however:  "One 
of the claims made by the supporters of strong AI is that when I 
understand a story in English, what I am doing is exactly the same--or 
perhaps more of the same--as what I was doing in manipulating the 
Chinese symbols."(Perry, p. 393)
	Searle obviously didn't understand what was meant here.  In 
the case of him performing symbol manipulation, yes that is done.  But it is 
not something he, his understanding conscious self, does.  The same term is 
being used here to refer to two different things: both his mind and his 
body, the brain being considered part of the body.  There is 
generally no need to distinguish between them because, at least at our 
current technology level, the mind and the body happen to stay together.
	Searle's argument against the Chinese box is simply that there 
can be no understanding with just formal symbols.  The symbols must 
have some intrinsic  meaning assigned to them.  He terms this 
"intentionality."  By his own definition, intentionality is "that feature of 
certain mental states by which they are directed at or about objects and 
states of affairs in the world."  While this at least seems to agree with 
the generally accepted idea of what intentionality is, Searle's 
interpretation of it sets it apart.
	When Searle says something has intentionality, he means that 
it has an intrinsic meaning of some sort.  Some have tried to take this to 
mean that the symbol and what it's referring to are causally connected in 
some way, but Searle's reply clearly rules that out:  "no matter what 
outside causal impacts there are on the formal tokens, these are not by 
themselves sufficient to give the tokens any intentional content.  No 
matter what caused the tokens, the agent still doesn't understand 
Chinese"(The Behavioral and Brain Sciences, p. 454)
	By this, and other comments, intentionality seems to be some 
method of letting the brain know, without referring to other symbols, 
what the symbols coming in mean.  It is sort of like a dictionary entry for 
each symbol coming in, but instead of defining it in words (themselves 
symbols), it simply carries the meaning:  "for intentionality there must be 
intentional content in addition to the formal symbols."(The Behavioral
and Brain Sciences, p. 454)
	Of course, how do you communicate meaning without referring 
to symbols?  This is the very thing that Searle supposes biological material 
is capable of, and that's why we can't make a program that involves, in 
and of itself, understanding.  It depends upon the material to contain the 
meaning.  Let's look at a description of how we see things according to 
Searle:

"From where I am seated, I can see a tree.  Light reflected from the tree in 
the form of photons strikes my optical apparatus.  This sets up a series of 
sequences of neural firings.  Some of these neurons in the visual cortex are 
in fact remarkably specialized to respond to certain sorts of visual stimuli.  
When the whole set of sequences occurs, it causes a visual experience, and 
the visual experience has intentionality.  It is a conscious mental event 
with an intentional content."(The Behavioral and Brain Sciences, p. 452)

The description follows an expected sequence at first, but then, after the 
neuron firings, everything is suddenly translated into something with 
intentionality.  Notice that it is no longer represented by symbols (i.e. 
the firing of the neurons) but is instead, by the great power of organic 
material, now carried around the brain as an image.
	His whole ridiculous idea of intentionality, and the idea that 
somehow the meaning must be passed on to the agent carrying out the 
program, hinges on the idea that the understanding is done by the agent.  
What Searle is trying to do is set up a Chinese box wherein the meaning of 
the Chinese symbols is translated into English and the person inside the 
box does the understanding.  That, however, only leads to breaking it down 
further and further.  Is there some sub-atomic particle that does the 
understanding for us?
---------------------------------------------------------------

Hope you liked it, or maybe hated it.  Let's hear some opinions!

			<<<<< Chris >>>>>

larryc@poe.jpl.nasa.gov (Larry Carroll) (11/09/90)

In article <deleted> cw2k+@andrew.cmu.edu (Christopher L. Welles) writes:
>Searle argues against:
>
>"The distinction between the program and its realization in hardware 
>seems to be parallel to the distinction between the level of mental 
>operations and the level of brain operations....  But the equation "mind is 
>to brain as program is to hardware" breaks down.... the distinction 
>between program and realization has the consequence that the same 
>program could have all sorts of crazy realizations....  Weizenbaum, for 
>example, shows in detail how to construct a computer using a roll of toilet 
>paper and a pile of small stones.  Similarly, the Chinese... program
>can be programmed into a sequence of water pipes, a set of wind machines, or
>a monolingual English speaker, none of which ...

It would be interesting to ask Searle if he thinks that an alien creature
with a base in an entirely different biology could be intelligent or have
consciousness.  Say, fluorine-silicon rather than hydrogen-carbon, using
thermal energy reactions that mimic our form of oxidation.  Or something even
more radical: plasma life-forms living in a star's chromosphere, using fusion
reactions rather than chemical reactions.

			Larry Carroll
			"Takes-us" (correct pronunciation of Texas)
			Dancin' Fool

sarima@tdatirv.UUCP (Stanley Friesen) (11/10/90)

In article <kbCADGe00VsL5230Uu@andrew.cmu.edu> cw2k+@andrew.cmu.edu (Christopher L. Welles) writes:
>I've just finished a paper on Searle's "Minds, Brains, and Programs." 
>Just thought I'd post it to see what other people thought.  Anyone agree
>with me??
...
>	An analogy to this would be a neuron in the brain.  It merely 
>reacts according to specified rules, a program.  Even if it were capable, it 
>would have no understanding of what it's actions represented to the whole 
>system, i.e. the mind.  Pulses are just received and sent according to a
>set of rules. ...

This is very critical.  There is *no* part of the brain which understands;
only the whole brain, operating normally, can be said to understand.
Certainly any given part of the brain, when examined in detail, performs
a rather rigid, relatively simple data transformation.  This has been true
for *every* brain region so far studied.  The cerebellum is a particularly
good example.  Any given region of the cerebellum performs an *identical*
transformation, yet due to its external wiring it may perform any of several
functions, including balance, muscle tone, motion smoothing, and perhaps
others.  Other sections of the brain are similar in that they are data
transformers that do not 'care' where the data came from or where it is going.
[Of course this whole discussion applies to individual neurons in spades]
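
Just to make the point concrete, here is a toy sketch in C.  It has nothing
to do with real cerebellar circuitry -- the transformation is a plain moving
average and the numbers are invented -- but it shows the idea: one rigid,
rule-bound transform, applied unchanged to whatever stream it happens to be
wired to, and only the external hookup gives its output a 'function'.

    /* One fixed transformation, reused on different "wiring".  The
     * function never knows whether it is smoothing "motion" data or
     * "balance" data; only the surrounding hookup gives the result a
     * role.  (Toy example; the inputs are made up.) */
    #include <stdio.h>

    #define N 5

    /* the same rigid rule everywhere: a simple moving average */
    static double transform(const double *in, int i)
    {
        double prev = (i > 0) ? in[i - 1] : in[i];
        double next = (i < N - 1) ? in[i + 1] : in[i];
        return (prev + in[i] + next) / 3.0;
    }

    int main(void)
    {
        double motion[N]  = { 0.0, 1.0, 4.0, 9.0, 16.0 };
        double balance[N] = { 0.2, 0.1, 0.3, 0.2, 0.1 };
        int i;

        for (i = 0; i < N; i++)
            printf("smoothed motion[%d]  = %.2f\n", i, transform(motion, i));
        for (i = 0; i < N; i++)
            printf("smoothed balance[%d] = %.2f\n", i, transform(balance, i));
        return 0;
    }

The two loops get equally 'smoothed' outputs; the transform itself never
knows or cares which is which.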

...
>	By this, and other comments, intentionality seems to be some 
>method of letting the brain know, without referring to other symbols, 
>what the symbols coming in mean.  Sort of like a dictionary entry for each 
>symbol coming in, but instead of defining it in words( themselves symbols 
>),  it simply carries the meaning:  "for intentionality there must be 
>intentional content in addition to the formal symbols."(The Behavioral
>and Brain Sciences, p. 454)
>	Of course, how do you communicate meaning without referring 
>to symbols?  This is the very thing that Searle supposes biological material 
>is capable of, and that's why we can't make a program that involves, in 
>and of itself, understanding.  It depends upon the material to contain the 
>meaning.  Lets look at a description of how we see things according to 
>Searle:

Oh wonderful :-)  And just what does he see in brains that provides this
non-referential semantics?  How do patterns of neuronal firing differ from
any other symbolic system?  [A pattern of neuronal firings is as much a symbol
as anything else; it has no particular relationship to any meaning it might
have]

As far as I can see Searle's basic problem is a lack of understanding of
neurobiology.  He seems to have no concept of the current state of neural
science, especially with regard to how a brain processes data.

>"From where I am seated, I can see a tree.  Light reflected from the tree in 
>the form of photons strikes my optical apparatus.  This sets up a series of 
>sequences of neural firings.  Some of these neurons in the visual cortex are 
>in fact remarkably specialized to respond to certain sorts of visual stimuli.  
>When the whole set of sequences occurs, it causes a visual experience, and 
>the visual experience has intentionality.  It is a conscious mental event 
>with an intentional content."(The Behavioral and Brain Sciences, p. 452)
 
>It follows an expected sequence at first, but then, after the neuron
>firings, is 
>suddenly translated into something with intentionality.  Notice that, it is 
>no longer represented by symbols(i.e.  the firing of the neurons) but is 
>instead, by the great power of organic material, now carried around the 
>brain as an image.

Oh foo!!  This is starting to get totally silly.  I have a hard time believing
anyone, even Searle, could really believe this!  What does he think a 'visual
experience' is if it is not a pattern of neuronal firing??  Jeez!
[So some of the earlier neurons in the chain have a very specific response
mode; most of these are essentially the preprocessor that converts light
into an internal mental model (i.e. into symbolic form).  It is not until the
more complex cells of the tertiary visual areas are reached that we can talk
about symbols in any meaningful sense.  But even there everything is still
a pattern of neuronal firing.]

And what in blue blazes is an 'intentional content'?  If he just means that
the symbols have a behavioral significance, so what?  If he means something
else, what is its physical basis in brain activity?
[And if it *has* no physical basis, he is just reinventing dualism]

>	His whole ridiculous idea of intentionality, and the idea that 
>somehow the meaning must be passed on to that agent carrying out the 
>program hinges on the idea that the understanding is done by the agent.  

And it thus becomes a problem of infinite regress - where is the 'agent'
in the human brain?  How does a brain qualify as an 'agent'?

>...  That, however, only leads to breaking it down 
>further and further.  Is there sum sub-atomic particle that does the 
>understanding for us.

Quite, and even then, who's to say we have reached the end???


As you can see I basically agree with you.  And I am coming from a background
in the biological sciences.  Searle is a biological idiot.

My main problem with AI as a field is that I think most research is approaching
the problem bass-ackwards.  Too much time is being spent on finding 'reliable'
reasoning paradigms for artificially constrained, highly unnatural domains.
This is *not* how natural intelligence works.
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)

fraser@bilby.cs.uwa.oz.au (Fraser Wilson) (11/10/90)

In <10297@jpl-devvax.JPL.NASA.GOV> larryc@poe.jpl.nasa.gov (Larry Carroll) writes:


>It would be interesting to ask Searle if he thinks that an alien creature
>with a base in an entirely different biology could be intelligent or have
>consciousness.  Say, fluorine-silicon rather than hydrogen-carbon, using
>thermal energy reactions that mimic our form of oxidation.  Or something even
>more radical: plasma life-forms living in a stars chromosphere, using fusion
>reactions rather than chemical reactions.

Searle's argument had nothing to do with consciousness residing in
alternative biologies.  What he was essentially saying was that a
_formal system_ (which is what a computer is) can never be conscious.
I think the systems reply essentially chucks this one right out the
window where it belongs :-).

>			Larry Carroll
>			"Takes-us" (correct pronunciation of Texas)
>			Dancin' Fool

	Fraser Wilson.
	(fraser@bilby.cs.uwa.oz)

miodeen@buddha.ncc.umn.edu (Mike Odeen) (11/11/90)

I've always felt that Searle's argument falls apart when you deny that anything
like "meaning" or "intentionality" actually exists.

Meaning results from an interaction between symbols.  Look at a dictionary.
It won't give you the meaning of any word, but it will provide you with other
words, setting up an association between them.  No one element carries any
meaning, just an association value with other elements.

Something like what happens in the dictionary could easily be instantiated
in some kind of network model of the brain.  "Meaning," if it exists anywhere,
would just be a complex interaction between symbols in the brain.
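
To make the dictionary analogy concrete, here is a toy association table in
C.  The words and the links between them are invented for the sake of
illustration; the only point is that every "definition" bottoms out in more
symbols, never in some free-standing "meaning".

    /* A toy lexicon in the spirit of the dictionary analogy: no entry
     * "carries" a meaning, each symbol just points at other symbols.
     * (Words and links are made up.) */
    #include <stdio.h>
    #include <string.h>

    struct entry {
        const char *word;
        const char *assoc[3];   /* associated symbols, nothing more */
    };

    static struct entry lexicon[] = {
        { "water", { "liquid", "drink", "rain" } },
        { "rain",  { "water", "cloud", "wet" } },
        { "drink", { "water", "swallow", "thirst" } },
    };

    static void define(const char *word)
    {
        size_t i;
        int j;
        for (i = 0; i < sizeof lexicon / sizeof lexicon[0]; i++) {
            if (strcmp(lexicon[i].word, word) == 0) {
                printf("%s ->", word);
                for (j = 0; j < 3; j++)
                    printf(" %s", lexicon[i].assoc[j]);
                printf("\n");
                return;
            }
        }
        printf("%s -> (no entry)\n", word);
    }

    int main(void)
    {
        define("water");    /* every "definition" is just more symbols */
        define("rain");
        return 0;
    }

Ask it for "water" and all you get back is "liquid drink rain" -- symbols
all the way down.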



-- 
Michael J. Odeen
miodeen@buddha.ncc.umn.edu

smoliar@vaxa.isi.edu (Stephen Smoliar) (11/12/90)

In article <10297@jpl-devvax.JPL.NASA.GOV> larryc@poe.jpl.nasa.gov (Larry
Carroll) writes:
>
>It would be interesting to ask Searle if he thinks that an alien creature
>with a base in an entirely different biology could be intelligent or have
>consciousness.

I seem to recall seeing him address this point in one of his many publications
on this subject.  As I recall, he takes the rather dubious stand that ANY
chemical basis for life may at least potentially possess his precious
intentionality.  This means that he is essentially saying that chemical
processes may ultimately give rise to intentionality while computational
ones cannot.  This does not strike me as particularly sound, but then I
am not sure that I can claim to really understand just what Searle has
in mind when he brings intentionality into the argument.  (Perhaps he
just wishes to use it as a cross to ward off those vampires who actually
write AI programs!)

=========================================================================

USPS:	Stephen Smoliar
	5000 Centinela Avenue  #129
	Los Angeles, California  90066

Internet:  smoliar@vaxa.isi.edu

"It's only words . . . unless they're true."--David Mamet

shafto@ils.nwu.edu (Eric Shafto) (11/12/90)

In article <57@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
> This is very critical.  There is *no* part of the brain which understands,
> only the whole brain, operating normally can be said to understand.


I think this still leaves you vulnerable to Searle's argument.  
The critical fact is this:  Not only does no part of the brain 
understand, the brain doesn't understand.  The system of which 
the brain is a part (for which the brain is the substrate?) 
understands.  Let's call that system the mind.  

In other words, *I* understand, but my brain does not.  Am I my 
brain?  I think not.  The system in the room understands, but the 
human does not.  Where's the contradiction?

Searle's genius lies in making things seem overwhelmingly 
counterintuitive.  He never actually proves or disproves anything.

Regards,

Eric Shafto
Institute for the Learning Sciences
Northwestern University

raphael@fx.com (Glen Raphael) (11/13/90)

cw2k+@andrew.cmu.edu (Christopher L. Welles) writes:

>I've just finished a paper on Searle's "Minds, Brains, and Programs." 
>Just thought I'd post it to see what other people thought.  Anyone agree
>with me??

I agree with you that the Chinese room is a lot of hooey; I wrote a 20-page
paper on it myself once...

>The responses to his paper do bring 
>up some things worth considering, but this occurs only because Searle's 
>statements were misinterpreted.

Not at all; there just happen to be a LOT of ways in which to attack the
Chinese Room concept, most of which are valid...

>His whole paper represents a lack of  understanding of how 
>formal systems work, what "intentionality" is, and the difference between 
>hardware and software.

I don't think you should accuse Searle of misunderstanding intentionality
unless you are willing to attack *his* concept of intentionality at length;
he wrote a book called "Intentionality" which is a standard text in Speech
Theory.

>),  it simply carries the meaning:  "for intentionality there must be 
>intentional content in addition to the formal symbols."(The Behavioral
>and Brain Sciences, p. 454)

Again, for a full formal attack from this angle you'll have to look at
how Searle defines "intentional content".

I found it fairly easy to attack the Chinese room by reductio ad absurdum
based on a few thought experiments. For further ideas in this vein
you might read the answer to Searle in "The Mind's I" by Douglas Hofstadter
and Daniel Dennett. The Chinese Room is basically an "argument from intuition"
rather than a proof, and most of us on the net probably have a contrary
intuition. Searle gets a lot of mileage out of a conviction that basically
boils down to "If computers could think, then systems made out of rocks and
toilet paper, or even *beer cans* could think, and that's PREPOSTEROUS!"
The rest is handwaving.
(I'll have to take another look at my paper one of these days...)

Incidentally, Searle is a *fantastic* lecturer; he's just out of his depth
when it comes to AI.

>			<<<<< Chris >>>>>

Glen Raphael
raphael@fx.com

cw2k+@andrew.cmu.edu (Christopher L. Welles) (11/13/90)

In <10297@jpl-devvax.JPL.NASA.GOV> larryc@poe.jpl.nasa.gov (Larry
Carroll) writes:
>It would be interesting to ask Searle if he thinks that an alien creature
>with a base in an entirely different biology could be intelligent or have
>consciousness.  Say, fluorine-silicon rather than hydrogen-carbon, using
>thermal energy reactions that mimic our form of oxidation.  Or something even
>more radical: plasma life-forms living in a stars chromosphere, using fusion
>reactions rather than chemical reactions.


In<fraser.658229808@bilby>, fraser@bilby.cs.uwa.oz.au (Fraser Wilson) writes:
>What he was essentially saying was that a
>_formal system_ (which is what a computer is) can never be conscious.
>I think the systems reply essentially chucks this one right out the
>window where it belongs :-).

Just thought I should point out.  Searle did make it clear that "formal
systems", computers,  could be conscious.  He emphasized the fact that
humans were such systems.

His main thesis went something to the effect of:  the mere
instantiation of a program could not be, in itself, sufficient for
consciousness.  In this, he is saying that consciousness somehow depends
upon the "stuff" that the computer is made out of.

In fact, he did claim that Martians, for example, might have
consciousness as well, but it would depend upon the "stuff" they were
made out of.  Somehow, conscious systems, unlike other formal systems,
could not be represented in any symbolic structure.  The material had to
have this special property of intentionality, something, he theorized,
that maybe only human brains had.

Throughout Searle's Chinese room paper, it is easy to misunderstand what
Searle is saying.  The fact that what he's saying really doesn't
make any sense causes one to read into it something completely
different.  A major problem with the commentaries is that almost no one
really understood all that Searle was saying.  Many of the arguments
there were designed to deal with the point they thought Searle was
trying to make.  What he actually said was just too ridiculous.  Only
after reading his response to several of the commentaries can one really
understand what Searle is saying.

When I first read his Chinese room paper itself, I read it three times
just to figure out what he was saying about intentionality.  Even then
it wasn't clear.  Only after reading some commentary on it, as
well as Searle's response to the commentary, did I have any chance of
figuring out what he was really saying.  I usually assume that the
writer is a rational human being.  It looks as if that just doesn't
apply for Searle.

I suppose that last isn't really true; it's just that Searle had
absolutely no concept of how a computer, or a computer program, actually
worked.  It's obvious that Searle was arguing about a subject in which
he had absolutely no background.  Anyone dealing either with computers
or (is it operational psychology?) would have immediately seen that such
an idea was wrong.

				<<<<< Chris >>>>>

cam@aipna.ed.ac.uk (Chris Malcolm) (11/14/90)

In article <10297@jpl-devvax.JPL.NASA.GOV> larryc@poe.jpl.nasa.gov (Larry Carroll) writes:

>It would be interesting to ask Searle if he thinks that an alien creature
>with a base in an entirely different biology could be intelligent or have
>consciousness.

He answered that in 1980 in his replies to the criticism of his original
BBS target article, when he said:

  We need to keep reminding ourselves over and over: Cognition is a biological
  phenomenon - as biological as digestion, photosynthesis, lactation, or the
  secretion of bile.  We might do any of one of these in an artificial medium
  removed from normal biochemistry, but we couldn't do any one of them
  by pure syntax.

Somewhere else (I can't remember where) he quite explicitly said that a computer
couldn't be made to think, but a machine could -- after all, what were we
but biological machines?

So he certainly doesn't deny the possibility of cognition to robots or
Martians, just automated syntax shufflers.

I don't mean to attack Larry Carroll here, but there are too many
posters in this thread who haven't read much (or any) Searle, quite
apart from the young twits who say "I haven't read any of ... but it
seems to me ... etc."

If you want to know what Searle thought don't read this newsgroup! Read
Searle! He's not as stupid as most of those who disagree with him :-)
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

cam@aipna.ed.ac.uk (Chris Malcolm) (11/14/90)

In article <57@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:

>My main problem with AI as a field is that I think most research is
>approaching the problem bass-ackwards.  Too much time is being spent on
>finding 'reliable' reasoning paradigms for artificially contrained,
>highly unnatural domains.  This is *not* how natural intelligence works.

I agree. Most AI research has been doing this. But not all. Moravec
suggested in "Locomotion, Vision, and Intelligence", in Robotics
Research 1, eds Brady and Paul, MIT Press, 1984 that "... the most fruitful
direction for this track [the development of artificial intelligence] is
along the oxtrail forged by natural evolution before stumbling on us ...
developing a responsive mobile entity ...". And there are some (e.g.
Minsky) who were never much taken with the logical approach.

Six years on there's quite an entertainingly active research field. Not
to mention the confectionists' YAWNs (Yet Another Wonderful Network) and
sundry other PDP approaches of various granularities. The big money and
the textbooks are still pushing Leibniz's dream of finding the calculus
of thought, but there's also a lot of diverse research programmes which
say, like you, that this "is *not* how natural intelligence works".
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

blenko-tom@cs.yale.edu (Tom Blenko) (11/15/90)

In article <3488@aipna.ed.ac.uk> cam@aipna.ed.ac.uk (Chris Malcolm) writes:
|
|If you want to know what Searle thought don't read this newsgroup! Read
|Searle! He's not as stupid as most of those who disagree with him :-)

Well said. I have for some time regarded Searle's original article as an
intelligence test for the AI community (it is even more entertaining to
suppose that that was its intent).  Not because there's any a priori
reason it should serve as such; it just seems to work pretty well
that way.

	Tom

me@csri.toronto.edu (Daniel R. Simon) (11/15/90)

Followers of the current debate in this newsgroup on Searle's "Chinese room" 
argument, who are familiar, through my last posting here, with the Laboratory 
for Artificial Appearance, may be interested in an email message I received 
from a friend and contact there not long ago.

[For those who are unfamiliar with the LAA, it was originally the graphics
group within the computer science department of a university that shall 
remain nameless.  It owes its inspiration (and its new name) to a revolutionary
paper, published some time ago in an obscure graphics journal, which 
introduced the concept of "artificial appearance" by proposing the following 
test:  a room is fitted with two high-resolution CRT monitors, one of which is 
connected to a camera pointed at a real person in another room, and the other 
of which receives its input from a computer graphics generation program.  If a 
viewer is unable to tell, by scrutinizing the two screens, which one displays 
the image of a real human being, the paper's author argued, then there is no 
reason not to assume that the computer has artificially generated human 
appearance.  This so-called "blurring test" has revolutionized thinking about 
computer graphics, prompting heated philosophical debates about the nature of 
human appearance, and radically altering prevailing opinions on the best 
approaches to its artificial simulation.]

Here is the message I received:

	Hi, Dan!  I hope you enjoyed your visit with us, and that your next 
	visit will be soon.  Several people here at LAA have remarked to me 
	that they found their discussions with you to be most fruitful.

	I thought I'd bring you up to date on recent developments since you 
	left.  At that time, the mood here was (as I expect you perceived) 
	fairly confident among us; we felt (and still feel) that while we have 
	only begun to make a dent in the huge problems involved in creating 
	artificial human appearance, we have at least been making steady 
	progress towards that goal.  Imagine our surprise when a well-respected
	local professor of philosophy here began, just a few days later, 
	distributing advance copies of a soon-to-be-published article he wrote,
	arguing that artificial human appearance is not merely difficult to 
	achieve, but actually impossible in principle!

	Our surprise quickly turned to scorn, however, once we were able to 
	study his arguments in detail.  His main premise is that human 
	appearance is inherently "continuous", and cannot be simulated by any 
	"discrete" representation, such as a computer would generate.  He 
	illustrates his thesis as follows:  suppose that he were to possess the
	entire text of such a representation (in, say hexadecimal code) for the
	appearance of a 21-year-old Chinese man in traditional wedding garb.  
	Suppose further that a particular video monitor receives its input not 
	from a camera or digital input, but from a device which turns short, 
	sharp tapping sounds into numbers (the way telephone exchanges do when 
	old-style "pulse" telephones are dialed).  Then, merely by tapping on 
	the table (or on himself, if necessary), he can, in principle, conjure 
	up on this monitor the image of said 21-year-old Chinese man.  "Does 
	that mean", the author asks provocatively, "that I look Chinese?"

	I expect I need hardly point out to you the obvious flaw in this 
	"Chinese groom" argument.  It is clearly not the man himself, but the 
	whole array of person, text, and monitor that realizes the appearance
	of the Chinese man.  The author himself even recognizes the possibility
	of this refutation, admitting that the presence of the text may make 
	his claim of Chinese appearance suspect; he replies with a revised 
	scenario in which he has memorized the entire text.  However, he never
	follows the objection to its logical conclusion, recognizing that the 
	whole combination of "hardware" and "software" can be said to be 
	forming an artificial human appearance.  In retrospect, I find it 
	somewhat puzzling that his paper has garnered so much attention, given 
	the weakness of its arguments.  I'm interested to hear your opinion, 
	of course, but I'd be surprised if you lent his propositions any more 
	credence than we have.

	Anyway, keep in touch, and let me know when you'll be in the area 
	again.

						D----.


(I include here my reply to him:)

Well, I certainly agree with you that the philosophy professor's argument seems
weak--after all, surely there is some level (molecular?  subatomic?) at which 
the real appearance of a real person is either discrete or negligibly different
from a discrete representation of it.  On the other hand, I have some nagging
questions about your willingness to localize appearance in what seems to me to 
be a rather arbitrary way.  If a person, a text, an electronic apparatus, and a
monitor can be said to have a human appearance, then what about the person, the 
text, the apparatus, the monitor, and the stand on which the monitor rests?  Or
all of these things, plus me?  Do two people look like one person?  Does a 
volleyball team?  The earth, including all its inhabitants? I get the feeling 
that if I were compelled to localize human appearance (at least as defined by 
the "blurring test"), I would do so in an altogether different way.  But I 
suppose I'd best defer to the experts on that score.  Anyway, best of luck with
your research, and I hope to visit again some time soon.


"There *is* confusion worse than death"		Daniel R. Simon
			     -Tennyson		(me@theory.toronto.edu)

fostel@eos.ncsu.edu (Gary Fostel) (11/17/90)

   I always wondered why people spent so much time arguing about the Turing
   Test when it is so poorly defined.  (Or perhaps that's why it can go on and
   on).  Consider a modification for a chess playing version of the Turing
   test.  (We could use Chinese Chess if that would help :-)

   My wife would probably be greatly impressed by almost any computer chess
   playing program and would be forced to conclude that BY HER STANDARDS,
   the computer was passing any reasonable test she could construct of the
   machine's ability to play chess *as* a human would.  But of course, she
   knows very little of chess or algorithms for chess and implementations of 
   same on computers.  I know a bit more about each and I would be able to
   detect some properties that might give it away, e.g. very constant
   response time, or pathological weaknesses in certain features of its
   play, e.g. with non-book openings or cleverly arranged combinations of
   various lengths to detect its search depth, etc.  Yes, I know there are
   chess programs that would pass these tests, but there are people more
   expert than me who would be able to construct more useful tests for 
   these "smarter" programs.

   The only well defined "test", whether it be for chess playing or more
   generally, in the Turing test, for intelligence, would be that the machine
   could fool EVERYBODY who was observing.  This is a very strong test,
   much stronger than the usual Turing test I believe, and I'm not sure
   many REAL people could convince everyone else that they were intelligent
   humans.  Perhaps none could! 

   So ... if I try to make the Turing test into a definite, well defined
   test, it is not clear that ANYTHING could ever pass it.

   There's no doubt that the Turing test is great sport, but when
   you come right down to it, it is not very useful.  If I follow this one
   step, it means that the Chinese Room device is also not very useful since
   it exists merely to weaken another device (the Turing Test) which 
   can be weakened far more without recourse to the Chinese Room -- which
   muddies the situation rather than clarifying it.

----GaryFostel----                     Department of Computer Science
                                       North Carolina State University
 

tj@pons.cis.ohio-state.edu (Todd R Johnson) (11/17/90)

In article <3488@aipna.ed.ac.uk> cam@aipna.ed.ac.uk (Chris Malcolm) writes:
>>If you want to know what Searle thought don't read this newsgroup! Read
>>Searle! He's not as stupid as most of those who disagree with him :-)

	Agreed.  After disagreeing with Searle I went back and re-read
(and re-read) his Scientific American article.  I still don't agree
with (or perhaps understand) the point he is trying to make with the
Chinese Room argument.  However, his main point seems to be that any
intelligent artifact with the capabilities of a human must be built in
hardware that is at least as powerful as the human brain.  In other
words, if we are to produce intelligent artifacts we MUST be willing
to accept the fact that we need to create special software AND
hardware.  This seems quite reasonable.  In fact, I don't see how
anyone can disagree with it.

	---Todd



--
Todd R. Johnson
tj@cis.ohio-state.edu
Laboratory for AI Research
The Ohio State University

smoliar@vaxa.isi.edu (Stephen Smoliar) (11/18/90)

In article <3488@aipna.ed.ac.uk> cam@aipna.ed.ac.uk (Chris Malcolm) writes:
>
>If you want to know what Searle thought don't read this newsgroup! Read
>Searle! He's not as stupid as most of those who disagree with him :-)

This is an important point, but there is also a major CAVEAT.  Listening to
Searle speak should not be regarded as an adequate substitute for reading his
publications.  When I heard him at UCLA, he indulged in a nasty habit of
playing to his audience, putting far more emphasis on making his phrases
dance than on conveying the content of his message.  The best way to deal
with Searle is on the printed page (and in the sort of relaxed mood which
is likely to open the mind).

I also feel that it is about time we apply this new rule to Turing as well.  I
cannot help but wonder how many participants in this debate (mind you, that
includes Searle) have actually READ "Computing Machinery and Intelligence,"
the source of that now notorious test which continues to inspire such
controversy.  What is particularly important is that, in his first paragraph,
Turing recognizes that "think" is probably too highly-charged a word to serve
as a basis for discussion.  The whole purpose of what is now known as the
Turing test was to recognize that "Can machines think?" was too vague a
question and to replace it with one which could be "expressed in relatively
unambiguous words."  If Turing is now looking down on us from Heaven, he is
probably aghast at all the ways in which his simple intellectual exercise has
been abused.

=========================================================================

USPS:	Stephen Smoliar
	5000 Centinela Avenue  #129
	Los Angeles, California  90066

Internet:  smoliar@vaxa.isi.edu

"It's only words . . . unless they're true."--David Mamet

cam@aipna.ed.ac.uk (Chris Malcolm) (11/19/90)

In article <8bDqHlK00VsLBAOkxp@andrew.cmu.edu> cw2k+@andrew.cmu.edu (Christopher L. Welles) writes:

>Just thought I should point out.  Searle did make it clear that "formal
>systems", computers,  could be conscious.  He emphasized the fact that
>humans were such systems.

No. He said that machines could understand, and that we were no more than
biological machines, but denied that a computer running any program
would be able to do this, on the grounds that in this last case ALL that
is going on is syntactical, and you can't get semantics into a syntactic
process by any elaboration of syntactic manipulation.

>His main thesis went something to the effect of:  That there mere
>instantiation of a program could not be in itself, sufficient for for
>consciousness.  In this, he is saying that consciousness somehow depends
>upon the "stuff" that the computer is made out of.

Lots of people unfamiliar with philosophy of mind imagine that this is
what Searle's "causal powers" argument comes down to -- the particular
stuff. Well, it is true that that is one possibility, but Searle, and
philosophers in general, do not mean "causal powers" in this context to
be taken so simplistically. "Causal powers" could equally well refer to
the kind of elaborate symbol grounding mechanisms espoused by Stevan
Harnad, "symbol grounding" being another shorthand phrase (but a
slightly more transparent one) for the abracadabra (or "causal powers")
which permits semantics to perfuse the otherwise purely syntactic.

>A major problem with the commentaries is that almost no one
>really understood all that Searle was saying.

There's a lot of it about....

>What he actually said was just too ridiculous.

Searle talks about computers in language which to computer scientists is
alien, often naive, and sometimes wrong. Nevertheless, his central
point, that you can't get semantics out of syntax, is a very important
theoretical point for AI and cognitive science, and one which very few
of those who like to laugh at his computational solecisms manage to
grasp, let alone address.
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

thornley@cs.umn.edu (David H. Thornley) (11/20/90)

In article <1990Nov16.171041.14144@ncsuvx.ncsu.edu> fostel@eos.ncsu.edu (Gary Fostel) writes:
>
>   I always wondered why people spent so much time argueing about the Turing
>   Test when it is so poorly defined.  (Or perhaps that's why it can go on and
>   on).  Consider a modification for a chess playing version of the Turing
>   test.  (We could use Chinese Chess if that would help :-)
>
It is reasonably well-defined.  Read Turing's paper.  (Please, everyone,
read and ponder one or both of Searle's articles, and Turing's paper,
before jumping in.  Searle gives me the impression that he is arguing
against something of a straw Turing test.)

>   The only well defined "test", whether it be for chess playing or more
>   generally, in the Turing test, for intelligence, would be that the machine
>   could fool EVERYBODY who was observing.  This is a very strong test,
>   much stronger than the usual Turing test I believe, and I'm not sure
>   many REAL people could convince everyone else that they were intelligent
>   humans.  Perhaps none could! 

The Turing test is not to convince the observer of intelligence, but to
be *indistinguishable* from an intelligent adult human.  If you put a
standard chess program on line, I can distinguish it immediately by asking
about the situation in the Middle East.  It seems to me that, if you put
two intelligent adults on the lines, instead of one human and one computer,
that one of the humans will be identified as the human, and will therefore
have passed the test.

DHT

thornley@cs.umn.edu (David H. Thornley) (11/20/90)

In article <86001@tut.cis.ohio-state.edu> Todd R Johnson <tj@cis.ohio-state.edu> writes:
>....  In other
>words, if we are to produce intelligent artifacts we MUST be willing
>to accept the fact that we need to create special software AND
>hardware.  This seems quite reasonable.  In fact, I don't see how
>anyone can disagree with it.
>
>	---Todd
>
If you mean that we need adequate hardware to create intelligence, yes,
but nobody has ever claimed that a lawn chair can be made to think with
the appropriate algorithms.  (OK, I'm probably wrong, but no serious AI
researcher has while speaking professionally :-)

If you mean that we necessarily need specific hardware not normally
found in modern computers, it is easy to disagree with.  It seems
very likely to me that a powerful enough modern computer, with the
adequate software, could "think," in a useful sense of that word.

DHT

cw2k+@andrew.cmu.edu (Christopher L. Welles) (11/20/90)

In <3525@aipna.ed.ac.uk> cam@aipna.ed.ac.uk (Chris Malcolm) writes:

>In article <8bDqHlK00VsLBAOkxp@andrew.cmu.edu> cw2k+@andrew.cmu.edu
>(Christopher L. Welles) writes:
>
>>Just thought I should point out.  Searle did make it clear that "formal
>>systems", computers,  could be conscious.  He emphasized the fact that
>>humans were such systems.
>
>No. He said that machines could understand, that we were no more than
>biological machines, but denied that a computer running any program
>would be able to do this, on the grounds that in this last case ALL that
>is going on is syntactical, and you can't get semantics into a syntactic
>process by whatever elaborations of syntactic manipulation.

In answer to this, let me pull a quote from Searle's original "The
Behavioral and Brain Sciences" article:

     "Ok, but could a digital computer think?"
     If by "digital computer" we mean anything at all that has a level
of description where it can correctly be described as the instantiation
of a computer program, then again the answer is, of course, yes, since
we are the instantiations of any number of computer programs, and we can
think.
     "But could something think, understand, and so on solely(in
italics) in virtue of being a computer with a right sort of program? 
Could instantiating a program, the right program of course, by itself be
a sufficient condition of understanding?"

It is possible to interpret your reply in such a way that it does not
conflict with this.  However, in doing so, it clearly would not conflict
with what I was saying.

>>His main thesis went something to the effect of:  That there mere
>>instantiation of a program could not be in itself, sufficient for for
>>consciousness.  In this, he is saying that consciousness somehow depends
>>upon the "stuff" that the computer is made out of.
>
>Lots of people unfamiliar with philosophy of mind imagine that this is
>what Searle's "causal powers" arguments comes down to -- the particular
>stuff. Well, it is true that that is one possibility, but Searle, and
>philosophers in general, do not mean "causal powers" in this context to
>be taken so simplistically. "Causal powers" could equally well refer to
>the kind of elaborate symbol grounding mechanisms espoused by Stevan
>Harnad, "symbol grounding" being another short hand phrase (but a
>slightly more transparent one) for the abracadabra (or "causal powers")
>which permits semantics to perfuse the otherwise purely syntactic.

Let me point out that I did not come to this conclusion immediately!  By
means of several arguments, the Chinese room test itself can be shown to be
invalid and worthless.  (One of the best I've seen of these is in
<27320@cs.yale.edu> by mcdermott-drew@cs.yale.edu (Drew McDermott).)
However, the issue of intentionality does not need to rest on this in order
to have merit.  If interpreted as "symbol grounding" it would be a valid
point of discussion.  This is something I had taken into account.  But by
virtue of the fact that it "could equally well refer to" something else, it
is ambiguous.  Only later is it shown that this is not what he means.

The original BBS article consisted of Searle's original paper,
commentary on that paper, and Searle's response to the commentary.  In
that commentary, the "symbol grounding" idea was most closely
represented by Fodor's response.  His reasoning was as follows:  It
is true that merely instantiating a program is not sufficient for
consciousness.  It must have some sort of causal connection to objects
in the real world.  This, however, does not mean that those causal
connections cannot be built.  Thus, it does not depend upon the special
biological "stuff" the brain is made out of.

Searle's response to this clearly shows that he is not referring to
"symbol grounding" when he refers to intentionality:

     "Fodor agrees with my central thesis that instantiating a program
is not a sufficient condition of intentionality.  He thinks, however,
that if we got the right causal links between the formal symbols and
things in the world that would be sufficient.  Now there is an obvious
objection to this variant of the robot reply that I have made several
times:  the same thought experiment as before applies to this case. 
That is, no matter what outside causal impacts there are on the formal
tokens, these are not by themselves sufficient to give the tokens any
intentional content.  No matter what caused the tokens, the agent still
doesn't understand Chinese.  Let the egg foo yung symbol be causally
connected to egg foo yung in any way you like, that connection by itself
will never enable that agent to interpret the symbol as meaning egg foo
yung."

In this, Searle returns to the same mistake that he made with the
Chinese room itself, and gives up the only valid point his paper could
have possibly made.

				<<<<< Chris >>>>>

fostel@eos.ncsu.edu (Gary Fostel) (11/22/90)

A few posts back, I made the observation that the Turing test was not 
a well defined test at all because it does not pin down whose judgement
is to be used in deciding if the test has been passed.  No doubt Turing
was expecting it to be conducted by himself, but with him gone, I wonder
who will decide?  David Thornley, at the University of Minnesota, replied:
  
   It is reasonably well-defined.  Read Turing's paper.  (Please, everyone,
   read and ponder both one or both of Searle's articles, and Turing's paper,
   before jumping in.  Searle gives me the impression that he is arguing
   against something of a straw Turing test.)

I have read Turing's original description, and also some of his other musings
on the use of similar tests to see if men and women could tell each other
apart by linguistic means.  I spent quite some time discussing it with 
AI people and technical philosophers, and my conclusion about the weakness
of the test is a strongly held one.  Whether or not a machine can pass the
Turing Test ought not to be a function of the judgement of the person who
is trying to apply the test.

To illustrate this point, I used a trivial analogy of a test for chess 
playing similar to the Turing test for "intelligence".  The point is that
experts in chess, computing, or both, will be far better able to recognize
a computer program playing chess than will the average person.  If we have
a Chess-Test, constructed like the Turing test, then do we accept the judgement
of the experts who can recognize oddities of the computer programs, or do we
accept the judgement of an average person?  To be more hip, perhaps I should
change this to Chinese Chess and put the chess playing agent in a room.  Then
we can all debate Fostel's Chinese Chess Room.

Thornley went on to say:

   The Turing test is not to convince the observer of intelligence, but to
   be *indistinguishable* from an intelligent adult human.  If you put a
   standard chess program on line, I can distinguish it immediately by asking
   about the situation in the Middle East.

This seems to miss the point I was making.  Perhaps Thornley should "read
and ponder" my words before jumping in.  The analogy to the chess playing
test is simply a way to amplify the ambiguity present in the Turing test
by using a structurally identical test of properties we understand better.
The same problems exist in the Turing test.  Computer programs already exist
that have been confused with intelligent humans, e.g. Weizenbaum's Doctor
and the "Paranoid" program (from Stanford?).  Observers with different
backgrounds (i.e. people on this newsgroup) would not be so easily fooled,
but whose judgement is to be used?

How can the issue of individual judgement be eliminated from the Turing
test?  Well, that's the point: it can not.  Perhaps one could formulate a
democratic Turing test, and use the consensus of observers to decide if
the agent "passed".  Or the Genius Turing test, and have the observer with
the highest IQ make the judgement.  Or go for consensus: every observer 
must agree.  The latter is the only one that makes sense.  I do not believe
any agent would ever pass that test, if only because one or more observers
did not like the personality/politics or linguistic style of the agent.
Remember, we are looking to distinguish a computer from an intelligent human.

How intelligent?  Suppose the agent is a very stupid and ill-informed human.  
They may have no opinion on the current goings-on in the Mideast, may have a
3rd grade vocabulary, and after a while get mad and refuse to co-operate.

Are we going to judge them to NOT be a human?  Not intelligent?  So far as I
know, we have not yet developed the capability to measure human intelligence
very well; how are we going to specify a cut-off on the Turing test for
"how intelligent" the agent must be?  This is another judgement call for
which we are forced to rely upon the consensus of the observers.

If the agent on the other end of the line is not very intelligent, do we
conclude that they are not human?  I hope not.  But this suggests that an
easy way to subvert the Turing test is to program a computer to be
surly, uncommunicative, stupid, and ill informed.  Would it pass?  What 
would any of this have to say about any of the profound questions of
intelligence, consciousness, or minds and machines?

Not a heck of a lot. 

It's worth recalling that Turing originally made up the test in the context
of being able to distinguish a person of sex A from a person of sex B
pretending to be a person of sex A.  I wonder if Turing himself took it
as seriously as people think.  If I imagine him looking down from heaven
on the current debate, I think he would be chortling in that annoying 
high-pitched nasal voice that earned him so many friends.  He probably
had the foresight to know that programs like Doctor were feasible, and he
was anxious to weaken the then-common dualist belief that humans were the
uniquely chosen vessel of intelligence.  Of course, he was an atheist, 
so if he IS looking down from heaven, the joke's on him!
  
----GaryFostel----                           Department of Computer Science
                                             North Carolina State University 

thornley@cs.umn.edu (David H. Thornley) (11/22/90)

In article <1990Nov21.181445.11552@ncsuvx.ncsu.edu> fostel@eos.ncsu.edu (Gary Fostel) writes:
>
>I few posts back, I made the observation that the Turing test was not 
>a well defined test at all because it does not pin down who's judegement
>is to be used in deciding if the test has been passed.  No doubt Turing
>was expecting it to be conducted by himself, but with him gone, I wonder
>who will decide?  David Thornley, at the University of Minnesota replied:
>  
>   It is reasonably well-defined.  Read Turing's paper.  (Please, everyone,
>   read and ponder both one or both of Searle's articles, and Turing's paper,
>   before jumping in.  Searle gives me the impression that he is arguing
>   against something of a straw Turing test.)

Sorry, that's not the impression I got.  This may well have been my
mistake; I see lots of misconceptions.
>
>I have read Turing original description, and also some of his other musings
>on the use of similar tests to see if men and women could tell each other
>apart by linguistic means.  I spent quite some time discussing it with 
>AI people and technical philospohers and my conclusion about the weakness
>of the test is a strongly held one.  Whether or not a machine can pass the
>Turing Test ought not to be a function of the judgement of the person who
>is trying to apply the test.  
>
>To illustrate this point, I used a trivial analogy of a test for chess 
>playing similar to the Turing test for "intelligence".  The point is that
>experts in chess, computing, or both, will be far better able to recognize
>a computer program playing chess, than will the average person.  If we have
>a Chess-Test, constructed as the Turing test, then do we accept the judgement
>of the experts who can recognize oddities of the computer programs or do we
>accept the judegement of an average person?  To be more hip, perhaps I should
>change this to Chinese Chess and put the chess playing agent in a room.  Then
>we can all debate Fostel's Chinese Chess Room.
>
>Thornley went on to say:
>
>   The Turing test is not to convince the observer of intelligence, but to
>   be *indistinguishable* from an intelligent adult human.  If you put a
>   standard chess program on line, I can distinguish it immediately by asking
>   about the situation in the Middle East.
>
>This seems to miss the point I was making.  Perhaps Thornley should "read
>and ponder" my words before jumping in.  The analogy to the chess playing
>test is simply a way to amplify the ambiguity present in the Turing test
>by using a structurally identical test of properties we understand better.
>The same problems exist in the Turing test. Computer programs already exist
>that have been confused with intelligent humans, e.g. Weizenbaum's Doctor
>and the "Paranoid" program (from Stanford?).  Observers with different
>backgrounds (i.e. people on this newsgroup) would not be so easily fooled,
>but whose judgement is to be used?
>
I have not heard that Doctor or Paranoid has passed the Turing test
as Turing specified it.  Further, let's drop the "Paranoid" test.
I can write a version of Catatonic that is guaranteed indistinguishable
from a genuine catatonic.

>How can the issue of individual judgement be eliminated from the Turing
>test?  Well, that's the point, it cannot.  Perhaps one could formulate a
>democratic Turing test, and use a majority vote of observers to decide if
>the agent "passed".  Or the Genius Turing test, and have the observer with
>the highest IQ make the judgement.  Or go for consensus: every observer 
>must agree.  The latter is the only one that makes sense.  I do not believe
>any agent would ever pass that test, if only because one or more observers
>did not like the personality/politics or linguistic style of the agent.
>Remember, we are looking to distinguish a computer from an intelligent human.
>
>How intelligent?   Suppose the agent is a very stupid and ill-informed human.
>They may have no opinion on the current goings-on in the Mideast, may have a
>3rd-grade vocabulary, and may after a while get mad and refuse to co-operate.
>
Reread the paper.  Look at the example dialogs Turing gives.  These aren't
the dialogs from an ignorant, inarticulate human.  While I don't
remember Turing specifying an intelligent adult, I don't remember
him specifying typing ability either.  Use a little common sense.

>[More on this point.  How about a Turing Test for the stupid and
> inarticulate?]
>
>It's worth recalling that Turing originally made up the test in the context
>of being able to distinguish a person of sex A from a person of sex B
>pretending to be a person of sex A. I wonder if Turing himself took it
>as seriously as people think. If I imagine him looking down from heaven
>on the current debate, I think he would be chortling in that annoying 
>high-pitched nasal voice that earned him so many friends.  He probably
>had the foresight to know that programs like Doctor were feasible and he
>was anxious to weaken the then-common dualist belief that humans were the
>uniquely chosen vessel of intelligence.  Of course, he was an atheist, 
>so if he IS looking down from heaven, the joke's on him!
>  

I'm not completely sure about the Imitation Game myself.  (This is the
version where there is a man and a woman instead of a computer
and a human.  The man's job is to convince the interrogator that
he is a woman; the woman's job is to convince the interrogator
that she is a woman.  The analogy is exact if you require the
interrogator to be a woman.)

As far as the definition goes, I will admit that Turing didn't write
up the experimental technique like a description in a psychology
paper, but the necessary elements are there.  If I were to suggest
a test for the hypothesis that concrete words are better remembered
than abstract, I might say something like "Take some subjects.  Give
them a list of concrete and abstract nouns combined.  Give them
recognition and recall tests."  I would expect the experimenter to
know that subjects should speak the language the words are in.

Unfortunately, far too many of the arguments surrounding the Turing
test have tried to distort the test in various ways.  Did Turing
have to explicitly specify that an intelligent and cooperative
adult with reasonable typing ability was to be used as the human
in the comparison?  Do I have to specify that the words should be
taken from the subject's native language?

So who should judge?  I wouldn't trust a psychologist to know how
a computer acts; a police officer might consider the psychologist
and me to lack practical experience with people's reactions; the
list goes on.  How about taking various groups of people, such as
psychologists, computer scientists, cognitive scientists, police
officers, social workers, politicians, any sort of group with
some expertise either with people or with computers, and see if
any group has an above-chance ability to tell the human from the
computer?  I'd ask for confidence measures, myself.
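
To make the "above-chance" part concrete, here is a minimal sketch (in
Python, with invented group names and counts; none of this comes from
Turing's paper or from any actual experiment) of how each group's
identification rate could be tested against coin-flipping:

    # Sketch of the "above-chance" check for each group of judges.  The group
    # names and counts are invented; 'correct' means the judge picked out the
    # human in a paired trial, so chance performance is 50%.
    from math import comb

    def p_above_chance(correct, trials):
        """Exact one-sided binomial p-value against guessing (p = 0.5)."""
        return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

    # Hypothetical results: group -> (correct identifications, total trials).
    results = {
        "psychologists":       (31, 50),
        "computer scientists": (38, 50),
        "police officers":     (27, 50),
    }

    for group, (correct, trials) in results.items():
        p = p_above_chance(correct, trials)
        verdict = "above chance" if p < 0.05 else "no better than chance"
        print(f"{group:<20} {correct}/{trials} correct, p = {p:.3f}  ({verdict})")

An exact binomial tail like this keeps the arithmetic honest for small
panels, and mean confidence ratings could be tabulated per group alongside it.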

Personally, I don't think Turing thought it was very important who
should judge.  I think he considered that any computer that could
fool a large number of informed and intelligent observers should
be thought of as passing the test.  If you want something more
specific, he thought that, by 2000, computers with giga-something
(byte?  digit?) memory would do so well that an average
interrogator would make the right identification no more than 70% of the time after
five minutes' conversation, and he thought that, by then, people
would agree that machines could think.  (If you want a specific
answer to "who should interrogate," I think this satisfies
your requirements.)

DHT

cam@aipna.ed.ac.uk (Chris Malcolm) (11/24/90)

In article <wbG=nXC00VsLJNdEVj@andrew.cmu.edu> cw2k+@andrew.cmu.edu (Christopher L. Welles) writes:
>In <3525@aipna.ed.ac.uk> cam@aipna.ed.ac.uk (Chris Malcolm) writes:
>
>>In article <8bDqHlK00VsLBAOkxp@andrew.cmu.edu> cw2k+@andrew.cmu.edu
>>(Christopher L. Welles) writes:
>>
>>>Just thought I should point out.  Searle did make it clear that "formal
>>>systems", computers,  could be conscious.  He emphasized the fact that
>>>humans were such systems.

>>No. He said that machines could understand, that we were no more than
>>biological machines, but denied that a computer running any program
>>would be able to do this ...

>In answer to this, let me pull a quote from Searle's original "The
>Behavioral and Brain Sciences" article:

>     "Ok, but could a digital computer think?"
>     If by "digital computer" we mean anything at all that has a level
>of description where it can correctly be described as the instantiation
>of a computer program, then again the answer is, of course, yes, since
>we are the instantiations of any number of computer programs, and we can
>think.
>     "But could something think, understand, and so on solely(in
>italics) in virtue of being a computer with a right sort of program? 

>It is possible to interpret your reply in such a way that it does not
>conflict with this.  However, in doing so, it clearly would not conflict
>with what I was saying.

Ok, let me quote Searle's reply to the "robot reply" in the original
BBS target article and debate:

    I see no reason in principle why we couldn't give a machine the
    capacity to understand English or Chinese, since in an important
    sense our bodies with our brains are precisely such machines.
    
    But ... we could not give such a thing to a machine ... [whose]
    operation ... is defined solely in terms of computational
    processes over formally defined elements.
	
As I hope is now clear, he is willing to concede the possibility of
consciousness to machines "which have a level of description ...
described as the instantiation of a computer program", but NOT to
anything whose operation is defined SOLELY in those terms. He uses that
little word "solely" in both your quotation and mine: it is the
important word.

The extra magic ingredient required for intentionality (not necessarily
consciousness, but it does no damage to the arguments to make that
equation here) Searle does often refer to as "causal powers", but I
challenge you to find him saying anything anywhere to substantiate your
claim that he thinks these causal powers depend -

>>>upon the "stuff" that the computer is made out of.

This is (IMHO) an extremely common misunderstanding of Searle, based
simply on failure of the imagination, i.e., "I can't imagine what else
he could have meant".

Your later quotation of Searle's rebuttal of Fodor in support of your
thesis in fact begs the question:

>     "Fodor agrees with my central thesis that instantiating a program
>is not a sufficient condition of intentionality.  He thinks, however,
>that if we got the right causal links between the formal symbols and
>things in the world that would be sufficient.  Now there is an obvious
>objection to this variant of the robot reply that I have made several
>times:  the same thought experiment as before applies to this case. 
>That is, no matter what outside causal impacts there are on the formal
>tokens, these are not by themselves sufficient to give the tokens any
>intentional content.

You seem to think that Searle's dismissal here of the utility of
"outside causal impacts" is equivalent to a dismissal of a functional
interpretation of his "causal powers". It isn't. And although you may be
correct in saying (I haven't looked back to check) that Fodor's argument
is the closest in the original BBS argument to Harnad's symbol grounding
point, it is also the case that Harnad would agree with Searle in
denying that "the right causal links ... would be sufficient." (though
maybe not for the same reasons :-).
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

fostel@eos.ncsu.edu (Gary Fostel) (11/28/90)

Is the Turing Test well defined?  Thornley's answer to some of my
objections to serious use of the Turing test was that the person
applying the test needed to:

    Use a little common sense.

This is delightful.  Is "common sense" related to intelligence?  Well,
if so, then the key to Thornley's defence of the Turing test might be
that the tester needs to be intelligent.  How intelligent?  I'm reminded
of the old saw:

    Common sense is neither common nor sensible.

My criticism of the Turing test is directed at its use as an operational
definition of intelligence.  It ends up being circular since it cannot
be used without selecting a judge to make the decision, and the judge must
be intelligent themselves.  

A valid SCIENTIFIC test cannot rely on the judgement of the experimenter.
That is the point of constructing objective measures and experimental techniques
that can be repeated by another researcher.  If they do what you do and they 
do not get the same result, there is something wrong with the experiment.
I have no trouble imagining that different people will decide differently
about an agent undergoing the Turing test, so the experiment is not valid.

Of course, one answer is to disclaim any intent to be scientific about it
and go back to arguing about what it all means in a subjective sense.  It
is pointless to seriously think about constructing a test for the presence
of a thing that may not be a well-defined thing at all.  My own view of
what intelligence is will be different from yours, and that pretty much 
scotches any serious scientific work until someone comes up with a more
testable property than "intelligence".  There is probably some way
to convert the Turing test into a usable measurement instrument, but I
hope anyone doing that would not claim they were measuring or detecting
"intelligence".  

----GaryFostel----                      Department of Computer Science
                                        North Carolina State University   

thornley@cs.umn.edu (David H. Thornley) (12/01/90)

In article <1990Nov27.231501.1621@ncsuvx.ncsu.edu> fostel@eos.ncsu.edu (Gary Fostel) writes:
>
>Is the Turing Test well defined?  Thornley's answer to some of my
>objections to serious use of the Turing test was that the person
>applying the test needed to:
>
>    Use a little common sense.
>
>This is delightful.  Is "common sense" related to intelligence?  Well,
>if so, then the key to Thornley's defence of the Turing test might be
>that the tester needs to be intelligent.  How intelligent?  I'm reminded
>of the old saw:
>
>    Common sense is neither common nor sensible.
>
>My criticism of the Turing test is directed at its use as an operational
>definition of intelligence.  It ends up being circular since it cannot
>be used without selecting a judge to make the decision, and the judge must
>be intelligent themselves.  
>
>A valid SCIENTIFIC test cannot rely on the judgement of the experimenter.

True.  So what's wrong with Turing's suggestion of "an average
interrogator will not have more than 70 percent chance of making
the right identification after five minutes of questioning"?
This is a specific proposal, if you will grant a little leeway
on the word "average."  (In psychology departments, it seems to be
the average introductory psych student.)
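
As a back-of-the-envelope illustration (Python, with made-up trial
outcomes rather than any real data), Turing's figure turns into an
ordinary threshold check, plus a reminder of how noisy a handful of
five-minute sessions is:

    # Back-of-the-envelope check of Turing's criterion: the machine does well
    # enough if the average interrogator's rate of right identifications after
    # five minutes is no more than 70%.  The outcomes are invented:
    # 1 = interrogator identified the human correctly, 0 = was fooled.
    import math

    outcomes = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1]

    n = len(outcomes)
    rate = sum(outcomes) / n
    # Rough normal-approximation 95% interval, to acknowledge how noisy a
    # small number of five-minute sessions is.
    half_width = 1.96 * math.sqrt(rate * (1 - rate) / n)

    print(f"right identifications: {sum(outcomes)}/{n} = {rate:.0%} "
          f"(roughly {rate - half_width:.0%} to {rate + half_width:.0%})")
    print("meets Turing's 70% ceiling" if rate <= 0.70
          else "interrogators still beat 70%")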

When I read a paper claiming that a system has passed the Turing test,
I want to read how the interrogators were selected, what sort of
relevant backgrounds they had, and a statistical breakdown of success
rates.  I also want to see confidence ratings, and various other
assorted details.  I then intend to study the article carefully.
I certainly would not take a claim seriously without such background.
In particular, if someone were to claim that their husband/wife could not
tell the difference, and that is how the machine passed the test,
I would not be impressed.

In the meantime, please feel free to propose any selection process
you like for the interrogators (I insist on multiple interrogators).
I don't think you will, in good faith, come up with one so weird that
I will refuse to consider it a Turing test.

However, I claim that it makes sense to speak of the test in general
terms, with the understanding that we will nail it down when necessary.
Describing something as "passing the Turing test" is something like
referring to "the capacity of short-term memory" in that we don't
quite define it as we say it, but we can use it in the lab just fine.

DHT