[comp.ai] The Turing Test is no good!

frank@bruce.cs.monash.OZ.AU (Frank Breen) (08/13/90)

Well, this turned out to be a bit rambling, and I don't think it has
anything particularly revolutionary in it, but it may be a different way
of thinking about the Turing test that some of you haven't considered.
On the other hand, this kind of posting could well turn up in this group
every few months.
Anyway, here goes:

To me the Turing test only tests whether a computer can imitate human
intelligence (and presumably human thought).  I'm not convinced that it
is a good idea to have an AI that is so close to being human.  If
computers can do all the thinking that people can (and presumably
better), then what's the point in any humans thinking?  We would be
reduced to being amused by the AIs (presuming they're nice) and all
useful thought would be done by the AIs.

What AI should do is let us humans keep doing what we're good at and
let the AIs do what they are better at.  Of course the AIs will take
over many tasks performed by humans, but hopefully some will remain.  I
suppose it is quite likely that AIs will eventually surpass human
thought in all respects - and who knows what will happen then.  I guess
I can't help being a bit frightened by the prospect of becoming
obsolete, but I still wouldn't want to slow down progress.

The point is that the Turing test seems to me to be somewhat contrived
and meaningless.  There are many things that computers can do that
people can't, and it would be impossible for a person to pass an AI's
version of the Turing test - but that doesn't mean a great deal.  AIs
should be measured by their usefulness, not their likeness to people.

Frank Breen

pnettlet@gara.une.oz.au (Philip Nettleton) (08/14/90)

From article <2860@bruce.cs.monash.OZ.AU>, by frank@bruce.cs.monash.OZ.AU (Frank Breen):
> ... On the other hand this kind of posting could well turn up in this group
> every few months. ...

It does - most recently in the (still ongoing) Searle debate.

> ... To me the Turing test only tests whether a computer can imitate human
> intelligence (and presumably human thought). ...

"I think, therefore I am" - I still haven't found any proof that other
people exist, I merely choose to BELIEVE they do. Imitation is a nice
concept to pose when trying to undermine the Turing Test, but something
clever enough to imitate a human being well enough to fool a human
interrogator, must be of equivalent or higher intelligence itself.
Remember, ANY question is fair game in the Turing Test.

> ... I'm not convinced that it is a good idea to have an AI that is so close
> to being human.  If computers can do all the thinking that people can (and
> presumably better), then what's the point in any humans thinking?
> We would be reduced to being amused by the AIs (presuming they're
> nice) and all useful thought would be done by the AIs. ...

I don't believe this - "Dem AI's Gunna Be Takin' Our Jobs Nex'".
There are a thousand good reasons for pursuing AI; I've never heard
this one raised against it.  Deep space exploration, deep sea
exploration, replacement of humans in life-endangering jobs, etc., etc.
I, for one, will not stop thinking just because of the advent of AI.
I wouldn't advise arming them with nuclear weapons, but that's a
different issue.

> ... What AI should do is let us humans keep doing what we're good at
> and let the AIs do what they are better at. ...

We might well be extremely bad at it - there could be thousands of species
of creatures throughout the Galaxy more intelligent than we are, and we're
so smart we can't even think of a way to prove whether they exist or not.

> ... The point is that the Turing test seems to me to be somewhat contrived
> and meaningless. ...

IQ tests are contrived and meaningless - we still do them and so do our
kids.  The Turing test is meant to be a critical test of success or
failure in creating a machine with capabilities approaching those of a
human being (at what human beings do best).  It does not represent the
definitive answer to what we expect AI to give us.  After all, who needs
a machine that can imitate being a human being?

						Philip Nettleton,
						Tutor in Computer Science,
						University of New England,
						Armidale,
						New South Wales,
						AUSTRALIA.

frank@bruce.cs.monash.OZ.AU (Frank Breen) (08/16/90)

In <3156@gara.une.oz.au> pnettlet@gara.une.oz.au (Philip Nettleton) writes:

<From article <2860@bruce.cs.monash.OZ.AU>, by frank@bruce.cs.monash.OZ.AU (Frank Breen):

<> ... To me the Turing test only tests whether a computer can imitate human
<> intelligence (and presumably human thought). ...

<"I think, therefore I am" - I still haven't found any proof that other
<people exist, I merely choose to BELIEVE they do. Imitation is a nice
<concept to pose when trying to undermine the Turing Test, but something
<clever enough to imitate a human being well enough to fool a human
<interrogator, must be of equivalent or higher intelligence itself.
<Remember, ANY question is fair game in the Turing Test.

But to me Searle shows how it might be possible to construct something
that will answer any question without understanding it.  (I don't
believe it would be possible, but I can't prove it, and after reading
Searle's paper it seems like a valid thought experiment.)  I disagree
with Searle's conclusion, however, since his reasoning seems a bit
circular:

'Let's construct something that answers questions using only symbol
manipulation and think about the consequences.  Think, think...  Well,
it seems to me that it is only using symbol manipulation, and so doesn't
really understand.'
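
In code terms, the sort of machine Searle imagines might be sketched
like this - a rule table that maps question strings to canned replies,
with no model of what any symbol means.  (The rules and replies here
are my own invented examples, of course.)

# A toy "question answerer" in the spirit of Searle's room: replies
# come from shape-matching rules alone, never from meaning.
RULES = {
    "what did you have for breakfast": "Toast and black coffee.",
    "are you married": "Yes, for six years now.",
    "do you like music": "Mostly old blues records.",
}

def answer(question: str) -> str:
    # Normalise case and punctuation, then look the string up.
    # Nothing here represents breakfast or marriage; the program
    # only matches character strings to character strings.
    key = question.lower().strip(" ?!.")
    return RULES.get(key, "Hmm, let me think about that one.")

print(answer("Are you married?"))   # -> Yes, for six years now.

Scale that table up far enough and you have the Chinese room; the
question is whether anything in it understands.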

What Searle shows (to me) is that the Turing test might be inadequate
to test understanding of some things, but only in extreme cases (the
Chinese room being one of them).  But I don't see how he disproves
strong AI.

<I don't believe this - "Dem AI's Gunna Be Takin' Our Jobs Nex'".
<There are a thousand good reasons for pursuing AI; I've never heard
<this one raised against it. ...
<...I, for one,
<will not stop thinking just because of the advent of AI.

I agree that it's not a reason to stop AI, but I do think they could
take all our jobs.  Then again they might not - we'll just have to wait
and see.

<We might well be extremely bad at it - there could be thousands of species
<of creatures throughout the Galaxy more intelligent than we are,...

Frightening - I like being a member of the smartest known species.

<> ... The point is that the Turing test seems to me to be somewhat contrived
<> and meaningless. ...

<IQ tests are contrived and meaningless - we still do them and so do our
<kids.  The Turing test is meant to be a critical test of success or
<failure in creating a machine with capabilities approaching those of a
<human being (at what human beings do best).  It does not represent the
<definitive answer to what we expect AI to give us.  After all, who needs
<a machine that can imitate being a human being?

Yes, I agree with this - it's a good point.

Frank Breen

pnettlet@gara.une.oz.au (Philip Nettleton) (08/16/90)

From article <2870@bruce.cs.monash.OZ.AU>, by frank@bruce.cs.monash.OZ.AU (Frank Breen):
>>> ... To me the Turing test only tests whether a computer can imitate human
>>> intelligence (and presumably human thought). ...
> 
>>"I think, therefore I am" - I still haven't found any proof that other
>>people exist, I merely choose to BELIEVE they do. Imitation is a nice
>>concept to pose when trying to undermine the Turing Test, but something
>>clever enough to imitate a human being well enough to fool a human
>>interrogator, must be of equivalent or higher intelligence itself.
>>Remember, ANY question is fair game in the Turing Test.
> 
> But to me Searle shows how it might be possible to construct something
> that will answer any question without understanding it.  (I don't
> believe it would be possible, but I can't prove it, and after reading
> Searle's paper it seems like a valid thought experiment.)  I disagree
> with Searle's conclusion, however, since his reasoning seems a bit
> circular.

The problem with this is that YOU are not real. Or rather, you are MERELY
a machine trying to convince me that you are a human being by posing questions
that put your own intelligence into question :-). Prove that you are not,
and we may have some basis for discussion on the Turing Test.

You see, Searle's experiment means nothing because although Searle, as part
of the system, knows nothing about Chinese, the system as a whole does.
Would you expect the CPU of a computer to know about Computer Aided Design?
And yet a computer system, containing a CPU and running a CAD program, can
do CAD. Searle is acting as the CPU. That doesn't mean the system containing
Searle is not intelligent, even if Searle isn't. If the system could pass the
Turing Test (and this is debatable), then I for one wouldn't argue against
it being intelligent.

> What Searle shows (to me) is that the Turing test might be inadequate
> to test understanding of some things, but only in extreme cases (the
> Chinese room being one of them).  But I don't see how he disproves
> strong AI.

Remember, ANY question is fair game.

-	"What did you have for breakfast?"
-	"What do you think of Black Sabbath?"
-	"Are you married?"
-	"What is your wife's name?"
-	"Do you enjoy sex?"
-	"Have you ever been unfaithful to her?"
-	"If you saw a dog run over in the street, what would you do?"

(This is starting to sound a little like "Blade Runner", or the original
Philip K. Dick novel, "Do Androids Dream of Electric Sheep?".)

For a machine to pass a test like this, you'd better hope it's not packing a
gun when you say it's not intelligent.  It may REALLY have feelings and may
take extreme offense at an inferior (organic, no less) being denying its
obviously superior intelligence.

						Philip Nettleton,
						Tutor in Computer Science,
						University of New England,
						Armidale,
						New South Wales,
						2351,
						AUSTRALIA.

jonabbey@walt.cc.utexas.edu (Jonathan Abbey) (08/16/90)

In article <3211@gara.une.oz.au>
pnettlet@gara.une.oz.au (Philip Nettleton) writes:

 [...]
>
>Remember, ANY question is fair game.
>
>-	"What did you have for breakfast?"
>-	"What do you think of Black Sabbath?"
>-	"Are you married?"
>-	"What is your wife's name?"
>-	"Do you enjoy sex?"
>-	"Have you ever been unfaithful to her?"
>-	"If you saw a dog run over in the street, what would you do?"
>
>(This is starting to sound a little like "Blade Runner", or the original
>Philip K. Dick novel, "Do Androids Dream of Electric Sheep?".)
>
>For a machine to pass a test like this, you'd better hope it's not packing a
>gun when you say it's not intelligent.  It may REALLY have feelings and may
>take extreme offense at an inferior (organic, no less) being denying its
>obviously superior intelligence.
>

Er, why is it again that this indicates superior intelligence?  In fact,
what do you mean by the term?  In humans, I believe speed of processing,
ability to adapt to novelty, rapidity of learning, and speed and accuracy
of recall are all commonly used as indicators of intelligence.  These
general abilities find expression in many different ways.  There is,
obviously, the personality.  Then there are the seven (I believe that is
the classical number, though I can't remember all of them) specialized
intelligences: spatial, musical, mathematical (?), and so forth.  While
these may or may not be valid compartmentalizations of human
intelligence, they do seem to be perceived in that manner.
Would we be willing to grant bonus points to an AI that could play good
music?  One that could ride a bicycle?  One that could efficiently route
packets through a network suffering sporadic link failures?  One that
could speak all terrestrial languages?

The Turing test simplifies the issue considerably.  I would be willing
to concede intelligence to a machine that passed the test, provided the
test included the ability to creatively extrapolate from old concepts to
new ones.  Without that, even if the purported AI could carry on a
stunningly convincing debate on various issues of the day, I rather
think I would feel as if I were talking with Eliza's precocious little
sister.  With that ability, however, I believe I would be speaking with
a true AI, whether or not the machine's implementation of the Turing
machine's read/write head itself understood Chinese.
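
For concreteness, the trick Eliza pulls - keyword matching plus pronoun
reflection, with no extrapolation at all - can be sketched in a few
lines.  (A toy reconstruction in the spirit of the original, not
Weizenbaum's actual code.)

import re

# Toy Eliza: reflect the speaker's own words back as a question.
# The keyword rules and pronoun table are a tiny illustrative subset.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(phrase: str) -> str:
    # Swap first-person words for second-person ones.
    return " ".join(REFLECTIONS.get(w, w) for w in phrase.lower().split())

def respond(line: str) -> str:
    m = re.match(r"i feel (.*)", line, re.IGNORECASE)
    if m:
        return "Why do you feel " + reflect(m.group(1)) + "?"
    m = re.match(r"i think (.*)", line, re.IGNORECASE)
    if m:
        return "What makes you think " + reflect(m.group(1)) + "?"
    return "Tell me more."

print(respond("I feel my work is pointless"))
# -> Why do you feel your work is pointless?

Nothing in it ever forms a new concept, which is exactly why a longer
conversation gives it away.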

Once that point is reached, on what basis could the AI's intelligence
be measured?  Certainly the size and complexity of its knowledge-base,
the speed at which accesses and correlations take place, and the degree
to which the machine is willing and able to make and operate on creative
extrapolations would all be basic.
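
Just to make that concrete, such a scorecard might be tabulated as
follows.  (The fields and weights are my own invention, purely to show
that each criterion is separately measurable.)

from dataclasses import dataclass

# Hypothetical scorecard for an AI, following the criteria above.
@dataclass
class IntelligenceProfile:
    kb_size: float        # size/complexity of the knowledge-base, 0..1
    recall_speed: float   # speed of access and correlation, 0..1
    extrapolation: float  # ability to extrapolate creatively, 0..1

    def score(self) -> float:
        # Arbitrary weights; extrapolation counts most, per the
        # argument above that it is the decisive ability.
        return (0.3 * self.kb_size
                + 0.3 * self.recall_speed
                + 0.4 * self.extrapolation)

print(IntelligenceProfile(0.8, 0.9, 0.2).score())   # -> about 0.59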

Can anyone out there suggest further basic criteria of intelligence?
The only other things I can think of at the moment involve the machine's
ability to judge when it is and when it is not appropriate to apply
creative means in its thought processes.  Other possibilities would seem
to be limited to criteria applied to the knowledge-base, but I'm not
sure what form those would take.

>						Philip Nettleton,
>						Tutor in Computer Science,
>						University of New England,
>						Armidale,
>						New South Wales,
>						2351,
>						AUSTRALIA.


Jonathan Abbey (512) 472-2052        \                           (512) 835-3081
jonabbey@ccwf.cc.utexas.edu           \        broccol@csdfx8a.arlut.utexas.edu
The University of Texas at Austin      \          Applied Research Laboratories

reynolds@syd.dit.CSIRO.AU (Chris.Reynolds) (08/17/90)

>  Can anyone out there suggest further basic criteria of intelligence?
>  The only other things I can think of at the moment involve the machine's
>  ability to judge when it is and when it is not appropriate to apply
>  creative means in its thought processes.  Other possibilities would seem
>  to be limited to criteria applied to the knowledge-base, but I'm not
>  sure what form those would take.
      
Surely intelligence, like beauty, is in the eye of the beholder, and any
attempt to get a single unambiguous definition is about as fruitless as
the medieval alchemists' search for the philosopher's stone.

Intelligence represents little more than a measure of position in the
pecking order of success in the social subgroup in which it is observed.
If you are an academic working on artificial intelligence research, a
profound knowledge of philosophy or formal mathematics will increase
your intelligence as seen by your colleagues - and it will be seen as
higher still if you combine both attributes.  However, most A.I.
researchers would have considerable survival problems if they suddenly
found themselves, for example, on their own, miles from anywhere, in the
Australian outback.

May I suggest that a useful common factor of intelligence (in as far as
there can be one shared by Western academics, Buddhist monks, and tribes
from the Brazilian jungle) is the ability to cope with ignorance - i.e.
the ability to rapidly adapt behaviour to meet, and possibly exploit,
unanticipated features of the perceived environment.

Chris Reynolds

s64421@zeus.usq.edu.au (house ron) (08/17/90)

pnettlet@gara.une.oz.au (Philip Nettleton) writes:

>You see, Searle's experiment means nothing because although Searle, as part
>of the system, knows nothing about Chinese, the system as a whole does.

Now we're just going round in circles!  This exact claim is answered by
Searle in his original document.  There's no point in just blandly
posting the same old opinions; we know there are people who think like
you without being reminded ad infinitum.  Please post something which
advances the state of the debate.
-- 
Regards,

Ron House.   (s64421@zeus.usq.edu.au)
(By post: Info Tech, U.C.S.Q. Toowoomba. Australia. 4350)

pnettlet@gara.une.oz.au (Philip Nettleton) (08/18/90)

In article <1179@zeus.usq.edu.au>, s64421@zeus.usq.edu.au (house ron) writes:
> pnettlet@gara.une.oz.au (Philip Nettleton) writes:
> 
> >You see, Searle's experiment means nothing because although Searle, as part
> >of the system, knows nothing about Chinese, the system as a whole does.
> 
> Now we're just going round in circles!  This exact claim is answered by
> Searle in his original document.  There's no point in just blandly
> posting the same old opinions; we know there are people who think like
> you without being reminded ad infinitum.  Please post something which
> advances the state of the debate.

I didn't bring Searle up again; I would rather forget the stupid Searle
debate altogether.  We were discussing the Turing Test and Frank brought
up Searle.  Searle's argument sucks - it proves nothing!  So let's just
let it drop!  Philosophers should stick to the meaning of life - it's
more up their alley.

It might help if you followed the Turing Test discussion more closely;
then you wouldn't make unsupportable statements about who is doing what
ad infinitum.

							Philip Nettleton.

frank@bruce.cs.monash.OZ.AU (Frank Breen) (08/19/90)

In <3211@gara.une.oz.au> pnettlet@gara.une.oz.au (Philip Nettleton) writes:

>From article <2870@bruce.cs.monash.OZ.AU>, by frank@bruce.cs.monash.OZ.AU (Frank Breen):

>The problem with this is that YOU are not real. Or rather, you are MERELY
>a machine trying to convince me that you are a human being by posing questions
>that put your own intelligence into question :-). Prove that you are not,
>and we may have some basis for discussion on the Turing Test.

I am not real and I am not human.  There, that proves that I am not
trying to convince you that I am.  :-)

>You see, Searle's experiment means nothing because although Searle, as part
>of the system, knows nothing about Chinese, the system as a whole does.

This was how I first replied to the problem, but after reading Searle's
article I'm not so convinced of it.  I just think it is possible to
imitate understanding without actual understanding.  I recently read
about someone teaching chimps a simple language; someone else taught the
same language to some of his (human) students, and although they
appeared to understand the language, they did not - they were basically
doing pattern matching while remaining ignorant of the real meaning of
the language (or even that it was a language).

>[elaborating the systems reply]... If the system could pass the
>Turing Test (and this is debatable), then I for one wouldn't argue against
>it being intelligent.

I wouldn't argue against its intelligence either - I would give it the
benefit of the doubt.

>Remember, ANY question is fair game.

>-	"What did you have for breakfast?"
[ some more such questions ]

It could lie.


Frank Breen

frank@bruce.cs.monash.OZ.AU (Frank Breen) (08/19/90)

In <3240@gara.une.oz.au> pnettlet@gara.une.oz.au (Philip Nettleton) writes:

>I didn't bring Searle up again; I would rather forget the stupid Searle
>debate altogether.  We were discussing the Turing Test and Frank brought
>up Searle.  Searle's argument sucks - it proves nothing!  So let's just
>let it drop!  Philosophers should stick to the meaning of life - it's
>more up their alley.

I'm sorry for bringing up Searle again - I was sick of that argument as
well, but it was reading Searle that got me thinking about the Turing
test differently, which led me to start this thread.

Oh well.

Frank Breen

lhamey@mqccsunb.mqcc.mq.oz.au (Len Hamey) (08/21/90)

In article <1179@zeus.usq.edu.au> s64421@zeus.usq.edu.au (house ron) writes:
>pnettlet@gara.une.oz.au (Philip Nettleton) writes:
>
>>You see, Searle's experiment means nothing because although Searle, as part
>>of the system, knows nothing about Chinese, the system as a whole does.
>
>Now we're just going round in circles!  This exact claim is answered by
>Searle in his original document.

Actually, I would dispute that Searle answered the claim in his original
document.  He claimed that even if the program was loaded into his mind
(he learned the rules), he still would not 'understand', and therefore
the room does not 'understand'.  But why does he not 'understand'?  It
is because the program, as he chose to load it, did not link with the
other programs already in his mind -- he could not think of a horse when
he saw the Chinese symbol for horse BECAUSE HE MADE NO ASSOCIATION
between the Chinese symbol and his other understanding.  So Searle is
now running two distinct programs, each of which has its own basis of
understanding.

Now, suppose that Searle were, after learning the rules of Chinese, to
add symbol grounding to his new program -- i.e. to learn relationships
between the symbols of Chinese and the symbols of his other program (the
one that he calls his self) -- what happens?  Suddenly he finds that he
understands!  But have the rules of Chinese changed?  No... yet he may
well choose (on the basis of this other program, his self) to disobey
the externally imposed program at some points, and answer in a way that
reflects his 'self' program.  E.g. if asked "Are you a man?", the
Chinese room program may respond "No, I am a woman", whereas the Searle
'self' program would presumably respond "Yes".
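
The distinction is easy to make concrete: two rule sets that each answer
on their own, plus a grounding table that cross-links one vocabulary to
the other.  (All the symbols below are invented stand-ins.)

# Two independent "programs": an ungrounded rule set for Chinese-like
# symbols, and a grounded 'self' vocabulary.
room_rules = {"MA3": "REPLY-SHAPE-7"}   # shape in, shape out, no meaning
self_concepts = {"horse": "large four-legged animal you can ride"}

grounding = {}   # cross-links from foreign symbols to native concepts

def understand(symbol: str) -> str:
    # Ungrounded, the symbol is just a shape; once grounded, it links
    # to everything the 'self' program already knows.
    if symbol in grounding:
        concept = grounding[symbol]
        return symbol + " means '" + concept + "': " + self_concepts[concept]
    return symbol + " -> " + room_rules[symbol] + " (pure shape manipulation)"

print(understand("MA3"))      # before: pure shape manipulation
grounding["MA3"] = "horse"    # the association described above
print(understand("MA3"))      # after: the symbol reaches lived concepts

Until the cross-link exists the two programs answer independently; after
it, and only after it, the foreign symbol participates in what the
'self' program understands.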

So... I conclude that when Searle says he still does not understand
Chinese after absorbing the Chinese room program, it is because there is
insufficient cross-linking between that program and the existing 'self'
program.  Searle's error is to assume that a program he implements can
only be understanding something if it is cross-linked to his own 'self'
program, which he takes to be the thing that really understands.

I find myself wondering... do schizophrenics implement multiple
programs?  Can one personality of a schizophrenic understand something
without the other personality having understood it?  What about split
brains?  It seems humans can implement multiple distinct programs, each
of which understands independently of the others.  Not pleasant, though.

Len Hamey.

luhn@ztivax.UUCP (Dr Achim Luhn) (08/30/90)

From: Hr. Evers <evers@apollo21>
Date: Thu, 30 Aug 90 16:28:55

I have a brief point to make here about what Philip (I think) said
about it being useless to develop AI systems that reproduce human
thought.  True: we will be made redundant, and that is frightening, but
we must learn to control our source of intelligence.  Machines will not
simply copy our intelligence and drop dead... it is obvious that they
will go much further than we could ever perceive.  We are unfortunate
that our brains, though very flexible and good at controlling our
bodies, are simply not reliable enough.  How much, as a percentage of
what your senses inform you of, do you think you can remember?

A gross exaggeration may be 0.1%.  Compare that to any silicon-based
system today and you'll feel inferior...
   I have diverted a bit, so to get back to the point:
We can design and build intelligent systems (and they will continue to
design further such intelligent systems); our only qualm is that we are
afraid...  Since when have we been too cowardly to advance!