[comp.ai] Value of turing test?

cr_kempke@eisvxe.moundst.mn.org (Travelling SMU GURU) (08/16/90)

In article <3240@psueea.UUCP>, erich@eecs.cs.pdx.edu (Erich Boleyn) writes:
> In <66412@lll-winken.LLNL.GOV> loren@tristan.UUCP (Loren Petrich) writes:
>>
>>	That's a very good point about the Turing Test -- that our
>>"knowledge" of other people's minds is based on EXACTLY that
>>principle. 

	[Much deleted]

	Actually, I have an even larger problem with the Turing Test:  I don't
believe most PEOPLE could pass it.   When most of us converse (verbally), we
stutter, make mistakes, lie, misunderstand, fail to communicate, etc. etc.
More importantly, not all of US are equally skilled in every domain.  For
example, if you asked me about sports figures or basic Newtonian physics, I
might not be able to answer even "simple" domain-specific questions.  If you
were talking across a teletype to me, and were fairly convinced that I was a
computer, I could do little to convince you otherwise, despite the fact that I
am allegedly intelligent.

	The Turing test fails because of the fundamental problem that there's 
no good definition of intelligence, but it's a fair estimate.  We just have to 
make sure that we're not requiring the computer to solve a harder task than we
ourselves do.  We have a habit of defining intelligence as "Anything we can do
that a mere computer can't", which will get us nowhere in the end.

	--Chris
	(kempkec@mist.cs.orst.edu, not wherever this mailer thinks I am.)

weyand@csli.Stanford.EDU (Chris Weyand) (08/18/90)

In <2356@eisvxe.moundst.mn.org> cr_kempke@eisvxe.moundst.mn.org (Travelling SMU GURU) writes:

>	Actually, I have an even larger problem with the Turing Test:  I don't
>believe most PEOPLE could pass it.   When most of us converse (verbally), we
>stutter, make mistakes, lie, misunderstand, fail to communicate, etc. etc.

These points don't have anything to do with the TT.  Would you label someone
as unintelligent just because they...ummmm...couldn't....ummm...well, you know
...uuhhhh...speak without mistakes?  On the contrary, if I were conversing in
real-time with someone (or something) through a terminal, I'd find it very
strange if they were able to whip out nice grammatical sentences that were
perfectly clear.  Lying and misunderstanding are parts of intelligence.  Why
does a person usually tell a lie?  Not because they are defective, but because
they have determined through some careful (or maybe hasty) reasoning that
lying in some given situation is in their interests.  Analyzing
misunderstandings probably involves some pretty interesting excursions into
intentions and reasoning about others' motives/intentions.

>More importantly, not all of US are equally skilled in every domain.  For
>example, if you asked me about sports figures or basic Newtonian physics, I
>might not be able to answer even "simple" domain-specific questions.  If you
>were talking across a teletype to me, and were fairly convinced that I was a
>computer, I could do little to convince you otherwise, despite the fact that I
>am allegedly intelligent.

Nothing in the TT says that the questionee has to be able to answer all
questions knowledgeably.  If you look at Turing's original paper, he
demonstrates what a conversation might look like; I don't remember it exactly,
but he writes something like

	Q: Write me a sonnet on ...
	A: Count me out on that one, I was never any good at poetry.
	Q: Add 123456 and 54321
	A: [pause 30 seconds]    177877  (The wrong answer!)
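
To make the flavour of this concrete, here is a minimal sketch, in C, of a
responder that imitates the two human limitations on display in that dialogue:
it pauses before answering, and it occasionally slips a digit.  This is purely
my own illustration, not anything from Turing's paper.

	#include <stdio.h>
	#include <stdlib.h>
	#include <time.h>
	#include <unistd.h>   /* sleep() -- POSIX */

	/* Imitate a human answering "Add a and b": pause for a while,
	 * and once in ten times get the sum slightly wrong. */
	long humanlike_add(long a, long b)
	{
	    sleep(30);                /* a human would not answer instantly */
	    if (rand() % 10 == 0)     /* an occasional arithmetic slip */
	        return a + b + 100;
	    return a + b;
	}

	int main(void)
	{
	    srand((unsigned) time(NULL));
	    printf("%ld\n", humanlike_add(123456L, 54321L));
	    return 0;
	}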

I think we know that having a database that covers a large domain of knowledge
has nothing to do with intelligence.  Or at least not with the interesting
aspects of intelligence: creativity, adaptation, learning, analogical thought...


>	The Turing test fails because of the fundamental problem that there's 
>no good definition of intelligence, but it's a fair estimate.  We just have to 
>make sure that we're not requiring the computer to solve a harder task than we
>ourselves do.  We have a habit of defining intelligence as "Anything we can do
>that a mere computer can't", which will get us nowhere in the end.

This isn't why it fails; this is why it succeeds!  Turing thought the question
"Do machines think?" to be meaningless.  That was his motivation for inventing
this game.  Yes, he also noted the objection that the test may be too hard;
but surely, if we come across a machine that passes it, we will have shown
something very interesting about intelligence.


Chris Weyand
weyand@csli.Stanford.edu

pnettlet@gara.une.oz.au (Philip Nettleton) (08/18/90)

In article <2356@eisvxe.moundst.mn.org>, cr_kempke@eisvxe.moundst.mn.org (Travelling SMU GURU) writes:
> 
> 	Actually, I have an even larger problem with the Turing Test:  I don't
> believe most PEOPLE could pass it.   When most of us converse (verbally), we
> stutter, make mistakes, lie, misunderstand, fail to communicate, etc. etc.
> More importantly, not all of US are equally skilled in every domain.  For
> example, if you asked me about sports figures or basic Newtonian physics, I
> might not be able to answer even "simple" domain-specific questions. ...

A computer imitating a human would also need to be able to imitate human
limitations.  It would require a persona, it would need to make the occasional
mistake, and it would know only domain-specific information for that persona
(or at least give the impression that that was the case).  If this were not
so, it would be EASY to pick the computer: it would be too clever to be
believed.

The interrogator would be an EXTREMELY skilled person and would have to allow
for human error; otherwise, as you say, no human could pass the Turing Test.
As such, the computer would need to imitate human error to throw the
interrogator off completely.
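
To see what such a persona filter might amount to, here is a crude sketch in
C.  Everything in it (the topic list, the function names) is invented for
illustration; think of it as a front end that screens an all-knowing back end
through the persona's limits.

	#include <stdio.h>
	#include <string.h>

	/* Topics this persona plausibly knows about; everything else is
	 * deflected, however good the back end's answer would have been. */
	static const char *known[] = { "music", "cooking", NULL };

	static int persona_knows(const char *topic)
	{
	    int i;
	    for (i = 0; known[i] != NULL; i++)
	        if (strcmp(known[i], topic) == 0)
	            return 1;
	    return 0;
	}

	static const char *reply(const char *topic)
	{
	    if (!persona_knows(topic))
	        return "Sorry, never followed that stuff.  Ask me about music.";
	    return "(hand the question to the underlying system here)";
	}

	int main(void)
	{
	    printf("%s\n", reply("Newtonian physics"));
	    return 0;
	}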

The Turing Test, when applied correctly, should act much like the jury system.
It is better to let ten guilty people go free than to convict an innocent
person wrongly.  That is, if you're human, you should pass the Turing Test.
Therefore, by contraposition, if you can't pass the Turing Test, you're almost
certainly not human.

> ... If you were talking across a teletype to me, and were fairly convinced
> that I was a computer, I could do little to convince you otherwise, despite
> the fact that I am ALLEGEDLY intelligent.

I'm sure you're a computer - prove me wrong :-).

> 	The Turing test fails because of the fundamental problem that there's 
> no good definition of intelligence, but it's a fair estimate.  We just have
> to make sure that we're not requiring the computer to solve a harder task
> than we ourselves do.  We have a habit of defining intelligence as "Anything
> we can do that a mere computer can't", which will get us nowhere in the end.

"Fails" is far to strong a word to use here. The Turing Test does not attempt to
say what intelligence IS, it just compares one (known) type of intelligent
behaviour with another. If the computer's behaviour cannot be distinguished
from a human's behaviour then we have no cause for not attributing the computer
with the same intelligence as observed in humans, even it is IS only an act.

Remember, the computer's REAL intelligence may be very alien to the biological
style of intelligence we all know and love.  But as long as it can act the
part of a human (probably requiring a superior intelligence under the
circumstances), this ability itself demonstrates its intelligence, even if it
is of a totally alien (non-biological) variety.

						Philip Nettleton,
						Tutor in Computer Science,
						University of New England,
						Armidale,
						New South Wales,
						2351,
						AUSTRALIA.

vdasigi@thor.wright.edu (Venu Dasigi) (08/24/90)

From article <14942@csli.Stanford.EDU>, by weyand@csli.Stanford.EDU (Chris Weyand):
> In <2356@eisvxe.moundst.mn.org> cr_kempke@eisvxe.moundst.mn.org (Travelling SMU GURU) writes:
> 
>>	Actually, I have an even larger problem with the Turing Test:  I don't
>>believe most PEOPLE could pass it.   When most of us converse (verbally), we
>>stutter, make mistakes, lie, misunderstand, fail to communicate, etc. etc.
> 
> These points don't have anything to do with the TT.  Would you label someone
> as unintelligent just because they...ummmm...couldn't....ummm...well, you know
> ...uuhhhh...speak without mistakes?

I am entering the debate in the middle, so I may be a little out of
context. Not much, I hope.

While one might interpret the Turing test as a test/definition of
intelligence, it seems to me to be actually a test of the "humanness" of the
subject (and in this sense, all normal people should be able to pass the
test), and I believe most of the current discussion supports this view.  On
the assumption that intelligence may be equated with being human, one could
say the Turing test offers a test of intelligence.  This argument would still
be more or less valid even if both the subject (that is, the machine being
evaluated) and the human being (say, a man) it is being compared with are
simulating another intelligent person (say, a woman).

On a related note, I remember a similar discussion about a year or so
ago, from which I excerpt the following quote from Drew McDermott:

"Turing's test can never hope to provide a NECESSARY condition for
intelligence, but only a SUFFICIENT one."

I believe I quoted him correctly, since I wrote it down as soon as I saw it.
Still, I may have copied it incorrectly, because it appears to me that the
quote would make more sense if the words NECESSARY and SUFFICIENT were
interchanged.  Comments?

Venu Dasigi      vdasigi@cs.wright.edu
Dept. of CS&Eng, Wright State U, 3171 Research Blvd, Dayton, OH 45420

dave@cogsci.indiana.edu (David Chalmers) (08/24/90)

In article <1353@thor.wright.EDU> vdasigi@thor.wright.edu writes:

> "Turing's test can never hope to provide a NECESSARY condition for
> intelligence, but only a SUFFICIENT one."
>
>I believe I quoted him correctly, since I wrote it down as soon as I saw it.
>Still, I may have copied it incorrectly, because it appears to me that the
>quote would make more sense if the words NECESSARY and SUFFICIENT were
>interchanged.  Comments?

I don't recall the quote in question, but the quoted version seems more
plausible than the reverse.  You can argue about the sufficiency clause until
the cows come home -- whether there are operational criteria for intelligence,
and so on -- and that is what discussion tends to concentrate on.  Indeed,
Turing in his original article says that the important claim is the
sufficiency claim.

Any claim about necessity of TT-passing ability is very dubious, for the simple
reason given by Turing in his 1950 article.

  "May not machines carry out something which ought to be described as thinking
   but which is very different from what a man does?  This objection is a very
   strong one, but at least we can say that nevertheless, if a machine can be
   constructed to play the imitation game satisfactorily, we need not be
   troubled by the objection."
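
To put the two claims in bare symbols (my paraphrase, not Turing's):

	passes(TT)  ->  intelligent     (sufficiency -- the claim Turing defends)
	intelligent ->  passes(TT)      (necessity -- the claim the quote gives up)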

The point is that the TT doesn't test for *intelligence* per se, but for
*human-like intelligence*.  This point was extended in an interesting 
recent article in _Mind_ by R.M. French.  He argues that *no* machine
could pass the Turing Test, unless it had experienced the world exactly as we
had.  The outline of the argument is that no matter how you restrict the class
of questions on the TT, a sufficiently assiduous Tester will be able to uncover
differences between a human and an artificial machine by a technique
of "subcognitive probing".  The different subcognitive substrate of such a
machine will manifest itself in non-humanlike answers to certain questions.
Of course, such failure to pass the TT doesn't imply lack of intelligence.
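
For a rough feel of what one such probe might look like, here is a sketch in
C.  All the particulars below are invented (the human norms especially); the
general shape -- gather the distribution of human answers to a low-level
association question, then flag a subject whose answers fall consistently far
outside it -- is just my reading of French's idea.  (His actual examples, if I
remember them right, involve rating coinages like "Flugblogs" as names for
things.)

	#include <stdio.h>
	#include <math.h>     /* fabs(); compile with: cc probe.c -lm */

	/* Human norms for one probe question: mean and standard deviation
	 * of ratings previously gathered from people.  Figures invented. */
	struct probe {
	    const char *question;
	    double mean, sd;
	};

	/* How many standard deviations from the human mean an answer is. */
	static double atypicality(double rating, const struct probe *p)
	{
	    return fabs(rating - p->mean) / p->sd;
	}

	int main(void)
	{
	    struct probe p = { "Rate 'Flugblogs' as a cereal name (0-10)",
	                       2.1, 1.3 };
	    double subjects_rating = 8.0;   /* the machine's answer */

	    if (atypicality(subjects_rating, &p) > 3.0)
	        printf("consistently atypical -- probably not human\n");
	    return 0;
	}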

A.M. Turing, "Computing Machinery and Intelligence", Mind, 59:433-460, 1950.

R.M. French, "Subcognition and the Limits of the Turing Test", Mind, 99:53-66,
1990.

--
Dave Chalmers     (dave@cogsci.indiana.edu)      
Concepts and Cognition, Indiana University.

"It is not the least charm of a theory that it is refutable."

forbis@milton.u.washington.edu (Gary Forbis) (08/24/90)

In article <1353@thor.wright.EDU> vdasigi@thor.wright.edu writes:
>While one might interpret the Turing test as a test/definition of
>intelligence, it seems to me to be actually a test of the "humanness"
>of the subject (and in this sense, all normal people should be able to
>pass the test), and I believe most of the current discussion supports
>this view.

From _Metamagical_Themas_ by D. Hofstadter:

in the Post Scriptum to the chapter titled "A Coffeehouse Conversation on the
Turing Test":

        The first trip was so successful that I decided to do it again
     a couple of months later.  This time they threw an informal party
     at an apartment a few of them shared.  Zamir had forewarned me that
     they were hoping to give me a demonstration of something that had
     already been done in a recent class meeting.  It seems that the 
     question of whether computers could ever think had arisen, and most
     of the group members had taken a negative stand on the issue.

The pages following this covered a transcript between a confederate playing
the part of a computer program and Douglas Hofstadter.  The conclusion is
startling.  "Zamir summarizes this dramatic demonstration [the one in the
class] by saying that his class was willing to view _anything_on_a_video_
terminal_ as mechanically produced, no matter how sophisticated, insightful,
or poetic an utterance it might be.  They might find it interesting and
even surprising, but they would find some way to discount those qualities."

I suspect that this 1983 experiment could be repeated today with similar
results.  Given a predisposition to discount computer interactions, there may
be no way to convince some people of the identity between how humans think and
how computers think, even after such an identity has been established.

--gary forbis@milton.u.washington.edu

kohout@cme.nist.gov (Robert Kohout) (08/24/90)

In article <1353@thor.wright.EDU> vdasigi@thor.wright.edu writes:
>
>On a related note, I remember a similar discussion about a year or so
>ago, from which I excerpt the following quote from Drew McDermott:
>
>"Turing's test can never hope to provide a NECESSARY condition for
>intelligence, but only a SUFFICIENT one."
>
I believe this is part of an excellent article posted to the net some
months ago. A reposting may be appropriate in light of the current
turn in this discussion. In any event, can someone out there please
E-mail me a copy?

It is important to realize that the Turing Test is only one measure of
intelligence.  More correctly, it is one definition of intelligence.  Some
(operational) psychologists like to claim that "intelligence is what IQ tests
measure" and leave it at that.  Obviously, this has very little intuitive
appeal, but it implicitly recognizes the difficulty one faces in trying to
define or characterize intelligence.  It is my belief that ANY attempt to
define or characterize intelligence in a general way will be problematic, and
the shortcomings of the Turing Test are symptomatic of this.

A corollary to this is that one should not burden the AI community with the
defense of the Turing Test.  Those who care to use it as a foundation may feel
free to do so, but one should not assume that anyone who claims to be an AI
practitioner has submitted himself to the programme it dictates.  If, as I
have claimed, intelligence defies an exact definition, we should be satisfied
with inexact and problematic ones.


R.Kohout