[net.ai] The Turing Test - machines vs. people

colonel@gloria.UUCP (George Sicherman) (06/23/84)

[This followup was actually written by a very clever computer program.]

As you say, the Turing test is a _conversational_ test.  Do you remember
Turing's original "conversation"?  "...Count me out on this.  I never
could write poetry."

The whole conversation is fatuous!  But then, it has no bona fide purpose.
It was merely set up by a scientist to prove something.  Nothing would
be easier, for that matter, than to program a computer to take part in
what Berne calls "8-stroke rituals":

	Hi.
	Hi.
	How are you?
	Fine.  How are you?
	Fine.  Nice day, isn't it?
	Yes.
	Well, goodbye.
	Goodbye.

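A program that holds up its end of such a ritual needs no intelligence at
all; it only has to play its half of the script.  Here is a minimal sketch
(mine, in Python, and only one way to write it; Berne supplied the ritual,
not the program), assuming the program speaks second and the other party
sticks to the script:

	# Play the second speaker's half of Berne's 8-stroke ritual.
	# Whatever the other party says is read and then ignored.
	RITUAL = ["Hi.", "Fine.  How are you?", "Yes.", "Goodbye."]

	def ritual_reply(turn):
	    """Return the program's line for the given (0-based) turn."""
	    return RITUAL[turn] if turn < len(RITUAL) else "Goodbye."

	for turn in range(len(RITUAL)):
	    heard = input("> ")       # the human's stroke, promptly forgotten
	    print(ritual_reply(turn))
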
But would you want to carry on such a conversation with a computer?
One converses socially only with conversers that one knows to be people.

-- 
Col. G. L. Sicherman
...seismo!rochester!rocksanne!rocksvax!sunybcs!gloria!colonel

ags@pucc-i (Seaman) (06/28/84)

>  [This followup was actually written by a very clever computer program.]
>  
>  As you say, the Turing test is a _conversational_ test.  Do you remember
>  Turing's original "conversation"?  "...Count me out on this.  I never
>  could write poetry."
	.
	.
	.
>  The whole conversation is fatuous!  But then, it has no bona fide purpose.
>  It was merely set up by a scientist to prove something.  
>  
>  But would you want to carry on such a conversation with a computer?
>  One converses socially only with conversers that one knows to be people.

Your bug-killer line turns out to have more apparent truth in it than the
rest of the article.  It's too bad you didn't read the original conversation
from which you quoted.  I am giving you the benefit of the doubt here by
assuming that you did not deliberately misrepresent the conversation (and
that you were not simply unable to understand it):

	Q: Please write me a sonnet on the subject of the Forth Bridge.
	A: Count me out on this one.  I never could write poetry.
	Q: Add 34957 to 70764.
	A: (Pause about 30 seconds and then give as answer) 105621.
	Q: Do you play chess?
	A: Yes.
	Q: I have K at my K1, and no other pieces.  You have only K at K6
	   and R at R1.  It is your move.  What do you play?
	A: (After a pause of 15 seconds) R-R8 mate.

The point of the first answer is that no human is an expert on everything,
and that a program which hopes to pass the Turing test had best not give
itself away by being overly knowledgeable.

Did you notice that the answer to the second question is incorrect?  It
should be 105721.  [Aha! A sexist machine!  It assumes that women are no
good with figures.  Oops--I forgot.  Since you haven't read Turing's
"Can a Machine Think?" you won't understand what women have to do with
this discussion.  Oh, well...]
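
For the record, the sum is easy to check; a two-line sketch (mine, in
Python, not anything from Turing's paper) confirms the correct figure:

	# 34957 + 70764 is 105721, so the machine's answer of 105621
	# is off by exactly 100 (after a 30-second "pause", no less).
	assert 34957 + 70764 == 105721
	print(34957 + 70764)        # prints 105721
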
-- 

Dave Seaman			"My hovercraft is full of eels."
..!pur-ee!pucc-i:ags

colonel@gloria.UUCP (George Sicherman) (07/02/84)

[Feast if you can, and eat if you dare.]

For those of you who missed the start of this colloquy, here's the text
of Turing's original hypothetical conversation:

	Q: Please write me a sonnet on the subject of the Forth Bridge.
	A: Count me out on this one.  I never could write poetry.
	Q: Add 34957 to 70764.
	A: (Pause about 30 seconds and then give as answer) 105621.
	Q: Do you play chess?
	A: Yes.
	Q: I have K at my K1, and no other pieces.  You have only K at K6
	   and R at R1.  It is your move.  What do you play?
	A: (After a pause of 15 seconds) R-R8 mate.

>>	The point of the first answer is that no human is an expert on
>>	everything, and that a program which hopes to pass the Turing
>>	test had best not give itself away by being overly
>>	knowledgeable.

This strains my credulity.  Is it coincidence that the computer declines
to write a sonnet and accepts the other challenges?  A real human, trying
to prove that he is not a computer program, would probably welcome the
opportunity to offer a poem.

And did Turing believe that one can be an "expert" poet in the same way
that one can be an expert arithmetician or chess-player?  I hope not!

>>	Did you notice that the answer to the second question is
>>	incorrect?  It should be 105721.  [Aha! a sexist machine!  It
>>	assumes that women are no good with figures.  Oops--I forgot.
>>	Since you haven't read Turing's "Can a Machine Think?" you
>>	won't understand what women have to do with this discussion.
>>	Oh, well...]

This is unworthy of its author.  Of course I read the article.  My attack
was not against the details of the conversation (for that matter, the
third problem is ambiguous) but against the premise of the Test.  You may
remember that Turing called it a "Game" rather than a "Test."  This
sort of situation arises _only_ as a game; if you really want to know
whether somebody is a person or a computer, you just look at him/it.

I should think that ELIZA has laid to rest the myth that a program's
"humanity" has anything to do with its intelligence.  ELIZA's intelligence
was low, but she was a very human source of comfort to many people who
talked with her.
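
For those who never talked with her, ELIZA worked from little more than
keyword matching and pronoun reflection.  A toy sketch in Python (my own
reconstruction of the general idea, not Weizenbaum's code) shows how
little machinery that takes:

	import re

	# Turn the speaker's pronouns around so the echo sounds attentive.
	REFLECT = {"i": "you", "my": "your", "me": "you", "am": "are"}

	def reflect(text):
	    return " ".join(REFLECT.get(w, w) for w in text.lower().split())

	def eliza_reply(line):
	    m = re.search(r"\bi feel (.*)", line, re.IGNORECASE)
	    if m:
	        return "Why do you feel %s?" % reflect(m.group(1))
	    m = re.search(r"\bmy (.*)", line, re.IGNORECASE)
	    if m:
	        return "Tell me more about your %s." % reflect(m.group(1))
	    return "Please go on."       # the all-purpose fallback

	print(eliza_reply("I feel that my cat ignores me"))
	# -> Why do you feel that your cat ignores you?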
-- 
Col. G. L. Sicherman
...seismo!rochester!rocksanne!rocksvax!sunybcs!gloria!colonel

dgary@ecsvax.UUCP (07/10/84)

<>
Kilobaud magazine (now Microcomputing) ran an article ~5 years ago on AI and
"humanlike conversation" in which the author concluded that humanlike dialog
had little to do with intelligence, artificial or genuine.  Accurately
simulating human dialog required, among other things, a WOM (write-only
memory), used to store anything not of direct immediate interest to the
speaker.  You could do a pretty good simulation of Eddie Murphy on the other
end of a Turing test with a very simple algorithm.
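
A write-only memory is about as simple as data structures get.  Here is a
sketch of the whole idea (mine, in Python, and certainly not what the
Kilobaud author had in mind):

	class WriteOnlyMemory:
	    """Accepts everything you store and retains none of it, which is
	    just right for remarks of no direct interest to the speaker."""

	    def store(self, remark):
	        pass                    # politely discarded

	    def recall(self):
	        return None             # nothing was ever kept

	wom = WriteOnlyMemory()
	wom.store("That reminds me of something that happened to YOU once...")
	print(wom.recall())             # None, as advertised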

D Gary Grady
Duke University Computation Center, Durham, NC  27706
(919) 684-4146
USENET:  {decvax,ihnp4,akgua,etc.}!mcnc!ecsvax!dgary

ags@pucc-i (Seaman) (07/10/84)

>  Kilobaud magazine (now Microcomputing) ran an article ~5 years ago on AI and
>  "humanlike conversation" in which the author concluded that humanlike dialog
>  had little to do with intelligence, artificial or genuine.  
>  ...You could do a pretty good simulation of Eddie Murphy on the other
>  end of a Turing test with a very simple algorithm.

Anyone who believes this either doesn't understand the Turing test or has
a very low opinion of his own intelligence.  Are you seriously claiming
that YOU would NOT BE ABLE TO TELL THE DIFFERENCE between Eddie Murphy and
a "very simple algorithm" if you were connected to both by a terminal and
you could ask them about ANYTHING YOU LIKE for AS LONG AS YOU WANT?

You have to assume, of course, that the real Eddie Murphy is being helpful
and is not trying to emulate a "very simple algorithm."  This is also one
of the conditions in the original test.
-- 

Dave Seaman			"My hovercraft is full of eels."
..!pur-ee!pucc-i:ags

dgary@ecsvax.UUCP (07/12/84)

<>
Someone took issue with a recent posting I made:

>From: ags@pucc-i (Seaman) Tue Jul 10 10:38:42 1984
>>  ...You could do a pretty good simulation of Eddie Murphy on the other
>>  end of a Turing test with a very simple algorithm.
>
>Anyone who believes this either doesn't understand the Turing test or has
>a very low opinion of his own intelligence.  Are you seriously claiming
> ...

From the kidding tone of the rest of my posting, I assumed the :-) was
quite unnecessary.  Evidently I was wrong.  So I retract my insult
to Messrs Turing and Murphy, and suggest that a simple algorithm could
substitute for "Cheech" Marin.  OK, what about Marcel Marceau...

:-) :-) :-)  <-- Please note!!

D Gary Grady
Duke University Computation Center, Durham, NC  27706
(919) 684-4146
USENET:  {decvax,ihnp4,akgua,etc.}!mcnc!ecsvax!dgary

ags@pucc-i (Seaman) (07/13/84)

>  Someone took issue with a recent posting I made:
>  
>>>  ...You could do a pretty good simulation of Eddie Murphy on the other
>>>  end of a Turing test with a very simple algorithm.
>  
>  From the kidding tone of the rest of my posting, I assumed the :-) was
>  quite unnecessary.  Evidently I was wrong.... 
>  
>  :-) :-) :-)  <-- Please note!!

It's not so much that I can't find humor in the thought of Eddie Murphy
participating in the Turing Test, but that the following humor was a little
too sophisticated for me on first reading:

>  Subject: Re: The Turing Test - machines vs. people
>  
>  <>
>  Kilobaud magazine (now Microcomputing) ran an article ~5 years ago on AI and
>  "humanlike conversation" in which the author concluded that humanlike dialog
>  had little to do with intelligence, artificial or genuine.  

Now that I can see Kilobaud (or whatever it's called this week) for what it
really is, a humor magazine, maybe I should subscribe.
-- 

Dave Seaman			My hovercraft is no longer full of 
..!pur-ee!pucc-i:ags		eels (thanks to my confused cat).

Gloger.es@XEROX.ARPA (07/19/84)

Someone (I no longer have any record of who) apparently said:

>>  If a program passes a test in calculus the best we can grant
>>  it is that it can pass tests.  ...
>>  We make the same mistaken assumption about humans--that is
>>  that because you can pass a "test" you understand a subject.

To which Dave Seaman replied:

>  Suppose the program writes a Ph.D. dissertation and passes its
>  "orals"?  Then can we say it understands its field?  If not,
>  then how can we decide that anyone understands anything?

Implicit in the first quote is the answer to the second.  We cannot
(absolutely) decide that anyone understands anything, i.e. that
understanding exists, since "understanding" as used here is not a
scientific observable.  We can, if we wish, observe the observables,
like test passing.  And we can choose to infer from them the existence
of a causative agency for them, like "understanding" for test passing.
But this inference is true only to the extent that we can observe the
agency, and it is valid only to the extent that we can deduce from it
other observably true and useful facts.

If you're willing for "understanding" to mean some observable thing,
like passing of some tests or other, then you can decide if someone
"understands" something, i.e. if "understanding" exists.  Otherwise, you
can't absolutely decide where, when, or how much understanding exists or
doesn't exist.

And ditto the entire preceding discussion with the buzzword
"understanding" replaced by "intelligence."  And again, replaced by
"luck."  And again, by "soul."  And again, by "god."

(Credit for the basis of much of my argument is due to Prof. Andrew J.
Galambos.)

Paul Gloger
<Gloger.es@Xerox.arpa>