[comp.ai] Eliminating Species Bias from the Turing Test

radzy@radzy.UUCP (Tim Radzykewycz) (02/03/90)

In all the discussion I've seen about the Turing test, people
have often brought up the possibility of asking the question
"Are you human?"  In all the other discussions (e.g. other than
this one on the net) SOMEBODY has always brought up the point
that the HUMAN in the test might also lie.  I guess I have to
be the one this time.

How can the person asking the question know for certain that
the answers of either subject (i.e. human or computer program)
are true?  What are the criteria which s/he can use to determine
this?

If you were the one asking the questions, would you ask this
question and base your decision on the result?  What about the
possibility of a program which simply printed one of "yes", "no",
and "maybe" at random after receiving any input?

This discussion has gone too long looking only at the computer
program side of things.  You also have to look at the other
side, the one with a human behind it.

That's my humble opinion, anyway.

		-- Tim "radzy" Radzykewycz
		   The Incredible Radical Cabbage

		radzy@cogsci.berkeley.edu
		   - or -
		radzy@radzy.net.com

janeric@control.lth.se (Jan Eric Larsson) (02/05/90)

In article <376@radzy.UUCP> radzy@radzy.PacBell.COM (Tim Radzykewycz) writes:
>How can the person asking the question know for certain that
>the answers of either subject (i.e. human or computer program)
>are true?  What are the criteria which s/he can use to determine
>this?

In the Turing test (as described by Turing) the computer/program
is supposed to "pose" as a human, by giving all sorts of answers.
Obviously, most of these will be lies.  It is important to observe
that the Turing test is not a test of intelligence or consciousness,
at least not according to Turing.  It would only show that the
computer is good at "posing" as a human.  The good Alan would
certainly not agree with either Searle or the defenders of the
infamous "strong AI" when it comes to the conclusions to be drawn
from the Turing test.

See Turing's article "Computing Machinery and Intelligence" in Mind, October 1950.

Jan Eric Larsson                      JanEric@Control.LTH.Se      +46 46 108795
Department of Automatic Control
Lund Institute of Technology         "We watched the thermocouples dance to the
Box 118, S-221 00 LUND, Sweden        spirited tunes of a high frequency band."

ruth@aiai.ed.ac.uk (Ruth Aylett) (02/05/90)

As I recall, deception was an essential element of the original Turing
Test - the one that involved a man and a woman. The task was to
distinguish which was the real woman, given that the man was doing his
best to appear as a woman too. 

To stand a chance of succeeding, the man has to possess the human
capacities of imagination, empathy, and an understanding of tests and
games, that is, of social ritual.  Would any intelligent entity (assuming we
can imagine what a non-human intelligence might be like) be wedded to
the literal truth? Or, in the original case, why would the man want to
do something as ridiculous as pretend to be a woman over a teletype?

As a matter of interest, has anyone anywhere ever tried this original
Turing test?  If so, with what result?
                                                  Ruth Aylett
                                                  ruth@aiai.uucp
                                                  R.Aylett@uk.ac.ed

kp@uts.amdahl.com (Ken Presting) (02/06/90)

In article <376@radzy.UUCP> radzy@radzy.PacBell.COM (Tim Radzykewycz) writes:
>In all the discussion I've seen about the Turing test, people
>have often brought up the possibility of asking the question
>"Are you human?"  In all the other discussions (e.g. other than
>this one on the net) SOMEBODY has always brought up the point
>that the HUMAN in the test might also lie.  I guess I have to
>be the one this time.
>
>How can the person asking the question know for certain that
>the answers of either subject (i.e. human or computer program)
>are true?  What are the criteria which s/he can use to determine
>this?

This particular problem is easy.  If you ask a few questions like "What
city are you in?" and "What's the weather like?" you can use a newspaper
to tell whether the answers are consistent.  The idea is to use yet
another instance of a task which is trivial for people but tedious for
computers.
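
The cross-check described above can be sketched in a few lines (names are hypothetical; the "newspaper" is reduced to a table of facts the interrogator can verify independently):

```python
def consistent_with_facts(answers, facts):
    """Return True if none of the subject's answers contradicts a
    known fact.  `facts` stands in for the interrogator's newspaper;
    answers on topics the interrogator cannot check are ignored."""
    return all(facts.get(topic) == claim
               for topic, claim in answers.items()
               if topic in facts)
```

The point is not the code but the asymmetry: the lookup is trivial for the interrogator, while producing globally consistent, up-to-date answers is tedious for the machine.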

Almost every discussion of the Turing Test I've read focuses on the
subjective experience of humans, and whether a computer can give a
convincing impression of having the same experiences - hopes, fears, etc.
This is perhaps a natural place to focus, if we suppose that the computer
will have only a teletype with which to interact with the outside world.
It might seem excessively "unfair to the machine" to expect it to be
prepared with up-to-the-minute data on weather or current events.  Look
at how easy it would be to trip up a computer in a discussion about daily
life.  People ate dinner last night, went out, watched TV, or read a
newspaper, and in the process acquired some topical information.  For a
computer to convince us that it occupies its evenings similarly, a data
entry operation of massive proportions would be required.  This is my
favorite reason for supposing that the Turing Test will NEVER be passed.
No funding agency in its right mind would underwrite the project.

Of course, it's no slur on an AI that it doesn't watch Johnny Carson, or
talk about the weather.


Ian Sutherland comments:
> . . . Let's try and come a little
>closer to passing the Turing test before we start worrying about
>whether passing the Turing test is adequate or not.

We might be able to specify an objective criterion which is easier to
implement, yet more convincing than the TT.  One aspect of the TT that
reduces its significance is that a machine can pass it without knowing
it's a machine.

Let me emphasize this: LYING IS NOT A PROBLEM. NOT KNOWING THE TRUTH IS.

An elementary ability to make descriptive statements about one's own body
and its current condition is present in children soon after they learn to
speak.  Animals can't talk, but they clearly adapt their behavior to
their various states of hunger, temperature, etc.  Absence of this ability
completely precludes any claim that the system in question is conscious.

weyand@csli.Stanford.EDU (Chris Weyand) (02/06/90)
