[comp.ai] Turing test and lies

ian@oravax.UUCP (Ian Sutherland) (02/04/90)

In article <376@radzy.UUCP> radzy@radzy.PacBell.COM (Tim Radzykewycz) writes:
>How can the person asking the question know for certain that
>the answers of either subject (e.g. human or computer program)
>are true?  What are the criteria which s/he can use to determine
>this?

I don't think the Turing test has anything to do with whether the
answers are true or not.  There's no assumption that the computer
program will always answer truthfully.  The question is, can the
program be distinguished from a human being?

Much of the recent discussion on this newsgroup about the Turing test
and Searle's Chinese room argument has been concerned with whether a
machine can "think", or whether passing the Turing test really means
that the machine is "intelligent".  While these are
reasonable philosophical questions, I think they have very little to do
with the practical side of AI.  If I could write a program which
interacted with you through, say, postings to comp.ai, and you couldn't
tell the difference between the program and me, then I'd say I'd
probably go down in history as a great man.  I would almost certainly
have created a program which could do most of the things which people
want to use artificial intelligence for.  Let's try to come a little
closer to passing the Turing test before we start worrying about
whether passing the Turing test is adequate or not.
-- 
Ian Sutherland		ian%oravax.uucp@cu-arpa.cs.cornell.edu

Sans Peur