[alt.cyberpunk] The CyberTest

grady@apple.UUCP (10/05/87)

	O.K.  So the Turing test can tell you when AI's are
indistinguishable from humans.  But what we'll need in the 90's is a
way to tell when AI's are incontrovertibly superior to humans --
this test, I propose to call the CyberTest.
	The criterion the CyberTest uses is simple: when an AI is so
persuasive, intuitive, and eloquent that EVERY person is convinced of
its superiority, then, in fact, it will be.  This is a good test because the
sufficiency of it cannot be questioned.  (Since everyone is convinced,
no one would think to ask the question.) 
	The hard part, of course, is convincing everybody of the AI's
preeminence.  Isn't there always at least one old redneck who
refuses to accept anything?  Well, that's why I call the criterion the
"CyberTest," because in that case -- when no one's looking -- the AI
ensures unanimity by excising the holdout.
-- Grady Ward

cmaag@csd4.milw.wisc.edu.UUCP (10/07/87)

In article <6413@apple.UUCP> grady@apple.UUCP (Grady Ward) writes:
>
>	O.K.  So the Turing test can tell you when AI's are
>indistinguishable from humans.  

Being fairly new to the genre (I've only read Gibson up to this point), 
could someone expand a little on the "Turing Test"?  Was it actually 
devised by Turing?  Please e-mail if you don't feel it is of general
interest.

Chris.

=======================================================================
   Path: uwmcsd1!csd4.milw.wisc.edu!cmaag
   From: cmaag@csd4.milw.wisc.edu 
 bitnet: cmaag%csd4.milw.wisc.edu@wiscvm.bitnet
{seismo|nike|ucbvax|harvard|rutgers!ihnp4}!uwvax!uwmcsd1!uwmcsd4!cmaag 
=======================================================================

RLWALD@pucc.UUCP (10/07/87)

In article <6413@apple.UUCP>, grady@apple.UUCP (Grady Ward) writes:
>        The criterion the CyberTest uses is simple: when an AI is so
>persuasive, intuitive, and eloquent that EVERY person is convinced of
>its superiority, then, in fact, it will be.  This is a good test because the
>sufficiency of it cannot be questioned.


   And then we can make it watch while we delete it.

   CyberSnuff?                   :-)


-Nexus


-Rob Wald                Bitnet: RLWALD@PUCC.BITNET
                         Uucp: {ihnp4|allegra}!psuvax1!PUCC.BITNET!RLWALD
                         Arpa: RLWALD@PUCC.Princeton.Edu
"Why are they all trying to kill me?"
     "They don't realize that you're already dead."     -The Prisoner

vnend@ukecc.UUCP (10/08/87)

In article <6413@apple.UUCP> grady@apple.UUCP (Grady Ward) writes:
>	O.K.  So the Turing test can tell you when AI's are
>indistinguishable from humans.  But what we'll need in the 90's is a
>way to tell when AI's are incontrovertibly superior to humans --
>this test, I propose to call the CyberTest.
>	The criterion the CyberTest uses is simple: when an AI is so
>persuasive, intuitive, and eloquent that EVERY person is convinced of
>its superiority, then, in fact, it will be.  This is a good test because the
>sufficiency of it cannot be questioned.  (Since everyone is convinced,
>no one would think to ask the question.) 
>-- Grady Ward

	But all (ha!) that this requires is an expert system for convincing
humans, a talking head that scores high on rhetoric.  The only area in
which it would really be superior is convincing people (not easy, but...).
I'm afraid that the CyberTest is going to require a little more than this,
as some of us will require more than persuasion, intuition and eloquence to
say that someone or something is superior.  It's gonna take *proof*!

	Nice idea though.  How would you have it go about proving its
superiority as opposed to just talking about it?


-- 
Later y'all,             Vnend            Ignorance is the Mother of Adventure.                        
cbosgd!ukma!ukecc!vnend;  vnend@engr.uky.edu;  vnend%ukecc.uucp@ukma.BITNET             
    Also: cn0001dj@ukcc.BITNET, Compuserve 73277,1513 and VNEND on GEnie                  
            I may be smart, but I can lift heavy things.

grady@apple.UUCP (10/09/87)

You object to the CyberTest on the basis that persuasion and proof are
distinct processes.  Presumably, you believe that persuasion and
eloquence are simple sophistry, both easily detected and dismissed,
while "proof" is a much stronger test against error.  May I ask how
*you* were persuaded that "proof" is convincing?  By an eloquent
geometry teacher perhaps?  Be warned that that teacher may have
been one of my CyberTest agents practicing its ScientificMode
persuasion heuristic.  Successfully, apparently.

I suspect that the superior AI reads both Thomas Kuhn and Karl Popper
and knows how to recast argument to suit both the longshoreman as
well as the intellectual skeptic, and that if it were still unable to
convince you, given *whatever* reasonable or unreasonable canon of
proof you employ, then it would not yet be worthy.  On the other hand,
you may be the last holdout, in which case. . .
:-) Grady Ward

samlb@well.UUCP (10/09/87)

In article <3099@uwmcsd1.UUCP> cmaag@csd4.milw.wisc.edu.UUCP (Christopher N Maag) writes:
>could someone expand a little on the "Turing Test"?  Was it actually 
>devised by Turing?

	As far as I know, there is no such thing as a formal "Turing Test" with
criteria -- the idea came from a remark of Turing's that a machine would have
reached "intelligence" or "sentience" if, when put in a room with a teletype
machine (the ultimate in I/O in those days), you couldn't tell whether the
entity on the other end of the circuit was a human or a machine.
	In Gibson, the Turing Test seems to involve finding out whether the
AI obeys Asimov's Laws of Robotics or not -- i.e. is ultimately controllable
by human beings, rather than self-determining, capricious, and ruthless (like
_real_ human beings).  The "Turing Commission" people (with some justification)
seek to pull the plug on dangerous machine intelligences . . .
	{ Enter asbestos suit }
-- 
Sam'l Bassett, Writer/Editor/Consultant -- ideas & opinions mine!
34 Oakland Ave., San Anselmo  CA  94960;  (415) 454-7282
UUCP:  {...known world...}!hplabs OR ptsfa OR lll-crg!well!samlb;
Compuserve:  71735,1776;  WU Easylink ESL 6284-3034;  MCI SBassett

jr@lf-server-2.bbn.com.UUCP (10/09/87)

In article  <1639@ukecc.engr.uky.edu> vnend@engr.uky.edu (D. V. W. James) writes:
>In article <6413@apple.UUCP> grady@apple.UUCP (Grady Ward) writes:
>>	O.K.  So the Turing test can tell you when AI's are
>>indistinguishable from humans.
>>-- Grady Ward
>
>	But all (ha!) that this requires is an expert system for convincing
>humans, a talking head that scores high on rhetoric.  The only area in
>which it would really be superior is convincing people (not easy, but...).
>I'm afraid that the CyberTest is going to require a little more than this,
>as some of us will require more than persuasion, intuition and eloquence to
>say that someone or something is superior.  It's gonna take *proof*!

Now come on.  Just look at Reagan's success in selling ridiculous
ideas.  Couple the cybertron to the right folksy tone and style, and
you'll convince the world (well, at least the U.S. population).  Garry
Trudeau is onto something.

>-- 
>Later y'all,             Vnend            Ignorance is the Mother of Adventure.
-- 
/jr
jr@bbn.com or jr@bbn.uucp

steve@nuchat.UUCP (10/17/87)

In article <3099@uwmcsd1.UUCP>, cmaag@csd4.milw.wisc.edu (Christopher N Maag):
> In article <6413@apple.UUCP> grady@apple.UUCP (Grady Ward) writes:
> >	O.K.  So the Turing test can tell you when AI's are
> >indistinguishable from humans.  

> Being fairly new to the genre (I've only read Gibson up to this point), 
> could someone expand a little on the "Turing Test"?  Was it actually 
> devised by Turing?  Please e-mail if you don't feel it is of general
> interest.

Alan Turing proposed the test which is now named for him in the context
of a debate in the mathematics community over just what artificial
intelligence _meant_.  He did not intend it as a test for AI but
more as a definition of it.  I don't have that lecture with me,
so I paraphrase:

	The sceptic sits before a teleprinter.  The tester is free
	to attach the printer to a similar device with a human operator
	or to the mechanism under test.  If the sceptic cannot determine
	which is the machine and which is the human, the machine can
	be said to be intelligent.

It was at one time said that intelligence was the ability to make
choices.  As soon as digital systems started making choices, the
definition was narrowed.  Each time the definition is met by
a computer, intelligence is redefined.  I wish I could remember
some of the other definitions, but we've been through 3 or 4
widely accepted definitions.  Turing's test will probably
not be redefined when it is successfully met, but, unlike Alan,
no one today expects that to happen any time soon.

_Alan Turing: The Enigma_, a biography of Alan Turing by Andrew Hodges,
is recommended.
-- 
Steve Nuchia	    | [...] but the machine would probably be allowed no mercy.
uunet!nuchat!steve  | In other words then, if a machine is expected to be
(713) 334 6720	    | infallible, it cannot be intelligent.  - Alan Turing, 1947

brad@ut-sally.UUCP (10/19/87)

In article <407@nuchat.UUCP> steve@nuchat.UUCP (Steve Nuchia) writes:
>In article <3099@uwmcsd1.UUCP>, cmaag@csd4.milw.wisc.edu (Christopher N Maag):
>> Being fairly new to the genre (I've only read Gibson up to this point), 
>> could someone expand a little on the "Turing Test"?  

For your edification (quoted without permission, but for what I hope
constitutes "fair use"):

"	I propose to consider the question 'Can machines think?' ....
Instead of attempting ... a definition, I shall replace the question
by another which is closely related to it and is expressed in
relatively unambiguous words.
	The new form of the problem can be described in terms of a
game which we call the 'imitation game.'  It is played with three
people, a man (A), a woman (B), and an interrogator (C) who may be
of either sex.  The interrogator stays in a room apart from the other
two.  The object of the game for the interrogator is to determine
which of the other two is the man and which is the woman.  He knows
them by labels X and Y....
	In order that tones of voice may not help the interrogator
the answers should be written, or better still, typewritten.  The
ideal arrangement is to have a teleprinter communicating between the
two rooms.
	We now ask the question, 'What will happen when a machine
takes the part of A in the game?'  Will the interrogator decide
wrongly as often as when the game is played between a man and a
woman?  These questions replace our original, 'Can machines think?'"

"Computing Machinery and Intelligence," _Mind_, Vol. LIX, No. 236
(1950).  Reprinted by permission in _Minds and Machines_, ed. Alan
Ross Anderson, Prentice-Hall, 1964.

>It was at one time said that intelligence was the ability to make
>choices.

I think part of the point of the imitation game is the demonstration
of some ability to consider someone else's position for the purposes
of discourse and to respond in a way that shows that consideration
(regardless of whether that person's position was taken into account
for reasons of deceit or otherwise).

>
>_Alan Turing: The Enigma_, a biography of Alan Turing by Andrew Hodges,
>is recommended.

Seconded.

>-- 
>Steve Nuchia	   | [...] but the machine would probably be allowed no mercy.


Brad Blumenthal  {ihnp4,harvard}!ut-sally!brad || brad@sally.utexas.edu