[comp.ai.philosophy] Turing Test: opinions on an idea

mlevin@jade.tufts.edu (05/13/91)

    I am new to this group, so if this has been covered recently,
please point me to the articles.
    I'd like to hear opinions on the following thought I had, about
the Turing Test. Start off with a story. Suppose in X years, physics
gets to such a point where very fast storage and retrieval of
arbitrary amounts of information is easy (imagine some sort of
hyperdimensional memory, or something). They then make an enormous
'game-tree' of all possible conversations in English (taking 
into account randomizing elements, repeat questions,
etc.), and make an idiot box that simply accepts inputs from an
interrogator, and, by direct table look-up, spits out answers, which
are good enough to pass the Turing Test. I imagine supporters of the
test (except behaviorists, I guess) will not want to classify this
device as intelligent (or as a 'person') in any sense of the word.
One way out for them is to say that this device exploits advances in a
science (physics/engineering) which really has nothing to do with the
question of sentience, to produce an indistinguishable simulation of
the real thing.  Given that, what is to stop an opponent of AI (like a
dualist, for example) from saying the same thing about any
currently feasible AI project? i.e., that it exploits advances in
computer science to produce a good simulation, but really has nothing
to do with the question of primary consciousness? 
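
To make the gadget concrete, here is a toy sketch of the device I have
in mind, in Python, with a hypothetical two-entry table standing in
for the impossibly large real one:

# Toy sketch of the table-lookup "idiot box".  The table is keyed on
# the entire conversation so far, so all "context" is handled by
# brute-force enumeration rather than by any understanding.
REPLIES = {
    ("Hello.",):
        "Hi there.",
    ("Hello.", "Hi there.", "What do you think of the Turing Test?"):
        "I suspect it tests the judge as much as the machine.",
    # ... one entry for every other possible conversation prefix ...
}

def idiot_box(history):
    """Answer purely by looking up the whole conversation so far."""
    return REPLIES[tuple(history)]

history = ["Hello."]
history.append(idiot_box(history))                # "Hi there."
history.append("What do you think of the Turing Test?")
history.append(idiot_box(history))                # canned, but in context
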
     Any and all opinions are welcome. Especially, if anyone has seen
this problem brought up in the literature before (I vaguely recall
someone telling me this has already been thought of), I'd appreciate a
reference. 

Mike Levin

sandberg@bart (Stephanie) (05/13/91)

In article <1991May13.133711.102@athena.mit.edu> mlevin@jade.tufts.edu writes:

> ...
> hyperdimensional memory, or something). They then make an enormous
> 'game-tree' of all possible conversations in English (taking 
> into account randomizing elements, repeat questions,
> etc.), and make an idiot box that simply accepts inputs from an
> interrogator, and, by direct table look-up, spits out answers, which
> are good enough to pass the Turing Test. 
> 
> Mike Levin


What is missing here is the intentionality of the responses. It is our
intentions, as humans, that predict our actions. How are these actions
chosen? What motivates our intentions? How do we represent this? (The
answers to these questions can be found in "Scripts, Plans, Goals and
Understanding" by Schank and Abelson.) But mere conversation is boring;
it is when a computer is able to play mind games with you, and plays
those games based on its own goals and intentions in life, that it
deserves to be called conscious.

Why should a computer use an infinite tree representing every possible
conversational response, when that is not how humans do it? I'm sorry,
I don't understand the reason behind this sort of solution. 

Stephanie Sandberg

forbis@milton.u.washington.edu (Gary Forbis) (05/13/91)

I have pared this down.

In article <1991May13.133711.102@athena.mit.edu> mlevin@jade.tufts.edu writes:
>
>Suppose in X years, physics
>gets to such a point where very fast storage and retrieval of
>arbitrary amounts of information is easy (imagine some sort of
>hyperdimensional memory, or something). They then make an enormous
>'game-tree' of all possible conversations in English (taking 
>into account randomizing elements, repeat questions,
>etc.), and make an idiot box that simply accepts inputs from an
>interrogator, and, by direct table look-up, spits out answers, which
>are good enough to pass the Turing Test. I imagine supporters of the
>test (except behaviorists, I guess) will not want to classify this
>device as intelligent (or as a 'person') in any sense of the word.

I guess I am beyond any of these people you imagine, for I think one has
to call the machine intelligent if the word is to retain any useful meaning.
Are you an implementationalist?  Is there some right way to implement 
specific behavior such that it and it alone may be called intelligent and
all other implementations are simulations?  If you are, then how do I know
any specific human has intelligence rather than a simulation of the same?

An aside to the question.  Are you imagining a device which gives the 
appearance of learning without actually doing so?  If you are not then you
might reexamine the assertion that it is good enough to pass the Turing
Test.  Suppose I ask, "Do you remember the last time we talked about the
Turing Test?"  How would the machine respond?  The machine could not be
static but must have an ever growing knowledge base of the world about it.
The ability to learn and function at a human level is why I would call it
intelligent (there is no way to give the appearance of learning without
actually doing so.)

>Mike Levin

--gary forbis@u.washington.edu

maxwebb@moe.cse.ogi.edu (Max G. Webb) (05/14/91)

In article <1991May13.133711.102@athena.mit.edu> mlevin@jade.tufts.edu writes:

>... They then make an enormous
>'game-tree' of all possible conversations in English (taking 
>into account randomizing elements, repeat questions,
>etc.), and make an idiot box that simply accepts inputs from an
>interrogator, and, by direct table look-up, spits out answers, which
>are good enough to pass the Turing Test.

How do they generate this 'enormous game tree' of all possible
conversations? If automatically, then the part that does it
automatically is arguably the part that has passed the test,
demonstrated its understanding of natural language (and of
the exterior world knowledge that that entails). If by hand (HAH!),
then the part that does it is _known_ to be intelligent.

In other words, this argument plays a shell game. The hard part
(generating all possible conversations) is what would be tested
by this procedure, and is not addressed.

You might as well say that playing a good chess game doesn't
prove your opponent understands chess. How do you know he
doesn't have a huge game tree somewhere? The answer is, whatever
compiled that game tree is your true opponent, and understands
chess.

>  Given that, what is to stop an opponent of AI (like a
>dualist, for example) from saying the same thing about any
>currently feasible AI project? i.e., that it exploits advances in
>computer science to produce a good simulation, but really has nothing
>to do with the question of primary consciousness? 

The following should slow him down a bit:

	1) The above argument is a shell game.
	2) Behavior, and your biological similarity to other humans
	   are the only clues you have of their consciousness.
	   Since some current approaches to AI involve biologically
	   inspired designs, if and when they exhibit the same
	   behavioral clues, you have _all_ the same reasons to
	   suspect the one of being conscious as the other.

All the dualist would have to fall back on is interspecies chauvinism.
(Which is what he started out with. Why else this need to
show our superiority over all possible machines?)

Even *DUALISTS* have to use behavioral criteria.

>Mike Levin

	Max

stucki@retina.cis.ohio-state.edu (David J Stucki) (05/14/91)

   An aside to the question.  Are you imagining a device which gives the 
   appearance of learning without actually doing so?  If you are not then you
   might reexamine the assertion that it is good enough to pass the Turing
   Test.  Suppose I ask, "Do you remember the last time we talked about the
   Turing Test?"  How would the machine respond?  The machine could not be
   static but must have an ever growing knowledge base of the world about it.
   The ability to learn and function at a human level is why I would call it
   intelligent (there is no way to give the appearance of learning without
   actually doing so.)

   --gary forbis@u.washington.edu

I have had many students who serve as counter-examples to your
parenthetical remark.  :)

But seriously, what if you are not simulating learning, but simulating
someone who is learning (say, for example, a dog)?  You would want to
say that the dog is learning and that the computer isn't (since it is
simulating the dog, not the learning), but from what you said above
this distinction has been smeared.

I think we need to discipline ourselves to stop equating the
computational concepts of intelligence, learning, etc., with the
corresponding cognitive concepts.  They aren't equivalent and this has
been the cause of much of the confusion in the discussions in the
newsgroup.

dave...

--
David J Stucki	   /\ ~~ /\  ~~	 /\  ~~	 /\  ~~	c/o Dept. Computer and 
537 Harley Dr. #6 /  \  /  \ 	/  \ 	/  \   	/   Information Science
Columbus, OH  43202   \/    \  /    \  /    \  /    2036 Neil Ave.
stucki@cis.ohio-state.edu ~  \/  ~~  \/	 ~~  \/	    Columbus, OH 43210

jj@medulla.cis.ohio-state.edu (John Josephson) (05/14/91)

It is unreasonable to think that the Turing test is infallible.  Still,
in general the best explanation of the fact that something (upon rigorous
and demanding testing) appears to be intelligent is that it is, indeed,
intelligent.  The appearance of intelligence is good evidence for its
presence, and the more that efforts to trip it up have failed, the
stronger that evidence.

It is conceivable for something to pass the Turing test that isn't
intelligent.  It just has negligible likelihood.

.. jj

minsky@media-lab.media.mit.edu.MEDIA.MIT.EDU (Marvin Minsky) (05/14/91)

In article <1991May13.133711.102@athena.mit.edu> mlevin@jade.tufts.edu writes:
> Start off with a story. Suppose in X years, physics
>gets to such a point where very fast storage and retrieval of
>arbitrary amounts of information is easy (imagine some sort of
>hyperdimensional memory, or something). [...] and, by direct table
>look-up, spits out answers, which are good enough to pass the Turing
>Test.

Then you can conclude that the machine has passed that Turing test.
Nothing more.

>Given that, what is to stop an opponent of AI (like a
>dualist, for example) from saying the same thing about any
>currently feasible AI project? i.e., that it exploits advances in
>computer science to produce a good simulation, but really has nothing
>to do with the question of primary consciousness? 

There is indeed no known force or argument that can stop a dualist.
This is why they occupy all the powerful positions in our societies.

Seriously, passing the Turing test is merely something that (according
to Turing) is likely to convince a person that another object is
sentient. Clearly that has nothing whatever to do with whether that
other thing is actually sentient, but only assesses the gullibility of
that observer.  The real question is whether the observer itself is
sentient.  And in my view, that question is meaningless, because
"sentience" is a complicated social-psychological relation between
four entities.  That is, it only makes sense when used in the form "A
emits a signal that causes B to emit statements of the form 'C regards
D to be sentient'"

Now you might retort that this makes the term "sentient" too complex,
obscure, and elaborate to have any practical use.  Precisely.  

steven@legion.rain.com (steven furber) (05/14/91)

jj@medulla.cis.ohio-state.edu (John Josephson) writes:

> It is unreasonable to think that the Turing test is infallible.  Still,
> in general the best explanation of the fact that something (upon rigorous
> and demanding testing) appears to be intelligent is that it is, indeed,
> intelligent.  The appearance of intelligence is good evidence for its
> presence, and the more that efforts to trip it up have failed, the
> stronger that evidence.

I have only read about the Turing test in cognitive science and 
linguistics books.  Something I have been wondering is if the test is to 
prove intelligence from the point of view of a particular species (or 
type of being).  The test requires use of language and linguistic 
knowledge.  Does it necessarily require that some particular language be 
used?  Although we have not encountered extraterrestrials, there is very 
little reason (from what I have read and can `see') to believe that 
non-humans communicate with the same system we use.  If we find a 
species that does not communicate in the same way that we do, nor in 
a language we know, is that species necessarily 
unintelligent?

Like I said, my knowledge of the Turing test is limited.

chalmers@bronze.ucs.indiana.edu (David Chalmers) (05/14/91)

In article <1991May13.133711.102@athena.mit.edu> mlevin@jade.tufts.edu writes:

>    I'd like to hear opinions on the following thought I had, about
>the Turing Test. Start off with a story. Suppose in X years, physics
>gets to such a point where very fast storage and retrieval of
>arbitrary amounts of information is easy (imagine some sort of
>hyperdimensional memory, or something). They then make an enormous
>'game-tree' of all possible conversations in English (taking 
>into account randomizing elements, repeat questions,
>etc.), and make an idiot box that simply accepts inputs from an
>interrogator, and, by direct table look-up, spits out answers, which
>are good enough to pass the Turing Test.

N. Block, "Psychologism and Behaviorism", Philosophical Review 90:5-43, 1981.

This is about precisely the scenario that you imagine.  A long, thorough,
and interesting article -- definitely good value.  Block draws the conclusion
that the TT is too behaviourist to serve as a sufficient criterion for
intelligence.  As an "in-principle" point, I find myself in somewhat
reluctant agreement -- reluctant because of the ridiculousness of the
scenario (we're talking about a lot of cubic light-years to store that 
information).  Perhaps the TT can be saved by imposing some very mild 
restriction on the kinds of mechanism that are allowed -- e.g. that they
be "generative" (productive, systematic, etc.) in some sense.

If you're at Tufts, talk to Dan Dennett (big Turing-Test fan) about this.
He hates the Block example (along with a few others, e.g. Hofstadter,
Cherniak) because of its implausibility, but I'm not sure that he has any
really good arguments against it.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."

markh@csd4.csd.uwm.edu (Mark William Hopkins) (05/14/91)

In article <1991May13.133711.102@athena.mit.edu> mlevin@jade.tufts.edu writes:
(Store everything and fake intelligence by a table look-up)

No memory technology will ever allow you to store a massive amount of
information such as the corpus of all possible English conversations.

Anyway, if it could store that much information AND retrieve it in a short
time, hey, it's pretty damn intelligent on grounds of its efficient retrieval
alone!

So here's a more interesting question to ask: if I have a large body of
information at hand and am able to access anything I want by content in such
a short time that nobody even knows I'm using "external" sources, then do I
"know" that information??!

Consider the same question, especially if that retrieval is controlled via a
direct communication link between the brain and database.

My answer is: yes.  I know the information because the information sources are
now a part of my extended nervous system solely in virtue of my ability to
rapidly access them.  They are like appendages to my body in exactly the way
my arms and legs are.

Therefore, it is possible for me to know everything and become adept at every
field of science despite my "personal limitations".

So: right now I know 19 languages, know the gory details of the geography of
every place on the planet, know the details of world history, of every player
in major league baseball from '75 and before, of all of physics, biology,
etc., etc. :)

ziane@nuri.inria.fr (ziane mikal @) (05/15/91)

In article <1991May13.133711.102@athena.mit.edu> mlevin@jade.tufts.edu writes:

> ...
> hyperdimensional memory, or something). They then make an enormous
> 'game-tree' of all possible conversations in English (taking 
> into account randomizing elements, repeat questions,
> etc.), and make an idiot box that simply accepts inputs from an
> interrogator, and, by direct table look-up, spits out answers, which
> are good enough to pass the Turing Test. 
> 
> Mike Levin

Is it reasonable to assume that such a table can be constructed?
What about questions that refer to the discussion itself?
How can you know statically a result that can only be computed
dynamically?

I have the same objection to Searle's Chinese room argument.
It seems to me that the main problem comes from the assumption
that a static system could give acceptable answers.
If, on the other hand, the guy in the room is asked by the instructions
in English to make computations, store results, etc., it is clearer
that the room itself (guy + instructions ...) understands Chinese.
It is even possible that the necessary computations are so complex
that they make the guy in the room learn Chinese!

Mikal Ziane (Mikal.Ziane@nuri.inria.fr)

hearn@claris.com (Bob Hearn) (05/15/91)

In article <1991May13.133711.102@athena.mit.edu> mlevin@jade.tufts.edu writes:
>
>    I'd like to hear opinions on the following thought I had, about
>the Turing Test. Start off with a story. Suppose in X years, physics
>gets to such a point where very fast storage and retrieval of
>arbitrary amounts of information is easy (imagine some sort of
>hyperdimensional memory, or something). They then make an enormous
>'game-tree' of all possible conversations in English (taking 
>into account randomizing elements, repeat questions,
>etc.), and make an idiot box that simply accepts inputs from an
>interrogator, and, by direct table look-up, spits out answers, which
>are good enough to pass the Turing Test.
>
> ...
>
>Mike Levin
>

I think this scenario is a little too far-fetched to be believable.
I can conceive of the fast storage and retrieval of arbitrary amounts
of information, but what exactly do you mean by 'all possible
conversations'?  How is this information to be generated?  How long
a conversation must be supported?  If we pick, arbitrarily, an hour,
then I think that (1) the game tree could not conceivably be generated
by humans, implying the existence of some artificial intelligence
capable of creating the tree, and (2) I would not be satisfied that
it was intelligent anyway.  What good is something that is only
intelligent for an hour, then loses its memory?
Assuming that all these questions can be answered satisfactorily, 
then yes, the system is intelligent.  BUT the requirements for
answering them satisfactorily are such that viewing the system
as operating by table-lookup would be missing the point.
After all, I operate according to the laws of physics.  That means
that, starting with a very large (but much smaller than yours)
database of particles (it would probably be more feasible to model
me at the cell level), and following a set of rules in theory no
more difficult than table lookup, you would get a system which
behaved just like me.  (Quantum physicists may debate this, but
most people believe that quantum phenomena are not relevant in
biological systems.)  But the intelligence in the system lies
in the database itself, not in the lookup mechanism.
You may argue that your database is static, while mine is dynamic,
in that my rules modify it.  But then I can make a database just
like yours, containing an entry for each distinguishable state I
can be in, with transfer indices based on sensory input.
I argue that this model is identical to yours, and also identical
to me.  So it is intelligent, but you have to view the system
from the right angle for it to make sense.
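
A toy version of that state-table view, in Python (states and inputs
hypothetical, of course) -- the table is completely static, yet the
replies change over time, because the state carries the memory:

# Static "transfer index" table: (current state, sensory input) maps
# to (output, next state).  Running it is pure table lookup.
TRANSFER = {
    ("never_met", "Do you remember our last talk?"):
        ("No, I don't believe we've spoken before.", "met_once"),
    ("met_once", "Do you remember our last talk?"):
        ("Yes -- you asked me exactly that last time.", "met_twice"),
}

def step(state, stimulus):
    """One tick of the machine: look up (state, input), nothing more."""
    reply, next_state = TRANSFER[(state, stimulus)]
    return reply, next_state

state = "never_met"
reply, state = step(state, "Do you remember our last talk?")  # "No, ..."
reply, state = step(state, "Do you remember our last talk?")  # "Yes -- ..."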

Bob Hearn



zane@ddsw1.MCS.COM (Sameer Parekh) (05/15/91)

	We don't need to use the same system as humans. . .

	(I just thought of a new term for when an AI becomes "conscious":
Sentiogenesis.)
-- 
The Ravings of the Insane Maniac Sameer Parekh -- zane@ddsw1.MCS.COM

rickert@mp.cs.niu.edu (Neil Rickert) (05/15/91)

In article <2200@seti.inria.fr> ziane@nuri.inria.fr (ziane mikal @) writes:
>I have the same objection to Searle's Chinese room argument.
>...
>It is even possible that the necessary computations are so complex
>that they make the guy in the room learn Chinese!

 Glad to see someone else say this.  I believe it is the crux of the
matter.


-- 
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
  Neil W. Rickert, Computer Science               <rickert@cs.niu.edu>
  Northern Illinois Univ.
  DeKalb, IL 60115                                   +1-815-753-6940

yking@cs.ubc.ca (Yossarian Yggy King) (05/15/91)

The proposed idea of throwing vast amounts of memory at the intelligence
problem and using table lookup sounds to me virtually identical to the
Chinese room argument against strong AI (which I don't buy, but that's
a whole other can of worms that hopefully needn't be reopened :-).

WRT the Turing test, it seems like a very naive way to assess intelligence.
To draw an analogy with software engineering, the TT is equivalent to
running a program for a while, trying a whole bunch of different inputs,
and hoping that you manage to detect all the bugs. While a lot of software
testing is done in this manner, there are more thorough, principled methods
of software verification (ensure all modules are tested, take all paths, etc.,
and various types of "theoretical" approaches such as Floyd's method of
inductive assertions for verifying partial and total correctness [work done
at Stanford; sorry, no reference]).

I realize that until we can nail down better what intelligence is, this will
be very difficult, but shouldn't there be more principled and thorough ways
of evaluating intelligence than the TT? (perhaps producing an intelligence
rating on some scale, rather than the simple yes/no results of the TT)

Just MHO's
--
~..~		NETLAND WHO'S WHO -- the DOTTZIG
 ((O))~	  This small nocturnal parasite dwells in the nether regions of the
 /\ /\	  Arrtikul, another denizen of netland. In extreme cases, the Dottzig
	  may grow to completely dominate the host Arrtikul.

G.Joly@cs.ucl.ac.uk (Gordon Joly) (05/15/91)

steven furber writes:
 > [...]
 > used?  Although we have not encountered extraterrestrials, there is very 
 > little reason (from what I have read and can `see') to believe that 
 > non-humans communicate with the same system we use.  If we find a 
 > species that does not communicate in the same way that we do, nor in 
 > a language we know, is that species necessarily 
 > unintelligent?
 > 
 > Like I said, my knowledge of the Turing test is limited.

The topic of the strangeness of extraterrestrial "thought" is covered
in "The Mote in God's" - authored Larry Nevin and somebody else (not
sure of the names here at all).

Trying to imagine a Turing Test for ET is the business of Carl Sagan.

____

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

    "Pessimism of the intellect, optimism of the will" - Gramsci

lintz@cis.udel.edu (Brian Lintz) (05/15/91)

In article <1991May15.055331.10631@cs.ubc.ca> yking@cs.ubc.ca (Yossarian Yggy King) writes:

>WRT the Turing test, it seems like a very naive way to assess intelligence.
>To draw an analogy with software engineering, the TT is equivalent to
>running a program for a while, trying a whole bunch of different inputs,
>and hoping that you manage to detect all the bugs.

I look at it this way. I have conversations on the net and
through email with people I have never seen. But just by
their responses to my questions or comments, I know they
are intelligent. If I found out that one of these people
were actually a computer, I would probably think that it
was intelligent. The Turing Test is even more stringent;
you know beforehand that the person may be human or a computer,
so you can gear your questions toward it. Remember, you can
ask it anything; tell it jokes to see if it understands the 
humor, ask it to do something creative, etc. If I couldn't
tell if the machine was a machine or a human, in all fairness,
I would have to assume it was intelligent.

Brian Lintz
lintz@udel.edu

afzal@cui.unige.ch (Afzal Ballim) (05/16/91)

In article <2200@seti.inria.fr> ziane@nuri.inria.fr writes
>In article <1991May13.133711.102@athena.mit.edu> mlevin@jade.tufts.edu writes:
>> ...
>> hyperdimensional memory, or something). They then make an enormous
>> 'game-tree' of all possible conversations in English (taking 
>> into account randomizing elements, repeat questions,
>> etc.), and make an idiot box that simply accepts inputs from an
>> interrogator, and, by direct table look-up, spits out answers, which
>> are good enough to pass the Turing Test. 
>> 
>> Mike Levin
>
>Is it reasonable to assume that such a table can be constructed?
>What about questions that refer to the discussion itself?
>How can you know statically a result that can only be computed
>dynamically?

To which the answer is a definite no.  Given that the number of *sentences*
alone in English is transfinite, it seems improbable at best to imagine that
the number of conversations could be finite.


------------------------------------------------------------------------------
Afzal Ballim	             |EAN,BITNET,EARN,MHS,X.400: afzal@divsun.unige.ch
 ISSCO, University of Geneva |UUCP: mcvax!cernvax!cui!divsun.unige.ch!afzal
 54 route des Acacias	     |JANET: afzal%divsun.unige.ch@uk.ac.ean-relay
 CH-1227 GENEVA,Switzerland  |CSNET,ARPA: afzal%divsun.unige.ch@relay.cs.net

nrasch@cs.ruu.nl (Menno Rasch) (05/16/91)

The conclusion is clear: Computers and human beings cannot be compared!!

wallingf@cps.msu.edu (Eugene Wallingford) (05/16/91)

Brian Lintz writes:

>... The Turing Test is even more stringent;
>you know beforehand that the person may be human or a computer,
>so you can gear your questions toward it. ... If I couldn't
>tell if the machine was a machine or a human, in all fairness,
>I would have to assume it was intelligent.

     Actually, in Turing's original "Imitation Game," the interrogator
     does not know beforehand which is which; the task is to determine
     which respondent is the female.  (I thinks that's right...)  So
     Brian's second sentence above is closer to Turing's intention --
     can the interrogator determine which is which, without knowing
     in advance?


--
~~~~ Eugene Wallingford             ~~~~   AI/KBS Laboratory         ~~~~
~~~~ wallingf@pleiades.cps.msu.edu  ~~~~   Michigan State University ~~~~

G.Joly@cs.ucl.ac.uk (Gordon Joly) (05/16/91)

In article <2200@seti.inria.fr> ziane@nuri.inria.fr (ziane mikal @) writes:
>> In article <1991May13.133711.102@athena.mit.edu> mlevin@jade.tufts.edu writes:
>> 
>> > ...
>> > hyperdimensional memory, or something). They then make an enormous
>> > 'game-tree' of all possible conversations in English (taking 
>> > into account randomizing elements, repeat questions,
>> > etc.), and make an idiot box that simply accepts inputs from an
>> > interrogator, and, by direct table look-up, spits out answers, which
>> > are good enough to pass the Turing Test. 
>> > 
>> > Mike Levin
>> 
>> Is it reasonable to assume that such a table can be constructed?
[...]
>> Mikal Ziane (Mikal.Ziane@nuri.inria.fr)

Probably not. Most Gedankenexperimente are impossible to carry out in
reality, for example Maxwell's Demon. 

Or a Turing Test.
____

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

    "Pessimism of the intellect, optimism of the will" - Gramsci

cs012116@cs.brown.edu (Mike Perkowitz) (05/16/91)

In article <u0yw21w164w@legion.rain.com>, steven@legion.rain.com (steven furber) writes:
|> 
|> I have only read about the Turing test in cognitive science and 
|> linguistics books.  Something I have been wondering is if the test is to 
|> prove intelligence from the point of view of a particular species (or 
|> type of being).  The test requires use of language and linguistic 
|> knowledge.  Does it necessarily require that some particular language be 
|> used?  Although we have not encountered extraterrestrials, there is very 
|> little reason (from what I have read and can `see') to believe that 
|> non-humans communicate with the same system we use.  If we find a 
|> species that does not communicate in the same way that we do, nor in 
|> a language we know, is that species necessarily 
|> unintelligent?

I think the assertion of the TT is simply that IF it passes the test, THEN
it must be intelligent. This is in no way meant to imply that IF it's
intelligent, THEN it will pass the test. Clearly, we have no way of forming
an opinion one way or the other about an entity with whom we cannot communicate.
Perhaps a loose interpretation of the spirit of TT would allow one to seek
all sorts of other intelligent behaviors (how about a computer that knows
American Sign Language, an extraterrestrial that plays a great game of chess,
or a computer capable of designing a movie soundtrack that sets the perfect
mood for each scene - aren't these all reasonable evidence of intelligence without
fitting into a traditional TT or conception of "communication"?).

Mike Perkowitz

will@aristotle.ils.nwu.edu (William Fitzgerald) (05/17/91)

I'm reading a book called _The Vastness of Natural Languages_ by
Langendoen and Postal, in which they claim/prove that no 
natural language is recursively enumerable.  Accepting 
this as true, we must conclude that no Turing Machine can
be built to recognize the sentences of a natural language.

john@publications.ccc.monash.edu.au (John Wilkins) (05/17/91)

In article <1991May16.102046.2063@cs.ruu.nl> nrasch@cs.ruu.nl (Menno Rasch)
writes:
>The conclusion is clear: Computers and human beings cannot be compared!!
>
Balderdash. Anything and everything can be compared. What is instructive are the
differences between two items that are supposedly of the same kind. In trying
to compare human thought and computer processing, we may
1. discover the processes that give rise to thought; or
2. improve the capacities of computers as we learn more about the models of
thought; or
3. both.

I'm plugging for 3.

jane@latcs2.lat.oz.au (Jane Philcox) (05/17/91)

In article <1563@ucl-cs.uucp> G.Joly@cs.ucl.ac.uk (Gordon Joly) writes:
>steven furber writes:

>The topic of the strangeness of extraterrestrial "thought" is covered
>in "The Mote in God's" - authored Larry Nevin and somebody else (not
>sure of the names here at all).

"The Mote in God's Eye" by Larry Niven and Jerry Pournelle.  Very good book,
if a little slow moving at times.

Regards, Jane.
-- 

           A programmer is a machine for converting coffee into code.

jane@latcs2.lat.oz.au (Jane Philcox) (05/17/91)

In article <1991May13.133711.102@athena.mit.edu> mlevin@jade.tufts.edu writes:

>Suppose in X years, physics gets to such a point where very fast storage and 
>retrieval of arbitrary amounts of information is easy (imagine some sort of 
>hyperdimensional memory, or something). They then make an enormous 'game-tree' 
>of all possible conversations in English ... 

Let's start by generating all the possible sentences in English.  

Take the Oxford English Dictionary (sorry to all you users of Webster, but I 
don't know it so well), which is now, due to its size, only available on 
microfiche and probably other forms of compact storage.  The last time it was 
printed on paper that I saw, it was, I think, 24 very large volumes.  Somewhat 
larger than most average encyclopaedias.  We'll use that for our vocabulary.

Suppose we have a _really_ efficient natural language generator, using a truly
representative model of English syntax, which, in spite of the best efforts
of the linguists over a number of years, does not currently exist.  By truly
representative, I mean something that can generate _every_ English sentence
which would be considered grammatical by some native English speaker somewhere.

And, as someone else posted, have on hand all the material in the universe to
use to build your physical memory structures out of.

Now, start generating, and adding your sentences to some structure which, when 
complete, will allow you to start tacking together all those possible 
conversations.  

I suspect that by the time this task, which is considered theoretically 
impossible by the linguists, is complete, you will find that you have taken so 
long that many of the sentences you have generated will be considered 
marginally grammatical at best, and quite unacceptably archaic, and the 
language itself will have acquired millions more words for you to play with, 
using the 20,001 edition of the Oxford English Dictionary, or whatever it's 
called by then - probably something quite unrecognizable to us.

On the whole, I think it might be easier, and definitely more profitable, to
build something that does it the way we do: by knowing the words, knowing what
they mean (that's the hard bit!) and knowing how to tack them together into
meaningful structures.  The only real problem here (:-)) is that it would 
probably take half a universe of material to store all that you need to know 
about the world to make a meaningful conversation.  And I suspect that by the
time you've built something that can do that, you will have built something
that is so self-evidently intelligent, that people will wonder why the Turing 
Test was once thought necessary, or even useful.

Regards, Jane.
-- 

           A programmer is a machine for converting coffee into code.

jane@latcs2.lat.oz.au (Jane Philcox) (05/17/91)

In article <1991May16.143804.16487@msuinfo.cl.msu.edu> wallingf@cps.msu.edu (Eugene Wallingford) writes:
>     Actually, in Turing's original "Imitation Game," the interrogator
>     does not know beforehand which is which; the task is to determine
>     which respondent is the female.  
                              ^^^^^^
Huh?  I've only heard of the test as a test of intelligence.  Have I missed
something somewhere?  Was it originally a test to see whether you could tell
males from females, and then later adapted to the intelligence area?

References, someone?

Regards, Jane.


-- 

           A programmer is a machine for converting coffee into code.

uh311ae@sunmanager.lrz-muenchen.de (Henrik Klagges) (05/17/91)

G.Joly@cs.ucl.ac.uk (Gordon Joly) writes:

>The topic of the strangeness of extraterrestrial "thought" is covered
>in "The Mote in God's" - authored Larry Nevin and somebody else (not
>sure of the names here at all).

"The Mote in God's eye", Larry Niven & J. Pournelle.
Rick@vee.lrz-muenchen.de
MUNG == MUNG until no good.

bohm@cs.Buffalo.EDU (Eric "Gothmog" Bohm) (05/17/91)

In article <1991May17.064714.5942@latcs2.lat.oz.au>, jane@latcs2.lat.oz.au (Jane Philcox) writes:
|> Huh?  I've only heard of the test as a test of intelligence.  Have I missed
|> something somewhere?  Was it originally a test to see whether you could tell
|> males from females, and then later adapted to the intelligence area?
|> 
|> References, someone?


	Here is how Charniak and McDermott describe the situation in 
_Introduction to Artificial Intelligence_, p. 10:
"After the war, in 1950, he published the famous article "Computing
Machinery and Intelligence" [Turing63] in which he explicitly puts
forward the idea that a computer could be programmed so as to exhibit
intelligent behavior.  He also examines, and rejects, arguments as to
the impossibility of artificial intelligence.  But probably the most
famous contribution of the article is the so-called 'Turing Test'.
Turing envisions a test in which you have typewriter communication to
two rooms, one of which has a man in it and one of which has a woman.
Both the man and the woman would claim to be a woman, and it would be
your problem to decide which was telling the truth.  Similarly, Turing
suggests we could have a person in one room and a computer in the
other, both claiming to be a person, and you would have to decide on
the truth. Obviously, if you failed at this task (or could only guess
at chance level), then one would be inclined to say that the computer
was intelligent, the alternative being out of the question in polite
company.  (Actually, the paper makes it sound as if Turing had in mind
the computer pretending to be a woman in the man/woman game, but the
point is not completely clear, and most have assumed that he intended
the test to be a person/computer one, and not woman/computer.)"

	Sorry I didn't go back to the original source, but the book was right
by the terminal.  (Not a bad book, incidentally; I like the polite company
jab. :-)

	I have heard similar reports on the lack of clarity about the test
from others, so I guess we'll never know exactly what Alan Turing had in
mind.  Although it looks like he was envisioning a typical 3-way send or
talk session of modern times, with a computer at one end and people at
the other ends.

-- 
Gothmog AKA Eric Bohm
[ It can be shown that a neat .sig file can be created and that there exists  ]
[ a valid address for this user. (the proof is left as an exercise for the    ]
[ student)							              ]

petersja@debussy.cs.colostate.edu (james peterson) (05/17/91)

In article <53693@nigel.ee.udel.edu> lintz@cis.udel.edu (Brian Lintz) writes:
>In article <1991May15.055331.10631@cs.ubc.ca> yking@cs.ubc.ca (Yossarian Yggy King) writes:
>
>>WRT the Turing test, it seems like a very naive way to assess intelligence.
>>To draw an analogy with software engineering, the TT is equivalent to
>>running a program for a while, trying a whole bunch of different inputs,
>>and hoping that you manage to detect all the bugs.
>

> [stuff deleted]
>were actually a computer, I would probably think that it
>was intelligent. The Turing Test is even more stringent;
>you know beforehand that the person may be human or a computer,
>so you can gear your questions toward it. Remember, you can
>ask it anything; tell it jokes to see if it understands the 
>humor, ask it to do something creative, etc. If I couldn't
>tell if the machine was a machine or a human, in all fairness,
>I would have to assume it was intelligent.
>



I once thought the Turing Test was a bogus measure of intelligence.  I
have come to appreciate, however, just how difficult it would be to
pass.  Like Bill Rappaport, I believe that the TT is an index of intelligence
because it is really a test of the ability to manipulate natural language.
If you consider the abilities a computer would have to possess in order
to give "reasonable" responses to various questions, you will come to
appreciate just how difficult the TT would be to pass.  Searle 
notwithstanding, if I came across a computer that could convince me
it was a normal human after, say, a half hour of "conversing" with it,
I would have to admit it *was* intelligent.  I don't believe, however,
that any machine has even come the slightest distance towards this, and
I have serious doubts that any formal automaton ever will.



-- 
james lee peterson				petersja@CS.ColoState.edu
dept. of computer science                       
colorado state university		"Some ignorance is invincible."
ft. collins, colorado  (voice:303/491-7137; fax:303/491-2293)

christo@psych.toronto.edu (Christopher Green) (05/18/91)

In article <1991May17.064714.5942@latcs2.lat.oz.au> jane@latcs2.lat.oz.au (Jane Philcox) writes:
>Huh?  I've only heard of the test as a test of intelligence.  Have I missed
>something somewhere?  Was it originally a test to see whether you could tell
>males from females, and then later adapted to the intelligence area?
>
>References, someone?
>
I think the original paper's in _Mind_, 1950. It's well worth the read. Turing
was far less zealous than some of his latter-day followers.
 
-- 
Christopher D. Green
Psychology Department                             e-mail:
University of Toronto                   christo@psych.toronto.edu
Toronto, Ontario M5S 1A1                cgreen@lake.scar.utoronto.ca 

DOCTORJ@SLACVM.SLAC.STANFORD.EDU (Jon J Thaler) (05/19/91)

In article <1744@anaxagoras.ils.nwu.edu>, will@aristotle.ils.nwu.edu (William
Fitzgerald) says:

>I'm reading a book called _The Vastness of Natural Languages_ by
>Langendoen and Postal, in which they claim/prove that no
>natural language is recursively enumerable.  Accepting
>this as true, we must conclude that no Turing Machine can
>be built to recognize the sentences of a natural language.

It's interesting to turn this around and ask whether human intelligence
can recognize (all of) the sentences of a natural language.

zane@ddsw1.MCS.COM (Sameer Parekh) (05/19/91)

In article <1991May17.064714.5942@latcs2.lat.oz.au> jane@latcs2.lat.oz.au (Jane Philcox) writes:
>In article <1991May16.143804.16487@msuinfo.cl.msu.edu> wallingf@cps.msu.edu (Eugene Wallingford) writes:
>>     Actually, in Turing's original "Imitation Game," the interrogator
>>     does not know beforehand which is which; the task is to determine
>>     which respondent is the female.  
>                              ^^^^^^
>Huh?  I've only heard of the test as a test of intelligence.  Have I missed
>something somewhere?  Was it originally a test to see whether you could tell
>males from females, and then later adapted to the intelligence area?
>
>References, someone?
>
>Regards, Jane.

	Turing adapted his test from a "parlor game" in which a person would
judge, out of two people, one male and the other female, which one was
the female.

-- 
The Ravings of the Insane Maniac Sameer Parekh -- zane@ddsw1.MCS.COM

mason@endor.uucp (Richard Mason) (05/19/91)

I think many people (here and elsewhere) read way too much into the
Turing test.

The Turing test is not a rigorous definition of intelligence.  Nor
is it a method of measuring intelligence (as if "intelligence" were a
thing you could objectively measure at all!).

What the Turing test is is a pragmatic argument, aimed at people who do
not think a non-human machine (*by virtue of being a non-human machine*)
can possess intelligence.  The line of argument goes:

(A) Essentially everyone believes, with an ultra-high level of certainty,
    that other human beings are conscious, intelligent entities.

(B) Almost everyone would assert that, whatever consciousness and
    intelligence are, they are not dependent upon physical appearance,
    physical capabilities, etc.  Almost everyone would therefore
    agree, upon reflection, that the decision that another human being
    is intelligent can be made without reference to what they look like,
    etc.

(C) THEREFORE, IF you cannot tell the difference between a human and a 
    non-human (e.g. a computer) without examining their physical
    appearance, THEN you must extend the same courtesy to each.
    That is, if you accept that the human is sentient, then you must also
    accept that the computer is sentient.  Since there was no detectable
    difference in the "evidence" that each one offered up, to do otherwise
    is to admit that your judgement is irrational and not based on
    evidence.


NOTE: There has been no attempt to DEFINE intelligence here; just an
observation that if you think humans are intelligent, you must think
entities indistinguishable from humans are also intelligent.
You may disagree with the conclusion (C) if you do not accept one of the
premises (A) or (B).  In particular, you may assert that your judgement of
intelligence *is* based on some physical feature (e.g. the presence of a
network of synapses).

It is foolish to talk about the time required for the Turing Test, or the
testing conditions, etc.  The Turing Test is not something you sit down and
take in two three-hour periods, with a fifteen-minute break, DO NOT BEGIN
UNTIL INSTRUCTED BY THE PROCTOR.

As long as YOU, as an individual, cannot distinguish between HAL 9000 and a
human being over the phone, AND you admit that everything important about
intelligence should be detectable over the phone, THEN it is only fair that
you give HAL 9000 the same status and recognition as you give a human
being.  That is all the Turing test means.

mason@endor.uucp (Richard Mason) (05/19/91)

In article <5577@cui.unige.ch> afzal@cui.unige.ch (Afzal Ballim) writes:
>
>To which the answer is a definite no.  Given that the number of *sentences*
>alone in English is transfinite, it seems improbable at best to imagine that
>the number of conversations could be finite.
>

Transfinite? You mean uncountably infinite?  Surely not...

There are only so-many-thousand words in an Oxford English Dictionary.
There might be more English words, but surely only a finite number since
only a finite number of people have been making them up for a finite period
of time.  Now it's true that a sentence can THEORETICALLY be of arbitrary
length, but in practice we could put a finite limit on the allowed length
of English sentences.  A thousand-word limit would be adequate, but let's
make it a billion billion to be on the safe side.  Similarly, we can safely
say that conversations are only allowed to last for a billion billion
sentences.  So we have a finite number of combinations of words making a
finite number of sentences, and a finite number of combinations of
sentences (a billion billion factorial is still finite), which means a
finite number of conversations.

You may want to assume that the English language will last forever (i.e. a
countably infinite period of time), in which time we will coin a countably
infinite number of words.  You might also remove the restrictions above, so
that both sentences and conversations can be of *ANY* finite length.
You still have a countable infinity of countable infinities, which adds up
to one countable infinity, not an uncountable one.
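
For amusement, the bookkeeping can even be done mechanically.  A
throwaway Python sketch, with my invented (and generous) bounds,
computing only the number of decimal digits in each count, since the
counts themselves would not fit in any conceivable memory:

# All bounds here are invented, and generous.
import math

WORDS = 600_000        # rough guess at an OED-scale vocabulary
SENT_LEN = 10**18      # "a billion billion" words per sentence, max
CONV_LEN = 10**18      # sentences per conversation, max

# sentences <= WORDS**SENT_LEN, so the count has this many digits:
sentence_digits = SENT_LEN * math.log10(WORDS)       # ~5.8e18 digits
# conversations <= (sentence count)**CONV_LEN:
conversation_digits = CONV_LEN * sentence_digits     # ~5.8e36 digits
# Astronomically large -- but finite, which is all the argument needs.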

All this is completely idle chatter and should not in any way be
interpreted as support for the "construct a game tree for English
conversations" idea, which I regard as utterly infeasible and ridiculous.

shafto@aristotle.ils.nwu.edu (Eric Shafto) (05/20/91)

DOCTORJ@SLACVM.SLAC.STANFORD.EDU (Jon J Thaler) writes:
> In article <1744@anaxagoras.ils.nwu.edu>, will@aristotle.ils.nwu.edu (William
> Fitzgerald) says:
 
> >I'm reading a book called _The Vastness of Natural Languages_ by
> >Langendoen and Postal, in which they claim/prove that no
> >natural language is recursively enumerable.  Accepting
> >this as true, we must conclude that no Turing Machine can
> >be built to recognize the sentences of a natural language.
 
> It's interesting to turn this around and ask whether human intelligence
> can recognize (all of) the sentences of a natural language.

An even more interesting question arises from your point.  If no human
can recognize all the sentences in a natural language, how are you
defining the language?

If you can't even DEFINE the language, recursive enumerability is
the least of your problems.

In the same vein, I find most of Searle's arguments have the same
failing:  they prove that no computer could do something that I'm
not sure any human could do.
--
*Eric Shafto             * Sometimes, I think we are alone.  Sometimes I  *
*Institute for the       * think we are not.  In either case, the thought *
*    Learning Sciences   * is quite staggering.                           *
*Northwestern University *     -- R. Buckminster Fuller                   *

G.Joly@cs.ucl.ac.uk (Gordon Joly) (05/20/91)

Jane Philcox <jane@latcs2.lat.oz.au> writes:
>> In article <1991May16.143804.16487@msuinfo.cl.msu.edu> wallingf@cps.msu.edu (Eugene Wallingford) writes:
>> >     Actually, in Turing's original "Imitation Game," the interrogator
>> >     does not know beforehand which is which; the task is to determine
>> >     which respondent is the female.  
>>                               ^^^^^^
>> Huh?  I've only heard of the test as a test of intelligence.  Have I missed
>> something somewhere?  Was it originally a test to see whether you could tell
>> males from females, and then later adapted to the intelligence area?
>> 
>> References, someone?

Yes, that's about it. I have only this reference, which I assume has
the story of the Imitation Game:-

    Hodges, Andrew
       Alan Turing : the enigma / Andrew Hodges.
       London : Burnett Books, Sept.1983. - 1v... - 0-09-152130-0

Afzal Ballim <afzal@cui.unige.ch> writes
>> [...]
>> To which the answer is a definite no.  Given that the number of *sentences*
>> alone in English is transfinite, it seems improbable at best to imagine that
>> the number of conversations could be finite.

Given that one in an infinite number of monkeys has typed out the
complete works of Shakespeare, it is impossible for a finite editor to
discover which monkey completed this useful:-) task, except by chance...

____

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

    "Pessimism of the intellect, optimism of the will" - Gramsci

krista@sandman.hut.fi (Krista Hannele Lagus) (05/20/91)

Eric Bohm writes:

>Similarly, Turing suggests we could have a person in one room and 
>a computer in the other, both claiming to be a person, and you 
>would have to decide on the truth. Obviously, if you failed at this 
>task (or could only guess at chance level), then one would be inclined 
>to say that the computer was intelligent, the alternative being out of the
>question in polite company. 

One thing about Turing tests disturbs me:  If the test is whether or
not one can tell the difference between a computer and a person, I
don't think one could derive anything about any participant's
intelligence. The premise for deciding about the computer's
intelligence would be a requirement that both the person making the
decision and the other one trying to deceive were intelligent also.

There are stupid people, why couldn't there be stupid computers?
What are we *trying* to measure with Turing tests? The intelligence,
consciousness or humanlikeness?  Why would intelligence necessarily be
like human intelligence (if such a type exists)? When will we invent a
computer that tries to decide which one is which, the person and the
computer, and how does it interpret the results, which one is the
measure of intelligence, the human or the computer? 

Krista Lagus

bohm@cs.Buffalo.EDU (Eric "Gothmog" Bohm) (05/20/91)

In article <1991May20.115838.8969@nntp.hut.fi>, krista@sandman.hut.fi (Krista Hannele Lagus) writes:
|> There are stupid people, why couldn't there be stupid computers?
|> What are we *trying* to measure with Turing tests? The intelligence,
|> consciousness or humanlikeness? Why would intelligence necessarily be
|> like human intelligence (if such a type exists)? When will we invent a
|> computer that tries to decide which one is which, the person and the
|> computer, and how does it interpret the results, which one is the
|> measure of intelligence, the human or the computer? 
|> 
|> Krista Lagus


	What is intelligence?
	Your question simply begs the original question Turing was trying
to avoid dealing with in his test.
	Lacking any kind of objective measure of intelligence (don't talk to
me about IQ tests), all we have when dealing with exterior agents is our own
perception. If I perceive you to be intelligent, a perception I extend to 
most humans, then I treat you as an intelligent entity. Does your appearance,
hardware, mode of walking, etc. need to be relevant to my deciding whether
you are intelligent or not? I do not believe so.
	How do I tell you from a mannequin?  By communication. How do I tell
an AI computer from a non-AI computer? That's the problem.
	Turing's test simply uses the old "If it walks like a duck, and acts
like a duck, it might be a chicken, but we might as well consider it a duck"
aphorism. What else do we have to go on when you come down to the final
analysis?
	
|>	 Why would intelligence necessarily be
|> like human intelligence (if such a type exists)? When will we invent a
|> computer that tries to decide which one is which, the person and the
|> computer, and how does it interpret the results, which one is the
|> measure of intelligence, the human or the computer? 

	It wouldn't necessarily be like human intelligence, but how do you
recognize that something is intelligent if it isn't intelligence as we humans
know it? 
	Turing's Test wasn't supposed to be a catch-all, _THIS IS IT_, test. It
was simply a way to test for intelligence without getting into the fuzzy area
of what intelligence actually is. It's not perfect, it's not undeniably
correct, but it is beautifully simple to implement.

-- 
Eric "Gothmog" Bohm
	It can be shown that a neat .sig file can be created and
        that a valid address exists for this user.
        (the proof is left as an exercise for the student.)

mcdermott-drew@cs.yale.edu (Drew McDermott) (05/21/91)

   In article <1991May13.133711.102@athena.mit.edu> mlevin@jade.tufts.edu writes:
   >
   >    I'd like to hear opinions on the following thought I had, about
   >the Turing Test. Start off with a story. Suppose in X years, physics
   >gets to such a point where very fast storage and retrieval of
   >arbitrary amounts of information is easy (imagine some sort of
   >hyperdimensional memory, or something). They then make an enormous
   >'game-tree' of all possible conversations in English (taking 
   >into account randomizing elements, repeat questions,
   >etc.), and make an idiot box that simply accepts inputs from an
   >interrogator, and, by direct table look-up, spits out answers, which
   >are good enough to pass the Turing Test. I imagine supporters of the
   >test (except behaviorists, I guess) will not want to classify this
   >device as intelligent (or as a 'person') in any sense of the word.

I am no supporter of the Test, but this scenario makes no sense (even
after imposing a length limitation on conversations so there's only a
finite number of them).  There are two problems (which others have
pointed out before, but what the hell):

1. No matter how big and fast your information-retrieval system is, you
cannot build the game tree without actually simulating all possible
conversations.  (Well, not exactly.  You only have to come up with one
response to each of your interlocutor's "moves.")  This will take
more time than we have.  Even if we could somehow do it, the later
retrieval of the conversations would essentially amount to reenacting
the "trial run" that was simulated before.  The retriever would not be
carrying on the conversation, but just enabling the game-tree builder
to carry on conversations long after his death.

2. Many conversational remarks have no context-independent responses, e.g.:
   "What time is it?"  (due to Pat Hayes)
   "Did you hear that? It sounded like a sonic boom."
   "I don't know about you, but those heart tremors made me quayle in
    fear."

There's no way to encode a single appropriate response to
conversations including sentences like these.  There's also no way to
rule them out that I can think of.  (Any attempt to do so would make
it too easy for the human to always win, by subtly breaking the
rules.)

                                             -- Drew McDermott

hm02+@andrew.cmu.edu (Hans P. Moravec) (05/21/91)


mcdermott-drew@cs.yale.edu (Drew McDermott) writes:

> In article <1991May13.133711.102@athena.mit.edu> mlevin@jade.tufts.edu writes
>>  ... hyperdimensional memory, or something). They then make an
>>    enormous 'game-tree' of all possible conversations in English ... 
> ...
> 2. Many conversational remarks have no context-independent responses, e.g.:
>  "What time is it?"  (due to Pat Hayes)
>  "Did you hear that? It sounded like a sonic boom."
>  "I don't know about you, but those heart tremors made me quayle in
>   fear."
> 
> There's no way to encode a single appropriate response to
> conversations including sentences like these.  There's also no way to
> rule them out that I can think of.  (Any attempt to do so would make
> it too easy for the human to always win, by subtly breaking the
> rules.)
> 
>                                            -- Drew McDermott

   This is not a good objection to the conversation tree idea.
The machine simply preambles its conversation: "It sure is quiet down
here in this impenetrable bunker.  I'd go bonkers if I didn't have this
teletype, which is my only link to the outside world.  Thanks for taking
the time to schmooze with me.  We intelligent thinkers should support
one another." 

   Memory of the conversation that has gone before (context) is, of
course, encoded as the identity of the node reached so far in the tree
of possible conversational moves and responses (like a finite state
machine). Since
the tree is so large, this node address will be a pretty huge number--if
a typical question contains 1000 bits of essential information, and a
conversation is 1000 questions long, there will be (2^1000)^1000 nodes
in the conversation tree, so encoding the node identity will take one
million bits--not an unreasonable memory to capture this tiny fragment
of intelligence.
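   To make this concrete, here is a toy sketch (in Python; the
two-entry table is purely hypothetical, standing in for a tree with
2^1000000 nodes) of the lookup machine run as a finite state machine,
together with the address arithmetic:

    # Toy sketch: the conversation tree driven as a finite state machine.
    # TREE maps (current node, question) -> (next node, canned reply).
    TREE = {
        ("start", "Hello."): ("n1", "Hi.  It sure is quiet in this bunker."),
        ("n1", "How are you?"): ("n2", "Bored, frankly.  Thanks for writing."),
    }

    def respond(node, question):
        # Pure table lookup: all conversational "context" is the node id.
        return TREE[(node, question)]

    print(respond("start", "Hello.")[1])

    # Size of one node address: log2 of the number of nodes.
    BITS_PER_QUESTION = 1000             # assumed, as above
    QUESTIONS_PER_CONVERSATION = 1000
    print(BITS_PER_QUESTION * QUESTIONS_PER_CONVERSATION)   # 1000000 bits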

   A machine built on the same principle to respond intelligently to visual
and sound inputs (answering Drew's point 2 in another way) would have a
much larger state memory. If the sensory input data rate is 1 megabit
per second, then the state memory would have to have one megabit for
each second the machine is designed to exhibit its intelligence.  That's
still less than 100 gigabits per day, or 2,000 terabits per human
lifetime.  And if the table-encoded intelligence can forget some of that
deluge, then some nodes of the response tree (and all their successors)
can be merged, reducing their number.
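   Spelled out (assuming a 70-year lifetime):

    # Back-of-the-envelope for the sensory-state memory.
    RATE = 1_000_000                     # bits per second (assumed above)
    per_day = RATE * 86_400              # seconds in a day
    per_lifetime = per_day * 365 * 70    # assumed 70-year lifetime
    print(per_day / 1e9, "gigabits/day")             # 86.4 -- under 100
    print(per_lifetime / 1e12, "terabits/lifetime")  # about 2,200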

   A neat thought experiment, I think.  If the tree is large enough to
cover a human lifetime of responses, then I would consider it as intelligent
as a human. Basically, the tree encodes all possible thoughts and reactions
of a particular person, in a very uncompact, but theoretically accessible, way.

                 - Hans Moravec

chalmers@bronze.ucs.indiana.edu (David Chalmers) (05/21/91)

In article <YcC8CRG00WBM83X4Fq@andrew.cmu.edu> hm02+@andrew.cmu.edu (Hans P. Moravec) writes:

>   Memory of the conversation that has gone before (context) is, of
>course, encoded as the identity of the node reached so far in the tree
>of possible conversational moves and responses (like a finite state
>machine). Since
>the tree is so large, this node address will be a pretty huge number--if
>a typical question contains 1000 bits of essential information, and a
>conversation is 1000 questions long, there will be (2^1000)^1000 nodes
>in the conversation tree, so encoding the node identity will take one
>million bits--not an unreasonable memory to capture this tiny fragment
>of intelligence.

OK, one million bits to encode a node address.  Assuming 1000 bits per
answer, that means around 2^1000010 bits of storage will be needed to encode 
the tree itself.  Not unreasonable?
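
Spelling out the estimate (same assumptions as in the quoted post:
10^6-bit node addresses, 1000-bit answers):

    # Storage for the whole tree, not just one node address.
    address_bits = 1_000_000      # so the tree has 2**1000000 nodes
    answer_bits = 1000            # roughly 2**10 bits stored at each node
    # total ~ (2**address_bits) * answer_bits ~ 2**(address_bits + 10) bits
    print("about 2^%d bits" % (address_bits + 10))   # 2^1000010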

Maybe the best idea would be for it to simulate one of Oliver Sacks'
amnesiacs who forget everything that happened more than 5 minutes ago.
This would save a lot of storage space, and hey, those amnesiacs are
still pretty intelligent.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."

ziane@nuri.inria.fr (ziane mikal @) (05/21/91)

Once again this idea of a conversation tree seems almost ridiculous,
although it is surprisingly a bit more difficult to eliminate than one
would think at first.

However, how do you cope with numbers?  A number is correct English,
right?  How do you incorporate every possible number in your tree?
Take "how much is 5 times 3?" for example.  If you say that whenever a
number appears the machine should reply "Stop!  I am useless with
numbers, although I'm quite intelligent otherwise", I'm afraid you
will end up with a pretty big list of exceptions.  Numbers are only
one example among many.  Also, numbers may be evoked indirectly, so
that you can't easily rule them out with a syntactic mechanism.

About the memory of the conversation: having an amnesic computer is
another limitation.  I suspect the list of limitations is only
beginning.

Mr. McDermott's arguments about contextual references ("What time is
it?", etc.) are very convincing.  Once again, getting rid of them
imposes new limitations.

I am not persuaded by all those hypothetical figures on the number of
possible English conversations.  It will take something more serious
to convince me that a conversation tree is possible.

What about a description of what I see right now, and my asking
questions about it?  Where are the reliable figures for a game tree
producing ACCEPTABLE answers to such questions?
I do not consider Eliza-like answers convincing, although it may be
true that a machine can hide more successfully behind a very
particular role (psychiatrist, or deaf-mute-blind, etc.).
The point is that maybe some human beings would not pass the test
either, but the context of the test should be made large enough to be
convincing.


Finally, we should agree about the mechanism using the tree, and about
the tree itself.  Somebody has proposed grammars, etc.  Of course, if
anything like that is used (and I think it was that person's point),
the system may produce acceptable answers because it would be
intelligent!  The first proposal was a simple table lookup, right?


Mikal.

mcdermott-drew@cs.yale.edu (Drew McDermott) (05/21/91)

   In article <YcC8CRG00WBM83X4Fq@andrew.cmu.edu> hm02+@andrew.cmu.edu (Hans P. Moravec) writes:
   >
   >mcdermott-drew@cs.yale.edu (Drew McDermott) writes:
   >
   >> 2. Many conversational remarks have no context-independent responses, e.g.:
   >>  "What time is it?"  (due to Pat Hayes)
   >>  "Did you hear that? It sounded like a sonic boom."
   >>  "I don't know about you, but those heart tremors made me quayle in
   >>   fear."
   >> 
   >> There's no way to encode a single appropriate response to
   >> conversations including sentences like these.  There's also no way to
   >> rule them out that I can think of.  (Any attempt to do so would make
   >> it too easy for the human to always win, by subtly breaking the
   >> rules.)
   >
   >   This is not a good objection to the conversation tree idea.
   >The machine simply preambles its conversation: "It sure is quiet down
>here in this impenetrable bunker.  I'd go bonkers if I didn't have this
   >teletype, which is my only link to the outside world.  Thanks for taking
   >the time to schmooze with me.  We intelligent thinkers should support
   >one another." 

But how do we embed this idea in the context of the Turing Test?  Are
we testing to see if the machine can mimic a person imprisoned in a
bunker (for his entire life)?  If so, where do we get such a person
for the machine to compete with?  

Perhaps the idea is that we tell the human contestant: "You will be
disqualified as soon as you allude to anything outside the realm of
the conversation itself," but this seems hopelessly unenforceable.
Even the machine is bound to have encoded some reference to the
outside world in its "game tree."  (E.g., does a reference to chess
count as a reference to the outside world?  Presumably it's a merely
historical fact that there is such a game, whose rules have been
formalized in a certain way as of the late twentieth century.)

The Turing Test makes no sense unless the tester thinks he is talking
to two entities that just walked in off the street, and know something
about what's going on around them.

                                             -- Drew

berry@arcturus.uucp (Berry;Craig D.) (05/21/91)

G.Joly@cs.ucl.ac.uk (Gordon Joly) writes:

>The topic of the strangeness of extraterrestrias "thought" is covered
>in "The Mote in God's" - authored Larry Nevin and somebody else (not
>sure of the names here at all).

The novel is "The Mote in God's Eye" by Larry Niven and Jerry
Pournelle.  It is a good example of a very nonhuman way of thinking.
John Campbell (famous science fiction editor) used to challenge his
authors "Show me something that thinks *as well* as a human, but
*differently*."  This is known as the Campbell Challenge.  It seems
to me that AI researchers are answering the Challenge in a new way.

chalmers@bronze.ucs.indiana.edu (David Chalmers) (05/22/91)

In article <1991May21.155325.17797@cs.yale.edu> mcdermott-drew@cs.yale.edu (Drew McDermott) writes:

>Perhaps the idea is that we tell the human contestant: "You will be
>disqualified as soon as you allude to anything outside the realm of
>the conversation itself," but this seems hopelessly unenforceable.
>Even the machine is bound to have encoded some reference to the
>outside world in its "game tree."  (E.g., does a reference to chess
>count as a reference to the outside world?  Presumably it's a merely
>historical fact that there is such a game, whose rules have been
>formalized in a certain way as of the late twentieth century.)

How about just "No reference to anything that's happened since 1990."
Presumably historical knowledge can be built into the machine when it's
constructed.  The only problem is dealing with things that happen afterwards.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."

forbis@milton.u.washington.edu (Gary Forbis) (05/22/91)

"Butter?"
"Butter."
"Jam?"
"Jam.  Jam!??  Let's don't be silly.  Lemon now that's different."

In article <1991May21.175359.26377@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>In article <1991May21.155325.17797@cs.yale.edu> mcdermott-drew@cs.yale.edu (Drew McDermott) writes:
>
>>Perhaps the idea is that we tell the human contestant: "You will be
>>disqualified as soon as you allude to anything outside the realm of
>>the conversation itself," but this seems hopelessly unenforceable.
>>Even the machine is bound to have encoded some reference to the
>>outside world in its "game tree."  (E.g., does a reference to chess
>>count as a reference to the outside world?  Presumably it's a merely
>>historical fact that there is such a game, whose rules have been
>>formalized in a certain way as of the late twentieth century.)
>
>How about just "No reference to anything that's happened since 1990."
>Presumably historical knowledge can be built into the machine when it's
>constructed.  The only problem is dealing with things that happen afterwards.

In the multi-world tradition this shouldn't be a problem.  The list of all
possible productions should include all alternative world lines as these are
encompassed within valid English texts.  The state at the beginning of a 
conversation needn't be the same for every run but could be set to that which
takes into account the particular world line we experience.

I've been thinking hard about the differences between electronic
learning and cognitive learning, but I guess I am hopelessly confused.
I cannot figure out if a change in state along a static tree in
response to outside information is learning or not.  That is, having
produced this complete tree of all English conversations in 1980, the
response to "What do you think about the way Desert Storm turned out?"
changes sometime in 1991 because of the specific world line taken.  I
am very willing to call the state change learning, yet I'm not so sure
that what most chess programs do is learning.  Maybe it is short-term
learning followed by rapid forgetting.

--gary forbis@u.washington.edu

dirish@glab1.math.utah.edu (Dudley Irish) (05/22/91)

The issue raised by the game-tree-based Turing-test-playing system is
whether we would think that a system which operates according to a
simple set of rules that we can readily understand is intelligent.
This is the same question that Turing proposed his test to answer.  We
now have computer programs that are very large (indeed, a modern
operating system is probably too complex for a single person to
understand completely), but they still run on computers.  The data in
the form of executable code is very complex, but the rules followed by
the hardware when it executes are really fairly simple (and getting
simpler with the advent of RISC processors).

If you twist the original question a little you will see that you can
view the executable code of your favorite AI program as a game tree
which the hardware traverses.  It really is the same question.
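
For instance (a toy sketch in Python, not any particular AI program),
both the hardware stepping through code and the idiot box stepping
through its tree perform the same (state, input) -> (state, output)
transition over and over:

    # Toy illustration: one step function serves as both "interpreter"
    # and "game-tree walker"; only the table differs.
    def run(transitions, state, inputs):
        for symbol in inputs:
            state, output = transitions[(state, symbol)]
            print(output)
        return state

    # A two-entry table -- purely hypothetical.
    T = {("s0", "hi"): ("s1", "hello"), ("s1", "bye"): ("s0", "so long")}
    run(T, "s0", ["hi", "bye"])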

So pointing out that the game-tree-based system could never be
realized, or that it is theoretically impossible to generate the game
tree, doesn't really answer the question.  The answer has to be one of:
	1) The system is intelligent,
	2) The system is not intelligent, or
	3) The test is no good. (e.g. the test doesn't test for
				      intelligence.)

My answer would be that if such a system could be built, then it is
intelligent.  This is based on a complex set of beliefs
having to do with the inadequacies of behaviorism and the paucity of
alternatives with which I won't bore you.

I hope that this brief discussion was helpful,
--
Dudley Irish / dirish@math.utah.edu / Manager Computer Operations
Center for Scientific Computing, Dept of Mathematics, University of Utah

The views expressed in this message do not reflect the views of the
Dept of Mathematics, the University of Utah, or the State of Utah.

jbaxter@physics.adelaide.edu.au (Jon Baxter) (05/22/91)

In article <2212@seti.inria.fr> ziane@nuri.inria.fr (ziane mikal @) writes:
>
> Once again this idea of a conversation tree seems almost ridiculous, although
> it is surprisingly a bit more difficult to eliminate than one would
> think at first.
>
> However, how do you cope with numbers ? A number is correct English right ?
> How do you incorporate any possible number in your tree ?
> If I say "how much is 5 times 3 ?" for example.

We don't need to incorporate every possible number in the tree,
because humans can only handle a very small range of numbers
themselves.  Sure, the tree will need to respond correctly to "how
much is 5 times 3", but to the question "how much is 1991 times 1991"
all the tree has to reply is "I don't have a pencil and paper: I can't
work that one out."  Even if we demand that the tree behave as if it
did possess pencil and paper, there is still a limit to the size of
the calculation it can pretend to perform (ask me to calculate
"251521271185 times 1276151512" and I'll tell you to get lost!), and
so there is still a limit to the number of possibilities that need to
be encoded into the tree.
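
A finite table really can cover this.  As a sketch (in Python; the
bound of 100 is my arbitrary stand-in for what a person will do in his
head):

    # Enumerate in advance every multiplication the tree will answer
    # "mentally"; everything else gets the pencil-and-paper excuse.
    LIMIT = 100    # assumed ceiling on mental-arithmetic operands
    table = {"how much is %d times %d" % (a, b): str(a * b)
             for a in range(LIMIT + 1) for b in range(LIMIT + 1)}
    EXCUSE = "I don't have a pencil and paper: I can't work that one out."

    def answer(question):
        return table.get(question, EXCUSE)

    print(answer("how much is 5 times 3"))         # 15
    print(answer("how much is 1991 times 1991"))   # the excuse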

.....More stuff.......

> Finally we should agree about the mechanism using the tree and about the
> tree. Somebody has proposed grammars etc. If anything like that is used
> of course (and I think it was the point of this person) the system may
> produce acceptable answers because it would be intelligent !
> The first proposal was a simple table lookup, right ?

I agree that the point of this discussion is lost if we start incorporating
more complicated algorithmic procedures such as grammars into our table
look-up. Still, there is a surprising amount that can be done with a table
look-up. Even the problem of describing one's surroundings can be solved by
simply leaving certain entries in the table blank, to be filled in when more
is known about the table's environment.

Jon.

richieb@bony1.bony.com (Richard Bielak) (05/23/91)

In article <1991May17.183918.26416@psych.toronto.edu> christo@psych.toronto.edu (Christopher Green) writes:
>In article <1991May17.064714.5942@latcs2.lat.oz.au> jane@latcs2.lat.oz.au (Jane Philcox) writes:
>>Huh?  I've only heard of the test as a test of intelligence.  Have I missed
>>something somewhere?  Was it originally a test to see whether you could tell
>>males from females, and then later adapted to the intelligence area?
>>
>>References, someone?
>>
>I think the original paper's in _Mind_ 1950. It's well worth the read. Turing
>was far less zealous than some of latter-day followers.
> 
>-- 

Turing's paper "Computing Machinery and Intelligence" is reprinted in
the book "The Mind's I", a collection of essays selected by Hofstadter
and Dennett.


...richie


-- 
*-----------------------------------------------------------------------------*
| Richie Bielak  (212)-815-3072    | Programs are like baby squirrels. Once   |
| Internet:      richieb@bony.com  | you pick one up and handle it, you can't |
| Bang:       uunet!bony1!richieb  | put it back. The mother won't feed it.   |

ziane@nuri.inria.fr (ziane mikal @) (05/23/91)

In article <3348@sirius.ucs.adelaide.edu.au> 
jbaxter@adelphi.physics.adelaide.edu.au.oz.au (Jon Baxter) writes:

>We don't need to be able to incorporate any possible number in the tree
>because humans can only handle a very small range of numbers themselves.
>Sure the tree will need to respond correctly to "how much is 5 times 3",
>but to the question "how much is 1991 times 1991" all the tree has to reply
>is "I don't have a pencil and paper: I can't work that one out." Even if we
>demand that the tree behaves as if it did possess pencil and paper, there is
>still a limit to the size of the calculation it can be pretending to perform,
>(ask me to calculate "251521271185 times 1276151512" and I'll tell you to
>get lost!) and so there is still a limit to the number of possibilities that
>need to be encoded into the tree.

At least your table needs to include an entry for a sentence
mentioning 251521271185!  Or do you parse the input?
If I ask the machine "Is 98762340987234 a number ?", I hope it'll
reply "yes".  This point is only to show that you need to incorporate
more or less sophisticated techniques in your system.  Numbers are
only one example; I suspect one could show that you need very complex
techniques to cope with other examples.
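
To see the difficulty, compare the two approaches on just this family
of questions (a toy sketch; the regular expression is my stand-in for
"more or less sophisticated techniques"):

    import re

    # A pure table needs one entry per literal numeral -- an infinite
    # family.  One line of parsing answers them all, but then the
    # system is no longer a simple table lookup, which is the point.
    def answer(line):
        m = re.fullmatch(r"Is (\d+) a number\s*\?", line.strip())
        return "yes" if m else None

    print(answer("Is 98762340987234 a number ?"))   # yes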

>> Finally we should agree about the mechanism using the tree and about the
>> tree. Somebody has proposed grammars etc. If anything like that is used
>> of course (and I think it was the point of this person) the system may
>> produce acceptable answers because it would be intelligent !
>> The first proposal was a simple table lookup, right ?
>
>I agree that the point of this discussion is lost if we start incorporating
>more complicated algorithmic procedures such as grammars into our table
>look-up. Still, there is a surprising amount that can be done with a table
>look-up. Even the problem of describing ones surroundings can be solved by
>simply leaving certain entries in the table blank, to be filled in when more
>is known about the table's environment.

What do you mean?  The table would be constantly updating itself?
How?  Would an operator do that, or would it be automatic?
I definitely think this table-lookup idea is getting less and less
clear.

However, I agree that the idea is interesting.  I think we could
distill it and formulate it this way: "Maybe very fast hardware with a
huge memory, running 'stupid' software, may be as effective as much
slower hardware with intelligent software.  Maybe such a system could
even speak English like you and me."

One limited example of such a system is Deep Thought.  It plays chess
almost like a grandmaster, although it uses a rather simple algorithm.

However, the Turing Test, although not the ultimate test for
intelligence, is still very useful, because of the very low
probability that fast enough hardware will pass the test with stupid
software.

A good complement to the test, IMO, is to "open the box": after the
demo you want to know more about the way the system works.  Not so
easy with a human being, but not impossible, indirectly, if you know
the history of the system...  But all this is of course much more
complicated than the Turing Test, which is interesting precisely
because it is so simple.

Mikal.

mas@blanche.arc.ab.ca (Marc Schroeder) (05/23/91)

Turing's paper in which he originally described the "Turing
Test" was first published in _Mind_, Oct. 1950. I have a
reference here which says that this paper, and many others
like it, can be found in a collection called _Computers_and_
_Thought_, put together by Edward A. Feigenbaum and Julian
Feldman.

  Marc.

G.Joly@cs.ucl.ac.uk (Gordon Joly) (05/25/91)

In this thread Kuhn's work has been cited. Popper (?) has suggested
that physics is a method of constructing models that describe the
physical world. The model and reality are always distinct. The model
approaches reality, as Newton was "falsified" when Einstein came
along. Special Relativity improves on Newton and, with Quantum
Mechanics, gave some predictive power, e.g. muon decay. Now superstring
theory is poised to make General Relativity and Quantum Theory
"false"; well, it will unify the two, since a Quantised General
Relativity Theory has eluded scientists.

If we form AI models, they will only be approximations to (any) real
intelligence. This will never change, no matter how good they are.

So the Turing Test will always fail, given time. However, the Test
could prove of some use in the way that Eliza (Doctor) did - "you can
fool some of the people some of the time".

The Turing Test Quotient (TTQ) is a metric based on the amount of time
before 1000 (say) people can spot that it is a computer.  This would
be a log-scale measure: an average of 5 minutes is 1 unit, an average
of 50 minutes is 2 units, and so on.

I guess a 3 unit computer might be of some real use.
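
On one reading of the scale (my formula; the half-minute anchor
follows from "5 mins is 1 unit" on a base-10 log scale):

    import math

    def ttq(avg_minutes_to_detection):
        # 5 min -> 1 unit, 50 min -> 2 units: one unit per tenfold
        # increase in the average time before detection.
        return math.log10(avg_minutes_to_detection / 0.5)

    print(ttq(5))     # 1.0
    print(ttq(50))    # 2.0
    print(ttq(500))   # 3.0 -- over eight hours before detection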

____

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

                      No more pork sausages!

forbis@milton.u.washington.edu (Gary Forbis) (05/26/91)

In article <1575@ucl-cs.uucp> G.Joly@cs.ucl.ac.uk (Gordon Joly) writes:
>
>If we form AI models, they will only be approximations to (any) real
>intelligence. This will never change, no matter how good they are.

I always took artificial to mean human-made.  If you believe AI is an
attempt to model intelligence, then your argument might make some
sense.  Birds fly and airplanes fly, though one is natural and one is
artificial.  I would never think to talk about airplanes modeling
flight.


--gary forbis@u.washington.edu

ziane@nuri.inria.fr (ziane mikal @) (05/27/91)

In article <1575@ucl-cs.uucp> G.Joly@cs.ucl.ac.uk (Gordon Joly) writes:
>
>In this thread Kuhn's work has been cited. Popper (?) has suggested
>that physics is a method of constructing models that describe the
>physical world. The model and reality are always distinct. The model
>approaches reality, as Newton was "falsified" when Einstein came
>along. Special Relativity improves on Newton and with Quantum
>Mechanics gave some predictive power, eg muon decay. Now superstring
>theory is poised to make General Relativity and Quantum Theory
>"false"; well it will unify the two, since a Quantised General
>Relativity Theory has eluded scientists.

You can have a complete model of a part of reality!  Actually, a model
of reality only makes sense for a subject who is interested in
something.  Thus, depending on what the subject is interested in, the
model could be considered complete.

>If we form AI models, they will only be approximations to (any) real
>intelligence. This will never change, no matter how good they are.

It depends on what interests you in intelligence. There is no absolute
definition of intelligence.

>
>So the Turing Test will always fail, given time. However, the Test
>could prove of some use in the way that Eliza (Doctor) did - "you can
>fool some of the people some of the time".

This is an extremely surprising conclusion!
You assume that there "exists" a "real intelligence" and that passing
the Turing Test requires this "real intelligence"!
What about this:
In order to play chess like an International Master (IM) you need to
possess the "ability of an IM".  This ability cannot, of course, be
modelled completely.
Thus you will never have computers playing at such a level.
Thus Deep Thought does not exist!

I was kind; I could have replaced IM by 1600-rated and made your own
chess computer vanish!

Mikal.

clin@eng.umd.edu (Charles Chien-Hong Lin) (05/28/91)

In article <1744@anaxagoras.ils.nwu.edu>, will@aristotle.ils.nwu.edu (William Fitzgerald) writes:
> I'm reading a book called _The Vastness of Natural Languages_ by
> Langendoen and Postal, in which they claim/prove that no 
> natural language is recursively enumerable.  Accepting 
> this as true, this means there is no Turing Machine which can
> be built to recognize the sentences of a natural language.

  Assuming Church's thesis is true, that is.

--
   ____         _
  /    |     __|_|       clin@eng.umd.edu
 |             |         
 |  harles    |  in      "University of Maryland Institute of Technology"
 |          _|
  \_____/  |_|\___/      
       

clin@eng.umd.edu (Charles Chien-Hong Lin) (05/28/91)

In article <91138.123053DOCTORJ@SLACVM.SLAC.STANFORD.EDU>, DOCTORJ@SLACVM.SLAC.STANFORD.EDU (Jon J Thaler) writes:
> In article <1744@anaxagoras.ils.nwu.edu>, will@aristotle.ils.nwu.edu (William
> Fitzgerald) says:
> 
> >I'm reading a book called _The Vastness of Natural Languages_ by
> >Langendoen and Postal, in which they claim/prove that no
> >natural language is recursively enumerable.  Accepting
> >this as true, this means there is no Turing Machine which can
> >be built to recognize the sentences of a natural language.
> 
> It's interesting to turn this around and ask whether human intelligence
> can recognize (all of) the sentences of a natural language.

Considering that what constitutes a sentence (or even a sentence
fragment) varies from person to person, the task might not be
achievable (think of slang).

--
   ____         _
  /    |     __|_|       clin@eng.umd.edu
 |             |         
 |  harles    |  in      "University of Maryland Institute of Technology"
 |          _|
  \_____/  |_|\___/      
       

krista@sandman.hut.fi (Krista Hannele Lagus) (05/28/91)

In article <1991May16.005158.1822@athena.mit.edu> patl@athena.mit.edu (Patrick J. LoPresti) writes:

>The Turing Test serves to detect intelligence; it gives no guarantee as
>to where that intelligence lies.  Consider the fact that a two-way radio
>passes the TT.  The TT correctly detects the presence of an
>intelligence; it is your own error if you attribute that intelligence to
>the radio.  For the case of Block's device, the intelligence which the
>TT detects is that of the creators of the list.
>
>The only question remaining is, when a computer DOES pass the TT, will
>we be encountering the intelligence of the machine, the creator(s), or
>both?  I suspect the creator(s) would then be in the best position to
>decide...

A very good example and question.

The next step would be to extend this example to humans.  When I talk
to my friend, who am I *really* talking with--his creator?  Who is the
creator of the intelligence in us?  Genes?  Doesn't sound likely, at
least for the most part.  Our surroundings?  I'd go for that one,
although it may sound a little strange.  Perhaps, whenever we converse
with someone, we are conversing with some part of reality: the
specific part that the individual has encountered, combined with the
person's physical structure.  So, where is the intelligence?  I am not
very familiar with Spinoza, but he might have something to say about
this... other than that, I have no clue.  Anyone?

>-Pat LoPresti  (patl@athena.mit.edu)

Krista

ziane@nuri.inria.fr (ziane mikal @) (05/28/91)

In article <1991May14.140005.14956@athena.mit.edu> mlevin@athena.mit.edu (Mike Levin) writes:
>In article <1991May14.031103.2624@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>>
>>N. Block, "Psychologism and Behaviorism", Philosophical Review 90:5-43, 1981.
>>
>>This is about precisely the scenario that you imagine.  A long, thorough,
>>and interesting article -- definitely good value.  Block draws the conclusion
>>that the TT is too behaviourist to serve as a sufficient criterion for
>>intelligence.

(Sorry to cross-reference.)
In any case it cannot be a criterion for intelligence.  It's only a
nice test for a given individual to become convinced that the machine
is somehow intelligent.  I would certainly not accept somebody else
talking with the machine and then deciding for himself that the
machine is "objectively" intelligent.  I would like to do it myself!
Also, I would demand excellent proof that the conditions of the test
are respected (e.g., that there is no radio, as mentioned in another
article).

I think the behaviourist aspect of the TT is more a problem of
convincing people than a problem of defining intelligence.  In order
to convince someone of something extraordinary you need extraordinary
arguments (e.g. excellent proof that the conditions of the test are
respected).  It is also very useful to show people why they were wrong
to think, a priori, that the result is impossible.  One way to do this
is to "open the box" and explain how the machine thinks.

Except for this practical problem of convincing people, I don't see
why a definition of intelligence could not be behaviorist.  I can only
see that such a definition would be much more precise than a
definition taking into account the process used.  The reason is that
when a behaviorist experiment has succeeded, you have little ground
for generalizing the results if you do not make assumptions about the
way those results are produced.  In other words, a behaviorist
experiment can only be a direct proof, which can be problematic for a
complex phenomenon, but I don't see why it cannot be a proof.

> While I agree, this example is obviously ridiculously 
>implausible, I think that it really doesn't matter.  The mere
>possibility of *some* technical advance providing a way to fake true
>intelligence throws the door open to the same criticism being applied
>to successful future AI projects.
>
>Mike Levin

(Switching to Mike:)
By "successful" I guess you mean projects that have only proved their
success by passing the TT.  I guess no real AI project will be judged
only on such grounds.

Mikal.

G.Joly@cs.ucl.ac.uk (Gordon Joly) (05/29/91)

ziane mikal writes:
 > In article <1575@ucl-cs.uucp> G.Joly@cs.ucl.ac.uk (Gordon Joly) writes:
 > >
 > >In this thread Kuhn's work has been cited. Popper (?) has suggested
 > >that physics is a method of constructing models that describe the
 > >physical world. The model and reality are always distinct. The model
 > >approaches reality, as Newton was "falsified" when Einstein came
 > >along. Special Relativity improves on Newton and with Quantum
 > >Mechanics gave some predictive power, eg muon decay. Now superstring
 > >theory is poised to make General Relativity and Quantum Theory
 > >"false"; well it will unify the two, since a Quantised General
 > >Relativity Theory has eluded scientists.
 > 
 > You can have a complete model of a part of reality ! 

Which part of reality has a complete model? An exact model?

____

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

                        Drop a utensil.

DJG3@psuvm.psu.edu (05/29/91)

The model/reality distinction looks to be another version of Searle's
argument that AI systems merely simulate intelligence and do not
instantiate it (though I'm not sure about what you're after in
suggesting that AI models can only *approximate* genuine intelligence).
One difference between AI models and the physics models to which you
refer is that AI models--certain of them at any rate--can be run.  What
Searle has no idea about in claiming that AI simulations are missing
essential *biological* features of genuine intelligence is just what
sorts of biological phenomena are essential to thought; without these
it's hard to fathom his conviction about the missing stuff being
essentially biological.  If AI models--running ones--cannot have the
right stuff (or if, as mere approximations, they cannot have the
right values) then what exactly is missing, or holding them back?
I'm just curious, here.  I've got no positive argument on behalf of
any extant systems.  (BTW--it's not just the fact of their being models,
right? Models can be exemplars, examples of the things they're models
of)

D. Gilman, Penn State, College of Medicine

G.Joly@cs.ucl.ac.uk (Gordon Joly) (05/29/91)

D. Gilman <DJG3@psuvm.psu.edu> writes
>> The model/reality distinction looks to be another version of Searle's
>> argument that AI systems merely simulate intelligence and do not
>> instantiate it (though I'm not sure about what you're after in
>> suggesting that AI models can only *approximate* genuine intelligence).

All models are "inexact"; they must be falsifiable.  Newton's gravity
is all that is needed for terrestrial calculation.  Some experiments
have been performed on Earth, but most are done in space.  The
Einstein view of gravity, General Relativity (GR), is rarely apparent
or needed; it did, however, account for the precession of the
perihelion of Mercury.  GR has Newtonian gravity as its
low-gravitational-field limit.  Philosophically, however, the two are
poles apart.

>> One difference between AI models and the physics models to which you
>> refer is that AI models--certain of them at any rate--can be run.  What
>> Searle has no idea about in claiming that AI simulations are missing
>> essential *biological* features of genuine intelligence is just what
>> sorts of biological phenomena are essential to thought; without these

Penrose claims it is the quantum effects of a real, very compact
bio-system like the brain that give (human) intelligence and
self-awareness.

>> it's hard to fathom his conviction about the missing stuff being
>> essentially biological.  If AI models--running ones--cannot have the
>> right stuff (or if, as mere approximations, they cannot have the
>> right values) then what exactly is missing, or holding them back?

Good question...

>> I'm just curious, here.  I've got no positive argument on behalf of
>> any extant systems.  (BTW--it's not just the fact of their being models,
>> right? Models can be exemplars, examples of the things they're models
>> of)
>> 
>> D. Gilman, Penn State, College of Medicine

Fractals pop up all over the place--coastlines and so on--yet they are
still models.  Therefore, I am at a loss to see the last point.  Take
also the roots and discriminant of cubic equations: they popped up in
my GR research and also in catastrophe theory.  Predator-prey models
can be applied outside their original field of socio-biology.

Here is a quotation from the Editor's Introduction to

%A John Von Neumann
%T Theory of self-reproducing automata 
%E Arthur W. Burks
%C Urbana
%I University of Illinois Press
%D 1966
%P 388

``The scope of the theory of automata and its interdisciplinary
character are revealed by a consideration of the two main types of
automata: the artificial and the natural. Analog and digital computers
are the most important, but other man-made systems for the
communication and processing of information are also included, for
example telephone and radio systems. Natural automata include nervous
systems, self-reproductive and self-repairing systems, and the
evolutionary and adaptive aspects of organisms.

``Automata theory clearly overlaps communications and control
engineering on the one hand, and biology on the other. In fact,
artificial and natural automata are so broadly defined that one can
legitimately wonder what keeps automata theory from embracing both
these subjects. Von Neumann never discussed this question, but there
are limits to automata theory implicit in what he said.''
____

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

                        Drop a utensil.

DJG3@psuvm.psu.edu (05/30/91)

In article <1577@ucl-cs.uucp>, G.Joly@cs.ucl.ac.uk (Gordon Joly) says:
>
>D. Gilman <DJG3@psuvm.psu.edu> writes
>>> The model/reality distinction looks to be another version of Searle's
>>> argument that AI systems merely simulate intelligence and do not
>>> instantiate it (though I'm not sure about what you're after in
>>> suggesting that AI models can only *approximate* genuine intelligence).
>
>All models are "inexact"; they must be falsifiable. Newton's gravity
>all that is needed for terrestrial calculation. Some experiments have
>been performed, but most are done in space. The Einstein view of
>gravity, General Relativity (GR) is rarely apparent or needed; it did
>however give reason for the precession of the perihelion of Mercury.
>There is the lower gravitational field Newtonian limit to GR.
>Philosophically however, they are poles apart.
>
What's the connection between models being inexact and being falsifiable?
I take it that the first point has to do with the fact that models
typically are idealized stand-ins for complex, variable or otherwise
difficult to observe real-world phenomena.  Or are you just thinking
that models can only be based upon measurements accurate to some degree
of specificity?  My problem here is one of not knowing what the measure
is supposed to be for intelligence.  Aren't we thinking of something
like rough performance standards for disparate problems posed, and
tasks taken up, in different environments?  And couldn't a model meet
these sorts of strictures (allowing that for actual instantiation of
intelligence we'll at least need to add some ability to run the model on
a system which provides for some sort of interface with a larger world,
and not just a model) just by being in the right ballpark and not by
perfectly matching some elusive particular values of genuine
intelligence?  The second point--falsifiability--has to do with our
wanting models to be subject to empirical tests.  Here we want only
the potential of bad fit with data, not the necessity of bad fit due
to inexactitude inherent to modeling, no?  And lots of models don't
seem falsifiable per se;  we're frequently more concerned with criteria
such as accuracy and utility--and these come in degrees--than with
truth or falsehood.

>>> One difference between AI models and the physics models to which you
>>> refer is that AI models--certain of them at any rate--can be run.  What
>>> Searle has no idea about in claiming that AI simulations are missing
>>> essential *biological* features of genuine intelligence is just what
>>> sorts of biological phenomena are essential to thought; without these
>
>Penrose claims it is the quantum effects of a real, very compact,
>bio-system like the brain that gives (human) intelligence/self-awareness.
>
I haven't read P's book.  Does he have an account of how the quantum
effects in such a system might give rise to (human) intelligence and
self-awareness or is he just stuck with a fancier (or micro) version
of Searle's problem (something like, I'm convinced that the difference
is right here but I don't know why)?

>>> it's hard to fathom his conviction about the missing stuff being
>>> essentially biological.  If AI models--running ones--cannot have the
>>> right stuff (or if, as mere approximations, they cannot have the
>>> right values) then what exactly is missing, or holding them back?
>
>Good question...

Having already used a ton o' space I'll leave off the last part.  I
don't think I understand your response to my remark about models and
exemplars.  It's probably not important but I'd be happy to try again
if you want to pitch it a different way.
D. Gilman

jan@cs.umu.se (Jan Tångring) (06/07/91)

In article <1991May16.005158.1822@athena.mit.edu> patl@athena.mit.edu (Patrick J. LoPresti) writes:
>Suppose I construct a list of all
>conversations that I feel you and I might have for the next few minutes.
>(Let's see. You say, "hello", then I say, "hi," then you say...)  I then
>compile this into a large search tree, then allow you to communicate
>with it by tty.  With whom are you conversing?
>
>With me, obviously.  All I have done is pre-record my responses.  It is
>just a rather strange communication medium.
>
>The Turing Test serves to detect intelligence; it gives no guarantee as
>to where that intelligence lies.  Consider the fact that a two-way radio
>passes the TT.  The TT correctly detects the presence of an
>intelligence; it is your own error if you attribute that intelligence to
>the radio.  For the case of Block's device, the intelligence which the
>TT detects is that of the creators of the list.

I think that Block tries to guard himself against this
counter-argument by arguing that the program might suddenly come into
existence by pure chance--in other words, no intelligence involved.
In the two-way radio example, you might just be talking to random
noise on the same channel that happens to sound like a voice
responding sensibly.

Does anyone have a comment on this?

>-Pat LoPresti  (patl@athena.mit.edu)
  _____________ ________________
 /            //               /\email:jan@cs.umu.se
/____________//_______________/ /
\________    \\_____     _____\/ mail:Jan Tangring
  /____/ \    \  /  \    \   \        Mariehemsvagen 15E-409
  \    \/_\    \/    \    \  /        S-902 36 UMEA
   \___________/      \____\/         Sweden