[comp.ai.philosophy] If it does not pass TT it is not intelligent????

yanek@panix.uucp (Yanek Martinson) (06/17/91)

Performance of a system on the TT shows how human it is, but not necessarily
how intelligent. For example, if some martians arrived and somehow learned
our language, they most likely could not pass the TT, since they would
most likely be very different from humans and easy to distinguish. Would
that mean they are less intelligent? Or that they are not intelligent at all?
Is there any other, more objective test that tests for intelligence, not
for similarity to human beings?

me@csri.toronto.edu (Daniel R. Simon) (06/18/91)

In article <1991Jun17.064232.2536@panix.uucp> yanek@panix.uucp (Yanek Martinson) writes:
>Performance of a system on the TT shows how human it is, but not necessarily
>how intelligent. For example, if some martians arrived and somehow learned
>our language, they most likely could not pass the TT, since they would
>most likely be very different from humans and easy to distinguish. Would
>that mean they are less intelligent? Or that they are not intelligent at all?
>Is there any other, more objective test that tests for intelligence, not
>for similarity to human beings?

A similar problem arises with regard to rocks, all of which (beyond a certain
size, at least) are in fact prodigiously intelligent, yet, because of their
extreme natural lethargy, invariably easily distinguishable from humans in a
"Turing Test" setting.


"There *is* confusion worse than death"		Daniel R. Simon
			     -Tennyson		(me@theory.toronto.edu)

sjb@piobe.austin.ibm.com (Scott J Brickner) (06/19/91)

Would the martians necessarily fail the TT?  I agree that they may not
seem like normal humans, but that doesn't mean that they'd fail the
test... only that it might be somewhat more difficult for them.  This is
much like the arguments that low-income inner-city minority children
score lower on "intelligence" tests because the tests are biased
against them (i.e. they ask questions about material which is not within
their experience).  Consider a TT in which the subject is a moderately
autistic adult (Raymond from Rain Man?)... I think he should be
considered intelligent, but would clearly be distinguishable from a
"normal human"... by your standards, his distinguishability would mark
him as not even human!

Presumably the tester is going to be putting questions to the subject
to which a non-intelligent being could not respond sensibly, but to
which an intelligent being could (although perhaps with some
difficulty) respond.  This sounds like a difficult task, and may
indicate that the TT is itself impossible.  Suppose someone came up with
a "super-eliza" program - one that could handle, say, one thousand times
the range of patterns of the original one (much like a character in
Christopher Stasheff's "King Kobold Revisited").  One could expect it to
perform at least as well as the "martian" or "autistic" subject.  Is it
intelligent?  I think that the original intent of the TT was to exclude
this sort of intelligence, but include all forms of "natural" intelligence.
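
For concreteness, here is a minimal sketch in C of the kind of
keyword-matching machinery involved.  It is entirely made up for
illustration: the real Eliza used richer decomposition-and-reassembly
rules, and the keywords and replies below are invented.

    /* Toy Eliza-style keyword matcher -- an illustrative sketch only.
       Each rule maps a keyword to a canned reply; the real Eliza
       transformed the user's own words, but the principle is similar. */
    #include <stdio.h>
    #include <string.h>

    struct rule {
        const char *keyword;    /* substring to look for in the input */
        const char *response;   /* canned reply when the keyword hits */
    };

    static const struct rule rules[] = {
        { "mother",   "Tell me more about your family."      },
        { "computer", "Do computers worry you?"              },
        { "dream",    "What does that dream suggest to you?" },
        { NULL,       "Please go on." }  /* default when nothing matches */
    };

    static const char *reply(const char *input)
    {
        const struct rule *r;
        for (r = rules; r->keyword != NULL; r++)
            if (strstr(input, r->keyword) != NULL)
                return r->response;
        return r->response;     /* fell through to the default rule */
    }

    int main(void)
    {
        char line[256];
        while (fgets(line, sizeof line, stdin) != NULL)
            printf("%s\n", reply(line));
        return 0;
    }

A "super-eliza" would just have a vastly larger rule table (and cleverer
matching), which changes the breadth of the trick but not its nature.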

I think that before we can continue arguments about intelligence, we
need to really evaluate what we MEAN by the term.

Any suggestions?

Scott J Brickner, thinker.

minsky@media-lab.media.mit.edu (Marvin Minsky) (06/19/91)

Please, Turing never meant the TT to be Necessary for people to
recognize something as intelligent.  It was only intended to be a
Sufficient condition.  And it was not to define intelligence, but only
to propose a situation in which non-critical people would usually agree.

jbaxter@physics.adelaide.edu.au (Jon Baxter) (06/19/91)

In article <1991Jun18.220932.22904@news.media.mit.edu>
minsky@media-lab.media.mit.edu (Marvin Minsky) writes:

> Please, Turing never meant the TT to be Necessary for people to
> recognize something as intelligent.  It was only intended to be a
> Sufficient condition.  And it was not to define intelligence, but only
> to propose a situation in which non-critical people would usually agree.

Then what use is the Turing test? Sufficiently non-critical people think
that Eliza is intelligent, but anyone with computing knowledge would disagree.
Did Turing really mean for the people in his test to be non-critical?

Jon Baxter.

minsky@media-lab.media.mit.edu (Marvin Minsky) (06/19/91)

In article <3727@sirius.ucs.adelaide.edu.au> jbaxter@adelphi.physics.adelaide.edu.au.oz.au (Jon Baxter) writes:
>In article <1991Jun18.220932.22904@news.media.mit.edu>
>minsky@media-lab.media.mit.edu (Marvin Minsky) writes:
>
>> Please, Turing never meant the TT to be Necessary for people to
>> recognize something as intelligent.  It was only intended to be a
>> Sufficient condition.  And it was not to define intelligence, but only
>> to propose a situation in which non-critical people would usually agree.
>
>Then what use is the Turing test? Sufficiently non-critical people think
>that Eliza is intelligent, but anyone with computing knowledge would disagree.
>Did Turing really mean for the people in his test to be non-critical?

It isn't any use at all, so far as I know.  Turing was addressing the
problem that people, because they have the word "intelligent", think
there must be a thing that corresponds to it, and they want a
definition that will help them recognize that thing.  So Turing,
observing that they couldn't agree, suggested his "test" as a
sufficient condition: if people couldn't distinguish, over the phone,
between a person and computer X, then they could probably agree that
the computer must be intelligent.

So, yes, he meant for the people to be uncritical.  Do you think Eliza
is more or less intelligent than an ant?  Do you think something is
either intelligent or not?  Shame on you for wasting your intelligence
on such silly matters.  My point is that the "critical" people seem
just as foolish because, in my view, there isn't any such thing as
"intelligence" or "intentionality" of any of those things.  They're
all relative...

dave@tygra.Michigan.COM (David Conrad) (06/19/91)

In article <1991Jun17.064232.2536@panix.uucp> yanek@panix.uucp (Yanek Martinson) writes:
>Performance of a system on the TT shows how human it is, but not necessarily
>how intelligent. For example, if some martians arrived and somehow learned
>our language, they most likely could not pass the TT, since they would
>most likely be very different from humans and easy to distinguish. Would
>that mean they are less intelligent? Or that they are not intelligent at all?
>Is there any other, more objective test that tests for intelligence, not
>for similarity to human beings?

The Turing Test does not test for intelligence.  At a literal level it tests
for a specific ability, the ability to mimic human answers to questions,
which we may hope requires at least some kind of 'intelligence'.

But more importantly, it points out that physical form is not really an
indicator of intelligence.  Do you think that I am intelligent?  It's
probably safe to assume that you've never seen me.  You don't know if I'm
short, or tall, or a program running on a Cray Y-MP.  The real point of
the Turing Test is to demonstrate that we really consider behaviour more
than form when judging intelligence.  And that we should.

David R. Conrad
dave@michigan.com
-- 
=  CAT-TALK Conferencing Network, Computer Conferencing and File Archive  =
-  1-313-343-0800, 300/1200/2400/9600 baud, 8/N/1. New users use 'new'    - 
=  as a login id.  AVAILABLE VIA PC-PURSUIT!!! (City code "MIDET")        =
   E-MAIL Address: dave@Michigan.COM

G.Joly@cs.ucl.ac.uk (Gordon Joly) (06/19/91)

>> From:    me@csri.toronto.edu (Daniel R. Simon)
>> A similar problem arises with regard to rocks, all of which (beyond a certain
>> size, at least) are in fact prodigiously intelligent, yet, because of their
>> extreme natural lethargy, invariably easily distinguishable from humans in a
>> "Turing Test" setting.
>> 
>> 
>> "There *is* confusion worse than death"		Daniel R. Simon
>> 			     -Tennyson		(me@theory.toronto.edu)

Indeed; there is an article in the latest New Scientist about the memory
of sand.
____

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

                    Order is paramount in anarchy.

thomas@ckgp.UUCP (Michael Thomas) (06/20/91)

In article <1991Jun19.050512.27413@news.media.mit.edu>, minsky@media-lab.media.mit.edu (Marvin Minsky) writes:
> In article <3727@sirius.ucs.adelaide.edu.au> jbaxter@adelphi.physics.adelaide.edu.au.oz.au (Jon Baxter) writes:
> >In article <1991Jun18.220932.22904@news.media.mit.edu>
> >minsky@media-lab.media.mit.edu (Marvin Minsky) writes:
> >
> >> Please, Turing never meant the TT to be Necessary for people to
> >> recognize something as intelligent.  It was only intended to be a
> >> Sufficient condition.  And it was not to define intelligence, but only
> >> to propose a situation in which non-critical people would usually agree.

    I disagree that the test is a condition that must be met for
a computer to be intelligent (let us not get lost on intelligence again...)
I also feel that the TT itself makes the judge critical: you are told one
is a person and one is a machine, so which is which?  The test shouldn't be
posed that way, because you steer your questions toward that goal.  (I know
that is the idea!)  But what if the person were told that one subject is
male and one is female, as in the original party game?  Then if the person
said that one of them sounds like a computer or doesn't make any sense, you
would know it failed.  But if the person said the computer was male or
female, then the computer would pass! 8^)

> >Then what use is the Turing test? Sufficiently non-critical people think
> >Did Turing really mean for the people in his test to be non-critical?
   
   I think his major goal with the test was for the computer to be able
to interact with people comfortably, and if this was to happen it
should act and respond just like a person.  Now you and I know that AI
has since developed into something greater and contains more aspects
than just human interaction.  (After all, in the olden days of AI,
wasn't that the point?)

> It isn't any use at all, so far as I know.  Turing was addressing the
> problem that people, because they have the word "intelligent", think
> there must be a thing that corresponds to it, and they want a

   I feel that that is more a problem of today, and why we all have such
a problem with the TT.  WE know that intelligence is more than just
natural language processing or simple programs like Eliza.  WE also
understand (or at least I hope that we do?) that true AI will contain
a richer, or a better word might be DIFFERENT, intelligence than human
intelligence, AND still not one worse than human intelligence.  (I would
have just said better, but people do feel better and smarter than
everything else in the world, and always will... 8-( )

> So, yes, he meant for the people to be uncritical.  Do you think Eliza
> is more or less intelligent than an ant?  Do you think something is

   I think Eliza is different from an ant, but the average joe on the
street might in fact say YES, it does at times seem intelligent; but I
of course know the trick behind Eliza, as we all do. 8^)

	I THINK that the TT is outdated.  The other day I saw a show
which was exploring the fact that dolphins communicate with their
sonar -- wait a sec, not language as we think of language, but pure
images.  So since their language is (or might be) so different from
ours, is it worse?  I say no: they can convey a pure idea to large
groups over vast distances and have complete, or the next best thing
to complete, understanding.
(Or at least I think so.  If I could in one shot give all of you idea "A"
 and you understood the whole idea, then the task of breaking it down
 in our system of language would be removed.  You would all understand "A"
 and the whole idea behind it, and that I meant "A"... so back to the
 real point.)  So then, is a dolphin's system of language or the human
system of language better, more advanced, more INTELLIGENT?  Isn't
this the same thing with intelligence?  The test should be something
more focused on what it means to be intelligent, not on matching our
guidelines of intelligence.  Agreed????? 8^)

Thanks for listening....

-- 
Thank you,
Michael Thomas            
(..uunet!ckgp!thomas)

jbaxter@physics.adelaide.edu.au (Jon Baxter) (06/20/91)

In article <1991Jun19.050512.27413@news.media.mit.edu>
minsky@media-lab.media.mit.edu (Marvin Minsky) writes:

> In article <3727@sirius.ucs.adelaide.edu.au> jbaxter@adelphi.physics.adelaide.edu.au.oz.au (Jon Baxter) writes:
>>Then what use is the Turing test? Sufficiently non-critical people think
>>that Eliza is intelligent, but anyone with computing knowledge would disagree.
>>Did Turing really mean for the people in his test to be non-critical?
>
> It isn't any use at all, so far as I know.  Turing was addressing the
> problem that people, because they have the word "intelligent", think
> there must be a thing that corresponds to it, and they want a
> definition that will help them recognize that thing.  So Turing,
> observing that they couldn't agree, suggested his "test" as a
> sufficient condition: if people couldn't distinguish, over the phone,
> between a person and computer X, then they could probably agree that
> the computer must be intelligent.

So you are claiming that Turing, in devising his test, was defending the
view that the only reasonable definition of intelligence is a behavioural
one.

>
> So, yes, he meant for the people to be uncritical.  Do you think Eliza
> is more or less intelligent than an ant?  Do you think something is
> either intelligent or not?  Shame on you for wasting your intelligence
> on such silly matters.  My point is that the "critical" people seem
> just as foolish because, in my view, there isn't any such thing as
> "intelligence" or "intentionality" of any of those things.  They're
> all relative...

A behavioural definition of intelligence is fine for most practical purposes.
In the same way, data sheets for transistors are all that's needed when
building circuits. But we don't stop trying to understand how transistors work
just because we know how they behave, and in the same way I don't see why
we should stop trying to understand the nature of intelligence even if we
know how to use it. "Understanding the nature of intelligence" may have less
credibility than "Understanding the nature of transistors"; philosophy being
less credible than physics, but that is only my opinion. Besides, claiming
"there isn't any such thing as `intelligence' or `intentionality'..." is a
philosophical standpoint in itself, which you have to justify. And once you
start arguing for your position, you'll find yourself embroiled in the kind
of philosophical questions you deride above.

Jon Baxter.

ISSSSM@NUSVM.BITNET (Stephen Smoliar) (06/20/91)

In article <1991Jun19.050512.27413@news.media.mit.edu>
minsky@media-lab.media.mit.edu (Marvin Minsky) writes:
>In article <3727@sirius.ucs.adelaide.edu.au>
>jbaxter@adelphi.physics.adelaide.edu.au.oz.au (Jon Baxter) writes:
>>In article <1991Jun18.220932.22904@news.media.mit.edu>
>>minsky@media-lab.media.mit.edu (Marvin Minsky) writes:
>>
>>> Please, Turing never meant the TT to be Necessary for people to
>>> recognize something as intelligent.  It was only intended to be a
>>> Sufficient condition.  And it was not to define intelligence, but only
>>> to propose a situation in which non-critical people would usually agree.
>>
>>Then what use is the Turing test? Sufficiently non-critical people think
>>that Eliza is intelligent, but anyone with computing knowledge would
>>disagree.
>>Did Turing really mean for the people in his test to be non-critical?
>
>It isn't any use at all, so far as I know.  Turing was addressing the
>problem that people, because they have the word "intelligent", think
>there must be a thing that corresponds to it, and they want a
>definition that will help them recognize that thing.  So Turing,
>observing that they couldn't agree, suggested his "test" as a
>sufficient condition: if people couldn't distinguish, over the phone,
>between a person and computer X, then they could probably agree that
>the computer must be intelligent.
>
Since I do not have the paper in front of me, I shall have to rely on my
memory.  However, the reading of the paper that I recall does not quite
align with Minsky's (although it is very close).  Unless I am mistaken,
Turing uses his opening paragraphs to argue that it is a waste of time
to consider a question as naive as "Can a machine think?"  Therefore,
in the interest of being more productive, he introduces his "Imitation
Game" as a more realistic arena for investigation.  In other words he
replaces the intelligence question with that of whether or not a machine
could play the Imitation Game well enough that the other player would not
recognize it as a machine.  He then devotes the rest of the paper to arguing
why it is feasible that this would eventually be the case.

The paper itself, by the way, is a model of simplicity and elegance, laced
with just the right amount of imagination and humor.  If the paper had been
written poorly or in obscure language, I would understand why so many people
would be more inclined to accept second-hand accounts of what Turing said.
The truth, however, is that very few of those second-hand accounts tell the
story as well as Turing did;  and, as we keep being reminded by articles on
this bulletin board, those second-hand accounts seem to beget some utterly
silly ideas as to what Turing was all about.

===============================================================================

Stephen W. Smoliar
Institute of Systems Science
National University of Singapore
Heng Mui Keng Terrace, Kent Ridge
SINGAPORE 0511

BITNET:  ISSSSM@NUSVM

"He was of Lord Essex's opinion, 'rather to go an hundred miles to speak with
one wise man, than five miles to see a fair town.'"--Boswell on Johnson

minsky@media-lab.media.mit.edu (Marvin Minsky) (06/20/91)

In article <3737@sirius.ucs.adelaide.edu.au> jbaxter@adelphi.physics.adelaide.edu.au.oz.au (Jon Baxter) writes:
>In article <1991Jun19.050512.27413@news.media.mit.edu>
>minsky@media-lab.media.mit.edu (Marvin Minsky) writes:

>So you are claiming that Turing, in devising his test, was defending the
>view that the only reasonable definition of intelligence is a behavioural
>one.
>>
>A behavioural definition of intelligence is fine for most practical purposes.
>In the same way, data sheets for transistors are all that's needed when
>building circuits. But we don't stop trying to understand how transistors work
>just because we know how they behave, and in the same way I don't see why
>we should stop trying to understand the nature of intelligence even if we
>know how to use it.

Sheesh, what's going on in this group?  Yes, I was saying that it was
a bad idea to try to DEFINE intelligence.  And I think that was
Turing's position as well.  Yes.

On the other side I agree with you completely.  Yes, we should go all
out to understand its nature.  That's my full time job and, I presume
yours.  And so far as I can see, defining "intelligence" is the worst
way to proceed because you can't get very far in defining things until
AFTER you understand them.  That's all I meant.  

If you do this, you might be surprised how useful it is.  I just
word-searched "The Society of Mind" and found only four occurrences of
"intelligence" in its common sense usage, all in discussions of
common sense psychology.  Yes, we humans possess a colossal
constellation of capabilities.  No, trying to describe them all in a very
few words -- which is all people seem to mean by "defining" -- seems
to have no particular utility in, as you put it, "trying to understand
the nature of intelligence".

G.Joly@cs.ucl.ac.uk (Gordon Joly) (06/20/91)

jbaxter@adelphi.physics.adelaide.edu.au.oz.au (Jon Baxter) writes
> 
> In article <1991Jun18.220932.22904@news.media.mit.edu>
> minsky@media-lab.media.mit.edu (Marvin Minsky) writes:
> 
> > Please, Turing never meant the TT to be Necessary for people to
> > recognize something as intelligent.  It was only intended to be a
> > Sufficient condition.  And it was not to define intelligence, but only
> > to propose a situation in which non-critical people would usually agree.
> 
> Then what use is the Turing test? Sufficiently non-critical people think
> that Eliza is intelligent, but anyone with computing knowledge would disagree.
> Did Turing really mean for the people in his test to be non-critical?
> 
> Jon Baxter.


There is an in-between proposal: the Turing Test Quotient.  The TTQ is
a logarithmic measure of the time taken by an average set of people to
uncover which is the machine.

Eliza fools for a maximum of about 0.5 units, I guess (1 unit = 6
minutes, 2 units = 60 minutes, 3 units = 600 minutes, etc.).
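
Read that way, 1 unit = 6 minutes and each further unit is a tenfold
longer deception, so TTQ = log10(minutes-to-unmask / 0.6).  A few lines
of C (a sketch of this reading, not any standard formula) make the
scale concrete; my 0.5-unit guess for Eliza works out to being unmasked
in roughly two minutes:

    /* Turing Test Quotient, assuming the scale sketched above:
       1 unit = 6 minutes, each further unit tenfold longer, hence
       TTQ = log10(minutes / 0.6).  Compile with -lm. */
    #include <stdio.h>
    #include <math.h>

    static double ttq(double minutes_to_unmask)
    {
        return log10(minutes_to_unmask / 0.6);
    }

    int main(void)
    {
        printf("%6.1f min -> %.2f units\n",   6.0, ttq(6.0));    /* 1.00  */
        printf("%6.1f min -> %.2f units\n",  60.0, ttq(60.0));   /* 2.00  */
        printf("%6.1f min -> %.2f units\n", 600.0, ttq(600.0));  /* 3.00  */
        printf("%6.1f min -> %.2f units\n",   1.9, ttq(1.9));    /* ~0.50 */
        return 0;
    }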

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

                    Order is paramount in anarchy.

mcdermott-drew@cs.yale.edu (Drew McDermott) (06/20/91)

This is getting ridiculous ...

  In article <3737@sirius.ucs.adelaide.edu.au> jbaxter@adelphi.physics.adelaide.edu.au.oz.au (Jon Baxter) writes:
  >In article <1991Jun19.050512.27413@news.media.mit.edu>
  >minsky@media-lab.media.mit.edu (Marvin Minsky) writes:
  >
  >> In article <3727@sirius.ucs.adelaide.edu.au> jbaxter@adelphi.physics.adelaide.edu.au.oz.au (Jon Baxter) writes:
  >>>Then what use is the Turing test? Sufficiently non-critical people think
  >>>that Eliza is intelligent, but anyone with computing knowledge would disagree.
  >>>Did Turing really mean for the people in his test to be non-critical?
  >>
  >> It isn't any use at all, so far as I know.  Turing was addressing the
  >> problem that people, because they have the word "intelligent", think
  >> there must be a thing that corresponds to it, and they want a
  >> definition that will help them recognize that thing.  So Turing,
  >> observing that they couldn't agree, suggested his "test" as a
  >> sufficient condition: if people couldn't distinguish, over the phone,
  >> between a person and computer X, then they could probably agree that
  >> the computer must be intelligent.
  >
  >So you are claiming that Turing, in devising his test, was defending the
  >view that the only reasonable definition of intelligence is a behavioural
  >one.
  >

PLEASE don't forget that Marvin started by pointing out

   *** completely correctly ***

that Turing was not trying to DEFINE intelligence.

                                             -- Drew McDermott

sjb@piobe.austin.ibm.com (Scott J Brickner) (06/20/91)

In article <1643@ucl-cs.uucp>, G.Joly@cs.ucl.ac.uk (Gordon Joly) writes:

> >> From:    me@csri.toronto.edu (Daniel R. Simon)
> >> A similar problem arises with regard to rocks, all of which (beyond a
> >> certain size, at least) are in fact prodigiously intelligent, yet,
> >> because of their extreme natural lethargy, invariably easily
> >> distinguishable from humans in a "Turing Test" setting.
> >> 
> >> 
> >> "There *is* confusion worse than death"		Daniel R. Simon
> >> 			     -Tennyson		(me@theory.toronto.edu)
> 
> Indeed; there is an article in the latest New Scientist about the memory
> of sand.

I also am currently reading a book entitled "The Tao of Symbols" in
which the author describes (in the first chapter) the ongoing attempt of
a neighbor of his to teach a stone to talk.

Scott.

nagle@well.sf.ca.us (John Nagle) (06/21/91)

     Someone should post a "frequently asked questions" posting for new
readers in this topic, where the Turing Test discussion, the Penrose
discussion, and similar subjects can be dealt with by providing references
on the subject.  

     Eliza and an ant are about in the same range in terms of compute
power required to do the job.  This may be an insight into how much
brainpower is needed to keep social interactions going.

thomas@ckgp.UUCP (Michael Thomas) (06/22/91)

In article <9106200231.AA06339@lilac.berkeley.edu>, ISSSSM@NUSVM.BITNET (Stephen Smoliar) writes:
>Since I do not have the paper in front of me, I shall have to rely on my
>memory.  However, the reading of the paper that I recall does not quite
>align with Minsky's (although it is very close).  Unless I am mistaken,
>Turing uses his opening paragraphs to argue that it is a waste of time
>to consider a question as naive as "Can a machine think?"  Therefore,
>in the interest of being more productive, he introduces his "Imitation
>Game" as a more realistic arena for investigation.  In other words he
>replaces the intelligence question with that of whether or not a machine
>could play the Imitation Game well enough that the other player would not
>recognize it as a machine.  He then devotes the rest of the paper to arguing
>why it is feasible that this would eventually be the case.
>===============================================================================
  
  Now, are you saying that it is true that "it is a waste of time to
  consider a question as naive as 'Can a machine think?'"?  I know you
  are quoting the paper, but isn't that why we are talking about this?
  Maybe it is just me, but I do not think that it is naive to believe
  that a computer will not be able to think.  Maybe this is just my
  definition of "think", but still, aren't the days of Turing's AI a
  little outdated to compare to the goals and ideas of AI today?

  So are we all in agreement that the TT is for something other
  than AI, or just divisions of AI (NLP -- not even NLP, really)?  I
  see no value for the TT today. 8^)  Today the problems of AI are
  more defined, so to speak; they are more qualified, so that we
  cannot just simply say that computer intelligence is impossible and
  so let us shoot for the next best thing, modeling human behavior. 8^)
  I know that there is a lot more going on in my head than just
  modeled behavior, and a lot of it we can establish in a computer.

  POINT: If the TT is to measure the computer's imitation ability, then
         why is the TT used (or referred to as being used) on AI work?
         I know that the goal of the AI work I do is NOT imitation but
         rather emulation (to equal or surpass the model).  And if the
         means by which I do that involves having an intelligent machine
         that can still answer a complex math question, and not lie or
         try to fool everyone it interacts with, then I must say forget
         the TT -- don't I?

  I can see the point that, well, if the computer is intelligent then
  it will know it is playing the imitation game and will create a plan
  and do certain things to accomplish the goal of winning the game.
  But on this side of the coin the problem is that instead of your
  goal being intelligence, the goal must become imitation; and even
  if you say your goal is intelligence, you still have to keep the
  problems of imitation in mind....

  INTELLIGENCE: I know that it is hard to come up with a definition
  of intelligence.  But intelligence isn't really a thing; it is a
  combination of ability (the application of some knowledge) and
  awareness (having some purpose or goal, and knowing that you are
  doing something and why).
  |o| - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -|o|
  |o| So if the goal is to pass the test, win the game, and the  |o|
  |o| application is imitation of a person, and the knowledge is |o|
  |o| knowledge of the world and people, then a computer will    |o|
  |o| never be intelligent, because at least at this point it    |o|
  |o| cannot experience the world... so then only an android or  |o|
  |o| robot with a sensory system could pass the TT.             |o|
  |o| I personally don't feel that this is true...               |o|
  |o| - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -|o|

  Thanks for listening.... 8^)

-- 
Thank you,
Michael Thomas
(..uunet!ckgp!thomas)

nagle@well.sf.ca.us (John Nagle) (06/22/91)

    We don't know enough yet to frame this question properly.  It's like
trying to talk about aerodynamics a century ago.

    We can, just maybe, build an insect-level brain today.  Even Brooks'
insects are much dumber than real ones.  This reflects how shallow our 
understanding of the basic concepts of brain design is.

    It's fun to speculate about human-level AI.  But progress to date
indicates that trying to develop "abstract intelligences" that don't
have the underpinnings of animal-level capabilities probably won't work.
I'm not saying it's impossible, but that progress is very slow; some would
say "stalled".  Recent real progress is at the low end.  We at least
have the advantage there that we know a whole hierarchy of dumb creatures
is possible.  We have no existence proof that an abstract intelligence is
possible, and as a more practical matter, none to observe and dissect.

					John Nagle

G.Joly@cs.ucl.ac.uk (Gordon Joly) (06/23/91)

Stephen Smoliar writes, on the subject of Turing's original paper,
 > Unless I am mistaken,
 > Turing uses his opening paragraphs to argue that it is a waste of time
 > to consider a question as naive as "Can a machine think?" 

Perhaps someone should tell John Searle. One of his lectures in "Minds,
brains and science : the 1984 Reith lectures" is just that: "Can a
machine think?"

 > Therefore,
 > in the interest of being more productive, he introduces his "Imitation
 > Game" as a more realistic arena for investigation.  In other words he
 > replaces the intelligence question with that of whether or not a machine
 > could play the Imitation Game well enough that the other player would not
 > recognize it as a machine.  He then devotes the rest of the paper to arguing
 > why it is feasible that this would eventually be the case.

I think I am missing something. If a box could "walk, talk and chew
gum", how different would it appear if it could only "imitate" said
behaviour?

____

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

                    Order is paramount in anarchy.

ISSSSM@NUSVM.BITNET (Stephen Smoliar) (06/23/91)

In article <1657@ucl-cs.uucp> G.Joly@cs.ucl.ac.uk (Gordon Joly) writes:
>Stephen Smoliar writes, on the subject of Turing's original paper,
> > Unless I am mistaken,
> > Turing uses his opening paragraphs to argue that it is a waste of time
> > to consider a question as naive as "Can a machine think?"
>
>Perhaps someone should tell John Searle. One of his lectures in "Minds,
>brains and science : the 1984 Reith lectures" is just that: "Can a
>machine think?"
>
I had two targets in mind when I decided to end this last article by pointing
my flame-thrower at those who had not yet bothered to give Turing's article a
serious reading:

 1.  Supposedly reputable scholars (such as John Searle) who
 should know better but are too busy enhancing their reputations
 by further elaborating upon arguments whose foundations sit on
 this fundamental misunderstanding of the original text.

 2.  Students and "curious observers" who seem more inclined to
 soak up second-hand accounts from folks like Searle than to set
 aside the couple of hours it takes to read what Turing REALLY
 had to say.

(Michael Thomas seems to be our resident representative of the second category.
Even when Minsky spells it all out with a Magic Marker, he still does not get
the message.  It IS a waste of time to argue about silly words like "think" and
"intelligence" when you could be spending your time building machines that
exhibit interesting behavior, such as the "vehicles" of Braitenberg's fantasy
or the robots of Brooks' reality.)

> > Therefore,
> > in the interest of being more productive, he introduces his "Imitation
> > Game" as a more realistic arena for investigation.  In other words he
> > replaces the intelligence question with that of whether or not a machine
> > could play the Imitation Game well enough that the other player would not
> > recognize it as a machine.  He then devotes the rest of the paper to
> > arguing
> > why it is feasible that this would eventually be the case.
>
>I think I am missing something. If a box could "walk, talk and chew
>gum", how different would it appear if it could only "imitate" said
>behaviour?
>
Shame on you, Gordon!  You ARE missing something!  You have just revealed that
YOU have not read Turing either!  Turing's Imitation Game is well-defined in an
(intentionally) relatively narrow context.  That context does NOT involve
walking, talking, or chewing gum.  It involves nothing more than exchanges
of text through the objective medium of the logical equivalent of a dumb
terminal.  This narrow context is very important to Turing's argument for
exactly the reason I stated above:  The task is simple enough that you can
decide whether or not you have succeeded without getting into the deep waters
of philosophy.

At least you are in good company, Gordon.  This is where Searle tripped up,
too.  After all, the Chinese Room is nothing more than the Imitation Game with
some new sets and costumes.  (Think of it as the Peter Hall version if you are
at all into theater.)  Searle's big mistake, however, is that he wants to
accuse Turing and later members of the artificial intelligence community
of assuming that "winning" the Imitation Game is equivalent to exhibiting thought.
Now there are certainly those out there who would like to make hay out of this
alleged equivalence, particularly if it is what the funding agencies want to
hear;  and they probably DO deserve at least a slap on the wrists if they try
to claim they are doing this in Turing's name.  However, such a slap has far
more impact when it comes from Marvin Minsky than when it comes from John
Searle, simply because Minsky's understanding of both the letter and the
spirit of Turing surpasses Searle's on every count.

===============================================================================

Stephen W. Smoliar
Institute of Systems Science
National University of Singapore
Heng Mui Keng Terrace, Kent Ridge
SINGAPORE 0511

BITNET:  ISSSSM@NUSVM

"He was of Lord Essex's opinion, 'rather to go an hundred miles to speak with
one wise man, than five miles to see a fair town.'"--Boswell on Johnson

grady@well.sf.ca.us (Grady Ward) (06/24/91)

There is no point in getting bogged down on the issue of when to
award a certificate of simulacrum to a.i.s.
I'm more interested in formulating a good test to decide
when a.i.s should be conceded _superiority_ over the human species.

Does anyone have a suggestion for a suitable criterion?

My personal favorite is that if a.i.s can _persuade_ all
humans that they are indeed superior, then they are.  If
the a.i.s are not able to convince holdouts of this state
of affairs, then the a.i.s do not yet meet the standard.

sarima@tdatirv.UUCP (Stanley Friesen) (06/25/91)

In article <25566@well.sf.ca.us> nagle@well.sf.ca.us (John Nagle) writes:
>     Eliza and an ant are about in the same range in terms of compute
>power required to do the job.  This may be an insight into how much
>brainpower is needed to keep social interactions going.

:-C

Eliza requires as much compute power as an ant!?!

This is really far out.   Or were you just talking about an ant's social
responses, ignoring locomotory and foraging behavior?  By the time you
include all of the things an ant does (solar navigation, food recognition,
obstacle avoidance, feeding, cleaning, digging, carrying, ...) I really
think you would have a hard time getting a 386-based PC to do it in real
time (like an ant does). [And Eliza will run just fine on a Z80].
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)

thomas@ckgp.UUCP (Michael Thomas) (06/25/91)

In article <9106230258.AA12009@lilac.berkeley.edu>, ISSSSM@NUSVM.BITNET (Stephen Smoliar) writes:
> >Stephen Smoliar writes, on the subject of Turing's original paper,
> > > Unless I am mistaken,
> > > Turing uses his opening paragraphs to argue that it is a waste of time
> > > to consider a question as naive as "Can a machine think?"
> 
>  1.  Supposedly reputable scholars (such as John Searle) who
>  should know better but are too busy enhancing their reputations
>  by further elaborating upon arguments whose foundations sit on
>  this fundamental misunderstanding of the original text.
> 
>  2.  Students and "curious observers" who seem more inclined to
>  soak up second-hand accounts from folks like Searle than to set
>  aside the couple of hours it takes to read what Turing REALLY
>  had to say.
> 
>Michael Thomas seems to be our resident representative of the second category.

  Which category do you fit in...?  8^)

>It IS a waste of time to argue about silly words like "think" and
>"intelligence" when you could be spending your time building machines that

  I agree that it is a waste of time to ARGUE about the definitions
  of words that we know we will not agree on, BUT I was responding to
  your mention of them.  First, "Can a machine think?": I was asking
  whether you felt he meant that it was not possible, or was possible,
  for a machine to think.  Also, my reference to the time at which this
  took place: computers today are a lot different -- are they not?  The
  view on the matter at that time is not compatible with current/future
  technology -- is it?

>>I think I am missing something. If it a box could "walk, talk and chew
>>gum", how different would it appear if it could only "imitate" said
>>behaviour?
  
  There is a difference between actually doing something and imitating it.
  An airplane does not imitate the flight of a bird, because that wouldn't
  work! (They've tried it... way back when...) An airplane actually flies
  using its own means, and instead of imitating it emulates, and hence has
  surpassed the model (model = original). The same holds true for thought.
  You think, and someday computers will emulate thought. It will not
  occur the way that you create thought, but will use its own means...

>Shame on you, Gordon!  You ARE missing something!  You have just revealed that
>YOU have not read Turing either!  Turing's Imitation Game is well-defined in an
  [The imitation game was here long before Turing...]
>(intentionally) relatively narrow context.  That context does NOT involve
>walking, talking, or chewing gum.  It involves nothing more than exchanges
>of text through the objective medium of the logical equivalent of a dumb
>terminal.  This narrow context is very important to Turing's argument for
>exactly the reason I stated above:  The task is simple enough that you can
>decide whether or not you have succeeded without getting into the deep waters
>of philosophy.

  Narrow context, hmmm... I think you are missing something.  Yes, Turing
  has the test set up the way that you describe; yes, the test involves
  "nothing more than exchanges of text through the objective medium".  But
  was all of this really the point of our conversation before?  My response
  was directed more at how the test is not a complete and
  comprehensive test for AI (which is different from imitation).  I was
  merely stating my opinion that nothing (not even a person) would be
  able to pass the test.  If you put two people in the rooms, and the person
  at the terminal was told to tell which is a person and which is a computer,
  that person would always pick one of the people as the computer...
  The other problem is that in the party game (looking for a man or a woman),
  the objective was to trick the experimenter into picking
  the wrong person... this should not be the objective of a test meant to
  determine (whatever you wish to say Turing was trying to determine!)

>Searle's big mistake, however, is that he wants to
>accuse Turing and later members of the artificial intelligence community
>of assuming that "winning" the Imitation Game is equivalent exhibiting thought.

  Please tell me what is accomplished by "winning" the Imitation Game?
  (I do not believe that the answer is exhibiting thought!)

>Searle, simply because Minsky's understanding of both the letter and the
>spirit of Turing surpasses Searle's on every count.

  Marvin Minsky: since you apparently hold supreme knowledge, please
  tell us the goal of Turing's test.  Please also tell me whether, for
  an agent machine, you would first (or ever!) put it to the test (TT).

-- 
Thank you,
Michael Thomas
(..uunet!ckgp!thomas)

yonadav@VIRGO.MATH.TAU.AC.IL (Perry Yonadav) (06/25/91)

In article <1991Jun19.111622.5491@tygra.Michigan.COM> dave@tygra.Michigan.COM (David Conrad) writes:
>The Turing Test does not test for intelligence.  At a literal level it tests
>for a specific ability, the ability to mimic human answers to questions,
>which we may hope requires at least some kind of 'intelligence'.
>

But wouldn't it be logical to define 'intelligence' as human-like behavior?

	Ron.

minsky@media-lab.media.mit.edu (Marvin Minsky) (06/25/91)

In article <610@ckgp.UUCP> thomas@ckgp.UUCP (Michael Thomas) writes:
>
>  Marvin Minsky: since you apparently hold supreme knowledge, please
>  tell us the goal of Turing's test.  Please also tell me whether, for
>  an agent machine, you would first (or ever!) put it to the test (TT).

OK. I will reply, but only on one absolutely imperative condition.  It
is not permitted to either reply to this message, or ever communicate
to this newsgroup again on the indicated subject until you have read
and considered every single sentence of Turing's original article on
Computing Machinery and Intelligence from volume 59 of MIND.

*
*   
***** Press "n" unless you agree to this contractual agreement ***
*
*
*

Turing's goal was not *** repeat *** not to discuss whether a machine
can think.

Turing's goal was, instead, to discuss how a reasonable person ought
to deal with the question, "Can Machines Think?"
  After some discussion, Turing states that he believes that the
original question, "Can Machines Think?" is too meaningless to
deserve discussion -- largely because it is too ambiguous.  However,
he suggests that there is another question that is less ambiguous and
more worth considering, for a number of reasons.

  Turing begins by describing the "Imitation Game".  And only then does
he present us with the new question to be discussed.  It is, "What will
happen when it is a computer that is placed in the other room?"
(Only verbal messages are permitted to pass between the rooms.)

  Turing is kind enough to tell us his opinion of what will happen.
He believes that, by the year 2000, an average interrogator will not
have a better than 70% chance of correctly guessing, in five minutes,
whether the other room contains a person or a machine.

  By the way, so far as I can recall, Turing never once used the word
"intelligence" inside the paper.  I would hope that this will serve as
a lesson to everyone.  It is also an instance of one of my own
principles: that a word that appears in the title of a technical book
should never appear inside the text.

  And if you can't see the reason for this, then please stay away from

dnk@yarra-glen.aaii.oz.au (David Kinny) (06/25/91)

yonadav@VIRGO.MATH.TAU.AC.IL (Perry Yonadav) writes:

>But wouldn't it be logical to define 'intelligence' as human-like behavior?

>	Ron.

No, you're confusing 'intelligence' with 'stupidity'!

-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
David Kinny                       Australian Artificial Intelligence Institute
dnk@aaii.oz.AU                                  1 Grattan Street
Phone: +61 3 663 7922  Fax: 663 7937    CARLTON, VICTORIA 3053, AUSTRALIA

ISSSSM@NUSVM.BITNET (Stephen Smoliar) (06/26/91)

In article <610@ckgp.UUCP>, thomas@ckgp.UUCP (Michael Thomas) cites (in a
somewhat mangled fashion):
>In article <9106230258.AA12009@lilac.berkeley.edu>, ISSSSM@NUSVM.BITNET
>(Stephen Smoliar) writes:
>> >Stephen Smoliar writes, on the subject of Turing's original paper,
>> > > Unless I am mistaken,
>> > > Turing uses his opening paragraphs to argue that it is a waste of time
>> > > to consider a question as naive as "Can a machine think?"
>>
>>  1.  Supposedly reputable scholars (such as John Searle) who
>>  should know better but are too busy enhancing their reputations
>>  by further elaborating upon arguments whose foundations sit on
>>  this fundamental misunderstanding of the original text.
>>
>>  2.  Students and "curious observers" who seem more inclined to
>>  soak up second-hand accounts from folks like Searle than to set
>>  aside the couple of hours it takes to read what Turing REALLY
>>  had to say.
>>
>>Michael Thomas seems to be our resident representative of the second
>>category.
>
>  Which category do you fit in...?  8^)
>
To use Marvin Minsky's words, I count myself as one who has "read and
considered every single sentence of Turing's original article."  I have
done this several times and took great delight in Hugh Whitmore's translation
of this article into a dramatic monologue opening the second act of BREAKING
THE CODE.  (Jacobi's delivery of Whitmore's text had me so convinced that I
was ready to start asking questions from the audience!  These are strange times
when a playwright and an actor can exhibit greater understanding of a
pioneering paper in artificial intelligence than some philosophers can!)
Like Minsky, I believe that until you, too, have given Turing's text the
serious attention it deserves, you should just give it a rest, Michael.

===============================================================================

Stephen W. Smoliar
Institute of Systems Science
National University of Singapore
Heng Mui Keng Terrace, Kent Ridge
SINGAPORE 0511

BITNET:  ISSSSM@NUSVM

"He was of Lord Essex's opinion, 'rather to go an hundred miles to speak with
one wise man, than five miles to see a fair town.'"--Boswell on Johnson

G.Joly@cs.ucl.ac.uk (Gordon Joly) (06/26/91)

Grady Ward writes:
 > My personal favorite is that if a.i.s can _persuade_ all
 > humans that they are indeed superior, then they are.  If
 > the a.i.s are not able to convince holdouts of this state
 > of affairs, then the a.i.s do not yet meet the standard.

Intelligence is in the mind of the beholder. 

Psychologists are human too.
____

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

                    Order is paramount in anarchy.

berry@arcturus.uucp (Berry;Craig D.) (06/28/91)

yonadav@VIRGO.MATH.TAU.AC.IL (Perry Yonadav) writes:

>In article <1991Jun19.111622.5491@tygra.Michigan.COM> dave@tygra.Michigan.COM (David Conrad) writes:
>>The Turing Test does not test for intelligence.  At a literal level it tests
>>for a specific ability, the ability to mimic human answers to questions,
>>which we may hope requires at least some kind of 'intelligence'.
>>

>But wouldn't it be logical to define 'intelligence' as human-like behavior?

But what aspect of human-like behavior?  Examples:

(1) Gets sick when exposed to certain microorganisms.  [A clearly
	irrelevant behavior for AI, but still very characteristically
	human behavior.]

(2) Twitches limbs (manipulators?) when struck in certain spots with
	a rubber hammer.  [Grey area -- how important are body awareness
	and reflex loops to what we think of as "intelligence"?]

(3) Avoids situations which might lead to injury, seeks food and
	shelter (or equivalents).  [Probably necessary for anything we
	would like to think of as "intelligent".]

(4) Gets sad watching tragic play.  [Emotional response and empathy
	may or may not be critical to intelligence in general.]

(5) Debates philosophy.  [Clearly requires only reflexive behavior :-)]

(6) Plays world-class chess.  [Once thought a hallmark of intelligence,
	now known to be a "simple" matter of combining fast hardware and
	cleverly designed algorithms.]

Where do you draw the line when deciding if a purported AI is "human-like"
and therefore intelligent by your definition?