[comp.ai.philosophy] The TT && "working definition" of intelligence.

thomas@ckgp.UUCP (Michael Thomas) (06/25/91)

> As before, any constructive criticisms will find their way into a
> new version of the definition and flames will be duly ignored.

  This does mean that we can discuss the topic of intelligence--correct?

> ---------------------------------------------------------------------
> 	GENERAL REQUIREMENTS OF AN INTELLIGENT SYSTEM.
> 
> a)	The system MUST be able to learn. 
> b)	The system MUST be autonomous. 
> c)	The system MUST be able to reason.
> d)	The system MUST be able to develop self awareness. 
> ----------------------------------------------------------------------

  This is a very good definition (including the parts I took out...)
  Are/were there other important factors that didn't fall under the
  label GENERAL?

  Turning to the TT: what do any of these requirements have to do with
  the TT, or vice versa?

  Does anyone see the point of the TT as being that the computer is
  intelligent enough to know when the experimenter is trying to fool it,
  and when to lie in response to the questions (or come up with a witty
  response...)?

  So can we all agree, then, that the TT leaves out (doesn't test for)
  at least two qualities of intelligence [B, D]?
-- 
Thank you,
Michael Thomas
(..uunet!ckgp!thomas)

G.Joly@cs.ucl.ac.uk (Gordon Joly) (06/25/91)

Could this thread go into comp.ai.philosophy and die in comp.ai?

Gordon.

martin@adpplz.UUCP (Martin Golding) (06/27/91)

In <611@ckgp.UUCP> thomas@ckgp.UUCP (Michael Thomas) writes:

>> As before, any constructive criticisms will find their way into a
>> new version of the definition and flames will be duly ignored.
>> 	GENERAL REQUIREMENTS OF AN INTELLIGENT SYSTEM.
>> a)	The system MUST be able to learn. 
>> b)	The system MUST be autonomous. 
>> c)	The system MUST be able to reason.
>> d)	The system MUST be able to develop self awareness. 
>> ----------------------------------------------------------------------

>  Does anyone see the point of the TT as being that the computer is
>  intelligent enough to know when the experimenter is trying to fool it,
>  and when to lie in response to the questions (or come up with a witty
>  response...)?

>  So can we all agree, then, that the TT leaves out (doesn't test for)
>  at least two qualities of intelligence [B, D]?

No, the Turing test expects the system to be capable of simulating both
B and D. The TT has the advantage over any of these other interesting
criteria because it gives a simple test, and a method of improvement.
"Aha, B is a a computer, A is a human being" is easy, and following
up with "How do you know" immediately provides a future path. The 
categories (a,b,c,d) are (as another poster has suggested) philosophical,
and (d) at least looks awfully circular to me.

The Turing test bypasses the discussion of souls, the complications
of defining "reason" and "self awareness" and replaces them with the
(pragmatic) "if it looks like intelligence, the programmer is done".
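
That pragmatic pass/fail loop could be sketched as a small program (a
hypothetical illustration only; the `judge`, `human`, and `machine`
callables and the naive telltale-phrase judge are my assumptions, not
anything proposed in this thread):

```python
import random

def imitation_game(judge, human, machine, questions):
    """Run one round of the imitation game.

    `human` and `machine` map a question string to a reply string;
    `judge` maps the full transcript to the label ("A" or "B") it
    believes is the machine.  Returns True if the machine passed,
    i.e. was not identified.
    """
    # Hide the identities behind randomly assigned labels.
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:
        labels = {"A": machine, "B": human}

    # Put every question to both respondents and record the replies.
    transcript = []
    for q in questions:
        transcript.append((q, {label: f(q) for label, f in labels.items()}))

    machine_label = "A" if labels["A"] is machine else "B"
    return judge(transcript) != machine_label

# Toy illustration: a judge that looks for a telltale machine phrase.
human_player = lambda q: "Well, that depends on what you mean."
machine_player = lambda q: "DOES NOT COMPUTE"

def naive_judge(transcript):
    for question, answers in transcript:
        for label, reply in answers.items():
            if "COMPUTE" in reply:
                return label   # "Aha, that one is the computer."
    return "A"  # no evidence either way: guess arbitrarily
```

The "method of improvement" is then just the failing transcript itself:
whatever gave the machine away is the next thing for the programmer to fix.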

If I build a system that is _not_ intelligent, but _simulates_ intelligence
to the extent, say, of learning engineering and designing a chip, aren't
any of the other questions moot? Including what the difference is between
a perfect simulation of self awareness and actual self awareness.

PS If you think you have a good definition of intelligence,
and you figure out how to test for it, and you do, isn't that
the Turing Test anyway?


Martin Golding    | sync, sync, sync, sank ... sunk:
Dod #0236         |  He who steals my code steals trash.
A poor old decrepit Pick programmer. Sympathize at:
{mcspdx,pdxgate}!adpplz!martin or martin@adpplz.uucp

xerox@cs.vu.nl (J. A. Durieux) (06/28/91)

>> 	GENERAL REQUIREMENTS OF AN INTELLIGENT SYSTEM.
>> 
>> a)	The system MUST be able to learn. 
>> b)	The system MUST be autonomous. 
>> c)	The system MUST be able to reason.
>> d)	The system MUST be able to develop self awareness. 

Hmm.  Does this mean that if his illness spreads any further, Stephen
Hawking ceases to be intelligent, on account of (b)?

(I am not sure about the name: I mean the physicist in the wheelchair.)

Or, that none of us is, since we are all critically dependent on thousands
of factors of our environment?  Perhaps the minimal intelligent entity
would be the Sun + the Earth ("only" dependent on general relativity,
quantum mechanics, etc.)?

Perhaps I should rather ask my question directly:
    What the h*ck do you mean by "autonomous"?