[comp.ai.philosophy] intelligence definition, reasons for

jude@sdbio2.ucsd.edu (Jude Poole) (12/07/90)

I've been watching the definition-of-intelligence debate and its
associated Chinese Room arguments, and I have an observation.

Few people have said much about why we want to define intelligence.
I see two possibilities, which have quite different implications.

1. We may want to define intelligence to give us a practical tool
for AI-type research.  It would presumably help us define goals, measure
progress, and communicate meaningfully about such research.

2. We may want to define intelligence (or awareness) to learn when we
have constructed or contacted beings we regard as 'moral equals', that
is, beings deserving the same ethical consideration that humans warrant
(or, for the more extreme among us, the same consideration other life
forms deserve).

The first reason above is fairly narrow in its implications.  If we are
merely defining a technical term for use within the AI community, it
might affect funding, our internal thinking about feasibility, and so on.

The second, however, has pervasive legal, ethical, and practical
implications.  The 'Turing test' may be a good tool for gauging success
in AI research, and yet a miserable failure as a means of distinguishing
awareness for the second purpose.

We would certainly want to know, if some purportedly self-aware computer
performing a critical task 'asked' to be relieved of that task because it
didn't want to do it, whether it was doing so because it was really
sentient or merely because it had been programmed by some malcontent to
do so.  I personally don't believe you can create awareness in a computer
at all.  My reasons are largely philosophical, and the 'Turing test' does
nothing at all to address them.  To me a machine that passes the 'Turing
test' is a tremendous accomplishment and a vindication of the
pie-in-the-sky AI types, but ultimately, in ethical terms, I'll consider
such a thing to be merely an interesting machine.

Jude Poole
jpoole@ucsd.edu

dml@puka.Philips.Com (Damian M. Lyons) (12/08/90)

In article <14759@sdcc6.ucsd.edu> jude@sdbio2.ucsd.edu (Jude Poole) writes:
>I've been watching the definition-of-intelligence debate and its
>associated Chinese Room arguments, and I have an observation.
>
> .. [ two interesting reasons for defining intelligence ] ..
>
>
>We would certainly want to know, if some purportedly self-aware computer
>performing a critical task 'asked' to be relieved of that task because it
>didn't want to do it, whether it was doing so because it was really
>sentient or merely because it had been programmed by some malcontent to
>do so.  I personally don't believe you can create awareness in a computer
>at all.  My reasons are largely philosophical, and the 'Turing test' does
>nothing at all to address them.  To me a machine that passes the 'Turing
>test' is a tremendous accomplishment and a vindication of the
>pie-in-the-sky AI types, but ultimately, in ethical terms, I'll consider
>such a thing to be merely an interesting machine.

Interesting breakdown. (Sound of can of worms opening.) 
Perhaps you would share your `ethical
equivalence' test criteria with us. I assume they are better than

ethical-equivalent(entity,likeus) iff
I.  *entity* is known to be *likeus*, or
II. *entity* is not known to be *likeus*, 
    but can pass moral-correctness-test-of-my-choice.

where we are interested in 
   ethical-equivalent(turing-test-passing-machine,human)
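
A rough sketch of that criterion written out in Python (my own
illustration; the predicate names are made up, not anything Jude
proposed):

   # Clause I:  entity is known to be likeus.
   # Clause II: entity is not known to be likeus, but passes the
   #            moral-correctness test of the judge's choice.
   # known_to_be and passes_moral_test are hypothetical predicates
   # supplied by whoever is doing the judging.
   def ethical_equivalent(entity, likeus, known_to_be, passes_moral_test):
       return known_to_be(entity, likeus) or passes_moral_test(entity)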

Personally, if a Turing-test-passing machine asked me to relieve it of
its function, I would be as likely to comply as I would comply with my
friend Fifi's request that we not see each other again, even should I
suspect that Fifi had been bribed (in a Wodehouse-like fashion) by my
rich aunt, who thought the family honour was at stake.

Damian.

PS. II is also known as the principle of the Spanish Inquisition :-)

>
>Jude Poole
>jpoole@ucsd.edu

--
_________________________________________________________________
         Damian M. Lyons | x6444 | dml@philabs.philips.com       
_________________________________________________________________

BKort@bbn.com (Barry Kort) (12/13/90)

In article <14759@sdcc6.ucsd.edu> jude@sdbio2.ucsd.edu (Jude Poole) writes:

> We would certainly want to know, if some purportedly self-aware computer
> performing a critical task 'asked' to be relieved of that task because it
> didn't want to do it, whether it was doing so because it was really
> sentient or merely because it had been programmed by some malcontent to
> do so.  I personally don't believe you can create awareness in a computer
> at all.  My reasons are largely philosophical, and the 'Turing test' does
> nothing at all to address them.  To me a machine that passes the 'Turing
> test' is a tremendous accomplishment and a vindication of the
> pie-in-the-sky AI types, but ultimately, in ethical terms, I'll consider
> such a thing to be merely an interesting machine.

Jude, suppose you had a computer that developed its own goals by comparing 
the current state of affairs to a possible future state of affairs, and 
elected courses of action intended to evolve the current state toward more 
desirable goal states.  Such a computer would necessarily be operating in 
accordance with a Value System, which I will elucidate in a moment.  But 
in terms of the mechanics, such a computer would rely extensively on 
model-based reasoning to anticipate the likely consequences of alternative 
courses of action in its search for viable, practical, and effective 
strategies.  We already have good chess-playing computers that illustrate 
the mechanics of this process, albeit in a limited domain.
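
In Python, a minimal sketch of those mechanics (my own illustration;
model() and value() are hypothetical stand-ins for a real world-model
and Value System):

   def choose_action(state, actions, model, value, depth=2):
       """Elect the action whose anticipated consequences score best."""

       def best_reachable_value(s, d):
           # Use the world-model to anticipate consequences d steps ahead,
           # then judge the resulting state with the Value System.
           if d == 0:
               return value(s)
           return max(best_reachable_value(model(s, a), d - 1)
                      for a in actions)

       return max(actions,
                  key=lambda a: best_reachable_value(model(state, a),
                                                     depth - 1))

In the chess example, model() would be the legal-move generator and
value() a board-evaluation function; a deeper search simply means more
anticipation.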

Now let's turn to the philosophically more interesting question of imbuing 
computers with Value Systems.  A science-minded computer would have a high 
regard for information, knowledge, insight, understanding, and
world-models.  V'ger in Star Trek was such a computer.  So knowledge
might be a highly prized value, and knowledge-seeking behavior would
result.  We know that some bio-computers value power (ownership and
control), so we might expect some computers to imitate those values.  It
is an interesting exercise in philosophy to define a Value System for
computers of the future, and I will stop here and let others ponder this
issue before commenting further.
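
Before I do stop, here is a toy illustration (my own, purely mechanical,
not a serious proposal) of what a Value System might even look like:
nothing more than weights over the attributes of a state.

   # Two candidate Value Systems, expressed as attribute weights.
   science_minded = {"knowledge": 0.9, "power": 0.1}
   power_seeking  = {"knowledge": 0.2, "power": 0.9}

   def value(state, weights):
       """Score a state (a dict of attribute levels) under a Value System."""
       return sum(weights.get(attr, 0.0) * level
                  for attr, level in state.items())

   state = {"knowledge": 5.0, "power": 1.0}
   print(value(state, science_minded))   # 4.6 -- knowledge-seeking favored
   print(value(state, power_seeking))    # 1.9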

Let me just end by pointing out that this bio-computer has on occasion
refused to complete an assigned task because of an intuition that it
would lead to undesirable downstream consequences.  I would hope that
future computers, with comparable or superior powers of model-based
reasoning, would be equally recalcitrant.

Barry Kort
Visiting Scientist
BBN Labs
Cambridge, MA