[comp.ai.digest] Goal of AI: Where are we Going?

eyal@wisdom.BITNET (Eyal mozes) (10/05/87)

> I believe that those "bad" characteristics of humans are necessary
> evils of intelligence.  For example, although we still don't understand
> the function of emotion in the human mind, the psychologist Toda says
> that it is a device for survival.  When an urgent danger is
> approaching, you don't have much time to think.  You must PANIC!
> Emotion is a meta-inference device to control your inference mode
> (mainly of resources).
>
> If we ever make a really intelligent machine, I bet the machine
> will also have the "bad" characteristics.  In summary, we have to
> study why humans have those characteristics to understand the
> mechanism of intelligence.

I think what you mean by "the bad characteristics" is, simply, free
will. Free will includes the ability to fail to think about some
things, and even to actively evade thinking about them; this is the
source of biased decisions and of all other "flaws" of human thought.

Emotions, by themselves, are certainly not a problem; on the contrary,
they're a crucial function of the human mind, and their role is not
limited to emergencies. Emotions are the result of subconscious
evaluations, caused by identifications and value-judgments made
consciously in the past and then automatized; their role is not "to
control your inference mode", but to inform you of your subconscious
conclusions. Emotional problems are the result of the automatization of
wrong identifications and evaluations, which may have been reached
either because of insufficient information or because of volitional
failure to think.
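
One way to picture this view of automatization is as a cache of past
conscious judgments.  The following toy sketch is my own illustration,
with invented names, not Rand's formulation:

    class Mind:
        def __init__(self):
            self._automatized = {}     # subconscious store of judgments

        def judge_consciously(self, situation, verdict):
            # A deliberate identification or evaluation; once made, it
            # is automatized and no longer needs conscious attention.
            self._automatized[situation] = verdict

        def emotion(self, situation):
            # The emotion reports the stored conclusion; it does not
            # control inference.  A wrong cached verdict, stored from
            # insufficient information or evasion, is an "emotional
            # problem" in this model.
            return self._automatized.get(situation)

    # Usage: m = Mind(); m.judge_consciously("snakes", "dangerous");
    # later, m.emotion("snakes") returns "dangerous" with no fresh
    # thinking.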

A theory of emotions and of free will, explaining their role in the
human mind, was developed by Ayn Rand, and the theory of free will was
more recently expanded by David Kelley.

Basically, the survival value of free will, and the reason why the
process of evolution had to create it, is man's ability to deal with a
wide range of abstractions.  A man can form concepts, gain abstract
knowledge, and plan actions on a scale that is in principle unlimited.
He needs some control over the amount of time and effort he will spend
on each area, concept or action.  But because his range is unlimited,
this can't be controlled by built-in rules such as "always spend 1 hour
thinking about computers, 2 hours thinking about physics", etc.; man
has to be free to control it in each case by his own decision.  And
this necessarily also implies the freedom to fail to think and to evade.
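
The contrast can be made concrete.  In the following toy sketch
(invented names; the hour figures are just the ones from the rule
above), a built-in budget table breaks on any topic outside its
enumerated range, while a free agent must set each budget, including a
zero budget, by its own decision:

    FIXED_BUDGET_HOURS = {"computers": 1, "physics": 2}  # built-in rule

    def builtin_budget(topic):
        # Breaks on any topic the designer didn't enumerate in advance;
        # an unlimited range of concepts can't be tabulated.
        return FIXED_BUDGET_HOURS[topic]    # KeyError on a novel topic

    class FreeAgent:
        def __init__(self):
            self.budgets = {}

        def decide_budget(self, topic, judged_importance):
            # Decided case by case by the agent itself.  Choosing zero
            # is the freedom to fail to think; never making the decision
            # at all, for a topic one knows is there, is evasion.
            self.budgets[topic] = judged_importance
            return judged_importance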

It seems, therefore, that free will is inherent in intelligence. If we
ever manage to build an intelligent robot, we would have to either
narrowly limit the range of thoughts and actions possible to it (in
which case we could create built-in rules for controlling the amount of
time it spends on each area), or give it free will (which will clearly
require some great research breakthroughs, probably in hardware as well
as software); and in the latter case, it will also have "the bad
characteristics" of human beings.

        Eyal Mozes

        BITNET:                 eyal@wisdom
        CSNET and ARPA:         eyal%wisdom.bitnet@wiscvm.wisc.edu
        UUCP:                   ...!ihnp4!talcott!WISDOM!eyal