[comp.ai.digest] Flawed/Flawless

larry@VLSI.JPL.NASA.GOV (10/21/87)

FLAWED/FLAWLESS: I can argue on both sides.

FLAWLESS:  Quality can only be judged against some standard.  Every person 
has a (perhaps only slightly) different value system, as may even the same 
person at different times.  So what is a flaw to one may be a "feature" to 
another.  (Examples: a follower of Kali may consider torture, death, and 
corruption holy; a worshipper of the Earth Mother may consider monogamy a 
sin and infertility a crime.)  The only objective standard is survival of 
the largest number of one's species for the longest time, and even this 
"standard" is hopelessly flawed(!) by human subjectivity.

FLAWED:  Nevertheless, humans DO have standards, not only for made objects 
like automobiles but for things essential to our survival and happiness.  
We want lovers who have compatible timing (social, sexual), sensitivity (at 
least enough not to hog the blankets or too obviously eye the competition), 
enough intelligence (so we can laugh at the same jokes) but not too much 
(winning an occasional argument is necessary to our self-esteem), etc.  
Notice that TOO MUCH intelligence may be considered as bad a flaw as too 
little.

And more FLAWLESS:  From an evolutionary standpoint, what is a "virtue" in 
one milieu may become deadly when the environment changes.  Performing some 
mental activity reliably may be of little use when chaos sweeps through our 
lifeways.  THEN divergent thinking--or even simple error--may be more likely 
to solve problems.  A perfect memory (popularly thought to accompany great 
intelligence) can be a liability, holding one rigidly to standards or 
knowledge no longer valid.  It is also the enemy of abstract/general 
thought, which depends on forgetting (or ignoring) inessentials.  (Indeed, 
differential forgetting may be one of those great ignored areas of fruitful 
research.)
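
A toy sketch of the differential-forgetting idea, purely for illustration 
(the features, decay rate, and reinforcement set below are all invented): 
every remembered detail fades a little with each new experience unless it 
keeps being reinforced, so only the recurring, general features survive.

    # Toy differential forgetting: each remembered feature decays every
    # cycle; only features reinforced by repeated experience persist.
    # Feature names and the decay rate are invented for illustration.
    memory = {"has_wings": 1.0, "was_on_a_tuesday": 1.0, "can_fly": 1.0}
    reinforced = {"has_wings", "can_fly"}       # encountered again and again

    decay = 0.5
    for _ in range(5):                          # five later experiences
        for feature in memory:
            memory[feature] *= decay            # everything fades...
            if feature in reinforced:
                memory[feature] = 1.0           # ...unless re-encountered

    print({f: round(w, 3) for f, w in memory.items()})
    # the incidental detail is nearly gone; the general features remain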

AI:  What does all this have to do with artificial intelligence?  Maybe 
nothing, but I'll invent something.  Say ... the relationship of emotions to 
intelligence.  First, sensations of pain and pleasure drive thought, in the 
sense that they establish values for thinking and striving to achieve or 
avoid some event or condition.  Sensation triggers emotions, which in turn 
trigger sensations that may act as second-level motivators.  They also may 
trigger subsystems for readiness (or inhibition) of action.  (Example: 
hunger depletes blood sugar, triggering anger when a certain level of stress 
is reached, which releases adrenalin, which energizes the body.  Anger also 
may cause partly or fully random action, which is statistically better than 
apathy for killing or harvesting something to eat.)
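
A toy sketch of that chain, sensation to emotion to readiness, with all 
names, numbers, and thresholds invented (no claim that the physiology is 
right, just the shape of the idea):

    # Toy second-level motivator: a sensation (hunger) feeds an emotion
    # (anger), which past a stress threshold switches on a readiness
    # subsystem (adrenalin).  Every name and number here is invented.
    def readiness(blood_sugar, stress, stress_threshold=0.7):
        hunger = max(0.0, 1.0 - blood_sugar)    # first-level motivator
        anger = hunger * stress                 # second-level motivator
        adrenalin = anger if stress >= stress_threshold else 0.0
        return {"hunger": hunger, "anger": anger, "adrenalin": adrenalin}

    print(readiness(blood_sugar=0.2, stress=0.9))
    # low blood sugar plus high stress: anger is high, adrenalin is released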

At least that's the more traditional outlook on emotions--though that 
outlook may have changed in the ten years or so since I last did much 
reading in 
psychology.  Even if true, however, the above outlook doesn't establish a 
necessary link of emotion with artificial thought; humans can supply the 
goals and values for a Mars Rover, and an activate command can trigger emergency 
energy reserves.  Some other more intimate association of emotion with 
thought is needed.

Perhaps emotions affect thought in beneficial ways, say improving the 
thinking mechanism itself.  (The notion that they impede correct thought is 
too conventional and (worse) obvious to be interesting.)  Or maybe emotion 
IS thought in some way.  It is, after all, only a conclusion based on 
incomplete brain evidence that thought is electrical in nature.  Suppose the 
electrical action of the brain is secondary, that the biochemical action of 
the brain is the primary mechanism of thought.  This might square with the 
observation that decision often happens in the subconscious, very rapidly, 
and integrates several (or even dozens of) conflicting motives into a vector 
sum.  In other words, an analog computer may be a better model for human 
thought than a digital one.  (In the nature of things that answer is likely 
too simple.  Most likely, I'd guess, the brain is a hybrid of the two.)
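
To put the vector-sum picture in concrete terms, here is a minimal sketch 
(the motives, weights, and the rule for acting are all made up) in which 
conflicting motives are weighted and summed into a single value whose sign 
picks the action:

    # Toy "analog" decision: conflicting motives, some positive and some
    # negative, are weighted and summed into one value.  The names and
    # weights are invented for illustration.
    motives = {"hunger": +0.8, "fatigue": -0.5, "curiosity": +0.3, "fear": -0.4}
    weights = {"hunger": 1.0, "fatigue": 0.6, "curiosity": 0.4, "fear": 0.9}

    decision = sum(motives[m] * weights[m] for m in motives)
    action = "act" if decision > 0 else "hold back"
    print("vector sum = %+.2f -> %s" % (decision, action))

Of course a digital program can only simulate the summation; whether the 
brain does it with chemistry, electricity, or both is exactly the open 
question.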

                                Larry @ jpl-vlsi