[comp.ai.philosophy] Evolutionary and developmental views of intelligence

msellers@mentorg.com (Mike Sellers) (02/13/91)

In article <MIKEB.91Feb1115721@wdl31.wdl.loral.com> mikeb@wdl31.wdl.loral.com (Michael H Bender) writes:
>John Nagle writes:
>
>   .... There is a bit of hubris in trying to address human-level intelligence
>   from our present level of ignorance .... 
>   We will not achieve lizard-level competence until we have ant-level
>   competence well in hand.  We will not achieve rodent-level competence
>   until we have lizard-level competence.  And we will not achieve
>   primate-level competence until we can build rodent-level brains.  And until we have 
>   achieved primate-level competence, we will not successfully build a 
>   general-purpose human-level AI.
>
>[...]
>Clearly, AI will be more successful when it marries the cognitive approach,
>which has been so popular of late, and the "developmental" approach
>which John recommends. But that does not mean we should go to the other
>extreme and ignore the "higher-level" aspects of human intelligence.
>
>Mike Bender

Your comment spurred something in me that I've thought about from time to
time, and that may be implicit in Nagle's statement:  What he is referring
to is the necessity of 'scaling up' artificial intelligence in a phylogenetic
or evolutionary fashion, not a developmental one.  However, the developmental
view of intelligence will doubtless be invaluable too.  Consider that no one
is born intelligent/conscious, and yet at least most of us become conscious 
somewhere along the way (thus we have an existence proof that it is possible 
to do!).  Very little attention has been paid to the phylogenetically based 
changes in intelligence that can be observed and inferred, and even less 
to such developmental aspects in humans (strange as that sounds).  Artificial
intelligence practitioners tend to focus almost exclusively on the adult 
organism when they refer to biological systems at all, and rarely take into 
account the continuing developmental aspects of an intelligent/cognitive 
agent.  The study of the emergence of consciousness is still so slippery 
as to be taboo in most academic circles (on the other hand I've written
a paper about this subject and wouldn't mind seeing some discussion of
the subject here).  

While we should not ignore the higher-order aspects of human cognition,
I do not think we will be able to do more than model these in a prescriptive
fashion (e.g. knowledge-based systems) until we can support the emergence of the desired
properties from an evolutionary and developmentally viable functional
architecture.  Modelling the higher-order aspects can be very useful
in an applied sense, but I do not believe it will move us any closer
to the transmutation of artifice into general-purpose human-level AI.


P.S.  It might be worth noting that I chose the word "transmutation"
above carefully.  I do not believe we will be able to say that we can
'create' or 'assemble' or 'formulate' an intelligence like something 
in physics or mechanics or chemistry.  Rather I think those people who 
are in on the emergence of the first true AI will be much more easily 
likened to alchemists:  They will be practitioners of a high and 
somewhat mystical art who are searching for the correct combination 
of artifact and environment that will enable a previously mundane 
collection of things and circumstances to emerge as an entirely new
and coherent whole, much like the ancient alchemist's goal of finding
the correct combination of ingredients and components for turning 
lead into gold. 


-- 
Mike Sellers     msellers@mentor.com     Mentor Graphics Corp.

"I used to think that the brain was the most wonderful organ in my
body.  Then I realized who was telling me this." -- Emo Phillips

smoliar@isi.edu (Stephen Smoliar) (02/14/91)

In article <1991Feb13.071834.22703@mentorg.com> msellers@mentorg.com (Mike
Sellers) writes:
>  Consider that no one
>is born intelligent/conscious, and yet at least most of us become conscious 
>somewhere along the way (thus we have an existence proof that it is possible 
>to do!).  Very little attention has been paid to the phylogenetically based 
>changes in intelligence that can be observed and inferred, and even less 
>to such developmental aspects in humans (strange as that sounds).  Artificial
>intelligence practitioners tend to focus almost exclusively on the adult 
>organism when they refer to biological systems at all, and rarely take into 
>account the continuing developmental aspects of an intelligent/cognitive 
>agent.  The study of the emergence of consciousness is still so slippery 
>as to be taboo in most academic circles (on the other hand I've written
>a paper about this subject and wouldn't mind seeing some discussion of
>the subject here).  
>
Fortunately, there are a few good minds out there who do not seem to be
concerned about academic taboo.  I suppose the pioneer in the study of
developmental changes in intelligence would have to be Jean Piaget,
an astute observer of children at all stages of their development and a
designer of many clever and highly informative experiments.  The work of
Piaget has had some impact on the practice of artificial intelligence, due,
at least in part, to the activity of his former colleague, Seymour Papert.
The influence of Piaget is also readily apparent throughout the pages of Marvin
Minsky's THE SOCIETY OF MIND.

The other major researcher of developmental changes is Gerald Edelman.
Leaving aside such grand matters as intelligence and consciousness, Edelman
begins with the premise that no organism is born with a capacity for perceptual
categorization (in other words, he assumes--and presents arguments for his
assumption--that such a capacity cannot be innate).  He then addresses how
that capacity may be acquired in terms of his Theory of Neuronal Group
Selection (TNGS), supporting his arguments with computer models.  This
is all documented in his book, NEURAL DARWINISM.  (In a later book, THE
REMEMBERED PRESENT, he speculates on how TNGS can also apply to consciousness
and intelligence;  but it is important to bear in mind that "speculates" is the
operative word here.  His experimental work is still down at the level of
perceptual categorization.)
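For readers who want a feel for how selection (rather than instruction) can yield categorization, here is a toy sketch in the spirit of TNGS.  This is emphatically not Edelman's actual model: the repertoire size, the stimuli, and the winner-take-all amplification rule are all invented purely for illustration.  The point is only that a fixed "primary repertoire" of randomly wired groups, plus differential amplification of whichever group responds best, is enough for stable category responses to emerge from experience rather than being built in.

```python
import random

# Toy sketch of selectionist ("Darwinian") learning in the spirit of TNGS.
# Not Edelman's actual model: repertoire size, stimuli, and the winner-
# take-all amplification rule here are invented purely for illustration.

random.seed(0)

N_GROUPS, DIM = 8, 4

# Primary repertoire: connection strengths are random and fixed "at birth";
# no category is built in anywhere.
groups = [[random.uniform(-1.0, 1.0) for _ in range(DIM)]
          for _ in range(N_GROUPS)]
amplification = [1.0] * N_GROUPS  # selection acts only on these gains

def response(weights, stimulus, gain):
    """A group's gain-weighted response to a stimulus."""
    return gain * sum(w * s for w, s in zip(weights, stimulus))

def present(stimulus, rate=0.2):
    """One selectional event: the best-responding group is amplified."""
    scores = [response(g, stimulus, a)
              for g, a in zip(groups, amplification)]
    winner = max(range(N_GROUPS), key=lambda i: scores[i])
    amplification[winner] *= 1.0 + rate
    return winner

# Repeated exposure to two recurring stimulus "categories" selects among
# the pre-existing groups; no group was wired for either category.
cat_a = [1, 1, 0, 0]
cat_b = [0, 0, 1, 1]
for _ in range(30):
    present(cat_a)
    present(cat_b)

# By now the amplification profile is sharply uneven: a few groups have
# been selected to "own" the recurring categories.
print(sorted(round(a, 1) for a in amplification))
```

The design choice worth noticing is that learning never modifies the groups' connection weights, only their gains: experience selects among variants that already exist, which is the sense in which the scheme is Darwinian rather than instructional.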
-- 
USPS:	Stephen Smoliar
	5000 Centinela Avenue  #129
	Los Angeles, California  90066
Internet:  smoliar@venera.isi.edu

mikeb@wdl35.wdl.loral.com (Michael H Bender) (02/15/91)

Mike Sellers writes:

   ... review of previous notes ...

   Your comment spurred something in me that I've thought about from time to
   time, and that may be implicit in Nagle's statement:  What he is referring
   to is the necessity of scaling up artificial intelligence in a phylogenetic
   or evolutionary fashion, not a developmental one. However, the developmental
   view of intelligence will doubtless be invaluable too.  Consider that no one
   is born intelligent/conscious, and yet at least most of us become conscious 
   somewhere along the way (thus we have an existence proof that it is 
   possible to do!). 

What evidence do we have that we were not born conscious? Is it only our
lack of memory from that period or can you point to some other evidence?


   Very little attention has been paid to the phylogenetically based 
   changes in intelligence that can be observed and inferred, and even less 
   to such developmental aspects in humans (strange as that sounds). Artificial
   intelligence practitioners tend to focus almost exclusively on the adult 
   organism when they refer to biological systems at all, and rarely take into 
   account the continuing developmental aspects of an intelligent/cognitive 
   agent.  

If you are referring to the Piaget type of research, I agree completely.

   The study of the emergence of consciousness is still so slippery 
   as to be taboo in most academic circles (on the other hand I've written
   a paper about this subject and wouldn't mind seeing some discussion of
   the subject here).  

I, for one, would be interested in seeing a copy of your paper.

   While we should not ignore the higher-order aspects of human cognition,
   I do not think we will be able to do more than model these in a prescriptive
   fashion (e.g. knowledge-based systems) until we can support the emergence of the desired
   properties from an evolutionary and developmentally viable functional
   architecture.  

Why? Do you think that it is impossible to understand/emulate a system
without understanding how it was developed/built? What about reverse
engineering?

   Modelling the higher-order aspects can be very useful
   in an applied sense, but I do not believe it will move us any closer
   to the transmutation of artifice into general-purpose human-level AI.

Why? Doesn't general-purpose human-level intelligence also exhibit
conscious, prescriptive behavior?  (E.g., look at the behavior of a
compulsive or paranoid person.)

   P.S.  It might be worth noting that I chose the word "transmutation"
   above carefully.  I do not believe we will be able to say that we can
   'create' or 'assemble' or 'formulate' an intelligence like something 
   in physics or mechanics or chemistry.  Rather I think those people who 
   are in on the emergence of the first true AI will be much more easily 
   likened to alchemists:  They will be practitioners of a high and 
   somewhat mystical art who are searching for the correct combination 
   of artifact and environment that will enable a previously mundane 
   collection of things and circumstances to emerge as an entirely new
   and coherent whole, much like the ancient alchemist's goal of finding
   the correct combination of ingredients and components for turning 
   lead into gold. 

It is probably easier, and we will make more money, transmuting lead
into gold....

Mike Bender