[sci.lang] the role of biological models in ai

rolandi@gollum.Columbia.NCR.COM (rolandi) (12/10/87)

Marty!

Sorry about our previous misunderstanding.  But regarding your reply ...

> You know perfectly well that, as a technology
> matures, it stops modeling its techniques on "natural processors" and
> develops artificial substitutes that were previously unknown.  You
> don't fly by flapping wings, your car doesn't propel itself with legs,
> and your air conditioner sweats as a result of cooling, not the other
> way around.  We first learn from natural processors, and then we
> progress by inventing artificial processors.

You make a good point here but, in a way, your examples work against your
own argument.  According to some AI theorists (see Schank, R.C. (1984).
The Cognitive Computer. Reading, Mass.: Addison-Wesley),
AI is "an investigation into human understanding through which we learn
...about the complexities of our own intelligence."  Thus, at least for
some AI researchers, the automation of intelligent behavior is secondary
to the expansion and formalization of our self-understanding.  This is 
assumed to be the result of creating computational "accounts" of (typically 
intellectual) behavior.  Researchers write programs which display the 
performance characteristics of humans within some given domain.  The
efficacy of a program is a function of the similarity of its performance 
to the human performance after which it was modeled.  Thus AI programs are 
(often) created in order to "explain" the processes that they model.
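
To make this evaluation criterion concrete, here is a toy sketch in Python
(illustrative only; the data are hypothetical) of scoring a program by how
often its responses agree with the human responses it was modeled after:

    def performance_match(program_responses, human_responses):
        # Fraction of stimuli on which the program agrees with the
        # human performance it was modeled after.
        agreements = sum(p == h for p, h in
                         zip(program_responses, human_responses))
        return agreements / len(human_responses)

    # Hypothetical answers to five analogy problems.
    human   = ["B", "C", "A", "D", "B"]
    program = ["B", "C", "A", "A", "B"]
    print(performance_match(program, human))   # prints 0.8

On this view, the closer the score gets to 1.0, the better the program
"explains" the behavior it models.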

Although one of your examples (the airplane) is a machine that employs
principles derived from studying natural flight, I don't
think many people would argue that the airplane was invented in order to
"explain" flight.  Of your other examples, I do not think that the workings 
of an automobile have ever been thought to provide insights into the nature
of human locomotion.  Nor do I believe that the "sweat" of an air conditioner
is in any meaningful way related to perspiration in humans.


-w.rolandi
ncrcae!gollum!rolandi
Look Boss, DisClaim! DisClaim!

sarima@gryphon.CTS.COM (Stan Friesen) (12/15/87)

In article <23@gollum.Columbia.NCR.COM> rolandi@gollum.UUCP () writes:
>
>   According to some AI theorists (see Schank,
>R.C. (1984). The Cognitive Computer. Reading, Mass.: Addison-Wesley),
>AI is "an investigation into human understanding through which we learn
>...about the complexities of our own intelligence."  Thus, at least for
>some AI researchers, the automation of intelligent behavior is secondary
>to the expansion and formalization of our self-understanding.  This is 
>assumed to be the result of creating computational "accounts" of (typically 
>intellectual) behavior.  Researchers write programs which display the 
>performance characteristics of humans within some given domain.  The
>efficacy of a program is a function of the similarity of its performance 
>to the human performance after which it was modeled.  Thus AI programs are 
>(often) created in order to "explain" the processes that they model.
>
My problem with this class of AI research is that I question its
validity/usefulness.  Why should there be only *one* algorithm for a
particular 'behavior'?  What evidence do we have that the algorithms that
we are writing into our programs are in fact related in any way to the
ones used by the human brain?  Mere parallel behavior is NOT sufficient
evidence to claim increased understanding of a human behavior; some
evidence from neurology and psychology is necessary to at least
demonstrate applicability.  In particular, I find most current AI
algorithms to be far too analytical to be realistic models of human,
or even animal, cognition.
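
To put the point concretely: two procedures can be behaviorally identical
yet internally unrelated, so matching behavior alone cannot tell us which
process the brain uses.  A toy sketch in Python (illustrative only):

    def factorial_iterative(n):
        # Accumulates the product step by step in a loop.
        result = 1
        for k in range(2, n + 1):
            result *= k
        return result

    def factorial_recursive(n):
        # Reduces the problem to a smaller instance of itself.
        return 1 if n <= 1 else n * factorial_recursive(n - 1)

    # Identical input-output behavior, entirely different algorithms.
    assert all(factorial_iterative(n) == factorial_recursive(n)
               for n in range(10))

An observer who sees only inputs and outputs has no grounds for deciding
which of the two "explains" the computation.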

msellers@mntgfx.mentor.com (Mike Sellers) (12/18/87)

>In article <23@gollum.Columbia.NCR.COM> rolandi@gollum.UUCP () writes:
>>
>>   According to some AI theorists (see Schank,
>>R.C. (1984). The Cognitive Computer. Reading, Mass.: Addison-Wesley),
>>AI is "an investigation into human understanding through which we learn
>>...about the complexities of our own intelligence."  Thus, at least for
>>some AI researchers, the automation of intelligent behavior is secondary
>>to the expansion and formalization of our self-understanding.  

From what I've seen of AI research, this may not be true (in most cases).  
I think most AI researchers are not so concerned with self-understanding as 
they are with creating a program that interacts with humans in a seemingly
intelligent way.  It makes no difference whether the methods or structures
used bear any resemblance to the human way of doing things.  I believe the
problem for most active researchers is one of scale: you cannot possibly
hope to create a program that models human cognitive processing in full,
yet you have to get *something* running, so you set your sights a little
lower and brush aside
questions of how well the program corresponds to humans.  This is not meant
to sound demeaning or even cynical, just realistic.

>>This is 
>>assumed to be the result of creating computational "accounts" of (typically 
>>intellectual) behavior.  Researchers write programs which display the 
>>performance characteristics of humans within some given domain.  The
>>efficacy of a program is a function of the similarity of its performance 
>>to the human performance after which it was modeled.  Thus AI programs are 
>>(often) created in order to "explain" the processes that they model. 

The last three statements are, I believe, rarely (if ever, in "classical" 
AI research) true.  In the vast majority of cases, we do not even know what 
the "performance characteristics of humans" are!  For a task of any real 
complexity, modeling a human's performance (when it can be measured) is 
still a matter of theory and conjecture rather than programming (see the
scale problem I mentioned above).  For example, for all their hype and
worth, knowledge-based (expert) systems do not even begin to approximate
the actions of a human expert.  The most advanced projects in this area
have some explanatory capabilities, and some skill at incorporating new or
conflicting facts into their decision-making process, but this is just
scratching the surface of how human experts operate.  Lastly, current
AI programs are like the stork-story of human birth as far as explaining
human behavior or cognitive processing goes; they may provide something
that we can learn from later on, but they do not get us any closer
to knowing what is really going on.
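
To illustrate how shallow that surface-scratching is, consider a toy
rule-based "expert" in Python (illustrative only, not modeled on any real
system) whose entire explanatory capability is a replay of which rules
fired:

    RULES = [
        ({"fever", "cough"}, "flu"),
        ({"fever", "rash"}, "measles"),
        ({"cough"}, "cold"),
    ]

    def diagnose(symptoms):
        # Fire the first rule whose conditions are all present,
        # keeping the match/reject trace as the "explanation".
        trace = []
        for conditions, conclusion in RULES:
            if conditions <= symptoms:
                trace.append("matched %s -> %s"
                             % (sorted(conditions), conclusion))
                return conclusion, trace
            trace.append("rejected %s" % sorted(conditions))
        return "unknown", trace

    verdict, trace = diagnose({"fever", "cough"})
    print(verdict)            # flu
    print("; ".join(trace))   # the system's entire "explanation"

A trace of fired rules is an audit log, not an account of how a human
expert actually reasons.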

In article <2590@gryphon.CTS.COM>, sarima@gryphon.CTS.COM (Stan Friesen) writes:
>My problem with this class of AI research is that I question its
>validity/usefulness.  Why should there be only *one* algorithm for a
>particular 'behavior'?  What evidence do we have that the algorithms that
>we are writing into our programs are in fact related in any way to the
>ones used by the human brain?  Mere parallel behavior is NOT sufficient
>evidence to claim increased understanding of a human behavior; some
>evidence from neurology and psychology is necessary to at least
>demonstrate applicability.  In particular, I find most current AI
>algorithms to be far too analytical to be realistic models of human,
>or even animal, cognition.

Most AI algorithms have little if any resemblance to how humans function.
How important this fact is depends on who you talk to.  Of those people
doing research in PDP (parallel distributed processing, or artificial
neural networks, or connectionist nets, etc), many are convinced that some
correspondence with the functioning of the human brain is important (possibly
vital).  This is not to say that this way of operating is the "only way".
It is, however, the only way that we know of.  Later, when we have all the
principles behind cognition down pat, we can begin to branch out in different
directions.  Interestingly, many of the people doing this research are
psychologists and neurologists, so there is (hopefully) an increasing amount
of knowledge and techniques from these fields being used in this research.
For the time being, however, the level of cognition we will see arising
from PDP research will be more reminiscent of a flatworm or a sea slug
than a dog or a human (though I predict this is still more than we will
see from the "classical" AI methods, which will continue to be more
concerned with outward function than with inward correspondence).
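
For the curious, here is roughly what computation at that flatworm level
looks like: a single threshold unit whose "knowledge" is a set of numeric
connection weights adjusted by error feedback rather than explicit rules.
This is a toy sketch in Python of the classic perceptron learning rule,
illustrative only:

    def train_unit(examples, rate=0.1, epochs=50):
        # One unit with two input connections and a bias (threshold).
        w1, w2, b = 0.0, 0.0, 0.0
        for _ in range(epochs):
            for (x1, x2), target in examples:
                out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
                err = target - out       # error feedback
                w1 += rate * err * x1    # strengthen or weaken
                w2 += rate * err * x2    #   the connections
                b += rate * err
        return w1, w2, b

    # The unit learns logical OR from examples, not from a stated rule.
    examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w1, w2, b = train_unit(examples)
    for (x1, x2), _ in examples:
        print((x1, x2), 1 if w1 * x1 + w2 * x2 + b > 0 else 0)

Whatever the unit "knows" lives in the weights, not in any symbolic rule;
that inward correspondence is precisely what classical AI does not attempt.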

-- 
Mike Sellers
...!tektronix!sequent!mntgfx!msellers
Mentor Graphics Corp.
Electronic Packaging and Analysis Division