[comp.ai] Biological relevance and AI

msellers@mntgfx.mentor.com (Mike Sellers) (06/15/88)

[This is a (slightly edited) re-post of a message that I don't think made it 
out into the net.  Apologies if you've already seen this. ]

In article <13100@shemp.CS.UCLA.EDU>, bjpt@maui.cs.ucla.edu (Benjamin Thompson) writes:
>In article <10510@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu writes:
>> Gerald Edelman, for example, has compared AI with Aristotelian
>> dentistry: lots of theorizing, but no attempt to actually compare
>> models with the real world.  AI grabs onto the neural net paradigm,
>> say, and then never bothers to check if what is done with neural
>> nets has anything to do with actual brains.

Where are you getting your information regarding AI & the neural net paradigm?
I agree that there is a lot of hype right now about connectionist/neural nets,
but this alone does not invalidate them (they may not be a panacea, but they 
probably aren't worthless either).  There are an increasing number of people 
interested in (and to some degree knowledgeable of) both the artificial and 
biological sides of sensation, perception, cognition, and (some day) 
intelligence.  See for example the PDP books or Carver Mead's upcoming book 
on analog VLSI and neural systems (I just finished a class in this -- whew!).
There have been recent murmurings from some of the more classical AI types 
(e.g. Seymour Papert in last winter's Daedalus) that the biological 
paradigm/metaphor is not viable for AI research, but those making them seem
to me either to be overstating the case against connectionism or to be simply
unaware of what is being done.  Others contend that anything involving 'wetware' is 
not *really* AI at all, and thus shouldn't invade discussions on that subject.
This is, I believe, a remarkably short-sighted view that amounts to denying
the possibility of a new tool to use.

> This is symptomatic of a common fallacy.  Why should the way our brains
> work be the only way "brains" can work?  Why shouldn't *A*I workers look
> at weird and wonderful models?  We (basically) don't know anything about
> how the brain really works anyway, so who can really tell if what they're
> doing corresponds to (some part of) the brain?
> 
> Ben

I think Ben's second and following sentences here are symptomatic of a common 
fallacy, or more precisely of common misinformation and ignorance.  No one has 
said or implied that biological nervous systems have a monopoly on viable 
methodologies for sensation, perception, and/or cognition.  There probably are 
many different ways in which these types of problems can be tackled.  We do
have a considerable amount of knowledge about the human brain, and (for 
the time being more to the point) about invertebrate nervous systems and the
actions of individual neurons.  And finally, correspondence to biological
systems, while important, is by no means a simple or easily achieved goal
(see below).  On the other hand, we can say at least two things about the 
current state of implemented cognition: 
  1) The methods we now call 'classical' AI, starting from about the late 
1950's or early 60's, have not made an appreciable dent in their original 
plans nor even lived up to their original claims.  To refresh your memory, 
a quote from 1958:

     "...there are now in the world machines that think, that learn and 
     that create.  Moreover, their ability to do these things is going to
     increase rapidly until --in a visible future-- the range of problems
     they can handle will be coextensive with the range to which the human
     mind has been applied."

This quote is from H. Simon and A. Newell in "Heuristic Problem Solving:
The Next Advance in Operations Research" in _Operations Research_ vol 6,
published in *1958*.  (It was recently quoted by Dreyfus and Dreyfus in the 
Winter 1988 edition of Daedalus, on page 19.)  We seem to be no closer to 
the realization of this claim than we were thirty years ago.
  2)  We do have one instance that proves that sensation, perception, and
cognition are possible: natural nervous systems.  Thus, even though there 
may be other ways of solving the problems associated with vision, for 
example, it would seem that adopting some of the same strategies used by
other successful systems would increase the likelihood of our success.  
While it is true that there is more unknown than known about nervous 
systems, we do know enough about neurons, synapses, and small aggregates 
of neurons to begin to simulate their structure and function.
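
To make the claim concrete, here is a minimal sketch of such a simulation,
using the leaky integrate-and-fire neuron -- a standard textbook abstraction,
not anything specific from this discussion; the function name and all
parameter values are illustrative assumptions:

```python
# A leaky integrate-and-fire neuron: membrane potential decays toward rest,
# is driven by input current, and emits a spike when it crosses threshold.
# All parameter values here are illustrative, not biological measurements.

def simulate_lif(inputs, dt=1.0, tau=10.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Integrate input current over time; emit a spike (1) whenever the
    membrane potential crosses threshold, then reset the potential."""
    v = v_rest
    spikes = []
    for i_in in inputs:
        # Leaky integration: decay toward rest, plus the input drive.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes
```

Even this crude model captures the threshold-and-reset behavior of a spiking
cell: a sustained input produces a regular spike train, while no input
produces silence.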

  The issue of how much to simulate is a valid and interesting one.  Natural 
nervous systems have had many millions of years to evolve their solutions 
(much longer than we hope to have to take with our artificial systems), but 
then they have been both undirected in their evolution and constrained by
the resources and techniques available to biological systems.  This would 
seem to argue for only limited biological relevance to artificial solutions: 
e.g., where neurons have axons, we can simply use wires.  On the other hand, 
natural systems also have the tendency to take a liability and make it into
a virtue.  For example, while axons are not simple 'wires', and in fact are
much slower, larger, and more complex than wires, they can also act as active
parts of the whole system, enabling such things as temporal differentiation
to occur easily and without much in the way of cellular overhead.  Thus, 
while we probably will not want to create fully detailed simulations of
neurons, synapses, and neural structures, we do need to understand what 
advantages are embodied in the natural approach and extract them for use in
our artifices while carefully excluding those things that exist only by
being carried along with the main force of the evolutionary current.
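
The delay-line point above can be made concrete with a toy sketch of my own
devising (the names and the delay value are illustrative): subtracting a
delayed copy of a signal from itself approximates a temporal derivative,
which is just the sort of computation a slow conduction path provides almost
for free.

```python
# Crude temporal differentiation via a delayed copy of the input signal.
# Think of 'delay' as the extra transit time along a slow axonal path.

def temporal_difference(signal, delay=1):
    """Return signal[t] - signal[t - delay], treating pre-history as 0."""
    out = []
    for t, x in enumerate(signal):
        past = signal[t - delay] if t >= delay else 0.0
        out.append(x - past)
    return out
```

A constant input produces only an onset transient, while a steady ramp
produces a constant output -- crude differentiation, with no machinery
beyond the delay itself.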
  All of this is not to say that AI researchers shouldn't look at "weird and 
wonderful models" of perception and cognition; this is after all precisely 
what they have been doing for the past thirty years.  The only assertion here 
is that this approach has not yielded much in the way of fertile results 
(beyond the notable products such as rule-based systems, windowed displays, 
and the mouse :-) ), and that with new technology, new knowledge of biological
systems, and a new generation of researchers, the one proven method for 
achieving real-time sensation, perception, and cognition ought to be given 
its chance to fail.

Responses welcomed.

-- 
Mike Sellers                           ...!tektronix!sequent!mntgfx!msellers
Mentor Graphics Corp., EPAD            msellers@mntgfx.MENTOR.COM
"AI is simply the process of taking that which is meaningful, and making it
meaningless."  -- Tom Dietterich  (admittedly, taken out of context)

wlieberm@teknowledge-vaxc.ARPA (William Lieberman) (06/15/88)

Just to add slightly to Ben and Mike's discussion: Ben's naturally good
question about why anyone should assume that we humans on earth uniquely
possess capabilities such as intelligence (i.e. the biological system that
makes us up), and Mike's reply that no such assumption is really made,
remind me of a question scientists asked in a not-too-distant earlier age:
'How likely is it that the chemistry of the world, as we know it, exists
in the same state outside the earth?'

	A reasonable question. Then when helium was demonstrated to exist
on the sun (through spectrographic analysis around the 1860's??) and around
the same time when the table of the elements was being built up empirically
and intuitively, the evidence favored the idea that our local chemical and
physical laws were probably universal. As a youngster I used to wonder
why chemists, etc. kept saying there are only around 100 or so elements
in the universe. Why couldn't there be millions?  But the data do suggest
the chemists are correct: the matter of the universe is built from
relatively few elements. What I'm saying here is that it may be prudent
to expect not too many diverse 'forms' of intelligence around. Rough
analogy, I agree; but sometimes the history of science can provide useful
guideposts. Right now we have some sensible ideas about what it takes to
do certain kinds of analyses; but no one really knows what it takes to
enable a state of consciousness to exist, for example. One answer surely
lies in research in biophysics (and probably CS-AI).

Bill Lieberman

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (06/18/88)

From article <23201@teknowledge-vaxc.ARPA>, by wlieberm@teknowledge-vaxc.ARPA (William Lieberman):
" ...
" 	A reasonable question. Then when helium was demonstrated to exist
" on the sun (through spectrographic analysis around the 1860's??) and around
" the same time when the table of the elements was being built up empirically
"...
" the chemists are correct: the matter of the universe is built from
" relatively few elements. What I'm saying here is that it may be prudent
" to expect not too many diverse 'forms' of intelligence around. Rough
" analogy, I agree; but sometimes the history of science can provide useful
" ...

It's not even analogous unless you have a table of intelligence.  Maybe
you do.  If so, how many entries does it have room for?

		Greg Lee, uhccux.uhcc.hawaii.edu

tjhorton@csri.toronto.edu (Tim Horton) (06/22/88)

msellers@mntgfx.mentor.com writes:
>>> ...  AI grabs onto the neural net paradigm,
>>> say, and then never bothers to check if what is done with neural
>>> nets has anything to do with actual brains.
>
>Where are you getting your information regarding AI & the neural net paradigm?
>...  We do have a considerable amount of knowledge about the human brain,
>and (for the time being more to the point) about invertebrate nervous
>systems and the actions of individual neurons.

Where are you getting your information regarding the human brain?
Most of the brain is unknown; it really is an unscratched problem.
And on the contrary, knowledge of the operation of individual neurons
seems largely beside the point.

Look at it this way.  What if we were without any appropriate theory
(of serial computation, or even of electronic calculation), and a Motorola
68030 landed from another planet and inspired awe among us?

We might be able to watch voltage levels on pins, or take chips apart
and guess at the makings of structures like transistors and resistors.
We might figure out some of the 68000's I/O dependencies, and a
little bit about what happens when we poke a probe here or there.
One could imagine, then, claiming to "have a considerable amount of
knowledge" about 68000's.  It would certainly be a curious statement.
We surely mightn't have a clue about the underlying principles of CPU
design; very likely we wouldn't understand some of the most basic
fundamentals.  If, for example, we were without integer arithmetic,
much of what a 68000 did wouldn't make a speck of sense...

(Realize that even that little bit of math took us many thousands of
years to develop.  Surely computer science is young enough to admit of
simple and elegant but as-yet-undiscovered principles, along
as-yet-unimagined lines)...

Meanwhile, somebody might build "equivalents" to transistors -- call them
"light switches" -- and proceed to do research on things *we* would call
"electric lighting".  Though it may be all the rage at the time, and
garner support from granting agencies that overestimate the 68000-ish
properties, the research wouldn't exactly have a damn thing to do with
essential properties of the MC68030.

This is the sort of thing I think this person was getting at.


As for neural nets; take backpropagation for instance. It's almost
completely implausible for any biological system we know of, but it may
be one of the best general-purpose algorithms available.  Not only does
it require more than any biological neural system apparently provides,
but it's ridiculously slow.  (People really don't always seem to like to
tell you how *long* it takes them to get their hyped-up results!)  Further,
the basic structure of the problems this technology works on doesn't seem
to have much at all to do with the structure of most problems that natural
intelligence addresses.
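
For concreteness, here is the textbook backpropagation step being referred
to, on the smallest possible net: one input, one hidden unit, one output,
squared error.  This is a generic formulation of my own, not code from any
of the hyped systems; all names, sizes, the learning rate, and the iteration
count are illustrative.  Both complaints are visible in it: the update to
w1 needs the output error carried backwards through w2 (information no
single synapse is known to have locally), and even a trivial mapping takes
thousands of iterations.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def backprop_step(x, target, w1, w2, lr=0.5):
    """One forward/backward pass on a 1-1-1 sigmoid net with squared error."""
    # Forward pass
    h = sigmoid(w1 * x)   # hidden activation
    y = sigmoid(w2 * h)   # output activation
    # Backward pass: the output error delta2 must travel back through w2
    # to form delta1 -- the non-local step biology doesn't seem to provide.
    delta2 = (y - target) * y * (1.0 - y)
    delta1 = delta2 * w2 * h * (1.0 - h)
    # Gradient-descent weight updates
    w2 -= lr * delta2 * h
    w1 -= lr * delta1 * x
    return w1, w2, y

# Driving the output toward 0.0 for a fixed input of 1.0 takes on the
# order of thousands of these steps.
w1, w2 = 0.5, 0.5
for _ in range(2000):
    w1, w2, y = backprop_step(1.0, 0.0, w1, w2)
```

After the loop the output weight has been driven well negative and the
output is near its target -- but only after a great many passes, which is
the slowness complained about above.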