[comp.ai.philosophy] Imitations of Humanity

khan@oxy.edu (Onnie Lynn Winebarger) (11/26/90)

     I'm relativity new to this net, so I'm not sure if this question will
really be appropriate.	My question is:  has the possibility of a neural net
hooked up to 2 cameras, connected for stereo vision, with other attempts at
equivalency for the other 4 physical senses, been considered for attempts to
produce intelligence?  I have been thinking that if a neural net was given the
same senses (as far as we can tell, of course we can't be sure they are the same
as our own) as a human, and given something to make it open it's eyes (a proverb
ial slap in the behind) then the flood of information it would recieve, somewhat
like a human baby would recieve, would force it to somehow deal with the flood,
and hopefully, eventually, enable intelligence.  I am rather existentialist in
belief and don't particularly believe that humans are born "intelligent" but
develop their thinking patterns over time in order to deal with what they sense.
If humans, who just happen to be biological machines, anyway, can develop these
thought processes, why shouldn't a machine given that it has very good
approximations of the sensory apparatus and "CPU" that a human has?
I am rather unfamiliar with this field, so I don't know how much this has been
debated or thought of, or anything like that.  I am also pretty sure that we are
unable, at the moment, to build this approximation of a human, but that really
isn't relevant to the question of if this approximation would become
intelligent.
    Also, consider the implications if this machine actually moved after it was
activated (without any programming to do it, but possessing the necessary motors
for movement and grasping and talking, etc. as humans.)

				 Lynn Winebarger
				 khan@oxy.edu
ps.  any info (by e-mail) on how to construct a .sig file would be greatly
 appreciated.	Thanx.

fostel@eos.ncsu.edu (Gary Fostel) (11/29/90)

Winebarger, at Occidental College asked (essentially) if there was a
Holy Grail:

   I'm relativity new to this net, so I'm not sure if this question will
   really be appropriate. My question is:  has the possibility of a neural 
   net hooked up to 2 cameras, connected for stereo vision, with other
attempts 
   at equivalency for the other 4 physical senses, been considered for
attempts 
   to produce intelligence?  I have been thinking that if a neural net
was given 
   the same senses (as far as we can tell, of course we can't be sure they are 
   the same as our own) as a human, and given something to make it open it's 
   eyes (a proverbial slap in the behind) then the flood of information it 
   would recieve, somewhat like a human baby would recieve, would force it to
   somehow deal with the flood, and hopefully, eventually, enable intelligence.

This is a very seductive expectation, but it founders a bit on the
oversimplification of what happens during human development.  This is likely
at the root of some people's inflated expectations of artificial neural nets.
They feel that an artificial net is the same as "brain stuff" and since there
is an existence proof for Brain stuff supporting intelligent behavior, then
artificial nets should be able to behave intelligently as well.  There are
a lot of issues that the "flood of data to the gigantic net" idea miss: 

   1) There is no "blank slate".  We are not born with a vast neural net
      devoid of all structure, ready to be programmed. Different areas of
      the brain have strikingly different connectivity, and it is becomming
      clear that this fine structure is optimized for the role that part of
      the brain is supposed to perform. 

   2) The stream of sensory information that helps create intelligent behavior
      is not arbitrary, but requires carefully structured interaction with
      other humans (usually parents).  Fortunately, those parents seem to
      know what to do; they probably learned it as children, or perhaps it
      is innate. 
 
   3) The structure of the net evolves in direct response to appropriate
      stimulation.  This is not a question of setting weights in response to
      stimulation, but rather the growth of new net-stuff.  If the stimulation
      is wrong during the critical period, the growth either does not occur
      or goes awry

   4) There are external factors that influence behavior that are not part
      of the net, e.g. glandular action and the mix of chemicals in your food.

   5) Artificial neurones are to real neurons as paper airplanes are to birds.
      There are an incredible array of things going on inside a real neurone
      that are abstracted away by the usual "neurone" in artificial net.

I'm sure there are more.  Philisophically, (this is net.ai.philosophy :-)
there are two reasons why artificial nets are never-the-less useful things
to study.  They seem to be a good way to organize the solution of certain
computational problems (usually on digital systems) and they may provide 
some insight into real mind-stuff. There is no reason, in principle, why
these issue can not be addressed in ever more sophisticated neural systems,
but the human "existence proof" for intelligence from nets is not an easy one.

----GaryFostel----                           Department of Computer Science
                                             North Carolina State University   

burley@pogo.ai.mit.edu (Craig Burley) (11/29/90)

In article <1990Nov28.165033.26351@ncsuvx.ncsu.edu> fostel@eos.ncsu.edu (Gary Fostel) writes:

   Winebarger, at Occidental College asked (essentially) if there was a
   Holy Grail:

      I'm relativity new to this net, so I'm not sure if this question will
      really be appropriate. My question is:  has the possibility of a neural 
      net hooked up to 2 cameras, connected for stereo vision, with other
   attempts 
      at equivalency for the other 4 physical senses, been considered for
   attempts 
      to produce intelligence?  I have been thinking that if a neural net
   was given 
      the same senses (as far as we can tell, of course we can't be sure they are 
      the same as our own) as a human, and given something to make it open it's 
      eyes (a proverbial slap in the behind) then the flood of information it 
      would recieve, somewhat like a human baby would recieve, would force it to
      somehow deal with the flood, and hopefully, eventually, enable intelligence.

   This is a very seductive expectation, but it founders a bit on the
   oversimplification of what happens during human development.  This is likely
   at the root of some people's inflated expectations of artificial neural nets.
   They feel that an artificial net is the same as "brain stuff" and since there
   is an existence proof for Brain stuff supporting intelligent behavior, then
   artificial nets should be able to behave intelligently as well.  There are
   a lot of issues that the "flood of data to the gigantic net" idea miss: 

      [very good points omitted]

Plus, even if the neural net you built was an excellent equivalent for the
human brain (whatever that might mean) and the sensory mechanisms adequate,
how do you expect it to ever decide to DO anything with all its "input"?
If the motor mechanisms aren't driven at all, big deal according to this
machine, and if they're driven randomly because of the "clean slate" that I
think means randomized weights in the NN, again big deal...what's going to
drive the machine to learn to focus its eyes on things, to walk, and so on?

It would be neat to see just what would happen, nevertheless.

But it is interesting (at least for me) to think about what we'd have to do
to such a machine to make it actually useful.  I gather our current model for
training NNs is to give them inputs and outputs at the same time, but how could
we reasonably do that for this machine, where the outputs are in some ways
unimportant?  Like, even if we got it to walk and focus and such, how do we
manipulate it to actually learn and think, which it is presumably capable of
doing if so motivated?
--

James Craig Burley, Software Craftsperson    burley@ai.mit.edu

msellers@mentor.com (Mike Sellers) (11/30/90)

Gary Fostel writes:
>Winebarger, at Occidental College asked (essentially) if there was a
>Holy Grail:
>
>   I'm relativity new to this net, so I'm not sure if this question will
>   really be appropriate. My question is:  has the possibility of a neural 
>   net hooked up to 2 cameras, connected for stereo vision, with other
>attempts 
>   at equivalency for the other 4 physical senses, been considered for
>attempts 
>   to produce intelligence?  I have been thinking that if a neural net
>was given 
>   the same senses (as far as we can tell, of course we can't be sure they are 
>   the same as our own) as a human, and given something to make it open it's 
>   eyes (a proverbial slap in the behind) then the flood of information it 
>   would recieve, somewhat like a human baby would recieve, would force it to
>   somehow deal with the flood, and hopefully, eventually, enable intelligence.
>
>This is a very seductive expectation, but it founders a bit on the
>oversimplification of what happens during human development.  This is likely
>at the root of some people's inflated expectations of artificial neural nets.
>They feel that an artificial net is the same as "brain stuff" and since there
>is an existence proof for Brain stuff supporting intelligent behavior, then
>artificial nets should be able to behave intelligently as well.  There are
>a lot of issues that the "flood of data to the gigantic net" idea miss: 

This is an excellent point:  artificial neural nets are not magic.  Biological
neural nets probably aren't either. :-)   The extension from a small network 
to a large self-developing neural mass (made of many nets) is not a straight-
forward one.  However, some research (and three book reviews in this month's
AI Expert) is beginning to show a change in this as ANNs are applied to 
problems that require more real-world interactivity.

I agree with Gary's thrust in general, but would take exception on a few 
points:

>   1) There is no "blank slate".  We are not born with a vast neural net
>      devoid of all structure, ready to be programmed. Different areas of
>      the brain have strikingly different connectivity, and it is becomming
>      clear that this fine structure is optimized for the role that part of
>      the brain is supposed to perform. 

True, but large parts of the mammalian brain (most notably the frontal, 
parietal, and parts of (I believe) the superior temporal cortex) do not 
appear to be strongly differentiated at birth.  This process takes place 
over the next several years at least.  This implies something of a "blank
slate" that is developed via interaction with the environment through the
other, already more developed structures.  

>   3) The structure of the net evolves in direct response to appropriate
>      stimulation.  This is not a question of setting weights in response to
>      stimulation, but rather the growth of new net-stuff.  If the stimulation
>      is wrong during the critical period, the growth either does not occur
>      or goes awry

Actually, "setting weights" is not that bad an abstraction for synapse LTP
given the correct view.  If you look at the processing nodes in an ANN as 
a small ensemble of neurons that generally respond as a group rather than 
as each node being a single neuron, and if you start with complete 
connectivity to all the other ensembles in the net, then altering the weights
(especially to 0) simulates the long-term development of synapses, with a 0
weight corresponding to the degeneration of the connections between ensembles.  
  It is true, however, that during the last trimester before birth and during
the first year or two after birth, new neurons are being generated at an
average rate of about 100,000 *per minute*, something that no current model
that I know of has begun to address.  These cells then migrate into place 
within the neuronal matrix by way of a very complex and still mysterious 
mechanism (the brain is not just a mass of neurons blobbed together!), and
begin to form synapses.  In some parts of the brain only a few of the 
synapses created this way will degenerate, while in other parts (notably 
those like the prefrontal cortex that are the most susceptible to environmental
conditions), upwards of 90% of all the synapses and the neuronal soma will 
degenerate, presumably from disuse.  Still, the major as-yet unaddressed 
conditions have to do with level of abstraction that we can get away with 
and still have a good model, the rate of new neurons being generated in 
the early development of the human brain, and the sheer scale and complexity
of the structures that we are hoping to model.

>   4) There are external factors that influence behavior that are not part
>      of the net, e.g. glandular action and the mix of chemicals in your food.

But these ultimately have an effect on synaptic behavior, which may be
modellable by altering synaptic weights.  The primary difference between 
chemically-mediated and neurally-mediated effects are that chemicals can 
be quickly diffused throughout a large part of the neural system (as by the
cerebrospinal fluid), affecting a wide variety of synapses more quickly 
and uniformly than if the change had to be propogated synapse to synapse, 
cell by cell.  

>   5) Artificial neurones are to real neurons as paper airplanes are to birds.
>      There are an incredible array of things going on inside a real neurone
>      that are abstracted away by the usual "neurone" in artificial net.

This is probably the most philosophical of all the issues you've raised.
When does the simulation of neurons correspond enough to the "real thing"
that we are no longer simply doing a simulation?  How much of the structure
and function of neurons are just nature's expedients, and which parts are
expedients that have since evolved into crucial parts of the neural 
mechanism?  I think there may be some (Searle?) who believe that we will
never get to the point where we are no longer _simulating_ neural systems,
but are actually building their functioning analogs.  For myself, I'm not
so sure.

>----GaryFostel----                           Department of Computer Science
>                                             North Carolina State University   


-- 
Mike Sellers     msellers@mentor.com     Mentor Graphics Corp.

"I used to think that the brain was the most wonderful organ in my
body.  Then I realized who was telling me this." -- Emo Phillips

greenba@gambia.crd.ge.com (ben a green) (11/30/90)

In article <BURLEY.90Nov28125234@pogo.ai.mit.edu> burley@pogo.ai.mit.edu (Craig Burley) writes:

   Plus, even if the neural net you built was an excellent equivalent for the
   human brain (whatever that might mean) and the sensory mechanisms adequate,
   how do you expect it to ever decide to DO anything with all its "input"?
   If the motor mechanisms aren't driven at all, big deal according to this
   machine, and if they're driven randomly because of the "clean slate" that I
   think means randomized weights in the NN, again big deal...what's going to
   drive the machine to learn to focus its eyes on things, to walk, and so on?

   It would be neat to see just what would happen, nevertheless.

   But it is interesting (at least for me) to think about what we'd have to do
   to such a machine to make it actually useful.  I gather our current model for
   training NNs is to give them inputs and outputs at the same time, but how could
   we reasonably do that for this machine, where the outputs are in some ways
   unimportant?  Like, even if we got it to walk and focus and such, how do we
   manipulate it to actually learn and think, which it is presumably capable of
   doing if so motivated?

Maybe this thread is starting to get interesting:

What more is needed for an artifical person is motivation, or needs, and
an adaptive mechanism that raises the probability of the behavior that
satisfies them. This is "reinforcement".

You also need a mechanism for acquiring new needs that are based on the
old ones. So a robot that starts out inherently valuing only survival
can acquire a taste for electrical outlets ... .

Warning! This line of thought may lead you into the dreaded world of
Behaviorism of the kind explained by B. F. Skinner in "About Behaviorism."
And, oh yes, whatever you think you know about behaviorism is probably 
wrong unless you got it straight from Skinner.

Ben
--
Ben A. Green, Jr.              
greenba@crd.ge.com
  Speaking only for myself, of course.