[comp.ai.neural-nets] Some Questions about NN

ambati@acf5.NYU.EDU (Balamurali Ambati) (02/01/91)

Is it correct to say that people have had more success in modelling 
biological phenomena (i.e., in this case, describing neuronal networks 
in the visual pathway / hippocampus / ...) than in designing networks
to solve problems that the visual pathway / hippocampus / ... can 
solve?

A simple example is the immense difficulty in making a computer "see."
Of course, one of the problems is that "seeing" involves much more 
than the visual pathway alone.  But is this the only problem?

It's my understanding that Hopfield-Tank and other similar neural network 
models are not that useful (when compared to existing digital algorithms 
and even some genetic algorithms) in obtaining near-optimal solutions 
to combinatorial optimization problems such as TSP, etc.  Is this because
these models are simplistic in terms of describing the appropriate 
neurons?  Or is this because the human brain was not designed to solve 
TSP, etc.?  Is it worthwhile making neural networks that can themselves 
invent specific algorithms (somewhat like humans make machines solve 
specific problems)?  Is it possible / simple?

Balamurali K. Ambati

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (02/01/91)

In article <1467@acf5.NYU.EDU> ambati@acf5.UUCP (Balamurali Ambati) writes:

>Is it correct to say that people have had more success in modelling 
>biological phenomena (i.e., in this case, describing neuronal networks 
>in the visual pathway / hippocampus / ...) than in designing networks
>to solve problems that the visual pathway / hippocampus / ... can 
>solve?

That's a loaded question.  For the most part, we don't have much
hard information about what is really going on in (let's say) visual
processing in humans.  We think we know a lot about the retina, but
past that we don't have the hard facts.  There are a lot of high-level
descriptive theories (e.g., Marr's) of what is going on, but there is
controversy even over the high-level descriptions.

The problem is multi-faceted:
1)  There are so many neurons involved in these processes
2)  They are packed together incredibly densely
3)  Fan-in and fan-out can be in the range of 1,000-10,000 for each neuron
4)  Not every brain is the same, and there is some evidence that
    small-scale brain structures can change slightly over time
5)  We are just barely getting a complete picture of what a single
    neuron does, and there are around 1000 different neuron types

You see, we can't pick apart a human brain neuron by neuron and figure
out what is going on computationally.  We have to seek an understanding
of the brain at the level of neural organization instead of individual
neuron connections.  In other words, we have to answer how populations
of neurons can organize themselves (operationally or genetically) to
produce information-processing structures.

>It's my understanding that Hopfield-Tank and other similar neural network 
>models are not that useful (when compared to existing digital algorithms 
>and even some genetic algorithms) in obtaining near-optimal solutions 
>to combinatorial optimization problems such as TSP, etc.  Is this because
>these models are simplistic in terms of describing the appropriate 
>neurons?

Neural networks are usually used to form non-linear mappings between
an input space and an output space.  This is a very general problem
to solve, so it is reasonable that they do not perform as well as
specially crafted TSP programs.  I don't think there is anything you
can say psychologically about TSP results from neural networks.

Why?  Well, we don't know that the brain is organized in any manner
like the neural net models which we put forward.  There is also no
reason that traditional neural net models can't resemble neural
population organization, but there is no positive evidence that they
do.  Neural nets are still capable of learning some neat things for
themselves, but they need to be a bit more specialized before we see
anything like proto-intelligence coming out of them.
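
For concreteness, here is a minimal sketch (mine, not code from
Hopfield and Tank; the penalty weights A, B, C, D are illustrative
assumptions that in practice need careful hand-tuning) of the energy
function their network minimizes for an n-city TSP, in Python/NumPy:

    import numpy as np

    # Hopfield-Tank TSP energy.  V is an n x n matrix where V[x, i] ~ 1
    # means city x occupies position i in the tour; dist[x, y] is the
    # inter-city distance matrix.  Weights below are illustrative only.
    def tsp_energy(V, dist, A=500.0, B=500.0, C=200.0, D=1.0):
        n = V.shape[0]
        # each city should sit in exactly one position (one 1 per row)
        row = sum(V[x, i] * V[x, j]
                  for x in range(n) for i in range(n)
                  for j in range(n) if j != i)
        # each position should hold exactly one city (one 1 per column)
        col = sum(V[x, i] * V[y, i]
                  for i in range(n) for x in range(n)
                  for y in range(n) if y != x)
        # total activation should equal the number of cities
        count = (V.sum() - n) ** 2
        # the tour itself should be short (positions wrap around)
        length = sum(dist[x, y] * V[x, i] *
                     (V[y, (i + 1) % n] + V[y, (i - 1) % n])
                     for x in range(n) for y in range(n) if y != x
                     for i in range(n))
        return (A * row + B * col + C * count + D * length) / 2.0

The network does gradient descent on this energy, and the difficulty
is visible right there in the weights: the constraint terms and the
tour-length term fight each other, which is part of why purpose-built
TSP heuristics usually win.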

Some of the most interesting "neuromorphic" work has come from looking
at a part of the brain which we do have a lot of understanding of...
the retina.  We know that lateral inhibition is used in the retina
to increase the dynamic range of illumination across the retina.  It
can also be used to bring out edges.  Retina-model image preprocessors
have shown themselves to be of great use, especially with things like
IR detectors, whose response tends to differ from detector element to
detector element.
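
A minimal sketch of that kind of retina-style preprocessing (my
illustration; the 5x5 uniform surround is an arbitrary choice where a
real model would use something closer to a difference of Gaussians):

    import numpy as np
    from scipy.signal import convolve2d

    # Center-surround lateral inhibition: each output pixel is the
    # input pixel minus its local neighborhood average.  Slowly varying
    # gain (e.g. element-to-element drift in an IR detector array)
    # cancels out, while edges are enhanced.
    def lateral_inhibition(image, size=5):
        surround = np.ones((size, size)) / (size * size)
        kernel = -surround
        kernel[size // 2, size // 2] += 1.0  # excitatory center
        return convolve2d(image, kernel, mode='same', boundary='symm')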

There is also evidence that the brain uses spatial-frequency
information for visual processing.  Of course, engineers have been
using Fourier transforms for a long time to do spatial-frequency-domain
processing.
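
For example, a low-pass filter applied in the spatial-frequency domain
(again a sketch of mine; the Gaussian cutoff sigma is an arbitrary
illustrative parameter):

    import numpy as np

    # Filter an image by multiplying its 2-D Fourier transform with a
    # Gaussian centered on the zero-frequency component, then
    # transforming back.  Larger sigma keeps more high frequencies.
    def gaussian_lowpass(image, sigma=10.0):
        F = np.fft.fftshift(np.fft.fft2(image))
        rows, cols = image.shape
        y, x = np.ogrid[-(rows // 2):rows - rows // 2,
                        -(cols // 2):cols - cols // 2]
        G = np.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
        return np.real(np.fft.ifft2(np.fft.ifftshift(F * G)))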

But more importantly, I think we have to understand the non-linear
dynamics of neural circuitry.  Feed-forward neural nets are wonderful
for toy problems, but the real world has a temporal component which we
have to deal with.  We also have to integrate heuristic learning
mechanisms from traditional AI, which work so well on "symbolic"
processes, with traditional neural net models, which handle
non-well-behaved information well.
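
One cheap way to graft a temporal component onto a feed-forward net is
a tapped delay line, as in time-delay neural networks: present the
last k samples as a single static input pattern (a sketch; the window
length k is an arbitrary illustrative choice):

    import numpy as np

    # Turn a 1-D signal into a matrix of overlapping k-sample windows,
    # one row per time step; each row can then be fed to an ordinary
    # feed-forward net as a static pattern.
    def delay_line(signal, k=5):
        return np.array([signal[t - k + 1:t + 1]
                         for t in range(k - 1, len(signal))])

This only buys a fixed memory depth, though; genuinely recurrent
dynamics are the harder and more interesting problem.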

-Thomas Edwards

gowj@novavax.UUCP (James Gow) (02/04/91)

Are there any references to work on graphical representations in
knowledge bases (KBs)?
linc
james