[comp.ai.neural-nets] Defining a Neural Network

guedalia@bimacs.BITNET (David Guedalia) (11/14/90)

Hi,
  Has anyone seen or heard a definition for a Neural Network?
Must a Neural Net have specific types of neurons?  Are there any
criteria for the hardware implementation of the NN?  Would a Kohonen
Feature Map be considered an NN?
  I am writing my thesis and need to prove that the ANN that I may
build (one day) is 'officially' an NN.

  Thank you
    David

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (11/17/90)

In article <2491@bimacs.BITNET> guedalia@bimacs.BITNET (David Guedalia) writes:
>Hi,
>  Has anyone seen or heard a definition for a Neural Network.

Not really.  In the literature, and talking with researchers, you will
hear "neural network" associated with almost any kind of real-valued,
massively parallel computation used to perform AI tasks.

More formally, however, "neural network" is usually reserved for
real brain circuitry (as opposed to "artificial neural network",
reserved for massively parallel networks which people think up and
program/build).  The term for the field of working with
artificial neural networks is "connectionism,"
which makes one envision interesting computation done by
interconnected elements.

Kohonen maps can fall into the "connectionist" label fairly well.

The moral of the story here is that there are no hard-and-fast
definitions of these terms, which is totally apt for the "fuzzy"
and fault-tolerant nature of neural nets.

-Thomas Edwards
 

pako@lut.fi (Pasi Koikkalainen) (11/18/90)

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) writes:
>>  Has anyone seen or heard a definition for a Neural Network.

>Not really.  In the literature, and talking with researchers, you will

Well, I have been able to find a multitude of definitions for NNs.
These definitions, however, do not really specify what Artificial
Neural Networks are (or are not).  As a result almost anything can
be presented as a neural network (and also has been presented).
The most popular ANNs seem to have some common features:
- massively parallel
- no external control mechanism (PEs communicate only via message lines)
- adaptation, learning rules
- simple paradigms
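
A minimal sketch (in Python, with names of my own choosing) of what a
processing element with those features might look like: a unit that
computes a weighted sum of its message lines and adapts by a purely
local rule, with no external controller.

```python
import random

random.seed(0)  # deterministic initial weights for the example

class ProcessingElement:
    """One simple PE: a thresholded weighted sum of its inputs,
    adapted by a local (delta) rule -- no global control mechanism."""

    def __init__(self, n_inputs, rate=0.1):
        self.weights = [random.uniform(-0.1, 0.1) for _ in range(n_inputs)]
        self.rate = rate

    def activate(self, inputs):
        s = sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if s > 0 else 0

    def adapt(self, inputs, target):
        # local learning rule: each weight changes using only the
        # signal on its own message line and the unit's own error
        error = target - self.activate(inputs)
        self.weights = [w + self.rate * error * x
                        for w, x in zip(self.weights, inputs)]

# teach the unit the logical AND of two inputs (third line is a bias)
pe = ProcessingElement(3)
data = [([x, y, 1], x & y) for x in (0, 1) for y in (0, 1)]
for _ in range(200):
    for inputs, target in data:
        pe.adapt(inputs, target)
```

After training, the unit classifies all four input patterns correctly;
the point is only that adaptation is driven entirely by locally
available information.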

>Kohonen maps can fall into the "connectionist" label fairly well.

One must remember that there are two definitions of what "connectionist"
networks are.  The definition here was that they are ANNs.  The other
definition is that they are higher-level decision-making networks,
similar to semantic networks in AI, introduced by Feldman and in the
PDP book; a subset of ANNs.  Kohonen maps are lower-level ANNs in that
sense, but they are certainly typical Artificial Neural Networks too.
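
For concreteness, a toy sketch (Python, all names hypothetical) of the
Kohonen feature-map adaptation rule that makes these maps such typical
ANNs: find the winning unit and pull it and its map neighbours toward
the input, with a shrinking learning rate and neighbourhood.

```python
import math
import random

random.seed(1)

# a 1-D Kohonen map: ten units, each with a 2-D weight vector
units = [[random.random(), random.random()] for _ in range(10)]

def train_step(x, t, n_steps):
    # decaying learning rate and neighbourhood width
    alpha = 0.5 * (1 - t / n_steps)
    sigma = 3.0 * (1 - t / n_steps) + 0.5
    # winner: the unit whose weight vector is closest to the input
    win = min(range(len(units)),
              key=lambda i: sum((w - v) ** 2
                                for w, v in zip(units[i], x)))
    # pull the winner and its map neighbours toward the input
    for i, w in enumerate(units):
        h = math.exp(-((i - win) ** 2) / (2 * sigma ** 2))
        units[i] = [wi + alpha * h * (xi - wi)
                    for wi, xi in zip(w, x)]

n_steps = 2000
for t in range(n_steps):
    train_step([random.random(), random.random()], t, n_steps)
```

After training, neighbouring units on the map tend to hold similar
weight vectors, which is the "feature map" property.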

Just wait a few more weeks and I will publish my lic. thesis on this
subject; you will be able to get a copy as a PostScript file via ftp.
-- 
 * ---------------------------------------------------------------------
 *  Pasi Koikkalainen 
 *  Lappeenranta University of Technology
 *  P.O.Box 20, 53851 LPR, Finland

slehar@thalamus.bu.edu (Steve Lehar) (11/19/90)

Let me try my hand at this one...

NEURAL NETWORKS
===============
A neural network is a computational model that is inspired by
observation of natural computational mechanisms.  Natural
architectures are fundamentally different from conventional
architectures in that they tend to represent information in a
distributed way, and to perform computation in a parallel analog
manner that seems to be more fault tolerant and robust if the input
information is somewhat ambiguous.  Neural approaches work best in
applications where traditional computation has performed poorly,
usually because the data is ambiguous or the context has a large
influence on the data, such as vision, speech and cognition.  They
generally perform poorly in realms where computers perform well,
usually because the data is deterministic, clearly defined and well
understood, such as word processing, spreadsheets and arithmetic
computation.

SPECIAL NOTE to the "there's no such thing as neural networks" folks:
============
Since most neural networks are implemented by computer simulation
(except the real ones, that is), there is of course some overlap
between "neural" and "non-neural" models; very simple neural systems
are similar to non-neural equations, and very large networks of
conventional systems are sometimes similar to very small neural
networks.  The difference is really in the inspiration of the model:
neural models tend towards simple computational units and lots of
them, whereas conventional architectures have much more complicated
units and fewer of them.  The need for the separate term "neural" is
that this approach has been so counterintuitive that it took a couple
of decades of bashing our heads against certain insoluble problems to
realize that the way it is done in the brain is very different from
the way it is done in sequential computers.  The reason conventional
computers were initially so popular is that theirs is a more obvious,
predictable and direct road to the solution (IF this AND that THEN
the other) than the neural way (IF some of these AND some of those AND
I'm in the right mood THEN perhaps a bit of the other).

There are those who claim that so-and-so's neural model is nothing
more than such-and-such a mathematical procedure, and therefore neural
networks don't exist.  To those I say: since the geometrical solution
to a problem can also be found algebraically, therefore geometry (or
algebra, take your pick) doesn't exist.  Many mathematical problems
can be solved in a variety of ways, which can be shown ultimately to
be equivalent.  It just happens that certain classes of problems are
more easily solved with one technique than another, so it is important
to match your mathematical tools to the nature of your problem to get
the most results for the least effort.
--
(O)((O))(((O)))((((O))))(((((O)))))(((((O)))))((((O))))(((O)))((O))(O)
(O)((O))(((               slehar@park.bu.edu               )))((O))(O)
(O)((O))(((    Steve Lehar Boston University Boston MA     )))((O))(O)
(O)((O))(((    (617) 424-7035 (H)   (617) 353-6741 (W)     )))((O))(O)
(O)((O))(((O)))((((O))))(((((O)))))(((((O)))))((((O))))(((O)))((O))(O)

pluto@babymilo.ucsd.edu (Mark Plutowski) (11/20/90)

slehar@thalamus.bu.edu (Steve Lehar) writes:

>Let me try my hand at this one...

>NEURAL NETWORKS
>===============
>A neural network is a computational model that is inspired by
>observation of natural computational mechanisms.  Natural
>architectures are fundamentally different from conventional
>architectures in that they tend to represent information in a
>distributed way, and to perform computation in a parallel analog
>manner that seems to be more fault tolerant and robust if the input
>information is somewhat ambiguous.  Neural approaches work best in
>applications where traditional computation has performed poorly,
>usually because the data is ambiguous or the context has a large
>influence on the data, such as vision, speech and cognition.  They
>generally perform poorly in realms where computers perform well,
>usually because the data is deterministic, clearly defined and well
>understood, such as word processing, spreadsheets and arithmetic
>computation.

I believe this is a decent characterization, and provides enough historical
background to motivate the definition, but let me try my hand at a definition:

"Recall the usual definition of a ``network'' as a connected graph of
computing elements (nodes) in which communication among nodes occurs
along arcs connecting the nodes (connections).

A ``neural'' network, by analogy with the biological namesake, 
is obtained by placing restrictions on the type of information allowed
to propagate along the connections, as well as upon the type of computation 
allowed within each node.  Each node is allowed to compute a 
mapping from a set of inputs to a scalar output value.  The instantaneous 
value of the activation propagated by a connection is allowed to be a scalar 
value." 
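
The definition above can be rendered directly in code.  A minimal
sketch (Python, names entirely my own): each node computes a mapping
from its inputs to one scalar, and each connection carries only a
scalar activation.

```python
import math

class Node:
    """A node computes a mapping from its input values to one scalar."""

    def __init__(self, fn=None):
        self.fn = fn          # the node's transfer function
        self.sources = []     # arcs: which nodes feed this one

    def value(self, cache):
        if self in cache:                  # input nodes are pre-loaded
            return cache[self]
        # gather one scalar per incoming connection
        ins = tuple(n.value(cache) for n in self.sources)
        cache[self] = self.fn(ins)         # propagate a single scalar
        return cache[self]

# a two-input, one-hidden-unit network with sigmoid transfer functions
sigmoid = lambda ins: 1.0 / (1.0 + math.exp(-sum(ins)))
a, b = Node(), Node()
hidden = Node(sigmoid)
hidden.sources = [a, b]
out = Node(sigmoid)
out.sources = [hidden]
result = out.value({a: 1.0, b: -1.0})
```

Nothing here constrains the connectivity, so the same skeleton admits
fan-out, global broadcast, or (with a little care) recurrence.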

Note that the inputs to a node
can be elements of any set, and so this definition does not preclude symbolic
input information, so long as the transfer function of the node is well-defined
over such a domain.  Usually, though, the inputs are assumed to be a vector
in a real space, since so many learning algorithms are derived analytically.
However, many learning algorithms are happy with inputs being elements of
a set, since after all the two-element input set {0, 1} can just as easily be
taken to be the set {off, on} by slight modification of the learning algorithm,
viz., by the appropriate use of propositions defined over the input set.
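
As one hypothetical illustration of a transfer function that is
well-defined over a symbolic input set yet still produces a scalar
output, consider a majority vote over the set {off, on}:

```python
# a node whose transfer function is defined over a symbolic domain;
# the inputs are symbols, but the output is still a single scalar
def majority_on(inputs):
    on_count = sum(x == "on" for x in inputs)
    return 1.0 if on_count > len(inputs) / 2 else 0.0

out = majority_on(("on", "on", "off"))
```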

The definition of what a connection is allowed 
to propagate does not preclude time-varying information, nor does it preclude
propagation of symbolic information encoded as a scalar value, say, such that 
the symbolic information can be appropriately decoded at the other end.
Also, the definition of network does not preclude global broadcasting of 
information, since we have placed no constraints upon the connectivity.
Nor does it preclude recurrent dynamics, since each node may retain a 
history of previous inputs internally, or be served by an external set 
of nodes whose purpose it is to retain historical information, say, 
by emulating a stack, queue, or even a time-averaged statistical summary.   
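
As one concrete (and hypothetical) example of a node that retains
historical information internally, here is a unit keeping a
time-averaged statistical summary of its past inputs:

```python
class AveragingNode:
    """A node with internal state: recurrence via a retained history."""

    def __init__(self, decay=0.9):
        self.decay = decay
        self.state = 0.0   # internal history, carried across time steps

    def step(self, x):
        # exponential moving average of the input sequence
        self.state = self.decay * self.state + (1 - self.decay) * x
        return self.state

node = AveragingNode()
outputs = [node.step(1.0) for _ in range(50)]
```

Fed a constant input of 1.0, the node's output climbs toward 1.0,
with its state summarizing the whole input history.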

It is important to define a neural network as a computational model, as
Steve Lehar did above, since a neural network is not defined by its
implementation, but as an abstract characterization of a computation,
whether realized in hardware, software, or, according to some folks, wetware.

Improvements to my definition are welcome; it may not be
sufficiently general to encapsulate everyone's idea of a neural network.
But it certainly encompasses my own understanding of the term, given
the nets I've seen reported on in the literature.

That's my opinion, what's yours?

(Qualification: some "connectionists" may maintain that this definition is 
strictly contained within their definition of a "connectionist architecture." 
I would not argue with that viewpoint.)

-=-=
M.E. Plutowski,  pluto%cs@ucsd.edu 

UCSD,  Computer Science and Engineering 0114
9500 Gilman Drive
La Jolla, California 92093-0114