[net.math] Neural Net Computing

norman@lasspvax.UUCP (Norman Ramsey) (11/12/85)

I recently heard a talk given here on Hopfield memories and neural network
devices. The work I heard about is being done at Bell Labs by Larry
Jaeckel's group. The idea is fairly simple: you take N "neurons", connect
each to all the others, and let the firing rate of a given neuron depend on
the stimuli on its inputs, which can be excitatory or inhibitory. Jaeckel's
people are using op amps with resistors and capacitors, where voltage is the
quantity analogous to firing rate, and conductance is analogous to the
strength (transmissivity) of a synapse. Apparently they have been able to
make an associative memory out of these gadgets, and have also taken a good
crack at the traveling salesman problem (by letting the device minimize
energy).
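
For anyone who wants something concrete to poke at, here is a minimal
software sketch of such a memory. This is my own reconstruction of the
discrete Hopfield model (+/-1 neurons, symmetric outer-product connections,
asynchronous threshold updates), not code from Jaeckel's group; the analog
op-amp circuit replaces the discrete updates with continuous RC dynamics
but settles into the same kind of minima.

import numpy as np

def store(patterns):
    """Hebbian outer-product rule: T[i,j] = sum over patterns of x[i]*x[j]."""
    patterns = np.asarray(patterns, dtype=float)   # each row is a +/-1 pattern
    T = patterns.T @ patterns
    np.fill_diagonal(T, 0.0)                       # no self-connections
    return T

def recall(T, probe, steps=200, rng=None):
    """Asynchronous sign-threshold updates; each update can only lower the energy."""
    rng = np.random.default_rng() if rng is None else rng
    v = np.array(probe, dtype=float)
    for _ in range(steps):
        i = rng.integers(len(v))                   # pick one neuron at random
        v[i] = 1.0 if T[i] @ v >= 0.0 else -1.0    # fire iff input reaches threshold (0 here)
    return v

def energy(T, v):
    """Hopfield energy E = -0.5 * v @ T @ v, nonincreasing under the updates above."""
    return -0.5 * v @ T @ v

# Usage: store two 8-neuron patterns, then recall from a 2-bit-corrupted probe.
p1 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
p2 = np.array([1, 1, 1, 1, -1, -1, -1, -1])
T = store([p1, p2])
probe = p1.copy()
probe[0], probe[3] = -probe[0], -probe[3]          # flip two bits
print(recall(T, probe))                            # typically converges back to p1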

Does anyone know more about the mathematics of these things? How many
patterns can be stored in such an associative memory? What are expected
error rates like? What are the possibilities for programming or calculating
with these devices?
-- 
Norman Ramsey

ARPA: norman@lasspvax  -- or --  norman%lasspvax@cu-arpa.cs.cornell.edu
UUCP: {ihnp4,allegra,...}!cornell!lasspvax!norman

debray@sbcs.UUCP (Saumya Debray) (11/17/85)

> The work I heard about is being done at Bell Labs by Larry Jaeckel's
> group. The idea is fairly simple: you take N "neurons", connect each to
> all the others, and let the firing rate of a given neuron depend on
> the stimuli on its inputs, which can be excitatory or inhibitory. 

Sounds an awful lot like perceptrons (the model Minsky and Papert analyzed
at MIT around 1969).  But as far as I remember, their conclusion was that
the simple perceptron model wasn't very powerful at all.  Does anyone have
more details on
Jaeckel's work?
-- 
Saumya Debray
SUNY at Stony Brook

	uucp: {allegra, hocsd, philabs, ogcvax}!sbcs!debray
	arpa: debray%suny-sb.csnet@csnet-relay.arpa
	CSNet: debray@sbcs.csnet

bs@linus.UUCP (Robert D. Silverman) (11/18/85)

> > The work I heard about is being done at Bell Labs by Larry Jaeckel's
> > group. The idea is fairly simple: you take N "neurons", connect each to
> > all the others, and let the firing rate of a given neuron depend on
> > the stimuli on its inputs, which can be excitatory or inhibitory. 
> 
> Sounds an awful lot like perceptrons (the model Minsky and Papert analyzed
> at MIT around 1969).  But as far as I remember, their conclusion was that
> the simple perceptron model wasn't very powerful at all.  Does anyone have
> more details on Jaeckel's work?

The math content of this group has become low to negative recently. Can
we please have less of this sophistry and more math??? Most of the discussion
about Turing machines vs. humans belongs in net.ai or net.philosophy. Also,
much of the content of these discussions leaves me wondering whether their
writers possess any natural intelligence. It certainly sounds as if many
people simply like to shoot their mouths off concerning a subject which
they haven't studied. This is not a newsgroup for speculation.

Now for an interesting variational problem:

A particle travels along a continuous path from (-a,a) to (a,a) such that
the magnitude of its velocity at any given point on that path is k/s where
s is its current distance from the origin. [k and a are fixed positive constants.]

Question: What is the path for minimal travel time?  Don't just present
the Euler equation. You must solve it as well.
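
For the impatient, a sketch of one line of attack (mine, and worth
checking): with speed v = k/s, the travel time along a path \gamma is

	T[\gamma] = \int_\gamma \frac{dl}{v} = \frac{1}{k} \int_\gamma s \, dl .

Writing points as complex numbers z, note that s\,|dz| = |z|\,|dz| =
|d(z^2/2)|, so kT is the ordinary Euclidean length of the image of the path
under the map w = z^2/2. The endpoints \mp a + ia map to \mp i a^2, so the
shortest image is the straight segment between them, whose preimage is the
pair of line segments through the origin; this gives T_{min} = 2a^2/k.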
 
Bob Silverman   (they call me Mr. 9)

norman@lasspvax.UUCP (Norman Ramsey) (11/23/85)

In article <55@linus.UUCP> bs@linus.UUCP (Robert D. Silverman) writes:
>The math content of this group has become low to negative recently. Can
>we please have less of this sophistry and more math??? Most of the discussion
>about Turing machines vs. humans belongs in net.ai or net.philosophy. Also,
>much of the content of these discussions leaves me wondering whether their
>writers possess any natural intelligence. It certainly sounds as if many
>people simply like to shoot their mouths off concerning a subject which
>they haven't studied. This is not a newsgroup for speculation.


I'm sorry that Mr. Silverman is unhappy about the low math content of
net.math. I invite him to send his flames elsewhere. If he had read my
original posting carefully, he would have seen that the descriptive material
was background for asking the question, does anyone know anything about the
*mathematics* (there's that word again) of these things...

To be more specific, does anyone out there in net land understand:

  (1) What is the family of functions minimized by the neural computing
algorithm? Hopfield's model is nonlinear; a neuron turns on when its inputs
reach a certain threshold, so depending on whether you pick a single
threshold or one for each neuron you get a one- or an N-parameter family.
(An energy function of the relevant form is sketched after this list.)

  (2) What are the fixed points and basins of attraction of the neural
computing algorithm? In particular, how can altering the firing thresholds
change (a) the number or position of stable fixed points (metastable states,
local minima, pick your own jargon) and (b) the size (and shape?) of the
basins of attraction?

  (3) How does the mathematics of the device change when the range of
possible values for the transfer matrix (neural interconnections) is
restricted?
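
For reference (standard material, though the notation is mine), the function
the discrete model decreases is the energy

	E = -\frac{1}{2} \sum_{i \neq j} T_{ij} V_i V_j + \sum_i \theta_i V_i ,

where T_{ij} = T_{ji} are the interconnection strengths, V_i \in \{0,1\} the
neuron states, and \theta_i the firing thresholds. Each asynchronous update
(set V_i = 1 iff \sum_j T_{ij} V_j \geq \theta_i) can only decrease E, so
the dynamics halt at local minima. The thresholds enter E linearly, which is
where the one- vs. N-parameter family in (1) comes from, and moving them
shifts which states are minima, which bears directly on (2).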

Hopfield addresses some of these questions briefly in his paper in Proc.
Natl. Acad. Sci. USA 79 (1982). He also has some nice little estimates of
error rates and such (roughly 0.15N patterns stored per N neurons before
recall degrades badly).
He doesn't give much discussion to the issue which really interests me,
which is control over the basins of attraction. It's a very nice paper
though.

I can't compare this to perceptrons, since I know nothing about them, but
I'm told by those who claim to know that the perceptron model is different.
-- 
Norman Ramsey

ARPA: norman@lasspvax  -- or --  norman%lasspvax@cu-arpa.cs.cornell.edu
UUCP: {ihnp4,allegra,...}!cornell!lasspvax!norman

grunwald@uiucdcsb.CS.UIUC.EDU (11/26/85)

Your description sounds similar to 'threshold functions' and threshold gates.

For more information, see pages 258-260 of 'Logic Design and Switching
Theory' by Muroga. Other references given in that book include:

	Muroga, S., Threshold Logic and its Applications, John Wiley, 1971
	Winder, R.O., Threshold Logic, PhD thesis, Princeton Univ., 1962
	Winder, R.O., 'The fundamentals of threshold logic', in 'Applied
		Automata Theory', edited by J. Tou, Academic Press, 1968
	Winder, R.O., 'Threshold logic will ...', Electronics, May 27, 1968
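
To make the connection concrete, here is a toy example (mine, not
Muroga's): a single threshold gate outputs 1 exactly when a weighted sum of
its inputs reaches a threshold, so it computes precisely the linearly
separable Boolean functions; majority is one, exclusive-or is not, which is
essentially the perceptron limitation mentioned earlier in this thread.

def threshold_gate(weights, t):
    """Return a gate computing 1 iff sum_i weights[i]*x[i] >= t."""
    return lambda xs: int(sum(w * x for w, x in zip(weights, xs)) >= t)

majority3 = threshold_gate([1, 1, 1], 2)     # 3-input majority: at least two 1s
print([majority3(x) for x in [(0, 0, 1), (0, 1, 1), (1, 1, 1)]])   # [0, 1, 1]

# No single gate computes XOR: it would need w1 >= t and w2 >= t (inputs 10
# and 01 map to 1) yet t > 0 and w1 + w2 < t (inputs 00 and 11 map to 0),
# so w1 + w2 >= 2t > t, a contradiction; this is the linear-separability
# limitation.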

dirk grunwald
univ. illinois