[comp.ai.neural-nets] Neuron Digest V4 #3

neuron-request%hplabs@HPLABS.HP.COM (Neuron-Digest Moderator Peter Marvit) (09/19/88)

Neuron Digest	Sunday, 18 Sep 1988
		Volume 4 : Issue 3

Today's Topics:
			       Administrivia
		  Request for info on neural net HARDWARE
		      refs. for stochastic relaxation
		    Re: refs. for stochastic relaxation
			      Music and Nets
			    Re: Music and Nets
			 Neural Network Literature
		       Re: Neural Network Literature
		   Who is getting the money from DARPA?
		 Re: Who is getting the money from DARPA?
		 Re: Who is getting the money from DARPA?
		 Re: Who is getting the money from DARPA?
		       Has anyone heard of ALOPEX?
			   why network analysis?
		     Brain simulation reference wanted
		    Mac enhancements of PDP Handbook SW


Submissions, questions, mailing list maintenance to
        "Neuron-request@hplabs.hp.com"
------------------------------------------------------------

[[ For a time, I will be glad to mail back issues of the Neuron-Digest.
Send the request, with issues desired, to neuron-request@hplabs.hp.com.
Eventually (soon, I hope), there will be some sort of regularly available
archives. -PM]]

------------------------------------------------------------

Subject: Request for info on neural net HARDWARE
From:    ark@ritcv.UUCP (Alan Kaminsky)
Date:    Wed, 03 Aug 88 14:07:06 +0000 

The August 8 issue of Time magazine had a one-page article on neural networks
(page 59).  Reporting on a recent neural networks conference in San Diego,
the article stated:

	"Neural networks come in all shapes and sizes.  Until now,
	most existed as software simulations because redesigning
	computer chips took a lot of time and money.  By experi-
	menting with different approaches through software rather
	than hardware, scientists have been able to avoid costly
	mistakes.  At last week's convention in San Diego, several
	firms introduced the real thing: chips that are actually
	wired to mimic the nerves in the brain."

Can anyone give me any information, pointers to literature, etc. for the
neural network chips mentioned above, or for any existing computer hardware
(experimental or commercial) that implements neural networks?

Please respond to me by e-mail and I will post a summary.  Thank you.
- -- 
Alan Kaminsky                           P. O. Box 9887
School of Computer Science              Rochester, NY  14623
Rochester Institute of Technology       716-475-5255
ark@cs.rit.edu

[[ ICNN last August had quite a few exhibitors, and I imagine the just-past
Boston meeting did too.  If someone who went to either wants to type in the
list... -PM]]

------------------------------

Subject: refs. for stochastic relaxation
From:    rmdubash@sun2.cs.uh.edu
Date:    Fri, 05 Aug 88 18:38:42 +0000 

I am currently working on stochastic relaxation and relaxation algorithms for 
finely grained  parallel  architectures.  In particular, I am  studying their 
implementation on neural and connectionist models, with emphasis on  inherent
fault tolerance property of such implementations.

I would be grateful if any of you could provide me with pointers, references,
etc. on this (or a related) topic.

Thanks.
Rumi Dubash, Computer Science, Univ. of Houston,
Internet : rmdubash@sun2.cs.uh.edu
U.S.Mail : R.M.Dubash, Computer Science Dept., Univ. of Houston, 

------------------------------

Subject: Re: refs. for stochastic relaxation
From:    manj@brand.usc.edu (B. S. Manjunath)
Date:    Sat, 06 Aug 88 18:02:44 +0000 

In article <824@uhnix1.uh.edu> rmdubash@sun2.cs.uh.edu () writes:
>I am currently working on stochastic relaxation and relaxation algorithms for 
>finely grained  parallel  architectures.  In particular, I am  studying their 
>implementation on neural and connectionist models, with emphasis on  inherent
>fault tolerance property of such implementations.
>
>I will be grateful if any of you can provide me with pointers, references etc.
>on this ( or related ) topics.

>Rumi Dubash, Computer Science, Univ. of Houston,

 Geman and Geman (1984) is an excellent paper to start with.  It also
contains a lot of references.  The paper mainly deals with Markov random
fields and their applications to image processing.

S. Geman and D. Geman, "Stochastic relaxation, Gibbs distributions, and the
Bayesian restoration of images," IEEE Transactions on Pattern Analysis and
Machine Intelligence, PAMI-6, Nov. 1984, pp. 721-742.

Another reference that I feel might be useful is J. L. Marroquin's Ph.D.
thesis, "Probabilistic Solution of Inverse Problems," M.I.T., 1985.
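For readers who want a concrete feel for the technique, here is a toy sketch
of stochastic relaxation (Gibbs sampling) on a binary Markov random field,
in the image-restoration spirit of the Geman & Geman paper.  It is
illustrative only: the coupling constants `beta` and `eta`, the 4-neighbor
lattice, and a fixed temperature of 1 are my simplifying assumptions, not
the paper's exact formulation.

```python
import math
import random

def gibbs_denoise(noisy, beta=1.5, eta=2.0, sweeps=20, seed=0):
    """Stochastic relaxation (Gibbs sampling) for a binary MRF.

    noisy: 2-D list of +1/-1 pixels.  beta couples each pixel to its
    4-neighbors; eta couples it to its noisy observation.  Each sweep
    resamples every pixel from its local conditional distribution.
    """
    rng = random.Random(seed)
    h, w = len(noisy), len(noisy[0])
    x = [row[:] for row in noisy]          # start from the data
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                # local field from the 4-neighbors plus the observation
                field = eta * noisy[i][j]
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        field += beta * x[ni][nj]
                # conditional P(x_ij = +1 | neighbors, observation)
                p = 1.0 / (1.0 + math.exp(-2.0 * field))
                x[i][j] = 1 if rng.random() < p else -1
    return x
```

The paper layers a temperature schedule (simulated annealing) on top of this
sampler to reach a MAP estimate; the sketch omits that for brevity.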

bs manjunath.

------------------------------

Subject: Music and Nets
From:    eliot@phoenix.Princeton.EDU (Eliot Handelman)
Date:    Mon, 08 Aug 88 18:44:00 +0000 


I would like to hear from anyone who is interested in musical
aspects of connectionist simulations, either compositionally
or theoretically. Any pointers to papers would also be greatly
appreciated.

Thanks,

Eliot Handelman
Dept of Music
Princeton University

eliot@phoenix.princeton.pucc
OR
eliot@winnie.princeton.pucc

[[ I heard that someone (Hinton?) demonstrated the production of
Baroque-style fragments or phrases using a new architecture.  I, too, would
be interested in this. -PM]]

------------------------------

Subject: Re: Music and Nets
From:    sandell@batcomputer.tn.cornell.edu (Gregory Sandell)
Date:    Mon, 15 Aug 88 23:01:46 +0000 

In article <3434@phoenix.Princeton.EDU> eliot@phoenix.Princeton.EDU (Eliot
Handelman) writes:
>I would like to hear from anyone who is interested in musical
>aspects of connectionist simulations, either compositionally
>or theoretically. Any pointers to papers would also be greatly
>appreciated.

See the Fall 1987 issue (vol. 5, no. 1) of MUSIC PERCEPTION.  Jamshed
Bharucha has an article on connectionism and traditional (tonal)
harmony.

Greg Sandell

------------------------------

Subject: Neural Network Literature
From:    rravula@wright.EDU (R. Ravula)
Date:    Mon, 08 Aug 88 21:55:23 +0000 


          I have noticed a lot of people asking for relevant neural
network literature. There is, in fact, an annotated bibliography on
neuro-computing. It is titled "The 1987 Annotated Neuro-Computing
Bibliography: Everything You Wanted to Know About Neuro-Computing, But
Didn't Know Who to Ask!". It was edited by Casimir C. "Casey" Klimasauskas,
founder of NeuralWare Inc. The address is NeuroConnection, P.O. Box 206,
Sewickley, Pennsylvania 15143. I do not know the price of the book, but it
provides hundreds of references to the neural network literature.

[[ In addition to the informal list in the last issue of Neuron-Digest (Vol
4 #2), there are a number of on-line databases available... for a price.
If someone else wants to "advertise" these databases, such as in the following
message, send it to "neuron-request." -PM]]

------------------------------

Subject: Re: Neural Network Literature
From:    rich@bunny.UUCP (Rich Sutton)
Date:    Fri, 19 Aug 88 20:54:13 +0000 

Oliver Selfridge, Chuck Anderson and I have also written an annotated
bibliography on connectionism.  Ours is more selective, containing only 38
entries from about as many years.  It is also available free of charge.

To have a copy mailed to you, reply to this message with your
USMAIL (!) address, or send a request to: 
    Mary Anne Fox
    GTE Labs  MS-44
    Waltham, MA  02254
    mf01@gte.com

The bibliography will be published in the review volume EVOLUTION, LEARNING, 
AND COGNITION, edited by Lee, Y.C., World Scientific Publishing.

------------------------------

Subject: Who is getting the money from DARPA?
From:    sbrunnoc@hawk.ulowell.edu (Sean Brunnock)
Date:    Wed, 10 Aug 88 20:39:30 +0000 

 Last week, I read an article that DARPA is going to pour $390 million
into basic neural network research. They are shooting for a NN with an
iq equivalent to a sea slug the first year and a honey bee in five years.

 According to the same article, DARPA has slated 10 billion dollars for
1988 peaking in 1991 with ~100 million.

 My question is, who is going to be getting this money? I would guess that
CMU would be getting a lot since Miller and Sejnowski have done much work
trying to model the neural systems of invertebrates, as far as my memory
serves me.

 Offhand, I cannot remember the name of the article, it was on the front
page of one of the dozens of general computer trade magazines.

			    Sean Brunnock

------------------------------

Subject: Re: Who is getting the money from DARPA?
From:    gbn474@leah.Albany.Edu (Gregory Newby)
Date:    Thu, 11 Aug 88 21:23:44 +0000 

In article <8532@swan.ulowell.edu>, sbrunnoc@hawk.ulowell.edu (Sean Brunnock) 
writes:
> 
>  Last week, I read an article that DARPA is going to pour $390 million
> into basic neural network research. They are shooting for a NN with an
> iq equivalent to a sea slug the first year and a honey bee in five years.
> 

which article, please?

is the dimensionality of the network stated?

    (who knows how iqs of non-sapient (I would presume) beings are
     measured, anyway?  I suspect that 'iq' really refers to the number
     of neurons, eh?)

Thanks.
- --newbs
  (
   gbn474@leah.albany.edu      |
   gbn474@uacsc1.albany.edu    | this space for rent
   gbn474@albny1vx.bitnet      |
  )

------------------------------

Subject: Re: Who is getting the money from DARPA?
From:    brucet@tc.fluke.COM (Bruce Twito)
Date:    Fri, 12 Aug 88 16:47:59 +0000 

In article <8532@swan.ulowell.edu> sbrunnoc@hawk.ulowell.edu (Sean Brunnock) writes:
>
> Last week, I read an article that DARPA is going to pour $390 million
>into basic neural network research. They are shooting for a NN with an
>iq equivalent to a sea slug the first year and a honey bee in five years.
>
> According to the same article, DARPA has slated 10 billion dollars for
>1988 peaking in 1991 with ~100 million.
>
Actually, this 390M funding is *proposed*.  This, of course, means that IF
the funding is approved, it will not be until after the next election.

They are apparently diverting small funds (10M) to the program this year.

DARPA's Jasper Lupo was quoted as saying, "I believe the technology we are
about to embark upon is more important than the atom bomb."  To put this
in perspective, he was also quoted as saying, "The future of machine
intelligence is not AI."

I'm glad HE has a good working definition of AI.  I sure wish he'd enlighten
the net. (Please, no AI definition wars in THIS newsgroup.)

> My question is, who is going to be getting this money? I would guess that
>CMU would be getting a lot since Miller and Sejnowski have done much work
>trying to model the neural systems of invertebrates, as far as my memory
>serves me.

The talk about modelling neural systems of animals is NOT what DARPA is most
concerned with.  I think that was the magazine's way of bringing the
technology to the public.  The true designs of DARPA were exposed later in
the article.  They include:

  - Strategic relocatable target detection from satellite optical and infrared
    sensors.
  - Quiet submarine detection from sonar arrays.
  - Electronic intelligence target identification from radar pulse trains.
  - Battlefield radar surveillance with synthetic aperture radar.

The other plans they listed would require more complex NN systems.

With such an aggressive campaign for complex neural systems, I think CMU may
not see ALL of the funding for the project :-).

Funding will be distributed among:

  - Interdisciplinary research with the biosciences.  (Sejnowski and Co. may
    get a bit of this.)
  - Theoretical development.
  - Advanced simulators.
  - Device technologies.
  - Generic applications.  (Vision, speech, etc.)
  - Databases and benchmarks.

So... research money goes to CMU, MIT, Stanford, UCB, UCSD, and ???
Applied sciences money goes to the National Labs.
Device money goes to device manufacturers.

>
> Offhand, I cannot remember the name of the article, it was on the front
>page of one of the dozens of general computer trade magazines.
>
>			    Sean Brunnock

Electronic Engineering Times, August 8, 1988.

Good magazine but you gotta read the text after the 'Continued on page xx'
to get the unglorified truth :-).

- ---------------------------------------------------------------------------
Bruce Twito                  (206)356-5369        John Fluke Mfg. Co., Inc.
brucet@tc.fluke.com                        P.O. Box C9090 Everett, WA 98206
- ---------------------------------------------------------------------------

------------------------------

Subject: Re: Who is getting the money from DARPA?
From:    floyd@smoke.ARPA (Floyd C. Wofford)
Date:    Fri, 12 Aug 88 17:46:28 +0000 

In article <930@leah.Albany.Edu> gbn474@leah.Albany.Edu (Gregory Newby) writes:
>In article <8532@swan.ulowell.edu>, sbrunnoc@hawk.ulowell.edu (Sean Brunnock) 
>writes:
>> 
>>  Last week, I read an article that DARPA is going to pour $390 million
>> into basic neural network research. They are shooting for a NN with an
>> iq equivalent to a sea slug the first year and a honey bee in five years.
>> 
>
>which article, please?

  The original posting said the news appeared in several sources.  The source
I read was the August 1 issue of Federal Computer Week.  Before you go spending
the $400 million, remember that this was only a call for funding.  As
far as I can determine from the article, this is a gedanken program.  As this
is an election year, it may remain in a vaporous state until at least a while
after the dust of November settles.
  The article said that seventy-two experts (including three Nobel Laureates)
spent $700,000 to advise the government to spend $400,000,000 more.  Your
favorite advisor will probably get a slice of the pie.
  Three names were mentioned:

     Jasper Lupo - DARPA Tactical Technology Office - sponsor of the study
     Walter Morrow Jr. - MIT Lincoln Laboratory - Chair of the Steering Comm.
     Bernard Widrow - Stanford - Director of the study

  The article said that DARPA has not decided upon the funding level.  I
say stay tuned for the next administration.

______________________________________________________________________________

As an aside, the July 11 issue of Information Week has a short article
on how the top level management in some of the Massachusetts computer
firms view the governor's attitude towards their industry.  This may be
insignificant, but since the governor has higher aspirations his policies
here could be a harbinger of future attitudes in the computing industry.
This could have a definite effect on research funding in general and
neural network funding specifically.  Should you be in graduate school at
this time or anticipate a future graduate degree this may interest you.
I have no information on the vice president's views.  These may also
be of interest.

Since the prospect of funding (as much as $400M) has been raised, this topic
is germane.  I would hate to see a high-entropy digression ensue.
This newsgroup usually maintains a reasonably high signal-to-noise ratio.

floyd@smoke.brl.mil

------------------------------

Subject: Has anyone heard of ALOPEX?
From:    chrisley@arisia.Xerox.COM (Ronald Chrisley)
Date:    Fri, 19 Aug 88 20:07:35 +0000 

I would appreciate it if anyone could give me information about ALOPEX
(Greek for "fox"), a neural net program released by a publisher of the
same name.  A friend of mine would like details, including things like when
it was released, etc.

- -- Ron Chrisley

------------------------------

Subject: why network analysis?
From:    doug@feedme.UUCP (Doug Salot)
Date:    Sat, 20 Aug 88 23:30:25 +0000 

I almost fired off a babbling reply to an article in comp.ai.digest
until I noticed that Marvin Minsky was the author.  Given that I
can't dance well enough to tango with such a partner and that the
volume in this group is dismally low, here's some shark bait:

>>From: MINSKY@AI.AI.MIT.EDU (Marvin Minsky)
>Newsgroups: comp.ai.digest
>Subject: AIList Digest   V8 #46
>Message-ID: <19880820041343.1.NICK@HOWARD-JOHNSONS.LCS.MIT.EDU>
>
[...]
>
>Next, in #46, we see the discussion of what smoothing functions to use
>for making neural nets learn by estimating derivatives and using
>hill-climbing.  The irony lies in how that discussion ignores that
>very same knowledge/generality issue.  Specifically, hill-climbing is
>a weak general method to use when there is little knowledge.  But even
>a little knowledge should then make a large difference.  We ought
>usually to be able to guess when a solution to an unknown pattern
>recognition problem will require a neural net that has large numbers
>of connections with small coefficients - or when the answer lies in
>more localized solutions with fewer numbers of larger coefficients -
>that is, in effect, the problem of finding tricky combinational
>circuits.  Let's see more sophisticated arguments and experiments to
>see which problem domains benefit from which types of quasilinear
>threshold functions, rather than proposing this or that function
>without any analysis at all of when it will have an advantage.

I don't believe that humans are smart enough to do the sort of
analysis that Minsky suggests is necessary.  Network dynamics are
reasonably complicated, and to suggest that we need to understand
them well enough to make intelligent architectural decisions
may be asking too much.

As one who doesn't have enough of a mathematical background to
even consider such a rigorous approach, I like the idea suggested
by Rummelhart: modify the delta function in order to minimize
complexity (number of connections) as well as error.  This idea can
probably be generalized to let the machine slowly and dumbly find a
good network in terms of all of its dimensions rather than just in
terms of the weight space.
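A minimal sketch of that suggestion, for concreteness: a single linear unit
trained by the delta rule, with a simple weight-decay penalty standing in
for the complexity term.  The learning rate and decay constants here are
illustrative choices, not anything from Rumelhart's formulation.

```python
def delta_rule_step(w, x, target, lr=0.1, decay=0.01):
    """One delta-rule update for a single linear unit, plus a
    weight-decay term that nudges every weight toward zero, so the
    net is pushed toward low-complexity (small/sparse) solutions.

    w and x are equal-length lists of floats; returns the new weights.
    """
    y = sum(wi * xi for wi, xi in zip(w, x))   # unit output
    err = target - y
    # gradient step on squared error, minus decay toward zero
    return [wi + lr * err * xi - lr * decay * wi for wi, xi in zip(w, x)]
```

With decay > 0 the weights settle slightly short of the exact least-squares
solution; the residual bias is the price paid for the complexity penalty.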

Has anyone taken a genetic algorithm approach to neural architectural
design?  Something like making random modifications to the activation,
summation, and weight modification procedures and keeping the changes
that reduce learning time?
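Such a loop might look like the keep-if-better sketch below.  The `cost`
function here is a stand-in quadratic; in a real experiment it would train a
network with the candidate settings and report its learning time.  This is a
hypothetical illustration of the mutate-and-keep idea, not anyone's
published method.

```python
import random

def evolve(cost, genome, mutate, generations=100, seed=1):
    """Keep-if-better evolutionary search: mutate the current design
    and keep the mutant only when it lowers the cost (e.g. the
    learning time of a net built from that genome)."""
    rng = random.Random(seed)
    best, best_cost = genome, cost(genome)
    for _ in range(generations):
        cand = mutate(best, rng)
        c = cost(cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost

# stand-in for "learning time of a net with these settings"
cost = lambda g: (g[0] - 3.0) ** 2 + (g[1] - 0.5) ** 2
# small Gaussian perturbation of each setting
mutate = lambda g, rng: [gi + rng.gauss(0, 0.3) for gi in g]
```

A full genetic algorithm would also keep a population and recombine genomes;
this single-parent version is the simplest member of that family.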

>More generally, let's see more learning from the past.

That's what the momentum term is for :-).
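For anyone new to the term: the momentum trick folds a decayed running sum
of past gradients into each weight update, so the search "remembers" where
it has been heading.  A minimal sketch, with illustrative constants for the
learning rate `lr` and momentum `mu`:

```python
def momentum_step(w, grad, velocity, lr=0.1, mu=0.9):
    """One gradient step with a momentum term.  velocity carries a
    decayed sum of past gradients; mu=0 recovers plain gradient
    descent.  Returns the new weights and the new velocity."""
    velocity = [mu * v - lr * g for v, g in zip(velocity, grad)]
    w = [wi + v for wi, v in zip(w, velocity)]
    return w, velocity
```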

- -- 
Doug Salot || doug@feedme.UUCP || ...{trwrb,hplabs}!felix!dhw68k!feedme!doug
                    "Thinking: The Thinking Man's Sport"

------------------------------

Subject: Brain simulation reference wanted
From:    dji@sbcs.sunysb.edu (the dirty vicar)
Date:    Mon, 29 Aug 88 20:28:00 +0000 

It was real tough trying to decide where to put this request!

I need one of three things.  Either a pointer to the author of the following
reference, or a pointer to where it may have been published outside of UMich,
or a suggestion as to how I can get a copy from UMich.  If you can provide
any of these, I would greatly appreciate it.

	Mortimer, James A.
	A Computer Simulated Model of Mammalian Cerebellar Cortex
	Tech Report, Computer and Communication Sciences Dept.
	University of Michigan, Ann Arbor
	June 1970

Thanks in advance.

- --
Dave Iannucci	SUNY at Stony Brook, Long Island, New York
*************	UUCP: {allegra, philabs, <arpa-gateway>}!sbcs.sunysb.edu!dji
* 115 days! *	BITNET: dji%sbcs.sunysb.edu@SBCCVM.BITNET
*************	Internet or CSnet: dji@sbcs.sunysb.edu


------------------------------

Subject: Mac enhancements of PDP Handbook SW
From:    JEFF SMITH <CS_JSMITH%uta.edu@RELAY.CS.NET>
Date:    Sun, 04 Sep 88 10:17:00 -0500 

Has anyone ported the software exercises of PDP "Explorations in Parallel
Distributed Processing" to the Mac? I've done the straight port, but if
someone has done windows, menus, plotting, etc. I would be very interested.

Jeff Smith
Please respond directly; I will post replies.

B609CSE@UTARLG

------------------------------

End of Neurons Digest
*********************