neuron-request@HPLABS.HP.COM (Neuron-Digest Moderator Peter Marvit) (09/28/88)
Neuron Digest Tuesday, 27 Sep 1988
Volume 4 : Issue 8
Today's Topics:
Neural Networks for Temporal Recognition
David Touretzky on connectionist vs. symbolic models, knowledge rep.
Neural Networks and Expert Systems
Stephen Hanson on backpropagation algorithm for neural nets
Re: temporal domain in vision
Re: temporal domain in vision
Call for papers: 6th Scandinavian Conference on Image Analysis
Send submissions, questions, mailing list maintenance requests, and
requests for back issues to "Neuron-request@hplabs.hp.com"
------------------------------------------------------------
Subject: Neural Networks for Temporal Recognition
From: bennett@srcsip.UUCP (Bonnie Bennett)
Date: 15 Sep 88 18:19:43 +0000
Problem: Recognizing temporal trends for Expert Systems that require
inputs like "X is increasing". Or, any info about Neural Nets and Expert
Systems together (again, at last).
Please send responses directly to me.
Thanks
Bonnie Bennett
(612) 782-7381
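One simple non-neural baseline for producing symbolic facts like "X is
increasing" from sampled data is a least-squares slope over a sliding
window; a minimal Python sketch (the window size and slope threshold are
illustrative choices, not from any cited work):

```python
def trend(samples, window=5, eps=0.01):
    """Classify the recent trend of a sampled signal as
    'increasing', 'decreasing', or 'steady' by fitting a
    least-squares slope to the last `window` samples."""
    xs = samples[-window:]
    n = len(xs)
    if n < 2:
        return "steady"
    mean_i = (n - 1) / 2.0                  # mean of indices 0..n-1
    mean_x = sum(xs) / n
    num = sum((i - mean_i) * (x - mean_x) for i, x in enumerate(xs))
    den = sum((i - mean_i) ** 2 for i in range(n))
    slope = num / den
    if slope > eps:
        return "increasing"
    if slope < -eps:
        return "decreasing"
    return "steady"
```

An expert system front end could poll `trend()` on each monitored
variable and assert the returned token as a fact.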
[[ See also the second message following. Amongst others, Jeff Elman (U.C.
at San Diego) has done some very interesting work in looking at how neural
nets can do serial processing (cf. "Finding Structure in Time" CRL Tech
Report 8801, April 1988). In a talk Stephen Hanson (see later message)
gave at Stanford last year, he also talked about "sequential associative
memories." -PM]]
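The simple recurrent ("Elman") architecture in that report gives the net
a memory of sequence by copying each step's hidden activations back in as
a "context" input on the next step. A minimal forward-pass sketch in
Python (weights are random and untrained; layer sizes are purely
illustrative):

```python
import math
import random

random.seed(0)

def elman_step(x, context, W_in, W_ctx, W_out):
    """One time step of a simple recurrent (Elman) network:
    hidden = sigmoid(W_in x + W_ctx context); the hidden vector
    is copied back to serve as the next step's context."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row_in, x)) +
                      sum(w * ci for w, ci in zip(row_ctx, context)))
              for row_in, row_ctx in zip(W_in, W_ctx)]
    output = [sigmoid(sum(w * h for w, h in zip(row, hidden)))
              for row in W_out]
    return output, hidden

# illustrative sizes: 3 inputs, 4 hidden/context units, 2 outputs
n_in, n_hid, n_out = 3, 4, 2
rand = lambda r, c: [[random.uniform(-1, 1) for _ in range(c)] for _ in range(r)]
W_in, W_ctx, W_out = rand(n_hid, n_in), rand(n_hid, n_hid), rand(n_out, n_hid)

context = [0.0] * n_hid                      # empty memory at t = 0
y = None
for x in ([1, 0, 0], [0, 1, 0], [0, 0, 1]):  # a short input sequence
    y, context = elman_step(x, context, W_in, W_ctx, W_out)
```

Because the context carries over between calls, the same input vector can
produce different outputs depending on what preceded it, which is the
point of the architecture.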
------------------------------
Subject: David Touretzky on connectionist vs. symbolic models, knowledge rep.
From: pratt@zztop.rutgers.edu (Lorien Y. Pratt)
Date: 16 Sep 88 13:54:05 +0000
Fall, 1988
Neural Networks Colloquium Series
at Rutgers
presents
David Touretzky
Carnegie-Mellon University
Room 705 Hill Center, Busch Campus
Friday September 23, 1988 at 11:10 am
Refreshments served before the talk
Abstract
My talk will explore the relationship between connectionist models and
symbolic models, and ask what sort of things we should expect from a
connectionist knowledge representation. In particular I'm interested
in certain natural language tasks, like prepositional phrase
attachment, which people do rapidly and unconsciously but which involve
complicated inferences and a huge amount of world knowledge.
--
-------------------------------------------------------------------
Lorien Y. Pratt Computer Science Department
pratt@paul.rutgers.edu Rutgers University
Busch Campus
(201) 932-4634 Piscataway, NJ 08854
[[ Editor's note: I include these regional talks because, although it's
unlikely that most readers will be able to attend, the subjects and
speakers are usually of general interest. You can then make a personal
effort to contact the speaker for further information. -PM]]
------------------------------
Subject: Neural Networks and Expert Systems
From: djlinse@phoenix.Princeton.EDU (Dennis Linse)
Date: 16 Sep 88 14:18:06 +0000
In article <8704@srcsip.UUCP> bennett@srcsip.UUCP (Bonnie Bennett) writes:
>Problem: Recognizing temporal trends for Expert Systems that require
>inputs like "X is increasing". Or, any info about Neural Nets and Expert
>Systems together (again, at last).
Just a note to plug a forthcoming paper related to this very subject. I
have in my hand a paper entitled
"Integration of Knowledge-Based System and Neural Network Techniques
for Robotic Control" by David A. Handelman, Stephen H. Lane, and Jack
J. Gelfand
to be presented at the IFAC Workshop on Artificial Intelligence in
Real-Time Control, in Swansea, UK, next week. It is hot off the
presses, and quite interesting. The basic idea is that the expert
system controls the robot (a two link arm) and teaches the net. The net
takes over until some change in the system requires more learning.
(The first author is a soon-to-be Princeton grad and the second author is
a recent grad. All three currently work for David Sarnoff Research
Center here in Princeton.)
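The handoff scheme described can be sketched as a supervisory loop: the
rule-based expert acts (and supplies training examples) until the net's
imitation error drops below a threshold, then the net takes over; a later
rise in tracking error would hand control back. Everything below -- the
proportional "expert", the one-parameter stand-in for the net, and the
thresholds -- is illustrative, not the controllers from the paper:

```python
class Expert:
    """Stand-in rule-based controller: a simple proportional rule."""
    def command(self, state):
        return -0.5 * state

class ImitationNet:
    """Toy one-parameter learner standing in for the neural net:
    fits command = k * state by averaging toward the expert's gain."""
    def __init__(self):
        self.k, self.err = 0.0, float("inf")
    def train(self, state, command):
        if state != 0:
            target = command / state
            self.k += 0.5 * (target - self.k)   # move halfway to target
            self.err = abs(target - self.k)     # remaining imitation error
    def predict(self, state):
        return self.k * state

LEARN_THRESHOLD = 0.05   # imitation error below this -> net takes over

expert, net = Expert(), ImitationNet()
use_net = False
for state in [1.0, 2.0, -1.0, 0.5, 1.5, 2.5]:
    if use_net:
        command = net.predict(state)            # net is in control
    else:
        command = expert.command(state)         # expert controls the arm
        net.train(state, command)               # ...and teaches the net
        use_net = net.err < LEARN_THRESHOLD
```

After a few supervised steps the toy net's gain approaches the expert's
and control switches over, mirroring the paper's basic idea of
learning-by-imitation with a supervisory fallback.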
Dennis
(djlinse@{phoenix,pucc}.princeton.edu or djlinse@pucc.bitnet)
------------------------------
Subject: Stephen Hanson on backpropagation algorithm for neural nets
From: pratt@zztop.rutgers.edu (Lorien Y. Pratt)
Date: Tue, 20 Sep 88 18:23:53 +0000
Fall, 1988
Neural Networks Colloquium Series
at Rutgers
Some comments and variations on back propagation
------------------------------------------------
Stephen Jose Hanson
Bellcore
Cognitive Science Lab, Princeton University
Room 705 Hill Center, Busch Campus
Friday September 30, 1988 at 11:10 am
Refreshments served before the talk
Abstract
Backpropagation is presently one of the most widely used learning
techniques in connectionist modeling. Its popularity, however, has
drawn many criticisms and concerns about its use and potential
misuse. There are four sorts of criticisms that one hears:
(1) it is a well known statistical technique
(least squares)
(2) it is ignorant <about the world in which it is
learning--thus design of i/o is critical>
(3) it is slow--(local minima; it's NP-complete)
(4) it is ad hoc--hidden units as "fairy dust"
I believe these four types of criticisms are based on fundamental
misunderstandings about the use and relation of learning methods to the
world, the relation of ontogeny to phylogeny, the relation of simple
neural models to neuroscience and the nature of "weak" learning
theories. I will discuss these issues in the context of some
variations on backpropagation.
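To make criticism (1) concrete: backpropagation is gradient descent on a
summed squared-error criterion; with no hidden layer it reduces to the
classical delta rule, i.e. iterative least squares through a sigmoid. A
minimal sketch of that degenerate case (a single sigmoid unit learning
logical OR; the learning rate and epoch count are illustrative):

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_unit(examples, epochs=3000, lr=0.5):
    """Gradient descent on E = 1/2 * sum (y - t)^2 for one sigmoid
    unit; with hidden layers, backpropagation pushes the same
    chain-rule deltas back through the intermediate units."""
    w = [random.uniform(-1, 1) for _ in range(3)]  # two weights + bias
    for _ in range(epochs):
        for (x1, x2), t in examples:
            y = sigmoid(w[0] * x1 + w[1] * x2 + w[2])
            delta = (y - t) * y * (1.0 - y)        # dE/dz by the chain rule
            w[0] -= lr * delta * x1
            w[1] -= lr * delta * x2
            w[2] -= lr * delta
    return w

# logical OR is linearly separable, so a single unit suffices
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = train_unit(data)
```

The criticism, in other words, is not that this is wrong but that it is
familiar statistics; the talk's reply is about what hidden units add
beyond the least-squares core.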
P.S. Don't forget the talk this Friday (the 23rd) by Dave Touretzky
--
-------------------------------------------------------------------
Lorien Y. Pratt Computer Science Department
pratt@paul.rutgers.edu Rutgers University
Busch Campus
(201) 932-4634 Piscataway, NJ 08854
------------------------------
Subject: Re: temporal domain in vision
From: lag@cseg.uucp (L. Adrian Griffis)
Date: Wed, 21 Sep 88 20:50:53 +0000
In article <233@uceng.UC.EDU>, dmocsny@uceng.UC.EDU (daniel mocsny) writes:
> In Science News, vol. 134, July 23, 1988, C. Vaughan reports on the
> work of B. Richmond of NIMH and L. Optican of the National Eye
> Institute on their multiplex filter model for encoding data on
> neural spike trains. The article implies that real neurons multiplex
> lots of data onto their spike trains, much more than the simple
> analog voltage in most neurocomputer models. I have not seen
> Richmond and Optican's papers and the Science News article was
> sufficiently watered down to be somewhat baffling. Has anyone
> seen the details of this work, and might it lead to a method to
> significantly increase the processing power of an artificial neural
> network?
My understanding is that neurons in the eye depart from a number of
general rules that neurons seem to follow elsewhere in the nervous system.
One such departure is that sections of a neuron can fire independently
of other sections. This allows the eye to behave as though it has a great
many logical neurons without having to use the space that the same number
of discrete cellular metabolic systems would require. I'm not an expert
in this field, but this suggests to me that many of the special tricks
that neurons of the eye employ may be attempts to overcome space limitations
rather than to make other processing schemes possible. Whether or not
this affects the applicability of such tricks to artificial neural networks
is another matter. After all, artificial neural networks have space
limitations of their own.
- --
UseNet: lag@cseg L. Adrian Griffis
BITNET: AG27107@UAFSYSB
------------------------------
Subject: Re: temporal domain in vision
From: jwl@ernie.Berkeley.EDU (James Wilbur Lewis)
Date: Thu, 22 Sep 88 01:42:30 +0000
In article <724@cseg.uucp> lag@cseg.uucp (L. Adrian Griffis) writes:
>In article <233@uceng.UC.EDU>, dmocsny@uceng.UC.EDU (daniel mocsny) writes:
>> In Science News, vol. 134, July 23, 1988, C. Vaughan reports on the
>> work of B. Richmond of NIMH and L. Optican of the National Eye
>> Institute on their multiplex filter model for encoding data on
>> neural spike trains. The article implies that real neurons multiplex
>> lots of data onto their spike trains, much more than the simple
>> analog voltage in most neurocomputer models.
>
>My understanding is that neurons in the eye depart from a number of
>general rules that neurons seem to follow elsewhere in the nervous system.
I think Richmond and Optican were studying cortical neurons. Retinal
neurons encode information mainly by graded potentials, not spike
trains....another significant difference between retinal architecture
and most of the rest of the CNS.
I was somewhat baffled by the Science News article, too. For example,
it was noted that the information in the spike trains might be a
result of the cable properties of the axons involved, not necessarily
encoding any "real" information, but this possibility was dismissed with a
few handwaves.
Another disturbing loose end was the lack of discussion about how this
information might be propagated across synapses. Considering that
it generally takes input from several other neurons to trigger a
neural firing, and that the necessary integration time would tend
to smear out any such fine-tuned temporal information, I don't
see how it could be preserved.
It's an interesting result, but I think they may have jumped the
gun with the conclusion they drew from it.
- -- Jim Lewis
U.C. Berkeley
------------------------------
Subject: Call for papers: 6th Scandinavian Conference on Image Analysis
From: <PARKKINE%FINKUO.BITNET@CUNYVM.CUNY.EDU>
Date: Thu, 22 Sep 88 15:06:00 +0100
The 6th Scandinavian Conference on Image Analysis
=================================================
June 19 - 22, 1989
Oulu, Finland
Second Call for Papers
INVITATION TO 6TH SCIA
The 6th Scandinavian Conference on Image Analysis (6SCIA)
will be arranged by the Pattern Recognition Society of
Finland from June 19 to June 22, 1989. The conference is
sponsored by the International Association for Pattern
Recognition. The conference will be held at the University
of Oulu. Oulu is the major industrial city in North Finland,
situated not far from the Arctic Circle. The conference site
is at the Linnanmaa campus of the University, near downtown
Oulu.
CONFERENCE COMMITTEE
Erkki Oja, Conference Chairman
Matti Pietikäinen, Program Chairman
Juha Röning, Local Organization Chairman
Hannu Hakalahti, Exhibition Chairman
Jan-Olof Eklundh, Sweden
Stein Grinaker, Norway
Teuvo Kohonen, Finland
L. F. Pau, Denmark
SCIENTIFIC PROGRAM
The program will consist of contributed papers, invited
talks and special panels. The contributed papers will
cover:
* computer vision
* image processing
* pattern recognition
* perception
* parallel algorithms and architectures
as well as application areas including
* industry
* medicine and biology
* office automation
* remote sensing
There will be invited speakers on the following topics:
Industrial Machine Vision
(Dr. J. Sanz, IBM Almaden Research Center)
Vision and Robotics
(Prof. Y. Shirai, Osaka University)
Knowledge-Based Vision
(Prof. L. Davis, University of Maryland)
Parallel Architectures
(Prof. P. E. Danielsson, Linköping University)
Neural Networks in Vision
(to be announced)
Image Processing for HDTV
(Dr. G. Tonge, Independent Broadcasting Authority).
Panels will be organized on the following topics:
Visual Inspection in the Electronics Industry (moderator:
Prof. L. F. Pau);
Medical Imaging (moderator: Prof. N. Saranummi);
Neural Networks and Conventional Architectures (moderator:
Prof. E. Oja);
Image Processing Workstations (moderator: Dr. A. Kortekangas).
SUBMISSION OF PAPERS
Authors are invited to submit four copies of an extended
summary (at least 1000 words) of each of their papers to:
Professor Matti Pietikäinen
6SCIA Program Chairman
Dept. of Electrical Engineering
University of Oulu
SF-90570 OULU, Finland
tel +358-81-352765
fax +358-81-561278
telex 32 375 oylin sf
net scia@steks.oulu.fi
The summary should contain sufficient detail, including a
clear description of the salient concepts and novel features
of the work. The deadline for submission of summaries is
December 1, 1988. Authors will be notified of acceptance by
January 31st, 1989, and final camera-ready papers will be
required by March 31st, 1989.
The length of the final paper must not exceed 8 pages.
Instructions for writing the final paper will be sent to the
authors.
EXHIBITION
An exhibition is planned. Companies and institutions
involved in image analysis and related fields are invited to
exhibit their products at demonstration stands, on posters,
or on video. Please indicate your interest in taking part by
contacting the Exhibition Committee:
Matti Oikarinen
P.O. Box 181
SF-90101 OULU
Finland
tel. +358-81-346488
telex 32354 vttou sf
fax. +358-81-346211
SOCIAL PROGRAM
A social program will be arranged, including opportunities
to enjoy the conference location, the sea, and the midnight
sun. There are excellent opportunities for post-conference
tours, e.g. to Lapland or to the lake district of Finland.
The social program will consist of a get-together party on
Monday June 19th, a city reception on Tuesday June 20th, and
the conference Banquet on Wednesday June 21st. These are all
included in the registration fee. There is an extra fee for
accompanying persons.
REGISTRATION INFORMATION
The registration fee will be 1300 FIM before April 15th,
1989, and 1500 FIM afterwards. The fee for participants
covers: entrance to all sessions, panels and the exhibition;
proceedings; the get-together party, city reception, banquet,
and coffee breaks.
The fee is payable by:
- check made out to 6th SCIA and mailed to the Conference
Secretariat;
- bank transfer (draft account); or
- any major credit card.
Registration forms, hotel information and practical travel
information are available from the Conference Secretariat.
An information package will be sent to authors of accepted
papers by January 31st, 1989.
Secretariat:
Congress Team
P.O. Box 227
SF-00131 HELSINKI
Finland
tel. +358-0-176866
telex 122783 arcon sf
fax +358-0-1855245
There will be hotel rooms available for participants, with
prices ranging from 135 FIM to 430 FIM per night for a single
room (90 FIM to 270 FIM per person in a double room).
------------------------------
End of Neuron Digest
*********************