[comp.ai.neural-nets] Neuron Digest V6 #70

neuron-request@HPLMS2.HPL.HP.COM ("Neuron-Digest Moderator Peter Marvit") (12/12/90)

Neuron Digest	Tuesday, 11 Dec 1990
		Volume 6 : Issue 70

Today's Topics:
	       Astronomy and Medical Imaging Applications?
			   neuron digest item
			 job opening - Rank open
		    NEURAL NETS COMPUTING WITH GAUSS
	       Re: Transputer Implementations References?
			     standardization
		      Help with new HITACHI product
		      Academic Program Info Request
	     post-doctoral position in neural nets in France
		     "Request for posting a message"
		       Neural net who plays chess
     Re: Help with new HITACHI product - SUMMARY of the information
	       Japanese enter nets computer business - FYI
	     Neural Chess - Questions, Comments and Options
	 Re: The MIC and the Individual to creativity and reward
      Re: Articles on fuzzy cognitive maps & NN predictive modeling


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: Astronomy and Medical Imaging Applications?
From:    Ade Miller <ASM%ASTRONOMY.PHYSICS.SOUTHAMPTON.AC.UK@pucc.PRINCETON.EDU>
Date:    Tue, 27 Nov 90 15:03:00 +0000


   Having just completed a fairly extensive search for network applications
in astronomy and medical imaging, I was surprised to find only one or two
papers in each field. Is anyone out there doing anything in either of
these fields, on any sort of hardware?

				Thanks,

				Ade.

------------------------------

Subject: neuron digest item
From:    codelab@psych.purdue.edu (Coding Lab Wasserman)
Date:    Tue, 27 Nov 90 15:23:09 -0500

	BRAINS' BRAINS

	Landman offered the comment that Gauss' brain was noticeably more 
convoluted than normal. However, striking observations of this sort are
an inevitable concomitant of the variation in human brains. By the same
token, observations in the other direction are not difficult to come by.
For example, I believe, without having the citation ready to hand, that
the brain of Anatole France was only about 2/3 average size.

	There is great variation in the size and shape of both genetically 
unselected and selected brains. Wild (i.e., unselected) brains vary as
much from one person (or primate) to another as do faces. As Wahlsten has
shown, even inbred strains exhibit substantial gross variations in
morphology (e.g., about 20% of one inbred strain of mice do not develop a
corpus callosum while the rest of that same strain do develop one, albeit
to varying extents).

	So the noise in any gross brain size measurement is enormous. Is
there any signal about individual differences that can be extracted from
this noise? Well, the phrenologists tried to prove that there was, and
they spent about a hundred years of well-financed research time in that
attempt without any lasting success. Therefore, there is probably not
much to be found here.


                        Jerry Wasserman
                        Purdue University

------------------------------

Subject: job opening - Rank open
From:    BOHANNON%BUTLERU.BITNET@VMA.CC.CMU.EDU
Date:    Thu, 29 Nov 90 11:59:00 -0500

Applied Cognition - Rank Open: The Department of Psychology at Butler
University is seeking nominations/applications for a tenure track opening
to start August, 1991.  We are seeking faculty who would be knowledgeable
in distributed processing systems and their applications.  Specific area
is less important than evidence of excellence in both teaching and
research.  Salaries are negotiable and competitive.  Responsibilities
include maintaining an active research program with undergraduates and
teaching an undergraduate course in Human Factors, as well as other
courses in the candidate's area of interest. Minimum Qualifications:
Ph.D. in psychology
at time of appointment.  For junior nominees/applicants: potential
excellence in both research and teaching. Teaching experience preferred.
For senior nominees/applicants: established record of excellence in
research, more than 3 years of teaching experience, and an interest
and/or experience in attracting external funding.  Butler University is a
selective, private university located on a 400+ acre campus in the
residential heart of Indianapolis.  The psychology department is housed
in recently renovated space with state-of-the-art, video/social
interaction and computer-cognition labs (Macintosh II and up).  Screening
of nominees/applicants will continue until suitable faculty are found.
Those interested should send a statement of research and teaching
interests, vita, and three original letters of reference to: John Neil
Bohannon III, Head, Department of Psychology, Butler University, 4600
Sunset Ave., Indianapolis, IN 46208.  Bitnet: Bohannon@ButlerU.  AA/EEO

------------------------------

Subject: NEURAL NETS COMPUTING WITH GAUSS
From:    zhou@brazil.psych.purdue.edu (Albert Zhou)
Organization: Purdue University
Date:    02 Dec 90 01:31:02 +0000

We've been using the programming language GAUSS to do neural network
computing, and we would like to share experiences and ideas with other
neural net researchers using the same language.

For those who haven't heard of it before, GAUSS is a powerful high-level
matrix-oriented language that runs in the PC environment.
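
For readers who have not used a matrix language, a minimal sketch of one
backpropagation step written with whole-matrix operations may help show
why such languages suit neural net work.  The sketch below is in
Python/NumPy rather than GAUSS, and the network size, data, and learning
rate are arbitrary illustrative choices; GAUSS code would look broadly
similar.

  import numpy as np

  rng = np.random.default_rng(0)
  X = rng.random((100, 6))                  # 100 training patterns, 6 inputs
  T = rng.random((100, 1))                  # target outputs
  W1 = rng.standard_normal((6, 8)) * 0.1    # input-to-hidden weights
  W2 = rng.standard_normal((8, 1)) * 0.1    # hidden-to-output weights
  lr = 0.1

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  for epoch in range(1000):
      H = sigmoid(X @ W1)                   # forward pass, all patterns at once
      Y = sigmoid(H @ W2)
      dY = (Y - T) * Y * (1 - Y)            # output-layer delta
      dH = (dY @ W2.T) * H * (1 - H)        # hidden-layer delta
      W2 -= lr * H.T @ dY                   # weight updates as matrix products
      W1 -= lr * X.T @ dH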

------------------------------

Subject: Re: Transputer Implementations References?
From:    j_millan@cen.jrc.it (Jose del R. MILLAN)
Date:    Mon, 03 Dec 90 16:57:37 +0000



In reply to Ade Miller, Subject: Transputer Implementations References?

>   ...  My main interests are back- propagation networks; optimisation
>   and learning, applications to the above and implementations on
>   parallel hardware (especially Transputers).  ...

Millan, J. del R. & Bofill, P. (1989). Learning by back-propagation: a systolic
   algorithm and its transputer implementation. The Int. Journal of Neural
   Networks: Research & Applications, 1(3), 119--137.

Bofill, P., Manyer, J., Millan, J. del R., & Salvans, V. (1990). A systolic
   algorithm for back-propagation: mapping onto a transputer network. In
   Stender, J. & Addis, T. (eds.) "Symbols versus Neurons?". IOS Press.

	Jose del R. MILLAN
	Institute for System Engineering and Informatics
	Commission of the European Communities. Joint Research Centre
	Building A36. 21020 ISPRA (VA). ITALY
	j_millan@cen.jrc.it	(bitnet)


------------------------------

Subject: standardization
From:    atilla gunhan <gunhan@cc.uib.no>
Date:    Tue, 04 Dec 90 11:46:25 +0100

               TERMINOLOGICAL AND METHODICAL
                 STANDARDIZATION.

One of the most significant problems that can arise during an educational
program is the use of different terminology and/or methodology across the
literature of a given subject. Such problems have existed in almost every
subject taught at universities.

In the Neural Network field, one can easily recognize such problems.
Graphical representations, variables, concept definitions, and
classifications vary from one book to another. I would therefore like to
write a paper in which I propose a standardized terminological and
methodical approach for ANNs.  I would like to hear your opinions.

E-mail: gunhan@cc.uib.no

Address: Atilla Gunhan
         Dept. of Information Science
         University of Bergen
         Hightechnology center in Bergen
         Thormohlensgt. 55,
         N-5008 Bergen, Norway


------------------------------

Subject: Help with new HITACHI product
From:    Paulo V Rocha <P.Rocha@cs.ucl.ac.uk>
Date:    Wed, 05 Dec 90 08:59:16 +0000


The New York Times recently (sorry, I don't have the issue) reported on
Hitachi's new Neurocomputer, which claims excellent performance.

Can anyone help me with some pointers to where I can find a description
of the architecture (or details in general)? Maybe someone in Japan?  I
haven't heard anything about it here in the UK.

Thanks and of course I will post the information I get :-)

P.

+-----------------------------+---------------------------------------------+
Paulo Valverde de L. P. Rocha |   JANET:procha@uk.ac.ucl.cs
Department of Computer Science|  BITNET:procha%uk.ac.ucl.cs@UKACRL
University College London     |Internet:procha%cs.ucl.ac.uk@nsfnet-relay.ac.uk
Gower Street                  | ARPANet:procha@cs.ucl.ac.uk
London WC1E 6BT               |    UUCP:...!mcvax!ukc!ucl-cs!procha
England                       |     tel: +44 (071) 387 7050 x 3719
                              |     fax: +44 (071) 387 1397
+-----------------------------+---------------------------------------------+

------------------------------

Subject: Academic Program Info Request
From:    worth@park.bu.edu (Andrew J. Worth)
Date:    Wed, 05 Dec 90 11:28:44 -0500

ISSNNet Request for Academic Program Information

The International Student  Society for  Neural  Networks (ISSNNet)  is
compiling a list of academic programs relating to Neural Networks from
around the world.  We would like  your input if you  are a member of a
scholastic program that  is in  any  way  related to Neural  Networks,
Parallel    Distributed    Processing,  Connectionism,   Computational
Neuroscience, Neural Modeling, Neural Computing, etc.

We hope to provide this service  so that (1) interested  students will
be  able to  apply  to those  programs that  will most closely satisfy
their educational goals,   and (2) current  students  and non-students
will be aware of existing academic programs.  This service is intended
to not only  provide an overview of  these programs and contact points
for more information, but  also  a personal glimpse into what's behind
the official descriptions.

All information will be made publicly available and will be updated as
new programs are created   and  as  programs change.   Complying  with
ISSNNet's goal to be absolutely unbiased, we would like this to become
THE source of information on academic programs in this field.

ISSNNet would like to provide the following information:

  - Official  address to contact for  more information (surface mail and
  email)
  - Official description of the program.
  - Names of Faculty Members and their interests
  - Degree requirements (BA, BS, MA, MS, PhD, etc.)
  - Short description of courses offered
  - Computing resources (Hardware and Software Tools)
  - Number of Students (grad/undergrad) and related faculty
  - A  brief *personal* description  of  the   program, department, etc.
  describing motivation, emphasis, goals, and/or overall ambiance.
  - Student Contacts (w/ telephone numbers, email and surface addresses,
  degree sought, interests, and date of graduation)

This information is above and beyond the  academic questionnaires that
were  filled   out at the   San Diego and  Paris  conferences and will
eventually be made available via ftp and also  by other  means through
ISSNNet (your  submission will  be  taken  as  permission to  make the
information public unless we are otherwise notified).

Coordinated responses from each institution are encouraged and will be
appreciated.  Please submit descriptions of academic programs in plain
text (email is preferred) following the guidelines above to:

   issnnet-acad-progs@bucasb.bu.edu

We will  also be able to  re-distribute information in other emailable
formats such as postscript or LaTeX.

Thank you for your time and effort,
                                 Andy.

 ----------------------------------------------------------------------
Andrew J. Worth              (617) 353-6741                    ISSNNet
ISSNNet Academic Program Editor                           P.O. Box 557
issnnet-acad-progs@park.bu.edu                            New Town Br.
worth@park.bu.edu                                Boston, MA 02215  USA


------------------------------

Subject: post-doctoral position in neural nets in France
From:    "James A. Reggia" <reggia@cs.UMD.EDU>
Date:    Thu, 06 Dec 90 14:33:00 -0500

Post-Doctorate Position in France:
     A two year post-doctoral position in neural modelling is 
available at ONERA/CERT in Toulouse, France.  ONERA/CERT is a government
research laboratory: Office National d'Etudes et de Recherches
Aerospatiales/Centre d'Etudes et de Recherches de Toulouse.
     The pay is approximately $2000/month.  The working language 
is French, but most individuals at ONERA/CERT speak English fairly well.
Work in this position would include one or more of the following:
development and study of learning rules in competitive systems, matching
images, or development of neural modelling software on a connection
machine (the latter would require spending some time in Paris too).
     For further information or for answers to questions, please
contact Paul Bourret at bourret@tls-cs.cert.fr via email.

------------------------------

Subject: "Request for posting a message"
From:    VEMURI@icdc.llnl.gov
Date:    Thu, 06 Dec 90 14:45:00 -0700

Dear friends:

     I am interested to know if anyone out there is working on the
application of artificial neural nets to the drug abuse problem. The
range of potential problems is broad: 3-D drug design, data bases for
molecules, brain/reward mechanisms, and so on. Thanks.

                                                    Rao Vemuri
                                                    vemuri@icdc.llnl.gov
                                                    Dept. of Applied Science.

------------------------------

Subject: Neural net who plays chess
From:    morphy@cobalt.cco.caltech.edu (Jones Maxime Murphy)
Date:    Thu, 06 Dec 90 16:33:01 -0800

I am a chess master, and I have actually done some work on building a net
to solve K+P endgames. For a master, K+P endings are quite easy to
evaluate, and I also had the K+P evaluator from "Cray Blitz", which I
modified slightly for use as a training tool for my networks.
 I concentrated on the K+PvK endgame, the simplest possible endgame. My 6
inputs consisted of the position of the pawn and the vectors from the
pawn to the kings. The output was to be 1 if the position was a win, or 0
if the position was a draw. I generated every possible position and
trained the network on the output from "Cray Blitz"'s evaluator. "Cray
Blitz" is a well-known computer program which competes running on Cray
machines.
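
The posting does not spell out the exact encoding, so the following is a
hypothetical reading of the 6-input scheme described above (pawn square
plus the file/rank vectors from the pawn to each king), written as a
small Python function; the square numbering and scaling are my own
assumptions.

  def encode_kpk(pawn_sq, own_king_sq, enemy_king_sq):
      """Each square is a (file, rank) pair with file and rank in 0..7."""
      pf, pr = pawn_sq
      okf, okr = own_king_sq
      ekf, ekr = enemy_king_sq
      return [
          pf / 7.0, pr / 7.0,                  # pawn position, scaled to [0, 1]
          (okf - pf) / 7.0, (okr - pr) / 7.0,  # vector from pawn to own king
          (ekf - pf) / 7.0, (ekr - pr) / 7.0,  # vector from pawn to enemy king
      ]

  # Example: pawn on e5, own king on e6, enemy king on e8
  inputs = encode_kpk((4, 4), (4, 5), (4, 7))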
 The first problem was that the Cray Blitz evaluator sucked. The person who 
wrote that thing must not have been much of a player. I spent lots of
time hacking their code. Then, when I finally got it to the point where I
was halfway satisfied with it, I ran into the real problems. First of
all, edge effects. The edges of the chessboard have a really major
effect, since translation invariance only holds for 6 files. This means a
substantial proportion of positions involve rook pawns, which are special.
A really helpful suggestion from Christof Koch, which I hope to implement,
is to make the board horizontally infinite. This will mean monkeying
around with Cray Blitz (gag!).
 As to the nice idea of alpha-beta/neural net duality, I (and others, I'm
sure) have been pondering this since 1986. The computational price is
extremely heavy, as I found out with my little K+P thing. One training
run would take 18 hours on a Sun 3. I hope that work continues, though.
 And, sorry, I can't shed much light on how masters process positions.
Friends of mine who are much stronger players than I are often at a loss
to articulate how or why they make certain key moves. It's something that
we do talk about a great deal, though.  Jones

------------------------------

Subject: Re: Help with new HITACHI product - SUMMARY of the information
From:    Paulo V Rocha <P.Rocha@cs.ucl.ac.uk>
Date:    Fri, 07 Dec 90 09:06:05 +0000


The most recent publication about the circuit seems to be

     "Design, Fabrication and Evaluation of a 5-inch Wafer Scale
     Neural Network LSI Composed of 576 Digital Neurons" by
     M. Yasunaga et al., Proc. IJCNN'90 (San Diego), pp. II-527.

I also received some details from a pamphlet probably distributed at
Hitachi's technology fair in New York.

- - ------------------
Neuron circuit:		Completely digital circuit with learning function.

Architecture:		Time sharing digital bus
			Dual networks for learning

Performance of learning circuits:  2.3 GCUPS

Number of neurons:	144 neurons per wafer
			1152 neurons per system

Neuron output:		9 bits

Synapse weight:		16 bits

Process:  		0.8 micron CMOS gate array

Wafer size:		5" diameter

System size:		30 cm x 21 cm x 23 cm

Power consumption:	~50 Watt
- - ---------------

The circuit uses a single bus, and the claimed performance is a peak
figure for the arithmetic units (multipliers and adders). Real performance
can be much lower.

Each neuron has only 64 synapses because the design uses gate array
technology, and memory in this case is very expensive.

The neurocomputer has about 70 K synapses and 1 K neurons.
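
As a rough check of these figures, here is a back-of-the-envelope sketch
in Python using the pamphlet's rounded numbers; the eight-wafer system
size is an assumption, consistent with the 1152 neurons per system quoted
above, and the sweep rate is a peak figure only, for the reasons just
given.

  neurons_per_wafer = 144
  wafers_per_system = 8                 # assumed; consistent with 1152/system
  synapses_per_neuron = 64

  neurons = neurons_per_wafer * wafers_per_system   # 1152, i.e. about 1 K
  synapses = neurons * synapses_per_neuron          # 73,728, the "about 70 K"

  peak_cups = 2.3e9                     # claimed peak connection updates/sec
  sweeps_per_second = peak_cups / synapses          # ~31,000 full weight sweeps
  print(neurons, synapses, round(sweeps_per_second))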


Thanks again to everybody.

Paulo (University College London)

------------------------------

Subject: Japanese enter nets computer business - FYI
From:    havener@Kodak.COM
Date:    Sat, 08 Dec 90 11:01:03 -0500

 John, FYI.
 DEC may miss the boat on this business if they don't get something
 underway.

==============================================================================
Note 313.0               Hitachi neurocomputer announced            No replies
TXTC01::TXTC01                                     60 lines   7-DEC-1990 02:01


According to an article in the November 26th edition of the "Nikkei
Shinbun" (the Japanese equivalent to the Wall Street Journal), Hitachi
has developed the world's fastest general purpose neurocomputer.

The learning speed is reported as 2,300 Million Weight Updates Per Second
(MWUPS), approximately four times that of the Fujitsu neurocomputer which
had a reported 587 MWUPS.

It is constructed of eight 5 inch silicon wafers enclosed within a 30cm x
21cm x 23cm cabinet.  Each wafer contains 144 neurons, making the total
capacity 1,152 neurons.  It is designed to be used in conjunction with a
workstation.

In the same article, two applications were announced.  The first is for
stock value prediction and the second is for signature verification.  The
stock value prediction application appears to accept the last 20 days'
price data for a given stock as input and to output a projection for the
next 10 days.

The signature verification is trained on 5 actual signatures, and
verification is made based upon the pen pressure, and horizontal and
vertical pen speeds.
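
The article gives only the input and output sizes for the stock
application (20 days of prices in, a 10-day projection out), so the
following Python sketch of how training pairs might be built from a price
series is an assumption about the setup, not a description of Hitachi's
actual software.

  import numpy as np

  def make_windows(prices, n_in=20, n_out=10):
      """Slide a (20-input, 10-output) window over a daily price series."""
      prices = np.asarray(prices, dtype=float)
      X, T = [], []
      for i in range(len(prices) - n_in - n_out + 1):
          X.append(prices[i : i + n_in])                 # last 20 days as input
          T.append(prices[i + n_in : i + n_in + n_out])  # next 10 days as target
      return np.array(X), np.array(T)

  # Example with a synthetic 200-day price series
  series = np.cumsum(np.random.default_rng(2).standard_normal(200))
  X, T = make_windows(series)
  print(X.shape, T.shape)   # (171, 20) (171, 10)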

The company plans to commercialize the neurocomputer within the next two
years at a price of less than 100 million yen (about $770K at 130 yen to
the dollar).

Related articles:

%A Kato, H.
%A Yoshizawa, H.
%A Iciki, H.
%A Asakawa, K.
%T A Parallel Neurocomputer Architecture with Ring Registers
%J Proceedings of the InfoJapan '90 Computer Conference
%V 1
%P 233-240
%I Information Processing Society of Japan
%C Tokyo
%D 1990
%K Neurocomputer

%A Herbst, K.
%T Fujitsu Parallel Design Speeds Learning of Neurocomputers
%J Supercomputing Review
%V 3
%P 56-57
%I Supercomputing Review
%C San Diego
%D October 1990
%K Neurocomputer

- ---Aggies-O-Aggies!--We fight until we die!--We fight until we die!----

 John P. Havener                                         / / 
 Senior Chemical Engineer                             / / / /
 TEX Core Artificial Intelligence Group           -------------
 Eastman Kodak, Texas Eastman Co.             ---/ New Mexico /---
 Building 1, Box 7444                    -------/   State    /-------
 Longview, Texas  75607                 -------/   Aggies   /-------
 Phone: (903)237-6368                      ---/    1978    /---
 Net: havener@Kodak.com                      --------------
 Fax: (903)237-5371                             / / / /
                                                 / /

 ---Aggies-O-Aggies!--We'll win this game and that's the reason why!-------

------------------------------

Subject: Neural Chess - Questions, Comments and Options
From:    David Kanecki <kanecki@vacs.uwp.wisc.edu>
Date:    Sat, 08 Dec 90 20:19:35 -0600

	Recently I was asked the following questions about the neural
chess program:

	1) What is the architecture of the mammalian neural net?
	2) What method is used - Backprop, cascade, etc?
	3) What input units are used - whole board, single unit?
	4) What type of training protocols are used?
	5) How does the computer choose a move?
	6) Define 'minimalist learning'?
	7) How does the concept of 'Decision Making' apply to neural chess?

This information is considered proprietary system programming; it will be
made available in published papers in the future, as I am enrolled in a
pre-doctoral program in biological science.

	Some people have asked if there was a way they could see or play
against the program. Yes, they can.

	To see the program, I can prepare a VHS tape of the computer
screen showing selected moves of a match from start to completion. Upon
reviewing the tape, I would like to receive comments or suggestions. Send
$10 for shipping and handling; upon return of the tape I will refund the
tape fee.


	To play the program I suggest the following options:

1)	Send me a board position that you would like to start from.

2)	Play a game of chess against the program. Both options are
	handled by correspondence: as each move is received, the
	program's reply will be returned through e-mail.

3)	The VHS tape option described above.


	The primary reason for announcing the program was to show that
there are independent, creative people who can share what they have
developed, while others who are working in this area may not be able to
report their findings.  Your comments and feedback on my announcement
will be of value to everyone interested in this area.

	This program was developed without grants or funding. The
development process draws on 10 years of university education in computer
and biological science. Any comments or feedback will be passed along
through e-mail, as well as in any published papers when they become
available. Finally, I would encourage all to suggest pending work, or
work they wish could be done, so that others would be enriched and
encouraged. Remember that this program was developed on a 64K CP/M
computer through wise programming choices. More people should be
encouraged to write programs within 64K before going to higher-capacity
machines, in order to develop their minds. Similarly, I have developed a
neural simulation program on a 64K basis that was reported earlier.


David H. Kanecki, Bio. Sci., A.C.S.
P.O. Box 93
Kenosha, WI 53141
(414)-654-8710
kanecki@vacs.uwp.wisc.edu

------------------------------

Subject: Re: The MIC and the Individual to creativity and reward
From:    David Kanecki <kanecki@vacs.uwp.wisc.edu>
Date:    Sat, 08 Dec 90 21:15:21 -0600

[[ Editor's Note: Once again, I fear this message strays into an area
best left to other fora.  However, I'm letting it go and will entertain
one round of replies, if any readers wish to do so.  Remember, this
should be primarily a *technical* Digest relating to neural networks,
both natural and artificial.  Again, I tend to be quite liberal in my
editorship but feel this is getting a bit afield. -PM ]]


	One of the last speeches by the late President Eisenhower
was on the buildup of the Military Industrial Complex and its effect on
the individual. His fear was that the MIC would acquire most of the
funding, with individuals being forced to give their ideas to a MIC
organization, whereby the MIC would receive the recognition and credit
and the individual nothing.  And the individual would have to play ball.

	The point of my neural chess program is that there are creative
people who prefer not to work for that system. I think it would be
better if funding and credit were awarded to the individual, and that the
merit of the idea should be the basis for funding, not the education
level or the number of papers one has written or co-written. Doctoral
degrees have existed only since the mid-1800s. Many people who developed
and invented new ideas, concepts, and products had various types of
education and skills, but all of them had the drive and ability to know
that there is something better.

	One example of an organization receiving credit and funding
while the individual received nothing is the case of my father. He was a
creative member of the research and development of the carbon rocket
nozzle used in NASA Projects Mercury, Gemini, and Apollo. He also worked
at other companies setting up advanced engineering departments and
projects. But when he reached the age of 45, the company rewarded him by
letting him go without a pension. In fact, he had to survive for 15 years
using his small savings to support his family and send his children to
college. When he attempted to find other employment, he had suddenly
become too old, and his past achievements were not valued. Is this fair
or just to the remaining elderly professionals who are still living and
could contribute but are not allowed to? What can we do to regain this
technology before it is too late and lost? Shouldn't individual reward
and creativity be encouraged, through mentors, grants, etc., while time
is of the essence?

	I believe that if the U.S. wants to achieve the level
of creativity it had during the space program it should provide support
for the individual and not the organization. By providing too much
support to an organization one would have the problems that now exist
with the shuttle program, NASA, and other areas.

	Funding and credit should be given to individuals in all
fields on the merit of their idea as opposed to their title or pull.

	These opinions are my own. But I thought it was time that
someone addressed these issues.

	Any comments, pro or con, would help strengthen the professional
development community and the special interest areas covered by e-mail.
Among the key factors of success, the first is a change in attitude,
which is addressed above, and the second is Pareto analysis, where only 5
or 10% of the issues account for over 80% of what needs to be done.


David Kanecki, Bio. Sci., A.C.S.
P.O. Box 93
Kenosha, WI 53141
kanecki@vacs.uwp.wisc.edu     


------------------------------

Subject: Re: Articles on fuzzy cognitive maps & NN predictive modeling
From:    Willie Brown <brown@Corona.ITD.MsState.Edu>
Date:    Mon, 10 Dec 90 14:36:47 -0600


  I'm looking for articles on FCMs & using neural networks as predictive
models.  I have copies of Bart Kosko's "Adaptive Inference in Fuzzy
Knowledge Networks" and "Fuzzy Cognitive Maps" articles.  Books and
periodicals are also of interest.
  I'm looking at applying FCMs to predict the future availability of obsolete
military microcircuits.
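
For readers new to FCMs, a minimal Python sketch of the basic iteration
from Kosko's papers (a concept-state vector repeatedly multiplied by the
causal edge matrix and then thresholded) is given below.  The three
concepts and edge weights are invented purely for illustration and are
not taken from any real parts-availability model.

  import numpy as np

  # Causal edge matrix E[i, j]: influence of concept i on concept j, in [-1, 1].
  # Hypothetical concepts: 0 = part obsolete, 1 = demand, 2 = availability.
  E = np.array([
      [ 0.0,  0.0, -0.8],   # obsolescence suppresses availability
      [ 0.0,  0.0,  0.5],   # demand encourages continued availability
      [ 0.0, -0.3,  0.0],   # falling availability feeds back on demand
  ])

  def step(state, E):
      """One FCM update: matrix product followed by bivalent thresholding."""
      return (state @ E > 0).astype(float)

  state = np.array([1.0, 0.0, 1.0])     # start: part obsolete, still available
  for _ in range(10):
      new_state = step(state, E)
      new_state[0] = 1.0                # clamp the input concept (obsolescence)
      if np.array_equal(new_state, state):
          break                         # reached a fixed point
      state = new_state
  print(state)                          # availability has switched off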

Willie C. Brown
Computer Engineer
Center for Military Replacement Parts
Starkville,MS

------------------------------

End of Neuron Digest [Volume 6 Issue 70]
****************************************