[comp.ai.neural-nets] NN experimenters information

johnson@c10sd1.StPaul.NCR.COM (Wayne D. T. Johnson) (04/19/88)

A while ago I polled the neural-nets newsgroup about basic texts concerning
neural networks, and included a request for public domain software.  This is
a list of the responses.

My original idea was to mail to users who wanted a copy of this information,
but the sheer mass of requests makes it almost impossible to mail it out (not
to mention the number of bounced acknowledgements I sent out).

I'm sorry if this is rather long and if some of this information has already
been seen on the network, but I just have not had the time to sort through it
all and weed out the redundant material.

My thanks to all who contributed.

---------------------------------------------

From ncrcce!ncrlnk!relay.cs.net!wilma.bbn.com!mesard Wed Mar 23 09:33 CST 1988
To: "Wayne D. T. Johnson" <johnson@ncrcce.stpaul.ncr.com>
From: mesard@bbn.com
Reply-To: mesard@bbn.com
Phone: 617-873-1878
Office: 13/216
Subject: Re: request for net software
In-reply-to: Your message of 21 Mar 88 19:05:58 +0000.
             <346@c10sd1.StPaul.NCR.COM>
Date: Tue, 22 Mar 88 10:15:57 -0500
Sender: mesard@wilma.bbn.com
Status: R

For my money THE book on NNs is _Parallel Distributed Processing_ by
Rumelhart and McClelland.  It's a two-volume set that gives a thorough
treatment of the field.  It's published by the MIT Press, and any
university with a CS or Cog. Psych. department should have it in its
bookstore.

Also, I took a seminar with McClelland a year ago, and our class served as
guinea pigs for a companion workbook they were writing.  This workbook
will come with a set of floppies containing NN simulation programs.
Being able to build and train your own nets is a must for understanding
their power and how they work.  Last I heard (December) it was at the
publisher's.


--
unsigned *Wayne_Mesard();                     MESARD@BBN.COM
                                              BBN Labs, Cambridge, MA

Are you absolutely sure that you want to do this? [ny] ummm_

From ncrcce!ncrlnk!relay.cs.net!elroy.jpl.nasa.gov!jpl-jane.arpa!mathew Wed Mar 23 09:34 CST 1988
Date: Tue, 22 Mar 88 10:28:04 PST
From: Mathew Yeates <mathew@jpl-jane.arpa>
Message-Id: <8803221828.AA10425@jane.Jpl.Nasa.Gov>
To: johnson@ncrcce.stpaul.ncr.com
Subject: Suggested Reading
Status: R


I highly recommend the following:

1) Parallel Distributed Processing
   Explorations in the Microstructure of Cognition
   Vol. 1: Foundations
   Rumelhart, McClelland and the PDP Research Group
   MIT Press, 1986.

2) Self Organization and Associative Memory
   T. Kohonen
   Springer-Verlag 1984

3) An Introduction to Computing with Neural Nets
   R.P. Lippmann in IEEE ASSP Magazine
   April 1987

              happy hunting,
                  Mathew C. Yeates

From ncrcce!ncrlnk!relay.cs.net!gumby.cs.wisc.edu!g-gibbon Wed Mar 23 09:34 CST 1988
Date: Tue, 22 Mar 88 13:45:17 CST
From: Tom Gibbons <g-gibbon%gumby.cs.wisc.edu@cs.wisc.edu>
Message-Id: <8803221945.AA09532@gumby.cs.wisc.edu>
To: johnson@ncrcce.stpaul.ncr.com
Subject: Re: request for net software
Newsgroups: comp.ai.neural-nets
In-Reply-To: <346@c10sd1.StPaul.NCR.COM>
Organization: U of Wisconsin CS Dept
Status: R

I would appreciate any info you get on software for NNets.

As a basic text I suggest:
   Parallel Distributed Processing: Explorations in the Microstructure
       of Cognition (Vol. 1)
   from MIT Press in paperback.
This is the suggested text for a Psych. seminar I am currently taking.  The
first few chapters of this book are very good.  In general it is written from
the Psych. point of view and does not deal much with computer software.

-thanks in advance for any info you get.

-tom

From ncrcce!ncrlnk!relay.cs.net!speedy.cs.wisc.edu!cs.wisc.edu!honavar Wed Mar 23 09:34 CST 1988
Date: Tue, 22 Mar 88 16:25:14 CST
From: A Buggy AI Program <honavar@cs.wisc.edu>
Message-Id: <8803222225.AA12710@speedy.cs.wisc.edu>
To: johnson@ncrcce.stpaul.ncr.com
Subject: Re: request for net software
Newsgroups: comp.ai.neural-nets
In-Reply-To: <346@c10sd1.StPaul.NCR.COM>
Organization: SOCIETY FOR THE PREVENTION OF CRUELTY TO AI PROGRAMS
Cc:
Status: R

In article <346@c10sd1.StPaul.NCR.COM> you write:
>
>I would also like to start a list of basic texts containing information
>on nets.  Not that some of the information in this group isn't useful
>its just that sometimes it goes so far over my head....
>

	1. Brains, Machines and Mathematics - second edition, Arbib,
Springer-Verlag, 1987.

	2. Parallel Distributed Processing, vols. 1 and 2,
McClelland, Rumelhart and the PDP research group, MIT Press, 1986.

	3. Neurocomputing: Foundations of Research, edited by James
Anderson and Edward Rosenfeld, MIT Press, in press (1988)

	4. Several well-written introductory / survey articles are
available such as:

	a. the primers in the first issue of the Neural Networks journal

	b. papers in Daedalus (Journal of the American Academy of
Arts and Sciences), Winter '87 special issue on AI

	and possibly several others (some of which were mentioned in
this newsgroup in response to a request similar to yours).


	I would be interested in hearing about what you come up with.

	Also, please let me know if you find out about public domain
NN simulators.

	thanks.

	Vasant Honavar
	honavar@ai

From ncrcce!ncrlnk!relay.cs.net!boulder.colorado.edu!fozzard Thu Mar 24 06:59 CST 1988
Date: Tue, 22 Mar 88 18:01:23 MST
From: Richard Fozzard <fozzard@boulder.colorado.edu>
Message-Id: <8803230101.AA07845@tut>
To: johnson@ncrcce.stpaul.ncr.com
Subject: Re: request for net software
Newsgroups: comp.ai.neural-nets
In-Reply-To: <346@c10sd1.StPaul.NCR.COM>
Organization: University of Colorado, Boulder
Cc:
Status: R


Look for volume III of the PDP books by Rumelhart and McClelland;
this includes floppies with IBM PC NN simulation software.  I don't
know yet how good it is, but at least it is straight from the horse's
mouth.  By the way, if you don't have them already, get vols. I & II
as a good introduction.

	Have fun, but don't set your expectations too high.  Neural
nets have been badly over-hyped lately.

	Rich Fozzard

From ncrcce!ncrlnk!relay.cs.net!ai.toronto.edu!csri.toronto.edu!thomas Fri Mar 25 14:22 CST 1988
Date: Thu, 24 Mar 88 18:15:24 EST
From: Thomas Kurfurst <thomas@csri.toronto.edu>
Message-Id: <8803242315.AA05798@queen.csri.toronto.edu>
To: johnson@ncrcce.stpaul.ncr.com
Subject: Re: request for net software
Newsgroups: comp.ai.neural-nets
In-Reply-To: <346@c10sd1.StPaul.NCR.COM>
Organization: University of Toronto, CSRI
Status: RO

Although not free, a very good source of connectionist/neural net
software is available from MIT Press.

The package consists of IBM binaries and source code (UNIX/IBM compatible)
that implements the following models:

		- interactive competition
		- constraint satisfaction
			* schema model
			* Boltzmann machine
			* harmonium
		- PDP learning models
			* Hebbian
			* generalized delta (back-propagation)
		- other learning models
			* auto-association
			* competitive learning

As I mentioned, all the source is provided (written in C).  A handbook
is also provided which describes the simulations and the theory
underlying them.

The package is entitled

	EXPLORATIONS IN PARALLEL DISTRIBUTED PROCESSING, A Handbook
	of Models, Programs and Exercises, James L. McClelland and
	David E. Rumelhart, The MIT Press, 1988

	ISBN 0-262-63113-X

I just received it two days ago - it is very new.  Oh, and the price is
US$27.50 - a steal!



[]
[] I would also like to start a list of basic texts containing information
[] on nets.  Not that some of the information in this group isn't useful
[] its just that sometimes it goes so far over my head....
[]
[] If any one would like to contribute any information, please send it to me via
[] E-mail.

The basic text is

	PARALLEL DISTRIBUTED PROCESSING, Explorations in the Microstructure
	of Cognition, James L. McClelland, David E. Rumelhart and the PDP
	Research Group, The MIT Press

		Volume 1: Foundations
		Volume 2: Psychological and Biological Models

	ISBN 0-262-18120-7 (v. 1)
	     0-262-13218-4 (v. 2)
	     0-262-18123-1 (set)

Actually, this is the Bible of the field as far as I know.  You will
notice that the software and sources I recommended were developed by the
same people - in fact, the package is meant to complement "the Bible".

[]
[] If any one would like a copy of what I receive, send me a self addressed
[] stamped E-mail envlope and I will try to send it back.
[]
[] Thanks in advance
[] Wayne Johnson

You are very welcome.  Have fun.  If you discover anything really interesting,
please forward it to me, although most people will recommend the same text
anyhow.

		Sincerely,


------------------------------------------------------------------------------
Thomas Kurfurst    kurfurst@gpu.utcs.toronto.edu (CSnet,UUCP,Bitnet)
205 Wineva Road    kurfurst@gpu.utcs.toronto.cdn (EANeX.400)
Toronto, Ontario   {decvax,ihnp4,utcsri,{allegra,linus}!utzoo}!gpu.utcs!kurfurst
CANADA M4E 2T5     kurfurst%gpu.utcs.toronto.edu@relay.cs.net (CSnet)
(416) 699-5738
..............................................................................

`You see things and you say why, but I dream of
  things that never were and say why not.'
					  - George Bernard Shaw
------------------------------------------------------------------------------

From c10sd3!ncrcce!ncrlnk!ncrcae!hubcap!gatech!bloom-beacon!tut.cis.ohio-state.edu!mailrus!umix!uunet!mcvax!unido!tub!tmpmbx!netmbx!morus Mon Apr  4 09:59:20 CDT 1988
Article 45 of comp.ai.neural-nets:
Path: c10sd1!c10sd3!ncrcce!ncrlnk!ncrcae!hubcap!gatech!bloom-beacon!tut.cis.ohio-state.edu!mailrus!umix!uunet!mcvax!unido!tub!tmpmbx!netmbx!morus
>From: morus@netmbx.UUCP (Thomas M.)
Newsgroups: comp.ai.neural-nets
Subject: Re: request for net software
Message-ID: <1627@netmbx.UUCP>
Date: 1 Apr 88 00:46:16 GMT
References: <346@c10sd1.StPaul.NCR.COM> <6628@ames.arpa>
Reply-To: muhrth@db0tui11.BITNET (Thomas Muhr)
Organization: netmbx Public Access Unix, Berlin
Lines: 52
Posted: Fri Apr  1 01:46:16 1988

In article <6628@ames.arpa> burgess@pioneer.UUCP (Ken Burgess  RCD) writes:
>In article <346@c10sd1.StPaul.NCR.COM> johnson@ncrcce.StPaul.NCR.COM (Wayne D. T. Johnson) writes:
>>(I think).  As a software Engineer (AKA Programmer) I would be very
>>interested if any one out there could direct me to a source of Public
>>Domain (term used genericly, including such classes as shareware,
>>freeware, etc.) software for UNIX or an IBM PC/Compatible that could be
>>useful to a basement experimentor such as I.
>>
>>I would also like to start a list of basic texts containing information
>>on nets.  Not that some of the information in this group isn't useful
>>its just that sometimes it goes so far over my head.... 
>>
(rest deleted)>>.....
>
>
>Since I suspect that there are many new and introductory readers on this
>newsgroup, and the traffic is not all that heavy, I feel that it would
>be very helpful to post whatever information is available for computer
>systems, software, resources, texts, etc.
>
I think you are absolutely right!
Some time ago there was a bit more traffic in this newsgroup, and a
neural network program named bpsim was posted by Richard Caasi (Oct. 87).
It was written in C and demonstrated a small example of the back-propagation
algorithm.  The program and the idea behind it are explained in an article
by William P. Jones and Josiah Hoskins, "Back-Propagation", in BYTE
Vol. 12, No. 11, August 1987, pp. 155-162.  J. Hoskins is also the author
of the program.
I grabbed a copy of this program, but I wasn't very lucky in compiling it
with Turbo C.  It worked once in a while, though I'm not sure that it wasn't
subtly corrupted in the transfer.  I am no C expert and didn't want to bother
people who are.
Maybe one of the people mentioned above is listening and can repost the
program (it wasn't very big, about 16KB) together with some instructions for
building it under Turbo C.  I have the source available, but I don't know if
I have the right to post it.
 
There is another introductory article, with a program listing in C, in
Dr. Dobb's Journal #126, April 1987: "An Artificial Neural Network
Experiment" by Robert Jay Brown.  The program simulates an "adaptive
template matching image categorizer".  It "learns to recognize (visual)
patterns by being trained from a set of prototype patterns presented in a
training file".
I haven't had the time to type it into my PC; maybe someone else has, and
will kindly post it to this newsgroup?
 
Hope this is some new information to you.
 
-- Thomas
-- 
! Thomas Muhr    Knowledge-Based Systems Dept. Technical University of Berlin !
! BITNET/EARN:	 muhrth@db0tui11.bitnet                                       !
! UUCP:          morus@netmbx.UUCP (Please don't use from outside Germany)    !
! BTX:           030874162  Tel.: (Germany 0049) (Berlin 030) 87 41 62        !


From c10sd3!ncrcce!ncrlnk!ncrcae!hubcap!johnsun Wed Apr  6 14:21:23 CDT 1988
Article 49 of comp.ai.neural-nets:
Path: c10sd1!c10sd3!ncrcce!ncrlnk!ncrcae!hubcap!johnsun
>From: johnsun@hubcap.UUCP (John K Sun)
Newsgroups: comp.ai.neural-nets
Subject: Re: request for net software
Summary: Back propagation.
Message-ID: <1298@hubcap.UUCP>
Date: 4 Apr 88 18:51:22 GMT
References: <346@c10sd1.StPaul.NCR.COM> <6628@ames.arpa> <1627@netmbx.UUCP>
Organization: Clemson University, Clemson, SC
Lines: 619

In article <1627@netmbx.UUCP>, morus@netmbx.UUCP (Thomas M.) writes:
> [...]
> a neural network program named bpsim was posted by Richard Caasi (Oct. 87).
> It was written in C and demonstrated a small example of the back-propagation
> algorithm. [...]
> I grabbed a copy of this program, but I wasn't very lucky in compiling it
> with Turbo C. [...]

I have the source to the back propagation program.

Unfortunately, it only runs on Unix machines (32-bit words).

I compiled the program under Turbo C and Microsoft C 4.0 and it didn't
work...  The program hung...  (That was on an IBM XT compatible.)

Sometimes the program worked fine (on an AT).  I suspect the floating
point emulation libraries for the PCs are not designed very well.

The program always worked fine under Sun 3/50s, 3/180s and 3/280s (SunOS
3.4) and a DEC VAX 11/780 under Ultrix V2.2.

I also have a simulation of Hopfield nets that runs on the Sun 3s.
If anybody is interested, please email to me.

John K. Sun

johnsun@hubcap.clemson.edu
johnksun@192.5.219.80

Here is the program:
/* Remove anything above this line ----------------------------*/

/*
 * title:	bpsim.c
 * author:	Josiah C. Hoskins
 * date:	June 1987
 *
 * purpose:	backpropagation learning rule neural net simulator
 *		for the tabula rasa Little Red Riding Hood example
 *
 * description: Bpsim provides an implementation of a neural network
 *		containing a single hidden layer which uses the
 *		generalized backpropagation delta rule for learning.
 *		A simple user interface is supplied for experimenting
 *		with a neural network solution to the Little Red Riding
 *		Hood example described in the text.
 *
 *		In addition, bpsim contains some useful building blocks
 *		for further experimentation with single layer neural
 *		networks. The data structure which describes the general
 *		processing unit allows one to easily investigate different
 *		activation (output) and/or error functions. The utility
 *		function create_link can be used to create links between
 *		any two units by supplying your own create_in_out_links
 *		function. The flexibility of creating units and links
 *		to your specifications allows one to modify the code
 *		to tune the network architecture to problems of interest.
 *
 *		There are some parameters that perhaps need some
 *		explanation. You will notice that the target values are
 *		either 0.1 or 0.9 (corresponding to the binary values
 *		0 or 1). With the sigmoidal function used in out_f the
 *		weights become very large if 0 and 1 are used as targets.
 *		The ON_TOLERANCE value is used as a criteria for an output
 *		value to be considered "on", i.e., close enough to the
 *		target of 0.9 to be considered 1. The learning_rate and
 *		momentum variables may be changed to vary the rate of
 *		learning, however, in general they each should be less
 *		than 1.0.
 *
 *		Bpsim has been compiled using CI-C86 version 2.30 on an
 *		IBM-PC and the Sun C compiler on a Sun 3/160.
 *
 *		Note to compile and link on U*IX machines use:
 *			cc -o bpsim bpsim.c -lm
 *
 *		For other machines remember to link in the math library.
 *
 * status:	This program may be freely used, modified, and distributed
 *		except for commercial purposes.
 *
 * Copyright (c) 1987	Josiah C. Hoskins
 */

#include <math.h>
#include <stdio.h>
#include <ctype.h>

#ifndef BUFSIZ			/* stdio.h normally defines BUFSIZ already */
#define BUFSIZ		512
#endif

#define FALSE		0
#define TRUE		!FALSE
#define NUM_IN		6	/* number of input units */
#define NUM_HID 	3	/* number of hidden units */
#define NUM_OUT 	7	/* number of output units */
#define TOTAL		(NUM_IN + NUM_HID + NUM_OUT)
#define BIAS_UID	(TOTAL) /* threshold unit */

/* macros to provide indexes for processing units */
#define IN_UID(X)	(X)
#define HID_UID(X)	(NUM_IN + X)
#define OUT_UID(X)	(NUM_IN + NUM_HID + X)
#define TARGET_INDEX(X) (X - (NUM_IN + NUM_HID))

#define WOLF_PATTERN	0
#define GRANDMA_PATTERN 1
#define WOODCUT_PATTERN 2
#define PATTERNS	3	/* number of input patterns */
#define ERROR_TOLERANCE 0.01
#define ON_TOLERANCE	0.8	/* a unit's output is on if > ON_TOLERENCE */
#define NOTIFY		10	/* iterations per dot notification */
#define DEFAULT_ITER	250

struct unit {			/* general processing unit */
  int	 uid;			/* integer uniquely identifying each unit */
  char	 *label;
  double output;		/* activation level */
  double (*unit_out_f)();	/* note output fcn == activation fcn*/
  double delta; 		/* delta for unit */
  double (*unit_delta_f)();	/* ptr to function to calc delta */
  struct link *inlinks; 	/* for propagation */
  struct link *outlinks;	/* for back propagation */
} *pu[TOTAL+1]; 		/* one extra for the bias unit */

struct link {			/* link between two processing units */
  char	 *label;
  double weight;		/* connection or link weight */
  double data;			/* used to hold the change in weights */
  int	 from_unit;		/* uid of from unit */
  int	 to_unit;		/* uid of to unit */
  struct link *next_inlink;
  struct link *next_outlink;
};

int	iterations = DEFAULT_ITER;
double	learning_rate = 0.2;
double	momentum = 0.9;
double	pattern_err[PATTERNS];

/*
 * Input Patterns
 * {Big Ears, Big Eyes, Big Teeth, Kindly, Wrinkled, Handsome}
 *   unit 0    unit 1	  unit 2   unit 3   unit 4    unit 5
 */
double	input_pat[PATTERNS+1][NUM_IN] = {
  {1.0, 1.0, 1.0, 0.0, 0.0, 0.0},	/* Wolf */
  {0.0, 1.0, 0.0, 1.0, 1.0, 0.0},	/* Grandma */
  {1.0, 0.0, 0.0, 1.0, 0.0, 1.0},	/* Woodcutter */
  {0.0, 0.0, 0.0, 0.0, 0.0, 0.0},	/* Used for Recognize Mode */
};

/*
 * Target Patterns
 * {Scream, Run Away, Look for Woodcutter, Approach, Kiss on Cheek,
 *	Offer Food, Flirt with}
 */
double	target_pat[PATTERNS][NUM_OUT] = {
  {0.9, 0.9, 0.9, 0.1, 0.1, 0.1, 0.1},	/* response to Wolf */
  {0.1, 0.1, 0.1, 0.9, 0.9, 0.9, 0.1},	/* response to Grandma */
  {0.1, 0.1, 0.1, 0.9, 0.1, 0.9, 0.9},	/* response to Woodcutter */
};

/*
 * function declarations
 */
void	print_header();
char	get_command();
char	*malloc();		/* declare malloc for pre-ANSI compilers */
double	out_f(), delta_f_out(), delta_f_hid(), random(), pattern_error();


main()
{
  char	 ch;
  extern struct unit *pu[];

  print_header();
  create_processing_units(pu);
  create_in_out_links(pu);
  for (;;) {
    ch = get_command("\nEnter Command (Learn, Recognize, Quit) => ");
    switch (ch) {
    case 'l':
    case 'L':
      printf("\n\tLEARN MODE\n\n");
      learn(pu);
      break;
    case 'r':
    case 'R':
      printf("\n\tRECOGNIZE MODE\n\n");
      recognize(pu);
      break;
    case 'q':
    case 'Q':
      exit(1);
      break;
    default:
      fprintf(stderr, "Invalid Command\n");
      break;
    }
  }
}


void
print_header()
{
  printf("%s%s%s",
	 "\n\tBPSIM -- Back Propagation Learning Rule Neural Net Simulator\n",
	 "\t\t for the tabula rasa Little Red Riding Hood example.\n\n",
	 "\t\t Written by Josiah C. Hoskins\n");
}


/*
 * create input, hidden, output units (and threshold or bias unit)
 */
create_processing_units(pu)
struct	unit *pu[];
{
  int	id;			/* processing unit index */
  struct unit *create_unit();

  for (id = IN_UID(0); id < IN_UID(NUM_IN); id++)
    pu[id] = create_unit(id, "input", 0.0, NULL, 0.0, NULL);
  for (id = HID_UID(0); id < HID_UID(NUM_HID); id++)
    pu[id] = create_unit(id, "hidden", 0.0, out_f, 0.0, delta_f_hid);
  for (id = OUT_UID(0); id < OUT_UID(NUM_OUT); id++)
    pu[id] = create_unit(id, "output", 0.0, out_f, 0.0, delta_f_out);
  pu[BIAS_UID] = create_unit(BIAS_UID, "bias", 1.0, NULL, 0.0, NULL);
}


/*
 * create links - fully connected for each layer
 *		  note: the bias unit has one link to ea hid and out unit
 */
create_in_out_links(pu)
struct	unit *pu[];
{
  int	i, j;		/* i == to and j == from unit id's */
  struct link *create_link();

  /* fully connected units */
  for (i = HID_UID(0); i < HID_UID(NUM_HID); i++) { /* links to hidden */
    pu[BIAS_UID]->outlinks =
      pu[i]->inlinks = create_link(pu[i]->inlinks, i,
				   pu[BIAS_UID]->outlinks, BIAS_UID,
				   (char *)NULL,
				   random(), 0.0);
    for (j = IN_UID(0); j < IN_UID(NUM_IN); j++) /* from input units */
      pu[j]->outlinks =
	pu[i]->inlinks = create_link(pu[i]->inlinks, i, pu[j]->outlinks, j,
				     (char *)NULL, random(), 0.0);
  }
  for (i = OUT_UID(0); i < OUT_UID(NUM_OUT); i++) {	/* links to output */
    pu[BIAS_UID]->outlinks =
	    pu[i]->inlinks = create_link(pu[i]->inlinks, i,
					 pu[BIAS_UID]->outlinks, BIAS_UID,
					 (char *)NULL, random(), 0.0);
    for (j = HID_UID(0); j < HID_UID(NUM_HID); j++) /* from hidden units */
      pu[j]->outlinks =
	pu[i]->inlinks = create_link(pu[i]->inlinks, i, pu[j]->outlinks, j,
				     (char *)NULL, random(), 0.0);
  }
}


/*
 * return a random number bet 0.0 and 1.0
 */
double
random()
{
  return((rand() % 32767) / 32767.0);
}


/*
 * the next two functions are general utility functions to create units
 * and create links
 */
struct unit *
create_unit(uid, label, output, out_f, delta, delta_f)
int  uid;
char *label;
double	 output, delta;
double	 (*out_f)(), (*delta_f)();
{
  struct unit  *unitptr;

  if (!(unitptr = (struct unit *)malloc(sizeof(struct unit)))) {
    fprintf(stderr, "create_unit: not enough memory\n");
    exit(1);
  }
  /* initialize unit data */
  unitptr->uid = uid;
  unitptr->label = label;
  unitptr->output = output;
  unitptr->unit_out_f = out_f;	/* ptr to output fcn */
  unitptr->delta = delta;
  unitptr->unit_delta_f = delta_f;
  return (unitptr);
}


struct link *
create_link(start_inlist, to_uid, start_outlist, from_uid, label, wt, data)
struct	link *start_inlist, *start_outlist;
int	to_uid, from_uid;
char *	label;
double	wt, data;
{
  struct link  *linkptr;

  if (!(linkptr = (struct link *)malloc(sizeof(struct link)))) {
    fprintf(stderr, "create_link: not enough memory\n");
    exit(1);
  }
  /* initialize link data */
  linkptr->label = label;
  linkptr->from_unit = from_uid;
  linkptr->to_unit = to_uid;
  linkptr->weight = wt;
  linkptr->data = data;
  linkptr->next_inlink = start_inlist;
  linkptr->next_outlink = start_outlist;
  return(linkptr);
}


char
get_command(s)
char	*s;
{
  char	command[BUFSIZ];

  fputs(s, stdout);
  fflush(stdin); fflush(stdout);
  (void)fgets(command, BUFSIZ, stdin);
  return((command[0])); 	/* return 1st letter of command */
}


learn(pu)
struct unit *pu[];
{
  register i, temp;
  char	 tempstr[BUFSIZ];
  extern int	iterations;
  extern double learning_rate, momentum;
  static char prompt[] = "Enter # iterations (default is 250) => ";
  static char quote1[] = "Perhaps, Little Red Riding Hood ";
  static char quote2[] = "should do more learning.\n";

  printf(prompt);
  fflush(stdin); fflush(stdout);
  fgets(tempstr, BUFSIZ, stdin);	/* fgets is safer than gets */
  if ((temp = atoi(tempstr)) != 0)
    iterations = temp;

  printf("\nLearning ");
  for (i = 0; i < iterations; i++) {
    if ((i % NOTIFY) == 0) {
      printf(".");
      fflush(stdout);
    }
    bp_learn(pu, (i >= iterations - PATTERNS)); /* save error on final pass */
  }
  printf(" Done\n\n");
  printf("Error for Wolf pattern = \t%lf\n", pattern_err[0]);
  printf("Error for Grandma pattern = \t%lf\n", pattern_err[1]);
  printf("Error for Woodcutter pattern = \t%lf\n", pattern_err[2]);
  if (pattern_err[WOLF_PATTERN] > ERROR_TOLERANCE) {
    printf("\nI don't know the Wolf very well.\n%s%s", quote1, quote2);
  } else if (pattern_err[GRANDMA_PATTERN] > ERROR_TOLERANCE) {
    printf("\nI don't know Grandma very well.\n%s%s", quote1, quote2);
  } else if (pattern_err[WOODCUT_PATTERN] > ERROR_TOLERANCE) {
    printf("\nI don't know Mr. Woodcutter very well.\n%s%s", quote1, quote2);
  } else {
    printf("\nI feel pretty smart, now.\n");
  }
}


/*
 * back propagation learning
 */
bp_learn(pu, save_error)
struct unit *pu[];
int    save_error;
{
  static int count = 0;
  static int pattern = 0;
  extern double pattern_err[PATTERNS];

  init_input_units(pu, pattern); /* initialize input pattern to learn */
  propagate(pu);		 /* calc outputs to check versus targets */
  if (save_error)
    pattern_err[pattern] = pattern_error(pattern, pu);
  bp_adjust_weights(pattern, pu);
  if (pattern < PATTERNS - 1)
    pattern++;
  else
      pattern = 0;
  count++;
}


/*
 * initialize the input units with a specific input pattern to learn
 */
init_input_units(pu, pattern)
struct unit *pu[];
int    pattern;
{
  int	id;

  for (id = IN_UID(0); id < IN_UID(NUM_IN); id++)
    pu[id]->output = input_pat[pattern][id];
}


/*
 * calculate the activation level of each unit
 */
propagate(pu)
struct unit *pu[];
{
  int	id;

  for (id = HID_UID(0); id < HID_UID(NUM_HID); id++)
    (*(pu[id]->unit_out_f))(pu[id], pu);
  for (id = OUT_UID(0); id < OUT_UID(NUM_OUT); id++)
    (*(pu[id]->unit_out_f))(pu[id], pu);
}


/*
 * function to calculate the activation or output of units
 */
double
out_f(pu_ptr, pu)
struct unit *pu_ptr, *pu[];
{
  double sum = 0.0, exp();
  struct link *tmp_ptr;

  tmp_ptr = pu_ptr->inlinks;
  while (tmp_ptr) {
    /* sum up (outputs from inlinks times weights on the inlinks) */
    sum += pu[tmp_ptr->from_unit]->output * tmp_ptr->weight;
    tmp_ptr = tmp_ptr->next_inlink;
  }
  pu_ptr->output = 1.0/(1.0 + exp(-sum));
  return (pu_ptr->output);
}


/*
 * half of the sum of the squares of the errors of the
 * output versus target values
 */
double
pattern_error(pat_num, pu)
int	pat_num;	/* pattern number */
struct	unit *pu[];
{
  int		i;
  double	temp, sum = 0.0;

  for (i = OUT_UID(0); i < OUT_UID(NUM_OUT); i++) {
    temp = target_pat[pat_num][TARGET_INDEX(i)] - pu[i]->output;
    sum += temp * temp;
  }
  return (sum/2.0);
}


bp_adjust_weights(pat_num, pu)
int	pat_num;	/* pattern number */
struct	unit *pu[];
{
  int		i;		/* processing units id */
  double	temp1, temp2, delta, error_sum;
  struct link	*inlink_ptr, *outlink_ptr;

  /* calc deltas */
  for (i = OUT_UID(0); i < OUT_UID(NUM_OUT); i++) /* for each output unit */
    (*(pu[i]->unit_delta_f))(pu, i, pat_num); /* calc delta */
  for (i = HID_UID(0); i < HID_UID(NUM_HID); i++) /* for each hidden unit */
    (*(pu[i]->unit_delta_f))(pu, i);	  /* calc delta */
  /* calculate weights */
  for (i = OUT_UID(0); i < OUT_UID(NUM_OUT); i++) {	/* for output units */
    inlink_ptr = pu[i]->inlinks;
    while (inlink_ptr) {	/* for each inlink to output unit */
      temp1 = learning_rate * pu[i]->delta *
	pu[inlink_ptr->from_unit]->output;
      temp2 = momentum * inlink_ptr->data;
      inlink_ptr->data = temp1 + temp2; /* new delta weight */
      inlink_ptr->weight += inlink_ptr->data;	/* new weight */
      inlink_ptr = inlink_ptr->next_inlink;
    }
  }
  for (i = HID_UID(0); i < HID_UID(NUM_HID); i++) { /* for ea hid unit */
    inlink_ptr = pu[i]->inlinks;
    while (inlink_ptr) {	/* for each inlink to output unit */
      temp1 = learning_rate * pu[i]->delta *
	pu[inlink_ptr->from_unit]->output;
      temp2 = momentum * inlink_ptr->data;
      inlink_ptr->data = temp1 + temp2; /* new delta weight */
      inlink_ptr->weight += inlink_ptr->data;	/* new weight */
	inlink_ptr = inlink_ptr->next_inlink;
    }
  }
}


/*
 * calculate the delta for an output unit
 */
double
delta_f_out(pu, uid, pat_num)
struct unit *pu[];
int    uid, pat_num;
{
  double	temp1, temp2, delta;

  /* calc deltas */
  temp1 = (target_pat[pat_num][TARGET_INDEX(uid)] - pu[uid]->output);
  temp2 = (1.0 - pu[uid]->output);
  delta = temp1 * pu[uid]->output * temp2; /* calc delta */
  pu[uid]->delta = delta; /* store delta to pass on */
  return (delta);
}


/*
 * calculate the delta for a hidden unit
 */
double
delta_f_hid(pu, uid)
struct unit *pu[];
int    uid;
{
  double	temp1, temp2, delta, error_sum;
  struct link	*inlink_ptr, *outlink_ptr;

  outlink_ptr = pu[uid]->outlinks;
  error_sum = 0.0;
  while (outlink_ptr) {
    error_sum += pu[outlink_ptr->to_unit]->delta * outlink_ptr->weight;
    outlink_ptr = outlink_ptr->next_outlink;
  }
  delta = pu[uid]->output * (1.0 - pu[uid]->output) * error_sum;
  pu[uid]->delta = delta;
  return (delta);
}


recognize(pu)
struct unit *pu[];
{
  int	 i;
  char	 tempstr[BUFSIZ];
  static char *p[] = {"Big Ears?", "Big Eyes?", "Big Teeth?",
		      "Kindly?\t", "Wrinkled?", "Handsome?"};

  for (i = 0; i < NUM_IN; i++) {
    printf("%s\t(y/n) ", p[i]);
    fflush(stdin); fflush(stdout);
    fgets(tempstr, BUFSIZ, stdin);
    if (tempstr[0] == 'Y' || tempstr[0] == 'y')
      input_pat[PATTERNS][i] = 1.0;
    else
      input_pat[PATTERNS][i] = 0.0;
  }
  init_input_units(pu, PATTERNS);
  propagate(pu);
  print_behaviour(pu);
}


print_behaviour(pu)
struct unit *pu[];
{
  int	id, count = 0;
  static char *behaviour[] = {
    "Screams", "Runs Away", "Looks for Woodcutter", "Approaches",
    "Kisses on Cheek", "Offers Food", "Flirts with Woodcutter" };

  printf("\nLittle Red Riding Hood: \n");
  for (id = OUT_UID(0); id < OUT_UID(NUM_OUT); id++){ /* links to out units */
    if (pu[id]->output > ON_TOLERANCE)
      printf("\t%s\n", behaviour[count]);
    count++;
  }
  printf("\n");
}


From c10sd3!ncrcce!ncrlnk!ncrcae!hubcap!johnsun Wed Apr  6 14:22:00 CDT 1988
Article 50 of comp.ai.neural-nets:
Path: c10sd1!c10sd3!ncrcce!ncrlnk!ncrcae!hubcap!johnsun
>From: johnsun@hubcap.UUCP (John K Sun)
Newsgroups: comp.ai.neural-nets
Subject: Hopfield Neural Network Simulator (shelving problem) Part 1/3
Keywords: Hopfield NN, Sun 3
Message-ID: <1311@hubcap.UUCP>
Date: 6 Apr 88 01:18:42 GMT
Organization: Clemson University, Clemson, SC
Lines: 549

/*

   Since many people are interested in my simulation,
   I have posted it here.

*/

/*
   Neural Network implementation solving optimization problems using
   the  generalized Hopfield model. 

   The shelving problem.  (Exploring the Neural Universe)

   Mix-Mode version.

     shelve -     goes into interactive mode with default values.
     shelve       goes into batch mode with default values.

   By John K. Sun, (C) Feb 25, 1988.

status: This program may be freely used and modified, except for
commercial purposes.

*/

#    include <stdio.h>
#    include <time.h>
#    include <math.h>
#    include <curses.h>
#    include <memory.h>

#    include "local.h"
#    include "neural.h"

     Vector X,				/* Coordinates */
            Y;

     Matrix V,				/* Output  voltage */
            u,				/* Initial voltage */
            Du,				/* Change in U     */
            d,				/* distance matrix */
            I,				/* Backup Current  */
            ExI;			/* Current */

     /* The conductance Multidimensional Array */

     Table  T;

     int 
                Stop_Freq  = 50,
                Iterations = 2000;

     double     
                Threshold = 0.0001,
                C = 1.0,
                R = 300.0;		/* Actually Tau = R * C = 300. */

     int    
            N,				/* N neurons per row or column */
            Gain_rate    = 15,
            SAVE_RATE,
            Times_to_run = 20;
 
     double 
            GAIN         = 0.01,	/* Starting variable gain */
            Scale_factor = 0.28,
            Gain_factor  = 1.05;

     double SAVE_GAIN, 
            SAVE_FACTOR;

     /***************************************************************
      * Fatal error exit routine                                    
      *                                                              
      * syntax: error_exit(function_name, format, arg1, arg2, ...); 
      ***************************************************************/
     void error_exit(name, message)
       char *name, *message;
     {
       /* print out the name of the function causing error */
       fprintf(stderr, "ERROR in %s: ", name);

       /* print out remainder of message */
       fprintf(stderr, message);

       exit(-1);
      } /* error exit */


     /********************************
      * Amplifier function generator 
      ********************************/ 
     double g(u) 
       double u;
     {
       return (0.5 * (1.0 + tanh(u * GAIN)));
     } /* Amplifier output V = g(u) */


     /***********************************************************
      * C * dUi/dt = -Ui / R + sum(TijVj) + Ii    
      ***********************************************************/
     double Delta_u(i, j)          
       int i, j;
     {
       double   sum;
       register int    k, l;

       sum = 0.0;
   
       for (k = 1; k <= N; k++)
       for (l = 1; l <= N; l++)
         sum = sum + T [i] [j] [k] [l] * V[k] [l];

       return( - u[i][j] / (R * C) + (sum + ExI[i][j]) / C );

     } /* Delta_u */

     /*********************************
      * Initialize conductance matrix 
      *********************************/
     void Make_T() 
     {
       register int x, i, y, j;

       for (x = 1; x <= N; x ++)
       for (i = 1; i <= N; i ++)
       for (y = 1; y <= N; y ++)
       for (j = 1; j <= N; j ++) 
           T [x][i][y][j] = 0.0;

       for (x = 1; x <= N; x++)
       for (i = 1; i <= N; i++)
       for (y = 1; y <= N; y++)
       for (j = 1; j <= N; j++) 
         if (((i EQUAL j) AND (x UNEQUAL y)) OR
             ((x EQUAL y) AND (i UNEQUAL j)))
           T [x][i][y][j] = -2.0;

     } /* Make_T */
       

     /******************************
      * Initialize Output Voltages 
      ******************************/
     void Init_V()
     {
       register int i, j;

       for (i = 1; i <= N; i ++)
       for (j = 1; j <= N; j ++)
         V[i][j] = g(u[i][j]);
     } /* Init_V */


     /*******************************
      * Initialize initial voltages 
      *******************************/
     void Init_u()
     {
       register int i, j;

       for (i = 1; i <= N; i ++)
       for (j = 1; j <= N; j ++)
         u[i][j] = -100.0;
     } /* Init_u */


     /***********************
      * Initialize Currents 
      ***********************/
     void Init_I()
     {
       register int i, j;

       ExI [1][1] = 10.0;
       ExI [1][2] =  5.0;
       ExI [1][3] =  4.0;
       ExI [1][4] =  6.0;
       ExI [1][5] =  5.0;
       ExI [1][6] =  1.0;
       ExI [2][1] =  6.0;
       ExI [2][2] =  4.0;
       ExI [2][3] =  9.0;
       ExI [2][4] =  7.0;
       ExI [2][5] =  3.0;
       ExI [2][6] =  2.0;
       ExI [3][1] =  1.0;
       ExI [3][2] =  8.0;
       ExI [3][3] =  3.0;
       ExI [3][4] =  6.0;
       ExI [3][5] =  4.0;
       ExI [3][6] =  6.0;
       ExI [4][1] =  5.0;
       ExI [4][2] =  3.0;
       ExI [4][3] =  7.0;
       ExI [4][4] =  2.0;
       ExI [4][5] =  1.0;
       ExI [4][6] =  4.0;
       ExI [5][1] =  3.0;
       ExI [5][2] =  2.0;
       ExI [5][3] =  5.0;
       ExI [5][4] =  6.0;
       ExI [5][5] =  8.0;
       ExI [5][6] =  7.0;
       ExI [6][1] =  7.0;
       ExI [6][2] =  6.0;
       ExI [6][3] =  4.0;
       ExI [6][4] =  1.0;
       ExI [6][5] =  3.0;
       ExI [6][6] =  2.0;
       
       for (i = 1; i <= N; i++)
       for (j = 1; j <= N; j++) {
         I   [i][j]  = ExI [i][j];
         ExI [i][j] *= Scale_factor;
         } /* for */
       
     } /* Init_I */

     /****************/
      BOOLEAN Stable()
     /****************/
     {
       register int i, j;

       for (i = 1; i <= N; i++) 
         for (j = 1; j <= N; j++) 
           if (  (u[i][j] * Du[i][j] < 0.0)
             OR ((fabs(V[i][j] - 1.0) > Threshold)
             AND (fabs(V[i][j]) > Threshold)) ) return(False);

       return(True);

     } /* Stable */

     /*********************/
      double print_answer()
     /*********************/
     {
       double   	Sum;
       register 	int    i, j;

       Sum = 0.0;
       putchar('\n');
       for (j = 1; j <= N; j++)
       for (i = 1; i <= N; i++)
         if (fabs(V[i][j] - 1.0) <= Threshold) {
           Sum += I[i][j];
           printf(" %d", i);
           }

       printf(" Total = %7.3f\n", Sum);
       return(Sum);

     } /* print_answer */

     /*************************************************/
      void change_value(msg_str, row, col, value)
     /*************************************************/
       char   *msg_str;
       int     row, col;
       double *value;
     {
       move(PROMPT_ROW + 1, PROMPT_COL + 25);
       clrtoeol();
       addstr(msg_str);
       refresh();
       scanw("%lf", value);
       mvprintw(row, col + 10, "%7.3f", *value);
       refresh();

     } /* change value */

     /*****************************************/    
      void print_matrix(msg, matrix, row, col)
     /*****************************************/    
       char    *msg;
       Matrix   matrix; 
       int      row, col;
     {
       register int i, j;

       mvaddstr(MAT_ROW, MAT_COL, msg);
       addch('\n');
       for (i = 1; i <= row; i++) {
         for (j = 1; j <= col; j++)    
           printw(" %7.3f ", matrix[i][j]);
         addch('\n');
         } /* for */       
 
       refresh();

     } /* print matrix */
       
     /********************/
      void Restore_Values()
     /********************/
     {
       GAIN        = SAVE_GAIN;
       Gain_factor = SAVE_FACTOR;
       Gain_rate   = SAVE_RATE;

       /* Initialize initial voltage for amplifiers */
       Init_u();

       /* Initialize Output voltage */
       Init_V();

       /* Initialize Currents */
       Init_I();

     } /* restore values */

     /*******************/
      void Save_Values()
     /*******************/
     {
       SAVE_GAIN    = GAIN;
       SAVE_FACTOR  = Gain_factor;
       SAVE_RATE    = Gain_rate;
     } /* Save Values */
       
     /*****************/
      double Energy()
     /*****************/
     {
       register int x, i, y, j;
       double   Sum1, Sum2;

       Sum1 = Sum2 = 0.0;
       for (x = 1; x <= N; x++)
       for (i = 1; i <= N; i++) {
         for (y = 1; y <= N; y++) 
         for (j = 1; j <= N; j++) 
           Sum1 += T [x][i][y][j] * V [x][i] * V [y][j];
         Sum2 += V[x][i] * I[x][i];
       } /* for */

       return( -0.5 * Sum1 - Sum2 );

     } /* System Energy */
 
     /*********************/
      void get_command()
     /*********************/
     {     
       register char c;
       double   temp;
       
       do {
       move(PROMPT_ROW, PROMPT_COL + 1);
       refresh();
       c = getch();
       switch(c) {
         case 'C': case 'c': 
           addstr("\nChange (g, f, r, s, i) :");
           refresh();
           c = getch();
           switch(c) {
             case 'g': case 'G':
             change_value(" Gain = ", GAIN_ROW, GAIN_COL, &SAVE_GAIN);
             break;
        
             case 'f': case 'F':
             change_value(" Factor = ", GAINF_ROW, GAINF_COL, &SAVE_FACTOR);
             break;

             case 'r': case 'R':
             change_value(" Rate = ", GAINR_ROW, GAINR_COL, &temp);
             SAVE_RATE = (int) temp;
             break;

             case 's': case 'S':
             change_value(" Scale = ", SCALEF_ROW, SCALEF_COL, &Scale_factor);
             break;

             case 'i': case 'I':
             change_value(" Stp f = ", FREQ_ROW, FREQ_COL, &temp);
             Stop_Freq = (int) temp;
             break;
	
             default: addstr("Invalid Command!");
             } /* end case */
           refresh();
           break;

         case 'P': case 'p':
           addstr("\nShow (V, U, Du) :");
           clrtoeol();
           refresh();
           c = getch();
           switch(c) {
                 
             case 'u': case 'U':
             print_matrix("+++> Output Us <+++", u, N, N);
             break;

             case 'v': case 'V':
             print_matrix("---> Output Vs <---", V, N, N);
             break;

             case 'd': case 'D':
             print_matrix("---> Output Du <---", Du, N, N);
             break;

             default: addstr("Please try again!");
             } /* end case */
           refresh();
           break;

         case 'g': case 'G': return;

         case 'r': case 'R': Restore_Values(); break;

         case 's': case 'S': Save_Values(); break;
   
         case 'q': case 'Q': endwin(); exit(1);

         default: addstr("Invalid command!");
         } /* end case */
       } while(True);
             
     } /* get command from user */
     /****************/
      main(argc, argv)
     /****************/
       int    argc;
       char **argv;
     {
       long     now;
       double   Answer = 0.0;
       register int    Times, Counter, i, j, rand1, rand2, SqN;
       register int    mode = BATCH;	/* default when no arguments given */
   
       N = 6;				 /* Problem dependent */
       Save_Values();

       if (argc UNEQUAL 1) {

         if (argv[1][0] UNEQUAL INTRACT_SW) 
           mode = BATCH;
         else {
           mode = INTERACTIVE;
           stdscr = initscr();
           clear();
           addstr("Turbo NEURAL NETWORK -- OPTIMIZATION PROBLEMS");
           addstr(" (c) 1988 John Sun ");
           if (argc EQUAL 2) 
             goto nocomline;
           } /* else */

         SAVE_GAIN    = GAIN         = (double) atof(argv[1 + mode]);
         SAVE_FACTOR  = Gain_factor  = (double) atof(argv[2 + mode]);
         SAVE_RATE    = Gain_rate    = (int)    atoi(argv[3 + mode]);
         Times_to_run = (int)    atoi(argv[4 + mode]);
         Scale_factor = (double) atof(argv[5 + mode]);
         } /* if */
        
       nocomline:

       if (mode UNEQUAL BATCH) {
         mvprintw(FREQ_ROW,   FREQ_COL,   "Stop F = %7d",   Stop_Freq);
         mvprintw(ENERGY_ROW, ENERGY_COL, "Energy = %7.3f", Energy());
         mvprintw(GAIN_ROW,   GAIN_COL,   "Gain   = %7.3f", GAIN);
         mvprintw(GAINF_ROW,  GAINF_COL,  "Factor = %7.3f", Gain_factor);
         mvprintw(GAINR_ROW,  GAINR_COL,  "Rate   = %7d",   Gain_rate);
         mvprintw(SCALEF_ROW, SCALEF_COL, "Scale  = %7.3f", Scale_factor);
         mvaddch (PROMPT_ROW, PROMPT_COL, '>');
         mvaddstr(HELP_ROW,   HELP_COL,   "(C)hange, (R)estore, (S)ave, (G)o, (P)rint or (Q)uit");
         refresh();
         } /* if */
       else
         printf("\nGain = %6.3f, factor = %6.3f, rate = %7d, scale = %6.3f\n",
              GAIN, Gain_factor, Gain_rate, Scale_factor);

       if (N > Max) 
         error_exit("main", "Subscript Out of range \n");

       SqN = N * N;

       /* Initialize Connectivity Matrix */
       Make_T();

       Restore_Values();

       if (mode UNEQUAL BATCH) get_command();

       for (Times = 1; Times <= Times_to_run; Times ++) {

         /* Start at a random place. 
            Note: srandom() and random() are available in BSD4.2 or higher 
                  and  Ultrix  only.  For  other Unix systems, use srand()
                  and rand() instead. 
         */
         srandom((int) (time(&now) MOD 37) );

         for (Counter = 1; Counter <= Iterations; Counter ++) {
 
           for (i = 1; i <= SqN; i++) {

             rand1 = 1 + (int) (random() % N);

             rand2 = 1 + (int) (random() % N);

             Du [ rand1 ] [ rand2 ] = Delta_u(rand1, rand2);

             u [ rand1 ] [ rand2 ] += Du [ rand1 ] [ rand2 ];

             V  [ rand1 ] [ rand2 ] = g( u[ rand1 ] [ rand2 ]);
             } /* for */

           if (mode UNEQUAL BATCH) {
             mvprintw(ENERGY_ROW, ENERGY_COL, "Energy = %7.3f", Energy());
             mvprintw(GAINC_ROW,  GAINC_COL,  "New Ga = %7.3f", GAIN);
             print_matrix("---> Output Vs <---", V, N, N);
             } /* if */

           if (Stable()) break;

           if ((Counter MOD Gain_rate) EQUAL 0) GAIN *= Gain_factor;

           if (((Counter MOD Stop_Freq) EQUAL 0) AND
               (mode UNEQUAL BATCH) )
             get_command();     

           } /* for */

         Answer += (double) print_answer();
   
         if (mode UNEQUAL BATCH) 
           get_command();
         else 
           Restore_Values();
   
         } /* for */
       
       if (mode UNEQUAL BATCH) {
         move(END_ROW, END_COL);
         printw("\nRuns = %4d, Average = %7.3f\n", Times_to_run,         
                Answer / (double) Times_to_run);
         refresh();
         endwin();
         } /* if */
       else
         printf("\nRuns = %4d, Average = %7.3f\n", Times_to_run,         
                Answer / (double) Times_to_run);

     } /* main */


From c10sd3!ncrcce!ncrlnk!ncrcae!hubcap!johnsun Wed Apr  6 14:22:19 CDT 1988
Article 51 of comp.ai.neural-nets:
Path: c10sd1!c10sd3!ncrcce!ncrlnk!ncrcae!hubcap!johnsun
From: johnsun@hubcap.UUCP (John K Sun)
Newsgroups: comp.ai.neural-nets
Subject: Hopfield Neural Network Simulator (Shelving) Part 2/3
Keywords: Hopfield NN, Sun 3
Message-ID: <1312@hubcap.UUCP>
Date: 6 Apr 88 01:20:17 GMT
Organization: Clemson University, Clemson, SC
Lines: 81


/****************************************
 * Neural Network Include File
 * static char NeuralSid[] = "@(#)neural.h	1.3 4/3/88";
 ****************************************/ 

#ifndef  NEURAL_H

#define  NEURAL_H

#    define  Max              10
#    define  ReservedElements (Max + 1)   /* Reserve 0, use 1..Max */
#    define  GRAPHICS
#    define  MONOGRAPHICS
#    define  INTERACTIVE      1
#    define  BATCH            0
#    define  INTRACT_SW       '-'
#    define  SUN3GRAPH_SW     '+'  

     typedef
         double Vector [ ReservedElements ];
 
     typedef
         Vector Matrix [ ReservedElements ];
 
     typedef
         Matrix Table  [ ReservedElements ] [ ReservedElements ];

#    define  PROMPT_ROW  3 
#    define  PROMPT_COL  0

#    define  HELP_ROW    11
#    define  HELP_COL    0

#    define  MAT_ROW     4
#    define  MAT_COL     20   

#    define  ANS_ROW     4
#    define  ANS_COL     0

#    define  END_ROW     11
#    define  END_COL     0

#    define  FREQ_ROW    1
#    define  FREQ_COL    1

#    define  GAIN_ROW    2
#    define  GAIN_COL    1

#    define  ENERGY_ROW  1
#    define  ENERGY_COL  20

#    define  GAINC_ROW   1
#    define  GAINC_COL   40

#    define  SCALEF_ROW  2
#    define  SCALEF_COL  60

#    define  GAINR_ROW   2
#    define  GAINR_COL   40

#    define  GAINF_ROW   2
#    define  GAINF_COL   20

/* Color definitions */

#define BLACK           0       /* white on b&w */
#define RED             1       /* black on b&w */
#define GREEN           2       /* black on b&w */
#define BLUE            3       /* black on b&w */

/* Position definitions */

#define ORIGIN_X        500
#define ORIGIN_Y        500
#define OFFSET_X        500
#define OFFSET_Y        500
#define MAX_X           2500
#define MAX_Y           2500

#endif 


From c10sd3!ncrcce!ncrlnk!ncrcae!hubcap!johnsun Wed Apr  6 14:22:40 CDT 1988
Article 52 of comp.ai.neural-nets:
Path: c10sd1!c10sd3!ncrcce!ncrlnk!ncrcae!hubcap!johnsun
From: johnsun@hubcap.UUCP (John K Sun)
Newsgroups: comp.ai.neural-nets
Subject: Hopfield Neural Network Simulator (Shelving) Part 3/3
Keywords: Hopfield NN, Sun 3
Message-ID: <1313@hubcap.UUCP>
Date: 6 Apr 88 01:21:29 GMT
Organization: Clemson University, Clemson, SC
Lines: 33


/*
	static char LocalSid[] = "@(#)local.h	1.2 4/1/88";
*/

#        ifndef  LOCAL_H
#        define  LOCAL_H

#        define  MOD         %
#        define  AND         &&
#        define  OR          ||
#        define  NOT         !
#        define  EQUAL       ==
#        define  UNEQUAL     !=
#        define  NOT_FOUND   (-1)
#        define  ERROR       (-1)
#        define  NULL_CHAR   ('\0')
#        define  PARMS(x)    ()              /* For non ANSI C */
#        define  COL_80      81
#        define  ASCII_TABLE 256
#        define  FOREVER     for(;;)
#        define  End_Of_File 26
#        define  NEW_PAGE    '\014'

         typedef
           enum { True = 1, False = 0 } BOOLEAN;

         typedef
           enum { lt = -1, eq = 0, gt = 1 } RELATION;

#        define SWAP(a, b, t) ((t) = (a), (a) = (b), (b) = (t))

#endif


From c10sd3!ncrcce!ncrlnk!ncrcae!ece-csc!mcnc!uvaarpa!umd5!ames!amdahl!oliveb!felix!dhw68k!feedme!doug Wed Apr 13 10:56:05 CDT 1988
Article 56 of comp.ai.neural-nets:
Path: c10sd1!c10sd3!ncrcce!ncrlnk!ncrcae!ece-csc!mcnc!uvaarpa!umd5!ames!amdahl!oliveb!felix!dhw68k!feedme!doug
From: doug@feedme.UUCP (Doug Salot)
Newsgroups: comp.ai.neural-nets
Subject: Re: request for net software
Summary: _Explorations in PDP_ is shipping
Message-ID: <14@feedme.UUCP>
Date: 1 Apr 88 19:32:00 GMT
References: <346@c10sd1.StPaul.NCR.COM> <6628@ames.arpa>
Organization: Little County of Horrors, Orange County
Lines: 18

> Wayne D. T. Johnson writes:
>                                                    I would be very
>interested if any one out there could direct me to a source of Public
>Domain ... software for UNIX or an IBM PC/Compatible that could be
>useful to a basement experimentor such as I.

While not free, the PDP group's long-awaited "volume 3," _Explorations
in Parallel Distributed Processing: A Handbook of Models, Programs, and
Exercises_ is now shipping from The MIT Press, Cambridge, MA, $27.50.

Contents include tunable programs, source code, and exercises for
interactive activation and competition, constraint satisfaction,
pattern associators, the generalized delta rule, auto-associators
and competitive learning, and cognition modeling.

- Doug
-- 
Doug Salot || doug@feedme.UUCP || {trwrb,hplabs}!felix!dhw68k!feedme!doug


-- 
Wayne Johnson                 (voice) 612-638-7665
NCR Comten, Inc.           (internet) johnson@ncrcce.StPaul.NCR.COM or
Roseville MN 55113                    johnson@c10sd1.StPaul.NCR.COM
The comments stated here do not reflect the policy of NCR Comten.