[comp.ai.neural-nets] Docs for SAN Simulator

tree@sun.soe.clarkson.edu (Tom Emerson) (05/05/88)

\documentstyle[11pt]{article}
\newtheorem{definition}{Definition}
\newcommand{\lisp}{{\sc Lisp}}
\newcommand{\san}{{\sc San}}
\title{The Disambiguation of English Text Through the Use of Spreading
Activation Networks}
\author{Final Report \& Program Documentation\\ PY350 --- Artificial
Intelligence\\
Clarkson University \\ \\ Thomas R. Emerson (40405)}
\date{19 April 1988}

\begin{document}

\begin{titlepage}
\vskip 3in
\LARGE
\begin{center}
The Disambiguation of English Text Through the Use of Spreading
Activation Networks
\end{center}
\normalsize
\vskip 1in
\begin{center}
Final Report \& Program Documentation \\ PY 350 --- Artificial
Intelligence \\ Clarkson University 
\vskip .5in
Thomas R. Emerson (40405)
\vskip .5in
19 April 1988
\end{center}
\end{titlepage}

\newpage
\section{Overview} \label{overview}
One of the central dilemmas in language understanding is the
ambiguity of natural languages.  Humans are able to understand an
infinite number of utterances with little or no difficulty, even though
approximately 32\% of the words in the English language are semantically
ambiguous.  Further, ungrammatical and syntactically ambiguous sentences
can also be comprehended by one's language ``processor'', again with
little difficulty.

Syntactic grammars are not adequate for effective text {\em
understanding}.  With syntactically ambiguous text (sentences such as
``Police help murder victims'' or ``Teenage prostitution problem is
mounting''), these grammars will produce different representations
(derivation trees) for the same sentence.  The correct representation
must then be chosen.  Semantically ambiguous text (e.g., ``Robin walks
to the bank'') will also be ``understood'' (i.e., a derivation tree will
be constructed) by a syntactic grammar, but the ambiguity will not be
resolved without the application of semantic/contextual/world knowledge. 

Several types of grammars, such as that used in Winograd's {\sc Shrdlu},
attempt to reduce semantics to executable procedures through the use
of procedural semantics.  However, even these lack the power to express
the complexity of natural, everyday discourse.  Procedural semantics
allows for the representation of active and state knowledge, i.e., how
to pick something up or the color of an object.  It is ineffective
when one is dealing with concepts that cannot be represented
symbolically or as a procedure (e.g., love, compassion, or hate).

Real world knowledge is vital for effective language understanding. 
Executable procedures are not an adequate source of knowledge, as was
noted above; further, the context of an utterance must be considered
during the understanding process.  For example, the sentence ``John went
to the bar'' can be interpreted in at least three different ways: John is
going to work out in a gym, John is going to have a drink at the local
pub, or John is a lawyer.  Without some sort of contextual information,
this sentence may not be {\em understood} per se.  It will be understood
only in the sense that an entity {\em John} approached ({\em went to}) a
singular object labeled {\em bar}.  That is the extent of the available
information, so the ambiguity of the word ``bar'' remains the main
barrier to understanding (a more in-depth analysis of this sentence is
found in Section~\ref{conceptual}).  How can this ambiguity problem be solved?

The purpose of this project was to produce a model of the human language
disambiguation process.  It was decided to use spreading activation
networks (\san s) as the basis of the model.

\san s allow for the integration of independent syntactic, semantic, and
contextual knowledge into a single knowledge base accessible to the
language processor.  They have been the foundation for many theories in
virtually all areas of cognition, including sensory perception and
recognition, semantic and episodic memory, and skilled action. 

This paper presents the model in detail, as well as a computer program
written as a simulation of the model.

\section{The Model} \label{model}

The model attempts to explain the word-sense disambiguation of simple
English declarative text as the spread of activation through a network
of word and concept relations composed of both excitatory and inhibitory
links. 

The model is based, like Quillian's model of semantic memory, on
the idea that the meaning of a sentence corresponds both to the combined
meanings of the individual words making up the sentence and to the
concepts these words represent.  At first this may seem to
emphasize the surface structure of a sentence.  However, as activation
spreads through the word and concept relationships contained in the
network, a much deeper (more general) representation is formed.  From
this representation, activation can be made to spread further along
different paths to produce varied yet semantically similar utterances,
including paraphrases and translations.  Results from tests run on the
simulation described in Section~\ref{thesimulation} show that a richer
network, such as one supporting case frames, would be needed for
real-world processing.

\subsection{The Network}

The network used in this model is composed of two levels: the {\em word
level} and the {\em concept level}.

The word level contains nodes representing the actual words of the
language.  In the simple model presented in this paper there is a
separate node for each word.  In larger ``life-size'' systems, a more
efficient representation would have to be created in order to reduce the
size.  The nodes in the word level are then connected via
uni-directional excitatory links to a corresponding and identical node
in the concept level. 

The nodes in the word level receive activation from a ``syntactic
analyzer'' --- a front end to the understanding mechanism.  The exact
operation of the syntactic portion of the language understanding process
has not been completely defined, though it is believed that it acts
as a grammatical filter for both the production and the understanding of
text.  It will, in a way, flag a sentence as either being grammatical or
ungrammatical, and in the case of the understanding mechanism, will aid
in activating the correct word on the word level.

The concept level is where the majority of the processing occurs.  Each
node is linked to related words and/or concepts through uni- or
bi-directional excitatory links, and to unrelated (competing) ones
through inhibitory links.  Inhibition provides a mechanism to increase
the difference in activation between two or more dissimilar nodes.  For
example, a node representing the concept {\em animate} would have an
inhibitory link with the node representing {\em inanimate}.  The
resulting inhibition of the {\em inanimate} node would in turn decrease
the activation of those nodes directly linked to it.  This
process is especially important to language understanding. 
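
For concreteness, this fragment of the network could be defined with the
simulator functions documented in Section~\ref{documentation} (a minimal
sketch; the complete network used in the example run appears in
Appendix~\ref{sampleprog}):
\begin{verbatim}
; Define three concept-level nodes
(define_node 'animate)
(define_node 'inanimate)
(define_node 'human)

; human and animate excite each other; animate and inanimate
; inhibit each other, so activating one suppresses the other
(add_link 'animate '<--> 'human)
(add_link 'animate 'O--O 'inanimate)
\end{verbatim}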

Every active node in the network decays by a fixed percentage during
processing.  This prevents the activation values in the network from
growing without bound.

\subsection{A Conceptual Example} \label{conceptual}

In this section a conceptual analysis of the sentence ``John walked to
the bar'' is presented using the model described above. 

The syntactic analyzer of the language processor activates the
appropriate word-nodes on the word (syntactic) level of the network. 
Activation then spreads down from these nodes into the concept
(semantic) level where the actual understanding process takes place. 

When the language processor encounters {\em John}, a number of words and
concepts become activated, such as the facts that John is an animate,
male human.  Further, any more specific information, such as a particular
instance of the ``John'' concept, receives activation.  The word ``walked''
activates a change in position through the use of one's legs, generally
at a slow rate of speed.  On a more syntactic level, one would generally
expect to find a destination for the walking.  I say ``generally'' because
the sentence ``John walked'' is both grammatical and semantically correct. 
``To'' adds to the activation of a destination for the {\em
walking} (walked) concept.  ``The'' has more syntactic than semantic
meaning, but stresses that the prepositional phrase begun with ``to''
refers to a singular object.  Finally, the word ``bar'' activates a range
of possible meanings, all of which are perfectly feasible for the
sentence.  The first is the bar which serves alcoholic beverages.  The
second refers to a metal or wooden bar found at a gymnasium.  Of course,
this in turn starts a new burst of activation for the various types of
bars found in a gym, but for simplicity this will be omitted.  Finally,
one can assume that John is a law student and is going to take the Bar
examination so that he may become a practicing lawyer, or that he is
going to the Bar because he needs some information, or that he did
something which violates the regulations of the Bar.

\subsection{Formal Definitions}
\begin{table}
\centering
\begin{tabular}{ll} \hline
$\rightarrow$                    & Excitatory link                \\
$\not \rightarrow$               & Inhibitory link                \\
$\delta$                         & Decay constant                 \\
$h$				 & Inhibition constant		  \\
$\alpha$                         & Activation constant            \\
$t$                              & The number of time steps       \\
$\sum_i$                         & Sum over all input nodes $i$ \\
$c^*$				 & Amount of source activation \\
$c^*_i$                          & Amount of source activation for\\
                                 & node $i$                       \\
$c^*_i(t)$                       & Amount of source activation for\\
                                 & node $i$ at time step $t$     \\
$a_i$                            & Level of activation of node $i$  \\
$a_i(t)$                         & Level of activation of node $i$ \\
                                 & at time step $t$               \\
$n_i(t)$                         & Total input activation to node $i$ \\
\hline
\end{tabular}
\caption{Notation Glossary}\label{Glossary}
\end{table}

{\bf Activation}.  The spread of activation through a network is ``a
parallel mechanism for spreading measures of associative relevance over
[a] network of knowledge''.\footnote{John Anderson, {\em The
Architecture of Cognition} (Cambridge: Harvard UP, 1983) 87.}

\begin{definition}[Activation]
For each node $j$ in the network, the total input activation at time
step $t$ is
\begin{equation}
n_j(t) = c^*_j(t) + \sum_i k_i a_i(t)   \label{eq:activation}
\end{equation}
where the sum ranges over all nodes $i$ linked to $j$, $k_i = \alpha$ if
$i \rightarrow j$, $k_i = h$ if $i \not\rightarrow j$, and $c^*_j(t) = c^*$
if $j$ is a source node and $t = 1$, and 0 otherwise.
\end{definition}

{\bf Decay}.  Decay represents the entropy of the activation values in
the network.  This prevents the network from ``heat death'' due to
exceedingly high activation throughout the network.

\begin{definition}[Decay]
For each node $i$ at time step $t$, the activation value $a_i(t)$ decays by a
constant fraction $\delta$, $0 \leq \delta <  1$, such that 
\begin{equation}
\Delta a_i(t) = -\delta a_i(t) \label{eq:decay}
\end{equation}
\end{definition}

{\bf Source Activation}.  The source activation, $c^*$, represents the
external input into a specific node in the network.  More than one node
can receive $c^*$ simultaneously.  $c^*$ is added to $n_i$ at $t=1$ only,
i.e., $c^*_i(t) = 0$ for $t > 1$.

{\bf Input Activation}.  The total input activation into a node $i$,
$n_i(t)$, represents the value to be added to the node's previous
activation, such that
\begin{equation}
a_i(t) = a_i(t-1) + n_i(t) \label{eq:totala}
\end{equation} 
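
To make the definitions concrete, here is a small worked example with
hypothetical values, using the default constants of the simulation
described in Section~\ref{thesimulation} ($\alpha = 0.25$, $h = -0.40$,
$\delta = 0.15$).  Suppose node $j$ is not a source node, receives
excitation from a node $i$ with $a_i(t) = 0.8$, receives inhibition from
a node $k$ with $a_k(t) = 0.2$, and had $a_j(t-1) = 0.3$.  Then
\begin{displaymath}
n_j(t) = \alpha a_i(t) + h a_k(t) = (0.25)(0.8) + (-0.40)(0.2) = 0.12
\end{displaymath}
so by Equation~\ref{eq:totala} $a_j(t) = 0.3 + 0.12 = 0.42$, and after
decay (Equation~\ref{eq:decay}) the activation becomes
$0.42 - (0.15)(0.42) = 0.357$.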

\section{The Simulation} \label{thesimulation}

A computer simulation of the \san\ described above has been implemented
in Common \lisp.

\subsection{Simulation Overview}

Each node in the network is represented by four properties stored on the
property list of the node's name (all nodes have a distinct name
which {\bf must} be a \lisp\ atom).  These properties are the node's
current activation value, a list containing the node's excitatory
connections, a list containing its inhibitory connections, and a
temporary value which holds the total input ($n_i(t)$) to the node during
each time step.  The network as a whole is represented as a (global)
list of all defined nodes. 
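
For example, a node named {\tt star} is stored entirely on the property
list of the atom {\tt STAR}; the sketch below shows the four properties
maintained by the simulator (the values shown are illustrative only):
\begin{verbatim}
(get 'star 'activation)  ; current activation value, e.g. 0.476
(get 'star 'alinks)      ; excitatory links, e.g. (ASTRAL_BODY ACTOR
                         ;                         GEOMETRIC_FIGURE)
(get 'star 'ilinks)      ; inhibitory links, NIL if none
(get 'star 'input)       ; total input n_i(t), reset to 0 each time step
\end{verbatim}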

The values of $\delta$, $\alpha$, and $h$ are initialized to 0.15,
0.25, and -0.40 respectively.  These can be modified by the user in order
to vary the spread of activation.
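
The three constants are held in the global variables {\tt *DELTA*},
{\tt *ALPHA*}, and {\tt *H*}, so they can be changed at the interpreter
prompt with {\tt setq}, as is done at the end of the sample network in
Appendix~\ref{sampleprog}:
\begin{verbatim}
; Adjust the spread of activation before calling (start)
(setq *DELTA* 0.15)    ; decay constant
(setq *ALPHA* 0.20)    ; activation constant
(setq *H* -0.45)       ; inhibition constant
\end{verbatim}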

\label{activate} Activation spreads beginning with the first node in the
network list (i.e., the first node defined).  If the activation value of
the node is not zero, then it activates all of the nodes connected to it
according to Equation~\ref{eq:activation}.  The activation received by a
node is summed ($n_i(t)$) before it is added to $a_i$ to give the new
activation for that time step.  For example, suppose {\bf foo} receives an
excitatory input of 0.8 and an inhibitory input of $-0.2$; the total
input to {\bf foo} is therefore 0.6.  If {\bf foo}'s activation at the
end of the previous time step was 0.3, then at the end of this time step
the total input is added to the current value, resulting in an activation
value of 0.9 (by Equation~\ref{eq:totala}). 
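
This {\bf foo} example can be traced directly with the low-level macros
and functions described in Section~\ref{documentation} (a sketch,
assuming a fresh session in which no other nodes have been defined):
\begin{verbatim}
(define_node 'foo)     ; foo starts with an activation of 0
(prime 'foo 0.3)       ; activation left over from the previous time step
(receive 'foo 0.8)     ; excitatory input arriving this time step
(receive 'foo -0.2)    ; inhibitory input arriving this time step
                       ; the accumulated input is now 0.8 - 0.2 = 0.6
(update)               ; activation becomes 0.3 + 0.6 = 0.9, input reset to 0
\end{verbatim}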

At the end of every time step the activation value of each node is
displayed, provided $a_i(t) > 0$.

A number of display functions have been programmed in order to allow the
user to view the network or individual nodes in numerous ways. 
Functions also exist allowing the user to perform simple maintenance on
the network.  These functions are described in
Section~\ref{displaydocs}.

\subsection{Theory-Programming Conflicts}

Several theory-programming conflicts have been resolved in this version
of the simulation.  The first relates to the application of $c^*$ to a
source node.  Currently $c^*$ is added to $n_i$ at $t=1$, and never
again (i.e., $c^*(t)=0$ for $t>1$).  It was suggested that the external input
to a node should be constant throughout the spread of activation
($c^*(t) = c^*, t \geq 1$).  The former method was used because it was
felt that it presented a more realistic application of source energy. 
When one reads a text, the words are scanned and activated on the word
level by the syntactic analyzer.  They are never activated again, unless
they are re-read. 

The present priming mechanism allows the user to add a constant to the
activation value of a node.  One other mechanism was under consideration
for implementation.  This would add a constant to the node and
spread activation one level from that node, thereby priming those nodes
immediately related to the primed unit. 
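
Although this second mechanism was never implemented, it could be
sketched in terms of the existing primitives; the function below is
hypothetical and is not part of the simulator:
\begin{verbatim}
; Hypothetical one-level priming: boost a node, then let only its
; immediate neighbours receive the resulting activation
(defun prime_and_spread (node value)
  (prime node value)   ; add value to the primed node's activation
  (spread node)        ; send alpha*a / h*a to its direct connections
  (update))            ; fold the received input into their activations
\end{verbatim}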

The algorithm for the spread of activation has been changed three times.
 The first algorithm would store those nodes that had just been
activated, and only activate those nodes at the next time step. 
Further, there was no activation constant --- a node acquired the entire
activation value of its source.  This quickly led to heat death of the
network even with decay set as high as 25\%. 

The second algorithm spread activation from every node at every time
step, providing that the node's activation value was not zero.  This led
to some difficulty.  Consider a node {\bf foo} that activates {\bf bar}
(which had an activation value of zero at the beginning of the step)
with an excitatory connection.  {\bf Bar} should not be allowed to
spread its activation during that time step.  Unfortunately, this
algorithm would spread activation from {\bf bar}.  This problem was
solved with the third algorithm, which is described above.

\subsection{An Example Run of the Simulation}

This section presents an example run of the simulation disambiguating
the sentence ``the astronomer married a star''.  The obvious source of
ambiguity is the meaning of the word {\em star}.  There are three
possible interpretations of the word according to the network (shown in
Figure~\ref{bignet}):
\begin{figure}
\centering

\small
\begin{picture}(360,216)

%%% Set up the excitatory relation between astronomer and astronomy
\put(30,100){astronomer}
\put(32,70){astronomy}
\put(53,88){\vector(0,-1){10}}
\put(53,88){\vector(0,1){10}}
%%% Set up the excitatory relation between astronomer and astral_body
\put(95,72){\vector(-1,0){15}}
\put(95,72){\vector(1,0){22}}
\put(120,70){astral\_body}
%%% Set up the excitatory relation between astral_body and star
\put(150,80){\vector(3,1){49}}
\put(200,100){star}
\put(215,95){\vector(3,-1){49}}
\put(225,70){geometric\_figure}
%%% Set up the inhibitory links between astral_body/geometric_figure and actor
\put(150,67){\line(3,-1){45}}
\put(260,67){\line(-3,-1){45}}
\put(196,43){actor}
%%% Set up the excitatory link between star and actor
\put(207,77){\vector(0,1){20}}
\put(207,77){\vector(0,-1){24}}
%%% Draw the inhibitory link between astral_body and geometric_figure
\put(172,72){\line(1,0){50}}
%%% Display the text for the inanimate concept and the links to/from it
\put(187,13){inanimate}
\put(285,67){\vector(-1,-1){50}}
\put(130,67){\vector(1,-1){50}}
%%% Display the animate concept and the link to/from it
\put(90,13){animate}
\put(130,15){\line(1,0){50}}
\put(92,45){human}
\put(105,30){\vector(0,-1){9}}
\put(105,30){\vector(0,1){12}}
%%% Display the excitatory link between astronomer and human
\put(82,102){\vector(1,-3){17}}
%%% Display the marry concept and it's link to human
\put(0,45){marry}
\put(80,47){\vector(-1,0){50}}
\put(80,47){\vector(1,0){10}}
\put(140,47){\vector(-1,0){18}}
\put(140,47){\vector(1,0){52}}
%%% Put a dashed line above the entire network
\multiput(0,120)(9,0){34}{\line(1,0){4.5}}
%%% Add the word level nodes and their links
\put(30,180){astronomer}
\put(53,178){\vector(0,-1){68}}
\put(200,180){star}
\put(207,178){\vector(0,-1){68}}
\put(0,150){marry}
\put(9,148){\vector(0,-1){93}}
%%% Add the labels for word level and concept level
\put(310,150){\shortstack{Word \\ Level}}
\put(310,45){\shortstack{Concept \\ Level}}

\end{picture}
\normalsize

\caption{The network used to disambiguate the sentence ``the astronomer
married a star''} \label{bignet}
\end{figure}
an astral body, a famous actor, or a hexagonal geometric figure.
Activation was passed through the network for 15 time steps with the
following parameters: $\alpha = 0.20$, $\delta = 0.15$, $h = -0.45$, and
$c^* = 0.5$ into {\bf astronomer}, {\bf marry}, and {\bf star}.  Note
that it would be the purpose of the syntactic analyzer to activate the above
three nodes.  Appendix~\ref{sample} shows the abridged output of the
simulation spreading activation through the network, and
Appendix~\ref{sampleprog} shows the program used to generate the network
shown in Figure~\ref{bignet}.

Table~\ref{table} shows the activation values of the nodes in the {\bf
star} cluster of Figure~\ref{bignet}.  It is interesting to observe that
the activation of {\bf actor} increases almost exponentially while {\bf
astral\_body} and {\bf geo\_fig} decrease very quickly.  This behavior
can be attributed to the excitatory links between {\bf actor} and both
{\bf star} and {\bf human}.  {\bf Actor} indirectly receives source
activation from both {\bf star} and {\bf marry}.  This demonstrates the
``rich-get-richer'' effect --- nodes that start with a high activation
value (the source nodes in the above network) are able to accumulate
more activation than nodes with a lower activation.  Hence, the nodes
immediately linked with these source nodes share the higher activation. 
Further, two nodes in the {\bf star} cluster ({\bf star} and {\bf actor})
``gang up'' on the other two nodes, resulting in their eventual demise. 

\begin{table}
\centering
\begin{tabular}{|c|r|r|r|r|} \hline
Time Step & \multicolumn{4}{c|}{Activation} \\ \hline
$t$ & {\bf star} & {\bf actor} & {\bf astral\_body} & {\bf geo\_fig} \\ \hline
1 & 0.050000 & 0.100000 & 0.100000 & 0.100000 \\
2 & 0.476000 & 0.127500 & 0.110500 & 0.093500 \\
3 & 0.460955 & 0.179550 & 0.127882 & 0.075140 \\
4 & 0.455558 & 0.230143 & 0.145118 & 0.036141 \\
5 & 0.457163 & 0.309097 & 0.162574 & \\
6 & 0.468772 & 0.407573 & 0.168694 & \\
7 & 0.496422 & 0.518785 & 0.147158 & \\
8 & 0.535169 & 0.659947 & 0.105002 & \\
9 & 0.584935 & 0.843615 & 0.036716 & \\
10 & 0.646851 & 1.08429 & & \\
11 & 0.737462 & 1.37462 & & \\
12 & 0.857714 & 1.71138 &  & \\
13 & 1.01999 & 2.11045 & & \\
14 & 1.22577 & 2.58874 & &\\
15 & 1.48199 & 3.16540 & & \\
\hline
\end{tabular}
\caption{Activation values for the {\bf star} cluster ($\alpha = 0.20$,
$\delta = 0.15$, $h=-0.45$, $c^* = 0.5$ into {\bf astronomer}, {\bf
star}, and {\bf marry}).}  \label{table}
\end{table}

From the data in the table one can conclude that the spread of
activation through a network of word and concept relations can
effectively disambiguate a simple declarative sentence.

\section{Program Documentation} \label{documentation}

The \san\ simulator was written in Common \lisp\footnote{The public
domain XLISP interpreter, version 2.0, for IBM PCs and compatibles running
DOS 3.x.}.  The simulator is interactive --- functions are entered at the
\lisp\ interpreter prompt and any values returned are displayed for the
user.  This section assumes a basic knowledge of \lisp, including the
notion of lists, property lists, and functions.  The source code for the
simulation is contained in Appendix~\ref{theprogram}. 

To load the simulator, simply place the diskette into the boot drive and
reboot the machine.  The screen will clear, an introductory message
will appear, and the simulation will be loaded automatically. 
The \lisp\ interpreter prompts for input with the greater than symbol
({\tt >}).  It is now possible to load a network from the disk, define
new nodes and links, initiate activation through the network, and display
individual nodes or the network as a whole. 

\subsection{Loading a Network From Disk}

To load a network from disk, use the \lisp\ interpreter's load command. 
For example, to load the network shown in Appendix~\ref{sampleprog}, you
would enter {\tt (load~'netwk1)} at the prompt.  The interpreter will
then attempt to read the file into memory.  Note that networks stored on
disk must have the {\tt .LSP} extension to their file name. 
Figure~\ref{loading} shows part of a sample session using the simulator
in which the user loads the network contained in the file {\tt netwk1}. 

\begin{figure}
\hrule
\begin{verbatim}
XLISP version 2.0.T9, Copyright (c) 1987, by David Betz
; loading "net.lsp"
> (load 'netwk1)
; loading "netwk1.LSP"
T
>
\end{verbatim}
\hrule
\caption{A dialogue showing the input of a network stored on the disk}
\label{loading}
\end{figure}



\subsection{Creating Nodes and Links}

A node is created using the {\tt define\_node} function.  The syntax of
the function is \mbox{{\tt (define\_node {\em node\_name} [{\em
initial\_activation}])}} where {\em node\_name} is a \lisp\ atom
representing the name of the node, and [{\em initial\_activation}] is an
optional argument allowing the user to set the initial activation of the
node.  If this argument is omitted, then the initial activation is set
to 0. 

A link is created using the {\tt add\_link} function.  The syntax of the
function is \mbox{{\tt (add\_link {\em node\_a} {\em link} {\em
node\_b})}}, where {\em node\_a} and {\em node\_b} are the atomic names
of two defined nodes in the network, and {\em link} is an atom
representing the desired link type.  Table~\ref{links} lists the four
different link types available for use.
\begin{table}
\centering
\begin{tabular}{|l|c|} \hline
Link Type & Notation \\ \hline
Unidirectional excitatory   & {\tt -->} \\
Unidirectional inhibitory   & {\tt --O} \\
Bidirectional excitatory    & {\tt <-->} \\
Bidirectional inhibitory    & {\tt O--O} \\
\hline
\end{tabular}
\caption{Notation used for the specification of links.} \label{links}
\end{table}

Figure~\ref{defineanode} presents the definition of a small network
(actually a portion of the network shown in Figure~\ref{bignet}) using
the functions described above.\footnote{It is hereafter assumed that the
simulator has been loaded into the \lisp\ interpreter.}

\begin{figure}
\hrule
\begin{verbatim}
> (define_node 'star)
(STAR DEFINED)
> (define_node 'actor)
(ACTOR DEFINED)
> (define_node 'astral_body)
(ASTRAL_BODY DEFINED)
> (add_link 'star '<--> 'actor)
(STAR)
> (add_link 'star '<--> 'astral_body)
(STAR)
> (add_link 'actor 'O--O 'astral_body)
(ACTOR)
>
\end{verbatim}
\hrule
\caption{Defining nodes and links} \label{defineanode}
\end{figure}

The function {\tt reset\_net} will reset the activation value of every
node in the network to 0.

\subsection{Displaying Information} \label{displaydocs}

There are a number of functions that display information about the
network.  These will be presented with their syntax and a
short summary of their use.  In the descriptions below, {\em
node\_name} refers to the atomic name of a node defined in the network. 

\begin{description}
\item[{\tt (display\_node {\em node\_name})}] Displays a node's name,
activation value, excitatory links, and inhibitory links.
\item[{\tt (graph\_node {\em node\_name} {\em daughter} {\em link})}]
Graphically displays a node, one of its descendants ({\em daughter}),
and the type of link between the two (see Table~\ref{links}).
\item[{\tt (show\_node {\em node\_name})}] Graphically displays a node
and all of its connections.
\item[{\tt (show\_net)}] Graphical\-ly displays the entire network (by
calling \mbox{{\tt (show\_node)}} on each node in the network).
\item[{\tt (show\_activation)}] Displays the activation value for each
node in the network.
\item[{\tt (status)}] Displays the current values for $\delta$,
$\alpha$, and $h$.
\end{description}

\subsection{Spreading Activation}

The {\tt (start)} function acts as a front end to the functions and
macros used for activation processing.  The actual functions that handle
the spread of activation and the decay of the network will be described
in the next section.

When {\tt (start)} is first initiated it displays the network status,
consisting of all defined nodes and their connections and the values of
$\alpha$, $\delta$, and $h$ (by calling {\tt (show\_net)} and {\tt
(status)}).  It then prompts the user for the number (an integer) of time
steps to send activation through the network.  It then asks for the
source node(s).  The user can enter either an atomic node name or a
list of nodes to act as sources (see Appendix~\ref{sample}) to receive
the value of $c^*$.  Finally, it requests the input value ($c^*$) to be
sent into the node(s) entered in the previous step.  The function then
starts the activation process.

At each time step the simulator will display all of the active nodes
(i.e., $a_i > 0$) and pause until the {\tt Return} key is pressed.  At
the conclusion of each time step the network is decayed according to
Equation~\ref{eq:decay}.  At the end of the activation {\tt (start)}
will return NIL. 

\subsection{Low-level Activation Functions}

The functions and macros described in this section perform the low level
operations on the network during the activation process.  All of the
functions below are called by {\tt (start)}; the macros are in turn
called by those functions.

\subsubsection{Functions}

The function {\tt (spread {\em node})} will spread activation from {\em
node} to all nodes connected to it according to
Equation~\ref{eq:activation}, if and only if the activation value of
{\em node} is greater than zero. 

{\tt (update)} will update the activation of every node in the network
according to Equation~\ref{eq:totala}, i.e., it adds $n_i$ to $a_i$ and
stores the resulting value in $a_i$.  If $n_i + a_i < 0$, then $a_i$ is
set to 0.  Finally, the function resets the value of $n_i$ to 0. 

{\tt (decay)} applies Equation~\ref{eq:decay} to the entire network.

\subsubsection{Macros}

The macro \mbox{{\tt (prime {\em node\_name} {\em value})}} makes
it possible for the user to add {\em value} to the activation of {\em
node\_name}.  This is a front end to the macro \mbox{{\tt
(add\_to\_activation)}}.

{\tt (add\_to\_activation {\em value} {\em node})} will add {\em value}
to the activation of {\em node}.

{\tt (receive {\em node} {\em value})} will add {\em value} to the input
($n_i$) property of {\em node}.
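
For experimentation, a single time step can also be driven by hand,
without {\tt (start)}, by calling these functions in the same order that
{\tt (start)} does (assuming a network has already been defined or
loaded):
\begin{verbatim}
; Advance the network by one time step
(dolist (node *NETWORK*) (spread node))  ; every active node sends activation
(update)                                 ; fold inputs into activations
(decay)                                  ; apply the decay equation to all nodes
\end{verbatim}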


%%% WARNING!!!!!
%%% Everything after this is for the appendix!
\newpage
\appendix
\section{Simulation Output} \label{sample}
\begin{verbatim}
> (start)
NETWORK STATUS
ASTRONOMER(0) --> ASTRONOMY(0)
ASTRONOMER(0) --> HUMAN(0)
ASTRONOMY(0) --> ASTRONOMER(0)
ASTRONOMY(0) --> ASTRAL_BODY(0)
STAR(0) --> ASTRAL_BODY(0)
STAR(0) --> ACTOR(0)
STAR(0) --> GEOMETRIC_FIGURE(0)
ACTOR(0) --> STAR(0)
ACTOR(0) --> HUMAN(0)
ACTOR(0) --O ASTRAL_BODY(0)
ACTOR(0) --O GEOMETRIC_FIGURE(0)
ASTRAL_BODY(0) --> ASTRONOMY(0)
ASTRAL_BODY(0) --> STAR(0)
ASTRAL_BODY(0) --> INANIMATE(0)
ASTRAL_BODY(0) --O GEOMETRIC_FIGURE(0)
ASTRAL_BODY(0) --O ACTOR(0)
GEOMETRIC_FIGURE(0) --> STAR(0)
GEOMETRIC_FIGURE(0) --> INANIMATE(0)
GEOMETRIC_FIGURE(0) --O ASTRAL_BODY(0)
GEOMETRIC_FIGURE(0) --O ACTOR(0)
ANIMATE(0) --> HUMAN(0)
ANIMATE(0) --O INANIMATE(0)
INANIMATE(0) --> ASTRAL_BODY(0)
INANIMATE(0) --> GEOMETRIC_FIGURE(0)
INANIMATE(0) --O ANIMATE(0)
HUMAN(0) --> ACTOR(0)
HUMAN(0) --> ASTRONOMER(0)
HUMAN(0) --> ANIMATE(0)
HUMAN(0) --> MARRY(0)
MARRY(0) --> HUMAN(0)

Delta:   0.150000
Alpha:   0.200000
H:       -0.450000

Enter the number of time steps: 15
Source node(s): (astronomer marry star)
Input value: .15
Time Step: 1
ASTRONOMER(0.150000)
ASTRONOMY(0.0300000)
STAR(0.150000)
ACTOR(0.0300000)
ASTRAL_BODY(0.0300000)
GEOMETRIC_FIGURE(0.0300000)
HUMAN(0.0600000)
MARRY(0.150000)

Time Step: 2
ASTRONOMER(0.142800)
ASTRONOMY(0.0561000)
STAR(0.142800)
ACTOR(0.0382500)
ASTRAL_BODY(0.0331500)
GEOMETRIC_FIGURE(0.0280500)
ANIMATE(0.0102000)
INANIMATE(0.0102000)
HUMAN(0.107100)
MARRY(0.137700)

Time Step: 3
ASTRONOMER(0.149124)
ASTRONOMY(0.0775965)
STAR(0.138286)
ACTOR(0.0515865)
ASTRAL_BODY(0.0383647)
GEOMETRIC_FIGURE(0.0225420)
ANIMATE(0.0229755)
INANIMATE(0.0151725)
HUMAN(0.146956)
MARRY(0.135252)

- - - - - - - - - - - - - - - - 

Time Step: 10
ASTRONOMER(0.428729)
ASTRONOMY(0.250596)
STAR(0.194055)
ACTOR(0.325287)
ANIMATE(0.259073)
HUMAN(0.605304)
MARRY(0.301053)

Time Step: 11
ASTRONOMER(0.509923)
ASTRONOMY(0.285890)
STAR(0.220246)
ACTOR(0.412385)
ANIMATE(0.323114)
HUMAN(0.737913)
MARRY(0.358797)

Time Step: 12
ASTRONOMER(0.607481)
ASTRONOMY(0.329694)
STAR(0.257314)
ACTOR(0.513414)
ANIMATE(0.400092)
HUMAN(0.899943)
MARRY(0.430423)

Time Step: 13
ASTRONOMER(0.725397)
ASTRONOMY(0.383511)
STAR(0.305997)
ACTOR(0.633136)
ANIMATE(0.493068)
HUMAN(1.09669)
MARRY(0.518850)

Time Step: 14
ASTRONOMER(0.868222)
ASTRONOMY(0.449302)
STAR(0.367731)
ACTOR(0.776622)
ANIMATE(0.605545)
HUMAN(1.33516)
MARRY(0.627460)

Time Step: 15
ASTRONOMER(1.04135)
ASTRONOMY(0.529505)
STAR(0.444597)
ACTOR(0.949621)
ANIMATE(0.741692)
HUMAN(1.62412)
MARRY(0.760319)


NIL
>
\end{verbatim}
\newpage
\section{Code for Sample Network} \label{sampleprog}
\scriptsize
\begin{verbatim}
;;; ***********************************************************************
;;;
;;; Network definition for the disambiguation of the sentence ``The
;;; astronomer married a star.''
;;;
;;; ***********************************************************************


; Define the nodes
(define_node 'astronomer)    (define_node 'astronomy)
(define_node 'star)          (define_node 'actor)
(define_node 'astral_body)   (define_node 'geometric_figure)
(define_node 'animate)       (define_node 'inanimate)
(define_node 'human)         (define_node 'marry)

; Set up the links
(add_link 'astronomer '<--> 'astronomy)
(add_link 'astronomy '<--> 'astral_body)
(add_link 'astral_body 'O--O 'geometric_figure)
(add_link 'astral_body 'O--O 'actor)
(add_link 'geometric_figure 'O--O 'actor)
(add_link 'star '<--> 'astral_body)
(add_link 'star '<--> 'actor)
(add_link 'star '<--> 'geometric_figure)
(add_link 'human '<--> 'actor)
(add_link 'human '<--> 'astronomer)
(add_link 'animate '<--> 'human)
(add_link 'human '<--> 'marry)
(add_link 'animate 'O--O 'inanimate)
(add_link 'inanimate '<--> 'astral_body)
(add_link 'inanimate '<--> 'geometric_figure)

; Define the values for alpha, delta, and h
(setq *DELTA* .15)
(setq *ALPHA* .20)
(setq *H* -.45)
\end{verbatim}
\normalsize

\newpage
\section{Source Code for the {\bf {\sc San}} Simulator}  \label{theprogram}
\scriptsize
\begin{verbatim}
;;; ***********************************************************************
;;;
;;;                   The Disambiguation of English Text
;;;                 Through the use of Activation Networks
;;;
;;;                       Thomas R. Emerson (40405)
;;;
;;;           A Program to fulfill the project requirement of
;;;            PY350 - Artificial Intelligence - Spring '88
;;;                         Clarkson University
;;;
;;; ***********************************************************************
;;;
;;; Program Version:     1.3
;;; Program Started:     March  7, 1988
;;; Program Completed:   April 13, 1988
;;;
;;;         To Robin, for her constant flow of ideas and support
;;;
;;; ***********************************************************************

;; ****************************************************************
;; Set constants and global variables
;; ****************************************************************

(setq *NETWORK* '())                           ; Holds all defined nodes
(setq *ACTIVE_NODES* '())                      ; List holding active nodes
(setq *DELTA* 0.15)                            ; Decay constant
(setq *H* -0.40)                               ; Inhibition constant
(setq *ALPHA* 0.25)                            ; Activation constant

;; ***************************************************************
;; Macro definitions
;; ***************************************************************

; add_to_activation - will add the value supplied to the activation
; value of the node, and will put this new value in the property list.
(defmacro add_to_activation (value node)
  `(putprop ,node (+ ,value (get ,node 'activation)) 'activation))

; receive - will add the value supplied to the input value of the node,
; and will put this new value back on the property list.
(defmacro receive (node value)
  `(putprop ,node (+ (get ,node 'input) ,value) 'input))

; prime - a front end macro allowing the user to add value to the
; activation of the node specified.
(defmacro prime (node value)
  `(add_to_activation ,value ,node))
    
;; ***************************************************************
;; Input/Output functions
;; ***************************************************************

; prompt - prompt the user for input and return the atom entered
(defun prompt (prompt_string)
  (princ prompt_string)
  (read))


; display_node - will display the node, its current activation, and
; the links originating at the node
(defun display_node (node)
  (princ "Node Name: ")
  (print node)
  (princ "Activation: ")
  (print (get node 'activation))
  (princ "Activation Links: ")
  (print (get node 'alinks))
  (princ "Inhibition Links: ")
  (princ (get node 'ilinks)) (terpri))

; graph_node - display an individual node and one of its connections
(defun graph_node (node daughter link)
   (princ node) (princ "(") (princ (get node 'activation))
   (princ ") ") (princ link) (princ " ") (princ daughter)
   (princ "(") (princ (get daughter 'activation)) (princ ")")
   (terpri))

; show_node - graphically displays a single node and all its connections        
(defun show_node (node)
  (dolist (each_node (get node 'alinks))
          (graph_node node each_node '-->))
  (dolist (each_node (get node 'ilinks))
          (graph_node node each_node '--O)))

; show_net - graphically displays the entire network
(defun show_net ()
  (dolist (nodes *NETWORK*) (show_node nodes)))

; show_activation - display the activation of all nodes in the network
(defun show_activation (node)
  (princ node) (princ "(")
     (princ (get node 'activation)) (princ ")") (terpri))

; status - displays the current status of the constants used in the network
(defun status()
  (princ "Delta:   ") (princ *DELTA*) (terpri)
  (princ "Alpha:   ") (princ *ALPHA*) (terpri)
  (princ "H:       ") (princ *H*) (terpri))
          
;; ***************************************************************
;; Functions to handle the creation, modification, and deletion
;; of individual nodes.
;; ***************************************************************

; define_node - will create the initial representation of a node, with a
; default activation value of 0 and no links.  A different activation value
; can be assigned through the use of an optional parameter.
(defun define_node (node_name &optional (act_value 0))
  (putprop node_name act_value 'activation)       ; Store the activation value
  (putprop node_name '() 'alinks)                 ; Activation links
  (putprop node_name '() 'ilinks)                 ; Inhibition links
  (putprop node_name 0 'input)
  (setq *NETWORK* (append *NETWORK* (list node_name)))
  (cons node_name '(defined)))              

; add_link - will connect name with node via link
(defun add_link (name link node)
  (cond ((equal link '<-->) (putprop name 
  		       		     (append (get name 'alinks) (list node))
  				     'alinks)
  		            (putprop node
  		                     (append (get node 'alinks) (list name))
  		                     'alinks))
	((equal link 'O--O) (putprop name
				     (append (get name 'ilinks) (list node))
			  	     'ilinks)
	 		    (putprop node
	 		             (append (get node 'ilinks) (list name))
	 		             'ilinks))
        ((equal link '-->) (putprop name
                                    (append (get name 'alinks) (list node))
                                    'alinks))
        ((equal link '--O) (putprop name
                                    (append (get name 'ilinks) (list node))
                                    'ilinks))
        (t '(error in function add_link))))

; reset_net - will reset the activation value of all nodes to 0
(defun reset_net ()
  (dolist (node *NETWORK* 'done)
    (putprop node 0 'activation)))	

;; ***************************************************************
;; Functions for spreading activation
;; ***************************************************************

(defun start ()
  (princ "NETWORK STATUS") (terpri)
  (show_net) (terpri)
  (status) (terpri)
  (setq time_steps (prompt "Enter the number of time steps: "))
  (setq source (prompt "Source node(s): "))    
  (setq input (prompt "Input value: "))      ; get c*
        ; allow the user to supply a list of source nodes (for multiple
        ; applications of c*.)  The user can also supply an atom representing
        ; input into a single node.  c* is constant for every node, i.e.
        ; each node in the list gets the same value of c*.
  (cond ((listp source) (dolist (node source)    ; if a list...
  				(putprop node 
  				         (+ input (get node 'activation))
  				         'activation)))
  	(t (putprop source                       ; if not...
 	            (+ input (get source 'activation))
  	            'activation)))
  (dotimes (ts time_steps)
     (princ "Time Step: ") (princ (1+ ts)) (terpri)
     (dolist (node *NETWORK*)
             (spread node)) 
     (update)
     (decay)
     (read-char))
  (terpri))
                         
(defun spread (node)
  (cond ((not (zerop (get node 'activation)))
         (setq c (* (get node 'activation) *ALPHA*))
         (dolist (l (get node 'alinks))
                 (receive l c))
         (setq c (* (get node 'activation) *H*))
         (dolist (l (get node 'ilinks))
                 (receive l c)))))
                 
(defun update ()
  (dolist (node *NETWORK*)
    (setq new_activation (+ (get node 'activation) (get node 'input)))
    (cond ((minusp new_activation)
           (putprop node 0 'activation))
          (t (putprop node new_activation 'activation)))
    (putprop node 0 'input)
    (cond ((zerop (get node 'activation)) t)
          (t (show_activation node)))))

;; ***************************************************************
;; decay function
;; ***************************************************************
(defun decay ()
  (dolist (node *NETWORK*)
          (putprop node
                   (- (get node 'activation)
                      (* (get node 'activation) *DELTA*))
                   'activation)))


; ***************************************************************
; (c) 1988 by Tom Emerson
; All rights reserved
; ***************************************************************
\end{verbatim}
\normalsize



\newpage
\pagenumbering{roman}
\tableofcontents
\end{document}