[net.ai] AIList Digest V3 #90

LAWS@SRI-AI.ARPA (07/09/85)

From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>


AIList Digest            Tuesday, 9 Jul 1985       Volume 3 : Issue 90

Today's Topics:
  Query - Workstations for AI and Image Processing &
    PSL Flavors Implementation & Representation of Knowledge,
  New List - PARSYM for Parallel Symbolic Computing,
  Psychology - Distributed Associative Memory Systems

----------------------------------------------------------------------

Date: 8 JUL 85 15:10-N
From: APPEL%CGEUGE51.BITNET@WISCVM.ARPA
Subject: QUERY ON WORKSTATIONS FOR AI AND IMAGE PROCESSING

AI on workstations

We are looking for a workstation for developing AI systems combined with an
image processing system. The workstation has to run under Unix (4.2 if
possible).

We intend to buy a Sun-2 with a floating-point accelerator and a color
graphics system.

We have heard that the Sun is slow at floating-point operations. Does anyone
have information on the performance of Franz Lisp, or of other AI tools, on the
Sun compared with other workstations in a similar price range?

We are interested in both positive and negative arguments: reasons to buy, or
NOT to buy, a Sun.

                                                Ron Appel

------------------------------

Date: Mon, 8 Jul 85 10:40:59 EST
From: munnari!elecadel.oz!alex@seismo
Subject: PSL Flavors Implementation.

The implementation of Flavors we received with our
PSL package from Utah does not permit the mixing of
flavors (what's the point of calling it Flavors then,
you might ask...). Can anyone tell me of a complete
version of Flavors that runs on PSL? I'm also
looking for a ZetaLisp compatibility package that
implements the &... function parameter conventions.

Thanks,
Alex Dickinson,
The University of Adelaide,
South Australia.

------------------------------

Date: Mon, 8 Jul 85 00:00:19 cdt
From: Mark Turner <mark%gargoyle.uchicago.csnet@csnet-relay.arpa>
Subject: representation of knowledge

I am gathering for my students a bibliography
of works on representation
of knowledge.  I am particularly concerned with
cognitive psychology, artificial intelligence,
philosophy, linguistics, and natural language processing.
I would appreciate receiving copies of bibliographies
others may already have on-line.
Mark Turner
Department of English
U Chicago 60637
>ihnp4!gargoyle!puck!mark

------------------------------

Date: Sun, 7 Jul 1985  21:31 PDT
From: DAVIES@Sumex
Subject: PARSYM -- new mailing list for Parallel Symbolic Computing

                  PARSYM: A Netwide Mailing List for
                     Parallel Symbolic Computing

The PARSYM mailing list has been started to encourage communication
between individuals and groups involved in PARALLEL SYMBOLIC COMPUTING
(non-numeric computing using multiple processors).  The moderator
encourages submissions relating either to parallelism in symbolic
computing or to the use of symbolic computing techniques (AI, objects,
logic programming, expert systems) in parallel computing.  All manner
of communication is welcomed: project overviews, research results,
questions, answers, commentary, criticism, humor, opinions,
speculation, historical notes, or any combination thereof, as long as
it relates to the hardware, software, or application of parallel
symbolic computing.

To contribute, send mail to PARSYM@SUMEX (or PARSYM@SUMEX-AIM.ARPA, if
your mailer requires).  To be added to the PARSYM distribution list,
or to make other editorial or administrative requests, send mail to
PARSYM-Request@SUMEX.  When you are added to the PARSYM distribution
list, I will send you a welcoming message with additional information
about PARSYM and some necessary cautions about copyright and
technology export.

To get the list off the ground, I offer the following set of
discussion topics:

1. Will there be a general-purpose parallel symbolic processor, or
   should parallel architectures always be specialized to particular
   tasks?

2. The primary languages for sequential symbolic computing are Lisp,
   Prolog, and Smalltalk.  Which is the best basis for developing a
   programming language for parallel computing?  Do we need something
   fundamentally different?

3. Sequential computing took about 30 years to reach its current
   state.  Thirty years ago, programming tools were nonexistent:
   programmers spent their time cramming programs into a few hundred
   memory cells, without programming languages or compilers or
   symbolic debuggers.  Now, sequential programming is in a highly
   developed state: most programmers worry less about the
   limitations of their hardware than about managing the
   complexity of their applications and of their evolving computer
   systems.

   Today, parallel programming is where sequential programming was
   thirty years ago: to optimize computation and communication,
   programmers spend their time manually assigning processes to a few
   processors, without benefit of programming languages or compilers
   or symbolic debuggers that deal adequately with parallelism.

   Will it take 30 years to bring parallel computing up to the current
   level of serial computing?

Submissions, queries, and suggestions are equally welcome.  Fire away!

                                PARSYM's Moderator,

                                Byron Davies (Davies@SUMEX)

------------------------------

Date: Sun, 7 Jul 85 21:47:37 EST
From: munnari!psych.uq.oz!ross@seismo
Subject: instantiation in distributed associative memory systems

I was reading some papers by James A. Anderson the other day on the
psychological properties of distributed associative memory systems ("Cognitive
and psychological computation with neural models", IEEE Transactions on Systems,
Man, and Cybernetics, Vol 13, pp 799-815, 1983; "Fun with parallel systems",
unpublished paper, 1984). His simulation model associates different features
with state vectors (patterns of activation of the neurons) instead of with
individual neurons. Orthogonality in this system is achieved in two ways.
Alternative values of the same variable (e.g. black-white, mortal-immortal)
use the same neurons but have orthogonal codings, whereas dissimilar things
(e.g. shoes-sealing wax, cabbages-kings) use entirely different sets of neurons.
He taught his system various associations such as Plato -> Man, Man -> Mortal,
Zeus -> God, God -> Immortal and the system was able to output triples such as
<Zeus,God,Immortal> from input of single components.
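
For concreteness, here is a rough sketch of this kind of linear associator.
The codings, vector sizes, and outer-product learning rule below are made-up
illustrations of the general idea, not Anderson's actual simulation:

    import numpy as np

    N = 8                                    # neurons per attribute slot
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((N, N)))  # rows: orthonormal codes

    # Slot 0 holds names, slot 1 kinds, slot 2 mortality.  Alternative values
    # of one variable share neurons but get orthogonal codings; different
    # variables use entirely different sets of neurons (different slots).
    layout = {"Plato": (0, Q[0]), "Zeus": (0, Q[1]),
              "Man": (1, Q[0]), "God": (1, Q[1]),
              "Mortal": (2, Q[0]), "Immortal": (2, Q[1])}

    def pattern(name, n_slots=3):
        """Build the full state vector for a single concept."""
        slot, code = layout[name]
        v = np.zeros(N * n_slots)
        v[slot * N:(slot + 1) * N] = code
        return v

    state = {name: pattern(name) for name in layout}

    # Hebbian (outer-product) learning of the taught associations A -> B.
    taught = [("Plato", "Man"), ("Man", "Mortal"),
              ("Zeus", "God"), ("God", "Immortal")]
    W = sum(np.outer(state[b], state[a]) for a, b in taught)

    def recall(v):
        """One association step, read out by best match to a stored pattern."""
        out = W @ v
        return max(state, key=lambda name: float(state[name] @ out))

    # From the single component Zeus the triple <Zeus, God, Immortal> falls out.
    step1 = recall(state["Zeus"])        # -> "God"
    step2 = recall(state[step1])         # -> "Immortal"
    print("Zeus ->", step1, "->", step2)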

This system can be viewed as approximately equivalent to a production system
with rules such as "Man(X) -> Mortal(X)". In Anderson's simulation a better
pattern match leads to faster activation, so conflict resolution uses a "best
match fires first" strategy. I think that his model also allows multiple rules
to fire simultaneously, provided that they conclude about different attributes.
For example, it would be possible to conclude simultaneously that Plato is
Greek and Mortal. However, the superposition of the neural activation patterns
for Mortal and Immortal does not necessarily represent anything at all.
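
A toy illustration of that superposition point, again with made-up codings
over two disjoint attribute slots:

    import numpy as np

    # Neurons 0-1 code nationality, neurons 2-3 code mortality.
    greek    = np.array([1., 0., 0., 0.])
    mortal   = np.array([0., 0., 1., 0.])
    immortal = np.array([0., 0., 0., 1.])

    # Two rules concluding about different attributes can fire together:
    plato = greek + mortal            # a perfectly sensible composite pattern

    # Conflicting conclusions on the same attribute simply add, and the sum
    # matches neither stored alternative better than the other:
    muddle = mortal + immortal
    print(float(mortal @ muddle), float(immortal @ muddle))   # 1.0 1.0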

OK, so much for the rough sketch of Anderson's system. The questions which
interest me about it deal with instantiation. In a production system we can
arrange things so that values get bound to the variables in the rules. What is
the equivalent process in the neural network?

My guess is that the activation process is the closest equivalent. The total
activity pattern of the network represents the current entity being thought
about, and it possesses some number of more or less independent attributes.
Thus the binding process is particularly simple because there is no choice of
entities to bind. There is only one value, the current state, and the choice
of which attributes of the current state to bind is wired into the synapses of
each rule. So a rule looks more like
"Big_animal(Current_state) & Teeth(Current_state) -> Dangerous(Current_state)".
You could say that all the rules are permanently bound.
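
A sketch of what such a permanently bound rule might look like, with the
attribute layout wired in as fixed positions in the one global state vector
(the layout and threshold are assumptions for illustration):

    import numpy as np

    # Assumed layout: [big_animal, small_animal, teeth, no_teeth, dangerous, safe]
    BIG, TEETH, DANGEROUS = 0, 2, 4

    def big_toothy_is_dangerous(state):
        """Big_animal(Current_state) & Teeth(Current_state)
           -> Dangerous(Current_state).
        Which attributes the rule looks at is fixed by the indices above,
        so there is no separate variable-binding step."""
        if state[BIG] > 0.5 and state[TEETH] > 0.5:
            out = state.copy()
            out[DANGEROUS] = 1.0      # the conclusion lands on the same state
            return out
        return state

    current = np.array([1., 0., 1., 0., 0., 0.])   # the one entity in mind
    print(big_toothy_is_dangerous(current))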

If this is a reasonable description of instantiation in neural nets, then the
next obvious question is "How the hell do you represent multiple entities?"
If multiple entities are represented by the current state of activity on the
network, there is no way for the rules to decide which attributes go with
which entity. As far as they are concerned, there is only one entity. So what
are the possibilities for keeping entities separate in the neural
representation?

1. Attribute separation. If two entities have no attributes in common, then
they can be represented simultaneously. As noted above, the rules can't break
them apart, but for some purposes this may not matter. If the entities have an
attribute in common, then no harm is done provided they have the same value on
that attribute. If they have conflicting values on a shared attribute, then the
representation of at least one of the entities will be distorted.

2. Temporal separation. If a pattern of neural activity causes a short-term
increase in the ease with which that pattern can be re-triggered, then several
entities could be juggled by time-division multiplexing. Only one entity would
be actively represented at any one time, but the other recently represented
entities could be easily recalled. This scheme prevents entities from
interfering with each other, but it also seems to stop them from usefully
interacting. It is not clear how the rule mechanism could be modified to allow
references to multiple entities in the pattern.

3. Spatial separation. Assume that instead of one neural population there are
several of them, all with identical knowledge bases, and communicating with
each other. These neural populations are not necessarily physically separate.
Each population would be capable of representing and manipulating an entity
without interference from the representations of the other entities.
Furthermore, because the populations are connected it would be possible for
rules to know about multiple entities. The difference between this scheme and
the attribute separation scheme is that for a given attribute there will be a
distinct group of neurons in each population rather than a single global group
of neurons. Any rule which is looking for a pattern involving multiple entities
will be able to see them as distinct because the information will come in over
distinct synaptic pathways.

This spatial separation scheme would be ideal for visual processing because the
populations could be arranged in a topographic mapping and allowed to
communicate only with their neighbours. This could deal with rules like, "If
the neighbour on my left sees an object moving right then I will see it soon
and it will still have the attributes my neighbour labelled it with."
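
A sketch of that kind of neighbour rule, with two populations laid side by
side in one state vector (the populations, attribute layout, and the sample
rule itself are all assumptions for illustration):

    import numpy as np

    N = 4                                 # neurons per population
    LEFT, RIGHT = 0, 1                    # two neighbouring populations
    MOVING_RIGHT = 0                      # layout within a population:
                                          # [moving_right, moving_left, big, small]

    def read(state, which):
        """Each population reaches the rule over its own synaptic pathway,
        so the same attribute in different populations stays distinct."""
        return state[which * N:(which + 1) * N]

    def neighbour_rule(state):
        """If the left neighbour sees an object moving right, copy its labels
        across so the right population will soon carry them too."""
        left = read(state, LEFT)
        if left[MOVING_RIGHT] > 0.5:
            new = state.copy()
            new[RIGHT * N:(RIGHT + 1) * N] = left
            return new
        return state

    state = np.zeros(2 * N)
    state[LEFT * N + MOVING_RIGHT] = 1.0  # left population: something moving right
    state[LEFT * N + 2] = 1.0             # ...and it is big
    print(neighbour_rule(state))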

This scheme could also be used for more cognitive calculations but obviously
there would need to be mechanisms for coordination to replace the simple
static coordination structure provided by topographic mapping and communication
with neighbours. Work done in cognitive psychology shows that children's
increasing ability to perform difficult tasks can be attributed to the
increasing number of concepts which can be simultaneously activated and
manipulated (Graeme S. Halford, "Can young children integrate premises in
transitivity and serial order tasks?", Cognitive Psychology, 1984, Vol 16,
pp 65-93). Perhaps the children are slowly learning the coordination rules
needed to stop the populations acting as one large population and to let them
run as a coordinated group of entity processors.

That's my quota of armchair theorising for the week. Anyone got a comment?

Ross Gayler                     | ACSnet:       ross@psych.uq.oz
Division of Research & Planning | ARPA:         ross%psych.uq.oz@seismo.arpa
Queensland Department of Health | CSNET:        ross@psych.uq.oz
GPO Box 48                      | UUCP:         seismo!munnari!psych.uq.oz!ross
Brisbane        4001            |
AUSTRALIA                       | Phone:        +61 7 224 7060

------------------------------

End of AIList Digest
********************