[comp.ai.neural-nets] "Distributed" representation

ackley@wind.bellcore.com (David H Ackley) (09/29/89)

In article <YZ7nM8G00Ug7A0fJ8F@andrew.cmu.edu> js7a+@andrew.cmu.edu (James Price Salsman) writes:
>When neural-net theorists speak of "Distributed
>Representation," do they mean distributed in time as well as
>space?  

The thrust of the term, at least at the outset, was towards
distribution over space (vs "local", "punctate", "grandmother cell",
etc. representations).  With the current interest in temporal
processing, it's possible the meaning has begun to migrate; have you
seen evidence of that?

>      To put it another way, is there any such thing as an
>"instantaneous" representation in a neural network?

What a machine is representing is generally thought of as the state
vector of the machine at some specific time, so it's "instantaneous"
in that sense.
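
To make that concrete, here is a rough sketch (Python; the network
size, weights, and update rule are just placeholders, not any
particular model) of reading the representation off as the activation
vector at a given tick:

    import numpy as np

    N = 8                                  # placeholder network size
    rng = np.random.default_rng(0)
    W = rng.normal(size=(N, N))            # placeholder weight matrix
    state = rng.uniform(size=N)            # activation vector at time t

    def step(state, W):
        # one synchronous update; the result is the state vector,
        # i.e. the "instantaneous" representation, at time t+1
        return np.tanh(W @ state)

    snapshot_t  = state.copy()             # representation at time t
    snapshot_t1 = step(state, W)           # representation at time t+1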

>
>Of course, the outdated idea of a single-cell representation
>is to spatial distribution what an instantaneous
>representation is to temporal distribution.
>
>:James

The strict localist position is not widely held at present, but it's
worth recognizing a spectrum of strategies, ranging from 1-out-of-N
(i.e., localist) to N/2-out-of-N (i.e., "maximally distributed").
There are reasons to suspect that strategies like "few-out-of-N" or
perhaps "log N-out-of-N" may be important.
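
One way to see where "maximally distributed" comes from is to count
distinct codewords: a k-out-of-N code has C(N,k) of them, which is
only N at the localist end and peaks at k = N/2.  A rough sketch
(Python; N = 32 is arbitrary):

    from math import comb, log2

    N = 32
    for k in (1, int(log2(N)), N // 2):
        print(f"{k}-out-of-{N}: {comb(N, k):,} distinct codewords")

    # 1-out-of-32:          32  (localist)
    # 5-out-of-32:     201,376  ("log N"-out-of-N)
    # 16-out-of-32: 601,080,390 (maximally distributed)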

| David Ackley   Cognitive Science Research Group |
| "There are     Bell Communications Research Inc.|
|     no facts,          ackley@flash.bellcore.com|
|       only factors"          ...!bellcore!ackley|