JOSE@LATOUR.ARPA (04/20/87)
[Forwarded from the Neuron Digest by Laws@STRIPE.SRI.COM.]
Knowledge Representation in Connectionist Networks
Stephen Jose Hanson and David J. Burr
Bell Communications Research
Morristown, New Jersey 07960
Abstract
Much of the recent activity in connectionist models stems
from two important innovations. First, a layer of
independent, modifiable units (the hidden layer) that can
model the statistics of the domain and in turn perform
significant associative mappings between stimulus pairs.
Second, a learning rule that dynamically creates
representations in the hidden layer based upon constraints
from a teacher signal. Both Boltzmann machine and
back-propagation models share these two innovations, which,
interestingly, were apparently well known to Rosenblatt
[14]. Although many complex perceptual and cognitive models
have now been constructed using these methods, the exact
computational nature of the networks, in terms of their
clustering, partitioning, and generalization behavior, is
not well understood.
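
As a rough illustration of these two innovations (a sketch of
my own, not code from the paper), the fragment below trains a
single hidden layer by back-propagation on the XOR mapping,
which no single layer of units can compute on its own; numpy
and every name in the fragment are assumptions made purely
for illustration.

import numpy as np

# Illustrative only: a one-hidden-layer network trained with
# back-propagation on XOR, an associative mapping that a
# single layer of modifiable units cannot represent alone.

rng = np.random.default_rng(0)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # stimulus pairs
T = np.array([[0.], [1.], [1.], [0.]])                  # teacher signal

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_hidden = 3
W1 = rng.normal(scale=1.0, size=(2, n_hidden))  # input -> hidden
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=1.0, size=(n_hidden, 1))  # hidden -> output
b2 = np.zeros(1)

lr = 0.5
for _ in range(20000):
    H = sigmoid(X @ W1 + b1)        # hidden-layer representation
    Y = sigmoid(H @ W2 + b2)        # network output

    # gradients of the squared error w.r.t. each weight set
    dY = (Y - T) * Y * (1.0 - Y)
    dH = (dY @ W2.T) * H * (1.0 - H)

    W2 -= lr * H.T @ dY
    b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH
    b1 -= lr * dH.sum(axis=0)

print(np.round(Y, 2))  # should approach the teacher signal 0, 1, 1, 0

The hidden-layer activations H learned here are exactly the
"distributed patterns of activation" referred to below.
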
In this paper we present a uniform view of the
computational power of multi-layered learning (MLL)
models. We show that MLL models represent knowledge by
applying Boolean combination rules to partition the problem
space into regions. A by-product of these rules is that
knowledge is represented as distributed patterns of
activation in the hidden layers. The partitioning
capability of these networks is related both to the neural
device model and to the network complexity in terms of the
numbers and layers of neurons. The device model determines
the shape of an elementary boundary segment, and the
network determines how to combine the segments into region
boundaries.
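
As a hand-wired illustration of this point (again my own
sketch, not the paper's), the fragment below takes linear
threshold units as the device model: each first-layer unit
contributes one half-plane boundary segment, and a single
second-layer unit applies the Boolean combination rule (an
AND) that bounds a triangular region of the plane. All
weights and names are assumed for illustration.

import numpy as np

def threshold(z):
    return (z > 0).astype(float)

# First layer: three half-planes whose intersection is the
# triangle with vertices (0,0), (1,0), (0,1).
W1 = np.array([[ 1.0,  0.0],    # x > 0
               [ 0.0,  1.0],    # y > 0
               [-1.0, -1.0]])   # x + y < 1
b1 = np.array([0.0, 0.0, 1.0])

# Second layer: AND of the three segments (fires only when
# all three first-layer units fire).
w2 = np.array([1.0, 1.0, 1.0])
b2 = -2.5

def in_region(points):
    h = threshold(points @ W1.T + b1)   # side of each segment
    return threshold(h @ w2 + b2)       # Boolean combination (AND)

pts = np.array([[0.2, 0.2], [0.8, 0.8], [-0.1, 0.5]])
print(in_region(pts))    # -> [1. 0. 0.]

One such AND unit yields a convex region; ORing several of
them in a further layer yields arbitrary, possibly
non-convex or disconnected, regions, which is the intuition
behind the two-hidden-layer sufficiency result stated next.
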
For continuous problem spaces, two hidden layers are
sufficient to form arbitrary regions (or Boolean functions)
in the space, while for binary-valued spaces a single
hidden layer suffices. Finally, we show that networks can
produce probabilistic combination rules which closely
approximate the Bayes risk.
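
For reference, and as standard decision theory rather than
anything specific to the paper: under 0-1 loss the Bayes
rule assigns each x to the class with the largest posterior
probability, and the Bayes risk being approximated is

R^{*} = \int \Bigl[\, 1 - \max_{k} P(C_k \mid x) \,\Bigr]\, p(x)\, dx .

A network output trained by least squares against 0/1 class
indicators approximates the posteriors P(C_k | x), which is
one standard route by which such probabilistic combination
rules can come close to this bound.
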
You can get a copy of this paper by replying to this
message or by writing to jose@bellcore or djb@bellcore.
Comments appreciated.