[comp.ai.neural-nets] visualization of Weight "meanings"

pwh@bradley2.bradley.edu (Pete Hartman) (12/15/90)

I hope this doesn't just serve to demonstrate my ignorance of the
field, but....

I am currently working my way through the Rumelhart and McClelland
PDP books, as sort of preparatory background to verify for myself
that I have the interest and ability to go on to grad school to study
Neural Nets.  I was reading the math chapters, basically refreshers
on linear algebra, vectors and matrices, and ran across some interesting
(and new to me) concepts.

The author describes the activation of a unit in terms of the
dot product of the input vector and the weight vector, and a set
of units in terms of a weight matrix made up of the various weight
vectors.  This I've seen around.  However, it was pointed out that
the activation is actually (using the geometric interpretation of the
dot product) a measure of how closely the input vector matches the
weight vector.  Conceptually, the unit can be seen as partitioning the
input space into inputs that produce positive (or zero) activations and
inputs that produce negative activations.  (And I suppose you could
see it in terms of finer gradations than a strict partitioning, too.)
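
To make that concrete, here's a little Python sketch of the dot-product
view and the two-sided partition it induces (my own toy numbers, nothing
from the book):

import numpy as np

# A single unit's activation: the dot product of weights and input.
w = np.array([0.5, -1.0, 0.25])   # hypothetical weight vector
x = np.array([1.0,  0.2,  0.8])   # hypothetical input vector

a = np.dot(w, x)                  # geometrically |w||x|cos(theta)

# The sign of the activation says which side of the partition x is on.
print(a, "positive-or-zero side" if a >= 0 else "negative side")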

This is probably old news, but I was wondering....has anyone done
any work at representing such partitionings graphically?  For example,
in a very simple space where inputs were only 3-dimensional, a set
of units could be envisioned as partitioning the volume into separate
regions.  I would think that these regions could provide insight into
the "meanings" of the weights.  Perhaps even after going through a
training process they could be used to analyze the "final" states
to see exactly what was going on.  I suppose the hardest part would
be finding a way of graphically representing regions of dimensionality
greater than 3 (from what I've seen, the vast majority of problems
are of higher dimension), since the partitioning itself seems fairly
simple to find given enough crunching.
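
As a crude first cut, one could at least count the regions numerically
before worrying about drawing them; a rough Python sketch (all weights
invented):

import numpy as np

# A hypothetical layer of 3 units over a 3-dimensional input space;
# each row of W is one unit's weight vector.
W = np.array([[ 1.0,  0.0, -0.5],
              [ 0.0,  1.0,  1.0],
              [-1.0,  0.5,  0.0]])

# Sample the input volume on a coarse grid and record, for each sample
# point, which units come out positive (or zero).  Every distinct sign
# pattern corresponds to one region of the partitioned volume.
axis = np.linspace(-1.0, 1.0, 9)
grid = np.array(np.meshgrid(axis, axis, axis)).reshape(3, -1).T
signs = grid @ W.T >= 0

regions = {tuple(row) for row in signs}
print(len(regions), "occupied regions of", 2 ** len(W), "possible")

In 2 or 3 dimensions those sign patterns could be colored directly; in
higher dimensions one is back to the representation problem above.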

If such work has been done, could someone point me to it?  If not,
does it seem worthy of thinking about, or is this just idle whimsy
of someone not yet aware enough of the problems to see how useless
the idea is?
-- 
-----
Pete Hartman		pwh@bradley.bradley.edu			Haazavaa?

danforth@riacs.edu (Douglas G. Danforth) (12/15/90)

In <1990Dec14.201607.5832@bradley2.bradley.edu> pwh@bradley2.bradley.edu (Pete Hartman) writes:
..
>This is probably old news, but I was wondering....has anyone done
>any work at representing such partitionings graphically?  For example,
>in a very simple space where inputs were only 3-dimensional, a set
>of units could be envisioned as partitioning the volume into separate
>regions.  I would think that these regions could provide insight into
>the "meanings" of the weights.  Perhaps even after going through a
>training process they could be used to analyze the "final" states
>to see exactly what was going on.  I suppose the hardest part would
>be finding a way of graphically representing regions of dimensionality
>greater than 3 (from what I've seen, the vast majority of problems
>are of higher dimension), since the partitioning itself seems fairly
>simple to find given enough crunching.
..
>-----
>Pete Hartman		pwh@bradley.bradley.edu			Haazavaa?

     Every node (neuron) with n inputs can be considered a POINT in
n-space. The COORDINATES of the point are the input WEIGHTS of the neuron.
In this picture, inputs and nodes reside in the SAME space and are just
points in it (I think of a white sheet of paper with dots scattered on it). A
node (point) will be activated strongly if its input (another point) is CLOSE
to it. There will be a region of positive activation around a node which,
for visualization purposes, can be thought of as an ellipse
(hyper-ellipse). If one uses a monotonic function of the inner product
between the coordinates of the points to determine activation (the usual
case), then the ellipse expands and flattens on one side until it cuts the
full space in two. That is, a HYPERPLANE slices through the origin of the
space, with the neuron's point (vector) forming a normal to the plane.
     Boundary conditions on the space, constant offsets, and different
activation rules can modify this picture, but in general the "egg" around
a point suffices quite often to depict the region of activation of a neuron.
A neuron is just a point.
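
     A quick numeric check of that last picture (a toy Python sketch;
the particular numbers don't matter):

import numpy as np

w = np.array([2.0, -1.0, 0.5])    # a neuron, viewed as a point in 3-space

# With inner-product activation the boundary is the set of inputs x
# with w . x = 0: the hyperplane through the origin normal to w.
x_on_plane = np.array([1.0, 2.0, 0.0])
print(np.dot(w, x_on_plane))      # 0.0 -- this input sits on the boundary

# An input near the neuron's own point lands on the positive side.
x_near = np.array([2.1, -0.9, 0.4])
print(np.dot(w, x_near) > 0)      # True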

     The OUTPUT of a neuron is another point, this one in m-space. It
belongs to the TANGENT space of the neuron.  I think of this as hair
sticking up from the surface of the paper that holds the neuron points.
Each point (neuron) has a hair projecting from it. The single hair, fixed at
one end, can be tilted in any direction. The tilt shows the degrees of
freedom of the information the neuron can store via its OUTPUT weights.

     Training a layer of a neural net entails moving the points around on 
the paper and adjusting the direction of their hairs.
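
     In code, one training step does exactly that (a toy Python sketch of
a single hidden neuron with delta-rule-style updates; all numbers
invented):

import numpy as np

# One hidden neuron with 3 inputs and 2 outputs: w_in is its "point"
# on the paper, w_out is its "hair".
w_in   = np.array([ 0.2, -0.4,  0.1])
w_out  = np.array([ 0.3,  0.6])
x      = np.array([ 1.0,  0.5, -0.5])
target = np.array([ 1.0,  0.0])
lr = 0.1

h   = np.tanh(w_in @ x)           # hidden activation
y   = w_out * h                   # output: a point in 2-space
err = y - target                  # squared-error gradient at the output

# Compute both gradients, then step: tilt the hair, move the point.
g_out = err * h
g_in  = (err @ w_out) * (1.0 - h**2) * x
w_out -= lr * g_out
w_in  -= lr * g_in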

    Pretty hairy, eh?  :)

--
Douglas G. Danforth   		    (danforth@riacs.edu)
Research Institute for Advanced Computer Science (RIACS)
M/S 230-5, NASA Ames Research Center
Moffett Field, CA 94035