[comp.ai.digest] Notes on Neural Networks

kanecki@VACS.UWP.WISC.EDU (David Kanecki) (11/15/88)

 
 
Notes on Neural Networks:
 
 
During the month of September, while trying various
experiments on neural networks, I noted two observations:
 
1. Depending on how the data for the A and B matrices
   are set up, the learning equation

       W(n) = W(n-1) + nn * (t(n) - o(n)) * i^T(n)

   may take more presentations for the system to learn
   the A-to-B output. (A sketch of this rule in code
   follows the list.)
 
2. Neural networks are self-correcting: if an incorrect
   W matrix is given, the presentation/update process will
   still drive the W matrix to give the correct answers,
   but the values of its individual elements will differ
   when compared to a correct W matrix.
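
Below is a minimal sketch of the update rule in Python/NumPy.
The learning rate nn, the threshold of 0 on the output, and the
weight layout (one row per input element, one column per output
element, matching the W matrices printed below) are my assumptions;
the post does not state them.

import numpy as np

def present(W, i_vec, t_vec, nn=0.25):
    # One presentation of W(n) = W(n-1) + nn*(t(n)-o(n))*i^T(n).
    # Outputs are thresholded at 0 (assumption).
    o_vec = np.where(i_vec @ W > 0.0, 1.0, 0.0)     # current output o(n)
    return W + nn * np.outer(i_vec, t_vec - o_vec)  # outer-product update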
 
 
Case 1: Different A and B matrix setup
 
For example, in applying neural networks to the XOR problem
I used the following A and B matrices (the two A bits plus a
hidden unit H form the input; the targets are H and B):

A1 A2 H | H  B
--------|-----
0  0  0 | 0  0
0  1  0 | 0  1
1  0  0 | 0  1
1  1  1 | 1  0
 
My neural network learning system took 12 presentations to
arrive at the correct B matrix when presented with the corresponding
A matrix. The W matrix was:
 
 
W(12) =     |  -0.5  0.75 |
            |  -0.5  0.75 |
            |  3.5  -1.25 |
 
 
For the second test I set the A and B matrices as follows:

A1 A2 H | B
--------|--
0  0  0 | 0
0  1  0 | 1
1  0  0 | 1
1  1  1 | 0
 
This setup took 8 presentations for my neural network learning
system to arrive at a correct B matrix when presented with the
corresponding A matrix. The final W matrix was:
 
W(8) = | -0.5 -0.5 2.0 |
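
As a sketch of how such presentation counts can be reproduced,
the loop below presents each A row in turn, applies the update,
and stops once the thresholded outputs match B. The learning rate,
the 0 threshold, and reading one "presentation" as one pass over
all four patterns are my assumptions, so the counts it produces
need not match the 12 and 8 above.

import numpy as np

def train(A, B, W=None, nn=0.25, max_pres=1000):
    # Present the patterns until the thresholded outputs match B.
    # Returns the final W and the number of presentations used.
    if W is None:
        W = np.zeros((A.shape[1], B.shape[1]))
    else:
        W = W.copy()                             # don't mutate caller's W
    for n in range(max_pres):
        out = np.where(A @ W > 0.0, 1.0, 0.0)
        if np.array_equal(out, B):
            return W, n
        for i_vec, t_vec in zip(A, B):           # one presentation
            o = np.where(i_vec @ W > 0.0, 1.0, 0.0)
            W += nn * np.outer(i_vec, t_vec - o)
    return W, max_pres

# Second XOR setup from above: columns A1, A2, and the hidden unit H.
A = np.array([[0,0,0], [0,1,0], [1,0,0], [1,1,1]], dtype=float)
B = np.array([[0], [1], [1], [0]], dtype=float)
W, n = train(A, B)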
 
 
Conclusion: These experiments indicate to me that a
system's learning rate can be increased by presenting as
little extraneous data as possible.
 
 
--------------
 
 
Case 2: Self-Correction of Neural Networks
 
In this second experiment I found that neural networks
exhibit great flexibility. The experiment turned out to
be a happy accident. Before I had developed my neural network
learning system, I was doing neural network experiments by
spreadsheet and hand transcription. During the transcription,
three elements in the 6 x 5 W matrix got the wrong sign. The
transcribed W matrix was:
 
 
       | 0.0  2.0  2.0  2.0  2.0 |
       |-2.0  0.0  4.0  0.0  0.0 |
W(0)=  | 0.0  2.0 -2.0  2.0 -2.0 |
       | 0.0  2.0  0.0 -2.0  2.0 |
       |-2.0  4.0  1.0  0.0  0.0 |
       | 2.0 -4.0  2.0  0.0  0.0 |
 
 
 
 
After applying the learning algorithm for 24 presentations,
the W matrix was:

W(24)   = | 0.0    2.0   2.0   2.0   2.0  |
          |-1.53  1.18  1.18  -0.25 -0.15 |
          | 0.64  0.12  -0.69  1.16 -0.50 |
          | 0.27 -0.26  -0.06 -0.53  0.80 |
          |-1.09  1.62   0.79 -0.43 -0.25 |
          | 1.53 -1.18  -0.68  0.25  0.15 |
 
 
That is, starting from the mis-transcribed W(0), it took 24
presentations for the W matrix to give the correct B matrix when
presented with the corresponding A matrix.
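
The A and B data for this 6 x 5 problem are not given above, but
the same self-correction can be sketched with the Case 1 XOR data,
reusing train() from the earlier sketch: learn a W, flip the sign
of an element (the index flipped here is illustrative only), and
resume the presentation/update process from the corrupted matrix.

# Illustrative only: corrupt a learned W, then let the
# presentation/update process repair its behavior.
W_good, _ = train(A, B)
W_bad = W_good.copy()
W_bad[2, 0] = -W_bad[2, 0]         # wrong sign, as in the transcription
W_fixed, n = train(A, B, W=W_bad)  # resume updates from the bad matrix
# W_fixed gives the correct B for every A row, but its elements
# need not equal those of W_good.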
 
 
But when the experiment was run on my neural network learning
system, I had a W(0) matrix of:
 
W(0) =   | 0.0  2.0  2.0  2.0  2.0  |
         |-2.0  0.0  4.0  0.0  0.0  |
         | 0.0  2.0 -2.0  2.0 -2.0  |
         | 0.0  2.0 -2.0 -2.0  2.0  |
         |-2.0  4.0  0.0  0.0  0.0  |
         | 2.0 -4.0  0.0  0.0  0.0  |
 
 
After 5 presentations the W(5) matrix came out to be:
 
W(5) =   | 0.0   2.0  2.0  2.0  2.0  |
         |-2.0   0.0  4.0  0.0  0.0  |
         | 0.0   2.0 -2.0  2.0 -2.0  |
         | 0.0   2.0 -2.0 -2.0  2.0  |
         |-2.0   4.0  0.0  0.0  0.0  |
         | 2.0  -4.0  0.0  0.0  0.0  |
 
Conclusion: Neural networks are self-correcting, but the final
W matrix may have different values. Also, if a W matrix does not
have to go through the test/update procedure, it can be used both
ways, in that an A matrix generates the B matrix and a B matrix
generates the A matrix, as in the second example.
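
The "both ways" use reads like running the same weights in reverse
through the transpose, as in a bidirectional associative memory.
The post does not spell out the reverse pass, so the backward
function below (building on the earlier sketches) is my assumption.

# Forward: A -> B through W.  Backward: B -> A through W^T (assumption).
def forward(W, A):
    return np.where(A @ W > 0.0, 1.0, 0.0)

def backward(W, B):
    return np.where(B @ W.T > 0.0, 1.0, 0.0)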
 
----------------
 
 
I am interested in communicating and discussing various
aspects of neural networks. I can be contacted at:
 
kanecki@vacs.uwp.wisc.edu
 
or at:
 
David Kanecki
P.O. Box 93
Kenosha, WI 53140