[comp.ai.neural-nets] Back Propagation problem..help needed.

tom@maths.tcd.ie (Thomas Murphy) (02/20/91)

I have to write a NN to play tic-tac-toe, and I'm having problems....
I'm using a simple(?) back propagation algorithm and I hope to supervise
its learning stage. 
	I know that this is probably easy for most of you out there, but I've
just started on NNs and I'm not doing so well..8-)



OK, this is the system I'm using, in simple detail.
     A           B
 0--[]----\ /---[]----0
 0--[]-----*----[]----0 <--- one of nine output nodes      
 0--[]----/^\---[]----0         
 0         |     ^
 0         |      \________one set of nine weights (three sets on either side)
 0          \_ one of 
 0            three hidden
 0            nodes
 0
 ^
 |
  \__ Nine input 
       nodes


   inputs are [-1, 0, 1],

   the value of each hidden node = the sum of (input*weight) over its inputs.

	there are two groups, of three sets, of nine weights, marked A and B on
the diagram; however, only one set from each group is shown here.
The net is fully interconnected.
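To make the setup above concrete, here is a minimal sketch of the forward pass for a fully connected 9-3-9 net. The group names A and B follow the diagram; the tanh activation, the weight ranges, and the random seed are my own assumptions, since the post only says "sum of (input*weight)":

```python
import math
import random

random.seed(0)

N_IN, N_HID, N_OUT = 9, 3, 9   # nine inputs, three hidden nodes, nine outputs

# Group A: weights from the nine inputs into each hidden node.
# Group B: weights from the three hidden nodes into each output node.
A = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)] for _ in range(N_HID)]
B = [[random.uniform(-0.5, 0.5) for _ in range(N_HID)] for _ in range(N_OUT)]

def forward(board):
    """board is nine values from [-1, 0, 1]; returns (hidden, output) lists."""
    hidden = [math.tanh(sum(w * x for w, x in zip(A[h], board)))
              for h in range(N_HID)]
    output = [math.tanh(sum(w * h for w, h in zip(B[o], hidden)))
              for o in range(N_OUT)]
    return hidden, output

hidden, output = forward([1, -1, 0, 0, 1, 0, -1, 0, 0])
```

With tanh the node values stay in (-1, 1), which lines up with the [-1, 0, 1] board encoding.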

At the moment I am updating the B set of weights as follows:

	newweight=oldweight + (frac)*(error)

	where error is the error between the value the output node gave and
	what it should have given, and frac is the fraction assigned to
	speed of learning.

I'm having trouble adjusting the set A of weights, and I can't find an
algorithm that makes any sense for doing it.
Also, do I need thresholds? So far I have ignored them, but are they important?

Some of the algorithms I've come across suggest that the update for
weights should be :-
	
	newweight= oldweight + (frac)*(error)*input-from-node

but this makes no sense to me.
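For what it's worth, that extra input factor is exactly what gradient descent on a squared error gives you for a linear output node, so the rule does have a derivation behind it. A quick numerical check (a sketch with made-up numbers, not from the original post):

```python
# One weight w feeding a linear output node: out = w * inp.
# Squared error E = 0.5 * (target - out)**2.
# dE/dw = -(target - out) * inp, so a gradient-descent step is
#   w += frac * error * inp
# which is where the input-from-node factor comes from.

def error_fn(w, inp, target):
    return 0.5 * (target - w * inp) ** 2

w, inp, target = 0.3, -1.0, 1.0
error = target - w * inp          # 1.0 - (0.3 * -1.0) = 1.3

# Analytic gradient of -E with respect to w, vs. a numerical check:
analytic = -(-error * inp)        # i.e. error * inp with the signs written out
eps = 1e-6
numeric = -(error_fn(w + eps, inp, target)
            - error_fn(w - eps, inp, target)) / (2 * eps)
```

The two agree, so the update newweight = oldweight + frac*error*input is just a step downhill on the squared error.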

A suggestion for finding the error of the hidden node was

	error = (input)(1-input)(desired-output)
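That suggestion looks like half of the standard rule for logistic (sigmoid) units: the out*(1-out) part is the sigmoid's derivative, but the last factor should be the weighted sum of the output-node errors fed back through the B weights, not a desired output (hidden nodes have no desired output of their own). A sketch of the usual deltas, with made-up numbers and a hypothetical single weight B_oh:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Standard backprop deltas for sigmoid units:
#   output delta:  d_o = out_o * (1 - out_o) * (target_o - out_o)
#   hidden delta:  d_h = out_h * (1 - out_h) * sum over o of (d_o * B[o][h])
# and then the A weights update as  A[h][i] += frac * d_h * input_i.

out_o, target_o = 0.8, 1.0
d_o = out_o * (1 - out_o) * (target_o - out_o)

out_h = 0.6
B_oh = 0.5                        # hypothetical weight from hidden h to output o
d_h = out_h * (1 - out_h) * (d_o * B_oh)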


Help!!!

I'm begining to regret ever taking on this project.

Tom.
ps: I've just reread this file and it's not really very clear so if you have 
any problems please mail me here with them and I'll try to clarify them.
I'll reply to all mail I get if you so desire.
All mail gratefully recieved.

ZUR072@DMSWWU1C.BITNET (Ulrich Kuehn) (02/28/91)

In <1991Feb20.135431.27119@maths.tcd.ie> Thomas Murphy writes:
  Also, do I need thresholds, so far I have ignored them but are they
  important.

Yes, tese thresholds are very important for convergence of bp.  Yo can handle
them as a connection to a neuron which is always turned on, so all you have to
do is to use the learning algorithm on this weight for each neuron, too.

Ulrich Kuehn