berkhout@cernvax.UUCP (robert berkhout) (11/21/89)
Re: Problem understanding TDNNs

Hello Netters! This is my first posting, so I hope it will be ok. Anyway, here it is:

I first became interested in neural nets after reading several articles in Byte Magazine some months ago, particularly the TDNN (Time Delay Neural Network) presented as the work of Waibel and co-researchers on 'Phoneme Recognition Using TDNNs'. Being very interested in speech recognition and the impact it could have, I sent off for this and several other articles listed in the references. After first getting to grips with back-propagation and other network topologies (thanks to Lippmann's excellent article), I tackled the TDNN. All went well; I seemed to understand the concept of forcing the units to 'discover useful phonetic info ... wherever it occurred in the input'.

PROBLEMS !!
~~~~~~~~~~~

My problems started when I tried to understand the concept of 'adjusting the time-delayed weights by the AVERAGE error in those weights across time'. Now hang on a minute! Surely most of the time-delayed units will yield errors in their weights (since, by definition, the same acoustic event cannot appear at every position across the phoneme). OK, so most of the weights will need to be made more negative. Some, the ones that have detected the correct acoustic feature, will need their weights strengthened (made more positive). It seems to me that you have a lot of negative adjustments and a few positive adjustments; take the average and you still have a small, but negative, adjustment. Hence no learning.

---> I think I am missing a very important concept ! <---

Could anyone enlighten me ?

Puzzled Netter.
___________________________________________________________
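[Follow-up sketch of the averaging step in question. This is a hypothetical illustration in NumPy, not Waibel et al.'s actual code: one shared weight vector `w` is applied at every time position, back-propagation yields one gradient per position, and the tied weight gets a single update equal to the AVERAGE of those gradients. The key point is that each position's gradient is scaled by that position's input pattern, so positions where the feature is absent contribute little, and the averaged update still points downhill on the total error rather than being "mostly negative".]

```python
# Sketch of the tied-weight update in a TDNN (illustrative only).
# Assumptions: squared error, a sigmoid unit, and the same target at
# every time position -- a simplification of the real TDNN, where the
# per-position responses are combined before the phoneme decision.
import numpy as np

rng = np.random.default_rng(0)

T, D = 15, 16                  # time frames, input feature dimension
x = rng.normal(size=(T, D))    # hypothetical spectrogram-like input
w = rng.normal(size=D) * 0.1   # one weight vector, shared across time
target = 1.0                   # phoneme assumed present in the window

def forward(w, x):
    # unit response at each time position (sigmoid of a dot product)
    z = x @ w
    return 1.0 / (1.0 + np.exp(-z))

# Per-position gradients of the squared error w.r.t. the SHARED weights.
y = forward(w, x)
err = y - target                                   # shape (T,)
grad_per_pos = (err * y * (1.0 - y))[:, None] * x  # shape (T, D)

# The tied-weight rule: ONE update, the average over time positions.
lr = 0.5
w_new = w - lr * grad_per_pos.mean(axis=0)

# Averaging the gradients is the same as taking the gradient of the
# mean error, so a small step still reduces the total error.
loss_before = 0.5 * np.mean((forward(w, x) - target) ** 2)
loss_after = 0.5 * np.mean((forward(w_new, x) - target) ** 2)
assert loss_after < loss_before
```

In other words, the average is not a vote count of "detected vs. not detected": each term already carries its own sign AND magnitude from the chain rule, so the net update is whatever direction lowers the summed error, and learning proceeds.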