lugowski@ngstl1.csc.ti.COM (09/03/88)
Date: Thu, 1 Sep 88 17:17 EDT
From: lugowski@ngstl1.csc.ti.com
To: ailist@mc.lcs.mit.edu
Subject: response to the "A/D->ROM->D/A" sigmoid idea by Antti

Concerning the "analog/digital --> ROM --> digital/analog" neural
sigmoids: Over here in Texas, Gary Frazier (Central Research Labs,
Texas Instruments) and I (AI Laboratory, same) have played with a very
similar idea for over a year now.  We would have loved to keep it to
ourselves a bit longer in order to quietly work out its implications,
writing a nice understated little paper about what it buys and what it
doesn't, but -- sigh -- Antti's note from the prettier end of Europe
forces our hand:

1. Consider using RAM instead of ROM.  This allows you to learn the
   sigmoid, if you're so inclined, or otherwise modify it in real time.
   (A rough software sketch of this appears after the signature.)

2. Leave off the A/D and D/A conversions (for speed's sake) if there's
   a way to compute the thing in analog (often there is).

3. Consider other functions, rather different from sigmoids, and
   consider uses other than neural summation for network node
   activities.

4. Expect interesting system properties to emerge from this rather
   innocent-looking hardware move.  More on this in our forthcoming
   paper.  Some clues for those who want to think it through in the
   interim: (1) implementations of neural Darwinism?  (2) more bang
   for the hyper"plane" buck?  (3) faster convergence than pure
   gradient descent in weight space?

Well, we could always turn out to be totally off base on this, but
here's the goods just in case we're not.  Comments?  Anyone else
tinkering thusly?

                -- Marek Lugowski
                   AI Lab, DSEG, Texas Instruments
                   P.O. Box 655936, M/S 154
                   Dallas, Texas 75265
                   lugowski@resbld.csc.ti.com
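
P.S.  For readers who'd like item 1 in concrete terms, here is a rough
software sketch of a table-driven, rewritable activation function.
Everything in it -- the 256-entry table size, the [-8, +8) input
range, and the names load_sigmoid and activate -- is my illustrative
assumption, not a description of our hardware; it is meant only to
show the shape of the idea.

    /* Sketch of item 1: a writable lookup-table activation
     * ("RAM sigmoid").  Sizes and ranges are illustrative: an
     * 8-bit "A/D" index (256 entries), inputs clipped to [-8, +8),
     * and an ordinary logistic used to fill the table initially. */
    #include <math.h>
    #include <stdio.h>

    #define TABLE_BITS 8
    #define TABLE_SIZE (1 << TABLE_BITS)   /* 256-entry "RAM" */
    #define IN_MIN     (-8.0)
    #define IN_MAX     ( 8.0)

    static double act_table[TABLE_SIZE];

    /* Fill the table with a logistic sigmoid.  Because the table is
     * writable, it can be refilled -- or tweaked entry by entry --
     * while the network runs (item 1), and with any other function
     * shape you like (item 3). */
    static void load_sigmoid(void)
    {
        int i;
        for (i = 0; i < TABLE_SIZE; i++) {
            double x = IN_MIN + (IN_MAX - IN_MIN) * i / (TABLE_SIZE - 1);
            act_table[i] = 1.0 / (1.0 + exp(-x));
        }
    }

    /* "A/D" step: quantize the net input to a table index; "D/A"
     * step: read the stored activation back out.  No per-call exp(). */
    static double activate(double net_input)
    {
        int i;
        if (net_input < IN_MIN) net_input = IN_MIN;
        if (net_input > IN_MAX) net_input = IN_MAX;
        i = (int)((net_input - IN_MIN) / (IN_MAX - IN_MIN)
                  * (TABLE_SIZE - 1));
        return act_table[i];
    }

    int main(void)
    {
        load_sigmoid();
        printf("f(0.0) = %f\n", activate(0.0));   /* roughly 0.5   */
        printf("f(4.0) = %f\n", activate(4.0));   /* roughly 0.982 */
        return 0;
    }

Since act_table is ordinary memory, a learning rule could adjust its
entries on the fly -- which is exactly the freedom item 1 is after.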