[comp.sys.amiga] blitting a neural net.

hugh@censor.UUCP (Hugh D. Gamble) (05/19/89)

In article <3029@cps3xx.UUCP>, golden@cps3xx.UUCP (golden james) writes:
> What about using the blitter to implement a neural network, since they
> usually suffer from hundreds of simple integer calculations?  Could you
> simply "blit" the network recursively to obtain a result?
> 
> Mike Golden
> Physiology Undergraduate
> Michigan State University

That's something I thought of just after I discovered Tom's nifty keen
life program that mapped so beautifully on to the blitter instructions.
So I asked a few experts and they either said no without thinking,
or thought for a second before saying the same thing.

I still suspect that you could set up an algorithm that would use the
blitter in combination with the CPU to achieve better results than
you would get from just using the CPU, but it would take some thinking.

I don't think it would be a particularly good fit.  Especially if
you had an '030 & 882 or a gaggle of transputers in your amiga, I
don't think the effort required to use the blitter would be worth it.
Besides, then you couldn't play space spuds at the same time your
Amiga was running a neural net program to decide for you whether you
should watch Oprah or Geraldo :-)

-- 
Hugh D. Gamble (416) 581-4354 (wk), 267-6159 (hm) (Std. Disclaimers)
hugh@censor, kink!hugh@censor
# It may be true that no man is an island,
# but I make a darn good peninsula.

jal@wsu-cs.uucp (Jason Leigh) (05/20/89)

Actually you'd be lucky to get a neural net to decide that much.
Anyway, is there anyone out there interested in a tiny Neural Net
Construction Set written for the Amiga in XLISP?

jal@zeus.cs.wayne.edu
Jason Leigh

ranjit@grad2.cis.upenn.edu (Ranjit Bhatnagar) (05/20/89)

Amazingly, it's been proven (statistically) that a network of McCulloch-Pitts
neurons (see below) has nearly as much computational power after you restrict
it so that all the interconnection weights are 0, 1, or -1.  That is to say,
given a network with arbitrary weights that can recognize N inputs reliably,
a network with binary interconnection weights (0, 1, or -1 only) need only
be a few times larger to recognize N inputs just as reliably.  

Since the implementation of such a restricted network consists of
nothing but zillions of 1-bit adds and multiplies, and a relatively
small number of integer operations, and any network
that does anything reasonable is going to be very large, perhaps the
blitter would actually be able to simulate such a network very efficiently
compared to a general purpose processor (like the 68000).  
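To see why the restricted case maps onto word-wide logic, here's a minimal C sketch (my own, for an ordinary CPU -- not actual blitter setup code; the names binary_neuron, pos, and neg are invented for illustration).  With weights packed as two bitmasks, a 32-input weighted sum is just two ANDs and two bit counts:

```c
#include <stdint.h>

/* Count set bits in a 32-bit word (portable popcount). */
static int popcount32(uint32_t x)
{
    int n = 0;
    while (x) { x &= x - 1; n++; }
    return n;
}

/* One neuron with weights restricted to {-1, 0, +1}, packed as two
 * bitmasks: bit i of `pos` set means w[i] = +1, bit i of `neg` set
 * means w[i] = -1 (a zero weight is a clear bit in both).  For binary
 * inputs the weighted sum collapses to two ANDs and two bit counts --
 * exactly the kind of operation a word-wide logic unit does 32
 * inputs at a time.  Returns 1 if the sum exceeds threshold k. */
static int binary_neuron(uint32_t inputs, uint32_t pos, uint32_t neg, int k)
{
    int subtotal = popcount32(inputs & pos) - popcount32(inputs & neg);
    return (subtotal > k) ? 1 : 0;
}
```

The blitter angle is that the ANDs (and, with more cleverness, the carry chains of the bit counting) can be done a whole bitplane at a time instead of one word at a time.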

	- Ranjit

A McCulloch-Pitts neuron is the following function (approximately -
it shows up in slightly different forms): [it's easier to write
pseudo-code than math functions on a terminal!]

A function of n real numbers which returns one integer.
The array of n weights w[i] and threshold K are real values
associated with a particular neuron - it is these values that
are tweaked to make a network learn things.

	subtotal = 0;
	for i = 1 to n 
		subtotal = subtotal + w[i] * input[i];
	if subtotal > K then output = 1;
			else output = 0 (or -1, whichever you prefer)

In the binary approximation, the weight matrix only contains -1's, 0's,
or 1's instead of arbitrary real numbers.  To simplify things, K is often
restricted to be 0, though this reduces the power of the neuron.
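For the curious, here's the pseudo-code above turned into real C (my own transcription, with 0-based arrays; the name mp_neuron is invented, the rest follows the definition above):

```c
#include <stddef.h>

/* The pseudo-code above as a C function: returns 1 if the weighted
 * sum of the n inputs exceeds the threshold k, else 0.  The weights
 * w[] and the threshold k are the values that get tweaked to make a
 * network learn things. */
static int mp_neuron(const double *input, const double *w, size_t n, double k)
{
    double subtotal = 0.0;
    size_t i;

    for (i = 0; i < n; i++)
        subtotal += w[i] * input[i];
    return (subtotal > k) ? 1 : 0;
}
```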

"Trespassers w"   ranjit@eniac.seas.upenn.edu	mailrus!eecae!netnews!eniac!...
	   "Such a brute that even his shadow breaks things." (Lorca)

u-jmolse%sunset.utah.edu@wasatch.utah.edu (John M. Olsen) (05/21/89)

In article <3029@cps3xx.UUCP>, golden@cps3xx.UUCP (golden james) writes:
> What about using the blitter to implement a neural network, since they
> usually suffer from hundreds of simple integer calculations?  Could you
> simply "blit" the network recursively to obtain a result?

In article <11269@netnews.upenn.edu> ranjit@grad2.cis.upenn.edu.UUCP
(Ranjit Bhatnagar) writes:
><McCulloch-Pitts neurons>
>Since the implementation of such a restricted network consists of
>nothing but zillions of 1-bit adds and multiplies, and a relatively
>small number of integer operations, and any network
>that does anything reasonable is going to be very large,
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>perhaps the
>blitter would actually be able to simulate such a network very efficiently
>compared to a general purpose processor (like the 68000).  

Uh, have you seen the size of the networks that are used to do loan
application analysis?  A whopping 15-20 nodes.  Now if you want to talk
about things like image analysis and enhancement, or robot vision, you will
get lots of nodes.

> 
> Mike Golden
> Physiology Undergraduate
> Michigan State University

In some lost article, hugh@censor (Hugh D. Gamble) writes:
>That's something I thought of just after I discovered Tom's nifty keen
>life program that mapped so beautifully on to the blitter instructions.
>So I asked a few experts and they either said no without thinking,
>or thought for a second before saying the same thing.

Back when I thought (during a moment of madness) I might have some spare 
time, I thought about this, and came up with the idea of implementing a 
3-dimensional back-propagation neural network with the blitter.  After 
doing some back-of-the-envelope calculations, I figured it would be about 1/4 
to 1/2 as fast to teach when compared with professional net software on fast 
PC ATs, but after it's learned its stuff, it would blaze away at *over* 
5-10X the typical speeds (number of nodes calculated per second).

I used the blitter to do lots of parallel addition in a BADGE killer demo 
entry last year (Where *ARE* those prizes, you guys?!), which helped to
inspire me.  The way I figure it, a large network using binary values is
equivalent to a not-so-large network using integer or floating point
weights and computation.

An additional blessing of using the blitter is storage space.  You can use
huge 2 or 3 dimensional bit arrays (called screens :^) and most of the code
is just interface stuff for the blitter.  That way, you can even watch
the network run. :^)

As a side note, the LIFE game is actually a neural net where each node
has the same rules and weights, and just takes input from its neighbors.
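To make that concrete, here's a toy C version of one LIFE generation written as a threshold rule on the neighbor sum (my own sketch on a tiny wrap-around byte grid; a real blitter version would keep each cell as one bit in a plane):

```c
#define W 8
#define H 8

/* One generation of LIFE, written to stress the neural-net reading:
 * every cell is a node that applies the same fixed rule to the sum
 * of its 8 neighbors (each with weight 1) plus its own state. */
static void life_step(const unsigned char in[H][W], unsigned char out[H][W])
{
    int x, y, dx, dy;

    for (y = 0; y < H; y++) {
        for (x = 0; x < W; x++) {
            int sum = 0;
            for (dy = -1; dy <= 1; dy++)
                for (dx = -1; dx <= 1; dx++) {
                    if (dx == 0 && dy == 0)
                        continue;
                    sum += in[(y + dy + H) % H][(x + dx + W) % W];
                }
            /* The "activation function": alive iff the neighbor sum
             * is 3, or is 2 and the cell was already alive. */
            out[y][x] = (sum == 3 || (sum == 2 && in[y][x])) ? 1 : 0;
        }
    }
}
```

The blitter trick is that those neighbor sums can be accumulated a whole plane at a time with shifted copies and logic operations, instead of cell by cell like this.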

+  /|  |    /||| /\|       | John M. Olsen, 1547 Jamestown Drive  +
|  \|()|\|\_ |||.\/|/)@|\_ | Salt Lake City, UT  84121-2051       |
|   |  u-jmolse%ug@cs.utah.edu or ...!utah-cs!utah-ug!u-jmolse    |
+             (Net address changing July 10-15, 1989)             +

raz%kilowatt@Sun.COM (Steve -Raz- Berry) (05/22/89)

In article <730@wsu-cs.uucp> jal@cs.wayne.edu (Jason Leigh) writes:

>Anyway, is there anyone out there interested in a tiny Neural Net
>Construction Set written for the Amiga in XLISP?

Send it to Bob, he'll post anything. ;-)

(but seriously, pls send it)


=>jal@zeus.cs.wayne.edu
=>Jason Leigh


Steve -Raz- Berry      Disclaimer: I didn't do nutin!
UUCP: sun!kilowatt!raz                    ARPA: raz%kilowatt.EBay@sun.com
"Fate, it protects little children, old women, and ships named Enterprize"

page%swap@Sun.COM (Bob Page) (05/23/89)

raz@sun.UUCP (Steve -Raz- Berry) wrote:
>Send it to Bob, he'll post anything. ;-)

Is this a compliment?

I'm not posting anims, IFF (ILBM, SMUS, etc.) files, WP documents or
any of that 'end product' kind of stuff - there are plenty of other
avenues for publishing your work without abusing the USENET medium.
Keymaps, fonts, printer drivers and the like are OK though.  Anything
else is up to me and my mood-of-the-day.  :-/

..bob

PS pickled cumquats?

keithd@gryphon.COM (Keith Doyle) (05/29/89)

In article <1870@wasatch.utah.edu> u-jmolse%sunset.utah.edu.UUCP@wasatch.utah.edu (John M. Olsen) writes:
.I used the blitter to do lots of parallel addition in a BADGE killer demo 
.entry last year (Where *ARE* those prizes, you guys?!), which helped to
.inspire me.  

Which one was that?  I've only seen 2 of the entries, and this one
definitely sounds interesting.  Let me know which one it was (via
email); I'd like to keep an eye out for it.


Keith Doyle
keithd@gryphon.COM    gryphon!keithd     gryphon!keithd@elroy.jpl.nasa.gov