[comp.ai.neural-nets] ALN - adaptive logic network hardware

arms@cs.UAlberta.CA (Bill Armstrong) (05/09/91)

So far about 300 people have copied the adaptive logic network
software by anonymous ftp from menaik.cs.ualberta.ca [129.128.4.241]
in pub/atree.tar.Z.  Over 30 people have subscribed to the mailing
list by sending a request to alnl-request@cs.ualberta.ca.  Those who
have used the software have been successful in getting it to run,
but two questions keep coming up:

1. What can be done with the software?
2. How can feedforward ALNs be realized in hardware?

In answer to the first question, I suggest a database application:
take your favorite data file, arranged in tabular form, and use
atree to try to predict one of the columns, given some of the
others.  This requires the "lf" language, described in the
implementation document in atree.tar.Z.  "lf" works on Suns, but at
present requires too much memory on PCs (that will be fixed in the
next release).
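
For those curious about what the encoding step involves, here is a
minimal C sketch of one way to turn a numeric column entry into a
bit vector for a boolean learner.  The thermometer code below is
only my illustration; "lf" has its own encoding, which may differ.

    /* Hedged sketch: encode one numeric cell of a table column as a
     * bit vector.  Thermometer code: bits[i] = 1 iff the value
     * exceeds the i-th quantization threshold.  Illustrative only;
     * not necessarily the encoding atree/lf uses internally. */
    #include <stdio.h>

    #define NBITS 8

    void encode(double v, double lo, double hi, int bits[NBITS])
    {
        int i;
        for (i = 0; i < NBITS; i++)
            bits[i] = (v > lo + (hi - lo) * (i + 1) / (NBITS + 1));
    }

    int main(void)
    {
        int bits[NBITS], i;
        encode(37.5, 0.0, 100.0, bits);  /* one cell, range 0..100 */
        for (i = 0; i < NBITS; i++)
            printf("%d", bits[i]);
        printf("\n");                    /* prints 11100000 */
        return 0;
    }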

One can also try to get the networks to learn an approximation of a
function other than the spherical harmonic given in the examples.
Here's an idea that might be of some use in graphics rendering
hardware:
how about using trees to approximate the functions necessary for
implementing the Torrance-Sparrow shading model?  That's an area where
speed really counts.  Some readers have said they are going to try
ALNs on OCR.  We are trying ALNs on voice with only about thirty
words, but so far with limited success because we are not very
familiar with this area.
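
On the shading idea: the costly part of the Torrance-Sparrow model
is the facet distribution term, which in the original paper is a
Gaussian in the angle alpha between the mean surface normal and the
bisector of the incident and viewing directions.  Here is a sketch
that generates (alpha, D) training pairs for a function learner; the
roughness constant c is an assumption of mine, not a value from the
paper.

    /* Hedged sketch: training data for the expensive part of
     * Torrance-Sparrow shading, the Gaussian facet distribution
     * D(alpha) = exp(-(c*alpha)^2).  An ALN trained on these pairs
     * could stand in for the exp() in rendering hardware. */
    #include <stdio.h>
    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    int main(void)
    {
        double c = 5.0;  /* roughness parameter (assumed value) */
        double alpha;
        for (alpha = 0.0; alpha <= M_PI / 2; alpha += M_PI / 180)
            printf("%f %f\n", alpha,
                   exp(-(c * alpha) * (c * alpha)));
        return 0;
    }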

As for hardware implementation, here is a brief sketch of how one
could build hardware for specific purposes.  Using the standard
measure of
feedforward speed, one can get dedicated networks which, in effect,
execute the equivalent of SEVERAL TRILLION CONNECTIONS PER SECOND
(with one-bit weights).  My apologies in advance to those who think
this way of measuring the speed of the trees is inappropriate; maybe
it is, but it's a starting point for comparisons.
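
For those who want the arithmetic behind that figure, here is the
back-of-the-envelope calculation.  The tree depth is illustrative,
not taken from any particular trained net.

    /* Worked example of the speed claim: a balanced binary tree of
     * depth 13 has 2^13 - 1 = 8191 gates, i.e. 16382 one-bit
     * connections per evaluation.  At one evaluation every 5 ns,
     * that is about 3.3e12 connections per second. */
    #include <stdio.h>

    int main(void)
    {
        long gates = (1L << 13) - 1;      /* internal nodes      */
        double conns = 2.0 * gates;       /* two inputs per gate */
        double t = 5e-9;                  /* feedforward time, s */
        printf("%.2e connections/second\n", conns / t);
        return 0;                         /* prints 3.28e+12     */
    }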

Implementing an ALN in hardware for dedicated feedforward use is
simple.  You start with a trained binary tree of two-input ANDs and
ORs (LEFTs and RIGHTs removed in the obvious way) and construct a tree
of NANDs (or NORs) of fan-in greater than two (as explained below).
You connect the leaves of that tree to the input variables and their
complements by hard wiring.  Depending on the multiplicity of
connections to the inputs, you may need some drivers to amplify the
input signals with a lot of fan-out, but the TOTAL computation time
should be on the order of a few nanoseconds (e.g. 5 ns for GaAs, not
counting the drivers, which are slower).  Logic networks can compute
whole functions in the time digital arithmetic nets do a single
multiplication!
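
To fix ideas, here is a small C model of such a dedicated tree.  The
node layout and the example wiring are my own, not anything in
atree.  In the real circuit each level costs one gate delay, so the
output settles in roughly (tree depth) x (gate delay).

    /* Hedged sketch: software model of a hardwired feedforward
     * tree.  Leaves are wired to input variables or their
     * complements; each internal node is a NAND of its children. */
    #include <stdio.h>

    struct node {
        int is_leaf;
        int var;               /* leaf: index into inputs[]        */
        int complemented;      /* leaf: wired to inverted signal?  */
        struct node *kids[4];  /* NAND fan-in up to 4 here         */
        int nkids;
    };

    int eval(const struct node *n, const int inputs[])
    {
        int i, all = 1;
        if (n->is_leaf)
            return n->complemented ? !inputs[n->var]
                                   : inputs[n->var];
        for (i = 0; i < n->nkids; i++)
            all = all && eval(n->kids[i], inputs); /* AND children */
        return !all;                               /* then invert  */
    }

    int main(void)
    {
        struct node a, b, root;
        int x[2] = {1, 0};

        a.is_leaf = 1; a.var = 0; a.complemented = 0; /* x0        */
        b.is_leaf = 1; b.var = 1; b.complemented = 1; /* NOT x1    */
        root.is_leaf = 0; root.nkids = 2;
        root.kids[0] = &a; root.kids[1] = &b;         /* NAND      */

        printf("%d\n", eval(&root, x)); /* NAND(1, !0) = 0         */
        return 0;
    }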

The above applies to a dedicated ALN.  Making a programmable
feedforward device is harder, since you have to switch the signals
from the inputs to the leaves: essentially a multiplexer per leaf,
selecting which input variable (or its complement) that leaf sees.
But it's certainly not an intractable problem.
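
Concretely, the switching amounts to one selector per leaf, and
loading the selection tables is what "programs" the device.  A
sketch follows; the layout and names are mine.

    /* Hedged sketch: programmable leaf wiring.  sel[] picks the
     * input variable each leaf sees, inv[] optionally complements
     * it.  In silicon this is a multiplexer (or RAM-driven
     * crossbar) per leaf. */
    #include <stdio.h>

    #define NLEAVES 4

    int leaf_signal(int leaf, const int inputs[],
                    const int sel[], const int inv[])
    {
        int v = inputs[sel[leaf]];
        return inv[leaf] ? !v : v;
    }

    int main(void)
    {
        int inputs[3] = {1, 0, 1};
        int sel[NLEAVES] = {0, 2, 1, 1}; /* leaf-to-input routing */
        int inv[NLEAVES] = {0, 0, 0, 1}; /* complement last leaf  */
        int i;
        for (i = 0; i < NLEAVES; i++)
            printf("%d", leaf_signal(i, inputs, sel, inv));
        printf("\n");                    /* prints 1101 */
        return 0;
    }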

Here's how to get the tree of NANDs.  Starting with the trained binary
tree, AND-gates of fan-in greater than two are obtained by taking the
subtrees formed by the two-input ANDs which are directly connected to
each other.  We treat the ORs similarly.  This gives a tree of
alternating layers of ANDs and ORs of fan-in generally greater than
two.  By De Morgan's laws, that tree can be converted to a tree of
NANDs (or NORs if that is your favorite IC technology): each OR
becomes a NAND of complemented signals, each AND becomes a NAND
whose inverted output is just what the NAND above it expects, and
the leaf wiring to the complemented inputs supplies the remaining
inversions (with one inverter at the output if the root was an AND).
This type of hardware is standard and not patentable because it's so
basic.  It can be realized by chips which are just lacking a
metallization layer.  There are many fab facilities that can turn
out something like that.
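
To make the collapse step concrete, here is a small C sketch.  The
node layout and names are my own, not those of the atree sources.
It prints the higher-fan-in form by absorbing each child whose
operator matches its parent's.

    /* Hedged sketch: collapse chains of two-input ANDs (resp. ORs)
     * into single gates of higher fan-in by absorbing any child
     * with the same operator as its parent. */
    #include <stdio.h>

    enum op { LEAF, AND, OR };

    struct gate {
        enum op op;
        struct gate *left, *right;  /* binary tree from training */
        int var;                    /* used when op == LEAF      */
    };

    void emit(const struct gate *g, enum op parent)
    {
        if (g->op == LEAF) {
            printf(" x%d", g->var);
            return;
        }
        if (g->op != parent)  /* new layer: open a wider gate */
            printf(" %s(", g->op == AND ? "AND" : "OR");
        emit(g->left, g->op);
        emit(g->right, g->op);
        if (g->op != parent)
            printf(" )");
    }

    int main(void)
    {
        static struct gate x0 = {LEAF, 0, 0, 0};
        static struct gate x1 = {LEAF, 0, 0, 1};
        static struct gate x2 = {LEAF, 0, 0, 2};
        static struct gate x3 = {LEAF, 0, 0, 3};
        static struct gate o  = {OR,  &x2, &x3, 0};
        static struct gate a1 = {AND, &x0, &x1, 0};
        static struct gate a2 = {AND, &a1, &o,  0};

        emit(&a2, LEAF);  /* LEAF as sentinel: root always prints */
        printf("\n");     /* prints: AND( x0 x1 OR( x2 x3 ) )     */
        return 0;
    }

Each AND or OR group printed here becomes one gate in the NAND (or
NOR) realization, after the De Morgan conversion described above.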

*****

There is a bit more on other implementations to be found on the
mailing list alnl@cs.ualberta.ca.

--
***************************************************
Prof. William W. Armstrong, Computing Science Dept.
University of Alberta; Edmonton, Alberta, Canada T6G 2H1
arms@cs.ualberta.ca Tel(403)492 2374 FAX 492 1071