[net.ai] Reinventing Man

vijay@ucbvax.ARPA (Vijay Ramamoorthy) (01/30/85)

    Has anyone read "Reinventing Man" (1981) by I. Aleksander and P. Burnett?
It's about an architectural "neural net" model for the mind that, it is
claimed, has been constructed and is in use in Britain.  The claim is
further made that it can recognize, to a degree, faces (even faces which
are partially disguised).
    I. Aleksander, the then head of the British Cybernetics Society, has
made some remarkable claims about this machine.  I think his architectural
model is interesting, but I haven't heard much about this machine, called
Wisard, outside of the book.
    If anyone else has read the book, or even actually seen this machine's
performance, I'd appreciate learning what you think of it.
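    For anyone who hasn't run across it elsewhere: my understanding (which
may be imperfect) is that Wisard is built from banks of n-tuple RAM
"discriminators" rather than weighted neurons, roughly along the lines of
the toy sketch below.  The tuple size, retina size, random wiring, and the
Discriminator class are my own illustrative guesses, not parameters of the
actual machine.

# Rough sketch of an n-tuple RAM discriminator in the Wisard style.
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 32 * 32     # flattened binary "retina"
tuple_size = 8         # pixels sampled per RAM address
mapping = rng.permutation(n_pixels).reshape(-1, tuple_size)  # random wiring
n_rams = mapping.shape[0]
powers = 1 << np.arange(tuple_size)

def addresses(image_bits):
    # Gather each RAM's n-tuple of pixels, pack it into an integer address.
    return image_bits[mapping].astype(int) @ powers

class Discriminator:
    # One per class (per face): a bank of one-bit RAMs, one RAM per n-tuple.
    def __init__(self):
        self.rams = np.zeros((n_rams, 2 ** tuple_size), dtype=bool)

    def train(self, image_bits):
        self.rams[np.arange(n_rams), addresses(image_bits)] = True

    def score(self, image_bits):
        # Count how many RAMs recognize their address; the score degrades
        # gracefully when part of the image is occluded or disguised.
        return int(self.rams[np.arange(n_rams), addresses(image_bits)].sum())

# Usage: train one Discriminator per face on example images, then label an
# unknown image with the highest-scoring discriminator.

If that picture is anywhere near right, the tolerance to partial disguise
would fall out of the summed RAM responses rather than from any explicit
feature matching.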

vek@allegra.UUCP (Van Kelly) (01/31/85)

In article <4464@ucbvax.ARPA> vijay@ucbvax.ARPA (Vijay Ramamoorthy) writes:
>
>    Has anyone read "Reinventing Man" (1981) by I. Aleksander and P. Burnett?
>It's about an architectural "neural net" model for the mind that, it is
>claimed, has been constructed and is in use in Britain.  The claim is
>further made that it can recognize, to a degree, faces (even faces which
>are partially disguised).
*********************************

Recognition of images (including faces) with a "sensory net" model of
memory is not all that new.  I'll look into this book, but I'd also
recommend a little monograph (late 70's vintage) by Teuvo Kohonen
(published by Springer) to "demythologize" a lot of the "neural net"
processor claims.  Kohonen built a very simple, regular, matrix-connected
memory array with nothing more complicated than linear mathematics at
each node (an elegant "gutless wonder", with but limited resemblance
to neural interconnection topologies).  His mathematics showed that
the performance of such a device in associative recall tasks was a function
of the ratio of raw AVAILABLE storage to net IN-USE storage (number of
images stored).  Basically, this "dirt-simple" device used its surplus
storage to adaptively encode the images for maximal information-theoretic
"distance" between image traces.  Sounds fancy, but the math was really
simple.  So the question I always ask myself when I hear about "neural
net" pattern-recognition performance is whether it represents more
than just a constant-factor improvement on this "brute-force" result.
Kohonen suggested that the only way a network might radically improve 
over his basic equations (modulo a constant factor) was to incorporate 
extensive non-linear distributed feature extraction in the hardware.
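
To make the flavor of this concrete, here is a toy version of such a
linear associative memory.  The dimensions, the one-hot labels, and the
noise level are illustrative choices of mine, not figures from Kohonen's
monograph.

# Toy correlation-matrix (outer-product) associative memory with purely
# linear recall, in the spirit of Kohonen's analysis.
import numpy as np

rng = np.random.default_rng(0)

dim = 256        # raw AVAILABLE storage: length of each pattern vector
n_stored = 16    # net IN-USE storage: number of stored pattern/label pairs

# Random unit vectors stand in for preprocessed image traces.
keys = rng.standard_normal((n_stored, dim))
keys /= np.linalg.norm(keys, axis=1, keepdims=True)
labels = np.eye(n_stored)        # one-hot association to recall per trace

# The entire memory is a single matrix: a sum of outer products.
M = labels.T @ keys              # shape (n_stored, dim)

# Recall from a degraded cue is just one matrix-vector product.
cue = keys[3] + 0.3 * rng.standard_normal(dim)   # noisy version of trace 3
response = M @ cue
print("recalled trace:", int(np.argmax(response)))    # usually 3

# Cross-talk between traces grows as n_stored approaches dim, which is the
# available-vs-in-use storage ratio effect described above.

Swapping the outer-product store for M = labels.T @ np.linalg.pinv(keys.T)
gives the optimal linear mapping, but recall is still the same single
linear product, which is the point: nothing fancier than matrix algebra.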

dgary@ecsvax.UUCP (D Gary Grady) (02/01/85)

Another book on "neural net" AI machines is Robots On Your Doorstep - an
unfortunate title for an entertaining and readable (if somewhat
outdated) book.  Sorry, I don't have the author or publisher in memory;
I read the book back around 1979.
-- 
D Gary Grady
Duke U Comp Center, Durham, NC  27706
(919) 684-3695
USENET:  {seismo,decvax,ihnp4,akgua,etc.}!mcnc!ecsvax!dgary