[comp.ai.neural-nets] ANN fault tol.

rudnick@ogicse.ogc.edu (Mike Rudnick) (01/17/90)

Below is a synopsis of the references/material I received in response
to an earlier request for pointers to work on the fault tolerance of
artificial neural networks.  Although there has been some work done
relating directly to ANN models, most of the work appears to have been
motivated by VLSI implementation and fault tolerance concerns.

Apparently, and this is speculation on my part, the folklore that
artificial neural networks are fault tolerant derives mostly from the
fact that they resemble biological neural networks, which generally
don't stop working when a few neurons die here and there.

Although it looks like I'm not going to be doing ANN fault tolerance
as my dissertation topic, I can't help but feel this line of research
contains a number of outstanding Ph.D. topics.

Mike Rudnick				Computer Science & Eng. Dept.
Domain:	rudnick@cse.ogi.edu		Oregon Graduate Institute (was OGC)
UUCP: {tektronix,verdix}!ogicse!rudnick	19600 N.W. von Neumann Dr.
(503) 690-1121 X7390 (or X7309)		Beaverton, OR. 97006-1999

-----

From: platt@synaptics.com (John Platt)

  Well, one of the original papers about building a neural network in
analog VLSI had a chip where about half of the synapses were broken,
but the chip still worked. Look at

``VLSI Architectures for Implementation of Neural Networks''
by Massimo A. Sivilotti, Michael Emerling, and Carver A. Mead,
in ``Neural Networks for Computing'', AIP Conference Proceedings 151,
John S. Denker, ed., pp. 408-413
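To get a rough feel for that kind of redundancy, here is a small illustrative
simulation of my own (Python/NumPy, and in no way a model of the chip above):
a Hebbian associative memory keeps recalling its stored patterns even after
half of its synapses are zeroed out at random.

# Illustrative sketch only: Hebbian associative memory with half the synapses
# "broken".  Not a model of the Sivilotti/Emerling/Mead chip, just a toy
# demonstration of why a densely connected network tolerates many dead synapses.
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5                              # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian outer-product weights, zero diagonal
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0.0)

def recall(W, probe, steps=20):
    s = probe.copy().astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

def overlap(a, b):
    return float(a @ b) / len(a)           # +1.0 means perfect recall

# Break half of the synapses at random (set them to zero)
mask = rng.random(W.shape) < 0.5
W_broken = np.where(mask, 0.0, W)

for p in patterns:
    noisy = p.copy()
    flip = rng.choice(N, size=N // 10, replace=False)   # 10% input noise
    noisy[flip] *= -1
    print("intact: %+.2f   broken: %+.2f" %
          (overlap(recall(W, noisy), p), overlap(recall(W_broken, noisy), p)))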

-----

From: Jonathan Mills <rutgers!iuvax.cs.indiana.edu!jwmills>

You might be interested in a paper submitted to the 20th Symposium on
Multiple-Valued Logic titled "Lukasiewicz Logic Arrays", describing work
done by M. G. Beavers, C. A. Daffinger and myself.  These arrays (LLAs
for short) can be used with other circuit components to fabricate neural
nets, expert systems, fuzzy inference engines, sparse distributed memories
and so forth.  They are analog circuits, massively parallel, based on my
work on inference cellular automata, and are inherently fault-tolerant.

In simulations I have conducted, the LLAs produce increasingly noisy
output as individual processors fail, or as groups of processors randomly
experience stuck-at-one and/or stuck-at-zero faults.  While we have
much more work to do, it does appear that with some form of averaging
the output of an LLA can be preserved without noticeable error with up to
one-third of the processors faulty (as long as paths exist from some inputs
to the output).  If the absolute value of the output is taken, a chain of
pulses results so that a failing LLA will signal its graceful degradation.
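To make the averaging idea concrete, here is a toy simulation of my own (not
the LLA circuit; it only assumes each redundant unit computes the Lukasiewicz
implication a -> b = min(1, 1 - a + b)): some units are forced to stuck-at-0
or stuck-at-1, and a robust average over the array stays close to the
fault-free value.

# Toy illustration (not the LLA circuit): redundant analog units computing the
# Lukasiewicz implication, with stuck-at faults injected and the array output
# recovered by a simple average/median over the surviving units.
import numpy as np

rng = np.random.default_rng(1)

def lukasiewicz_implication(a, b):
    return np.minimum(1.0, 1.0 - a + b)

def faulty_array_output(a, b, n_units=99, fault_rate=0.33):
    out = np.full(n_units, lukasiewicz_implication(a, b))
    faulty = rng.random(n_units) < fault_rate
    # each faulty unit is stuck at 0 or at 1, chosen at random
    out[faulty] = rng.choice([0.0, 1.0], size=faulty.sum())
    return out

a, b = 0.8, 0.5
true_val = lukasiewicz_implication(a, b)          # 0.7
out = faulty_array_output(a, b)
print("true   :", true_val)
print("mean   :", out.mean())                     # biased by the stuck values
print("median :", np.median(out))                 # robust to a minority of stuck units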

VLSI implementations of LLAs are described in the paper, with an example
device submitted to MOSIS, and due back in January 1990.  We are aware of
the work of Alspector et al. and Graf et al., which is specific to
neural architectures.  Our work is more general in that it arises from
a logic with both algebraic and logical semantics, lending the dual
semantics (and its generality) to the resulting device.

LLAs can also be integrated with the receptor circuits of Mead, leading
to a design project here for a single circuit that emulates the first
several levels of the visual system, not simply the retina.  This is
almost necessary because I can put over 2,000 processors on a single
chip, but haven't the input pins to drive them!  Thus, a chip that
uses fewer processors with the majority of inputs generated on chip is
quite attractive -- especially since even with faults I'll still get
a valid result from the computational part of the device.

Sincerely,

Jonathan Wayne Mills
Assistant Professor
Computer Science Department
Indiana University
Bloomington, Indiana 47405
(812) 331-8533

-----

From: risto@CS.UCLA.EDU (Risto Miikkulainen)

I did a brief analysis of the fault tolerance of distributed
representations. In short, as more units are removed from the
representation, the performance degrades linearly. This result is
documented in a paper I submitted to Cognitive Science a few days ago:

Risto Miikkulainen and Michael G. Dyer (1989). Natural Language
Processing with Modular Neural Networks and Distributed Lexicon.

Some preliminary results are mentioned in:

@InProceedings{miikkulainen:cmss,
  author = 	"Risto Miikkulainen and Michael G. Dyer",
  title = 	"Encoding Input/Output Representations in Connectionist
Cognitive Systems",
  booktitle = 	"Proceedings of the 1988 Connectionist Models Summer School",
  year = 	"1989",
  editor = 	"David S. Touretzky and Geoffrey E. Hinton and Terrence
J. Sejnowski",
  publisher = 	"Morgan Kaufmann",
  address = 	"San Mateo, CA",
}
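The graceful degradation described above is easy to reproduce with any small
feedforward network; the toy sketch below (mine, not the system of the cited
papers) trains a little two-layer net and then zeroes out an increasing number
of hidden units, printing the error at each step.

# Toy demonstration (not the system in the cited papers): ablate hidden units
# in a small trained network and watch the output error grow as more of the
# distributed representation is removed.
import numpy as np

rng = np.random.default_rng(2)
X = rng.random((50, 8))             # random input patterns
T = rng.random((50, 4))             # random target patterns
H = 64                              # generous hidden layer -> distributed code

W1 = rng.normal(0, 0.5, (8, H))
W2 = rng.normal(0, 0.5, (H, 4))
lr = 0.1

def forward(X, W1, W2, mask=None):
    h = np.tanh(X @ W1)
    if mask is not None:
        h = h * mask                # "remove" hidden units by zeroing them
    return h, h @ W2

for _ in range(2000):               # plain batch backprop on squared error
    h, y = forward(X, W1, W2)
    err = y - T
    dW2 = h.T @ err / len(X)
    dW1 = X.T @ ((err @ W2.T) * (1 - h ** 2)) / len(X)
    W2 -= lr * dW2
    W1 -= lr * dW1

for k in range(0, H + 1, 8):        # remove 0, 8, 16, ... hidden units
    mask = np.ones(H)
    mask[rng.choice(H, size=k, replace=False)] = 0.0
    _, y = forward(X, W1, W2, mask)
    print("units removed %2d   mse %.4f" % (k, np.mean((y - T) ** 2)))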

-----

"Implementation of Fault Tolerant Control Algorithms Using Neural
Networks", systematix, Inc., Report Number 4007-1000-08-89, August 1989.

-----

From: kddlab!tokyo-gas.co.jp!hajime@uunet.uu.net

>            "A study of highly reliable systems
>        against electrical noise and element failures"
> 
>         -- Application of neural network systems --
> 
> ISNCR '89 stands for "International Symposium on Noise and Clutter Rejection
> in Radars and Image Processing, 1989".
> It was held in Kyoto, Japan, from Nov. 13 to Nov. 17.

Hajime FURUSAWA 	JUNET: hajime@tokyo-gas.co.jp
Masayuki KADO		JUNET: kado@tokyo-gas.co.jp

Research & Development Institute
Tokyo Gas Co., Ltd.
1-16-25 Shibaura, Minato-Ku
Tokyo 105
JAPAN

-----

From: <MJ_CARTE@UNHH.BITNET>  Mike Carter

"Operational Fault Tolerance of CMAC Networks", NIPS-89, by Michael J.
Carter, Frank Rudolph, and Adam Nucci, University of New Hampshire

Mike Carter also says he wrote a non-technical overview of NN fault
tolerance some time ago; it contains references to papers with some
connection to fault tolerance, though only one of them has fault
tolerance as its focus.

-----

From: Martin Emmerson <mde@ecs.southampton.ac.uk>

I am working on simulating faults in neural networks using a program
running on a Sun (Unix and C).

I am particularly interested in qualitative methods for assessing the
performance of a network, and also in faults that might occur in a real
VLSI implementation.
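For what it's worth, one common way to simulate that kind of VLSI fault in
software is at the bit level of a stored weight. A purely hypothetical sketch
(not Martin's simulator, and in Python rather than C):

# Hypothetical fault-injection sketch: model a weight stored as an 8-bit
# two's-complement word and force single bits stuck at 0 or 1.
import numpy as np

def quantize(w, scale=64):
    """Map a float weight to an 8-bit signed word (two's complement)."""
    return int(np.clip(round(w * scale), -128, 127)) & 0xFF

def dequantize(q, scale=64):
    if q >= 128:                    # reinterpret as a negative value
        q -= 256
    return q / scale

def inject_stuck_at(q, bit, stuck_value):
    """Force one bit of the stored word to 0 or 1."""
    if stuck_value:
        return q | (1 << bit)               # stuck-at-1
    return q & ~(1 << bit) & 0xFF           # stuck-at-0

w = 0.37
q = quantize(w)
print("nominal weight: %+.3f" % dequantize(q))
for bit in range(8):
    faulty = inject_stuck_at(q, bit, 1)
    print("bit %d stuck-at-1 -> weight %+.3f" % (bit, dequantize(faulty)))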
-- 
Mike Rudnick			CSnet:	rudnick@cse.ogi.edu
Oregon Graduate Institute	UUCP:	{tektronix,verdix}!ogicse!rudnick
19600 N.W. von Neumann Dr.	(503) 690-1121 X7390
Beaverton, OR. 97006-1999	OGI used to be OGC ... progress!