[comp.ai.neural-nets] Extracting Rules from a Trained Network

bakker@cs.uq.oz.au (Paultje Bakker) (04/18/91)

I am interested in pointers to any articles, researchers, papers,
or books that have investigated the extraction of rules from a
successfully trained neural network.

Does anyone know if this is indeed possible? Can human-readable
rules be deduced from the distributed weights and connections of a
neural network?

Any help would be greatly appreciated.

Paul Bakker

--
--Paul Bakker   --   email: bakker@cs.uq.oz.au
--Depts. of Computer Science/ Psychology    ------
--University of Queensland  ----           --------
--New Holland   ---     -------------    ------------         

jdm5548@tamsun.tamu.edu (James Darrell McCauley) (04/19/91)

In article <886@uqcspe.cs.uq.oz.au>, bakker@cs.uq.oz.au (Paultje Bakker) writes:
|> I am interested in pointers to any articles, researchers, papers,
|> or books that have investigated the extraction of rules from a
|> successfully trained neural network.
|> 

On a similar note, Martin Wildberger of General Physics presented something
at the Simulation Multiconference in New Orleans a couple of weeks ago on 
using weights to determine the significance of inputs and then "fuzzify" them 
and change them to verbiage so that a non-techie could understand why an NN 
came to a particular solution.
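The two-step shape of that idea (a numeric ranking derived from the weights, then a mapping onto linguistic labels) can be sketched roughly as follows. This is my own illustrative reconstruction, not Wildberger's actual method -- the function names, thresholds, and labels are all assumptions:

```python
# Illustrative sketch only: rank inputs by the magnitude of their
# weights into the first hidden layer, then map the scores onto
# fuzzy linguistic labels.  Names and thresholds are my own guesses.

def input_significance(weights):
    """weights[j][i] is the weight from input i to hidden unit j."""
    n_inputs = len(weights[0])
    return [sum(abs(row[i]) for row in weights) for i in range(n_inputs)]

def fuzzify(score, max_score):
    """Turn a raw significance score into verbiage a non-techie can read."""
    ratio = score / max_score if max_score else 0.0
    if ratio > 0.75:
        return "very important"
    if ratio > 0.40:
        return "moderately important"
    return "of little importance"

# A toy 2-hidden-unit, 3-input weight matrix:
weights = [[0.9, -0.1, 0.4],
           [1.2,  0.0, -0.3]]
scores = input_significance(weights)
labels = [fuzzify(s, max(scores)) for s in scores]
# input 0 has the largest total weight and comes out "very important"
```

The real question, of course, is whether summed weight magnitude is a faithful measure of significance at all -- hence the request for references.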

This is a second-hand account of his talk; I was unable to stay in town.
Does anyone have any references to this type of thing?  This again is
second-hand, but I heard that when folks asked for references (or even copies
of his slides), he referred them to a publication from last year, either
in the SMC Proceedings or some NN conference.

-- 
James Darrell McCauley, Grad Res Asst, Spatial Analysis Lab 
Dept of Ag Engr, Texas A&M Univ, College Station, TX 77843-2117, USA
(jdm5548@diamond.tamu.edu, jdm5548@tamagen.bitnet)

sfp@mars.ornl.gov (Phil Spelt) (04/19/91)

In article <14940@helios.TAMU.EDU> jdm5548@tamsun.tamu.edu (James Darrell McCauley) writes:
>In article <886@uqcspe.cs.uq.oz.au>, bakker@cs.uq.oz.au (Paultje Bakker) writes:
>|> I am interested in pointers to any articles, researchers, papers,
>|> or books that have investigated the extraction of rules from a
>|> successfully trained neural network.
>|> 
>
>On a similar note, Martin Wildberger of General Physics presented something
>at the Simulation Multiconference in New Orleans a couple of weeks ago on 
>using weights to determine the significance of inputs and then "fuzzify" them 
>and change them to verbiage so that a non-techie could understand why an NN 
>came to a particular solution.
>
>This is a second-hand account of his talk - I was unable to stay in town.
>Does anyone have any references to this type of thing?  This again is
>second-hand, but I heard that when folks asked for references (or even copies
>of his slides) that he referred them to a publication last year, either
>in the SMC Proceedings or some NN conference.

I have seen several postings on this topic in this newsgroup, so I finally 
decided that no one reading this group is aware of work being done at Florida
State University.  The work started as graduate work by Dave Kuncicky ("kun-
sisky"), and has been picked up by others in the FSU CS Department:  Susan
Hruska and Chris Lacher, most notably.  Their "mission" is to explore
the transfer of knowledge between [simple] expert systems and neural nets --
in both directions.  Snail-mail for these people is:

Department of Computer Science
Florida State University
Tallahassee, FL  32306

Their work has been presented at both the Auburn Workshops on ANNs ('90
and '91).  Although I was skeptical a year ago about the results, I have
since become convinced that this is a potentially very useful line of
investigation -- permitting the "fine tuning" of expert systems by
training a specially-designed [backprop] net, then transferring the
knowledge back to the ES.  The net would be designed on the basis of
expert knowledge initially encoded into the ES.  Contact these researchers
if you are interested.  They LOVE to talk about their work!
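To make the rule-to-network direction concrete, here is a minimal sketch in which each IF-THEN rule becomes one hidden unit whose weights implement a soft AND of its conditions. This is purely illustrative and is not the FSU group's actual encoding; every name and constant below is my own assumption:

```python
# Illustrative only -- not the FSU group's actual encoding.  Each
# IF-THEN rule becomes one hidden unit; its weights implement a soft
# AND of the rule's conditions, and backprop training could later
# fine-tune the weights before reading them back as condition strengths.

import math

def rule_unit(condition_indices, n_inputs, w=4.0):
    """Build weights/bias so the unit fires only when all conditions hold."""
    weights = [w if i in condition_indices else 0.0 for i in range(n_inputs)]
    bias = -w * (len(condition_indices) - 0.5)   # threshold just below "all true"
    return weights, bias

def fire(weights, bias, inputs):
    net = sum(wi * xi for wi, xi in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-net))          # standard sigmoid unit

# Rule: IF cond0 AND cond2 THEN conclusion  (3 input conditions total)
w, b = rule_unit({0, 2}, n_inputs=3)
# fire(w, b, [1, 0, 1]) > 0.5, while fire(w, b, [1, 0, 0]) < 0.5
```

Because the initial weights directly mirror the rule conditions, the tuned weights after training stay interpretable enough to transfer back into the ES -- which is the whole point of the exercise.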

=============================================================================
MIND.  A mysterious form of matter secreted by the brain.  Its chief activity
consists in the endeavor to ascertain its own nature, the futility of the
attempt being due to the fact that it has nothing but itself to know itself
with.   -- Ambrose Bierce
=============================================================================

Phil Spelt, Cognitive Systems & Human Factors Group  sfp@epm.ornl.gov
============================================================================
Any opinions expressed or implied are my own, IF I choose to own up to them.
============================================================================

guedalia@bimacs.BITNET (David Guedalia) (04/21/91)

In article <886@uqcspe.cs.uq.oz.au> bakker@cs.uq.oz.au writes:
>I am interested in pointers to any articles, researchers, papers,
>or books that have investigated the extraction of rules from a
>successfully trained neural network.
>
>Does anyone know if this is indeed possible? Can human-readable
>rules be deduced from the distributed weights and connections of a
>neural network?
>
    What type of network are you talking about?  I remember some
mention on this board about using a neural net in an expert system --
would that be the same?
      In a Kohonen feature map the weights by themselves would not
say much, but the distribution of the weights in the map
should have some meaning.  Has anyone heard of, or have
any ideas about, how one could represent a feature map not by its
weights but by the relationships between its neighborhoods?
     I have seen something called instars and out-stars; an out-star
could be a feature map, and an instar would be the opposite -- a way of
representing the feature map by a single vector.  Has anyone seen
references to that?
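One concrete reading of the neighborhood idea: describe a trained feature map not by its weight vectors but by the distances between neighboring units, so that cluster boundaries show up as ridges of large distance. A rough sketch (the function name and the toy grid are my own assumptions, not anything from the literature asked about above):

```python
# Sketch: summarize a trained Kohonen map by each unit's mean distance
# to its 4-connected neighbors, rather than by the raw weight vectors.
# Large values mark boundaries between neighborhoods of similar units.

import math

def neighbor_distances(grid):
    """grid[r][c] is a unit's weight vector (a tuple of floats)."""
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            ds = []
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    ds.append(math.dist(grid[r][c], grid[nr][nc]))
            out[r][c] = sum(ds) / len(ds)
    return out

# Three similar units and one outlier in a 2x2 map:
grid = [[(0.0, 0.0), (0.1, 0.0)],
        [(0.0, 0.1), (1.0, 1.0)]]
dmap = neighbor_distances(grid)
# dmap is small around the three similar units, large at the outlier
```

The relationship structure (which units sit near which) survives this summary even though the individual weight values are thrown away.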


     david