[comp.ai.neural-nets] Refs/opinions wanted -- Neural nets & approximate reasoning

bradb@ai.toronto.edu (Brad Brown) (11/18/88)

   I am  working on  a paper which compares symbolic and neural
   network approaches to approximate reasoning, including fuzzy
   sets, probabilistic logic, and approximate solutions to
   problems.   I would  very  much  appreciate  references  and
   personal comments.
 
   Given current  hardware technology  and current  neural  net
   (NN) learning  algorithms, NNs  seem to  have some desirable
   properties that  symbolic systems  do not,  but suffer  from
   implementation problems  that prevent them from being useful
   or efficient in many cases.  A summary of my thinking, which
   includes  many   generalizations  and  omits  justification,
   follows.
 
   Neural network-based  systems have  advantages over symbolic
   systems for the following reasons.
 
   (1)  For some classes of problems, NN learning algorithms
        are known.  In these cases, "programming" a NN is often
        a matter of presenting it with training examples and
        letting it learn.  (The first sketch after this list
        makes this concrete.)
 
        More algorithms are known for symbolic systems, and
        they can be applied to more problems than NNs, but
        constructing symbolic programs is labour intensive and
        the resulting programs are typically problem-specific.
 
   (2)  Neural nets  can adapt to changes in their environment.
        For instance,  a financial expert system implemented as
        a  NN   could  use   new  information   to  modify  its
        performance  over   time  to  reflect  changing  market
        conditions.  Symbolic systems usually either remain
        static or require re-training on a substantial fraction
        of the dataset before they can adapt to new data.
 
        Neural nets are forgiving in their response to input:
        inputs that are similar are treated similarly.  In a
        symbolic system it is very difficult to give the
        system a notion of what constitutes a "similar" input,
        so input errors and input noise are serious problems
        for such systems.
 
   (3)  NNs are good at constraint problems and have the
        desirable property of finding good compromises when no
        single best solution exists.  (The second sketch after
        this list shows a tiny constraint net settling on a
        compromise.)
 
   (4)  NNs can deal with multiple sources of information.  For
        instance, a financial system could consider inputs from
        both stock  market  information  and  internal  company
        sales information, which are not causally related.  The
        learning procedure can be expected to find weights that
        "weigh" the different kinds of evidence and judge
        accordingly.  Symbolic systems require extensive manual
        tuning before they can make effective use of multiple
        orthogonal sources of information.
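 
   To make (1), (2), and (4) concrete, here is a first toy
   sketch (in Python).  Everything in it -- the data, the two
   "evidence" inputs, the learning rate -- is an illustrative
   assumption of mine rather than any real system.  A single
   logistic unit is "programmed" purely by being shown
   examples, its learned weights combine two unrelated
   evidence sources, and nearby inputs get nearby answers.
 
      import math
 
      def predict(w, b, x):
          # Weighted sum squashed to (0, 1).  The output varies
          # smoothly with the input, so similar inputs give
          # similar answers -- the noise tolerance of point (2).
          s = b + sum(wi * xi for wi, xi in zip(w, x))
          return 1.0 / (1.0 + math.exp(-s))
 
      # "Programming" by example, point (1): hypothetical pairs
      # (market signal, internal sales signal) -> buy? (1 or 0).
      # The two inputs are unrelated evidence sources, point (4).
      data = [((0.9, 0.8), 1), ((0.8, 0.3), 1),
              ((0.2, 0.9), 0), ((0.1, 0.2), 0)]
 
      w, b, rate = [0.0, 0.0], 0.0, 0.5
      for sweep in range(2000):        # present the data repeatedly
          for x, target in data:
              y = predict(w, b, x)
              err = target - y         # delta rule: nudge each
              for i in range(len(w)):  # weight to reduce the error
                  w[i] += rate * err * x[i]
              b += rate * err
 
      # Inputs near a training case get similar answers, with no
      # explicit notion of "similarity" ever programmed in:
      print(predict(w, b, (0.85, 0.75)))   # close to 1
      print(predict(w, b, (0.15, 0.25)))   # close to 0
 
   Feeding the same loop further examples as conditions drift
   would adjust the weights in place, which is the kind of
   on-line adaptation point (2) describes.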
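 
   For (3), a second, equally made-up sketch: three binary
   units joined by symmetric weights that encode soft
   constraints which cannot all be satisfied at once.
   Repeated local updates settle into a stable low-"energy"
   state that violates as little as possible -- a compromise
   no single rule dictated.
 
      import random
 
      # Soft constraints (illustrative): w > 0 asks two units to
      # agree, w < 0 asks them to differ.  Units 1 and 2 cannot
      # both agree with unit 0 and still differ from each other.
      w = {(0, 1): 1.0, (0, 2): 1.0, (1, 2): -1.0}
 
      def weight(i, j):
          return w.get((i, j), w.get((j, i), 0.0))
 
      s = [random.choice([-1, 1]) for _ in range(3)]
      for step in range(100):          # asynchronous updates
          i = random.randrange(3)
          field = sum(weight(i, j) * s[j]
                      for j in range(3) if j != i)
          s[i] = 1 if field >= 0 else -1   # move downhill in energy
 
      energy = -0.5 * sum(weight(i, j) * s[i] * s[j]
                          for i in range(3)
                          for j in range(3) if j != i)
      print(s, energy)   # stable state meeting 2 of 3 constraints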
 
   On the other hand, practical applications of NNs are held
   back by
 
   (1)  Lack of well-understood training algorithms for many
        tasks.  Many interesting tasks simply cannot be solved
        with NNs because no one knows how to train a net to
        perform them.
 
   (2)  Difficulty in running neural nets on commercially
        available hardware.  Neural net simulations require
        vast CPU and memory resources, so NN systems may not
        be cost-effective compared to equivalent symbolic
        systems.  (A rough estimate follows this list.)
 
   (3)  Absence of any easy way to explain why a particular
        result was reached.  Because knowledge is distributed
        throughout the network, and because the network as a
        whole does not proceed stepwise toward a solution,
        explaining results is difficult.
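 
   To put rough numbers on drawback (2) -- the machine figures
   below are illustrative assumptions, not measurements -- a
   fully connected net of N units stores about N*N weights and
   costs about N*N multiply-adds per update sweep, so memory
   and time both grow with the square of the network size:
 
      n = 1000                      # units in the net
      weights = n * n               # ~1e6 connections
      memory = 4 * weights          # 32-bit weights: ~4 MB
      madds_per_sweep = weights     # one multiply-add per weight
      machine_rate = 1e6            # assume ~1 MFLOPS available
      print(memory, madds_per_sweep / machine_rate)
      # ~4 MB of weights and ~1 second per sweep; training often
      # needs thousands of sweeps, so even a modest net ties up
      # hours of CPU time.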
 
   All things considered, I am a believer in neural networks.
   I see them as the "natural" way to make big advances towards
   "human-level" intelligence, but the field is too young for
   many practical applications right now.  Symbolic approaches
   draw on a more mature and complete base of experience.
   Nevertheless, it is very difficult to get symbolic systems
   to show some of the nice traits seen in neural networks,
   such as the ability to deal with noisy and approximate
   inputs and to produce good compromise solutions.  An
   interesting middle ground would be to integrate neural
   networks into symbolic reasoning systems, which has been
   tried with some success by at least one expert system group.
 
   -----------------------------------------------------------
 
   
 
   Comments and criticisms on these thoughts would be greatly
   appreciated, as would references to current work on neural
   networks for approximate reasoning and on comparisons
   between neural networks and symbolic processing systems.
   Thank you very much for your time and thoughts.
 
   
 
                                 (-:  Brad Brown  :-)
 
                                 bradb@ai.toronto.edu

songw@csri.toronto.edu (Wenyi Song) (11/20/88)

In article <88Nov18.011810est.6198@neat.ai.toronto.edu> bradb@ai.toronto.edu (Brad Brown) writes:
>...
>   On the other hand, practical applications of NNs are held
>   back by
>... 
>   (3)  Absence of any easy way to explain why a particular
>        result was reached.  Because knowledge is distributed
>        throughout the network, and because the network as a
>        whole does not proceed stepwise toward a solution,
>        explaining results is difficult.

It may remain difficult, if not impossible, to explain the results of a
NN in terms of traditional symbolic processing.  However, this is not a
drawback if you do not attempt to unify the two into a grand theory of
AI :-)

An alternative is to explain the phenomenology in terms of the dynamics
of neural networks.  It seems to me that this is the correct way to go:
we gain much better global predictability of information processing in
neural networks by trading off controllability of the local quantum
steps.

The Journal of Complexity devoted a special issue to neural computation
this year.

Krulwich-Bruce@cs.yale.edu (Bruce Krulwich) (11/23/88)

In article <8811200202.AA15157@russell.csri.toronto.edu>, songw@csri (Wenyi
Song) writes:
>>   On the other hand, practical applications of NNs are held
>>   back by
>>... 
>>   (3)  Absence of any easy way to explain why a particular
>>        result was reached.  Because knowledge is distributed
>>        throughout the network, and because the network as a
>>        whole does not proceed stepwise toward a solution,
>>        explaining results is difficult.
>
>It may remain difficult, if not impossible, to explain the results of a
>NN in terms of traditional symbolic processing.  However, this is not a
>drawback if you do not attempt to unify the two into a grand theory of
>AI :-)

OK, but there are many reasons for explanation, and many ways to explain.
A lot of recent work in _high_level_ learning and processing involves
explanation, and it is exactly this type of high-level processing for
which there are not yet connectionist models.  There are several ways to
explain something (a logical chain, case similarity, high-level
constraint satisfaction), and none of them has been handled well by
connectionist networks.  Also, at a more application-oriented level,
explanation is necessary for dealing with other human or machine
experts.

>An alternative is to explain the phenomenology in terms of the dynamics
>of neural networks.  It seems to me that this is the correct way to go:
>we gain much better global predictability of information processing in
>neural networks by trading off controllability of the local quantum
>steps.

This is fine for explaining the network in theoretical terms, but not
for other purposes.  Can you imagine a system that recommends surgery
and backs up its recommendation with a description of neuron value
clustering??

I think the fact of the matter is that there are a lot of aspects of
cognition crucial to "intelligence" that connectionist models cannot
_YET_ handle.  (Examples include goals, cases, plans, explanations,
themes, non-purely-inductive learning, etc.)  Symbolic AI was in the
same position 10 years ago.  It's wrong, however, to pretend that such
high-level aspects are not important to connectionist models.  They
simply have not yet been handled sufficiently.  That's what ongoing
research in a young field is all about.


Bruce Krulwich

songw@csri.toronto.edu (Wenyi Song) (11/24/88)

In article <43864@yale-celray.yale.UUCP> Krulwich-Bruce@cs.yale.edu (Bruce Krulwich) writes:
>In article <8811200202.AA15157@russell.csri.toronto.edu>, songw@csri (Wenyi
>Song) writes:
[comments deleted]

An example of why natural language processing is difficult, I conclude.
Any suggestion that I write a little between the lines? :-)

A reply was sent to Bruce.  In case anyone is curious, a copy is
available upon request.  Following Usenet convention, I may summarize
it to the group if demand is high :-)