[comp.ai.digest] Generality in Artificial Intelligence

YLIKOSKI@FINFUN.BITNET (07/12/88)

Date: Thu, 7 Jul 88 04:24 EDT
From: YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU
Subject:  Generality in Artificial Intelligence
To: AILIST@AI.AI.MIT.EDU
X-Original-To:  @AILIST, @JMC, YLIKOSKI

Distribution-File:
        AILIST@AI.AI.MIT.EDU
        JMC@SAIL.Stanford.EDU

This entry was inspired by John McCarthy's Turing Award lecture,
"Generality in Artificial Intelligence", in Communications of the ACM,
December 1987.


> "In my opinion, getting a language for expressing general
> commonsense knowledge for inclusion in a general database is the key
> problem of generality in AI."


What is commonsense knowledge?


Here follows an example where commonsense knowledge plays its part.  A
human parses the sentence

"Christine put the candle onto the wooden table, lit a match and lit
it."

The difficulty, which humans overcome with commonsense knowledge but
which is hard for a program, is to determine whether the last word,
the pronoun "it", refers to the candle or to the table.  After all,
you can burn a wooden table.

A human would probably reason, in less than a second, like this.

"Assume Christine is sane.  The event might have taken place at a
party or during her rendezvous with her boyfriend.  People who do
things such as taking part in parties most often are sane.

People who are sane are more likely to burn candles than tables.

Therefore, Christine lit the candle, not the table."


It seems to me that the inferences themselves are not very demanding;
rather, the inferencer utilizes a large amount of background knowledge
and a good associative access mechanism.


Thus, it would seem that in order for us to see true commonsense
knowledge exhibited by a program we need:

        * a vast amount of knowledge about the world of a person,
          held in virtual memory.  The knowledge covers gardening,
          Buddhism, the emotions of an ordinary person and so forth -
          its amount might equal a good encyclopaedia.
        * a good associative access mechanism.  An example of such
          an access mechanism is the hashing mechanism of the
          Metalevel Reasoning System described in /1/.  (A small
          illustrative sketch follows below.)
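
Here is a rough sketch of what such an associative access mechanism
might look like.  It is purely illustrative (the actual hashing
mechanism of MRS in /1/ is not reproduced here): every fact is indexed
under each term it mentions, so a cue retrieves only the handful of
facts that share terms with it, ranked by how many terms they share.

    from collections import defaultdict

    class AssociativeStore:
        """Toy associative memory: each fact is indexed under every
        term it mentions, so recall touches only the buckets named
        in the cue instead of scanning the whole knowledge base."""

        def __init__(self):
            self.index = defaultdict(set)   # term -> set of fact ids
            self.facts = []                 # fact id -> tuple of terms

        def assert_fact(self, *terms):
            fact_id = len(self.facts)
            self.facts.append(terms)
            for term in terms:
                self.index[term].add(fact_id)
            return fact_id

        def recall(self, *cue_terms):
            """Return facts sharing terms with the cue, the most
            strongly associated (most shared terms) first."""
            hits = defaultdict(int)
            for term in cue_terms:
                for fact_id in self.index[term]:
                    hits[fact_id] += 1
            ranked = sorted(hits, key=hits.get, reverse=True)
            return [self.facts[i] for i in ranked]

    memory = AssociativeStore()
    memory.assert_fact("candle", "burns", "often")
    memory.assert_fact("table", "burns", "rarely")
    memory.assert_fact("match", "lights", "candle")

    print(memory.recall("candle", "match"))   # only candle/match facts surface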


What kind of formalism should we use for expressing the commonsense
knowledge?


Modern theoretical philosophy knows of a number of logics with
different expressive power /2/.  They form a natural scale for
evaluating different knowledge representation formalisms.  For
example, it would be very interesting to know whether Sowa's
Conceptual Structures correspond to a previously known logical system.
I remember having seen a paper which complained that, to a certain
extent, KRL is just another syntax for first-order predicate logic.

In my opinion, it is possible that an attempt to express commonsense
knowledge with a formalism is analogous to an attempt to fit a whale
into a tin sardine can.  A person's knowledge has so many nuances,
which are well reflected in the richness of the language used in
poetry and fiction (yes, a poem may contain nontrivial knowledge!).

Think of the Earthsea trilogy by Ursula K. Le Guin.  The climax of
the trilogy comes when Sparrowhawk the wizard saves the world from
Cob's evil deeds by drawing the rune Agnen across the spring of the
Dry River:

"'Be thou made whole!' he said in a clear voice, and with his staff
he drew in lines of fire across the gate of rocks a figure: the rune
Agnen, the rune of Ending, which closes roads and is drawn on coffin
lids.  And there was then no gap or void place among the boulders.
The door was shut."

Think of how difficult it would be to express that with a formalism,
preserving the emotions and the nuances.

I propose that the use of *natural language* (augmented with
text-processing, database and NL understanding technology) for
expressing commonsense knowledge be studied.


> "Reasoning and problem-solving programs must eventually allow the
> full use of quantifiers and sets, and have strong enough control
> methods to use them without combinatorial explosion."

It would seem to me that one approach to this problem is the use of
heuristics, and a good way to learn to use heuristics well is to study
how the human brain does it.
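
By way of illustration only (this is my own sketch, not anything from
McCarthy's lecture), one simple heuristic control method is to expand
only the few most promising alternatives at each level, so that the
branching factor, and with it the combinatorial explosion, stays
bounded by the beam width:

    def beam_search(start, successors, score, beam_width=3, depth=4):
        """Keep only the beam_width best candidates at each level,
        instead of expanding every successor (whose number grows
        exponentially with depth)."""
        frontier = [start]
        for _ in range(depth):
            candidates = [s for state in frontier for s in successors(state)]
            if not candidates:
                break
            # heuristic pruning: the score function stands in for the
            # "right alternatives" a skilled player considers first
            frontier = sorted(candidates, key=score, reverse=True)[:beam_width]
        return max(frontier, key=score)

    # Toy usage: from 1, repeatedly double or double-and-add-one,
    # trying to get close to 20.
    best = beam_search(1, lambda n: [2 * n, 2 * n + 1],
                       score=lambda n: -abs(n - 20))
    print(best)   # prints 26; the optimal 20 was pruned away at an
                  # earlier level - the usual price of heuristic search

The trade-off is plain: the search stays feasible only if the scoring
heuristic puts the right alternatives near the top.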

Here follows a reference which you may not know and which will
certainly prove useful when studying the heuristic methods the human
brain uses.

In 1946, the doctoral dissertation of the Dutch psychologist Adriaan
D. de Groot was published.  The name of the dissertation is Het
Denken van den Schaker, The Thinking of a Chess Player.

In the 30's, de Groot was a relatively well-known chess master.  The
material of the book was gathered by giving chess positions to
grandmasters, international masters, national masters, first-class
players and so forth for them to study.  Each player thought aloud
about how he decided which move he considered the best.

Good players immediately start studying the right alternatives.
Weaker players usually calculate just as much, but they tend to
follow the wrong ideas.

Later in his life, de Groot became the education manager of the
Philips Corporation and professor of psychology at the University of
Amsterdam.  His dissertation was translated into English in the 60's
at the Stanford Institute as "Thought and Choice in Chess".


> "Whenever we write an axiom, a critic can say it is true only in a
> certain context.  With a little ingenuity, the critic can usually
> devise a more general context in which the precise form of the axiom
> does not hold.  Looking at human reasoning as reflected in language
> emphasizes this point."

I propose that the concept of a theory with a context be formalized.

A theory in logic has a set of true sentences (axioms) and a set of
inference rules which are used to derive theorems from axioms -
therefore, it can be described with a 2-tuple

        <axioms, inference_rules>.

A theory with a context would be a 3-tuple

        <axioms, inference_rules, context>

where "context" is a set of sentences.

Someone might build interesting theoretical philosophy or mathematical
logic research on this.
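
To make the proposal slightly more concrete, here is a small sketch
of my own (using propositional Horn rules; nothing here is taken from
McCarthy's paper).  The theorems of a theory with a context are
whatever follows from the axioms together with the context sentences,
so the same <axioms, inference_rules> pair yields different theorems
in different contexts:

    def theorems(axioms, rules, context):
        """Forward-chain propositional Horn rules over the union of
        axioms and context.  A rule is (premises, conclusion); the
        theory-with-a-context is the closure of that union under
        the rules."""
        known = set(axioms) | set(context)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in known and set(premises) <= known:
                    known.add(conclusion)
                    changed = True
        return known

    axioms = {"bird(tweety)"}
    rules = [({"bird(tweety)", "normal(tweety)"}, "flies(tweety)")]

    # Same axioms and rules, different contexts, different theorems:
    print("flies(tweety)" in theorems(axioms, rules, {"normal(tweety)"}))  # True
    print("flies(tweety)" in theorems(axioms, rules, set()))               # False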


References:

/1/     Stuart Russell: The Compleat Guide to MRS, Stanford University

/2/     Antti Hautamaeki, a philosopher friend of mine, personal
        communication.

sme@doc.ic.ac.UK (07/18/88)

From: mcvax!doc.ic.ac.uk!sme@uunet.UU.NET
Date: Thu, 14 Jul 88 10:06 EDT
To: AIList@ai.ai.mit.edu
Newsgroups: comp.ai.digest
Subject: Re: Generality in Artificial Intelligence
References: <19880712044954.9.NICK@HOWARD-JOHNSONS.LCS.MIT.EDU>
Reply-To: Steve M Easterbrook <sme@doc.ic.ac.uk>
Organization: Dept. of Computing, Imperial College, London, UK.

In a previous article, YLIKOSKI@FINFUN.BITNET writes:
>> "In my opinion, getting a language for expressing general
>> commonsense knowledge for inclusion in a general database is the key
>> problem of generality in AI."
>...
>Here follows an example where commonsense knowledge plays its part.  A
>human parses the sentence
>
>"Christine put the candle onto the wooden table, lit a match and lit
>it."
>
>The difficulty, which humans overcome with commonsense knowledge but
>which is hard for a program, is to determine whether the last word,
>the pronoun "it", refers to the candle or to the table.  After all,
>you can burn a wooden table.
>
>A human would probably reason, in less than a second, like this.
>
>"Assume Christine is sane.  The event might have taken place at a
>party or during her rendezvous with her boyfriend.  People who do
>things such as taking part in parties most often are sane.
>
>People who are sane are more likely to burn candles than tables.
>
>Therefore, Christine lit the candle, not the table."

Aren't you overcomplicating it a wee bit? My brain would simply tell me
that in my experience, candles are burnt much more often than tables.
QED.

This, to me, is very revealing. The concept of commonsense knowledge
that McCarthy talks of is simply a huge base of experience built up
over a lifetime. If a computer program were switched on for long
enough, with a set of sensors similar to those provided by the human
body, and a basic ability to go out and do things, to observe and
experiment, and to interact with people, it would be able to gather a
set of experiences similar to those possessed by humans. The question
is then whether the program can store and index those experiences, in
their totality, in some huge episodic memory, and whether it has the
appropriate mechanisms to fire useful episodic recalls at useful
moments and apply those recalls to the present situation, whether by
analogy or otherwise.
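
To make the storage-and-indexing question concrete, here is a toy
sketch (purely illustrative, not a serious proposal for how such a
memory would really work): each episode is a bag of features plus an
outcome, and recall is a nearest-neighbour lookup in which the present
situation acts as the cue.

    def similarity(a, b):
        """Jaccard overlap between two feature sets."""
        return len(a & b) / len(a | b)

    class EpisodicMemory:
        def __init__(self):
            self.episodes = []          # list of (features, outcome)

        def store(self, features, outcome):
            self.episodes.append((frozenset(features), outcome))

        def recall(self, situation):
            """Return the outcome of the stored episode that best
            matches the present situation."""
            situation = frozenset(situation)
            return max(self.episodes,
                       key=lambda ep: similarity(ep[0], situation))[1]

    memory = EpisodicMemory()
    memory.store({"candle", "match", "lit"}, "the candle burned")
    memory.store({"table", "axe", "chopped"}, "the table broke")

    print(memory.recall({"candle", "match"}))   # -> the candle burned

Whether anything like this scales to a lifetime of experience, and
whether flat feature sets can carry the content of real episodes, is
of course the whole question.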

From this, it seems to me that the most important task AI can address
itself to at present is the study of episodic memory: how it can be
organised, how it can be accessed, and how analogies with past
situations can be developed. This should lead to a theory of
experience, ready for when robotics and memory capacities are advanced
enough for the kind of experiment I described above. With all due
respect to McCarthy et al, attempts to hand-code the wealth of
experience of the real world that adult human beings have accumulated
are going to prove futile. Human intelligence doesn't gather this
common sense by being explicitly programmed with rules (formal OR
informal), and neither will artificial intelligences.

>It seems to me that the inferences themselves are not very demanding;
>rather, the inferencer utilizes a large amount of background knowledge
>and a good associative access mechanism.

Yes. Work on the associative access, and let the background knowledge
evolve by itself.

>...
>What kind of formalism should we use for expressing the commonsense
>knowledge?

Try asking instead: what kind of formalism should we use for expressing
episodic memories? Later on you suggest natural language. Is this
suitable? Do people remember things by describing them in words to
themselves? Or do they just create private "symbols", or brain
activation patterns, which only need to be translated into words when
being communicated to others? Note: I am not saying people don't think
in natural language, only that they don't store memories as
natural-language accounts.

I don't think this kind of experience can be expressed in any formalism,
nor do I think it can be captured by natural language. It needs to
evolve as a private set of informal symbols, which the brain (human
or computer) does not need to consciously realise are there. All it
needs to do is associate the right thought with the symbols when they
are retrieved, i.e. to interpret the memories. Again, I think this kind
of ability evolves with experience: at first, symbols (brain activity
patterns) would be triggered which the brain would be unable to
interpret.

If this is beginning to sound like advocacy of neural
nets/connectionist learning, then so be it. I feel that a conventional
AI system coupled to a connectionist net for its episodic memory might
be a very sensible architecture. There are probably other ways of
achieving the same behaviour; I don't know.
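
For what it is worth, here is a tiny illustration of the sort of net
I have in mind: a standard Hopfield-style associative memory (nothing
specific to the hybrid architecture above). Patterns are stored in
Hebbian weights, and a partial or noisy cue settles back to the
nearest stored pattern.

    import numpy as np

    def train(patterns):
        """Hebbian weight matrix for a Hopfield net storing +/-1 patterns."""
        n = patterns.shape[1]
        W = sum(np.outer(p, p) for p in patterns).astype(float)
        np.fill_diagonal(W, 0)          # no self-connections
        return W / n

    def recall(W, cue, steps=10):
        """Repeatedly update the cue until it settles on a stored pattern."""
        state = cue.copy()
        for _ in range(steps):
            new_state = np.where(W @ state >= 0, 1, -1)
            if np.array_equal(new_state, state):
                break
            state = new_state
        return state

    patterns = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                         [1, -1, 1, -1, 1, -1, 1, -1]])
    W = train(patterns)

    cue = np.array([1, 1, 1, 1, -1, -1, -1, 1])  # first pattern, one bit flipped
    print(recall(W, cue))                        # settles back to the first pattern

The point is only that retrieval is content-addressed: the memory is
recalled from a fragment of itself, which is what episodic recall
seems to need.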

One final note. Read the first chapter of the Dreyfus and Dreyfus book
"Mind over Machine", for a thought-provoking account of how experts
perform, using "intuition". Logic is only used in hindsight to support
an intuitive choice. Hence "heuristics is compiled hindsight". Being
arch-critics of AI, the Dreyfuses conclude that the intuition that experts
develop is intrinsically human and can never be reproduced by machine.
Being an AI enthusiast, I conclude that intuition is really the
unconscious application of experience, and all that's needed to
reproduce it is the necessary mechanisms for storing and retrieving
episodic memories by association.

>In my opinion, it is possible that an attempt to express commonsense
>knowledge with a formalism is analogous to an attempt to fit a whale
>into a tin sardine can.  ... 

I agree.
How did you generate this analogy? Is it because you had a vast
amount of commonsense knowledge about whales, sardines and tins
(whether expressed in natural language (vast!) or in some formal
system) through which you had to wade to realise that sardines will
fit in small tins but whales will not, eventually synthesizing this
particular concept? Or did you just try to recall (holistically) an
image that matched the concept of forcing a huge thing into a rigid
container?

Steve Easterbrook.

hayes.pa@XEROX.COM (07/24/88)

Date: Wed, 20 Jul 88 11:32 EDT
From: hayes.pa@Xerox.COM
Subject: Re: Generality in Artificial Intelligence
To: AIList@AI.AI.MIT.EDU
cc: hayes.pa@Xerox.COM

Steve Easterbrook gives us an old idea: that the way to get all the knowledge
our systems need is to let them experience it for themselves.  It doesn't work
like that, however, for several reasons.

First, most of our common sense isn't in episodic memory.   That candles burn
more often than tables isn't something from episodic memory, for example.  Or
is the suggestion that we only store episodes and do the inductive
generalisations whenever we need to, by remarkably efficient internal access
machinery?  Apart from the engineering difficulties ( I can imagine FOPC being
reinvented as a handy device to save memory ),  this has the problem that lots
of what we know CAN'T be induced from experiences.  I've never been to Africa
or ancient Rome, for example, but I know a fair bit about them.

But the more serious problem is that the very idea of storing experiences
assumes that there is some way to encode the experiences, some episodic memory
which can represent episodes.  I don't mean one which knows how to index them
efficiently, just one which can put them into memory in the first place.  You
know that, in your `..experience, candles are burnt much more often than
tables.'  How are all those experiences of candle-combustion represented in
your head?  Put your wee robot in front of a candle, and what happens in its
head?  Now think of all the other stuff your episodic memory has to be able to
represent.  How is this representing done?  Maybe after a while following this
thought you will begin to see McCarthy's footsteps on the trail in front of
you.

Pat Hayes