[comp.ai] UNIFIED MODEL FOR KNOWLEDGE REPRESENTATION

ISSSSM@NUSVM.BITNET (Stephen Smoliar) (06/13/91)

In article <1991Jun12.221121.15828@watdragon.waterloo.edu>
cpshelley@violet.waterloo.edu (cameron shelley) writes:
>In article <1991Jun12.130817.3621@kingston.ac.uk> is_s425@kingston.ac.uk
>(Hutchison C S) writes:
>
>>It seems to me that talk of 'partial truths', 'negotiation', and so on, may
>>not get us very far.  If I'm negotiating with you, I'm really just trying to
>>tell you why you are (mostly) wrong and I am (mostly) right.  If I adduce
>>evidence to support my claims, then we may end up negotiating what counts as
>>evidence.  We're stuck in a hopeless regress.  (Try telling one billion
>>Christians or one billion Muslims they're wrong -- especially if it is
>>perfectly obvious to you that Humanistic Buddhism is the only right way. Try
>>negotiating with the Jehovah's Witness on your doorstep.  Try telling the
>>free market liberal about the unspeakable suffering and brutality that
>>capitalism has wrought upon the cheap labour markets of the Third World.)
>
>What you seem to be saying is that the *process* of understanding (or
>failing to understand) is very hard in difficult cases.

Actually, I think that Chris may be saying more than that.  He also seems to
believe that there is some single, fixed resolution to negotiation.  My own
position is that this is a serious mistake.  Any view of intelligence which
does not take into account the ongoing nature of behavior is bound to miss
the mark;  and that includes the fact that negotiations are rarely (if ever)
resolved absolutely.
>
>>To get things in context, despite the political flavour that my question may
>>appear to have taken on, my main concern is with automatic knowledge
>>acquisition
>>from text (whatever kind of text it may be).  My problem is: is knowledge
>>representation going to be about an intelligent agent's models of the
>>physical world or of speakers' reports about the world?  This is a technical
>>rather than a philosophical issue since it impinges directly on what kinds
>>of inference and what sources of knowledge are relevant to the reasoning
>>process.
>
>As in Carbonell's (and Hovy's) systems, a model of the physical world will
>require 'objective' input at some point.  Since this is not really possible,
>I would select the second option you give above.

The other possibility is to try to take the objectivity out of the first option.
As I pointed out in my previous article, the REAL problem with those African
headlines lies in trying to assume a "generic" reader.  Even Carbonell's
liberals and conservatives are basically "generic."  When they read about
military action in Bhutan, they do not have to worry about running a business
there.  If we try to build intelligent agents which are less abstract and
more subjective, we might be able to make some progress on Chris' first option.
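
For concreteness, here is a minimal sketch of what such a subjective reader
might look like.  It is written in Python purely for illustration;  every name
in it (Reader, Headline, interpret, and the sample goals) is invented here and
does NOT come from Carbonell's or Hovy's actual systems.  The only point is
that interpretation becomes a function of the reader's own goals rather than
a property of the headline itself.

    # Toy sketch: the same headline is interpreted relative to each
    # reader's own goals.  All names are invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Headline:
        text: str
        topics: set                 # crude stand-in for a real parse

    @dataclass
    class Reader:
        name: str
        goals: dict = field(default_factory=dict)   # topic -> why it matters

        def interpret(self, headline):
            """Relate the headline to this reader's own goals, if any."""
            relevant = headline.topics & self.goals.keys()
            if not relevant:
                return "%s: noted, but nothing at stake for me." % self.name
            reasons = "; ".join(self.goals[t] for t in sorted(relevant))
            return "%s: this matters because %s." % (self.name, reasons)

    h = Headline("Military action reported in Bhutan",
                 topics={"bhutan", "military"})

    generic  = Reader("Generic reader")   # no goals: the abstraction at issue
    merchant = Reader("Trader based in Bhutan",
                      goals={"bhutan": "my business operates there"})

    for r in (generic, merchant):
        print(r.interpret(h))

The generic reader, having no goals, can do nothing with the headline beyond
noting it;  the trader, whose goals mention Bhutan, reads the same headline
as news that bears on those goals.  That difference is all the sketch is
meant to show.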

===============================================================================

Stephen W. Smoliar
Institute of Systems Science
National University of Singapore
Heng Mui Keng Terrace, Kent Ridge
SINGAPORE 0511

BITNET:  ISSSSM@NUSVM

"He was of Lord Essex's opinion, 'rather to go an hundred miles to speak with
one wise man, than five miles to see a fair town.'"--Boswell on Johnson