[sci.lang] situation semantics

bernus@cs.uq.oz.au (Peter Bernus) (06/27/91)

I am cross-posting this to comp.ai, because the relevant articles
by Smoliar <9106150211.AA13350@lilac.berkeley.edu> and Minsky
<1991Jun14.194519.28465@news.media.mit.edu> appeared there.
----------

Reading the recent articles about situation semantics in sci.lang,
I have developed doubts about what these articles actually suggest.

Minsky <1991Jun14.194519.28465@news.media.mit.edu> in comp.ai
makes sense from the cognitive scientist's point of view, and is perhaps
discouraging from the computational linguist's point of view, saying in
effect: abandon the hope of implementing your realistic, situated,
interpreting agent with logic as the tool.
			      [I would add to this: as the *sole* tool.]

Discouraging -- but only if we decide (which we really should not)
that the computational model in which we shall *implement* our
situated agent must be a *full* logic model of the described situation.
(That is, if we require that the model in which the agent interprets
its conversations must be fully implemented in the agent.)  But does
it have to be the case?  No.  Whether logic (and its model theory) is
a perfect tool for knowledge representation is beside the point.
I think that logic is perfect *for an external observer* to explain
what it means for the agent to interpret an utterance.

It does not matter if the case at hand requires fuzzy, imperfect,
incomplete, etc. reasoning; I bet that whatever you suggest, there will
be a logician to develop a default intensional higher-order minimal
subjective bla-bla logic to *explain* what happens.  Maybe Russell
and Wittgenstein would cry, but logicians would do it anyway.
So: logic is great and rules >>in the external observer's model<<.  Hey! :-)  I even
venture to say ``in defense of logic :-)'' that the implemented part of
the model in a computational agent should be based on some sort of
a logic language.  (Whether the really cool inference mechanisms need
somewhat different hardware? Maybe.)
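
To make ``some sort of a logic language'' concrete, here is a minimal
sketch in Prolog (the predicate holds/2 and the sample situation s1
are my own invention, purely illustrative): the agent holds a partial
set of clauses, with no claim of completeness.

		% A fragment of an agent's internal model, as Horn clauses.
		% Only what the agent needs is held; nothing is complete.
		holds(raining, s1).
		holds(wet(ground), S) :- holds(raining, S).

		% ?- holds(wet(ground), s1).
		% yes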

But:

Just as logic (or arithmetic) is a perfect tool for an external
observer to describe how a stone ``knows'' in which direction and with
what speed to fly when thrown, I would not want to implement a ``stone''
with a built-in microprocessor that makes it fly according to my
stone-flying theory!  To suggest that the stone must
do all the computations involved in calculating its trajectory
would be silly.
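
(The observer's stone-flying theory is easy enough to write down; the
point is that it lives in the observer, not in the stone.  A sketch,
with predicate name and units of my own choosing:)

		% The *observer's* theory: given initial speed V (m/s),
		% elevation angle A (radians) and time T (s), compute the
		% stone's position, with g = 9.8 m/s^2.  The stone itself
		% computes none of this -- it just flies.
		position(V, A, T, X, Y) :-
		    X is V * cos(A) * T,
		    Y is V * sin(A) * T - 0.5 * 9.8 * T * T.

		% ?- position(10.0, 0.7854, 1.0, X, Y).
		% X = 7.071..., Y = 2.171...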

Now, you might say: despair, because (if my reasoning is correct),
given what computers can do today, situated inference with all the
realistic characteristics Minsky demands is out of reach.

I do not despair, because the model in the artificial agent need not
be a *complete* logic model of the described situation.  I suggest
that the agent and its environment *together* form the model in which
the agent interprets, and that interpretation need not be based fully
on the representation held within the skull.
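
A sketch of what I mean (interpret/1, internal/1 and environment/1
are hypothetical names of mine): interpretation may succeed either
from the agent's partial internal model or by consulting the shared
world, so not every fact needs to be duplicated inside the skull.

		% Interpretation split between a partial internal model
		% and the commonly accessible environment.
		interpret(P) :- internal(P).      % look inside first
		interpret(P) :- environment(P).   % otherwise, look outside

		internal(speaks(peter, english)). % held inside the skull
		environment(raining).             % held in the world only

		% ?- interpret(raining).
		% yes -- known without being represented internally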

E.g., the agent may need to use its external modelling facility before
it can make sense of an utterance.  Gordon Pask's (for some reason
neglected) theory of conversation (CT) taught me long ago that for
meaningful conversation we need a modelling facility anyway, and that
this facility must be commonly accessible to the agents participating
in the conversation.  Otherwise we would not even be able to reach
common understanding/knowledge, without which we could not learn the
concepts of the language (associated with procedures in CT) which we
later want to interpret.  [footnote 1]

A partial model built into the agent, however, might do.  How to build
such a model I do not know yet, but that may give us some leeway...
As S&A puts it nicely: there is more semantics in the
world than we suspected...

----
[footnote 1] the existence of the commonly accessible modelling
	     facility is a prerequisite, but the facility need not be
	     available at every spatiotemporal location where
	     interpretation is going on.
------------------------------------------------------------------------------------
		flames(Where) :- bin(Where), throwto(Where).
		enlightening(Peter) :- bernus(Peter).
		?- :-)
------------------------------------------------------------------------------------
Peter Bernus, Key Centre for Software Technology, The University of Queensland,
St Lucia, QLD 4072, Australia phone: +61-7-365 3241 (direct); +61-7-365 3168 (secr);
fax: +61-7-365 1999; telex: 40315 uniqld aa; e-mail: bernus@cs.uq.oz.au