[comp.ai.nlang-know-rep] NL-KR Digest Volume 5 No. 16

nl-kr-request@CS.ROCHESTER.EDU (10/05/88)

NL-KR Digest             (10/04/88 20:38:35)            Volume 5 Number 16

Today's Topics:
        Systems Engineering Level in KBS
        state and change/continuous actions
        common sense knowledge of continuous action  
        Model-based Reasoning - References?
        NL interfaces to Rule Based Expert Systems (Seek info. & address) 
        
Submissions: NL-KR@CS.ROCHESTER.EDU 
Requests, policy: NL-KR-REQUEST@CS.ROCHESTER.EDU
----------------------------------------------------------------------

Date: Fri, 16 Sep 88 15:44 EDT
From: James P. Davis <jdavis@gollum.UUCP>
Subject: Systems Engineering Level in KBS


I am looking for any pointers to references regarding the "systems
engineering level" of knowledge as defined for knowledge base
management systems (KBMS). The only reference I have is in
Brodie et al. *On Knowledge Based Management Systems*, where
Brachman and Levesque discuss the various levels associated with
knowledge representation and knowledge systems (knowledge level,
symbol level, organization level). They mention this Systems
Engineering level in passing, but do not fully define it.

Does anyone have any references, or is anyone doing work in this
area of further defining these "levels" (Ron and Hector, are you
out there)?

The nature of my research in this area involves the definition of
a "level" which allows structure and organization to be imposed
on the Universe of Discourse (which doesn't conform to Newell's
Knowledge Level, which deals specifically with what can be stated
or implied about the world on a functional basis independent of
organization or implementation). However, I am looking at this
imposition of organization independent of how the knowledge
schema is implemented or manipulated to carry out rational behavior
(which doesn't confrom to Newell's Symbol Level either, which
deals with issues of how rational behavior is realized on a
machine, addressing such issues as how to exploit the syntactic
properties of a representation technique to effectively produce
rational actions, e.g., inheritance in frame systems).

The perspective that I am approaching this from is based on the
ideas from Database and data modeling involving the construction
of an "enterprise model" of a domain, which is primarily a
structural description (in some formalism such as any number of
deviations of the E-R model which have been researched) that
captures domain objects, relationships, and constraints according
to some set of model-dependent wff's. This description is a declarative
representation of the UoD. What I am looking at is the correlation
between this process in database/data modeling and constructing
knowledge schemas for a domain in AI. The goal is to define an
architecture for the tight coupling of database and knowledge based
systems as KBMS'.

It seems that some of the work that I am doing at this level
between the KNowledge and Symbol Levels (which I call the "Enterprise
Level" may be what has been termed the "Systems Engineering" Level.
Is this Systems Engineering Level defined sufficiently? Is anyone
working on it? Are there references? Anyone want to correspond
regarding these levels?

Any and all responses are appreciated.


jdavis@Gollum.Columbia.NCR.COM

Jim Davis
Advanced Systems Development
NCR Corporation

------------------------------

Date: Fri, 16 Sep 88 17:25 EDT
From: Paul Fishwick <fishwick@uflorida.cis.ufl.EDU>
Subject: state and change/continuous actions

An inquiry into concepts of "state" and "change":

In browsing through Genesereth's and Nilsson's recent book "Logical 
Foundations of Artificial Intelligence," I find it interesting to
compare and contrast the concepts described in Chapter 11 - "State
and Change" with state/change concepts defined within systems
theory and simulation modeling. The authors make the following statement:
"Insufficient attention has been paid to the problem of continuous
actions." Now, a question that immediately comes to mind is "What problem?"
Perhaps, they are referring to the problem of defining semantics for
"how humans think about continuous actions." This leads to some
interesting questions:

 1) Clearly, the vast literature on math modeling is indicative of
    "how humans think about continuous actions." This knowledge is
    in a compiled form, and use of this knowledge has served
    science in an untold number of circumstances.

 2) If commonsense knowledge representation is the issue then we
    might want to ask a fundamental question "Why do we care about
    representing commonsense knowledge about continuous actions?"
    I can see 2 possible goals: One goal is to validate some given
    theory of commonsense "continuous action" knowledge against
    actual psychological data. Then we could say, for instance, that
    Theory XYZ reflects human thought and is therefore useful.
    I don't think it would be useful to increase our knowledge of
    mechanics or fluidics, for instance, but perhaps a psycho-therapist
    might find this knowledge useful. A second goal is to obtain
    a better model of the continuous action (this reflects the
    "AI is an approach to problem solving" method where one can
    study "how Johnny reasons when balls are bounced" and obtain
    a scientifically superior model regardless of its actual
    psychological validity). Has anyone seen a commonsense model
    of continuous action that is an improvement over systems of
    differential equations, graph based queueing models (and other
    assorted formal languages for systems and simulation)?

Obviously, I'm trying to spark some inter-group discussion and so I hope
that any responses will post to both the AI group (comp.ai) AND
the SIMULATION group (comp.simulation). In addition (sci.math) and
(comp.theory.dynamic-sys) may be appropriate.

I believe that Genesereth and Nilsson are quite correct that "reasoning
about time and continous actions" is an important issue. However, an
even more important issue revolves around people discussing 
concepts about "state," "time," and "change" by crossing disciplines.
Any thoughts?

-paul

+------------------------------------------------------------------------+
| Prof. Paul A. Fishwick.... INTERNET: fishwick@bikini.cis.ufl.edu       |
| Dept. of Computer Science. UUCP: gatech!uflorida!fishwick              |
| Univ. of Florida.......... PHONE: (904)-335-8036                       |
| Bldg. CSE, Room 301....... FAX is available                            |
| Gainesville, FL 32611.....                                             |
+------------------------------------------------------------------------+

------------------------------

Date: Sat, 17 Sep 88 12:14 EDT
From: Greg Lee <lee@uhccux.uhcc.hawaii.edu>
Subject: Re: state and change/continuous actions


From article <18249@uflorida.cis.ufl.EDU>, by fishwick@uflorida.cis.ufl.EDU (Paul Fishwick):
" 
"  2) If commonsense knowledge representation is the issue then we
"     might want to ask a fundamental question "Why do we care about
"     representing commonsense knowledge about continuous actions?"
"     I can see 2 possible goals: One goal is to validate some given
" ...

To reason about continuous actions where the physics hasn't been
worked out or is computationally infeasible.  How about that as a
third goal?

" Obviously, I'm trying to spark some inter-group discussion and so I hope
" that any responses will post to both the AI group (comp.ai) AND
" the SIMULATION group (comp.simulation). In addition (sci.math) and
" (comp.theory.dynamic-sys) may be appropriate.

Tsk, tsk.  Left out sci.lang.  The way people think about these
things is reflected in the tense/aspect systems of natural languages.
 
" I believe that Genesereth and Nilsson are quite correct that "reasoning
" about time and continous actions" is an important issue. However, an
" even more important issue revolves around people discussing 
" concepts about "state," "time," and "change" by crossing disciplines.
" Any thoughts?

In English, predicates which can occur with Agent subjects, those
capable of deliberate action, can also occur in the progressive
aspect, expressing continuous action.  This suggests some
connection between intent and continuity whose nature is not
obvious, to me anyway.

		Greg, lee@uhccux.uhcc.hawaii.edu

------------------------------

Date: Sun, 18 Sep 88 21:18 EDT
From: Steven Ryan <smryan@garth.UUCP>
Subject: Re: state and change/continuous actions


>Foundations of Artificial Intelligence," I find it interesting to
>compare and contrast the concepts described in Chapter 11 - "State
>and Change" with state/change concepts defined within systems
>theory and simulation modeling. The authors make the following statement:
>"Insufficient attention has been paid to the problem of continuous
>actions." Now, a question that immediately comes to mind is "What problem?"

Presumably, they are referring to that formal systems are strictly discrete and
finite. This has to do to with `effective computation.' Discrete systems can be
explained in such simple terms that is always clear exactly what is being
done.

Continuous systems are computably using calculus, but is this `effective
computation?' Calculus uses a number of existent theorems which prove some
point or set exists, but provide no method to effectively compute the value.
Or is knowing the value exists sufficient because, after all, we can map the
real line into a bounded interval which can be traversed in finite time?

It is not clear that all natural phenomon can be modelled on the discrete
and finite digital computer. If not, what computer could we use?

>Any thoughts?

------------------------------

Date: Mon, 19 Sep 88 11:18 EDT
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: common sense knowledge of continuous action  

If Genesereth and Nilsson didn't give an example to illustrate
why differential equations aren't enough, they should have.
The example I like to give when I lecture is that of spilling
the water glass on the lectern.  If the front row is very
close, it might get wet, but usually not even that.  The
Navier-Stokes equations govern the flow of the spilled water
but are entirely useless in this common sense situation.
No-one can acquire the initial conditions or integrate the
equations sufficiently rapidly.  Moreover, absorbtion of water
by the materials it flows over is probably a strong enough
effect, so that more than the Navier-Stokes equations would
be necessary.

Thus there is no "scientific theory" involving differential
equations, queuing theory, etc.  that can be used by a robot
to determine what can be expected when a glass of water
is spilled, given what information is actually available
to an observer.  To use the terminology of my 1969 paper
with Pat Hayes, the differential equations don't form
an epistemologically adequate model of the phenomenon, i.e.
a model that uses the information actually available.

While some people are interested in modelling human performance
as an aspect of psychology, my interest is artificial intelligence.
There is no conflict with science.  What we need is a scientific
theory that can use the information available to a robot
with human opportunities to observe and do as well as a
human in predicting what will happen.  Thus our goal is a scientific
common sense.

The Navier-Stokes equations are important in (1) the design
of airplane wings, (2) in the derivation of general inequalities,
some of which might even be translatable into terms common sense
can use.  For example, the Bernouilli effect, once a person has
(usually with difficulty) integrated it into his common sense
knowledge can be useful for qualitatively predicting the effects of
winds flowing over a house.

Finally, the Navier Stokes equations are imbedded in a framework
of common sense knowledge and reasoning that determine the
conditions under which they are applied to the design of airplane
wings, etc.

------------------------------

Date: Mon, 19 Sep 88 11:38 EDT
From: Paul Fishwick <fishwick@uflorida.cis.ufl.EDU>
Subject: Re: common sense knowledge of continuous actions

I very much appreciate Prof. McCarthy's response and would like to comment.
The "water glass on the lectern" example is a good one for commonsense
reasoning; however, let's further examine this scenario. First, if we
wanted a highly accurate model of water flow then we would probably
use flow equations (such as the NS equations) possibly combined with
projectile modeling. Note also that a lumped model of the detailed math
model may reduce complexity and provide an answer for us. We have not
seen specific work in this area since spilt water in a room is
of little scientific value to most researchers. Please note that I am
not trying to be facetious -- I am just trying to point out that *if* the
goal is "to solve the problem of predicting the result of continuous actions"
then math models (and not commonsense models) are the method of choice.
Note that the math model need not be limited to a single set of PDE's.
Also, the math model can be an abstract "lumped model" with less complexity.
The general method of simulation incorporates combined continuous and
discrete methods to solve all kinds of physical problems. For instance,
one needs to use notions of probability (that a water will make it
to the front row), simplified flow equations, and projectile motion.
Also, solving of the "problem of what happens to the water" need not
involve flow equations. Witness, for instance, the work of Toffoli and
Wolfram where cellular automata may be used "as an alternative to"
differential equations. Also, the problem may be solved using visual
pattern matching - it is quite likely that humans "reason" about
"what will happen" to spilt liquids using associative database methods
(the neural netlanders might like this approach) based on a huge
library of partial images from previous experience (note Kosslyn's work).

I still haven't mentioned anything about artificial intelligence yet - just
methods of problem solving. I agree that differential equations by
themselves do not comprise an epistemologically adequate model. But note
that no complex problem is solved using only one model language (such as
DE's). The use of simulation is a nice example since, in simulating
a complex system, one might use many "languages" to solve the problem.
Therefore, I'm not sure that epistemological adequacy is the issue.
The issue is, instead, to solve the problem by whatever methods
available.

Now, back to AI. I agree that "there is no theory involving DE's (etc.)
that can be used by a robot to determine what can be expected when a
glass of water is spilled." I would like to take the stronger position
that searching for such a singular theory seems futile. Certainly, robots of
the future will need to reason about the world and about moving liquids;
however, we can program robots to use pattern matching and whatever else
is necesssary to "solve the problem." I supposed that I am predisposed
to an engineering philosophy that would suggest research into a method
to allow robots to perform pattern recognition and equation solving
to answer questions about the real world. I see no evidence of a specific
theory that will represent the "intelligence" of the robot. I see only
a plethora of problem solving tools that can be used to make future
robots more and more adaptive to their environments.

If commonsense theories are to be useful then they must be validated.
Against what? Well, these theories could be used to build programs
that can be placed inside working robots. Those robots that performed
better (according to some statistical criterion) would validate
respective theories used to program them. One must either 1) validate
against real world data [the cornerstone to the method of computer
simulation] , or 2) improved performance. Do commonsense theories
have anything to say about these two "yardsticks?" Note that there
are many AI research efforts that have addressed validation - expert
systems such as MYCIN correctly answered "more and more" diagnoses
as the program was improved. The yardstick for MYCIN is therefore
a statistical measure of validity. My hat is off to the MYCIN team for
proving the efficacy of their methods. Expert systems are indeed a
success. Chess programs have a simple yardstick - their USCF or FIDE
rating. This concentration of yardsticks and method of validation
is not only helpful, it is essential to demonstrate the an AI method
is useful.

-paul

+------------------------------------------------------------------------+
| Prof. Paul A. Fishwick.... INTERNET: fishwick@bikini.cis.ufl.edu       |
| Dept. of Computer Science. UUCP: gatech!uflorida!fishwick              |
| Univ. of Florida.......... PHONE: (904)-335-8036                       |
| Bldg. CSE, Room 301....... FAX is available                            |
| Gainesville, FL 32611.....                                             |
+------------------------------------------------------------------------+

------------------------------

Date: Sun, 18 Sep 88 20:55 EDT
From: James P. Davis <jdavis@gollum.UUCP>
Subject: Model-based Reasoning - References?


I am looking for some good references on the subject of Model-based
reasoning (MBR). I am also interested in finding out who is doing
work/research in this area, and what domains are being investigated.
Nobody seems to have put any special compendiums (like Morgan Kaufmann)
in this area yet. Any of you out there?

Specifically, I am looking at the area of using a modeling framework,
which allows the structure and behavior for certain classes of
domains to be expressed in some declarative form, to drive the
reasoning process. My understanding of MBR is that it is an approach
at exploiting the inherent structure and constraints of a system
or enterprise to guide the process of reasoning about problems in
the given domain. I am developing an "analogical" representation
which allows the expression of domain semantics in terms of
structure and constraint declaration constructs based on the syntactic
construction of wff's in the modeling technique. The domain is
information systems design. In theory, by developing a self-describing
modeling formalism, in which the information systems design activity
can take place, the nature of the solution space can be constrained
such that only those solutions which adhere to the semantics of the
formalism itself (in which are expressed the semantics of the domain
application) are relevant.

What's happening in MBR? How does it relate to "reasoning from first
principles"?

Any and all responses are appreciated. I can summarize to the net if
requested.

Jim Davis
Advanced Systems Development
NCR Corporation
jdavis@Gollum.Columbia.NCR.COM

------------------------------

Date: Mon, 19 Sep 88 16:31 EDT
From: Fabrizio Sebastiani <FABRIZIO%ICNUCEVM.BITNET@ICNUCEVM.CNUCE.CNR.IT>


I am looking for papers on hybrid knowledge representation (MRS, KLONE,
KRYPTON and the like); I am pretty familiar with the "KLONE world"
literature (at least, with what has gone on up to 1985), but don't know
much about: 1) what has been written past that date; 2) what
has been written AGAINST this approach. Can anyone provide references to
relevant papers on the subject?    Is anyone interested to discuss
the issue?       Thanks    Fabrizio Sebastiani

------------------------------

Date: Mon, 19 Sep 88 20:24 EDT
From: ERIC Y.H. TSUI <munnari!aragorn.oz.au!eric@uunet.UU.NET>
Subject: NL interfaces to Rule Based Expert Systems (Seek info. & address) 

I recently broadcasted and seek information on NL interfaces to rule based
expert systems. There is no reply and I came across the following article:

DATSKOVSKY-MOERDLER, G., McKEOWN, K.R. and ENSOR, J.R. (1987); 
Building Natural Language Interfaces for Rule-based Systems, IJCAI-87, 
p682-687.

The first two authors are from Columbia University (NY) and the third author
is from AT&T Bell Lab. (Holmdel, N.J.).

Would anyone have their e-mail address ? (I am still interested to learn
about pointers to other work.)

Eric Tsui                               eric@aragorn.oz
Division of Computing and Mathematics
Deakin University
Geelong, Victoria 3217
Australia

------------------------------

Date: Fri, 23 Sep 88 19:35 EDT
From: Fabrizio Sebastiani <FABRIZIO%ICNUCEVM.BITNET@ICNUCEVM.CNUCE.CNR.IT>

Does anybody know whether further studies have been carried out on Fagin
and Halpern's notion of "awareness" in epistemic logics, as from their
1985 IJCAI paper?  whether the notion had been previously discussed in
the philosophy of language or the philosophy of mind?  Anyone wishing to
discuss the topic, provide references, send papers, etc., is invited to
contact me.    Fabrizio Sebastiani

------------------------------

End of NL-KR Digest
*******************