[comp.ai] forwarded message...

fishwick@uflorida.cis.ufl.EDU (Paul Fishwick) (09/19/88)

I am forwarding Prof. McCarthy's message to comp.ai :


From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: common sense knowledge of continuous action  
To: fishwick@BIKINI.CIS.UFL.EDU, ailist@AI.AI.MIT.EDU
Status: R

If Genesereth and Nilsson didn't give an example to illustrate
why differential equations aren't enough, they should have.
The example I like to give when I lecture is that of spilling
the water glass on the lectern.  If the front row is very
close, it might get wet, but usually not even that.  The
Navier-Stokes equations govern the flow of the spilled water
but are entirely useless in this common sense situation.
No one can acquire the initial conditions or integrate the
equations sufficiently rapidly.  Moreover, absorption of water
by the materials it flows over is probably a strong enough
effect that more than the Navier-Stokes equations would
be necessary.

Thus there is no "scientific theory" involving differential
equations, queuing theory, etc.  that can be used by a robot
to determine what can be expected when a glass of water
is spilled, given what information is actually available
to an observer.  To use the terminology of my 1969 paper
with Pat Hayes, the differential equations don't form
an epistemologically adequate model of the phenomenon, i.e.
a model that uses the information actually available.

While some people are interested in modelling human performance
as an aspect of psychology, my interest is artificial intelligence.
There is no conflict with science.  What we need is a scientific
theory that can use the information available to a robot
with human opportunities to observe and do as well as a
human in predicting what will happen.  Thus our goal is a scientific
common sense.

The Navier-Stokes equations are important in (1) the design
of airplane wings and (2) the derivation of general inequalities,
some of which might even be translatable into terms common sense
can use.  For example, the Bernoulli effect, once a person has
(usually with difficulty) integrated it into his common sense
knowledge, can be useful for qualitatively predicting the effects of
winds flowing over a house.

Finally, the Navier-Stokes equations are embedded in a framework
of common sense knowledge and reasoning that determines the
conditions under which they are applied to the design of airplane
wings, etc.

fishwick@uflorida.cis.ufl.EDU (Paul Fishwick) (09/19/88)

I very much appreciate Prof. McCarthy's response and would like to comment.
The "water glass on the lectern" example is a good one for commonsense
reasoning; however, let's further examine this scenario. First, if we
wanted a highly accurate model of water flow then we would probably
use flow equations (such as the NS equations) possibly combined with
projectile modeling. Note also that a lumped version of the detailed math
model may reduce complexity and provide an answer for us. We have not
seen specific work in this area since spilt water in a room is
of little scientific value to most researchers. Please note that I am
not trying to be facetious -- I am just trying to point out that *if* the
goal is "to solve the problem of predicting the result of continuous actions"
then math models (and not commonsense models) are the method of choice.
Note that the math model need not be limited to a single set of PDE's.
Also, the math model can be an abstract "lumped model" with less complexity.
The general method of simulation incorporates combined continuous and
discrete methods to solve all kinds of physical problems. For instance,
one needs to use notions of probability (that the water will make it
to the front row), simplified flow equations, and projectile motion.
Also, solving of the "problem of what happens to the water" need not
involve flow equations. Witness, for instance, the work of Toffoli and
Wolfram where cellular automata may be used "as an alternative to"
differential equations. Also, the problem may be solved using visual
pattern matching - it is quite likely that humans "reason" about
"what will happen" to spilt liquids using associative database methods
(the neural netlanders might like this approach) based on a huge
library of partial images from previous experience (note Kosslyn's work).
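To make the cellular-automaton alternative concrete, here is a minimal
sketch (mine, not from the thread; every parameter is an illustrative
guess) of a one-dimensional automaton that spreads spilt water across a
lectern top, with each surface cell absorbing a fraction per step --
qualitative answers like "does the edge get wet" and "how much soaked in"
fall out without integrating the Navier-Stokes equations:

```python
def spread(water, absorb, steps):
    """One-dimensional automaton: each cell loses a fraction of its
    water to absorption, keeps half the rest, and passes a quarter
    to each neighbour (reflecting at the boundaries)."""
    n = len(water)
    for _ in range(steps):
        new = [0.0] * n
        for i, w in enumerate(water):
            keep = w * (1.0 - absorb)          # material soaks up some water
            new[i] += keep * 0.5               # half stays put
            new[i - 1 if i > 0 else i] += keep * 0.25      # quarter flows left
            new[i + 1 if i < n - 1 else i] += keep * 0.25  # quarter flows right
        water = new
    return water

# Spill one unit of water in the middle of a 9-cell lectern top.
cells = [0.0] * 9
cells[4] = 1.0
result = spread(cells, absorb=0.2, steps=10)
total_left = sum(result)   # fraction not yet absorbed by the surface
```

With 20% absorption per step, the unabsorbed fraction after 10 steps is
simply 0.8**10 of the original spill, and the spread stays symmetric
about the spill point.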

I still haven't mentioned anything about artificial intelligence yet - just
methods of problem solving. I agree that differential equations by
themselves do not comprise an epistemologically adequate model. But note
that no complex problem is solved using only one model language (such as
DE's). The use of simulation is a nice example since, in simulating
a complex system, one might use many "languages" to solve the problem.
Therefore, I'm not sure that epistemological adequacy is the issue.
The issue is, instead, to solve the problem by whatever methods
available.

Now, back to AI. I agree that "there is no theory involving DE's (etc.)
that can be used by a robot to determine what can be expected when a
glass of water is spilled." I would like to take the stronger position
that searching for such a singular theory seems futile. Certainly, robots of
the future will need to reason about the world and about moving liquids;
however, we can program robots to use pattern matching and whatever else
is necessary to "solve the problem." I suppose that I am predisposed
to an engineering philosophy that would suggest research into a method
to allow robots to perform pattern recognition and equation solving
to answer questions about the real world. I see no evidence of a specific
theory that will represent the "intelligence" of the robot. I see only
a plethora of problem solving tools that can be used to make future
robots more and more adaptive to their environments.
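The pattern-matching route can be sketched very simply. Below is a toy
illustration (my own, not Fishwick's; the features and remembered cases
are invented) of the "associative database" idea: the robot predicts what
spilt water will do by retrieving the closest remembered case rather
than by solving flow equations:

```python
def nearest_case(library, situation):
    """Return the remembered outcome whose feature vector is closest
    (squared Euclidean distance) to the current situation."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(library, key=lambda case: dist(case[0], situation))[1]

# Invented features: (volume in litres, spill height in metres,
#                     distance to the front row in metres)
library = [
    ((0.2, 1.0, 2.0), "lectern wet, floor damp, front row dry"),
    ((0.2, 1.0, 0.3), "front row splashed"),
    ((1.0, 1.5, 0.3), "front row soaked"),
]

# A new spill resembling the first remembered case:
prediction = nearest_case(library, (0.25, 1.1, 1.8))
```

The prediction is only as good as the library of prior experience, which
is exactly the point: the knowledge lives in the stored cases, not in a
set of equations.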

If commonsense theories are to be useful then they must be validated.
Against what? Well, these theories could be used to build programs
that can be placed inside working robots. Those robots that performed
better (according to some statistical criterion) would validate
respective theories used to program them. One must validate either
1) against real-world data [the cornerstone of the method of computer
simulation], or 2) against improved performance. Do commonsense theories
have anything to say about these two "yardsticks?" Note that there
are many AI research efforts that have addressed validation - expert
systems such as MYCIN correctly answered "more and more" diagnoses
as the program was improved. The yardstick for MYCIN is therefore
a statistical measure of validity. My hat is off to the MYCIN team for
proving the efficacy of their methods. Expert systems are indeed a
success. Chess programs have a simple yardstick - their USCF or FIDE
rating. This concentration on yardsticks and methods of validation
is not only helpful, it is essential to demonstrate that an AI method
is useful.
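The "statistical criterion" for comparing two robots can be as plain as
a two-proportion test. A minimal sketch (my own illustration with made-up
trial counts, using the normal approximation to the binomial):

```python
from math import sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for the difference between two success rates
    (pooled normal approximation to the binomial)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Robot A (theory 1) completes 78/100 tasks; Robot B (theory 2), 62/100.
z = two_proportion_z(78, 100, 62, 100)
better = z > 1.96   # roughly the 5% significance level
```

If z clears the significance threshold, the theory behind Robot A is
"validated" in exactly the performance sense described above.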

-paul

+------------------------------------------------------------------------+
| Prof. Paul A. Fishwick.... INTERNET: fishwick@bikini.cis.ufl.edu       |
| Dept. of Computer Science. UUCP: gatech!uflorida!fishwick              |
| Univ. of Florida.......... PHONE: (904)-335-8036                       |
| Bldg. CSE, Room 301....... FAX is available                            |
| Gainesville, FL 32611.....                                             |
+------------------------------------------------------------------------+