fishwick@BIKINI.CIS.UFL.EDU (Paul Fishwick) (09/26/88)
---- Forwarded Message Follows ----
From: fishwick@bikini.cis.ufl.edu (Paul Fishwick)
Newsgroups: comp.ai,comp.ai.digest
Subject: commonsense reasoning
Message-Id: <18275@uflorida.cis.ufl.EDU>
Date: 19 Sep 88 15:20:13 GMT
Organization: UF CIS Department
Lines: 81

I very much appreciate Prof. McCarthy's response and would like to comment. The "water glass on the lectern" example is a good one for commonsense reasoning; however, let us examine this scenario further. First, if we wanted a highly accurate model of water flow, then we would probably use flow equations (such as the Navier-Stokes equations), possibly combined with projectile modeling. Note also that a lumped model of the detailed math model may reduce complexity and still provide an answer for us. We have not seen specific work in this area, since spilt water in a room is of little scientific value to most researchers. Please note that I am not trying to be facetious -- I am just trying to point out that *if* the goal is "to solve the problem of predicting the result of continuous actions," then math models (and not commonsense models) are the method of choice. Note that the math model need not be limited to a single set of PDEs; it can also be an abstract "lumped model" with less complexity. The general method of simulation combines continuous and discrete methods to solve all kinds of physical problems.
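As a minimal sketch of the kind of lumped model meant here, one can collapse the entire spill into a single projectile: ignore droplet breakup, drag, and splashing, and just ask how far the water travels before hitting the floor. The lectern height and exit speed below are illustrative assumptions, not measurements.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def landing_distance(height_m, speed_m_s):
    """Horizontal distance the water travels before hitting the floor,
    treating the whole spill as one point projectile (a lumped model)."""
    fall_time = math.sqrt(2.0 * height_m / G)  # time to fall height_m
    return speed_m_s * fall_time

# Hypothetical numbers: water leaves a 1.2 m lectern at 2 m/s horizontally.
print(round(landing_distance(1.2, 2.0), 2))  # roughly 1 m
```

A model this crude cannot say anything about splash patterns, but it already answers the commonsense question of whether the front row should worry.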
For instance, one needs notions of probability (that the water will make it to the front row), simplified flow equations, and projectile motion. Also, solving the "problem of what happens to the water" need not involve flow equations at all. Witness, for instance, the work of Toffoli and Wolfram, where cellular automata may be used "as an alternative to" differential equations. The problem may also be solved using visual pattern matching -- it is quite likely that humans "reason" about "what will happen" to spilt liquids using associative database methods (the neural netlanders might like this approach) based on a huge library of partial images from previous experience (note Kosslyn's work).

I still haven't mentioned anything about artificial intelligence yet -- just methods of problem solving. I agree that differential equations by themselves do not comprise an epistemologically adequate model. But note that no complex problem is solved using only one modeling language (such as DEs). Simulation is a nice example, since, in simulating a complex system, one might use many "languages" to solve the problem. Therefore, I'm not sure that epistemological adequacy is the issue. The issue is, instead, to solve the problem by whatever methods are available.

Now, back to AI. I agree that "there is no theory involving DE's (etc.) that can be used by a robot to determine what can be expected when a glass of water is spilled." I would like to take the stronger position that searching for such a singular theory seems futile. Certainly, robots of the future will need to reason about the world and about moving liquids; however, we can program robots to use pattern matching and whatever else is necessary to "solve the problem." I suppose that I am predisposed to an engineering philosophy that would suggest research into methods that allow robots to perform pattern recognition and equation solving to answer questions about the real world.
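The cellular-automata alternative mentioned above can be sketched in a few lines. Instead of integrating a diffusion equation, each cell of a toy 1-D automaton repeatedly averages with its neighbors; the grid size, update rule, and initial condition are all illustrative choices of mine, not anything from Toffoli or Wolfram specifically.

```python
# Toy 1-D cellular automaton: local averaging with periodic boundaries.
# Qualitatively, this spreads a concentration out the way the diffusion
# PDE would, without ever writing down a differential equation.
def step(cells):
    n = len(cells)
    return [(cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n]) / 3.0
            for i in range(n)]

# Start with all the "water" concentrated in one cell and let it spread.
state = [0.0] * 9
state[4] = 1.0
for _ in range(20):
    state = step(state)

# The rule conserves total mass, and the profile flattens over time.
print(round(sum(state), 6))
```

The point is not that this automaton is a good model of spilt water, but that discrete local rules are a legitimate modeling language alongside DEs.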
I see no evidence of a specific theory that will represent the "intelligence" of the robot. I see only a plethora of problem-solving tools that can be used to make future robots more and more adaptive to their environments.

If commonsense theories are to be useful, then they must be validated. Against what? Well, these theories could be used to build programs that can be placed inside working robots. Those robots that performed better (according to some statistical criterion) would validate the respective theories used to program them. One must validate either 1) against real-world data [the cornerstone of the method of computer simulation], or 2) against improved performance. Do commonsense theories have anything to say about these two "yardsticks"? Note that there are many AI research efforts that have addressed validation -- expert systems such as MYCIN correctly answered "more and more" diagnoses as the program was improved. The yardstick for MYCIN is therefore a statistical measure of validity. My hat is off to the MYCIN team for proving the efficacy of their methods. Expert systems are indeed a success. Chess programs have a simple yardstick -- their USCF or FIDE rating. This concentration on yardsticks and methods of validation is not only helpful, it is essential to demonstrate that an AI method is useful.

-paul

+------------------------------------------------------------------------+
| Prof. Paul A. Fishwick.... INTERNET: fishwick@bikini.cis.ufl.edu |
| Dept. of Computer Science. UUCP: gatech!uflorida!fishwick |
| Univ. of Florida.......... PHONE: (904)-335-8036 |
| Bldg. CSE, Room 301....... FAX is available |
| Gainesville, FL 32611..... |
+------------------------------------------------------------------------+