gh@utai.UUCP (Graeme Hirst) (09/28/84)
AI Seminar, Department of Computer Science, University of Toronto

REASONING ABOUT KNOWLEDGE AND BELIEF

Kurt Konolige
Artificial Intelligence Center
SRI International
Menlo Park, Calif. 94025

Tuesday 2 October 1984, 3pm, Sandford Fleming 1105

Reasoning about the knowledge and beliefs of computer and human agents is
assuming increasing importance in Artificial Intelligence systems for natural
language understanding, planning, and knowledge representation.  A natural
model of belief for robot agents is the deduction model: an agent is
represented as having an initial set of beliefs about the world in some
internal language and a deduction process for deriving some (but not
necessarily all) logical consequences of these beliefs.  Because the deduction
model is an explicitly computational model, it is possible to take into
account limitations of an agent's resources when reasoning.

We investigate a Gentzen-type formalization of the deductive model of belief.
Several original results are proven.  Among these are soundness and
completeness theorems for a deductive belief logic; a correspondence result
that relates our deduction model to competing possible-worlds models; and a
modal analog to Herbrand's Theorem for the belief logic.  Specialized
techniques for automatic deduction based on resolution are developed using
this theorem.

Several other topics of knowledge and belief are explored from the viewpoint
of the deduction model, including a theory of introspection about
self-beliefs, and a theory of circumscriptive ignorance, in which facts an
agent doesn't know are formalized by limiting or circumscribing the
information available to him.

(I will give an overview of the deduction model, and then proceed to any of
the sub-topics above that are of interest.)

-- 
\\\\ Graeme Hirst    University of Toronto    Computer Science Department
//// utcsrgv!utai!gh  /  gh.toronto@csnet-relay  /  416-978-8747
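
To make the deduction model concrete, here is a minimal sketch, not from the
talk itself: it assumes an agent whose base beliefs are atomic propositions,
whose belief-derivation rules are propositional Horn clauses, and whose
deduction process is forward chaining cut off by a step budget.  The Agent
class, the rule encoding, and the budget are illustrative assumptions, not
Konolige's formalism; the point is only that such an agent holds some, but not
necessarily all, logical consequences of its initial beliefs.

# deduction_model_sketch.py -- illustrative only, assumptions as noted above
from dataclasses import dataclass


@dataclass
class Agent:
    facts: set[str]                          # atomic base beliefs
    rules: list[tuple[frozenset[str], str]]  # Horn rules: (premises, conclusion)
    step_budget: int                         # resource bound on deduction

    def believes(self, query: str) -> bool:
        """Forward-chain from the base beliefs, stopping at the step budget."""
        derived = set(self.facts)
        steps = 0
        changed = True
        while changed and steps < self.step_budget:
            changed = False
            for premises, conclusion in self.rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    steps += 1
                    changed = True
                    if steps >= self.step_budget:
                        break
        return query in derived


# A resource-limited agent can fail to believe a remote consequence:
agent = Agent(
    facts={"a"},
    rules=[(frozenset({"a"}), "b"),
           (frozenset({"b"}), "c"),
           (frozenset({"c"}), "d")],
    step_budget=2,
)
print(agent.believes("c"))  # True  -- derivable within the budget
print(agent.believes("d"))  # False -- a consequence the agent never derives

Because the deduction process is an explicit computation, the step budget (or
any other resource limit one cares to model) directly determines which
consequences the agent comes to believe, which is the sense in which the
deduction model avoids the logical-omniscience assumption of possible-worlds
models.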