SHARON@SU-SCORE.ARPA@sri-unix.UUCP (10/04/83)
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
[Reprinted from the SU-SCORE bboard.]

Computer Science Department Ph.D. Oral
Jim Davidson
October 18, 1983, 2:30 p.m.
Rm. 303, Building 200

Interpreting Natural Language Database Updates

Although the problems of querying databases in natural language are well understood, performing database updates via natural language introduces additional difficulties. This talk discusses the problems encountered in interpreting natural language updates and describes an implemented system that performs simple updates.

The difficulties associated with natural language updates stem from the fact that the user will naturally phrase requests in terms of his own conception of the domain, which may be a considerable simplification of the actual underlying database structure. Updates that are meaningful and unambiguous from the user's standpoint may not translate into reasonable changes to the underlying database.

The PIQUE system (Program for Interpretation of Queries and Updates in English) operates by maintaining a simple model of the user and interpreting update requests with respect to that model. For a given request, a limited set of "candidate updates"--alternative ways of fulfilling the request--is considered and ranked according to a set of domain-independent heuristics that reflect general properties of "reasonable" updates. The leading candidate may be performed, or the highest-ranking alternatives presented to the user for selection. The resulting action may also include a warning to the user about unanticipated side effects, or an explanation of the failure to fulfill a request.

This talk describes the PIQUE system in detail, presents examples of its operation, and discusses the effectiveness of the system with respect to coverage, accuracy, efficiency, and portability. The range of behaviors required of natural language update systems in general is discussed, and the implications of updates for the design of data models are briefly considered.
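To make the candidate-update mechanism concrete, here is a minimal sketch of the idea -- not PIQUE's actual code; the relations, request, and heuristics below are invented for illustration. A request that looks simple in the user's view may map to several alternative updates on the underlying relations, which can be ranked by domain-independent heuristics such as "change few tuples" and "avoid touching relations the request never mentioned":

  # Sketch only: a candidate update is a list of (relation, action, tuple)
  # changes; heuristics prefer fewer changed tuples and fewer changes to
  # relations outside the user's request.

  def changed_tuples(candidate):
      return len(candidate)

  def side_effects(candidate, mentioned_relations):
      return sum(1 for relation, _action, _tuple in candidate
                 if relation not in mentioned_relations)

  def score(candidate, mentioned_relations):
      # Lower is better; side effects are weighted ahead of sheer size.
      return (side_effects(candidate, mentioned_relations),
              changed_tuples(candidate))

  def rank(candidates, mentioned_relations):
      # Most to least "reasonable"; the caller may perform the leader
      # outright or show the top few to the user for confirmation.
      return sorted(candidates, key=lambda c: score(c, mentioned_relations))

  # Invented request: "Change Jones's course to CS2."  The request mentions
  # only the 'enrolled' relation, but the database could also satisfy it by
  # changing what Smith teaches (a side effect on every other CS1 student).
  mentioned = {"enrolled"}
  candidates = [
      [("enrolled", "modify", {"student": "jones", "course": "cs2"})],
      [("teaches",  "modify", {"prof": "smith", "course": "cs2"})],
  ]

  for candidate in rank(candidates, mentioned):
      print(score(candidate, mentioned), candidate)

The first candidate printed is the one a system in this style would perform or propose; a close second place would instead trigger a choice presented to the user.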
SHARON@SU-SCORE.ARPA (11/08/83)
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
[Reprinted from the SU-SCORE bboard.]

Ph.D. Oral

COMPILING LOGIC SPECIFICATIONS FOR PROGRAMMING ENVIRONMENTS

November 16, 1983, 2:30 p.m., location to be announced
Stephen J. Westfold

A major problem in building large programming systems is keeping track of the numerous details concerning consistency relations between objects in the domain of the system. The approach taken in this thesis is to encourage the user to specify a system using very-high-level, well-factored logic descriptions of the domain, and to have the system compile these into efficient procedures that automatically maintain the relations described. The approach is demonstrated by using it in the programming environment of the CHI Knowledge-based Programming system. Its uses include describing and implementing the database manager, the dataflow analyzer, the project-management component, and the system's compiler itself. It is particularly convenient for developing knowledge representation schemes, for example for property inheritance and automatic maintenance of inverse property links.

The problem description using logic assertions is treated as a program, as in PROLOG, except that the assertions that describe the problem are separated from the assertions that describe how they are to be used. This factorization allows the use of more general logical forms than Horn clauses and encourages the user to think separately about the problem and the implementation. The use of logic assertions is specified at a level natural to the user, covering implementation issues such as whether relations are stored or computed, whether certain assertions should be used to compute a particular function, whether others should be treated as constraints maintaining the consistency of several interdependent stored relations, and whether assertions should be used at compile time or execution time.

Compilation consists of using assertions to instantiate particular procedural rule schemas, each of which corresponds to a specialized deduction, and then compiling the resulting rules to LISP. The rule language is a convenient intermediate between the logic assertion language and the implementation language in that it has both a logic interpretation and a well-defined procedural interpretation. Most of the optimization is done at the logic level.
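As a rough illustration of the compilation idea -- not CHI or the thesis system itself; the assertion, annotation, and representation here are invented -- consider a single assertion, parent(X, Y) <-> child(Y, X), annotated to mean "keep the stored inverse relation consistent." A compiler in this style instantiates a procedural schema from the assertion and emits an ordinary update procedure (the thesis compiles such rules to LISP; Python is used here only for brevity):

  # Sketch of compiling one annotated assertion into an update procedure
  # that automatically maintains an inverse property link.

  facts = {"parent": set(), "child": set()}

  def compile_inverse_constraint(rel, inverse_rel):
      """Stand-in for the real compiler: instantiate a rule schema for the
      assertion rel(X, Y) <-> inverse_rel(Y, X) and return a procedure that
      keeps both stored relations consistent on every update."""
      def assert_fact(x, y):
          facts[rel].add((x, y))
          facts[inverse_rel].add((y, x))   # maintain the inverse link
      return assert_fact

  add_parent = compile_inverse_constraint("parent", "child")
  add_parent("alice", "bob")

  print(facts["child"])   # {('bob', 'alice')} -- maintained automatically

The point of the factorization is visible even in this toy: the assertion says what must hold, the annotation says how it is to be used (a constraint on stored relations), and only the compiler turns the pair into executable code.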
SHARON@SU-SCORE.ARPA (11/11/83)
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
[Reprinted from the SU-SCORE bboard.]

Ph.D. Oral
Tuesday, Nov. 15, 1983, 2:30 p.m.
Bldg. 170 (history corner), conference room

A DEDUCTIVE MODEL OF BELIEF

Kurt Konolige

Reasoning about the knowledge and belief of computer and human agents is assuming increasing importance in Artificial Intelligence systems in the areas of natural language understanding, planning, and knowledge representation in general. Current formal models of belief that form the basis for most of these systems are derivatives of possible-world semantics for belief. However, this model suffers from epistemological and heuristic inadequacies. Epistemologically, it assumes that agents know all the consequences of their beliefs. This assumption is clearly inaccurate, because it does not take into account resource limitations on an agent's reasoning ability. For example, if an agent knows the rules of chess, it follows in the possible-world model that he knows whether white has a winning strategy or not. On the heuristic side, the proposed mechanical deduction procedures have been first-order axiomatizations of possible-world belief.

A more natural model of belief is a deduction model: an agent has a set of initial beliefs about the world in some internal language, and a deduction process for deriving some (but not necessarily all) logical consequences of those beliefs. Within this model, it is possible to account for resource limitations of an agent's deduction process; for example, one can model a situation in which an agent knows the rules of chess but does not have the computational resources to search the complete game tree before making a move.

This thesis is an investigation of a Gentzen-type formalization of the deductive model of belief. Several important original results are proven, among them soundness and completeness theorems for a deductive belief logic; a correspondence result showing that the possible-worlds model is a special case of the deduction model; and an analog of Herbrand's Theorem for the belief logic. Several other topics of knowledge and belief are explored in the thesis from the viewpoint of the deduction model, including a theory of introspection about self-beliefs, and a theory of circumscriptive ignorance, in which facts an agent does not know are formalized by limiting or circumscribing the information available to him.
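The contrast with logical omniscience can be made concrete with a toy example -- invented here, not the thesis formalism: an agent holds a fixed set of base beliefs and applies a single inference rule, but only within a bounded number of deduction steps, so it believes some but not all logical consequences of its beliefs:

  # Sketch of a resource-bounded deduction process over a belief set.
  base_beliefs = {"p", "p -> q", "q -> r", "r -> s"}

  def deduce(beliefs, max_steps):
      """Apply modus ponens at most max_steps times."""
      derived = set(beliefs)
      for _ in range(max_steps):
          new = {c for f in derived if " -> " in f
                 for a, c in [f.split(" -> ")] if a in derived} - derived
          if not new:
              break
          derived |= new
      return derived

  print(deduce(base_beliefs, 1))   # believes q, but not yet r or s
  print(deduce(base_beliefs, 3))   # with more resources, also r and s

In the possible-world model the agent would believe q, r, and s the moment it believed the base set; here the bound on the deduction process determines which consequences are actually believed.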
SHARON@SU-SCORE.ARPA (02/17/84)
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
[Forwarded from the Stanford bboard by Laws@SRI-AI.]

PH.D. ORAL

USE OF ARTIFICIAL INTELLIGENCE AND SIMPLE MATHEMATICS TO ANALYZE A PHYSIOLOGICAL MODEL

JOHN C. KUNZ, STANFORD/INTELLIGENETICS
23 FEBRUARY 1984
MARGARET JACKS HALL, RM. 146, 2:30-3:30 PM

The objective of this research is to demonstrate a methodology for the design and use of a physiological model in a computer program that suggests medical decisions. This methodology uses a physiological model based on first principles and facts of physiology and anatomy. The model includes inference rules for analysis of causal relations between physiological events. The model is used to analyze physiological behavior, identify the effects of abnormalities, identify appropriate therapies, and predict the results of therapy. This methodology integrates heuristic knowledge traditionally used in artificial intelligence programs with mathematical knowledge traditionally used in mathematical modeling programs. A vocabulary for representing a physiological model is proposed.
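A toy illustration of the flavor of such an integration -- not the thesis program; the variables, thresholds, and therapy table are invented -- combines one quantitative relation (cardiac output = heart rate x stroke volume) with qualitative causal rules to explain an abnormal finding and suggest a therapy:

  # Sketch: a quantitative first-principles relation plus causal
  # inference rules linking findings, effects, and candidate therapies.

  def cardiac_output(heart_rate, stroke_volume):
      # CO = HR * SV, in liters per minute.
      return heart_rate * stroke_volume

  # (finding, effect it causes, suggested therapy) -- invented examples.
  causal_rules = [
      ("low_stroke_volume", "low_cardiac_output", "give inotropic agent"),
      ("low_heart_rate",    "low_cardiac_output", "give chronotropic agent"),
  ]

  def analyze(heart_rate, stroke_volume):
      findings = set()
      if stroke_volume < 0.05:          # liters per beat, invented threshold
          findings.add("low_stroke_volume")
      if heart_rate < 50:               # beats per minute, invented threshold
          findings.add("low_heart_rate")
      co = cardiac_output(heart_rate, stroke_volume)
      effects, therapies = set(), set()
      if co < 4.0:                      # liters per minute, invented threshold
          for finding, effect, therapy in causal_rules:
              if finding in findings:
                  effects.add(effect)
                  therapies.add(therapy)
      return co, findings, effects, therapies

  print(analyze(heart_rate=70, stroke_volume=0.04))
  # Low stroke volume explains the low cardiac output; inotropic therapy suggested.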