SOWA@IBM.COM (John Sowa) (10/26/87)
An abstract of a recent talk I gave found its way to the AIList, V5 #241. But along the way, the first five sentences were lost. Those sentences made a distinction that was at least as important as the rest of the abstract: Much of the knowledge in people's heads is inconsistent. Some of it may be represented in symbolic or propositional form, but a lot of it, or perhaps even most of it, is stored in image-like forms. And some knowledge is stored in vague "gut feel" or intuitive forms that are almost never verbalized. The term "knowledge base" sounds too precise and organized to reflect the enormous complexity of what people have in their heads. A better term is "knowledge soup."

Whoever truncated the abstract also changed the title "Crystallizing Theories out of Knowledge Soup" by adding "(knowledge base)". That parenthetical addition blurred the distinction between the informal, disorganized knowledge in the head and the formalized knowledge bases that are required by AI systems. Some of the most active research in AI today is directed towards handling that soup and managing it within the confines of digital systems: fuzzy logic, various forms of default and nonmonotonic reasoning, truth maintenance systems, connectionism and various statistical approaches, and Hewitt's due-process reasoning between competing agents with different points of view.

Winograd and Flores' flight into phenomenology and hermeneutics is based on a recognition of the complexity of the knowledge soup. But instead of looking for ways of dealing with it in AI terms, they gave up. Although I sympathize with their suggestion that we use computers to help people communicate better with each other, I believe that variations of current AI techniques can support semi-automated tools for knowledge acquisition from the soup. More invention may be needed for fully automated systems that can extract theories without human guidance. But there is no clear evidence that the task is impossible.
SOWA@IBM.COM (John Sowa) (11/03/87)
Since my abstract on "Crystallizing Theories out of Knowledge Soup" appeared in AIList V5 #241 and my clarification appeared in V5 #247, I have received a number of requests for the corresponding paper. I regret to say that the paper is still in the process of getting itself crystallized. That talk was mostly a survey of current approaches to the soup together with some suggestions about techniques that I considered promising. Following is what I discussed:

1. The limits of conceptualization and the use of conceptual analysis as a nonautomated way of extracting knowledge from the soup. This material is discussed in my book, Conceptual Structures. See Section 6.3 for conceptual analysis, and Chapter 7 for a discussion of the limitations.

2. Dynamic belief revision, developed by Norman Foo and Anand Rao from Sydney University, currently visiting IBM. This is a kind of truth maintenance system based on the axioms for belief revision by the Swedish logician Gardenfors. They have been adding some interesting features, including levels of epistemic importance (laws, facts, and defaults), where the revision process tries to retain the more important propositions at the expense of losing some of the less important. Their current system uses Prolog-style rules and facts, but they are adapting it to conceptual graphs as part of CONGRES (their conceptual graph reasoning system).

3. Dynamic type hierarchies, an idea developed by Eileen Way in her dissertation on metaphor. As in most treatments of metaphor, Eileen compares matching relationships in the tenor and vehicle domains. Her innovation is the recognition that the essential meaning of a metaphor is the introduction of a new node in the type hierarchy. Example: "My car is thirsty." The canonical graph for THIRSTY shows that it must be an attribute of something of type ANIMAL. Since CAR is not a subtype of ANIMAL, the system finds a minimal common supertype of CAR and ANIMAL, in this case MOBILE-ENTITY.
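As a minimal sketch of that supertype search, here is how it might look in Python. The toy hierarchy and the search procedure are my own illustration under stated assumptions, not Way's actual system:

```python
# Toy type hierarchy: each type maps to its immediate supertypes.
# The hierarchy itself is a made-up illustration for this example.
HIERARCHY = {
    "CAR":           ["MOBILE-ENTITY"],
    "ANIMAL":        ["MOBILE-ENTITY", "LIVING-THING"],
    "MOBILE-ENTITY": ["ENTITY"],
    "LIVING-THING":  ["ENTITY"],
    "ENTITY":        [],
}

def supertypes(t):
    """All supertypes of t, including t itself."""
    seen = {t}
    frontier = [t]
    while frontier:
        current = frontier.pop()
        for parent in HIERARCHY[current]:
            if parent not in seen:
                seen.add(parent)
                frontier.append(parent)
    return seen

def minimal_common_supertypes(a, b):
    """Common supertypes of a and b with no other common supertype below them."""
    common = supertypes(a) & supertypes(b)
    return {t for t in common
            if not any(s != t and t in supertypes(s) for s in common)}

print(minimal_common_supertypes("CAR", "ANIMAL"))  # → {'MOBILE-ENTITY'}
```

Here ENTITY is also a common supertype of CAR and ANIMAL, but it is excluded because MOBILE-ENTITY lies below it; only the minimal elements survive the final filter.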
It then creates a new node in the type hierarchy above both CAR and ANIMAL, but below MOBILE-ENTITY. To create a definition for that type, it checks the properties of ANIMAL with respect to THIRSTY, and finds a graph saying that THIRSTY is an attribute of an ANIMAL that is in the state of needing liquid:

    [THIRSTY]<-(ATTR)<-[ANIMAL]->(STAT)->[NEED]->(PTNT)->[LIQUID]

It then generalizes ANIMAL to MOBILE-ENTITY and uses the resulting graph to define a new type for mobile entities that need liquid. The system can generalize schemata involving animals and liquid to the new node, from which they can be inherited by CAR or any similar subtype. The new node thereby allows schemata for DRINK or GUZZLE to be inherited as well as schemata for THIRSTY.

4. Theory refinement. This is an approach that I have been discussing with Foo and Rao as an extension to their belief revision system. Instead of making revisions by adding and deleting propositions, as they currently do, the use of conceptual graphs allows individual propositions or even parts of propositions to be generalized or specialized by adding and deleting parts or by moving up and down the type hierarchy. This extension can still be done within the framework of the Gardenfors axioms.

As the topic changes, the salience of different concepts and patterns of concepts in the knowledge soup changes. The most salient ones become candidates for crystallization out of the soup into the formalized theory. The knowledge soup thus serves as a resource that the belief revision process draws upon in constructing the crystallized theories. Depending on the salience, different theories can be crystallized from the same soup, each representing a different point of view. Even though the soup may be inconsistent, each theory crystallized from it is consistent, but specialized for a limited domain. People are capable of precise reasoning, but usually with short chains of inference.
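The epistemic-importance idea in item 2 above can also be sketched in a few lines of Python. The three-level scheme follows the laws/facts/defaults distinction mentioned there, but the literal-based conflict test and all the names are my own simplification, not the Foo-Rao system:

```python
# Beliefs are string literals; "-p" denotes the negation of "p".
# Each belief carries an epistemic level: 0 = law, 1 = fact, 2 = default.
# Direct contradiction of literals is a toy stand-in for a full
# consistency check over the belief base.

def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def revise(beliefs, new_lit, new_level):
    """Add new_lit at new_level, discarding a conflicting belief only
    when it is no more important (level number >= new_level)."""
    revised = {}
    for lit, level in beliefs.items():
        if lit == negate(new_lit) and level >= new_level:
            continue  # drop the less-important conflicting belief
        revised[lit] = level
    if negate(new_lit) not in revised:
        revised[new_lit] = new_level
    return revised

base = {"engine-runs": 2, "-thirsty": 2, "has-tank": 1}
base = revise(base, "thirsty", 1)  # an incoming fact overrides a default
print(base)  # → {'engine-runs': 2, 'has-tank': 1, 'thirsty': 1}
```

Note the asymmetry: revising with a default against an entrenched fact leaves the fact in place and rejects the newcomer, which is the sense in which the more important propositions are retained at the expense of the less important.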
They are also capable of dealing with enormous, but loosely organized collections of knowledge. Instead of viewing formal theories and informal associative techniques as competing or conflicting approaches, I view them as complementary mechanisms that should be made to cooperate. This talk discussed possible ways of doing that. Although there is an enormous amount of work that remains to be done, there are also some promising directions for future research.

References:

Foo, Norman Y., & Anand S. Rao (1987) "Open world and closed world negations," Report RC 13122, IBM T. J. Watson Research Center.
Foo, Norman Y., & Anand S. Rao (in preparation) "Semantics of dynamic belief systems."
Foo, Norman Y., & Anand S. Rao (in preparation) "Belief and ontology revision in a microworld."
Rao, Anand S., & Norman Y. Foo (1987) "Evolving knowledge and logical omniscience," Report RC 13155, IBM T. J. Watson Research Center.
Rao, Anand S., & Norman Y. Foo (1987) "Evolving knowledge and autoepistemic reasoning," Report RC 13155, IBM T. J. Watson Research Center.
Rao, Anand S., & Norman Y. Foo (1986) "Modal horn graph resolution," Proceedings of the First Australian AI Congress, Melbourne.
Rao, Anand S., & Norman Y. Foo (1986) "DYNABELS -- A dynamic belief revision system," Report 301, Basser Dept. of Computer Science, University of Sydney.
Sowa, John F. (1984) Conceptual Structures: Information Processing in Mind and Machine, Addison-Wesley, Reading, MA.
Way, Eileen C. (1987) Dynamic Type Hierarchies: An Approach to Knowledge Representation through Metaphor, PhD dissertation, Systems Science Dept., SUNY at Binghamton.

For copies of the IBM reports, write to Distribution Services 73-F11; IBM T. J. Watson Research Center; P.O. Box 218; Yorktown Heights, NY 10598. For the report from Sydney, write to Basser Dept. of Computer Science; University of Sydney; Sydney, NSW 2006; Australia.
For the dissertation by Eileen Way, write to her at the Department of Philosophy; State University of New York; Binghamton, NY 13901.