dietrich@bingvaxu.cc.binghamton.edu (Eric Dietrich) (08/26/90)
Recent Abstracts from the Journal of Experimental and Theoretical AI
Volume 2 Number 1   January - March 1990


Co-operation of syntax and semantics in flexive languages

EMIL PALES
Institute of Linguistics, Slovak Academy of Sciences
Nalepkova 26, 811 01 Bratislava, Czechoslovakia

Abstract. The idiosyncrasies of the flexive Slavonic languages, such as the rich case system and complex morphology, entail a free word order and require a specialized parsing approach. We claim that complex morphology is not the only disadvantage of the language. Augmented transition networks for Slovak are introduced, and their adequacy for Slovak and comparison to English networks are discussed. Further, we briefly introduce our concept of semantic analysis, based on a semantic case system and verb and noun valency frames. The phenomenon of nominalization is investigated. The central theme of this paper is the close communication of syntax and semantics. Sequentially working modules shift misinterpretations and ambiguities from one to the other; this leads to combinatorial explosion and loss of effectiveness. Communication between the syntactic and semantic modules enables the exclusion of some syntactically correct but semantically nonsensical interpretations in the early phases of sentence processing. Assigning semantic roles to subphrases as soon as they are found requires prior knowledge of the valency of the main verb (or head noun). In Slovak (unlike English), these valencies can be associated by a simple precomputation, based on the character of flexive languages.


Knowledge representation issues in default reasoning

J. TERRY NUTTER
Department of Computer Science, Virginia Tech
Blacksburg, Virginia 24061

Abstract. Most existing approaches to reasoning under uncertainty and with incomplete information appeal to formal theories, with relatively little attention to the phenomena they are intended to capture. This has had two major consequences. First, it has led to spurious disputes, in which participants criticize alternative approaches in the belief that they are competing, when in fact they are investigating different aspects of related phenomena and should ultimately be viewed as cooperative efforts. Second, it has led to wasted effort on models which fail to reflect important aspects of the kinds of reasoning they are trying to capture, because necessary distinctions among kinds of generalization have not been made, and so the representational requirements have not been adequately spelled out. This paper delineates several different kinds of reasoning under uncertainty, establishes some distinctions within the field, and attempts to begin setting some ground rules for representational adequacy.


An analysis of expected-outcome

BRUCE ABRAMSON
Department of Computer Science, University of Southern California
Los Angeles, CA 90089-0782

Abstract. Static evaluators have been used in every game program ever written. These heuristic functions attempt to differentiate between strong and weak moves by assigning them values based on directly detectable game features. Despite their ubiquity, evaluation functions are not well understood; the development of a theory of evaluator design has been too long in coming. In fact, the general consensus is that no theory is possible, because expertise is required to develop even a simple evaluator. One recently introduced 'generic' evaluation function, the expected-outcome model, proposed evaluating a node as the expected value of a game's outcome, given random play from that node on. Experimental studies of evaluators designed under this model yielded encouraging results in tic-tac-toe, Othello, and chess. This paper analyzes the expected-outcome model on a simple class of game trees, and shows that the moves recommended under the assumptions of random play and perfect play are identical. This vindicates what appeared to have been an overly naive assumption, and furthers the claim that statistically interpreted evaluation functions are powerful as well as elegant.
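The expected-outcome idea is easy to prototype: score a position by the average result of random playouts from it. The sketch below is ours, not the paper's code; the board encoding, value scale, and sample count are illustrative assumptions. It estimates the value for X of a tic-tac-toe position:

    import random

    # Rows, columns, and diagonals of a 3x3 board stored as a flat list.
    LINES = [(0,1,2), (3,4,5), (6,7,8),
             (0,3,6), (1,4,7), (2,5,8),
             (0,4,8), (2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def expected_outcome(board, to_move, samples=1000):
        """Estimate the value of `board` for X as the mean result
        (+1 X win, -1 O win, 0 draw) of random completions."""
        total = 0.0
        for _ in range(samples):
            b, player = board[:], to_move
            while winner(b) is None and ' ' in b:
                move = random.choice([i for i, c in enumerate(b) if c == ' '])
                b[move] = player
                player = 'O' if player == 'X' else 'X'
            w = winner(b)
            total += 1.0 if w == 'X' else -1.0 if w == 'O' else 0.0
        return total / samples

A player then chooses the move whose successor maximizes expected_outcome; the paper's analysis concerns when this choice coincides with the perfect-play choice.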
Reasoning situated in time I: basic concepts

JENNIFER J. ELGOT-DRAPKIN* and DONALD PERLIS**
*Department of Computer Science, College of Engineering and Applied Sciences, Arizona State University, Tempe, AZ 85287-5406
**Department of Computer Science and Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742

Abstract. The needs of a real-time reasoner situated in an environment may make it appropriate to view error-correction and non-monotonicity as much the same thing. This has led us to formulate situated (or step) logic, an approach to reasoning in which the formalism has a kind of real-time self-reference that affects the course of deduction itself. Here we seek to motivate this as a useful vehicle for exploring certain issues in commonsense reasoning. In particular, a chief drawback of more traditional logics is avoided: from a contradiction we do not have all wffs swamping the (growing) conclusion set. Rather, we seek potentially inconsistent, but nevertheless useful, logics in which the real-time self-referential feature allows a direct contradiction to be spotted and corrective action taken, as part of the same system of reasoning. Some specific inference mechanisms for real-time default reasoning are suggested, notably a form of introspection relevant to default reasoning. Special treatment of 'now' and of contradictions is the main technical device here. We illustrate this with a computer-implemented real-time solution to R. Moore's 'Brother Problem.'


Abstracts from JETAI
Volume 2 Number 2   April - June 1990


Representational issues in genetic optimization

GUNAR E. LIEPINS* and MICHAEL D. VOSE**
*MS 6207 Bldg 4500N, Oak Ridge National Laboratory, PO Box 2008, Oak Ridge, TN 37831  email: gxl@msr.epm.ornl.gov
**Computer Science Department, University of Tennessee, Knoxville, TN 37996  email: vose@utkcs2.cs.utk.edu

Abstract. Functions are partially characterized as easy or hard for genetic algorithms to optimize. The failure modes of inappropriate embedding, crossover disruption, and deceptiveness are introduced, analyzed, and resolved in part. Virtually all real-valued functions defined on a finite domain that are optimizable (by any method) are shown to be theoretically easy for genetic algorithms given appropriately chosen representations. Unfortunately, problems that are easy in theory can be difficult in practice because of sampling error. Also, the transformations required to induce favorable representations are generally arbitrary permutations, and the space of permutations is so large that searching for good ones is intractable. The space of inversions is amenable to search, but inversions are insufficiently powerful to overcome deceptiveness. On the other hand, affine transformations (over the dyadic group) are shown to be sufficiently powerful to transform at least selected deceptive problems into easy ones. These new results should be useful in guiding empirical studies and are expected to provide a foundation for further theoretical analysis.
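To see what a change of representation means operationally, here is a minimal sketch assuming binary chromosomes of length 8: the GA searches over a string y, while fitness is always evaluated on x = (Ay) xor b, an affine image of y over the dyadic group (arithmetic mod 2). The matrix A, mask b, and trap function are our toy choices, and the sketch shows only the mechanics; which affine maps actually cure deception is exactly what the paper characterizes.

    import random

    N = 8

    def decode(y, A, b):
        """x = (A y) xor b over GF(2): x_i = (sum_j A[i][j]*y[j] mod 2) xor b_i."""
        return [(sum(A[i][j] & y[j] for j in range(N)) % 2) ^ b[i]
                for i in range(N)]

    def fitness(x):
        # A deceptive 'trap': all-ones is optimal, but every other string
        # is rewarded for zeros, pulling bitwise search the wrong way.
        ones = sum(x)
        return N + 1 if ones == N else N - ones

    # Toy choice: identity linear part plus a translation that maps the
    # optimum x = 11111111 to the chromosome y = 00000000.
    A = [[1 if i == j else 0 for j in range(N)] for i in range(N)]
    b = [1] * N

    # The GA's operators act on y; fitness is always taken on decode(y).
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(20)]
    best = max(pop, key=lambda y: fitness(decode(y, A, b)))
    print('best decoded string:', decode(best, A, b))

Crossover and mutation never see x directly, so changing A and b changes which schemata the operators preserve without changing the function being optimized.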
Information synthesis based on hierarchical maximum entropy discretization

DAVID K. Y. CHIU*, BENNY CHEUNG* and ANDREW K. C. WONG**
*Department of Computing & Information Science, University of Guelph, Guelph, Ontario, Canada N1G 2W1  email: dchiu@snowhite.cis.uoguelph.ca
**Department of Systems Design Engineering, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1  email: akcwong@watsup.bitnet

Abstract. This paper outlines a new approach to the synthesis of information from data. Information is defined as a detected organization of data after a process of discretization (or partitioning) and event covering. The discretization is based on a hierarchical maximum entropy scheme which iteratively minimizes the loss of information in Shannon's sense. The event-covering process is based on an evaluation of the deviation of the observed frequencies of an event from the expectation due to prior knowledge (defined by the null hypothesis and/or domain knowledge). The hierarchical maximum entropy discretization scheme provides a rigorous and efficient way of solving the non-uniform scaling problem in multivariate data analysis. Because our method refines the boundaries dynamically depending on the detection of information, it directs the analysis to the outcome subspaces with high information content. In addition, it naturally produces a hierarchical view of information, so that data can be analyzed/synthesized with respect to an outcome context. The method has been tested on simulated and real-life data with very good results.


Accelerating search in function induction

THONG H. PHAN and IAN H. WITTEN
Department of Computer Science, The University of Calgary, Calgary, Canada T2N 1N4  email: ian@cpsc.UCalgary.CA

Abstract. Inducing functions from examples is an important requirement in many learning systems. Blind search is the most general approach, but is vastly less efficient than specialized problem-solving methods. This paper presents a new strategy to accelerate search without sacrificing generality. Experiments with numeric functions show several orders of magnitude performance increase over the standard search technique. Two factors account for this improvement. First, the new strategy manipulates functions in groups instead of singly, so that many can be selected or discarded with only one comparison. Second, functional equivalence is handled automatically by the internal organization of the search space.
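The grouping idea can be imitated in a few lines: if two candidate expressions return the same outputs on a set of probe inputs, treat them as one. The grammar, probe values, and rounding below are our illustrative assumptions, not the authors' search organization:

    # Probe inputs: expressions agreeing on all probes are merged.
    SAMPLES = [0.5, 1.0, 2.0, 3.0]

    def signature(f):
        try:
            return tuple(round(f(v), 9) for v in SAMPLES)
        except (ZeroDivisionError, OverflowError, ValueError):
            return None

    PRIMITIVES = {'x': lambda x: x, '1': lambda x: 1.0}
    OPS = {'+': lambda f, g: (lambda x: f(x) + g(x)),
           '-': lambda f, g: (lambda x: f(x) - g(x)),
           '*': lambda f, g: (lambda x: f(x) * g(x))}

    groups = {}
    for n1, f in PRIMITIVES.items():
        for n2, g in PRIMITIVES.items():
            for op, combine in OPS.items():
                sig = signature(combine(f, g))
                if sig is not None:
                    # The first expression seen stands for its whole class.
                    groups.setdefault(sig, '(%s %s %s)' % (n1, op, n2))

    print(len(PRIMITIVES) ** 2 * len(OPS), 'expressions;',
          len(groups), 'distinct behaviours')

Even at this toy scale, 12 syntactically distinct expressions collapse to 9 behaviours (x+x and x*x differ, but x*1 and 1*x do not), and the saving grows rapidly with expression depth.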
A modular translation from defeasible nets to defeasible logics

DAVID BILLINGTON
School of Computing and Information Technology, Griffith University, Nathan, Brisbane, Queensland 4111, Australia  email: ACSNet db@gucis.cit.gu.oz.au
KOEN DE COSTER
Department of Mathematics and Computer Science, University of Antwerp, UIA, Universiteitsplein 1, B2610 Wilrijk, Belgium
DONALD NUTE
Department of Philosophy, University of Georgia, Athens, GA 30602

Abstract. The sceptical inheritance nets introduced in Horty et al. [Proceedings of AAAI-87 (1987):358-363] are translated into a version of Nute's defeasible logic. Moreover, this translation is modular in the sense of Thomason and Horty [Non-Monotonic Reasoning. Springer-Verlag (1989):234]. Apart from the importance of relating two nonmonotonic reasoning formalisms, this result shows that the reasoning mechanisms underlying defeasible logic and defeasible nets are the same. Yet they were invented independently and set in totally different contexts. This is perhaps some evidence that the underlying nonmonotonic reasoning mechanism is mainly correct. We also observe that since defeasible logics can contain both absolute and defeasible rules, they provide a uniform setting for considering nets which contain both strict and defeasible arcs.


Holland's schema theorem disproved?

JAMES R. LEVENICK
Computer Science Department, Willamette University, Salem, OR  email: levenick@um.cc.umich.edu

(No abstract.) This theoretical note defends Holland's schema theorem against the alleged counterexample of Grefenstette and Baker ("How genetic algorithms work: a critical look at implicit parallelism," Proceedings of the 3rd International Conference on Genetic Algorithms, Kaufmann, 1989, pp. 20-27).


Searle's extension of the Chinese room to connectionist machines

LARRY ROBERTS
Department of Philosophy, SUNY Binghamton, Binghamton, NY 13902-6000

(No abstract.) This critical note argues that Searle is unsuccessful when he tries to extend his famous Chinese Room argument to connectionist architectures.
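Finally, the strict-versus-defeasible distinction in the Billington, De Coster and Nute abstract above can be made concrete with a toy interpreter on the standard penguin example. This is our illustration only, not Nute's defeasible logic or the paper's translation, and its specificity test (a defeasible rule wins if its antecedent strictly entails its rival's) is a deliberate simplification:

    FACTS = {'penguin'}
    STRICT = [({'penguin'}, 'bird')]                  # penguins are birds
    DEFEASIBLE = [({'bird'}, 'flies'),                # birds normally fly
                  ({'penguin'}, 'not flies')]         # penguins normally do not

    def negate(lit):
        return lit[4:] if lit.startswith('not ') else 'not ' + lit

    def strict_closure(literals):
        known, changed = set(literals), True
        while changed:
            changed = False
            for body, head in STRICT:
                if body <= known and head not in known:
                    known.add(head)
                    changed = True
        return known

    def more_specific(b1, b2):
        """b1 defeats b2 if b1 strictly entails b2's antecedent."""
        return b2 <= strict_closure(b1) and not b1 <= strict_closure(b2)

    known = strict_closure(FACTS)
    for body, head in DEFEASIBLE:
        if body <= known:
            rivals = [b2 for b2, h2 in DEFEASIBLE
                      if h2 == negate(head) and b2 <= known]
            if all(more_specific(body, b2) for b2 in rivals):
                known.add(head)

    print(sorted(known))   # ['bird', 'not flies', 'penguin']

Running it derives 'bird' and 'not flies' but not 'flies': the penguin rule defeats the bird rule because penguins are, by a strict rule, birds.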