leff@smu.CSNET.UUCP (03/01/87)
Publication Announcement

University of Southern California
Information Sciences Institute

The following is a list of the latest USC/ISI Research Reports.  If you
would like any of these reports, contact Diane Speekman at:

	USC/Information Sciences Institute
	4676 Admiralty Way, Suite 1001
	Marina del Rey, CA 90292

-------------------------------------------------------------------------------

Multiple Strongly Typed Evaluation Phases

David Booth

ISI/RS-86-165
November 1986

This work introduces the programming language notion of multiple strongly
typed evaluation phases, or simply phases.  In general, a program might be
executed through several phases.  Each phase requires its own input, and acts
as compiletime relative to the next phase, or runtime relative to the
previous phase.  Thus, each phase is the execution of a program, and may play
the role of compiletime or runtime.  The notion of phases offers a framework
for understanding compiled strongly typed languages, and works toward an
improved, strongly typed language basis for reusable software.

The research shows how types can be manipulated as first-class values, and
how the notions of compiletime and runtime can be unified, without
sacrificing strong typing (compiletime type checking) or runtime speed.  Type
checking and expression evaluation are performed using the same evaluation
mechanism.  The apparent conflict of allowing types as first-class values,
yet enforcing compiletime type checking, is resolved by the notion of
multiple phases: though types may be manipulated as first-class values during
one phase, the computed type values become invariants for the next phase.

We demonstrate the notion of phases by defining a sample source language,
Phi, which looks like a typed lambda calculus; an object language, IL, which
is syntactically similar to an untyped lambda calculus, but is strongly
typed; an associated IL Machine that interprets IL programs; and a translator
for converting Phi programs to IL programs.
Strong typing is guaranteed in spite of the fact that the Phi translator does
no type checking.  We also discuss how phases might be used to efficiently
perform partial evaluation.

-------------------------------------------------------------------------------

An Inexpensive Megabit Packet Radio System

Richard Bisbey II
Robert Parker
Randy Cole

ISI/RS-86-166
October 1986

Although packet radio is relatively new to the amateur radio community, there
are now over 10,000 amateur packet radio units in service.  These units cost
between $500 and $800, including the controller and radio, and generally
operate at 1200 baud.  An effort to improve performance significantly while
keeping costs reasonable has resulted in a packet radio system that can
operate at data rates of up to one megabit per second, yet costs less than
$1,000 for the controller and radio.  The controller can also be used to
develop its own software.  The radio is digitally synthesized and operates in
the 400 MHz region.  The system uses DoD Internet IP/TCP datagrams, and has
been used in the amateur radio service at the maximum legal data rate of
56 Kbps.

-------------------------------------------------------------------------------

Recent Developments in NIKL

Thomas Kaczmarek
Raymond Bates
Gabriel Robins

ISI/RS-86-167
November 1986

NIKL (a New Implementation of KL-ONE) is one of the members of the KL-ONE
family of knowledge representation languages.  NIKL has been in use for
several years, and our experiences have led us to define and implement
various extensions to the language, its support environment, and the
implementation.  Our experiences are particular to the use of NIKL; however,
the requirements that we have discovered are relevant to any intelligent
system that must reason about terminology.  This article reports on the
extensions that we have found necessary, based on experiences in several
different testbeds.  The motivations for the extensions and future plans are
also presented.
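[Editor's note on the Booth report above: the two-phase idea can be sketched
in a few lines of Python.  The names and the dictionary representation of
types here are illustrative assumptions, not drawn from Phi or IL.]

```python
# Phase 1: types are ordinary first-class values and may be computed freely.
def phase1(element_type: str, size: int) -> dict:
    """Compute a type value; its result becomes an invariant for phase 2."""
    return {"kind": "vector", "elem": element_type, "size": size}

# Phase 2: the type computed in phase 1 is now frozen; phase 1 acted as
# compiletime for this phase, and values are checked against its output.
def phase2(typ: dict, value: list) -> list:
    """Runtime relative to phase 1: enforce the phase-1 type invariant."""
    assert typ["kind"] == "vector" and len(value) == typ["size"], "type error"
    return value

vec_type = phase1("int", 3)           # plays the role of compiletime
result = phase2(vec_type, [1, 2, 3])  # plays the role of runtime
```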
-------------------------------------------------------------------------------

A Logical-Form and Knowledge-Base Design for Natural Language Generation

Norman Sondheimer
Bernhard Nebel

ISI/RS-86-169
November 1986

This paper presents a technique for interpreting output demands made on a
natural language sentence generator in a formally transparent and efficient
way.  These demands are stated in a logical language.  A network knowledge
base organizes the concepts of the application domain into categories known
to the generator.  The logical expressions are interpreted by the generator
using the knowledge base and a restricted, but efficient, hybrid knowledge
representation system.  The success of this experiment has led to plans for
the inclusion of this design in both the evolving Penman natural language
generator and the Janus natural language interface.

-------------------------------------------------------------------------------

Rhetorical Structure Theory: Description and Construction of Text

William C. Mann
Sandra Thompson

ISI/RS-86-174
October 1986

Rhetorical Structure Theory (RST) is a theory of text structure that is
being extended to serve as a theoretical basis for computational text
planning.  Text structures in RST are hierarchic, built on small patterns
called schemas.  The schemas that compose the structural hierarchy of a text
describe the functions of the parts rather than their form characteristics.
Relations between text parts, comparable to conjunctive relations, are a
prominent part of RST's definitional machinery.  Recent work on RST has put
it onto a new definitional basis.  This paper describes the current status of
descriptive RST, along with efforts to create a constructive version for use
as a basis for programming a text planner.
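[Editor's note on the RST report above: a hierarchic text structure built
from relation-labeled schemas can be represented roughly as below.  The
class names, the nucleus/satellite split, and the relation names are
illustrative assumptions, not definitions from the report.]

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Span:
    """A leaf: a minimal unit of text."""
    text: str

@dataclass
class Schema:
    """An internal node: a named relation joining two text parts."""
    relation: str                          # e.g. "Evidence", "Elaboration"
    nucleus: Union["Span", "Schema"]       # the more central part
    satellite: Union["Span", "Schema"]     # the supporting part

def linearize(node) -> str:
    """Flatten the hierarchic structure back into running text."""
    if isinstance(node, Span):
        return node.text
    return linearize(node.nucleus) + " " + linearize(node.satellite)

tree = Schema("Evidence",
              Span("The program is reliable."),
              Span("It has run for a year without failure."))
```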
-------------------------------------------------------------------------------

Automatic Compilation of Logical Specifications into Efficient Programs

Donald Cohen

ISI/RS-86-175
November 1986

We describe an automatic programmer, or "compiler," which accepts as input a
predicate calculus specification of a set to generate or a condition to
test, along with a description of the underlying representation of the data.
This compiler searches a space of possible algorithms for the one that is
expected to be most efficient.  We describe the knowledge that is and is not
available to this compiler, and its corresponding capabilities and
limitations.  This compiler is now regularly used to produce large programs.

-------------------------------------------------------------------------------

Towards Explicit Integration of Knowledge in Expert Systems

Jack Mostow
Bill Swartout

ISI/RS-86-176
November 1986

The knowledge integration problem arises in rule-based expert systems when
two or more recommendations made by the right-hand sides of rules must be
combined.  Current expert systems address this problem either by engineering
the rule set to avoid it, or by using a single integration technique built
into the interpreter, e.g., certainty factor combination.  We argue that
multiple techniques are needed, and that their use -- and underlying
assumptions -- should be made explicit.  We identify some of the techniques
used in MYCIN's therapy selection algorithm to integrate the diverse goals
it attempts to satisfy, and suggest how knowledge of such techniques could
be used to support the construction, explanation, and maintenance of expert
systems.

-------------------------------------------------------------------------------
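[Editor's note on the Mostow/Swartout report above: "certainty factor
combination," the single built-in integration technique it cites, is the
standard MYCIN combining rule.  The sketch below is that published rule in
Python, not code from the report.]

```python
def cf_combine(a: float, b: float) -> float:
    """Combine two certainty factors for the same hypothesis (-1 <= cf <= 1)."""
    if a >= 0 and b >= 0:
        return a + b * (1 - a)           # two supporting rules reinforce
    if a <= 0 and b <= 0:
        return a + b * (1 + a)           # two opposing rules reinforce
    return (a + b) / (1 - min(abs(a), abs(b)))  # mixed evidence attenuates

# Two rules each moderately supporting a therapy yield stronger support:
cf = cf_combine(0.6, 0.5)   # 0.6 + 0.5 * (1 - 0.6) = 0.8
```

Note how this single rule hard-wires one integration policy into the
interpreter; the report argues that such policies, and their assumptions,
should instead be explicit and plural.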