E1AR0002@SMUVM1.BITNET (02/22/86)
:aim 568 :unavailable :title {A Selected Descriptor-Indexed Bibliography to the Literature on Belief Revision} :author Jon Doyle and Philip London :asort Doyle, J.; London, P. :date February 1980 :pages 44 :ADnum (AD-A084821) :keywords artificial intelligence, bibliography, frame problem, belief revision, logic, philosophy :aim 569A :title {A Real Time Garbage Collector Based on the Lifetime of Objects} :author Henry Lieberman and Carl Hewitt :asort Lieberman, H.; Hewitt, C. :date April 1980 :cost $2.25 :pages 38 :ADnum (AD-A084819) :keywords garbage collection, temporary storage, compacting storage, reference counting, LISP, object-oriented programming, stacks, virtual memory :abstract In previous heap storage systems, the cost of creating objects and garbage collection is independent of the lifetime of the object. Since temporary objects account for a large portion of storage use, it is worth optimizing a garbage collector to reclaim temporary storage faster. We present a garbage collection algorithm which makes short-term storage cheaper than long-term storage, operates in real time (object creation and access times are bounded), and works well with multiple processors and a large address space. :aim 570 :unavailable :title {The Evaluation and Cultivation of Spatial and Linguistic Abilities In Individuals With Cerebral Palsy} :author Sylvia Weir :asort Weir, S. :date October 1979 :pages 42 :reference See Logo Memo 55 :keywords cerebral palsy, computer-based education, cognitive development, computer-based diagnostics :aim 572 :unavailable :title {Determining Optical Flow} :author Berthold K.P. Horn and Brian G. Schunck :asort Horn, B.K.P.; Schunck, B.G. :date April 1980 :pages 28 :reference {See {\it Artificial Intelligence}, Vol. 17, pp. 
185-203, 1981} :ADnum (AD-A093925) :keywords optical flow, motion perception, cooperative computation, intrinsic images, image sequences :aim 573A :title {A Model For The Spatio-Temporal Organization of X- And Y-Type Ganglion Cells In The Primate Retina} :author J. Richter and S. Ullman :asort Richter, J.; Ullman, S. :date April 1980 :reference Revised October 1981 :cost $3.00 :pages 65 :keywords retina, vision, x and y cells, motion perception :abstract A model is proposed for the spatial and temporal characteristics of X- and Y-type responses of ganglion cells in the primate retina. The model is related to a theory of directional selectivity proposed by Marr \& Ullman [1980]. The X- and Y-type responses predicted by the model to a variety of stimuli are examined and compared with electrophysiological recordings. A number of implications and predictions are discussed. :aim 574 :title {Against Direct Perception} :author S. Ullman :asort Ullman, S. :date March 1980 :cost $2.75 :pages 44 :abstract Central to contemporary cognitive science is the notion that mental processes involve computations defined over internal representations. This notion stands in sharp contrast with another prevailing view -- the direct theory of perception, whose most prominent proponent has been J.J. Gibson. In this paper the notion of direct perception is examined primarily from a theoretical standpoint, and various objections are raised against it. An attempt is made to place the theory of direct perception in perspective by embedding it in a more comprehensive framework. :aim 575 :unavailable :title {One Child's Learning: Introducing Writing With A Computer} :author R.W. Lawler :asort Lawler, R. :date March 1980 :reference See Logo memo 56 :pages 19 :aim 576 :title {Meta-Rules: Reasoning About Control} :author Randall Davis :asort Davis, R. :date March 1980 :reference See {\it Artificial Intelligence}, Vol. 15, 1980, pp. 179-222. 
:cost $2.75 :pages 58 :ADnum (AD-A084639) :keywords meta-level knowledge, knowledge-based systems, strategy, invocation, content reference, problem solving :abstract How can we ensure that knowledge embedded in a program is applied effectively? Traditionally the answer to this question has been sought in different problem solving paradigms and in different approaches to encoding and indexing knowledge. Each of these is useful with a certain variety of problem, but they all share a common problem: they become ineffective in the face of a sufficiently large knowledge base. :aim 577 :title {A Session With TINKER: Interleaving Program Testing With Program Design} :author Henry Lieberman and Carl Hewitt :asort Lieberman, H.; Hewitt, C. :date April 1980 :cost $2.25 :pages 37 :ADnum (AD-A095521) :reference {See Proceedings of LISP Conference, Stanford, August 1980. pp. 90-99.} :abstract Tinker is an experimental interactive programming system which integrates program testing with program design. New procedures are created by working out the steps of the procedure in concrete situations. Tinker displays the results of each step as it is performed, and constructs a procedure for the general case from sample calculations. The user communicates with Tinker mostly by selecting operations from menus on an interactive graphic display rather than by typing commands. This paper presents a demonstration of our current implementation of Tinker. :aim 580 :unavailable :title {Extra-Retinal Signals Influence Induced Motion: A New Kinetic Illusion} :author K.F. Prazdny and Mike Brady :asort Prazdny, K.F.; Brady, M. :date May 1980 :pages 33 :ADnum (AD-A093191) :keywords induced motion, eye movements, tracking :aim 585 :unavailable :title {Primer for R users} :author Judi Jones :asort Jones, J. :date September 1980 :pages 15 :aim 586 :unavailable :title {The Progressive Construction of Mind} :author Robert W. Lawler :asort Lawler, R. 
:date June 1980 :pages 60 :reference See Logo memo 57, also published in {\it Cognitive Science}, Vol. 5, January 1981, pp. 1-30 :keywords learning, cognitive psychology, genetic epistemology, artificial intelligence, cognitive science, mental models, computers and education, arithmetic :aim 587 :title {Destructive Reordering of CDR-Coded Lists} :author Guy L. Steele, Jr. :asort Steele, G.L., Jr. :date August 1980 :cost $1.50 :pages 15 :keywords list structure, linked lists, CDR-coding, LISP, data structures, sorting, merge sorting, destructive list operations :abstract Linked list structures can be compactly represented by encoding the CDR ("next") pointer in a two-bit field and linearizing list structures as much as possible. This "CDR-coding" technique can save up to 50\% on storage for linked lists. We present here algorithms for destructive reversal and sorting of CDR-coded lists which avoid creation of indirect pointers. The essential idea is to note that a general list can be viewed as a linked list of array-like "chunks". The algorithm applied to such "chunky lists" is a fusion of array-specific and list-specific algorithms; intuitively, the array-specific algorithm is applied to each chunk, and the list algorithm to the list with each chunk considered as a single element. :aim 590 :unavailable :title {Extending A Powerful Idea} :author Robert W. Lawler :asort Lawler, R. :date July 1980 :pages 21 :reference See Logo memo 58 :keywords computers and education, mathematics education, computer designs, cognitive psychology :aim 591 :unavailable :title {Interfacing The One-Dimensional Scanning of an Image With The Applications of Two-Dimensional Operators} :author Shimon Ullman :asort Ullman, S. :date April 1980 :pages 13 :ADnum (AD-A093932) :keywords image processing, convolution, scanning :aim 592 :unavailable :title {Inferring Shape From Motion Fields} :author D.D. Hoffman :asort Hoffman, D.D. 
:date December 1980 :pages 19 :ADnum (AD-A099150) :keywords velocity field, surface normal :aim 593 :title {Toward A Computational Theory of Early Visual Processing In Reading} :author Mike Brady :asort Brady, M. :date September 1980 :cost $2.75 :pages 42 :ADnum (AD-A093185) :keywords :abstract This paper is the first of a series aimed at developing a theory of early visual processing in reading. We suggest that there has been a close parallel in the development of theories of reading and theories of vision in Artificial Intelligence. We propose to exploit and extend recent results in computer vision to develop an improved model of early processing in reading. This first paper considers the problem of isolating words in text based on the information which Marr and Hildreth's (1980) theory asserts is available in the parafovea. :end :aim 596 :unavailable :title {Fundamental Scheme For Train Scheduling} :author Koji Fukumori :asort Fukumori, K. :date September 1980 :pages 24 :keywords time scheduling, railroad, train time-tables, search, propagation of constraints :aim 597 :title {Representation and Recognition of the Movement of Shapes} :author David Marr and Lucia Vaina :asort Marr, D.; Vaina, L. :date October 1980 :cost $2.25 :pages 25 :ADnum (AD-A097853) :keywords 3-D model representation, movements, shape :abstract The problems posed by the representation and recognition of the movements of 3-D shapes are analyzed. A representation is proposed for the movements of shapes that lie within the scope of Marr \& Nishihara's (1978) 3-D model representation of static shapes. The basic problem is how to segment a stream of movement into pieces each of which can be described separately. The representation proposed here is based upon segmenting a movement at moments when a component axis (e.g., an arm) starts to move relative to its local coordinate frame (here, the torso). 
Thus, for example, walking is divided into a sequence of the stationary states between each swing of the arms and legs, and the actual motions between the stationary points (relative to the torso, not the ground). :end :aim 598 :title {The Design Procedure Language Manual} :author John Batali \& Anne Hartheimer :asort Batali, J.; Hartheimer, A. :date September 1980 :cost $3.00 :pages 81 :reference See VLSI Memo 80-31 :ADnum (AD-A093933) :keywords integrated circuits, VLSI, computer aided design, data bases :abstract This manual describes the Design Procedure Language (DPL) for LSI design. DPL creates and maintains a representation of a design in a hierarchically organized, object-oriented LISP data-base. Designing in DPL involves writing programs (Design Procedures) which construct and manipulate descriptions of a project. The programs use a call-by-keyword syntax and may be entered interactively or written by other programs. DPL is the layout language for the LISP-based Integrated Circuit design system (LISPIC) being developed at the Artificial Intelligence Laboratory at MIT. The LISPIC design environment will combine a large set of design tools that interact through a common data-base. :end :aim 599 :title {A Three-Step Procedure For Language Generation} :author Boris Katz :asort Katz, B. :date December 1980 :cost $2.25 :pages 40 :ADnum (AD-A131537) :keywords language generation, parsing, transformations, natural language :abstract This paper outlines a three-step plan for generating English text from any semantic representation by applying a set of syntactic transformations to a collection of kernel sentences. The paper focuses on describing a program which realizes the third step of this plan. Step One separates the given representation into groups and generates from each group a set of kernel sentences. Step Two must decide, based upon both syntactic and thematic considerations, the set of transformations that should be performed upon each set of kernels. 
The output of the first two steps provides the "TASK" for Step Three. Each element of the TASK corresponds to the generation of one English sentence, and in turn may be defined as a triple consisting of: (a) a list of kernel phrase markers; (b) a list of transformations to be performed upon the list of kernels; (c) a "syntactic separator" to separate or connect generated sentences. Step Three takes as input the results of Step One and Step Two. The program which implements Step Three "reads" the TASK, executes the transformations indicated there, combines the altered kernels of each set into a sentence, performs a pronominalization process, and finally produces the appropriate English word string. This approach subdivides a hard problem into three more manageable and relatively independent pieces. It uses linguistically motivated theories at Step Two and Step Three. As implemented so far, Step Three is small and highly efficient. The system is flexible; all the transformations can be applied in any order. The system is general; it can be adapted easily to many domains. :end :aim 601 :title {Conclusions From The Commodity Expert Project} :author James L. Stansfield :asort Stansfield, J.L. :date November 1980 :cost $2.25 :pages 36 :ADnum (AD-A097854) :keywords intelligent assistant, knowledge representation, qualitative reasoning, commodities :abstract The goal of the commodity expert project was to develop a prototype program that would act as an intelligent assistant to a commodity market analyst. Since expert analysts must deal with very large, yet incomplete, data bases of unreliable facts about a complex world, the project would stringently test the applicability of Artificial Intelligence techniques. After a significant effort, however, I am forced to the conclusion that an intelligent, real-world system of the kind envisioned is currently out of reach. Some of the difficulties were due to the size and complexity of the domain. 
As its true scale became evident, the available resources progressively appeared less adequate. The representation and reasoning problems that arose were persistently difficult, and fundamental work is needed before the tools will be sufficient to engineer truly intelligent assistants. Despite these difficulties, perhaps even because of them, much can be learned from the project. To assist future applications projects, I explain in this report some of the reasons for the negative result, and also describe some positive ideas that were gained along the way. In doing so, I hope to convey the respect I have developed for the complexity of real-world domains, and the difficulty of describing the ways experts deal with them. :end :aim 602 :title {Flavors: Message Passing in the Lisp Machine} :author Daniel Weinreb and David Moon :asort Weinreb, D.; Moon, D. :date November 1980 :cost $2.25 :pages 35 :ADnum (AD-A095523) :keywords flavor, message passing, actors, smalltalk, generic functions :abstract The object-oriented programming style used in the Smalltalk and Actor languages is available in Lisp Machine Lisp, and is used by the Lisp Machine software system. It is used to perform generic operations on objects. Part of its implementation is simply a convention in procedure calling style; part is a powerful language feature, called Flavors, for defining abstract objects. This chapter attempts to explain what programming with objects and with message passing means, the various means of implementing these in Lisp Machine Lisp, and when you should use them. It assumes no prior knowledge of any other languages. :end :aim 603 :title {Jokes and the Logic of the Cognitive Unconscious} :author Marvin Minsky :asort Minsky, M. :date November 1980 :cost $2.25 :pages 25 :keywords memory, knowledge, bugs, frame, logic :abstract Freud's theory of jokes explains how they overcome the mental "censors" that make it hard for us to think "forbidden" thoughts. 
But his theory did not work so well for humorous nonsense as for other comical subjects. In this essay I argue that the different forms of humor can be seen as much more similar, once we recognize the importance of knowledge about knowledge and, particularly, aspects of thinking concerned with recognizing and suppressing bugs--ineffective or destructive thought processes. When seen in this light, much humor that at first seems pointless, or mysterious, becomes more understandable. :end :aim 605 :title {Spatial Planning: A Configuration Space Approach} :author Tomas Lozano-Perez :asort Lozano-Perez, T. :date December 1980 :cost $2.25 :pages 37 :ADnum (AD-A093934) :keywords geometric algorithms, collision avoidance, robotics :abstract This paper presents algorithms for computing constraints on the position of an object due to the presence of obstacles. This problem arises in applications which require choosing how to arrange or move objects among other objects. The basis of the approach presented here is to characterize the position and orientation of the object of interest as a single point in a Configuration Space, in which each coordinate represents a degree of freedom in the position and/or orientation of the object. The configurations forbidden to this object, due to the presence of obstacles, can then be characterized as regions in the Configuration Space. :end :aim 606 :unavailable :title {Automatic Planning of Manipulator Transfer Movements} :author Tomas Lozano-Perez :asort Lozano-Perez, T. :date December 1980 :pages 54 :ADnum (AD-A096118) :keywords robotics, collision avoidance, path planning, grasping :aim 608 :unavailable :title {The Interpretation of Biological Motion} :author D.D. Hoffman and B.E. Flinchbaugh :asort Hoffman, D.D.; Flinchbaugh, B.E. :date December 1980 :pages 22 :keywords biological motion, planarity assumption :aim 609 :unavailable :title {Towards A Better Definition of Transactions} :author Barbara S. Kerns :asort Kerns, B.S. 
:date December 1980 :pages 13 :ADnum (AD-A093935) :keywords transactions, data bases, actors, interactive systems :aim 611A :title {GPRINT - A LISP Pretty Printer Providing Extensive User Format-Control Mechanisms} :author Richard C. Waters :asort Waters, R.C. :date October 1981 :revised September 1982 :reference See ACM Transactions on Programming Languages and Systems, Vol. 5, No. 4, October 1983, pp. 513-531. :cost $2.25 :pages 29 :ADnum (AD-A124261) :keywords pretty printing, formatted output, programming environments, LISP :abstract A Lisp pretty printer is presented which makes it easy for a user to control the format of the output produced. The printer can be used as a general mechanism for printing data structures as well as programs. It is divided into two parts: a set of formatting functions, and an output routine. The user specifies how a particular type of object should be formatted by creating a formatting function for the type. When passed an object of that type, the formatting function creates a sequence of directions which specify how the object should be printed if it can fit on one line and how it should be printed if it must be broken up across multiple lines. A simple template language makes it easy to specify these directions. Based on the line length available, the output routine decides what structures have to be broken up across multiple lines and produces the actual output following the directions created by the formatting functions. The paper concludes with a discussion of how the pretty printing method presented could be applied to languages other than Lisp. :end :aim 612A :title {The Curve of Least Energy} :author B.K.P. Horn :asort Horn, B.K.P. :date January 1981 :cost $2.25 :pages 34 :ADnum (AD-A098054) :keywords spline, subjective contours, smooth curve, computer aided design :abstract Here we search for the curve which has the smallest integral of the square of curvature, while passing through two given points with given orientation. 
This is the true shape of a spline used in lofting. In computer-aided design, curves have been sought which maximize "smoothness". The curve discussed here is the one arising in this way from a commonly used measure of smoothness. The human visual system may use such a curve when it constructs a subjective contour. :end :aim 613 :title {A Computational Theory of Visual Surface Interpolation} :author W.E.L. Grimson :asort Grimson, W.E.L. :date June 1981 :cost $3.00 :pages 75 :ADnum (AD-A103921) :keywords stereo vision, surface interpolation, natural computation, quadratic variation :abstract Computational theories of structure from motion and stereo vision only specify the computation of three-dimensional surface information at special points in the image. Yet, the visual perception is clearly of complete surfaces. In order to account for this, a computational theory of the interpolation of surfaces from visual information is presented. The problem is constrained by the fact that the surface must agree with the information from stereo or motion correspondence, and not vary radically between these points. Using the image irradiance equation [Horn, 1977], an explicit form of this surface consistency constraint can be derived [Grimson, 1981c]. :end :aim 614 :unavailable :title {Equation Counting and the Interpretation of Sensory Data} :author W.A. Richards, J.M. Rubin and D.D. Hoffman :asort Richards, W.A.; Rubin, J.M.; Hoffman, D.D. :date June 1981 :pages 26 :ADnum (AD-A103924) :keywords vision, color-vision, structure from motion, perception, equation-counting, motion, signal detection, inference :aim 616 :title {Music, Mind, and Meaning} :author Marvin Minsky :asort Minsky, M. 
:date February 1981 :cost $2.25 :pages 21 :keywords cognition, music, semantics, representation of knowledge :abstract Speculating about cognitive aspects of listening to music, this essay discusses: how metric regularity and thematic repetition might involve representation frames and memory structures, how the result of listening might resemble space-models, how phrasing and expression might evoke innate responses and, finally, why we like music -- or rather, what is the nature of liking itself. :end :aim 617 :title {Control of a Tendon Arm} :author Kok Huang Lim :asort Lim, K.H. :date February 1981 :cost $3.00 :pages 85 :ADnum (AD-A098089) :keywords robotics, tendon actuation, time optimal control :abstract The dynamics and control of a tendon driven three degree of freedom shoulder joint are studied. A control scheme consisting of two phases has been developed. In the first phase, an approximation of the time optimal control trajectory was applied open loop to the system. In the second phase a closed loop linear feedback law was employed to bring the system to the desired final state and to maintain it there. :end :aim 620 :title {Record of the Workshop on Research in Office Semantics} :author Gerald R. Barber :asort Barber, G. :date February 1981 :cost $1.50 :pages 18 :keywords office automation, knowledge-based office systems :abstract This paper is a compendium of the ideas and issues presented at the Chatham Bars Workshop on Office Semantics. The intent of the workshop was to examine the state of the art in office systems and to elucidate the issues system designers were concerned with in developing next generation office systems. The workshop involved a cross-section of people from government, industry and academia. Presentations in the form of talks and video tapes were made of prototypical systems. :end :aim 622 :title {On the Representation of Angular Velocity and its Effect on the Efficiency of Manipulator Dynamics Computation} :author William M. Silver :asort Silver, W.M. :date March 1981 :cost $2.25 :pages 28 :ADnum (AD-A098418) :keywords robotics, Lagrangian dynamics, manipulators, Newton-Euler dynamics :abstract Recently there has been considerable interest in efficient formulations of manipulator dynamics, mostly due to the desirability of real-time control or analysis of physical devices using modest computers. The inefficiency of the classical Lagrangian formulation is well known, and this has led researchers to seek alternative methods. Several authors have developed a highly efficient formulation of manipulator dynamics based on the Newton-Euler equations, and there may be some confusion as to the source of this efficiency. This paper shows that there is in fact no fundamental difference in computational efficiency between the Lagrangian and Newton-Euler formulations. The efficiency of the above-mentioned Newton-Euler formulation is due to two factors: the recursive structure of the computation and the representation chosen for the rotational dynamics. Both of these factors can be achieved in the Lagrangian formulation, resulting in an algorithm identical to the Newton-Euler formulation. Recursive Lagrangian dynamics has been discussed previously by Hollerbach. This paper takes the final step by comparing the representations in detail, showing that the recursive Lagrangian formulation is indeed equivalent to the Newton-Euler formulation. :end :aim 624 :title {Negotiation as a Metaphor for Distributed Problem Solving} :author Randall Davis and Reid G. Smith :asort Davis, R.; Smith, R.G. :date May 1981 :cost $2.75 :pages 43 :reference See {\it Artificial Intelligence}, Vol. 20, 1983, pp. 63-109. 
:ADnum (AD-A100367) :keywords distributed problem solving, contract net, loosely coupled systems :abstract We describe the concept of distributed problem solving and define it as the cooperative solution of problems by a decentralized and loosely coupled collection of problem solvers. This approach to problem solving offers the promise of increased performance and provides a useful medium for exploring and developing new problem-solving techniques. We present a framework called the contract net that specifies communication and control in a distributed problem solver. Task distribution is viewed as an interactive process, a discussion carried on between a node with a task to be executed and a group of nodes that may be able to execute the task. We describe the kinds of information that must be passed between nodes during the discussion in order to obtain effective problem-solving behavior. This discussion is the origin of the negotiation metaphor: Task distribution is viewed as a form of contract negotiation. :end :aim 625 :title {A Preview of Act 1} :author Henry Lieberman :asort Lieberman, H. :date June 1981 :cost $2.25 :pages 30 :keywords actors, object-oriented programming, message passing, knowledge representation, data abstraction, parallelism :abstract The next generation of artificial intelligence programs will require the ability to organize knowledge as groups of active objects. Each object should have only its own local expertise, the ability to operate in parallel with other objects, and the ability to communicate with other objects. Artificial Intelligence programs will also require a great deal of flexibility, including the ability to support multiple representations of objects, and to incrementally and transparently replace objects with new, upward-compatible versions. To realize this, we propose a model of computation based on the notion of an actor, an active object that communicates by message passing. 
Actors blur the conventional distinction between data and procedures. The actor philosophy is illustrated by a description of our prototype actor interpreter Act 1. :end :aim 626 :title {Thinking About Lots Of Things At Once Without Getting Confused - Parallelism in Act 1} :author Henry Lieberman :asort Lieberman, H. :date May 1981 :cost $2.25 :pages 23 :keywords actors, parallelism, futures, serializers, data abstraction, synchronization, message passing :abstract As advances in computer architecture and changing economics make feasible machines with large-scale parallelism, Artificial Intelligence will require new ways of thinking about computation that can exploit parallelism effectively. We present the actor model of computation as being appropriate for parallel systems, since it organizes knowledge as active objects acting independently, and communicating by message passing. We describe the parallel constructs in our experimental actor interpreter Act 1. Futures create concurrency by dynamically allocating processing resources much as Lisp dynamically allocates passive storage. Serializers restrict concurrency by constraining the order in which events take place, and have changeable local state. Using the actor model allows parallelism and synchronization to be implemented transparently, so that parallel or synchronized resources can be used as easily as their serial counterparts. :end :aim 627 :title {The Use of Parallelism to Implement a Heuristic Search} :author William A. Kornfeld :asort Kornfeld, W.A. :date March 1981 :cost $1.50 :pages 17 :ADnum (AD-A099184) :keywords constraint networks, parallelism, search strategies, problem solving :abstract The role of parallel processing in heuristic search is examined by means of an example (cryptarithmetic addition). A problem solver is constructed that combines the metaphors of constraint propagation and hypothesize-and-test. The system is capable of working on many incompatible hypotheses at one time. 
Furthermore, it is capable of allocating different amounts of processing power to running activities and changing these allocations as computation proceeds. It is empirically found that the parallel algorithm is, on the average, more efficient than a corresponding sequential one. Implications of this for problem solving in general are discussed. :end :aim 628 :title {Chaosnet} :author David A. Moon :asort Moon, D. :date June 1981 :cost $2.75 :pages 62 :ADnum (AD-A104024) :keywords local network, system :abstract Chaosnet is a local network, that is, a system for communication among a group of computers located within about 1000 meters of each other. Originally developed by the Artificial Intelligence Laboratory as the internal communications medium of the Lisp Machine System, it has since come to be used to link a variety of machines around MIT and elsewhere. :end :aim 629 :title {Active Touch Sensing} :author William Daniel Hillis :asort Hillis, W.D. :date April 1981 :cost $2.25 :pages 36 :ADnum (AD-A099255) :keywords touch, tactile sensor, tendon, robots, finger :abstract The mechanical hand of the future will roll a screw between its fingers and sense, by touch, which end is which. This paper describes a step toward such a manipulator--a robot finger that is used to recognize small objects by touch. The device incorporates a novel imaging tactile sensor--an artificial skin with hundreds of pressure sensors in a space the size of a finger tip. The sensor is mounted on a tendon-actuated mechanical finger, similar in size and range of motion to a human index finger. A program controls the finger, using it to press and probe the object placed in front of it. Based on how the object feels, the program guesses its shape and orientation and then uses the finger to test and refine the hypothesis. The device is programmed to recognize commonly used fastening devices--nuts, bolts, flat washers, lock washers, dowel pins, cotter pins, and set screws. 
:end :aim 631 :title {Color Vision and Image Intensities: When are Changes Material?} :author John M. Rubin and W.A. Richards :asort Rubin, J.M.; Richards, W.A. :date May 1981 :cost $2.25 :pages 32 :ADnum (AD-A103926) :keywords vision, edge detection, crosspoint operators, color-vision, material changes :abstract Marr has emphasized the difficulty in understanding a biological system or its components without some idea of its goals. In this paper, a preliminary goal for color vision is proposed and analyzed. That goal is to determine where changes of material occur in a scene (using only spectral information). This goal is challenging for two reasons. First, the effects of many processes (shadowing, shading from surface orientation changes, highlights, variations in pigment density) are confounded with the effects of material changes in the available image intensities. Second, material changes are essentially arbitrary. We are consequently led to a strategy of rejecting the presence of such confounding processes. We show that there is a unique condition, the spectral crosspoint, that allows rejection of the hypothesis that measured image intensities arise from one of the confounding processes. (If plots are made of image intensity versus wavelength from two image regions, and the plots intersect, we say that there is a spectral crosspoint.) :end :aim 632 :title {Learning New Principles From Precedents And Exercises: The Details} :author Patrick H. Winston :asort Winston, P.H. :date May 1981 :cost $2.75 :pages 60 :ADnum (AD-A100368) :keywords learning, principles, theory, analogy-based reasoning :abstract Much learning is done by way of studying precedents and exercises. A teacher supplies a story, gives a problem, and expects a student both to solve a problem and to discover a principle. 
The student must find the correspondence between the story and the problem, apply the knowledge in the story to solve the problem, generalize to form a principle, and index the principle so that it can be retrieved when appropriate. This sort of learning pervades Management, Political Science, Economics, Law, and Medicine as well as the development of common-sense knowledge about life in general. This paper presents a theory of how it is possible to learn by precedents and exercises and describes an implemented system that exploits the theory. The theory holds that causal relations identify the regularities that can be exploited from past experience, given a satisfactory representation for situations. The representation used stresses actors and objects which are taken from English-like input and arranged into a kind of semantic network. Principles emerge in the form of production rules which are expressed in the same way situations are. :end :aim 634 :title {Abstraction, Inspection And Debugging In Programming} :author Charles Rich and Richard Waters :asort Rich, C.; Waters, R.C. :date June 1981 :cost $2.25 :pages 31 :ADnum (AD-A102157) :keywords artificial intelligence, Programmer's Apprentice, debugging, plans, automatic programming, program editor, programming environments :abstract We believe that software engineering has much to learn from other mature engineering disciplines, such as electrical engineering, and that the problem solving behaviors of engineers in different disciplines have many similarities. Three key ideas in current artificial intelligence theories of engineering problem solving are: Abstraction--using a simplified view of the problem to guide the problem solving process. Inspection--problem solving by recognizing the form ("plan") of a solution. Debugging--incremental modification of an almost satisfactory solution into a more satisfactory one. 
These three techniques are typically used together in a paradigm which we call AID (for Abstraction, Inspection, Debugging): First an abstract model of the problem is constructed in which some important details are intentionally omitted. In this simplified view inspection methods are more likely to succeed, yielding the initial form of a solution. Further details of the problem are then added one at a time with corresponding incremental modifications to the solution. This paper states the goals and milestones of the remaining three years of a five-year research project to study the fundamental principles underlying the design and construction of large software systems and to demonstrate the feasibility of a computer-aided design tool for this purpose, called the Programmer's Apprentice. :end :aim 635 :unavailable :title {Dynamic Interactions between Limb Segments during Planar Arm Movement} :author John M. Hollerbach and Tamar Flash :asort Hollerbach, J.M.; Flash, T. :date November 1981 :pages 23 :keywords arm movement, motor control, limb dynamics :aim 637 :title {Evidence Relating Subjective Contours And Interpretations Involving Occlusion} :author Kent A. Stevens :asort Stevens, K.A. :date June 1981 :reference Replaces Memo 363 :cost $1.50 :pages 12 :ADnum (AD-A103925) :keywords subjective contours, vision, occlusion, perception :abstract Subjective contours, according to one theory, outline surfaces that are apparently interposed between the viewer and background (because of the disruption of background figures, sudden termination of lines, and other occlusion "cues") but are not explicitly outlined by intensity discontinuities. This theory predicts that if occlusion cues are not interpreted as evidence of occlusion, no intervening surface need be postulated, hence no subjective contours would be seen. This prediction, however, is difficult to test because observers normally interpret the cues as occlusion evidence and normally see the subjective contours. 
This article describes a patient with visual agnosia who is both unable to make the usual occlusion interpretations and unable to see subjective contours. He has, however, normal ability to interpret standard visual illusions, stereograms, and in particular, stereogram versions of the standard subjective contour figures, which elicit in him strong subjective edges in depth (corresponding to the subjective contours viewed in the monocular versions of the figures). :end :aim 638 :title {Sniffer: a System that Understands Bugs} :author Daniel G. Shapiro :asort Shapiro, D.G. :date June 1981 :cost $2.75 :pages 59 :reference This paper was originally submitted as a master's thesis to the MIT Dept. of Electrical Engineering and Computer Science on May 8, 1981. :ADnum (AD-A102158) :keywords debugging, error recognition, bugs, program understanding, expert systems :abstract This paper presents a bug understanding system, called {\it sniffer}, which applies inspection methods to generate a deep understanding of a narrow class of errors. Sniffer is an interactive debugging aid. It can locate and identify error-containing implementations of typical programming cliches, and it can describe them using the terminology employed by expert programmers. The debugging knowledge in Sniffer is organized as a collection of independent experts which understand specific errors. Each expert functions by applying a feature recognition process to the test program (the program under analysis), and to the events which took place during the execution of that code. No deductive machinery is involved. This recognition is supported by two systems: the {\it cliche finder}, which identifies small portions of algorithms from a plan for the code, and the {\it time rover}, which provides access to all program states which occurred during the test program's execution. :end :aim 640 :title {Natural Learning} :author Laurence Miller :asort Miller, L. 
:date October 1981 :cost $3.50 :pages 185 :reference See Logo memo 61 :keywords learning, interactive control, equilibration, interest, micro-worlds :abstract This memo reports the results of a case study of how children learn in the absence of explicit teaching. The three subjects, an eight-year-old, a ten-year-old, and a thirteen-year-old, were observed in each of two experimental micro-worlds. The first of these micro-worlds, called the Chemicals World, included a large table, a collection of laboratory and household chemicals, and apparatus for conducting experiments with chemicals; the second, called the Mork and Mindy World, included a collection of video-taped episodes of the television series Mork and Mindy, a video-tape machine, and an experimenter with whom the subjects could discuss the episodes. The main result of the study is a theory of how children's interests interact with knowledge embodied in their environment, causing them to learn powerful new ideas. An early version of this theory is presented in chapter five. :end :aim 641 :title {The Scientific Community Metaphor} :author William A. Kornfeld and Carl Hewitt :asort Kornfeld, W.A.; Hewitt, C. :date January 1981 :cost $1.50 :pages 11 :ADnum (AD-A108178) :keywords parallelism, problem solving, philosophy of science :abstract Scientific communities have proven to be extremely successful at solving problems. They are inherently parallel systems and their macroscopic nature makes them amenable to careful study. In this paper the character of scientific research is examined, drawing on sources in the philosophy and history of science. We maintain that the success of scientific research depends critically on its concurrency and pluralism. A variant of the language Ether is developed that embodies notions of concurrency necessary to emulate some of the problem solving behavior of scientific communities. 
Capabilities of scientific communities are discussed in parallel with simplified models of these capabilities in this language. :end