neuron-request@HPLABS.HP.COM ("Neuron-Digest Moderator Peter Marvit") (06/15/89)
Neuron Digest   Wednesday, 14 Jun 1989   Volume 5 : Issue 27

Today's Topics:
                "Transformations" tech report
                Abstract for CNLS Conference
                Book Reviews, Journal of Mathematical Psychology
                Abstracts from Journal of Experimental and Theoretical AI
                sort of connectionist:
                TR: direct inferences and figurative adjective-noun combinations
                Report available
                Technical Report Available
                TR announcement
                NEURAL NETWORK JOURNALS

*** PLEASE NOTE THAT NEURON DIGEST WILL BE OFF THE AIR FOR TWO WEEKS
*** (due to IJCNN and a bit of R&R). Still backlogged!

Send submissions, questions, address maintenance and requests for old issues
to "neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request".
ARPANET users can get old issues via ftp from hplpm.hpl.hp.com (15.255.16.205).

------------------------------------------------------------

Subject: "Transformations" tech report
From:    Eric Mjolsness <mjolsness-eric@YALE.ARPA>
Date:    Tue, 07 Mar 89 21:23:16 -0500

A new technical report is available:

        "Algebraic Transformations of Objective Functions"
        (YALEU/DCS/RR-686)

        by Eric Mjolsness and Charles Garrett
        Yale Department of Computer Science
        P.O. Box 2158 Yale Station
        New Haven, CT 06520

Abstract: A standard neural network design trick reduces the number of
connections in the winner-take-all (WTA) network from O(N^2) to O(N). We
explain the trick as a general fixpoint-preserving transformation applied
to the particular objective function associated with the WTA network. The
key idea is to introduce new interneurons which act to maximize the
objective, so that the network seeks a saddle point rather than a minimum.
A number of fixpoint-preserving transformations are derived, allowing the
simplification of such algebraic forms as products of expressions,
functions of one or two expressions, and sparse matrix products. The
transformations may be applied to reduce or simplify the implementation of
a great many structured neural networks, as we demonstrate for inexact
graph matching, convolutions and coordinate transformations, and sorting.
Simulations show that fixpoint-preserving transformations may be applied
repeatedly and elaborately, and the example networks still converge
robustly. We discuss implications for circuit design.

To request a copy, please send your physical address by e-mail to
mjolsness-eric@cs.yale.edu OR mjolsness-eric@yale.arpa (old style).
Thank you.
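[[ Editor's note: for readers who haven't seen the WTA trick the abstract
refers to, here is a sketch of the idea in LaTeX form. It is my own
reconstruction from the abstract, not an excerpt from the report. The
quadratic constraint penalty in a standard WTA objective,

    E(v) = -\sum_i b_i v_i + \frac{c}{2}\Big(\sum_i v_i - 1\Big)^2 ,

expands into O(N^2) pairwise terms v_i v_j. Using the identity
x^2/2 = \max_\sigma (\sigma x - \sigma^2/2), a single interneuron \sigma
turns it into

    E(v,\sigma) = -\sum_i b_i v_i
                  + c\,\sigma\Big(\sum_i v_i - 1\Big)
                  - \frac{c}{2}\sigma^2 ,

where the v_i descend and \sigma ascends. At any fixed point
\sigma = \sum_i v_i - 1, so fixpoints are preserved, and each v_i now
connects only to \sigma: O(N) connections. ]]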
------------------------------

Subject: Abstract for CNLS Conference
From:    Stevan Harnad <harnad@Princeton.EDU>
Date:    Mon, 13 Mar 89 13:57:26 -0500

Here is the abstract for my contribution to the session on the "Emergence
of Symbolic Structures" at the 9th Annual International Conference on
Emergent Computation, CNLS, Los Alamos National Laboratory, May 22-26, 1989.

        Grounding Symbols in a Nonsymbolic Substrate

        Stevan Harnad
        Behavioral and Brain Sciences
        Princeton, NJ

There has been much discussion recently about the scope and limits of
purely symbolic models of the mind and about the proper role of
connectionism in mental modeling. In this paper the "symbol grounding
problem" -- the problem of how the meanings of meaningless symbols,
manipulated only on the basis of their shapes, can be grounded in anything
but more meaningless symbols in a purely symbolic system -- is described,
and then a potential solution is sketched: symbolic representations must be
grounded bottom-up in nonsymbolic representations of two kinds: (1) iconic
representations, which are analogs of the sensory projections of objects
and events, and (2) categorical representations, which are learned or
innate feature detectors that pick out the invariant features of object and
event categories. Elementary symbols are the names of object and event
categories, picked out by their (nonsymbolic) categorical representations.
Higher-order symbols are then grounded in these elementary symbols.
Connectionism is a natural candidate for the mechanism that learns the
invariant features. In this way connectionism can be seen as a
complementary component in a hybrid nonsymbolic/symbolic model of the mind,
rather than a rival to purely symbolic modeling. Such a hybrid model would
not have an autonomous symbolic module, however; the symbolic functions
would emerge as an intrinsically "dedicated" symbol system as a consequence
of the bottom-up grounding of categories and their names.

------------------------------

Subject: Book Reviews, Journal of Mathematical Psychology
From:    INAM000 <INAM%MCGILLB.BITNET@VMA.CC.CMU.EDU>
Date:    Sat, 01 Apr 89 12:40:00 -0500

The purpose of this mailing is to (re)draw your attention to the fact that
the Journal of Mathematical Psychology, published by Academic Press,
publishes reviews of books in the general area of mathematical (social,
biological, ...) science. For instance, in a forthcoming issue a review of
the revised edition of Minsky and Papert's PERCEPTRONS will appear (written
by Jordan Pollack).

The following is a partial list of books that we have recently received
that I would like to get reviewed for the Journal -- those most relevant to
this group are marked by *s. As you will see, most of them are edited
readings, which are hard to review. However, if you are interested in
reviewing one or more of the books, I would like to hear from you. Our
reviews are additions to the literature, not "straight" reviews, so writing
a review for us gives you an opportunity to express your views on a field
of research. I would also like to be kept informed of new books in this
general area that you think we should review (or at least list in our Books
Received section). And, of course, one reward for writing a review is that
you receive a complimentary copy of the book.

(SELECTED) Books Received

The following books have been received for review. We encourage readers to
volunteer themselves as reviewers. We consider our reviews contributions to
the literature, rather than "straight" reviews, and thus reviewers have
considerable freedom in terms of format, length, and content of their
reviews. Readers who would like to review any of these or previously listed
books should contact A. A. J. Marley, Department of Psychology, McGill
University, 1205 Avenue Dr. Penfield, Montreal, Quebec H3A 1B1, Canada.
(Email address: inam@musicb.mcgill.ca on BITNET.)

*Amit, D. J. Modelling Brain Function: The World of Attractor Neural
 Networks. Cambridge, England: Cambridge University Press, 1989. 500pp.

Collins, A., and Smith, E. E. (Eds.) Readings in Cognitive Science: A
 Perspective from Psychology and Artificial Intelligence. San Mateo,
 California: Morgan Kaufmann, 1988. 661pp.
*Cotterill, R. M. J. (Ed.) Computer Simulation in Brain Sciences. New York,
 New York: Cambridge University Press, 1988. 576pp. $65.00.

*Grossberg, S. (Ed.) Neural Networks and Natural Intelligence. Cambridge,
 Massachusetts: MIT Press, 1988. 637pp. $35.00.

Hirst, W. (Ed.) The Making of Cognitive Science: Essays in Honor of George
 A. Miller. New York, New York: Cambridge University Press, 1988. 288pp.
 $29.95.

Laird, P. D. Learning from Good and Bad Data. Norwell, Massachusetts:
 Kluwer Academic, 1988. 211pp.

*MacGregor, R. J. Neural and Brain Modeling. San Diego, California:
 Academic Press, 1987. 643pp. $95.50.

Ortony, A., Clore, G. L., and Collins, A. The Cognitive Structure of
 Emotions. New York, New York: Cambridge University Press, 1988. 175pp.
 $24.95.

*Richards, W. (Ed.) Natural Computation. Cambridge, Massachusetts:
 Bradford/MIT Press, 1988. 561pp. $25.00.

Shrobe, H. E., and the American Association for Artificial Intelligence
 (Eds.) Exploring Artificial Intelligence: Survey Talks from the National
 Conferences on Artificial Intelligence. San Mateo, California: Morgan
 Kaufmann, 1988. 693pp.

Vosniadou, S., and Ortony, A. (Eds.) Similarity and Analogical Reasoning.
 New York, New York: Cambridge University Press, 1988. 410pp. $44.50.

Wilkins, D. E. Practical Planning: Extending the Classical AI Planning
 Paradigm. San Mateo, California: Morgan Kaufmann, 1988. 205pp.

------------------------------

Subject: Abstracts from Journal of Experimental and Theoretical AI
From:    cfields@NMSU.Edu
Date:    Sun, 09 Apr 89 15:56:55 -0600

_________________________________________________________________________

The following are abstracts of papers appearing in the second issue of the
Journal of Experimental and Theoretical Artificial Intelligence, to appear
in April, 1989. For submission information, please contact either of the
editors:

Eric Dietrich                         Chris Fields
PACSS - Department of Philosophy      Box 30001/3CRL
SUNY Binghamton                       New Mexico State University
Binghamton, NY 13901                  Las Cruces, NM 88003-0001
dietrich@bingvaxu.cc.binghamton.edu   cfields@nmsu.edu

JETAI is published by Taylor & Francis, Ltd., London, New York, Philadelphia

_________________________________________________________________________

Generating plausible diagnostic hypotheses with self-processing causal
networks

Jonathan Wald, Martin Farach, Malle Tagamets, and James Reggia
Department of Computer Science, University of Maryland

A recently proposed connectionist methodology for diagnostic problem
solving is critically examined for its ability to construct problem
solutions. A sizeable causal network (56 manifestation nodes, 26 disorder
nodes, 384 causal links) served as the basis of experimental simulations.
Initial results were discouraging, with less than two-thirds of simulations
leading to stable solution states (equilibria). Examination of these
simulation results identified a critical period during simulations, and
analysis of the connectionist model's activation rule during this period
led to an understanding of the model's unstable oscillatory behavior.
Slower decrease in the model's control parameters during the critical
period resulted in all simulations reaching a stable equilibrium with
plausible solutions. As a consequence of this work, it is possible to
determine a schedule for control parameter variation during problem solving
more rationally, and the way is now open for real-world experimental
assessment of this problem-solving method.
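[[ Editor's note: a toy illustration of the kind of schedule the abstract
describes -- anneal a control parameter, but decrease it more slowly during
a critical period. The step indices, decay rates, and the parameter name
are invented for the example; the model's actual activation rule and
schedule are in the paper. ]]

import numpy as np

def control_schedule(n_steps=100, critical=(30, 60), fast=0.97, slow=0.995):
    """Multiplicative decay of a control parameter ('gain'), slowed down
    inside a critical window to avoid unstable oscillations."""
    gain, values = 1.0, []
    for t in range(n_steps):
        gain *= slow if critical[0] <= t < critical[1] else fast
        values.append(gain)
    return np.array(values)

schedule = control_schedule()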
_________________________________________________________________________

Organizing and integrating edge segments for texture discrimination

Kenzo Iwama and Anthony Maida
Department of Computer Science, Pennsylvania State University

We propose a psychologically and psychophysically motivated texture
segmentation algorithm. The algorithm is implemented as a computer program
which parses visual images into regions on the basis of texture. The
program's output matches human judgements on a very large class of stimuli.
The program and algorithm offer very detailed hypotheses of how humans
might segment stimuli, and also suggest plausible alternatives to the
explanations presented in the literature. In particular, contrary to Julesz
and Bergen (1983), the program does not use crossings as textons but does
use corners as textons. Nonetheless, the program is able to account for the
same data. The program accounts for much of the linking phenomena of Beck,
Prazdny, and Rosenfeld (1983). It does so by matching structures between
feature maps on the basis of spatial overlap. These same mechanisms are
also used to account for the feature integration phenomena of Treisman
(1985).
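[[ Editor's note: for readers unfamiliar with textons, here is a toy sketch
of texton-density discrimination -- my own illustration, not the authors'
program. It counts exact matches of 2x2 corner templates in a binary image
and calls two regions discriminable when their corner densities differ; the
templates and threshold are invented for the example. ]]

import numpy as np
from scipy.signal import convolve2d

# Four 2x2 corner templates over binary images.
CORNERS = [np.array([[1, 1], [1, 0]]), np.array([[1, 1], [0, 1]]),
           np.array([[1, 0], [1, 1]]), np.array([[0, 1], [1, 1]])]

def corner_density(img):
    """Fraction of 2x2 windows that exactly match some corner template."""
    hits = np.zeros((img.shape[0] - 1, img.shape[1] - 1), dtype=bool)
    for k in CORNERS:
        ones = convolve2d(img, k[::-1, ::-1], mode="valid")
        zeros = convolve2d(1 - img, (1 - k)[::-1, ::-1], mode="valid")
        hits |= (ones == k.sum()) & (zeros == (1 - k).sum())
    return hits.mean()

def discriminable(region_a, region_b, threshold=0.05):
    """Two texture regions segregate if their texton densities differ."""
    return abs(corner_density(region_a) - corner_density(region_b)) > threshold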
------------------------------------------------------

Towards a paradigm shift in belief representation methodology

John Barnden
Computing Research Laboratory, New Mexico State University

Research programs must often divide issues into manageable sub-issues. The
assumption is that an approach developed to cope with a sub-issue can later
be integrated into an approach to the whole issue -- possibly after some
tinkering with the sub-approach, but without affecting its fundamental
features. However, the present paper examines a case where an AI issue has
been divided in a way that is apparently harmless and natural, but is
actually fundamentally out of tune with the realities of the issue. As a
result, some approaches developed for a certain sub-issue cannot be
extended to a total approach without fundamental modification. The issue in
question is that of modeling people's beliefs, hopes, intentions, and other
"propositional attitudes", and/or interpreting natural language sentences
that report propositional attitudes. Researchers have, quite
understandably, de-emphasized the problem of dealing in detail with nested
attitudes (e.g. hopes about beliefs, beliefs about intentions about
beliefs) in favor of concentrating on the sub-issue of non-nested
attitudes. Unfortunately, a wide variety of approaches to attitudes are
prone to a deep but somewhat subtle problem when they are applied to nested
attitudes. This problem can be very roughly described as an AI system's
unwitting imputation of its own arcane "theory" of propositional attitudes
to other agents. The details of this phenomenon have been published
elsewhere by the author; the present paper merely sketches it, and
concentrates instead on the methodological lessons to be drawn, both for
propositional attitude research and, more tentatively, for AI in general.
The paper also summarizes an argument (presented more completely elsewhere)
for an approach to attitude representation based in part on metaphors of
mind that are commonly used by people. This proposed new research direction
should ultimately coax propositional attitude research out of the logical
armchair and into the psychological laboratory.

------------------------------------------------------------

The graph of a boolean function

Frank Harary
Department of Computer Science, New Mexico State University

(Abstract not available)

------------------------------

Subject: sort of connectionist:
From:    James Hendler <hendler@icsib9.Berkeley.EDU>
Date:    Wed, 03 May 89 14:29:29 -0700

                            CALL FOR PAPERS

                           CONNECTION SCIENCE
               (Journal of Neural Computing, Artificial
                 Intelligence and Cognitive Research)

        Special Issue -- HYBRID SYMBOLIC/CONNECTIONIST SYSTEMS

Connectionism has recently seen a major resurgence of interest among both
artificial intelligence and cognitive science researchers. The spectrum of
connectionist approaches is quite large, ranging from structured models, in
which individual network units carry meaning, to distributed models of
weighted networks with learning algorithms. Very encouraging results,
particularly in "low-level" perceptual and signal processing tasks, are
being reported across the entire spectrum of these models.

Unfortunately, connectionist systems have had more limited success in those
"higher cognitive" areas where symbolic models have traditionally shown
promise: expert reasoning, planning, and natural language processing. While
it may not be inherently impossible for purely connectionist approaches to
handle complex reasoning tasks someday, significant breakthroughs will be
required for this to happen. Similarly, getting purely symbolic systems to
handle the types of perceptual reasoning that connectionist networks
perform well would require major advances in AI. One approach to the
integration of connectionist and symbolic techniques is the development of
hybrid reasoning systems in which differing components can communicate in
solving problems. This special issue of the journal Connection Science will
focus on the state of the art in the development of such hybrid reasoners.

Papers are solicited which focus on:

  - Current artificial intelligence systems which use connectionist
    components in the reasoning tasks they perform.

  - Theoretical or experimental results showing how symbolic computations
    can be implemented in, or augmented by, connectionist components.

  - Cognitive studies which discuss the relationship between functional
    models of higher-level cognition and the "lower-level" implementations
    in the brain.

The special issue will give special consideration to papers sharing the
primary emphases of the Connection Science journal, which include:

1) Replicability of results: results of simulation models should be
   reported in such a way that they are repeatable by any competent
   scientist in another laboratory. The journal will be sympathetic to the
   problems that replicability poses for large, complex artificial
   intelligence programs.

2) Interdisciplinary research: the journal is by nature multidisciplinary
   and will accept articles from a variety of disciplines such as
   psychology, cognitive science, computer science, language and
   linguistics, artificial intelligence, biology, neuroscience, physics,
   engineering and philosophy. It will particularly welcome papers which
   deal with issues from two or more subject areas (e.g. vision and
   language).

Papers submitted to the special issue will also be considered for
publication in later issues of the journal. All papers will be refereed.
The expected publication date for the special issue is Volume 2(1), March
1990.

DEADLINES:
    Submission of papers      June 15, 1989
    Reviews/decisions         September 30, 1989
    Final rewrites due        December 15, 1989
Authors should send four copies of the article to:

    Prof. James A. Hendler
    Associate Editor, Connection Science
    Dept. of Computer Science
    University of Maryland
    College Park, MD 20742 USA

Those interested in submitting articles are welcome to contact the editor
via e-mail (hendler@brillig.umd.edu - US Arpa or CSnet) or in writing at
the above address.

------------------------------

Subject: TR: direct inferences and figurative adjective-noun combinations
From:    Susan Weber <hollbach@cs.rochester.edu>
Date:    Mon, 08 May 89 13:11:01 -0400

The following TR can be requested from peg@cs.rochester.edu. However, due
to the cost of copying the 170-page report, the Computer Science Department
is charging $7.50 for the TR:

        A Structured Connectionist Approach to Direct Inferences and
        Figurative Adjective-Noun Combinations

        Susan Hollbach Weber
        University of Rochester
        Computer Science Department TR 289

-----------------------------------------------------------------

Categories have internal structure sufficiently sophisticated to capture a
variety of effects, ranging from the direct inferences arising from
adjectival modification of nouns to the ability to comprehend figurative
usages. The design of the internal structure of category representation is
constrained by the model requirements of the connectionist implementation
and by the observable behaviors exhibited in direct inferences. The former
dictates the use of a spreading activation format, and the latter indicates
some of the topology and connectivity of the resultant semantic network.

The connectionist knowledge representation and inferencing scheme described
in this report is based on the idea that categories and concepts are
context sensitive and functionally structured. Each functional property
value of a category motivates a distinct aspect of that category's internal
structure. This model of cognition, as implemented in a structured
connectionist knowledge representation system, permits the system to draw
immediate inferences and, when augmented with property inheritance
mechanisms, mediated inferences about the full meaning of adjective-noun
combinations. These inferences are used not only to understand implicit
references to correlated properties (a green peach is unripe) but also to
make sense of figurative adjective uses, by drawing on the connotations of
the adjective in literal contexts.
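[[ Editor's note: for readers new to structured connectionist
representations, here is a toy spreading-activation example in the spirit
of the "green peach is unripe" inference -- my own illustration, not
Weber's system. The units, weights, and update rule are all invented for
the example. ]]

import numpy as np

units = ["peach", "green", "unripe", "sweet"]
idx = {u: i for i, u in enumerate(units)}

# Weighted links encode correlations between property values.
W = np.zeros((4, 4))
W[idx["green"], idx["unripe"]] = 0.9    # green (for a peach) suggests unripe
W[idx["unripe"], idx["sweet"]] = -0.8   # unripe fruit is not sweet
W[idx["peach"], idx["sweet"]] = 0.5     # peaches are sweet by default

def spread(active, steps=5):
    """Clamp the input units and let activation spread along the links."""
    a = np.zeros(len(units))
    clamp = [idx[u] for u in active]
    a[clamp] = 1.0
    for _ in range(steps):
        a = np.tanh(a + a @ W)
        a[clamp] = 1.0
    return dict(zip(units, a.round(2)))

print(spread(["green", "peach"]))   # 'unripe' ends high, 'sweet' suppressed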
------------------------------

Subject: Report available
From:    Catherine Harris <harris%cogsci@ucsd.edu>
Date:    Tue, 09 May 89 20:51:02 -0700

        CONNECTIONIST EXPLORATIONS IN COGNITIVE LINGUISTICS

        Catherine L. Harris
        Department of Psychology and Program in Cognitive Science
        University of California, San Diego

Abstract: Linguists working in the framework of cognitive linguistics have
suggested that connectionist networks may provide a computational formalism
well suited for the implementation of their theories. The appeal of these
networks includes the ability to extract the family resemblance structure
inhering in a set of input patterns, to represent both rules and
exceptions, and to integrate multiple sources of information in a graded
fashion. The possible matches between cognitive linguistics and
connectionism were explored in an implementation of the Brugman and Lakoff
(1988) analysis of the diverse meanings of the preposition "over." Using a
gradient-descent learning procedure, a network was trained to map patterns
of the form "trajector verb (over) landmark" to feature vectors
representing the appropriate meaning of "over." Each word was identified as
a unique item, but was not further semantically specified. The pattern set
consisted of a distribution of form-meaning pairs that was meant to be
evocative of English usage, in that the regularities implicit in the
distribution spanned the spectrum from rules, to partial regularities, to
exceptions. Under pressure to encode these regularities with limited
resources, the network used one hidden layer to recode the inputs into a
set of abstract properties. Several of these properties, such as
dimensionality of the trajector and vertical height of the landmark,
correspond to properties B&L found to be important in determining which
schema a given use of "over" evokes. This abstract recoding allowed the
network to generalize to patterns outside the training set, to activate
schemas for partial patterns, and to respond sensibly to "metaphoric"
patterns. Furthermore, a second layer of hidden units self-organized into
clusters which capture some of the qualities of the radial categories
described by B&L. The paper concludes by describing the "rule-analogy
continuum"; connectionist models are interesting systems for cognitive
linguistics because they provide a mechanism for exploiting all points of
this continuum.

A short version of this paper will be published in The Proceedings of the
Fifteenth Annual Meeting of the Berkeley Linguistics Society, 1989.

Send requests to: harris%cogsci.ucsd.edu

------------------------------

Subject: Technical Report Available
From:    <THEPCAP%SELDC52.BITNET@VMA.CC.CMU.EDU>
Date:    Wed, 17 May 89 13:00:00 +0200

                                                              LU TP 89-1

        A NEW METHOD FOR MAPPING OPTIMIZATION PROBLEMS
                   ONTO NEURAL NETWORKS

        Carsten Peterson and Bo Soderberg
        Department of Theoretical Physics, University of Lund
        Solvegatan 14A, S-22362 Lund, Sweden

        Submitted to International Journal of Neural Systems

ABSTRACT: A novel modified method for obtaining approximate solutions to
difficult optimization problems within the neural network paradigm is
presented. We consider the graph partition and the travelling salesman
problems. The key new ingredient is a reduction of solution space by one
dimension by using graded neurons, thereby avoiding the destructive
redundancy that has plagued these problems when using straightforward
neural network techniques. This approach maps the problems onto Potts glass
rather than spin glass theories. A systematic prescription is given for
estimating the phase transition temperatures in advance, which facilitates
the choice of optimal parameters. This analysis, which is performed for
both serial and synchronous updating of the mean field theory equations,
makes it possible to consistently avoid chaotic behaviour. When exploring
this new technique numerically, we find the results very encouraging; the
quality of the solutions is on a par with those obtained using optimally
tuned simulated annealing heuristics. Our numerical study, which extends to
200-city problems, exhibits an impressive level of parameter insensitivity.

For copies of this report send a request to THEPCAP@SELDC52 [don't forget
to give your mailing address].
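[[ Editor's note: a minimal sketch of mean-field annealing with graded
Potts neurons, for readers who want to see the shape of the method. The
energy terms, annealing schedule, and parameter values here are generic
stand-ins invented for the example -- the report's contribution includes a
systematic prescription for choosing them, which this sketch does not
reproduce. ]]

import numpy as np

def potts_partition(w, k=2, t=1.0, t_factor=0.95, sweeps=200, balance=0.5,
                    rng=np.random.default_rng(0)):
    """Graph partition with graded (Potts) neurons: v[i, a] is the
    mean-field probability that node i belongs to class a. The softmax
    enforces sum_a v[i, a] = 1, removing the redundant dimension that
    independent 0/1 units would carry."""
    n = w.shape[0]
    v = rng.uniform(0.4, 0.6, size=(n, k))
    v /= v.sum(axis=1, keepdims=True)
    for _ in range(sweeps):
        for i in range(n):
            # Mean field: attraction to the classes of graph neighbours,
            # repulsion from classes that are already crowded.
            u = w[i] @ v - balance * (v.sum(axis=0) - v[i])
            e = np.exp((u - u.max()) / t)
            v[i] = e / e.sum()
        t *= t_factor          # anneal the temperature
    return v.argmax(axis=1)

# Two triangles joined by a single edge should fall into separate classes.
w = np.zeros((6, 6))
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    w[a, b] = w[b, a] = 1.0
print(potts_partition(w))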
------------------------------

Subject: TR announcement
From:    eric@mcc.com (Eric Hartman)
Date:    Wed, 17 May 89 14:29:47 -0500

The following technical report is now available. Requests may be sent to
eric@mcc.com or via physical mail to the MCC address below.

MCC Technical Report Number: ACT-ST-146-89

        Optoelectronic Implementation of Multi-Layer Neural Networks
                  in a Single Photorefractive Crystal

        Carsten Peterson*, Stephen Redfield, James D. Keeler,
        and Eric Hartman

        Microelectronics and Computer Technology Corporation
        3500 W. Balcones Center Dr.
        Austin, TX 78759-6509

Abstract: We present a novel, versatile optoelectronic neural network
architecture for implementing supervised learning algorithms in
photorefractive materials. The system is based on spatial multiplexing
rather than the more commonly used angular multiplexing of the interconnect
gratings. This simple, single-crystal architecture implements a variety of
multi-layer supervised learning algorithms including mean-field-theory,
back-propagation, and Marr-Albus-Kanerva style algorithms. Extensive
simulations show how beam depletion, rescattering, absorption, and decay
effects of the crystal are compensated for by suitably modified supervised
learning algorithms.

*Present Address: Department of Theoretical Physics, University of Lund,
Solvegatan 14A, S-22362 Lund, Sweden.

------------------------------

Subject: NEURAL NETWORK JOURNALS
From:    will@ida.org (Craig Will)
Date:    Wed, 31 May 89 17:49:59 -0400

                        NEURAL NETWORK JOURNALS

An update on neural network journals: in the near future there will be six
neural network journals published. They are:

NEURAL NETWORKS. Six issues per year; published since 1988. Available with
    INNS membership, $55/year. INNS, c/o Frank Polkinghorn, 9202 Ivanhoe
    Road, Ft. Washington, MD 20744. (301) 839-2114. Primarily academically
    oriented. Editors: Stephen Grossberg in the US, Shun-ichi Amari in
    Japan, Teuvo Kohonen in Europe.

NEURAL NETWORK REVIEW. Now published by Lawrence Erlbaum Associates.
    Previously published somewhat irregularly (1 issue in 1987, 3 in 1988);
    4 issues will be produced in 1989, the first due out in mid-August.
    $36/year personal; $72 institutional. LEA, Inc., Journal Subscription
    Dept., 365 Broadway, Hillsdale, NJ 07642. (201) 666-4110. A journal of
    critical reviews. Editor: Craig Will.

NEURAL COMPUTATION. Published by MIT Press. Quarterly; the first issue is
    just out. $45/year personal; $90 institutional. MIT Press Journals, 50
    Hayward Street, Cambridge, MA 02142. Review articles and short
    theoretical papers. Editor: Terrence Sejnowski.

INTERNATIONAL JOURNAL OF NEURAL NETWORKS. Published by Learned Information
    in England. Quarterly; the first issue was out in January 1989.
    $99/year in the US. US orders: Learned Information, Inc., 143 Old
    Marlton Pike, Medford, NJ 08055. (609) 654-6266. Research and
    application papers. Editor: Kamal Karna.

JOURNAL OF NEURAL NETWORK COMPUTING. Published by Auerbach Publishers.
    Quarterly; the first issue is due in June 1989. Apparently $135/year.
    Auerbach Publishers, 210 South Street, Boston, MA 02111-9990.
    Application papers. Editor: Harold Szu.

IEEE TRANSACTIONS ON NEURAL NETWORKS. Published by IEEE. Papers are being
    solicited, and plans are apparently to publish the first issue in
    January 1990. Editor: Herbert Rausch.

Craig Will
Institute for Defense Analyses
will@ida.org

------------------------------

End of Neurons Digest
*********************