neuron-request@HPLMS2.HPL.HP.COM ("Neuron-Digest Moderator Peter Marvit") (08/30/90)
Neuron Digest   Wednesday, 29 Aug 1990   Volume 6 : Issue 51

Today's Topics:
                                 neural stuff
              Re: neuron digest submission (protein folding)
         This may be of wider interest, at least for E. Coasters
                                  Preprints
                            Item for Distribution
               Quantitative Linguistics Conference Announcement
          VLSI for AI and Neural Nets Workshop. Oxford, Sept. 5-7

Send submissions, questions, address maintenance and requests for old issues
to "neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: neural stuff
From:    ellen_warneke%a1@hp1900.desk.hp.com
Date:    23 Aug 90 09:37:00 -0800

[[ Editor's Note: This was forwarded by my friendly local librarian. Does
anyone have any more information? -PM ]]

DR152.02: The MITSUBISHI ELECTRIC Central Laboratory has developed an optical
neuro chip that can recognize characters more accurately than conventional
devices. The chip, which is designed to recognize the characters "A," "E,"
and "J," integrates 32 light-emitting devices, 32 photodetectors, and 1,024
spatial modulation devices on an 8mm-x-8mm GaAs (gallium arsenide) chip.
Having a quantum-well structure, the chip has 32 neurons. The spatial
modulation devices connect the neurons, and the three character patterns are
embedded on the devices. Since the lab had already developed an optical chip
for identifying three characters in November 1988, this neuro chip is a
second prototype. (8/21/90: Nikkei Sangyo [p.5])

------------------------------

Subject: Re: neuron digest submission (protein folding)
From:    Douglas G. Danforth <danforth@riacs.edu>
Date:    Thu, 23 Aug 90 16:58:46 -0700

[[ ... regarding last issue's note ... ]]

Steve,
    I would be interested in obtaining more details about what the task
actually entails.
One view that helps clarify such issues for me is to set aside the problem of
implementation and ask how the Nearest Neighbor algorithm would perform. If I
understand you, the entire amino acid sequence could be encoded as a string
of 20-bit "bytes," each byte representing one of the 20 possible amino acids.
If each amino acid can be considered "independent" of every other one, then
code a single bit for each acid; otherwise distribute the bits in such a way
that amino acid similarity is captured by the Hamming distance between
different bytes. The resultant string would be some 2,000 bits long.

The question is, what is associated with this string? Here is where you would
have to fill me in.

Now, two strings are similar if their total Hamming distance is small. The
prediction for an unknown string (under the Nearest Neighbor rule) would
assign the data of the closest known string to the unknown string. That would
be its method of generalization.

Note that under this coding scheme, transposition of bytes matters. The
decision rule takes place in the JOINT space of all bits. If transposition of
two amino acids ...ab... => ...ba... does NOT matter, then there will be a
tendency in the data to have similar labels assigned to these two strings
(unless some other parts of the strings are more critical for distinguishing
a difference). That is, as strings are stored (in a memory), there will be a
tendency for MARGINAL subspaces to emerge that have the same common labels.
This is a general argument that will not serve you unless there is a large
amount of data or there are many regularities in the data. Spaces of 2,000
dimensions are very large indeed.

Notice the amount of mileage one can get without even considering multiple
layers or error backpropagation. Nearest Neighbor is quite good.

What is associated with the amino acid sequence?

Doug Danforth
danforth@riacs.edu

------------------------------

Subject: This may be of wider interest, at least for E. Coasters
From:    Neurotechnology Center - Martin Dudziak <DUDZIAKM@isnet.inmos.COM>
Date:    Fri, 24 Aug 90 15:23:32 -0600

                              ANNOUNCEMENT

        Public Presentation on Transputer-Based Neural Technology
             for Applications in: Image Processing, Systems
               Control, Signal Processing, Forecasting,
                    Generalized Pattern Recognition

Date:     Thursday, September 27, 1990
Time:     9:30 AM
Location: SGS-Thomson / INMOS Division Regional Technology Center
          9861 Broken Land Parkway, Suite 320
          Columbia MD 21046

Programme: There will be a presentation and demonstration of a
transputer-based system employing the Holographic Neural Model developed by
AND Corporation of Hamilton, Ontario. The model is essentially a
non-connectionist, non-gradient-descent approach to machine learning and the
dense storage of stimulus-response associations, employing digital
holographic principles. The technical team from AND will present a lecture on
their technology, to be followed by a discussion period and demonstrations of
their simulation models. This should last until approx. 12:30 PM. In the
afternoon there will be an opportunity for informal discussions with
SGS/INMOS technical staff and AND staff regarding particular user
applications and areas of interest, and how this technology can be applied to
a variety of practical tasks.

Further Information: Contact Martin Dudziak at SGS-Thomson/INMOS
(Before 8/31 or after 9/9)
Phone: 301-995-6952   FAX: 301-290-7047
Email: dudziakm@isnet.inmos.com

------------------------------

Subject: Preprints
From:    Gregory Kohring <HKF218%DJUKFA11.BITNET@VMA.CC.CMU.EDU>
Date:    Fri, 24 Aug 90 12:08:15 +0600

The following preprint is currently available.  -- Greg Kohring

       Performance Enhancement of Willshaw Type Networks through
                        the use of Limit Cycles

                             G.A. Kohring
                        HLRZ an der KFA Julich
               (Supercomputing Center at the KFA Julich)

Simulation results of a Willshaw type model for storing sparsely coded
patterns are presented.
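[A Willshaw-type store of the kind named in this abstract is simple enough to
sketch in a few lines. The following toy example is an illustration only, not
code from the preprint; the patterns and network size are hypothetical. The
couplings are the clipped ("OR-ed") sum of outer products of sparse binary
patterns, and recall fires every unit whose input sum reaches the activity of
the cue:]

```python
# Toy Willshaw-type associative net for sparsely coded binary patterns.
# Storage clips the summed outer products to 0/1; recall thresholds each
# unit's input sum at the number of active bits in the cue.

N = 16          # number of units
patterns = [    # hypothetical sparse patterns, 3 active bits each
    {0, 5, 9},
    {1, 5, 12},
    {2, 7, 14},
]

# Storage: W[i][j] = 1 iff units i and j are co-active in some pattern.
W = [[0] * N for _ in range(N)]
for p in patterns:
    for i in p:
        for j in p:
            W[i][j] = 1

def recall(cue):
    """One retrieval step: fire units whose input sum reaches the cue activity."""
    k = len(cue)
    return {i for i in range(N) if sum(W[i][j] for j in cue) >= k}

# Each stored pattern reproduces itself; spurious extra bits can appear
# when stored patterns overlap heavily, but these toy patterns do not.
for p in patterns:
    print(sorted(recall(p)))
```

[Because the couplings are clipped, sparse coding is what keeps the weight
matrix from saturating quickly; that is the property the high storage
capacity of such nets rests on.]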
It is suggested that random patterns can be stored in Willshaw type models by
transforming them into a set of sparsely coded patterns and retrieving this
set as a limit cycle. In this way, the number of steps needed to recall a
pattern will be a function of the amount of information the pattern contains.
A general algorithm for simulating neural networks with sparsely coded
patterns is also discussed, and, on a fully connected network of N=36,864
neurons (1.4 billion couplings), it is shown to achieve effective updating
speeds as high as 160 billion coupling evaluations per second on one Cray-YMP
processor.

==================================================================

Additionally, the following short review article is also available. It is
aimed at graduate students in computational physics who need an overview of
the neural network literature from a computational sciences viewpoint, as
well as some simple programming hints in order to get started with their
neural network studies. It will shortly appear in World Scientific's
International Journal of Modern Physics C: Computational Physics.

                 LARGE SCALE NEURAL NETWORK SIMULATIONS

                             G.A. Kohring
                        HLRZ an der KFA Julich
               (Supercomputing Center at the KFA Julich)

The current state of large scale, numerical simulations of neural networks is
reviewed. Hardware and software improvements make it likely that biological
size networks, i.e., networks with more than $10^{10}$ couplings, can be
simulated in the near future. Sample programs for the efficient simulation of
a few simple models are presented as an aid to researchers just entering the
field.

Send correspondence and requests for preprints to:

   G.A. Kohring
   HLRZ an der KFA Julich
   Postfach 1913
   D-5170 Julich, West Germany
   e-mail: hkf218@djukfa11.bitnet

Address after September 1, 1990:
   Institut fur Theoretische Physik
   Universitat zu Koln
   D-5000 Koln 41, West Germany

------------------------------

Subject: Item for Distribution
From:    B M Smith <bms@dcs.leeds.ac.uk>
Date:    Fri, 24 Aug 90 13:26:57 +0100

                         FINAL CALL FOR PAPERS
                               AISB'91
           8th SSAISB CONFERENCE ON ARTIFICIAL INTELLIGENCE
                      University of Leeds, UK
                        16-19 April, 1991

The Society for the Study of Artificial Intelligence and Simulation of
Behaviour (SSAISB) will hold its eighth biennial conference at Bodington
Hall, University of Leeds, from 16 to 19 April 1991. There will be a Tutorial
Programme on 16 April followed by the full Technical Programme. The Programme
Chair will be Luc Steels (AI Lab, Vrije Universiteit Brussel).

Scope: Papers are sought in all areas of Artificial Intelligence and
Simulation of Behaviour, but especially on the following AISB91 special
themes:

   * Emergent functionality in autonomous agents
   * Neural networks and self-organisation
   * Constraint logic programming
   * Knowledge level expert systems research

Papers may describe theoretical or practical work but should make a
significant and original contribution to knowledge about the field of
Artificial Intelligence. A prize of 500 pounds for the best paper has been
offered by British Telecom Computing (Advanced Technology Group). It is
expected that the proceedings will be published as a book.

Submission: All submissions should be in hardcopy in letter quality print,
written in 12 point or pica typewriter face on A4 or 8.5" x 11" paper, and
should be no longer than 10 sides, single-spaced. Each paper should contain
an abstract of not more than 200 words and a list of up to four keywords or
phrases describing the content of the paper. Five copies should be submitted.
Papers must be written in English. Authors should give an electronic mail
address where possible.
Submission of a paper implies that all authors have obtained all necessary
clearances from the institution and that an author will attend the conference
to present the paper if it is accepted. Papers should describe work that will
be unpublished on the date of the conference.

Dates:
   Deadline for Submission:          1 October 1990
   Notification of Acceptance:       7 December 1990
   Deadline for camera ready copy:   16 January 1991

Location: Bodington Hall is on the edge of Leeds, in 14 acres of private
grounds. The city of Leeds is two and a half hours by rail from London, and
there are frequent flights to Leeds/Bradford Airport from London Heathrow,
Amsterdam and Paris. The Yorkshire Dales National Park is close by, and the
historic city of York is only 30 minutes away by rail.

Information: Papers and all queries regarding the programme should be sent to
Judith Dennison. All other correspondence and queries regarding the
conference to the Local Organiser, Barbara Smith.

   Ms. Judith Dennison                   Dr. Barbara Smith
   Cognitive Sciences                    Division of AI
   University of Sussex                  School of Computer Studies
   Falmer                                University of Leeds
   Brighton BN1 9QN                      Leeds LS2 9JT
   UK                                    UK
   Tel: (+44) 273 678379                 Tel: (+44) 532 334627
   Email: judithd@cogs.sussex.ac.uk      FAX: (+44) 532 335468
                                         Email: aisb91@ai.leeds.ac.uk

------------------------------

Subject: Quantitative Linguistics Conference Announcement
From:    Connectionists-Request@CS.CMU.EDU
Date:    Fri, 24 Aug 90 10:31:02 -0400

                               First
                QUANTITATIVE LINGUISTICS CONFERENCE
                             (QUALICO)
                      September 23 - 27, 1991
                    University of Trier, Germany

                          organized by the
       GLDV - Gesellschaft fuer Linguistische Datenverarbeitung
             (German Society for Linguistic Computing)
            and the Editors of "Quantitative Linguistics"

OBJECTIVES

QUALICO is being held for the first time as an International Conference to
demonstrate the state of the art in Quantitative Linguistics.
This domain of language study and research is gaining considerable interest
due to recent advances in linguistic modelling, particularly in computational
linguistics, cognitive science, and developments in mathematics like
non-linear systems theory. Progress in hardware and software technology,
together with ease of access to data and numerical processing, has provided
new means of empirical data acquisition and the application of mathematical
models of adequate complexity.

The German Society for Linguistic Computing (Gesellschaft fuer Linguistische
Datenverarbeitung - GLDV) and the editors of 'Quantitative Linguistics' have
taken the initiative in preparing this conference to take place at the
University of Trier, in Trier (Germany), September 23rd - 27th, 1991. In view
of the stimulating new developments in Europe and the academic world, the
organizers' aim is to encourage and promote mutual exchange of ideas in this
field of interest, which has been limited in the past.

Challenging advances in interdisciplinary quantitative analyses, numerical
modelling and experimental simulations from different linguistic domains will
be reported on by the following keynote speakers:

   Gabriel Altmann (Bochum), Michail V. Arapov (Moscow) (pending acceptance),
   Hans Goebl (Salzburg), Mildred L.G. Shaw (Calgary), John S. Nicolis
   (Patras), Stuart M. Shieber (Harvard) (pending acceptance).

CALL FOR PAPERS

The International Program Committee invites communications (long papers: 20
minutes plus 10; short papers: 15 minutes plus 5; demonstrations and posters)
on basic research and development as well as on operational applications of
Quantitative Linguistics, including - but not limited to - the following
topics:

A. Methodology
   1. Theory Construction - 2. Measurement, Scaling - 3. Taxonomy,
   Categorizing - 4. Simulation - 5. Statistics, Probabilistic Models,
   Stochastic Processes - 6. Fuzzy Theory: Possibilistic Models -
   7. Language and Grammar Formalisms - 8. Systems Theory: Cybernetics and
   Information Theory, Synergetics, New Connectionism

B. Linguistic Analysis and Modelling
   1. Phonetics - 2. Phonemics - 3. Morphology - 4. Syntax - 5. Semantics -
   6. Pragmatics - 7. Lexicology - 8. Dialectology - 9. Typology - 10. Text
   and Discourse - 11. Semiotics

C. Applications
   1. Speech Recognition and Synthesis - 2. Text Analysis and Generation -
   3. Language Acquisition and Teaching - 4. Text Understanding and
   Knowledge Representation

Authors are asked to submit extended abstracts (1500 words; 4 copies) of
their papers in one of the conference's working languages (German, English)
not later than December 31, 1990 to:

   QUALICO - The Program Committee
   University of Trier
   P.O. Box 3825
   D-5500 TRIER
   Germany

   uucp:  qualico@utrurt.uucp  or: ..!unido!utrurt!qualico
   X.400: qualico@ldv.rz.uni-trier.dbp.de
     or:  <c=de;a=dbp;p=uni-trier;ou=rz;ou=ldv;s=qualico>

Notice of acceptance will be given by March 31, 1991, and full versions of
invited and accepted papers (camera-ready) are due by June 30, 1991, in order
to have the Conference Proceedings published in time to be available for
participants at the beginning of QUALICO. This 'Call for Papers' is
distributed world-wide in order to reach researchers active in universities
and industry.

SOCIAL PROGRAMME

The oldest city in Germany, founded 16 B.C. by the Romans as Augusta
Treverorum in the Mosel valley, is situated in the westernmost region of
Germany near both the French and Luxembourg borders. In the center of Europe,
this ancient city will host the participants of QUALICO at the University of
Trier, surrounded by the vineyards of the Mosel-Saar-Ruwer wine district at
the beginning of the vintage. The excursion day scheduled midway through the
conference (September 25, 1991) will provide an opportunity to visit points
of historical interest in the city and its vicinity during a boat trip on the
Mosel river.

PROGRAM COMMITTEE

Chair: B.B. Rieger, University of Trier

   S. Embleton, University of York
   D. Gibbon, University of Bielefeld
   R. Grotjahn, University of Bochum
   J. Haller, IAI Saarbruecken
   P. Hellwig, University of Heidelberg
   E. Hopkins, University of Bochum
   J. Kindermann, GMD Bonn-St.Augustin
   U. Klenk, University of Goettingen
   R. Koehler, University of Trier
   J.P. Koester, University of Trier
   J. Krause, University of Regensburg
   W. Lehfeldt, University of Konstanz
   W. Lenders, University of Bonn
   C. Lischka, GMD Bonn-St.Augustin
   W. Matthaeus, University of Bochum
   R.G. Piotrowski, University of Leningrad
   D. Roesner, FAW Ulm
   G. Ruge, Siemens AG, Muenchen
   B. Schaeder, University of Siegen
   H. Schnelle, University of Bochum
   J. Sambor, University of Warsaw

ORGANIZING COMMITTEE

Chair: R. Koehler, University of Trier

CONFERENCE FEES

Early registration (paid before July 31, 1991):      DM 300,-
   - Members of supporting organizations             DM 250,-
   - Students (without Proceedings)                  DM 150,-
Registration (paid after July 31, 1991):             DM 400,-
   - Members of supporting organizations             DM 350,-
   - Students (without Proceedings)                  DM 250,-

------------------------------

Subject: VLSI for AI and Neural Nets Workshop. Oxford, Sept. 5-7
From:    delgado@bingvaxu.cc.binghamton.edu (Jose Delgado)
Date:    22 Aug 90 14:39:53 +0000

                     International Workshop on
       VLSI FOR ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS
            University of Oxford -- September 5-7, 1990
___________________________________________________________________

Research on architectures dedicated to artificial intelligence (AI)
processing has been increasing in recent years, since conventional data- or
numerically-oriented architectures are not able to provide the computational
power and/or functionality required. For the time being these architectures
have to be implemented in VLSI technology, with its inherent constraints on
speed, connectivity, fabrication yield and power. This in turn impacts on the
effectiveness of the computer architecture.
The aim of this second workshop on VLSI for AI and Neural Networks is again
to provide a forum where AI experts, VLSI designers and Computer Architecture
designers can come together to discuss the present status and future trends
in VLSI and ULSI implementations of machines for AI computing. The workshop
will be held in an informal environment, with poster and regular sessions
along with time for impromptu discussions. To encourage interaction, the
workshop will be limited to a maximum of 70 participants.

The workshop sessions, meals and accommodation will all be provided in the
unique atmosphere of Jesus College between the evening of the 4th September
and lunchtime on the 7th September 1990. The college was founded in 1571 by
Queen Elizabeth I; meals will be taken in the traditional medieval hall, a
perfect setting for the Conference Dinner on the Thursday evening.

SPONSORS

The Workshop is organised by the University of Oxford Department for External
Studies in conjunction with the Department of Engineering Science and the
Department of Electrical Engineering at SUNY-Binghamton. The workshop is
sponsored by the University of Oxford in association with SUNY Binghamton,
ACM-SIGARCH and the IEE.

PROGRAMME COMMITTEE

   Igor Aleksander, Imperial College London (UK)
   Howard Card, University of Manitoba (Canada)
   Jose Delgado-Frias, SUNY-Binghamton (USA)
   Richard Frost, University of Windsor (Canada)
   Peter Kogge, IBM (USA)
   Will Moore, Oxford University (UK)
   Alan Murray, University of Edinburgh (UK)
   John Oldfield, Syracuse University (USA)
   Lionel Tarassenko, Oxford University (UK)
   Philip Treleaven, University College London (UK)
   Benjamin Wah, University of Illinois (USA)
   Michel Weinfield, Ecole Polytechnique (France)

ENQUIRIES

Registration:
   Ms. Anna Morris (VLSI for AI & NN)
   CPD Unit, Department for External Studies,
   University of Oxford, Rewley House,
   1 Wellington Square, OXFORD OX1 2JA, England.
   Tel.: +44 865 270360   Fax: +44 865 270708

Technical queries to:
   Dr. Jose G. Delgado-Frias
   Dept. of Electrical Engineering
   State University of New York at Binghamton
   Binghamton, NY 13901, USA
   Tel.: (607) 777 4806 or 4856
   Email: delgado@bingvaxu.cc.binghamton.edu (or) delgado@bingvaxa.bitnet

or
   Dr. Will Moore
   Department of Engineering Science,
   University of Oxford, Parks Road,
   OXFORD OX1 3PJ, England.
   Tel.: +44 865 273187 (or 273000)
   Telex: 83295G   Fax: +44 865 273010
   Email: moore@vax.ox.ac.uk (not available via uucp).

BACKGROUND

The workshop, organised by the University of Oxford Department for External
Studies in conjunction with the Department of Engineering Science, is the
seventh in an occasional series on topics in VLSI and follows the successful
workshop on VLSI for Artificial Intelligence at Oxford in 1988.

FEES

Standard fee: 345 pounds, to cover accommodation (nights of Sept. 4-6); all
meals from supper on Sept. 4 to lunch on Sept. 7 (including the Workshop
dinner); a copy of the preprints; a copy of the edited proceedings when
published; and a visit to the "Oxford Story".

No-room fee: 275 pounds, to cover lunches, daytime refreshments, the Workshop
dinner; a copy of the preprints; a copy of the edited proceedings when
published; and a visit to the "Oxford Story".

------------R E G I S T R A T I O N---------------------

              U N I V E R S I T Y   O F   O X F O R D
           Continuing Professional Development Programme

                        REGISTRATION FORM

COURSE TITLE: Int. Workshop on VLSI for Artificial Intelligence & Neural Nets
DATES: September 5-7, 1990

Please reserve places on the course for the following people

1  TITLE __________________ NAME _______________________________________
   JOB TITLE _______________________________________ VEGETARIAN: Yes / No
   COMPANY / ORGANIZATION _______________________________________________
   ADDRESS ______________________________________________________________
           ______________________________________________________________
   POSTCODE _____________________ TELEPHONE______________________________
   FEES: _____________ (pounds)   SIGNATURE___________________________
   (Cheques should be made payable to O.U.D.E.S.)

------------------------end of registration form--------------------

                        * P R O G R A M M E *

Wednesday September 5th, 1990

8.30-9.00am    Registration

9.00-10.45am   INTRODUCTION
               Will Moore, University of Oxford

               Session A: PULSE STREAM AND BIOLOGICALLY-BASED NEURAL NETS
               Chairman: Howard Card, University of Manitoba

A1 "Computational Capabilities of Biologically-realistic Analog Processing
   Elements"
   C. Fields, M. DeYong, and R. Findley
   New Mexico State University, USA

A2 "Results from Pulse-stream VLSI Neural Network Devices"
   Michael J. Brownlow, Lionel Tarassenko, Alan F. Murray
   Oxford University / Edinburgh University, UK

A3 "Working Analogue Pulse Stream Neural Network Chips"
   Alister Hamilton, Alan F. Murray, H. Martin Reekie and Lionel Tarassenko
   Edinburgh University / Oxford University, UK

10.45-11.15am  Coffee

11.15-12.45pm  Session B: DIGITAL IMPLEMENTATIONS OF NEURAL NETWORKS
               Chairman: Michel Weinfield, Ecole Polytechnique

B1 "The VLSI Implementation of the 'sigma' Architecture"
   S. R. Williams and J. G. Cleary
   University of Calgary, Canada

B2 "A Cascadable VLSI Architecture for the Realization of Large Binary
   Associative Networks"
   Werner Poechmuller and Manfred Glesner
   Technische Hochschule Darmstadt, Germany

B3 "Digital VLSI Implementations of an Associative Memory Based on Neural
   Networks"
   Ulrich Ruckert, Christian Kleerbaum and Karl Goser
   University of Dortmund, Germany

12.50-2.00pm   Lunch

2.15-4.00pm    Session C: HARDWARE SUPPORT FOR AI
               Chairman: Jose Delgado-Frias, SUNY-Binghamton

C1 "Incremental Garbage Collection Scheme in KL1 and its Architectural
   Support of PIM"
   Yasunori Kimura, Takashi Chikayama, Tsuyoshi Shinogi, and Atsuhiro Goto
   Fujitsu Laboratories / ICOT, Japan

C2 "COLIBRI: A Coprocessor for Lisp based on RISC"
   H. Hafer, J. Plankl, F. J. Schmitt
   Siemens AG, Germany

C3 "A CAM Based Architecture for Production System Matching"
   Pratibha and P. Dasiewicz
   University of Waterloo, Canada

C4 "SIMD Parallelism for Symbolic Mapping"
   C.J. Wang and S.H. Lavington
   University of Essex, UK

4.00-4.30pm    Tea

4.30-6.00pm    Session D: PARALLEL MACHINES FOR PROLOG
               Chairman: Peter Kogge, IBM

D1 "SYMBOL: A Parallel Incremental Architecture for Prolog Program Execution"
   A. De Gloria, P. Faraboschi, E. Guidetti
   University of Genoa, Italy

D2 "Architectural Considerations for Achieving High Performance Prolog
   Execution"
   Mark A. Friedman and Gurindar Sohi
   University of Wisconsin, USA

D3 "A Prolog Abstract Machine for Content-Addressable Memory"
   Hamid Bacha
   Coherent Research, Inc., USA

Thursday September 6th, 1990

9.00-10.45am   Session E: ARCHITECTURES FOR ARTIFICIAL INTELLIGENCE
               Chairman: Will Moore, Oxford University

E1 "VLSI Design of a 3-D Highly Parallel Message-Passing Architecture"
   J-L Bechennec, C. Chanussot, V. Neri, and D. Etiemble
   Universite de Paris-Sud, France

E2 "Embedded Processor for Realtime AI and NN Applications"
   Robert T. Wang, John M. Walsh, and Ron Everett
   Integrated Inference Machines, USA

E3 "Architectural Design of the Rewrite Rule Machine Ensemble"
   Hitoshi Aida, Sany Leinwand and Jose Mesaguer
   SRI International, USA

E4 "A Dataflow Architecture for AI"
   Jose G. Delgado-Frias, Ardsher Ahmed, and Robert Payne
   SUNY-Binghamton, USA

10.45-11.15am  Coffee

11.15-12.45pm  Session F: ANALOGUE IMPLEMENTATIONS OF NEURAL NETWORKS
               Chairman: Lionel Tarassenko, Oxford University

F1 "Analog VLSI Models of Mean Field Networks"
   Christian Schneider and Howard Card
   University of Manitoba, Canada

F2 "An Analogue Neuron Suitable for a Data Frame Architecture"
   W. A. J. Waller, D. L. Bisset and P. M. Daniell
   University of Kent, UK

F3 "A Class of Optimal-Analog Parallel Computer Architectures for AI"
   Jonathan W. Mills
   Indiana University, USA

F4 "Fully Cascadable Analogue Synapses Using Distributed Feedback"
   Donald J. Baxter, Alan F. Murray, and Martin Reekie
   University of Edinburgh, UK

12.50-2.00pm   Lunch

2.15-4.00pm    Session G: POSTER SESSION

4.00-4.30pm    Tea

4.30-6.00pm    Session H: IMPLEMENTATION AND APPLICATIONS OF NEURAL NETWORKS
               Chairman: Dan Hammerstrom, Adaptive Solutions, Inc.
H1 "Efficient Implementation of Massive Neural Networks"
   James Austin, Tom Jackson and Alan Wood
   University of York, UK

H2 "A Fully Digital Neural Network Chip Using Probability Coding"
   John Shawe-Taylor, Pete Jeavons, and Max Van Daalen
   University of London, UK

H3 "Parallel Analogue Computation for Real-time Path Planning"
   Lionel Tarassenko and Gillian Marshall
   Oxford University, UK

7.00pm         Reception and Conference Dinner

Friday September 7th, 1990

9.00-10.45am   Session I: ARRAYS FOR NEURAL NETWORKS
               Chairman: Alan Murray, University of Edinburgh

I1 "A Highly Parallel Digital Architecture for Neural Network Emulation"
   Dan Hammerstrom
   Adaptive Solutions, Inc., USA

I2 "Systolic Method for Modelling Spatio-Temporal Properties of Neurons
   using Domain Decomposition"
   Arno J. Klassen and Rob Wiers
   Delft University of Technology, The Netherlands

I3 "A Delay-Insensitive Neural Network Engine"
   C. D. Nielsen, J. Staunstrup and S. R. Jones
   Technical University of Denmark, Denmark

I4 "A VLSI Implementation of Multi-layered Neural Networks: 2-Performances"
   Bernard Faure and Guy Mazare
   IMAG, France

10.45-11.15am  Coffee

11.15-12.45pm  Session J: UNI-PROCESSOR MACHINES FOR PROLOG
               Chairman: Simon Lavington, University of Essex

J1 "An Extended Prolog Instruction Set for RISC Processors"
   Andreas Krall
   University of Vienna, Austria

J2 "A VLSI Engine for Structured Logic Programming"
   P. L. Civera, E. Lamma, P. Mello, A. Natali, G. L. Piccinini, and
   M. Zamboni
   Politecnico di Torino, Italy

J3 "Performance Evaluation of a VLSI Associative Unifier in a WAM Based
   Environment"
   P. L. Civera, G. Masera, G. L. Piccinini, M. Ruo Roch and M. Zamboni
   Politecnico di Torino, Italy

                        -- P O S T E R S --

G1 "Binary Neural Network with Delayed Synapses"
   Tadashi Ae, Yasuhiro Mitsui, and Reiji Aibara
   Hiroshima University, Japan

G2 "Implementing Neural Networks with the Associative String Processor"
   A. Krikelis and M. Groezinger
   Aspex Microsystems Ltd., UK

G3 "Syntactic Neural Networks in VLSI"
   Simon Lucas and Bob Damper
   University of Southampton, UK

G4 "Massively Parallel Neural Network Architecture for the Solution of
   Linear Equations Based on the Hopfield Network"
   J. R. Minick and M. A. Styblinski
   Texas A&M University, USA

G5 "A New Architectural Approach for Flexible Digital Neural Network Chip
   Systems"
   Torben Markussen
   Technical University of Denmark, Denmark

G6 "Systolic Architecture for a Subquadratic Converging Neural Network
   Learning Algorithm"
   Philippe De Wilde
   Imperial College of Science and Technology, UK

G7 "A VLSI Implementation of a Generic Systolic Synaptic Building Block for
   Neural Networks"
   Christian Lehmann and Francois Blayo
   Ecole Polytechnique Federale de Lausanne, Switzerland

G8 "A Learning Circuit that Operates by Discrete Means"
   W. P. Cockshott and G. Milne
   University of Strathclyde, UK

G9 "A Compact and Fast Silicon Implementation for Layered Neural Networks"
   F. Distante, M. G. Sami, R. Stefanelli, G. Storti-Gajani
   Polytechnic of Milan, Italy

G10 "Pulse-Firing VLSI Neural Circuits for Fast Image Recognition"
    S. Churcher, A. F. Murray and H. M. Reekie
    University of Edinburgh, UK

G11 "The ULM - A RISC for Lisp"
    Reinhard Rasche
    Technical University of Berlin, Germany

G12 "Logic Flow in Active Data"
    Peter Sapaty
    Ukrainian Academy of Sciences, USSR

G13 "A Multi-Transputer Architecture for a Parallel Logic Machine"
    M. Cannataro, G. Spezzano and D. Talia
    CRAI, Italy

------------------------------

End of Neuron Digest [Volume 6 Issue 51]
****************************************