[comp.ai.neural-nets] Neuron Digest V4 #16

neuron-request@HPLABS.HP.COM (Neuron-Digest Moderator Peter Marvit) (10/19/88)

Neuron Digest	Tuesday, 18 Oct 1988
		Volume 4 : Issue 16

Today's Topics:
		  Transputer-based NN simulator (request)
			   Hecht-Nielsen Address
			 Classifier System Request
			       music and pdp
			  Re: What is an MX-1/16?
	      Request for TI Explorer neural network software
    Re: Intelligence / Consciousness Test for Machines (Neural-Nets)???


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"

------------------------------------------------------------

Subject: Transputer-based NN simulator (request)
From:    Dario Ringach <dario%TECHUNIX.BITNET@CUNYVM.CUNY.EDU>
Date:    Sun, 09 Oct 88 16:46:53 +0200 


Can anyone out there provide me references to transputer-based simulators
of NN models?  I will also appreciate any pointer to parallel computer
architectures dedicated to the simulation of NN models. Thanks in advance.

Dario.
dario@techunix.BITNET

------------------------------

Subject: Hecht-Nielsen Address
From:    hadas@p.cs.uiuc.edu
Date:    10 Oct 88 06:08:00 +0000 


Here is the address for Hecht-Nielsen as requested in the previous note:

Hecht-Nielsen Neurocomputer
5893 Oberlin Dr.
San Diego, CA 92121
(619)-546-8877

------------------------------

Subject: Classifier System Request
From:    powell@boston.steinmetz (Powell)
Organization: General Electric CRD, Schenectady, NY
Date:    11 Oct 88 14:12:41 +0000 

Recently, I have read some interesting articles on induction and classifier
systems. To better understand their capabilities and functionalities, I am
looking for a free classifier software package to experiment with.

I have recently used John Grefenstette's very impressive GENESIS package
for optimization and became very excited and convinced about the
capabilities of genetic algorithms. I would now like to experiment with a
classifier system as described by Holland with a bucket brigade or similar
algorithm for credit apportionment and a genetic algorithm for rule
combination. If someone can send me such a package then I can quickly
evaluate the power and appropriateness of classifiers to my problem.
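For readers unfamiliar with the credit-apportionment step mentioned above, here is a minimal sketch of Holland's bucket-brigade idea: each classifier in an activation chain pays a fraction of its strength back to the classifier that set the stage for it, and the environment pays the final actor. All names, the bid fraction, and the data layout are illustrative assumptions, not taken from GENESIS or any other package.

```python
# Sketch of bucket-brigade credit assignment (illustrative, not from any
# particular classifier-system package).

BID_FRACTION = 0.1  # fraction of strength a matching classifier bids (assumed)

def bucket_brigade_step(chain, reward):
    """Pass each classifier's bid back to its predecessor in the
    activation chain, then pay the external reward to the last one.
    `chain` is an ordered list of dicts with a 'strength' field."""
    for prev, cur in zip(chain, chain[1:]):
        bid = BID_FRACTION * cur["strength"]
        cur["strength"] -= bid   # the winner pays its bid...
        prev["strength"] += bid  # ...to the classifier that posted its message
    chain[-1]["strength"] += reward  # environment rewards the final actor
    return chain

chain = [{"strength": 10.0}, {"strength": 10.0}]
bucket_brigade_step(chain, reward=5.0)
```

Repeated over many episodes, reward propagates backward so that early "stage-setting" rules eventually share in the payoff; a genetic algorithm then recombines the strongest rules.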

Thanks in advance
Dave Powell

[[ The combination of genetic algorithms and neural nets promises to be
very exciting, though most research has kept the fields separate.  If I can
dig out my files, I'll try to post some info on genetic algorithms.  In the
meantime, what do you readers have on the subject? -PM ]]

------------------------------

Subject: music and pdp
From:    todd@galadriel.STANFORD.EDU (Peter Todd)
Date:    Thu, 13 Oct 88 00:13:39 -0700 

All those who have been expressing an interest in PDP research related to
music might like to have a look at a paper I've written recently entitled
"A Sequential Network Design for Musical Applications," in which I describe
my research into the compositional and psychological-modelling uses of a
Jordan-style sequential PDP network.  This paper will be appearing in the
Proceedings of the 1988 Connectionist Models Summer School, to be published
shortly by Morgan Kaufmann.  Copies will also be available directly from
me.  The abstract is as follows: "A sequential connectionist network of the
type first described by Jordan (1986) is presented for applications in the
musical domain.  Two such applications are described: composition of novel
melodies based on learned examples, and modelling of psychological
expectation violation in music.  The issues involved in selection of pitch
and time representations for the network are explored."  People interested
in networks applied to various aspects of music perception should also see
Jamshed Bharucha's work, some of which is referenced in my paper.
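The defining feature of the Jordan-style network mentioned in the abstract is that the output is fed back into "state" units that decay over time, giving the net a fading memory of its own recent outputs. A minimal sketch of one forward step follows; the layer sizes, decay rate, and the fixed "plan" vector are illustrative assumptions and are not taken from Todd's paper.

```python
import numpy as np

# Sketch of a Jordan (1986)-style sequential network forward pass.
rng = np.random.default_rng(0)
n_plan, n_state, n_hidden, n_out = 4, 3, 8, 3   # sizes are illustrative
W_in = rng.normal(size=(n_hidden, n_plan + n_state))
W_out = rng.normal(size=(n_out, n_hidden))
mu = 0.5  # state decay: state keeps a fading trace of past outputs

def step(plan, state):
    h = np.tanh(W_in @ np.concatenate([plan, state]))
    out = 1 / (1 + np.exp(-(W_out @ h)))   # e.g. pitch-unit activations
    new_state = mu * state + out           # the Jordan feedback loop
    return out, new_state

state = np.zeros(n_state)
plan = np.ones(n_plan)          # a fixed "plan" vector names the melody
for _ in range(5):              # generate a short output sequence
    out, state = step(plan, state)
```

In a compositional application, each `out` vector would be decoded to a pitch/duration event and the loop continued to produce a novel melody.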

              --peter todd (todd@psych.stanford.edu)

------------------------------

Subject: Re: What is an MX-1/16?
From:    goblick@XN.LL.MIT.EDU (Tom Goblick)
Date:    Thu, 13 Oct 88 16:04:54 -0400 


This is in response to a query on "What is an MX-1/16?" from 
ghosh@ece.utexas.edu appearing recently in Neuron Digest (October 10 1988).

>The DARPA Executive Summary on Neural Networks mentions a neural network
>simulation system called MX-1/16 with a projected storage capacity of
>50M interconnects and processing speed of 120M interconnects/sec.
>
>Could someone shed some light on who is building this machine, what
>network models does it support, system architecture, stage of development...?
>Thanks,
>Joydeep Ghosh

The MX-1 is a shared-memory multiprocessor computer system developed at MIT
Lincoln Laboratory for AI applications involving intensive numeric and
symbolic computations, such as machine vision.  The approach taken in the
Lincoln MI Group is to couple a LISP machine host with a set of powerful
processing nodes, all interconnected by a crossbar.  The system is intended
for rapid prototyping of parallel algorithms as well as algorithm
evaluation using large data bases.  The MX-1 was thus designed for
programmability as well as for processing power.  The programming language
is Common LISP with extensions for parallel computations.  We have adapted
Kyoto Common LISP for our programming environment for the multiprocessor.
As for number crunching, each PE has a 68020 (16MHz clock), 8 MBytes and
its own independent digital signal processor (DSP) using the Weitek 8032
chip set, providing 20M floating pt. ops/sec or 10M combined
multiply-and-accumulate ops/sec per DSP.

Two 4 PE systems have been built thus far and a full sized 16 PE system is
now under construction.  The MX-1/16 is our notation for the 16 PE version.
As to what network models it supports, I would like to stress that this is
NOT a "neural net simulation system" but a general purpose shared-memory
multiprocessor that runs a parallel Common LISP, so it can be programmed to
run any kind of neural net model.  The size and numeric processing
capability of the MX-1 makes this an interesting machine on which to do
neural net simulations.  (The 16 PE version has 144MBytes of physical
memory and a peak rate of 160M multiply-and-accumulates/sec.)  We made some
calculations of how well it could do in this context and those calculations
were included in the DARPA report you mentioned, along with some other
estimated benchmarks to illustrate the capabilities of current computers
for simulation of neural nets.
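The aggregate figures above follow directly from the per-PE numbers; a quick back-of-the-envelope check (the one-multiply-and-accumulate-per-interconnect-update costing is my simplifying assumption, not a claim from the post):

```python
# Back-of-the-envelope check of the MX-1/16 figures quoted above.
n_pe = 16                      # processing elements in the MX-1/16
mac_per_pe = 10e6              # multiply-and-accumulates/sec per DSP
peak_mac = n_pe * mac_per_pe   # 160M MAC/sec, the quoted peak rate

# Assuming one interconnect update costs roughly one multiply-and-accumulate,
# peak interconnects/sec is bounded by peak_mac; the DARPA summary's
# 120M interconnects/sec estimate then sits at 75% of that peak.
utilization = 120e6 / peak_mac
```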

That's the 2 paragraph answer.  If anyone wants more information, contact:

			Tom Goblick
			Machine Intelligence Group
			MIT Lincoln Lab
			Lexington, MA 02173

			Netmail to "goblick@XN.LL.MIT.EDU"




------------------------------

Subject: Request for TI Explorer neural network software
From:    ADELSBER%AWIWUW11.BITNET@CUNYVM.CUNY.EDU
Date:    Fri, 14 Oct 88 03:28:33 +0000 

I wanted to find out about tools for neural networks that run on a TI
Explorer. Is there anybody on this list who can help me? Please reply
directly to me as I am not on it myself.  If there are other people
interested in this question I will be glad to post them a summary of the
replies I got.

Heimo H. Adelsberger

University of Economics and Business Administration Vienna
E-mail: ADELSBER@AWIWUW11.BITNET

[[ I know we have readers within T.I.  Can anyone there help? I know there
are some LISP-based tools available, though all are of "student" quality.
However, even PD software can get folks going on experimentation. -PM ]]

------------------------------

Subject: Re: Intelligence / Consciousness Test for Machines (Neural-Nets)???
From:    "CLEM PADIN, 6-4962, L51S71" <CQMSCP%PCCVAX%dupont.com@RELAY.CS.NET>
Date:    Fri, 14 Oct 88 09:20:00 -0400 


<Subject: Intelligence / Consciousness Test for Machines (Neural-Nets)???
<From:    mician@usfvax2.EDU (Rudy Mician)
<Organization: University of South Florida at Tampa
<Date:    05 Oct 88 17:49:38 +0000 
<
<When can a machine be considered a conscious entity?  
<
<For instance, if a massive neural-net were to start from a stochastic state 
<and learn to interact with its environment in the same way that people do
<(interact not think), how could one tell that such a machine thinks or exists
<(in the same context as Descartes' "COGITO ERGO SUM"/"DUBITO ERGO SUM"
<argument- that is, how could one tell whether or not an "I" exists for the
<machine? 
<
<Furthermore, would such a machine have to be "creative"?  And if so, how would
<we measure the machine's creativity?
<
<I suspect that the Turing Test is no longer an adequate means of judging
<whether or not a machine is intelligent. 
<
<If anyone has any ideas, comments, or insights into the above questions or any
<questions that might be raised by them, please don't hesitate to reply.
<
<Thanks for any help,
<
<     Rudy
<

	I'm surprised that it took so long for someone to bring up the
topic.  It could be that, like me, few were willing to start the process
that we all know will fill these postings with paragraph after paragraph of
pseudo-philosophy.  And yet somewhere we must all begin to come to terms
with the suspicion that we may have 'done it': we may have developed the
technology to create a mind.  It won't come to pass right away, of course,
but it sure seems a hell of a lot closer.  What do we do now?  Once a
neural network becomes 'conscious' (or can convince people of being
conscious as well as we convince each other of being conscious) what do we
do?  Will it have legal rights?  Will turning off the power be interpreted
as murder?  Will we have a moral right to force it to 'live' simply solving
problems which we present to it?  What about modifying the code that
generated the 'mind'?...

	Sometimes it seems a very distant possibility and other times it
seems all too near.  I think, though, that it's important to start thinking
about it now.  These ideas, though they may seem like science fiction, are
complicated and nontrivial, so perhaps the real philosophers out there can
begin a GUIDED discussion.

	I have one reply to Rudy's many questions: 

	"...that is, how could one tell whether or not an "I" exists for the
	  machine? "

	Rudy, you may just have to take the machine's word for it...

clem

[[ I originally debated whether to include the original article and decided
this makes for interesting thought experiments.  We've all seen
pseudo-metaphysical debates about "true intelligence" vis-a-vis AI.
However, the emergent properties of neural nets are what makes them so
interesting.  There is some speculation that human consciousness is an
emergent property of brain cells. On the more mundane level, but still
quite vital, is the debate on whether a particular net "knows" or has
"learned" something versus "rote memorization"; while a trivial
consideration to some, this distinction is the source of heated argument
and cuts to the heart of the veracity of the connectionist model.

I've culled a series of responses to this point from the AI mailing list
and will collect them in a future Digest (with some editing, of course).
Please mail me your thoughts on this subject. Note, I don't want ad hoc
definitions of intelligence and would prefer comments which are strictly
relevant to these newer paradigms. -PM ]] 

------------------------------

End of Neurons Digest
*********************