[comp.ai] Building a brain, revisited

nagle@well.UUCP (John Nagle) (01/10/90)

     We have the semiconductor technology to build a brain.

     Assume one accepts Hans Moravec's calculation that a
human-equivalent robot requires 10^14 bits per second of processing.
(See "Mind Children", pages 64-65.)  Assume further that a floating
point operation produces 64 bits of new information per operation.

     Last week, Motorola announced a new "SuperChip".  As yet unnamed,
this part, developed for the DoD Very High Speed Integrated Circuit
program, contains over 4 million transistors and delivers 200
megaflops, or 200 million floating point operations per second.  It's
a single-chip computer, but, of course, requires external memory and
some external support.

     A human brain equivalent would thus require about 8,000 such
parts, plus memory and supporting parts.
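
      As a sanity check, the arithmetic behind that figure can be
written out as a short Python sketch; the only inputs are the numbers
quoted above:

    # Chip count implied by the figures above (a rough check).
    human_rate_bits = 1e14     # Moravec: bits of processing per second
    bits_per_flop = 64         # assumed information yield per flop
    chip_flops = 200e6         # the Motorola part: 200 megaflops

    flops_needed = human_rate_bits / bits_per_flop   # 1.5625e12 flops/sec
    chips_needed = flops_needed / chip_flops         # 7812.5 chips
    print(round(chips_needed))                       # call it 8,000 parts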

     For packaging, let's assume a layout similar to that used in the
Ncube Inc. hypercube.  Ncube puts 64 processors with their memory on
a board about 30" square, and puts 16 boards in a blue cube one meter
on a side, for 1024 processors per cabinet.  Ncube's single-chip
processor is only a 1-MIPS machine, comparable to a VAX 11/780, so
we're talking about a 200:1 performance improvement over the
1986-vintage Ncube machine.  In the Ncube, each processor had 128KB of
local memory, implemented with 256K RAMs.  Today, we would probably
want a 4Mbyte SIMM per CPU, which would occupy about the same amount
of real estate.

      Assuming one 4Mbyte SIMM per processor and one custom support
chip per CPU to handle memory control, fault tolerance control (a
feature of this new part), and interprocessor communication, each
board would contain 64 processors and 0.25 gigabyte of RAM.  Each
cabinet, only one meter on a side, would contain 1024 processors and
about 250 gigabytes of RAM.

      A brain-sized system would thus consist of eight such cabinets,
for a total of 8192 processors and 2 terabytes of RAM, with a
computing power of 1.6 teraflops.  It would occupy a space about
four feet deep, four feet high, and 25 feet long.
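
      The system totals follow directly from the packaging (a sketch;
per-chip rate as above):

    # System totals implied by the packaging above.
    processors = 8 * 1024                 # eight cabinets of 1024 CPUs each
    tflops = processors * 200e6 / 1e12    # 8192 chips at 200 Mflops
    print(processors, tflops)             # 8192 CPUs, ~1.64 teraflops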

      Cost can only be roughly estimated at this point.  But assume
that the CPU chip costs $1000 in quantity, the support chip $500, and
the 4MB of RAM $500.  Then the pure parts cost per CPU is $2000.
Assume that the entire system costs twice its parts cost.  This gives
us a price of $32,000,000.  This is in the price range of the largest
supercomputers today.  Of course, the price can be expected to decline
over time, probably at the historical rate of an order of magnitude
every three or four years.
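
      Written out, with the caveat that all three prices are
assumptions:

    # Rough system cost from the per-node prices assumed above.
    cpu_chip, support_chip, ram_4mb = 1000, 500, 500   # dollars each
    per_node = cpu_chip + support_chip + ram_4mb       # $2,000 per CPU
    parts_total = 8000 * per_node      # $16,000,000 for 8,000 CPUs
    system_cost = 2 * parts_total      # twice parts cost: $32,000,000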

      Agreed that we have no idea how to program such a piece of
machinery to be intelligent.  But it could be built today.


                                        John Nagle

andrew@dtg.nsc.com (Lord Snooty @ The Giant Poisoned Electric Head ) (01/10/90)

John Nagle - you make interesting reading, but you neglect to mention
one critical performance parameter of the proposed architecture,
namely the interprocessor bandwidth.  This is chiefly what
differentiates the "neural" approach from the "throw MIPS/MFLOPS at
it" approach.  Any comments?
-- 
...........................................................................
Andrew Palfreyman	andrew@dtg.nsc.com	Albania before April!

ceb@csli.Stanford.EDU (Charles Buckley) (01/10/90)

In article <15439@well.UUCP> nagle@well.UUCP (John Nagle) writes:

	We have the semiconductor technology to build a brain.

        [deleted: back-of-editor-buffer calculations describing a
         hypercube topology machine producing equivalent bit throughput, 
         based on a new "SuperChip" part from Motorola, which comes
         out to be of manageable, albeit ambitious size]

	 Agreed that we have no idea how to program such a piece of
         machinery to be intelligent.  But it could be built today.

I think you are comparing apples and oranges.  Sure, the raw compute
power could be marshalled - this has been true for some time, though
as you point out, it's getting to be within the scope of a realistic
(?) research initiative (;^/.

Brains do something that hypercubes don't, though, which is learn,
and that learning is reflected in their topology.  Further, there's a
non-binary (analog) aspect to the information transmitted.

It might be interesting to redo the calculation including NN
architectures (such as those being developed at Bell Labs) and/or
analog modules (no one is interested in these at the moment, so far
as I know).  NN stuff doesn't adequately provide for the
representation of time, though, so you'd have to keep some TM
aspects - and how would that be organized?

Topology changes as a product of learning?  Seems quite open to me,
but I'd like to hear of efforts in this area.

smoliar@vaxa.isi.edu (Stephen Smoliar) (01/10/90)

In article <11673@csli.Stanford.EDU> ceb@csli.Stanford.EDU (Charles Buckley)
writes:
>
>Brains do things that hypercubes don't though, which is learn, and
>that learning is reflected in the topology.

While I find myself about as reluctant to use the word "learn" as the word
"understand" these days, I think it is important to grant the observation
about topological modification.  However, brains have an even more
subtle property, one which has received occasional reference in the
Searle debate.  Cells have finite endurance and are regularly
replaced in the
organism.  Perhaps more remarkable than the fact that a topology forms
and is modified is the fact that it is MAINTAINED--that new nerve cells
can come in and fill in for old ones without disrupting that topology.
It would seem that we are a far cry from a piece of hardware which, as
part of its operation, keeps itself furnished with fresh components;
and we probably do not want to write off the possibility that such maintenance
operations may be strongly connected to the behavior of the agent.
>
>Topology changes as a product of learning?  Seems quite open to me,
>but I'd like to hear of efforts in this area.


A good place to start might be Gerald Edelman's NEURAL DARWINISM.  In
particular, Chapter 5 is entitled "Cellular Dynamics of Neural Maps."
He presents several examples in which topological reorganization has
been observed in adult brains.  Experiments by Merzenich's group were
able to induce such reorganizations by cutting a specific nerve.

=========================================================================

USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"For every human problem, there is a neat, plain solution--and it is always
wrong."--H. L. Mencken

nagle@well.UUCP (John Nagle) (01/11/90)

      First, note that my total RAM size computation was off; 8192 4MB SIMMs
are only 32 gigabytes, not some number of terabytes.  I apologise for the
error.
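
      For the record, the corrected arithmetic:

    # Corrected total RAM: 8192 SIMMs at 4 MB each.
    total_mb = 8192 * 4            # 32,768 megabytes
    total_gb = total_mb / 1024     # 32 gigabytes, not terabytes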

      Another way to look at this computation is that eight of these
processors offer the computational power of a mouse or squirrel, again
using Moravec's figures.  This is an interesting result, because those
animals have good eye-hand coordination, almost as good as humans, which
implies that the computational power for well-coordinated robots is
almost in hand.  That problem is a bit better understood, and there
are known techniques that are presently too slow to use.  I'm
thinking here
of Kass's optimization techniques, Girard's legged animations, and
Craig's adaptive control algorithms.  It's noteworthy that all of these
are based on heavy number-crunching.
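
      The scaling behind that claim is simple (the arithmetic here is
mine; the mouse/squirrel placement is Moravec's):

    # Eight processors, expressed in Moravec's bits-per-second terms.
    eight_chip_bits = 8 * 200e6 * 64      # ~1.0e11 bits per second
    fraction = eight_chip_bits / 1e14     # about 1/1000 of a human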

      Personally, I am coming around to the position that a basis for
the portion of AI that deals with relationships with the physical world
will be found in computational geometry and nonlinear optimization.
Over the next few years, this hypothesis will be tested, by myself and
others.

      Systems that do brain-like processing may not need a high
volume of long-distance (as opposed to local) data transmission
internally.  The brain is
severely limited architecturally by the speed of neural impulse
propagation, which is on the order of thousands of feet per second.
So whatever is going on can't involve extensive, repeated transmission
between physically distant parts of the brain; it would take too long.
Against this, of course, the brain has very many connections.  It's
worth thinking about the architectural implications of this. 
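
      To put rough numbers on the latency argument (the propagation
speed is the order quoted above; the distance and response budget are
illustrative assumptions):

    # Latency budget for long-range traffic within a brain.
    speed_m_per_s = 300.0    # roughly 1000 feet/sec, the order quoted
    distance_m = 0.15        # assumed separation of distant regions
    reaction_s = 0.1         # assumed ~100 ms behavioral response

    hop_s = distance_m / speed_m_per_s   # 0.5 ms per one-way traversal
    max_hops = reaction_s / hop_s        # only ~200 sequential hops fit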

					John Nagle