[sci.nanotech] How big is a brain?

royc@mtdca.UUCP (Roy A. Crabtree) (03/12/89)

In article <8903100502.AA24611@athos.rutgers.edu>, bpendlet@esunix.UUCP (Bob Pendleton) writes:
> 
> After spending a frustrating evening with my Britannica I still haven't
> found out just how much storage, how many instructions/second, how
> many simulated neurons, or whatever, are needed to simulate the
> function of a human brain.

(chuckle): If Turing didn't know, who are we?

> Just what are the design parameters for a machine that would be needed
> to upload a human mind? What about the simulated environment you need to
> be able to walk around in and interact with other uploaded minds?
[elided]
> How do we know this is an upload and not a download :-)

Maybe (:-(*)) by _unloading_<voluntarytermination:death>
and waiting for a _reboot_<maybeGodisjustbuiltinmicrocodeafterall>

Or perhaps we are already VMing around now!  {So what's the term, JoSH, for
a virtual machine environment hoisting above its parent, and assuming
control?  Partstoob esrever?}

> -- 
>               Bob Pendleton, speaking only for myself.
> UUCP Address:  decwrl!esunix!bpendlet or utah-cs!esunix!bpendlet
> 
> [You'll find many of the best guesses in one place in Moravec's Mind Children.
>  He figures 10 teraops cpu and 10 terabytes memory, but I'm vastly oversimplifying,
>  read the book.  (He also figures to operate in the real world, not simulated,
>  by means of robotic bodies.)
>  --JoSH]

I still think that's too small; a good simulation, but lacking essence.
(I believe I will live to see it in my hands, within 20 years).

roy a. crabtree att!mtdca!royc 201-957-6033

[Esroh najort.  --JoSH]

yamauchi@CS.ROCHESTER.EDU (Brian Yamauchi) (03/23/89)

>Problem 1: Design a chip to simulate a bunch of neurons.
>
>An estimate:  if we can simulate a neuron with a few thousand "circuits",
>we can probably simulate close to a hundred with the same circuits, since
>the transistors switch much faster than neurons react.  Assume that with
>100000 circuits (near the chip space of the 80286 processor,) we could
>simulate 1000 neurons in real time on one chip with current technology.
>Then we'd be talking on the order of 60 to 90 million chips (= 2^26) in a
>network, for each brain to be simulated.

It seems clear to me that if you want to attempt this brute-force
method of brain simulation, you want to use analog VLSI, not digital,
since you could achieve a component:neuron ratio that was much lower
(maybe almost 1:1).  If someone has accurate estimates of current
state-of-the-art analog VLSI capacity / performance, I would be very
interested.

Personally, I think we will have human-equivalent processing hardware
relatively soon -- but the software is the hard problem...

_______________________________________________________________________________

Brian Yamauchi				University of Rochester
yamauchi@cs.rochester.edu		Computer Science Department
_______________________________________________________________________________

csimmons@oracle.COM (Charles Simmons) (03/23/89)

In article <8903230413.AA09069@athos.rutgers.edu> shields@yunccn.UUCP (Paul Shields) writes:
>
>In article <8903100502.AA24611@athos.rutgers.edu>, bpendlet@esunix.UUCP
> (Bob Pendleton) writes:
>>After spending a frustrating evening with my Britannica I still haven't
>>found out just how much storage, how many instructions/second, how
>>many simulated neurons, or whatever, are needed to simulate the
>>function of a human brain.
>
>I've been told that the human brain has from 60 to 90 billion neurons.
>
>>Just what are the design parameters for a machine that would be needed
>>to upload a human mind?
>
>Problem 1: Design a chip to simulate a bunch of neurons.

Ah!  One of my favorite subjects.  One of the people I've talked to
who studies neuropsychology suggests that the estimated number of
neurons may be as large as 10^14.  That would be about 1000 times
more neurons than you're estimating.

Of possible interest, a neuron has direct connections with around 1000 or
so other neurons, and there may be some reason to believe that it
is the interconnections that make the brain interesting, rather
than the processing capabilities of a neuron.
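
Just to put those figures side by side, here is a quick back-of-the-envelope
sketch in Python.  It uses only the numbers quoted in this thread -- both
neuron-count estimates times the ~1000 direct connections per neuron mentioned
above -- so treat it as arithmetic, not data:

# Rough scale check, using only the figures quoted in this thread.
for neurons in (60e9, 90e9, 1e14):
    connections = neurons * 1000          # ~1000 direct connections per neuron
    print(f"{neurons:.0e} neurons -> ~{connections:.0e} connections")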

>An estimate:  if we can simulate a neuron with a few thousand "circuits",
>we can probably simulate close to a hundred with the same circuits, since
>the transistors switch much faster than neurons react.  Assume that with
>100000 circuits (near the chip space of the 80286 processor,) we could
>simulate 1000 neurons in real time on one chip with current technology.
>Then we'd be talking on the order of 60 to 90 million chips (= 2^26) in a
>network, for each brain to be simulated.

In the above paragraph, I think we also need to specify the clock rate
at which the "circuits" are running.  I'd like to see this estimate
derived in terms of transistors and MHz.  For example, maybe we
need 400,000 transistors running at a clock rate of 16 MHz to simulate
1,000 neurons in real time.  Since the number of neurons that can be
simulated will increase if we increase either the number of available
transistors or the clock rate, we might want to use
a term like "transistor-MHz" or "Mthz".  (This is millions of
"transistor-cycles" per second.)

We note that recent laboratory chips (the i860) incorporate around
2^20 transistors running at a clock rate of 50 MHz.  This corresponds
to a little less than 2^26 Mthz on a chip.  We also note that the speed and
density of chips each increase by a factor of two around every 18 months.
So, every 3 years we should be able to increase the exponent here by 4.
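
To make that bookkeeping concrete, here is a small Python sketch of the
"Mthz" arithmetic.  The i860 figures and the doubling rates are just the ones
assumed above, and the extrapolation is the same add-4-to-the-exponent-every-
3-years rule:

import math

def mthz(transistors, clock_mhz):
    """Transistor-cycles per second, expressed in millions ("Mthz")."""
    return transistors * clock_mhz

# The i860 example quoted above: ~2^20 transistors at 50 MHz.
i860 = mthz(2**20, 50)
print(f"i860: {i860:.3e} Mthz = 2^{math.log2(i860):.1f}")   # a bit under 2^26

# Doubling both density and clock every 18 months adds ~4 to the
# exponent every 3 years (2 from density, 2 from speed).
for years in (3, 6, 9, 12):
    print(f"+{years} years: ~2^{math.log2(i860) + 4 * years / 3:.1f} Mthz per chip")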

>You bet.  I ask myself these questions whenever I encounter new technology.
>With chip density doubling every two years or so, I can imagine that a 
>project of this scale might be feasible in about a decade. The standard
>conventional memory unit in the year 2001 could be the 4 giga-bit chip, 
>since memory technologies will be hitting the 32-bit address barrier around
>that time. (Remember the 64K barrier?)  Processor speeds should be approaching
>100 MHz at that time (assuming GaAs technology is delayed due to fabrication
>problems.)

I guess I pretty much agree with your estimate of 4 Gigabit chips in 2001.
I think your estimate of 100 MHz is way low.  John Mashey at MIPS is
predicting 300 to 400 MHz clock rates by around 1995.  (MIPS builds
processors, so Mashey's estimate should be mildly reasonable.)  Currently,
we've got 4 Megabit chips just now shipping in small quantities, and 32 MHz
processors are common.  So, if we have 4 Gigabit chips starting to ship
in 2001, we should have 32 GHz processors running around.  (Hmmm...
Sanity check...  At 1 GHz, light can travel about 1 foot in one clock cycle.
At 32 GHz, the "clock-width" is well below an inch...)
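
The sanity check, worked out explicitly (a tiny sketch that just converts a
clock period into a light-travel distance):

C = 3.0e8          # speed of light, m/s
FOOT = 0.3048      # meters per foot

for ghz in (1, 32):
    period = 1.0 / (ghz * 1e9)            # seconds per clock cycle
    distance_m = C * period               # how far light gets in one cycle
    print(f"{ghz:>2} GHz: light travels {distance_m / FOOT * 12:.2f} inches per cycle")
# 1 GHz -> ~11.8 inches (about a foot); 32 GHz -> ~0.37 inches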


If this subject is of widespread and long-term interest, it might
be interesting to develop an equation to describe the number of
transistor-cycles needed to simulate a human brain.  For example,
a while back some astronomers were pondering the number of intelligent
civilizations that exist in the galaxy.  They developed an equation
that contained the number of stars in the galaxy, the fraction of
stars that were class G stars, the average number of planets per star,
the fraction of planets that were neither too close nor too far from
their parent star, and the average lifespan of an intelligent civilization.

For most of the parameters in the equation, the actual value of the
parameter isn't known.  But, various estimates can be plugged in for
each parameter, and as new information is obtained, the actual parameter
values can be refined.

So, as a first attempt to develop such an equation for determining
the number of transistor-cycles needed to simulate the human brain,
let's define the following parameters:

	N -- The number of Neurons in the human brain.

	S -- The fraction of neurons that actually need to be Simulated.
	     For example, maybe 90% of the neurons in the human brain
	     don't do anything, and hence don't need to be simulated.

	T -- The number of Transistor-cycles needed to simulate a
	     single neuron.

	C -- The number of transistor-cycles that can be crammed onto
	     a silicon Chip using some given technology.

	B -- The number of neuron simulating chips that can be
	     crammed into a Box of reasonable size for a reasonable cost
	     using the same technology as that assumed for parameter C.
	     (For reasonable size, let's restrict ourselves to 10,000
	     cubic feet.)

Using the above parameters, it appears that the question we would be
attempting to answer is "in what year will we be able to build a
10,000 cubic foot box (or smaller) that can simulate a human brain?"

We need to simulate S*N neurons.  This requires T*S*N transistor-cycles.
For a given technology, we need (T*S*N)/C chips to simulate a brain,
or we need ( (T*S*N)/C ) / B boxes to simulate a brain.

Hmmm...  Needs work.  Any volunteers?
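
As a starting point, here is a minimal sketch of that bookkeeping in Python.
Every value plugged in below is a placeholder guess of the same kind discussed
above, and T and C are taken as per-second rates so the answer comes out in
real time:

import math

def boxes_needed(N, S, T, C, B):
    """Boxes and chips needed to simulate one brain, per the formula above.

    N -- neurons in the brain
    S -- fraction of neurons that must be simulated
    T -- transistor-cycles per simulated neuron per second (real time)
    C -- transistor-cycles per second available on one chip
    B -- chips per 10,000 cubic foot box
    """
    chips = (T * S * N) / C
    return math.ceil(chips / B), math.ceil(chips)

# Placeholder values only -- every one of these is a guess:
N = 1e11      # neurons (the 60-90 billion figure, rounded up)
S = 1.0       # assume all of them matter
T = 1e5       # transistor-cycles per neuron per second (pure guess)
C = 5e13      # e.g. ~2^20 transistors at 50 MHz, as in the i860 example
B = 1000      # chips per box (pure guess)

boxes, chips = boxes_needed(N, S, T, C, B)
print(f"~{chips:,} chips, ~{boxes:,} boxes")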

-- Chuck

P.S.

How hard will it be to actually simulate a neuron in silicon?  ("Very hard"
is not a sufficient answer.)  Do we have to simulate the location and
movement of most molecules in the neuron?  That is, is a significant
fraction of the state information of a neuron stored in the chemical
composition of the neuron?  Or can we get by with a piece of hardware
that looks like an op-amp that has about 500 inputs and 500 outputs?

landman@SUN.COM (Howard A. Landman) (03/24/89)

In article <8903230413.AA09069@athos.rutgers.edu> shields@yunccn.UUCP (Paul Shields) writes:
>I've been told that the human brain has from 60 to 90 billion neurons.
>
>Problem 1: Design a chip to simulate a bunch of neurons.
>
>An estimate:  if we can simulate a neuron with a few thousand "circuits",
>we can probably simulate close to a hundred with the same circuits, since
>the transistors switch much faster than neurons react.  Assume that with
>100000 circuits (near the chip space of the 80286 processor,) we could
>simulate 1000 neurons in real time on one chip with current technology.
>Then we'd be talking on the order of 60 to 90 million chips (= 2^26) in a
>network, for each brain to be simulated.

I saw Carver Mead's talk at CompCon a few weeks ago.  He thinks this
approach is totally wrong.  Neurons operate using the raw physics of
their environment.  Simulating this digitally is horribly inefficient.
Consider simulating an 80286 using a digital computer.  The simulation
will run 2 to 7 orders of magnitude slower than the chip itself, even
though the underlying technology is identical!

A better approach (in Carver's view) is to use the device physics itself.
This requires very robust design techniques, ones that can adapt to
defects in manufacturing.  Carver and his students have designed an artificial
retina using about 6 transistors per neuron-equivalent.  It has many
of the same properties that real retinas do - for example, it sees
after-images if it stares at things too long ...

The latest Intel chip has over 1,000,000 transistors.  Using Carver's
notions, that should be equivalent to at least 160,000 neurons.  Each
of these neurons is perhaps 10,000 times faster than a human neuron.
So, to get equivalent compute power to a human brain (assuming speed
can be traded off for size), we need ~80 G human neurons = 8 M machine
neurons = 50 chips.

Designing these 50 chips is an exercise left for the reader.
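
For anyone who wants to check it, here is the arithmetic above spelled out in
Python, using only the figures quoted in this post (~10^6 transistors per chip,
6 transistors per neuron-equivalent, a 10,000x speed ratio, ~80 G human neurons):

transistors_per_chip = 1_000_000     # "the latest Intel chip"
transistors_per_neuron = 6           # Carver Mead's retina figure, as quoted
speed_ratio = 10_000                 # machine neuron vs. human neuron
human_neurons = 80e9                 # ~80 G neurons

machine_neurons_per_chip = transistors_per_chip // transistors_per_neuron
machine_neurons_needed = human_neurons / speed_ratio     # trade speed for size
chips = machine_neurons_needed / machine_neurons_per_chip

print(f"{machine_neurons_per_chip:,} neuron-equivalents per chip")   # ~160,000
print(f"{machine_neurons_needed:.0e} machine neurons needed")        # 8e6
print(f"~{chips:.0f} chips")                                         # ~48, i.e. ~50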

>Problem 2: How do we connect the chips?
>
>If, for example, we use Thinking Machines
>Corp's "Connection Machine" hypercube architecture, presently around
>64K (=2^16) processors, we would have to hook up 1024 of those beasts.
>[a Connection Machine is more like a 4x4x4-foot cube. --JoSH]

That's at 16 neuron/chip.  At 160,000 neuron/chip, we would need 0.1 of
those beasts.  (We're going for the full ~80 G neurons here.) And 0.1 CMs
would be about 6 cubic feet, as big as a TV set.  Allowing for speed again,
we only need 0.000001 of those beasts; a single PC board.  Maybe even
room left over for a modem and MIDI out. :-)

>we'll see another order of magnitude change in the size of these things.)

5 chips.  One of the modules of HAL in 2001.

>Processor speeds should be approaching
>100 MHz at that time (assuming GaAs technology is delayed due to fabrication
>problems.)

For planning purposes (I want to enter the November 2000 International
Computer Go Tournament in Taiwan), I am assuming that a personal computer
in November 2000 will be ~100 MIPS, ~128 MB memory, ~1 GB disk.  The memory
and disk numbers may be too conservative, but I don't think I'll need more
than that.

	Howard A. Landman
	landman@hanami.sun.com

rod@VENERA.ISI.EDU (Rodney Doyle Van Meter III) (04/01/89)

[This is a test to see if the posting software is working again.
 It has to be used on a real article...
 --JoSH]


In article <8903240502.AA22884@athos.rutgers.edu> csimmons@oracle.COM (Charles Simmons) writes:
>
>
>How hard will it be to actually simulate a neuron in silicon?  ("Very hard"
>is not a sufficient answer.)  Do we have to simulate the location and
>movement of most molecules in the neuron?  That is, is a significant
>fraction of the state information of a neuron stored in the chemical
>composition of the neuron?  Or can we get by with a piece of hardware
>that looks like an op-amp that has about 500 inputs and 500 outputs?

Carver Mead has had what he calls an analog VLSI neuron for at least
three years, if not longer. It can have a number of inputs, integrates
the current coming from them, and eventually fires off a pulse, then
has a recovery time before it can do so again. I think they also have
negative inputs worked out. Sounds like a simple neuron to me. They
need to scale up a bit in the number of inputs; the last I heard,
they were still at <1000 per chip, perhaps in the 100 range for
certain purposes. All the same, tremendous progress. Their clock
speeds are also naturally much higher than those of chemically driven
neurons. Their current applications include work in the vision area
and an imitation cochlea (both of these have >1000 elements, but I
don't think they were all neurons, so my earlier numbers may be
wrong). You'd have to ask someone else for better details.
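
For a concrete picture of the behavior described here -- integrate the incoming
current, fire a pulse past a threshold, then recover before firing again --
here is a toy digital sketch in Python. It is only an illustration of the idea,
not Mead's analog circuit, and all the constants in it are made up:

class ToyNeuron:
    """Integrate-and-fire toy model of the behavior described above."""

    def __init__(self, threshold=1.0, leak=0.95, refractory_steps=3):
        self.threshold = threshold            # fire when potential exceeds this
        self.leak = leak                      # potential decays each step
        self.refractory_steps = refractory_steps
        self.potential = 0.0
        self.recovery = 0                     # steps left before it can fire again

    def step(self, inputs, weights):
        """One time step: sum weighted inputs, maybe fire. Returns 1 on a spike."""
        if self.recovery > 0:                 # still recovering from last pulse
            self.recovery -= 1
            return 0
        self.potential = self.potential * self.leak + sum(
            w * x for w, x in zip(weights, inputs))   # negative weights inhibit
        if self.potential >= self.threshold:
            self.potential = 0.0
            self.recovery = self.refractory_steps
            return 1
        return 0

# A neuron with one excitatory and one inhibitory input, driven for 10 steps:
n = ToyNeuron()
weights = [0.4, -0.2]
for t in range(10):
    print(t, n.step([1, 1], weights))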

My question: my understanding of the brain and memory is poor, even by
current standards. I think I heard, though, that short term memory is
stored in the synapses, in electro-chemical form, and long term memory
(learning) involved a gradual, directed rewiring of the brain. IF this
is correct, how do the analog VLSI guys intend to manage this?

		--Rod
		  rod@ISI.Edu

cocteau@VAX1.ACS.UDEL.EDU (Daniel J Pirone) (04/14/89)

}
}My question: my understanding of the brain and memory is poor, even by
}current standards. I think I heard, though, that short term memory is
}stored in the synapses, in electro-chemical form, and long term memory
}(learning) involved a gradual, directed rewiring of the brain. IF this
}is correct, how do the analog VLSI guys intend to manage this?
}
}		--Rod
}		  rod@ISI.Edu
About where STM and LTM are (or our best guess to date):
not only is LTM in the "wiring pattern" of the neurons, but also
in the amount of calcium in the synapses (more Ca on a well-learned pattern).

About the success of Carver Mead's approach:
the intelligence that may/will arise out of VLSI neural networks
will be artificial in the sense that it will be just a bit different
from "us".  But then again, people, say far easterners, can also seem
quite different from "us", as can perhaps even the sexes
(as far as thinking/feeling goes).  Things like LTM are
not super hard to solve; most of the VLSI networks come fully inter-
connected, so gradual rewiring is no problem (assuming your equations
mimic those of biological neurons...)
Just some thoughts...
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Thus spoke Zarathustra...			cocteau@vax1.acs.udel.edu
						Daniel Pirone
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

landman@SUN.COM (Howard A. Landman) (04/15/89)

In article <8904131959.AA02594@athos.rutgers.edu> rod@VENERA.ISI.EDU (Rodney Doyle Van Meter III) writes:
>My question: my understanding of the brain and memory is poor, even by
>current standards. I think I heard, though, that short term memory is
>stored in the synapses, in electro-chemical form, and long term memory
>(learning) involved a gradual, directed rewiring of the brain. IF this
>is correct, how do the analog VLSI guys intend to manage this?

The architecture aspect of this is not trivial, but the hardware end is
probably already solved.  Just look at a spec sheet for any Xilinx
reprogrammable gate array.  Basically, you spend some hardware on
programmable crosspoint switches, and if your connectivity is well
designed, you're done.  You pay a little area and you lose some performance,
but you're still orders of magnitude faster than human neurons.
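
As a software analogy for the crosspoint idea (not a description of any actual
Xilinx part), here is a minimal Python sketch: a programmable connection matrix
whose entries can be rewritten to "rewire" which outputs feed which inputs --
the same move the long-term-memory question is asking about:

# Programmable crosspoint matrix: crossbar[i][j] = 1 routes output j to input i.
# "Rewiring" (a long-term change) is just rewriting entries in the matrix.

def make_crossbar(n):
    return [[0] * n for _ in range(n)]

def connect(crossbar, dst, src):
    crossbar[dst][src] = 1

def disconnect(crossbar, dst, src):
    crossbar[dst][src] = 0

def route(crossbar, outputs):
    """Given last cycle's outputs, compute each unit's summed input."""
    return [sum(c * o for c, o in zip(row, outputs)) for row in crossbar]

xbar = make_crossbar(4)
connect(xbar, dst=0, src=2)      # unit 2's output now drives unit 0's input
connect(xbar, dst=3, src=0)
print(route(xbar, outputs=[1, 0, 1, 0]))   # -> [1, 0, 0, 1]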

Another approach would be to use a general-purpose high-bandwidth
communications network.  This would allow anything to talk to anything,
but with routing overhead.  This is essentially what a Connection Machine
does.  The advantage is that it's VERY general; you can emulate any other
network this way.  Thus, a CM is a good test bed to explore the behavior
of various architectures.  The disadvantage is that it uses digital computation
instead of device physics, and thus throws away a couple of orders of
magnitude in performance.

	Howard A. Landman
	landman@hanami.sun.com