nagle@well.UUCP (John Nagle) (10/12/89)
Hans Moravec, in Mind Children, rates human brains at about 10^6 MIPS and 10^15 bits of storage.

I have before me an ad for a single VMEbus board mounting eight M88200 processors running at 33MHz. The manufacturer (Tadpole Technology) claims 220 MIPS for this unit.

So, to get the raw CPU power of a brain, we need about 5000 boards, or 227 standard VME cages, or 76 racks, or about 150 linear feet of cabinetry.

I've worked in mainframe installations bigger than that.

If we knew how to solve the architecture problems, we could build the hardware.

John Nagle
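Nagle's chain of figures can be checked with a few lines of arithmetic. The packing densities (boards per cage, cages per rack, feet per rack) are not stated in the post, so the values below are assumptions reverse-engineered from his totals:

```python
# Sanity check of the post's figures. The packing densities are
# assumptions inferred from the stated totals, not from the Tadpole ad.
brain_mips = 1e6                       # Moravec's estimate
board_mips = 220                       # claimed for one 8 x M88200 VME board

raw_boards = brain_mips / board_mips   # ~4545
boards = 5000                          # rounded up, as in the post
cages = round(boards / 22)             # assume ~22 boards per VME cage
racks = round(cages / 3)               # assume 3 cages per rack
feet = racks * 2                       # assume ~2 linear feet per rack

print(round(raw_boards), boards, cages, racks, feet)
```

Under those assumptions this reproduces the 227 cages and 76 racks; the 152 feet rounds loosely to the post's "about 150".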
leblanc@grads.cs.ubc.ca (David LeBlanc) (10/14/89)
In article <14079@well.UUCP> nagle@well.UUCP (John Nagle) writes:
#
# Hans Moravec, in Mind Children, rates human brains at about 10^6 MIPS
#and 10^15 bits of storage.
#
# I have before me an ad for a single VMEbus board mounting eight
#M88200 processors running at 33MHz. The manufacturer (Tadpole Technology)
#claims 220 MIPS for this unit.
#
# So, to get the raw CPU power of a brain, we need about 5000 boards,
#or 227 standard VME cages, or 76 racks, or about 150 linear feet of cabinetry.
#
# I've worked in mainframe installations bigger than that.
#
# If we knew how to solve the architecture problems, we could build
#the hardware.
Whether the numbers are right or not, you won't get anything until we (the
collective we) figure out the internal structures and wiring of the brain.
David LeBlanc
dnhjm@dcatla.UUCP (Henry J. Matchen) (10/17/89)
In article <14079@well.UUCP> nagle@well.UUCP (John Nagle) writes:
>
> So, to get the raw CPU power of a brain, we need about 5000 boards,
>or 227 standard VME cages, or 76 racks, or about 150 linear feet of cabinetry.
>
> I've worked in mainframe installations bigger than that.
>
> If we knew how to solve the architecture problems, we could build
>the hardware.

But, the architecture *IS* the problem. Consider the number of possible interconnections between 5000 boards, then consider the number of neural interconnections in the brain. In the former, the architecture is mostly linear (or parallel, if it's fairly sophisticated). The brain is very nonlinear, with a single axonal impulse generating dozens, then hundreds, then thousands of response impulses. The response domain depends on actual (read: hardware) connections, of course, but also on triggering and inhibiting factors and threshold effects. These factors are in turn determined by the brain's environment -- the body's neurovascular subsystems -- as well as by feedbacks which are both genetic and heuristic.

As a comparison, suppose you sometimes had to type the same command on your terminal two or three times before that 150-foot Itty Bitty Mind in the next room would listen to you. Suppose that, after a while, the machine would let you type one or two characters of a command, make a (not necessarily correct) assumption about the remainder of the command, and execute it. Suppose that the responses, as well as the response time, changed when other users were using the machine, as well as with the order of your commands.

The fascinating aspect is that the brain, for all its complexity, produces a fairly small set of responses. This elegantly deceptive simplicity conned a lot of neural researchers into thinking they could isolate various functions within the brain.
All they were seeing, or could see, were the "I/O controllers"; the higher functions were part of the brain's "background noise", so they missed them. To make matters even more interesting, folks applied what they knew of peripheral neural systems to the brain, and ended up looking for hardwired stimulus-response patterns. These exist, of course, but they don't explain much.

A final comparison: lop about 50 feet off of that mainframe and you end up with a slightly slower mainframe. Scoop out one third of an average brain and you end up with a politician ;^{).

Henry
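Matchen's "dozens, then hundreds, then thousands" fan-out can be illustrated with a toy calculation: if each neuron excites k downstream neurons, one impulse reaches k^n neurons after n synaptic steps, ignoring inhibition and overlapping paths. The branching factor of 12 below is an arbitrary illustrative choice, not a physiological figure:

```python
# Toy fan-out model: each neuron excites k downstream neurons.
# k = 12 is an arbitrary illustrative choice, not physiology.
k = 12
for n in range(1, 4):
    print(f"after {n} synaptic steps: {k**n} impulses")
# -> dozens (12), then hundreds (144), then thousands (1728)
```

Real response domains are far smaller, of course, precisely because of the inhibitory and threshold effects the post describes.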
ds@hollin.prime.com (10/17/89)
In "The Connection Machine" (MIT Press, 1985), W. Daniel Hillis writes:

"As near as we can tell, the human brain has about 10**10 neurons, each capable of switching no more than a thousand times a second. So the brain should be capable of about 10**13 switching events per second. A modern digital computer, by contrast, may have as many as 10**9 transistors, each capable of switching as often as 10**9 times per second. So the total switching speed should be as high as 10**18 events per second, or 10,000 times greater than the brain. . . .

"One reason that computers are slow is that their hardware is used extremely inefficiently. The actual number of events per second in a large computer today is less than one-tenth of one percent of the number calculated [above]. The reasons for the inefficiency are partly technical but mostly historical. . . . In a large von Neumann computer almost none of its billion or so transistors do any useful processing at any given instant. Almost all of the transistors are in the memory section of the machine, and only a few of those memory locations are accessed at any given time. . . . This is called the von Neumann bottleneck. The bigger we build machines, the worse it gets. . . .

"The obvious answer is to get rid of the von Neumann architecture and build a more homogeneous computing machine in which memory and processing are combined. . . . we have the existence proof of the human brain, which manages to achieve the performance we are after with a large number of apparently slow switching components."

Hope this helps.

David Spector
Prime Computer, Inc.
ds@primerd.prime.com (until the layoff)
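The switching-event arithmetic in the quoted passage can be redone directly; note that 10**18 / 10**13 actually works out to a factor of 100,000, not the 10,000 that appears in the transcription:

```python
# Redo Hillis's switching-event arithmetic from the quoted figures.
brain_events = 10**10 * 10**3        # neurons x switches/sec = 10**13
machine_events = 10**9 * 10**9       # transistors x switches/sec = 10**18

ratio = machine_events // brain_events
print(ratio)                         # 100000, i.e. 10**5

# His "less than one-tenth of one percent" utilization bound:
useful_upper = machine_events * 0.001
print(useful_upper)                  # under 10**15 useful events/sec
```

So even at the quoted utilization bound, the machine's useful event rate still exceeds the brain's raw switching rate on these figures; the gap Hillis points to is architectural, not raw speed.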
smoliar@vaxa.isi.edu (Stephen Smoliar) (10/18/89)
In article <14079@well.UUCP> nagle@well.UUCP (John Nagle) writes:
>
> Hans Moravec, in Mind Children, rates human brains at about 10^6 MIPS
>and 10^15 bits of storage.
>
> I have before me an ad for a single VMEbus board mounting eight
>M88200 processors running at 33MHz. The manufacturer (Tadpole Technology)
>claims 220 MIPS for this unit.
>
> So, to get the raw CPU power of a brain, we need about 5000 boards,
>or 227 standard VME cages, or 76 racks, or about 150 linear feet of cabinetry.
>
> I've worked in mainframe installations bigger than that.
>
> If we knew how to solve the architecture problems, we could build
>the hardware.
>
There is one small difficulty which was observed by David Waltz in his contribution to the DAEDALUS issue on artificial intelligence, "The Prospects for Building Truly Intelligent Machines." Even if you DO get the architecture right (and other readers of this bulletin board have been skeptical about that), you may face the prospect that "educating" your device may take something on the order of ten or twenty years. After all, if you duplicate human hardware, you should not be surprised if you get human performance.

=========================================================================
USPS:     Stephen Smoliar
          USC Information Sciences Institute
          4676 Admiralty Way  Suite 1001
          Marina del Rey, California  90292-6695
Internet: smoliar@vaxa.isi.edu

"For every human problem, there is a neat, plain solution--and it is always wrong."--H. L. Mencken
schultz@cell.mot.COM (Rob Schultz) (10/19/89)
In article <10175@venera.isi.edu>, smoliar@vaxa.isi.edu (Stephen Smoliar) writes:
> In article <14079@well.UUCP> nagle@well.UUCP (John Nagle) writes:
> > [edited]
> > So, to get the raw CPU power of a brain, we need about 5000 boards,
> >or 227 standard VME cages, or 76 racks, or about 150 linear feet of cabinetry.
> >
> > I've worked in mainframe installations bigger than that.
> >
> > If we knew how to solve the architecture problems, we could build
> >the hardware.
> >
> There is one small difficulty which was observed by David Waltz in his
> contribution to the DAEDALUS issue on artificial intelligence, "The Prospects
> for Building Truly Intelligent Machines." Even if you DO get the architecture
> right (and other readers of this bulletin board have been skeptical about
> that), you may face the prospect that "educating" your device may take
> something on the order of ten or twenty years. After all, if you duplicate
> human hardware, you should not be surprised if you get human performance.

I see several methods for shortening this process to a (nearly?) manageable level:

1. Memory/retention. Presumably, an intelligent machine will not forget information it has learned. (This assumes we do not model the system after ourselves :-)) Therefore, the machine would not have to waste time re-learning something it should already know.

2. Continuous input. Such a machine will not require sleep, nourishment, or any other such distractions. So, instead of losing 8 to 15 hours out of every 24, it should be able to receive continuous input of information.

3. Input speed. Information may be input directly in electronic form, thus reducing or even eliminating the time required to translate/digest information. This leads to several interesting possibilities:
   a. Partition the information, and have each of several machines digest part of it. The information could then be distributed among the machines.
   b. "Clone" the intelligence. Once the information is learned, it can be duplicated into other machines. Thus we have a way to mass-produce intelligences, and completely bypass the learning process.

4. Restricted domain. If we decide to create function-specific machines, we can restrict the domain of information to the required function. For example, if a system is to be a medical diagnosis/treatment prescription system, it would have to learn little or nothing about meteorology. Of course, this does not help us with a general-purpose system, but we can't have everything, eh? :-)

--
Thanks -    uunet!motcid!schultz
rms         Rob Schultz, Motorola General Systems Group
312 / 632 - 7597    1501 W Shure Dr, Arlington Heights, IL 60004
"Kicking the terminal doesn't hurt the monsters (or the bugs)."
bwk@mbunix.mitre.org (Barry W. Kort) (10/19/89)
In article <10175@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar) writes:
> There is one small difficulty which was observed by David Waltz in
> his contribution to the DAEDALUS issue on artificial intelligence,
> "The Prospects for Building Truly Intelligent Machines." Even if
> you DO get the architecture right (and other readers of this bulletin
> board have been skeptical about that), you may face the prospect that
> "educating" your device may take something on the order of ten or
> twenty years. After all, if you duplicate human hardware, you should
> not be surprised if you get human performance.

In _Apprentices of Wonder: Inside the Neural Network Revolution_, William Allman describes a remarkable experiment in which a modest neural network learned to enunciate English in just 36 hours of training on a 1000-word text using back-propagation feedback.

It appears that what slows down the rate of human learning is the absence of a fast and accurate feedback loop. I think that is why teenagers become so proficient in mastering computer games--the feedback is instantaneous and unerring.

--Barry Kort
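The feedback loop Kort describes is the core of back-propagation: after every forward pass, the error signal is immediately fed back to adjust the weights. The sketch below is not the pronunciation network Allman describes, just a minimal modern illustration in Python/NumPy on a stand-in task (XOR); the layer sizes, learning rate, and iteration count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in task (XOR); all sizes and hyperparameters are arbitrary.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward():
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

h, out = forward()
initial_loss = float(((out - y) ** 2).mean())

lr = 1.0
for _ in range(5000):
    h, out = forward()
    err = out - y                    # the fast, accurate feedback signal
    d2 = err * out * (1 - out)       # backprop through output layer
    d1 = (d2 @ W2.T) * h * (1 - h)   # backprop through hidden layer
    W2 -= lr * (h.T @ d2); b2 -= lr * d2.sum(axis=0)
    W1 -= lr * (X.T @ d1); b1 -= lr * d1.sum(axis=0)

h, out = forward()
final_loss = float(((out - y) ** 2).mean())
print(initial_loss, final_loss)      # loss drops as feedback accumulates
```

The point of the sketch is the tight loop: every single pass through the data produces an immediate, exact error signal, which is what Kort suggests human learners usually lack.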
wcalvin@well.UUCP (William Calvin) (10/19/89)
For an entirely different architecture (though one pretested by biological speciation and the immune response), see my article "The brain as a Darwin Machine" in NATURE 330:33-34 (6 Nov 1987).

William H. Calvin        wcalvin@well.sf.ca.us
Univ. of Washington      wcalvin@uwalocke.bitnet
Biology NJ-15            wcalvin@locke.hs.washington.edu
Seattle WA 98195         206/328-1192  206/543-1648
jiii@visdc.UUCP (John E Van Deusen III) (10/29/89)
In article <2556@uceng.UC.EDU> dmocsny@uceng.UC.EDU (daniel mocsny) writes:
> ...
> Even if building a human(like) intelligence in hardware/software
> proves to be impossible for some reason, we still have much to gain
> by implementing parts of human intelligence.

Although we are presently incapable of building even "rodent(like)" intelligence into our machines, if it "proves" to be impossible, then by definition neither humans nor rodents can exist. But they do. I will grant that humans may never get around to it, but it is certainly possible to achieve the human level of intelligence in a finite automaton -- it's already been done!

I think that the original poster, and others of us, are anticipating something rather more. It seems to me that once a machine reaches the level of a rabbit or a dog, then the human level is just another point on the axis of a continuum. Suddenly you are confronted with a microwave oven in possession of more intelligence, stored knowledge, empathy, consciousness, understanding, and even pure soul than has heretofore been exhibited by all the humans who ever walked the earth: past, present, and future. And next year's model is even better.

What the machines would do next could hardly be relevant to us "dinosaur people". We could, of course, all go back to the Garden of Eden, but I suspect that we might eventually, mercifully, blow ourselves up. I doubt it would really be much different if aliens appeared who could answer every one of our questions and make all of our Gods dance naked in the palms of their paws. Contact with vastly superior cultures, take for instance the landing of Cortez in Mexico, has always had the effect of destroying certain central societal myths held by the inferior culture. These myths, it is said, are essential for maintaining our gumption to get up in the morning and get to work on creating artificial intelligence.
--
John E Van Deusen III, PO Box 9283, Boise, ID 83707, (208) 343-1865
uunet!visdc!jiii
ok@cs.mu.oz.au (Richard O'Keefe) (10/29/89)
In article <659@visdc.UUCP>, jiii@visdc.UUCP (John E Van Deusen III) writes:
> Contact with vastly superior cultures, take
> for instance the landing of Cortez in Mexico, has always had the effect
> of destroying certain central societal myths held by the inferior
> culture.

This is a tired old legend. It happens not to be true. It wasn't contact with a superior culture that did the damage in Mexico, it was contact with new DISEASES. It wasn't contact with a superior culture that did the Tasmanians in, it was contact with BULLETS. If I recall the figures correctly, the native population of Central and South America went from a couple of hundred million to a few tens of millions in less than a century. It's hard to keep _any_ aspect of a culture going when people are dying off like that. Contact with an obviously inferior culture will destroy a superior culture if the people from the inferior culture bring the right diseases.
dmocsny@uceng.UC.EDU (daniel mocsny) (10/30/89)
In article <2569@munnari.oz.au>, ok@cs.mu.oz.au (Richard O'Keefe) writes:
> In article <659@visdc.UUCP>, jiii@visdc.UUCP (John E Van Deusen III) writes:
> > Contact with vastly superior cultures, take
> > for instance the landing of Cortez in Mexico, has always had the effect
> > of destroying certain central societal myths held by the inferior
> > culture.
>
> This is a tired old legend. It happens not to be true.

And what vastly superior culture have you contacted, that you have determined the above central societal myth to be in fact a tired old legend? :-)

Dan Mocsny
dmocsny@uceng.uc.edu
es@sinix.UUCP (Dr. Sanio) (11/09/89)
In article <78145@linus.UUCP> bwk@mbunix.mitre.org (Barry Kort) writes:
>
>I think we should build an Artificial Sentient Being by the end
>of the Millenium. Not just an intelligent and knowledgeable
>information processing system. A wise being able to discover
>the nature of the world in which it finds itself embedded and
>able to contribute to the invention of a better future.

That reminds me a little of how communist leaders, from 1930 onward, kept predicting when the final state of communism and everlasting abundance would be reached. It was always a decade or so ahead. They only needed a bit more progress in technology and organization ...

With that "final frontier" of "hard" AI, it's very similar. The first predictions came up in the fifties: just a bit more memory, a faster CPU and peripherals, and it will be done. While hardware evolved orders of magnitude faster than anybody predicted, progress in AI consisted mainly of finding new problems on the way to the original goal -- somewhat like chasing the rainbow.

To prove the goal feasible or impossible, my proposal would be to concentrate on simulating the simplest and best-researched biological systems, say insects, which are far closer to an automaton than e.g. rodents (that task seems too hard for AI, IMHO). Simulating would mean covering all aspects of neural activity of those systems (as far as biologists have researched it -- maybe that research has to be pushed in order to get the specification). Doing that by the end of the millennium (or proving even that to be too hard) would show what AI is able to do. For now, our best automatically driven cars, trains, missiles etc. seem utterly primitive to me compared with worms, ants, or flies.

>--Barry Kort

regards, es