[sci.nanotech] Down and Out in Nanoland

Hanson@charon.arc.nasa.gov (Robin Hanson) (01/01/91)

"With nanotechnology, most people may be living near the edge of poverty"

The rest of this message offers arguments for this outrageous claim, to
stimulate discussion.

My baseline image of our nanotechnology future splits into three ages,
"Replication", "Uploading" and "AI".  In the Replication age, basic
nanotechnology (including assemblers) allows a tremendous increase in
total wealth, and as-yet-unclear ecological and military consequences.
In the Uploading age, assemblers can dismantle a human brain and
reimplement that person in new improved (computer-like) hardware.  This
only requires understanding how some low level of the brain works, not
the higher levels.  In the AI age, artificially intelligent programs
become competitive with human minds, or we learn enough about human
minds to modify or merge them substantially.

A couple of questions naturally arise for each of these ages.

1. What will life be like for me (or my children)?

2. What will life be like for the typical person/agent?

3. What will life be like for the typical economically influential
   person/agent, i.e. for the people who have most of the wealth and
   therefore influence what happens most?

In our present age, you and I live well.  The typical person in the
world lives much more poorly than us, but richer than a century ago.
Most wealth is concentrated in the "industrialized" world, apparently
because educating one person well is more economically efficient than
making lots of uneducated children.   Therefore the typical
economically influential person lives fairly well.

In the replication age, wealth will increase much faster than the
number of people, so with reasonable investment instruments, most
everyone should live much better than now.  Wealth may be roughly
concentrated among the same people it is now, so you and I may live
very well by today's standards.

In the uploading age things get more complicated.  Since uploaded minds
can be copied at fairly low cost, people can, if they so choose,
multiply much faster than the rate at which total wealth increases.  No
doubt some people will so choose, and the average wealth per person copy
may drop dramatically, till most people copies are at the edge of
poverty.  By "poor" I mean that such agents have most of their capital
tied up in their physical bodies, must spend most of their time managing
that capital carefully for fear of going bankrupt and therefore dying
(having their mental state erased), and must accept alien and harsh
working conditions to survive; many such agents die anyway.

Of course you or I, or our children, may move from the Replication age
into the Uploading age with great wealth, and so if we avoid copying we
may live like kings regardless of all those poor folk around.

But there is the question of where the bulk of the wealth and economy
will be.  It seems plausible to me that the economically most efficient
way to invest any given capital (i.e. resulting in the largest growth
rate as an investment), would be to put the bulk of it into making
copies.  A single mind directing a billion dumb nanocray-diggers
wouldn't do as well at mining an asteroid as a thousand or million human
minds with half a billion such diggers.  If so, then people who avoid
making copies, keeping just a couple very rich ones, would constitute a
decreasing portion of the economy.  The old rich may live the old way
on their estates, but the driving force of change would be elsewhere.
People, like me, who really want to influence and be part of the
long-term future may choose to make many copies, even though most of
them would be poor.

Yes, some of these copies may live in much faster etc. hardware, and so
when things go bad for them could maybe sell off the better brain and
move into a slower/cheaper one.  But this must end somewhere (tape
archives?) and wouldn't most agents (and most of the wealth?) be at
this low end?  Also, a mind tuned to running at a certain speed, or
with certain aids, may not compete as well at lower levels, and so once
knocked from their niche may fall steadily through the hardware levels.

In the AI age, the economically dominant agents, be they human or not,
may be incrementally growable or reducible.  Different human copies
might be merged back together when one of them was about to go bankrupt,
and so not experience "dying".  In general, though, I find it very hard
to project into this period.

There ... that's the argument.  So where does it go wrong?  It is 
not as precise as I would like it to be, but I figure this is a good
forum to critique it at this stage.

Robin Hanson  hanson@charon.arc.nasa.gov   "Stake Your Reputation"
415-604-3361  MS244-17, NASA Ames Research Center, Moffett Field, CA 94035
415-651-7483  47164 Male Terrace, Fremont, CA  94539-7921 

P.S. I stole the subject line from Ravi Pandya

brucec@phoebus.labs.tek.com (Bruce Cohen;;50-662;LP=A;) (01/04/91)

In article <Dec.31.18.46.49.1990.26097@athos.rutgers.edu> Hanson@charon.arc.nasa.gov (Robin Hanson) writes:
> 
> "With nanotechnology, most people may be living near the edge of poverty"
> 
> The rest of this message offers arguments for this outrageous claim, to
> stimulate discussion.
...
> 
> In the uploading age things get more complicated.  Since uploaded minds
> can be copied at fairly low cost, people can, if they so choose,
> multiply much faster than the rate at which total wealth increases.

I'll agree with this statement.  You can always find some person to do
anything, no matter what consequences the act may have.

>  No
> doubt some people will so choose, and the average wealth per person copy
> may drop dramatically, till most people copies are at the edge of
> poverty. 

Here's where we part company.  Just because some people may choose to make
lots of copies of themselves, it doesn't follow that many will, or that
those who don't will be seriously affected.  A lot depends on the relative
numbers, on the total wealth to be distributed, and on the way in which it
is distributed.  It also depends on the copyright laws.

What I mean by this is that there may be regulation of the copying of true
human personalities (as opposed to subpersonae or animae; I'll get to these
later).  For instance, there is an excellent chance that, assuming a
continuing high rate of population increase, and no nanotech breakthrough
(this is hypothetical, remember), many societies will attempt to control
the dilution of wealth by requiring a prospective parent to get a license
to have a child, said license demonstrating the ability of the parent to
provide some minimum standard of living for the child.  Given uploading, a
similar form of regulation could protect potential copies.

Why would a society install regulations like that?  Precisely to prevent
the kind of inflation you describe.

> But there is the question of where the bulk of the wealth and economy
> will be.  It seems plausible to me that the economically most efficient
> way to invest any given capital (i.e. resulting in the largest growth
> rate as an investment), would be to put the bulk of it into making
> copies.  A single mind directing a billion dumb nanocray-diggers
> wouldn't do as well at mining an asteroid as a thousand or million human
> minds with half a billion such diggers.

I think you are underestimating the potential of the technology.  The
economics of this example assume that some useful number of human minds,
suitably embodied in silicon or some such, are cheaper than a half-billion
diggers.  But you really don't need the entire mind, not even very much of
it; it's simpler to have a bunch of dumb processors minding the diggers,
a smaller number of overseers monitoring them, and so on hierarchically to
some small number of full humans.  It's also cheaper, in terms of whatever
hardware is embodying the minds and processors.  And each processor is
better suited to its job; less likely to get bored and make mistakes, or
get angry and screw things up.

But, you say, who programs these processors?  Isn't it cheaper to use a
human mind instead of an artificial program, which has to be built or
purchased?  True, but I don't have artificial programs in mind.  Instead,
I'm suggesting the use of portions of a human personality, Minsky's agents
(what I called subpersonae above) and animal personalities or portions
thereof (what I called animae [pun intended] above).  I won't go into a lot
of discussion of the possibilities here, I'll save that for a later
posting.  In the meantime, you'll find some speculations about reusing
parts of animal and human personalities in Greg Bear's science-fiction
novels "Eon" and "Eternity".

I think there are significant economies in using smaller processing units
than whole minds, the legal and ethical questions are not as knotty, and
the problems of distributing wealth are far less nasty (unless, of course,
you assume that each part has the legal status of a whole entity, which
seems excessive to me).  Another benefit is that if I copy some parts of
myself into processors which I (or a copy of myself) is supervising, it's
equivalent to spreading myself over the operation of the entire system.  I
could have a lot more confidence in the lower levels to handle problems in
ways I understand and can react to than if the lower levels consist of
(copies of) other people.

> In the AI age, the economically dominant agents, be they human or not,
> may be incrementally growable or reducible.  Different humans copies
> might be merged back together when one of them was about to go bankrupt,
> and so not experience "dying".  In general, though, I find it very hard
> to project into this period.

You could argue that I'm describing things which won't be possible until
the AI age, but I think that far less is required of the technology to make
copies of parts of personalities and have them communicate as if they were
separate entities than to split and merge entire personalities as if they
were components.

--
------------------------------------------------------------------------
Speaker-to-managers, aka
Bruce Cohen, Computer Research Lab        email: brucec@tekchips.labs.tek.com
Tektronix Laboratories, Tektronix, Inc.                phone: (503)627-5241
M/S 50-662, P.O. Box 500, Beaverton, OR  97077

Hanson@charon.arc.nasa.gov (Robin Hanson) (01/07/91)

In <Jan.3.23.19.11.1991.3323@athos.rutgers.edu> josh@cs.rutgers.edu 
thoughtfully writes:
>Robin Hanson writes:
>  "My baseline image of our nanotechnology future splits into three ages,
>    "Replication", "Uploading" and "AI"."
>I think the "AI" age will come first, and that we are already in the
>beginning of it. ... uploading, which leaves you fairly sure
>that the thing you've created is "really a person".  However, by the
>time that is possible, I believe that purely synthetic systems will
>exist that can legitimately aspire to personhood.  ...
>Now the question is, is it economically more useful to mine your 
>asteroid with a crew of "full-personhood" copies of yourself, or
>with partials that embody your mining expertise but don't have 
>your taste for expensive Venusian wines?  Obviously you don't have
>a choice if your only option is to make a copy; but ...

In <Jan.3.23.33.49.1991.3613@athos.rutgers.edu> brucec@phoebus.labs.tek.com (Bruce Cohen)
thoughtfully writes:
>I think you are underestimating the potential of the technology. ...
>you really don't need the entire mind, not even very much of it ...
>I'm suggesting the use of portions of a human personality, Minsky's agents  
>(what I called subpersonae above) and animal personalities or portions
>thereof (what I called animae [pun intended] above).  ...
>You could argue that I'm describing things which won't be possible until  
>the AI age ...

Exactly.  Both of you seem to be arguing that there will not be a
significant time delay between the introduction of uploading technology
and the ability to split off "partial" minds.  And Josh thinks true AI
will happen first.

I don't think either of these alternatives is likely.  (I'd be happy to
offer betting odds if you would phrase a precise claim.)  The ability to
create partials which can be merged back into other partials or wholes
when they are no longer economically viable would seem to require
a tremendous understanding of how our brain works, and even then may not
be possible.  

As far as AI goes, I have been a professional AI researcher for the last
six years, and I think the chances of true AI coming before nanotech are
quite low, even if nanotech takes forty years.  And I think the vast
majority of AI researchers agree with me.  The progress we've made in
the last thirty years is good, but nowhere near halfway.

Even if we disagree here, do you grant that *if* uploading comes first
*then* my claim about most agents being poor is plausible?

Some side points:

Josh writes:
>Why aren't the only living organisms bacteria?  They reproduce a hell
>of a sight faster than humans ...
>The most successful replicators of the animal world, in terms of 
>biomass represented by the species, are the ants, ...

Because investing in an ant body is apparently the "economically" most
efficient scale for an agent until recent innovations (like mammal
brains).  Note that most ants *do* live near the edge of poverty.  If
they invest poorly and start to starve, they do not convert to being a
flea or a bacterium -- they are at serious risk of dying.  Why humans
aren't poor now is the anomaly to be explained - my explanation is that
we are creating wealth faster than we can create educated children.

Bruce writes:
>Just because some people may choose to make
>lots of copies of themselves, it doesn't follow that many will, or that
>those who don't will be seriously affected.  A lot depends on ...

I agree.  How many people choose to make copies will depend in large
part on how economically beneficial it is to do so, which is what I am
trying to discuss.

>there may be regulation of the copying of true human personalities ...
>Why would a society install regulations like that?  Precisely to prevent
>the kind of inflation you describe.

I agree that regulation can dramatically alter a society within the
scope of that regulation (such as a nation) from what that society might
be with a free economy.  However, if that choice puts them at a
significant economic disadvantage to other societies, they will lose out
economically.  My question is about the agents that hold the bulk of the
wealth.

I imagine that if there are going to be regulatory limits, they will
probably happen at the level of whether people can upload at all.
Uploaded people will seem, and be, very weird to most people.  They
will be very threatening because of their faster clock speed, potential
immortality, and ability to multiply.  And the fact that the uploaded state
is likely to be a very alien and literally maddening experience for
a while will make things worse.  If uploading is allowed, most people
will face the choice of jumping in no matter how alien it might be, or 
becoming economic bit players.

Robin Hanson  hanson@charon.arc.nasa.gov   "Stake Your Reputation"
415-604-3361  MS244-17, NASA Ames Research Center, Moffett Field, CA 94035
415-651-7483  47164 Male Terrace, Fremont, CA  94539-7921 

Hanson@charon.arc.nasa.gov (Robin Hanson) (01/10/91)

In <Jan.7.08.58.43.1991.28451@athos.rutgers.edu> geopi@hocpa.att.com (George P Cotsonas) writes:
>In article <Jan.3.23.30.03.1991.3519@athos.rutgers.edu>, Hanson@charon.arc.nasa.gov (Robin Hanson) writes:

 >> I agree that there will be a significant time lapse between developing
 >> nanotechnology and technologies that require substantial understanding
 >> of how the brain works.

  >then contradicts it by saying

 >> However, it seems plausible that "uploading"
 >> will only require that we have a reasonable model of the signal
 >       ----
 >> processing capabilities of neurons and synapses, an understanding we
 >> seem close to today.  ...

 >I question whether the nanotechnology required to analyze, dismantle,
 >and record neural nets would be "not particularly advanced."

I apologize if I gave the impression that the technology required for 
uploading would be trivial - that would be an insult to the many 
researchers who have worked hard on it for decades.  My main point 
was/is that the technology for uploading would probably be *easier*, 
and hence come sooner, than that for human-level AI or human "partials".

Robin

[Perhaps a bit more detail would help ground the arguments better.
 It is true that, for example, the ability to copy someone's voice
 (the phonograph) predated the ability to synthesize one by almost
 a century.  What about uploading?  
 For computer-oriented types like myself, a nice overview of some
 neurophysiology that is germane to the question is found in Ch. 20
 of the "PDP" books (McClelland & Rumelhart).  
 At an absolute minimum we must consider 10 billion neurons, with 
 an average 1000 synapses each, which can fire at rates of a kilohertz.
 It'll take at least a MIPS to simulate a synapse with any claim
 to fidelity, so we need 10 trillion MIPS (that's 10 million tera-ops
 or 10^19 instructions per second) to run the simulation.  This can
 be compared with Moravec's estimate of 10^13 IPS (10 million MIPS)
 for human equivalence AI style (i.e. not simulating neuron by neuron).
 Now AI seems to be moving slowly if you're sitting behind it in 
 traffic, but my own 15 years' association with the field convinces
 me that it's moving fast enough to keep up with the machines it
 has to run on.  With the rules of thumb 1990=10 MIPS and a decade
 gives 1000x computing power, we get full AI in 2010 but don't have
 a machine you can upload into until 2030.  (Nanotech doesn't change
 the rules of thumb, it simply helps them stay on the curve after
 electronics give out.)
 --JoSH]
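[The arithmetic in the note above can be reproduced in a few lines; the
 sketch below uses only JoSH's stated rules of thumb (10 billion neurons,
 1000 synapses each, 1 MIPS per synapse, 1990 = 10 MIPS, 1000x per
 decade), none of them measured values:

```python
# A quick check of JoSH's back-of-envelope estimates (a sketch: every
# constant below is one of his stated rules of thumb, not a measurement).
import math

neurons = 10e9               # at least 10 billion neurons
synapses_per_neuron = 1e3    # average 1000 synapses each
mips_per_synapse = 1.0       # >= 1 MIPS to simulate one synapse

upload_mips = neurons * synapses_per_neuron * mips_per_synapse
print(f"upload simulation: {upload_mips:.0e} MIPS")   # 1e13 MIPS = 1e19 IPS

moravec_mips = 1e13 / 1e6    # Moravec's 1e13 IPS for AI-style equivalence

def year_reached(target_mips, base_mips=10.0, base_year=1990):
    """Rules of thumb: 1990 = 10 MIPS, and a decade gives 1000x."""
    return base_year + 10 * math.log10(target_mips / base_mips) / 3.0

print(f"full-AI hardware:  ~{year_reached(moravec_mips):.0f}")  # ~2010
print(f"upload hardware:   ~{year_reached(upload_mips):.0f}")   # ~2030
```

 The two dates in the note, 2010 for full AI and 2030 for uploading,
 fall out directly. -ed]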

 

Hanson@charon.arc.nasa.gov (Robin Hanson) (01/10/91)

In <Jan.7.10.46.46.1991.29737@athos.rutgers.edu> josh@cs.rutgers.edu writes:
        
>I hereby offer Robin Hanson (only) 2-to-1 odds on the following
>statement:
>
>"There will, by 1 January 2010, exist a robotic system capable
>of cleaning an ordinary house (by which I mean the same job
>my current cleaning service does, namely vacuum, dust, and scrub
>the bathroom fixtures).  This system will not employ any direct
>copy of any individual human brain.  Furthermore, the copying of
>a living human brain, neuron for neuron, synapse for synapse, into 
>any synthetic computing medium, successfully operating afterwards
>and meeting objective criteria for the continuity of personality,
>consciousness, and memory, will not have been done by that date."

I believe this was in response to my saying:
>(I'd be happy to offer betting odds if you would phrase a precise claim.)

Thank you for taking the trouble to phrase a precise claim!  I believe
the effort has paid off in helping us isolate where (or whether) we 
disagree.  

The 67% chance you are estimating for your claim does not seem unreasonable 
to me.  I'd say the reasonable odds are somewhere between 20 and 80 percent,
and so, at present, am not willing to bet against odds in that range. 
(Translated: I'll bet either for it or against it if you give me 4:1 odds.)

The reason I say this is that, while it would be clearly impressive and
terribly useful, I do not take a housecleaning robot as a stand in for
"human-level AI".  Although perhaps if I thought about it more I would
come to believe that there wasn't any other plausible way to make such 
a robot.  We could migrate a discussion to comp.ai on this if you wish.

I'd say the chances of just the uploading part of your claim are much 
less than the household robot part, so my estimate above is dominated
by the robot part.  

Robin

[Note to readers:  Robin's paper on idea futures, which was summarized
 in the last Update, is available in full for FTP from planchet.rutgers.edu
 (the nanotech archives).  It is in nanotech/papers/hanson.

 Note to Robin:  The reason I'm interested in "robotic level AI" has to
 do with the economic argument you gave for overpopulation by uploaded
 copies.  I believe that those economic niches will long have been filled 
 by lower robotic forms of AI.
 I am happy to note that we seem to be in agreement once the specifics
 are carefully formulated!
 --JoSH]

wilcox@uwila.cfht.hawaii.edu (D. Wilcox) (01/12/91)

In article <Dec.31.18.46.49.1990.26097@athos.rutgers.edu>, Hanson@charon.arc.nasa.gov (Robin Hanson) writes:
-> "With nanotechnology, most people may be living near the edge of poverty"

[much included text deleted -j]

-> In the replication age, wealth will increase much faster than the
-> number of people, so with reasonable investment instruments, most
-> everyone should live much better than now.  Wealth may be roughly
-> concentrated among the same people it is now, so you and I may live
-> very well by today's standards.
-> 

In the replication age, what is wealth? If you can assemble anything
that you can think of, buying things becomes much less necessary.
You may not have a lot of gold, silver, etc., in the bank,
but unless you need a specific substance, most items that a normal
person would need can be assembled from sand and sunlight. The asteroids
have enough metals, etc., in them to supply the needs of humanity for
a long time. What will you trade for needed items? Possibly artwork,
books, music, research, and designs for assemblers to create, i.e.,
software.

-> In the uploading age things get more complicated.  Since uploaded minds
-> can be copied at fairly low cost, people can, if they so choose,
-> multiply much faster than the rate at which total wealth increases.  No
-> doubt some people will so choose,...

Why would you choose to copy yourself? How do you envision capital being
tied up in your body? Why would there be alien and harsh working conditions
for humans when you can assemble comfortable places? And again, what is
wealth? If you can assemble food, housing, transportation, etc., what
more do you really need even if bankrupt? And even if you do copy yourself,
those will immediately become separate identities. You will no longer
be directly a part of their thoughts and experiences.
 
-> Of course you or I, or our children, may move from the Replication age
-> into the Uploading age with great wealth, and so if we avoid copying we
-> may live like kings regardless of all those poor folk around.
-> 
-> But there is the question of where the bulk of the wealth and economy
-> will be.  It seems plausible to me that the economically most efficient
-> way to invest any given capital (i.e. resulting in the largest growth
-> rate as an investment), would be to put the bulk of it into making
-> copies.  A single mind directing a billion dumb nanocray-diggers 

Wouldn't it be more efficient to make copies of the assemblers to begin
with instead of the humans? If I have my replicator replicate itself, I
can give a replicator to another different human who will have a different
outlook on life, the universe and everything (sorry Douglas Adams). To
me this is more efficient than giving copies of myself the same tools.
We would all duplicate each other's efforts since a copy of myself would
presumably want the same things I do, more than someone completely
different than me.

-> copies. A single mind directing a billion dumb nanocray-diggers
dumb nanocray-diggers? :^)
-> wouldn't do as well at mining an asteroid as a thousand or million human
-> minds with half a billion such diggers....

A fairly simple computer could direct diggers. What is the advantage to
having a million minds tied up in directing a mining operation? Substances
are fairly easy to distinguish without human intervention now. Most lab
analysis is done automatically. When you have nanomachines this will be
even more true.

-> Yes, some of these copies may live in much faster etc. hardware, and so
-> when things go bad for them could maybe sell off the better brain and
-> move into a slower/cheaper one. (stuff deleted)

What, the sun quits shining? Their replicators fail? :-)
That is a nightmare! Imagine, one of your copies ending up running
someone's hand calculator and dreaming of when it was a Cray! ;-) ;-)
But seriously, how do you envision things going bad?

(other stuff deleted)
 
-> There ... that's the argument.  So where does it go wrong?  It is 
-> not as precise as I would like it to be, but I figure this is a good
-> forum to critique it at this stage.
->
I think it goes wrong in assuming that the same economic situations will
exist in the era of nanotechnology. IMHO the age of the replicator will
cause drastic changes in most of our institutions, hopefully for the
better. I don't envision the nanotech future as being such a bleak
place. I think it will turn out to be an age of universal exploration. 
I see it as freeing up an individual from most of today's economic chains.
 
-> Robin Hanson  hanson@charon.arc.nasa.gov   "Stake Your Reputation"

===============================================================================
Dan Wilcox	wilcox@cfht.hawaii.edu		(808) 885-7944
-------------------------------------------------------------------------------
 Anything one man is capable of imagining,
 other men will be capable of making real.           Jules Verne
-------------------------------------------------------------------------------
Disclaimers? I don't need to show you no stinking disclaimers!
===============================================================================

[Let me step in here because although I was one of the first to demur 
 from Robin's proposition, it is a fairly strong intellectual redoubt
 and valid criticisms need to be quite subtle.  As I mentioned, it
 is a modern-day Malthusian argument; but it can go Malthus one better.
 Suppose Dan is right and there's no rational reason to indulge in
 radical self-copying.  The catch is, what if there's one, just one,
 irrational individual out there who has this urge to spend every 
 cent he can get his hands on, copying himself?  Well, guess what,
 all the copies have the same urge.  If the average person copies
 himself once every 20 years, and the "repro-man" does it once a year,
 in 34 years half the world's population is copies of "repro-man".
 Let me add that the urge is *not* rare: most of the people I know
 take a substantial cut in standard of living over what they could
 otherwise afford, in order to have children.  In evolution, the 
 tendency to reproduce is amplified--that's just the way things work.
 --JoSH]
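[The "repro-man" figure above can be checked directly; the sketch below
 assumes a starting world population of 5 billion (my assumption -- the
 note doesn't state a figure) and solves for when the yearly-doubling
 line overtakes everyone else:

```python
# Check of the "repro-man" arithmetic (a sketch; the 5 billion starting
# population is an assumed figure, not stated in the note above).
import math

world_pop = 5e9   # assumed 1991-ish world population

# The average person copies once per 20 years; repro-man's line doubles
# yearly.  After n years: repro-man copies = 2**n, everyone else
# = world_pop * 2**(n/20).  The lines are equal (repro-man is half the
# total) when 2**n = world_pop * 2**(n/20), i.e.
#   n - n/20 = log2(world_pop)   =>   n = log2(world_pop) * 20/19
n = math.log2(world_pop) * 20 / 19
print(f"repro-man's copies reach half the population after ~{n:.0f} years")
```

 which lands on the 34 years quoted above. -ed]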

Hanson@charon.arc.nasa.gov (Robin Hanson) (01/12/91)

In <Jan.9.17.08.07.1991.13690@athos.rutgers.edu> JoSH writes:
> It'll take at least a MIPS to simulate a synapse with any claim
> to fidelity, so we need 10 trillion MIPS (that's 10 million tera-ops
> or 10^19 instructions per second) to run the simulation.  This can
> be compared with Moravec's estimate of 10^13 IPS (10 million MIPS)
> for human equivalence AI style (i.e. not simulating neuron by neuron).
> Now AI seems to be moving slowly if you're sitting behind it in 
> traffic, but my own 15 year's association with the field convinces
> me that it's moving fast enough to keep up with the machines it
> has to run on.  With the rules of thumb 1990=10 MIPS and a decade
> gives 1000x computing power, we get full AI in 2010 but don't have
> a machine you can upload into until 2030.  (Nanotech doesn't change
> the rules of thumb, it simply helps them stay on the curve after
> electronics give out.)
 --JoSH]

Thanks for the overview.  In "Mind Children" Moravec uses the estimate
of *two* decades to give 1000x computing power, and hence estimates the
hardware to support human-level AI will be available in 2030, not 2010.
And I presume everyone understands that all these MIPS estimates are
*very* crude!

Of course having sufficient hardware for a human level AI does *not*
mean we will know how to write programs to take full advantage of that
hardware.  The software is the hard part.  The idea that we will know
how to write human-level AI software as soon as the hardware is
available seems quite suspicious to me.  For example, I think it is
highly unlikely that we now know how to write the most intelligent agent
consistent with today's hardware.  It may very well be that present
hardware is sufficient to support a human-level "teletype" AI (i.e.
without full vision, hearing, and tactile abilities).

Regarding our relative progress in hardware and software, the Nanotech
transition *does* make a difference.  Hardware capabilities may go
through a tremendous increase in a relatively short time, while software
would not.  Thus I estimate that while, as you say, AI systems of
comparable ability would take much less computing power, we will not know
how to program them when the hardware sufficient for uploading arrives.

Robin

P.S. This is a fun discussion!

[It's also one of the higher quality discussions we've had here,
 thank you for starting it!  
 I think that AI is a lot simpler than most people believe, in 
 one sense, and harder in another.  To invoke HPM again, most of
 the machinery of the brain is involved in recognizing objects, 
 not bumping into trees, etc.  My guess is that when we have good
 algorithms for those things, the re-application of the algorithms
 to the higher-level thought processes that give us so much trouble
 now will be fairly simple.  That's because (I claim) that's how
 evolution did it in the first place, i.e., copy and modify existing
 functionality.  That's the "easy" part.  The hard part is getting
 to the two-year-old chimp stage.
 Now what Moravec et al. discovered, in obstacle avoidance anyway, 
 is that there is a level of brute force at which, still using plenty
 of ingenuity to be sure, things suddenly seem to work a lot better
 even though they're running simpler algorithms than before (in some
 conceptual sense).  A big 2D array is an easier-to-manage map than a 
 dynamically balanced quadtree.  Indeed, you can do some mathematically
 more sophisticated things with your array and still have a simpler
 program overall.  Well, lo and behold, it turns out the big array
 implementation is a lot less "brittle" than the old sophisticated
 ones; but it takes more horsepower to run.
 So more MIPS not only lets you run your AI program faster; it makes
 it easier to write.  More than half the hair of AI programs is in the
 pursuit of efficiency, I opine:  not only does it soak up effort but
 it makes the other half harder to write.  Try to explain the Rete
 algorithm in 25 words or less.  In an appropriate associative 
 processor the equivalent is "do a pattern-match between expression
 A and every expression in set B."  Highly inefficient, says the 
 algorithmicist.  But the heavy hardware version allows for dynamic
 rulebases, not just adding and removing rules but on-the-fly subsetting
 and rulebase merging.  Again brute force has made the algorithm 
 simpler but also produced an unexpected bonus in functionality and/or
 robustness.
 I will give you 5-to-1 odds there is a tera-ops machine by 2000;
 100-to-1 odds there's one by 2010--exact wording to be worked out
 if you're interested.
 --JoSH]
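[The "heavy hardware" equivalent of Rete described above (match
 expression A against every expression in set B) can be sketched as toy
 code.  The `?variable` syntax and the fact format below are my own
 illustration, not any real production-system API:

```python
# A minimal sketch of the brute-force alternative to Rete: match one
# pattern against every expression in the working set.  Symbols starting
# with '?' are variables that bind to anything; everything else must
# match literally.  (Toy code, not a real production system.)

def match(pattern, expr, bindings=None):
    """Return a binding dict if pattern matches expr, else None."""
    bindings = dict(bindings or {})
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in bindings:                     # variable already bound:
            return bindings if bindings[pattern] == expr else None
        bindings[pattern] = expr                    # bind the variable
        return bindings
    if isinstance(pattern, tuple) and isinstance(expr, tuple):
        if len(pattern) != len(expr):
            return None
        for p, e in zip(pattern, expr):             # match element-wise
            result = match(p, e, bindings)
            if result is None:
                return None
            bindings = result
        return bindings
    return bindings if pattern == expr else None    # literal match

def match_all(pattern, working_set):
    """Brute force: try the pattern against every expression in the set."""
    return [b for expr in working_set
            if (b := match(pattern, expr)) is not None]

facts = [("mines", "digger-1", "asteroid-7"),
         ("mines", "digger-2", "asteroid-7"),
         ("owns", "robin", "digger-1")]
print(match_all(("mines", "?x", "asteroid-7"), facts))
# two matches: ?x bound to digger-1 and to digger-2
```

 Inefficient, as the algorithmicist says, but the working set can be
 subsetted or merged on the fly with no index maintenance at all. -ed]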

 

eachus@linus.mitre.org (Robert I. Eachus) (01/12/91)

     I'm not sure that I agree with your (JoSH's) estimate of the
number of MIPS required for an uploaded persona to function.  As you
can see below, I don't disagree with your basic statement AI before
uploading, but I do disagree with when.  There are four areas where I
would like to dispute your numbers:

     First, 1 MIPS for a single synapse, 1000 synapses per neuron
implies that the only useful way to simulate a neuron is as a
collection of synapses.  My feeling is that the proper approach might
be to simulate neurons instead, and my guess is that a "real-time"
neuron simulation can be done for under 100 MIPS.

     Next, under most circumstances a lot of areas of the brain are
dormant.  Simulation of neurons with no input will require no
computational effort.  This should account for another factor of ten.

     Another factor of ten or 100, or more, comes simply from the fact
that you may be willing to upload to a "slow" machine if the
alternative is death.  One "supercomputer" system may be needed to do
the initial upload, but the software can then be transferred to a
slower machine.  As the hardware available gets better you can migrate
the software.  This really means that the limit on when uploading
becomes possible is primarily a storage limit.  (Hmmm.  Say each
neuron state would take on the order of 10 Kbytes to represent, and
remember that packing can be very efficient, so we need on the order
of 10**14 bytes of memory.  Current systems are on the order of 10
Meg, and are actually growing slightly faster than 1000x per decade,
but let's use that.  No uploading before, say, 2015.)
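The parenthetical arithmetic above can be checked directly.  All the
figures below are Eachus's guesses restated, not measurements:

```python
# Back-of-envelope check of the storage-limit estimate: 10^10 neurons
# at ~10 Kbytes of state each, starting from ~10 Mbyte systems in 1990
# growing 1000x per decade.
import math

neurons = 1e10
bytes_per_neuron = 1e4                    # ~10 Kbytes of packed state
needed = neurons * bytes_per_neuron       # 10**14 bytes, as in the post

start_year, start_bytes = 1990, 1e7       # ~10 Meg in 1990
decades = math.log10(needed / start_bytes) / 3   # growth is 1000x/decade
print(round(start_year + 10 * decades))   # 2013 -- "no uploading before 2015"
```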

     Last but not least, even if we buy your estimate of the
computational effort involved, the correct measure for 1990 is the
number of parallel processing MIPS available, since this is an
inherently parallel problem.  1990 ~= 1000 MIPS. This also pulls in
the uploading date to about 2015.

     So all in all I put the uploading date about 15 years earlier,
and the AI date about five to ten years earlier, in about 2000-2005,
from the parallel processing difference.
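Putting the claimed reduction factors together yields roughly the date
given above.  All the factors are Eachus's estimates, and the
1000x-per-decade growth rate for parallel MIPS is an assumption of mine
for the sake of the arithmetic:

```python
# Start from JoSH's 10^13 MIPS (1 MIPS/synapse * 1000 synapses/neuron
# * 10^10 neurons) and apply each claimed reduction in turn.
import math

josh_mips = 1e13
after_neurons = josh_mips / 10        # ~100 MIPS/neuron, not 1000: factor 10
after_dormancy = after_neurons / 10   # dormant brain regions: factor 10
after_slow = after_dormancy / 100     # tolerate a 100x-slower machine

start_year, start_mips = 1990, 1e3    # ~1000 parallel MIPS in 1990
decades = math.log10(after_slow / start_mips) / 3   # assume 1000x/decade
print(round(start_year + 10 * decades))   # 2010 -- near "about 2015"
```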

     This may seem highly optimistic, but there was an article many
years ago in Analog entitled "Science Fiction is too Conservative" by
G. Harry Stine.  It showed that even most of the wildest
extrapolations in science fiction drastically underestimated the rate
of change in technology.  In this group anyway, we are at least using
exponential estimates for performance growth, but as I remember the
article, even e**x, as above, is a little pessimistic for most
observed curves.

--

					Robert I. Eachus

     "When the dictators are ready to make war upon us, they will not
wait for an act of war on our part." - Franklin D. Roosevelt

["Lots of my friends are running on eta-op processors, but I wouldn't
 let my sister upload into one!"  I can't really argue with your
 numbers except to say that I used a lot of "let's include this to
 be on the safe side" and you did the opposite.  Just to bring the
 point back to where we started, 1000 MIPS machines cost millions 
 today, and if we stay on that trendline, we don't have to worry
 about a lot of destitute people buying multi-million-dollar machines
 to upload copies of themselves onto.  Outside of that, I agree.
 --JoSH]

lovejoy@alc.com (Alan Lovejoy) (01/16/91)

In article <Jan.12.00.26.53.1991.17471@athos.rutgers.edu> eachus@linus.mitre.org (Robert I. Eachus) writes:
>     This may seem highly optimistic, but there was an article many
>years ago in Analog entitled "Science Fiction is too Conservative" by
>G. Harry Stine.  It showed that even most of the wildest
>extrapolations in science fiction drastically underestimated the rate
>of change in technology.  In this group anyway, we are at least using
>exponential estimates for performance growth, but as I remember the
>article, even e**x, as above, is a little pessimistic for most
>observed curves.

Put technical advances into these five categories: 

1) Totally new and unexpected physical phenomena.

2) New mathematics that handles a brand new class of problems.
 
3) New theories that transcend some set of earlier theories, perhaps
mathematically unifying the description of what was thought to be separate 
phenomena with one new and better description for all of them. 

4) New mechanisms, techniques, devices, algorithms or implementation media
that provide revolutionary improvements in operating parameters.

5) Improvements in the operating parameters of existing mechanisms,
techniques, devices, algorithms or implementation media. 

Now estimate, for each category, the degree to which "futurists"--be they
laymen, SF writers, professional engineers, scientists, or whoever--usually
overestimate or underestimate the rate at which advances will occur in the
given category, and the magnitude/significance of such advances.

Generally, I would say that most SF writers are overly OPTIMISTIC with respect
to categories 1 and 2, but generally overly PESSIMISTIC with respect to
categories 4 and 5.  On the other hand, professional scientists tend to be
consistently too pessimistic--that's their job, after all.  It seems to me
that the "average layman" is much harder to characterize as consistently
either too optimistic or too pessimistic.  Their views tend to depend more
on the subject matter, or on how the advance will affect their lives, than
on the criteria used to define the five categories named above.  When
considering the attitudes of laymen, one should take the following into
account: the public is usually exposed to visions of the future in SF
movies, TV shows and books, so the popular conception of what the future
holds is certainly influenced by the views of SF writers.  I have attempted
to correct for this in my evaluation of laymen's attitudes, which is why I
am somewhat more tentative in my statements on that subject.

It stands to reason, I would think, that category 4 and/or 5 advances are
much more likely--and much more foreseeable--than category 1, 2 or 3
advances.  Category 1 and 2 advances are especially hard to foresee or to
give time estimates for.  Category 3 advances, on the other hand, are like
category 1 and/or 2 advances in some ways, but like category 4 and/or 5
advances in other ways.

Of course, most of the advances under discussion in this newsgroup would be
category 4 or category 5 advances.
 

-- 
 %%%% Alan Lovejoy %%%% | "Do not go gentle into that good night,
 % Ascent Logic Corp. % | Old age should burn and rave at the close of the day;
 UUCP:  lovejoy@alc.com | Rage, rage at the dying of the light!" -- Dylan Thomas
__Disclaimer: I do not speak for Ascent Logic Corp.; they do not speak for me!

landman@eng.sun.com (Howard A. Landman) (01/22/91)

>[It'll take at least a MIPS to simulate a synapse with any claim
> to fidelity, so we need 10 trillion MIPS (that's 10 million tera-ops
> or 10^19 instructions per second) to run the simulation.]

Except that this makes the rather stupid assumption that we'll prefer
to simulate a brain rather than build one.  Simulating a million-transistor
VLSI chip takes perhaps 10^7 times as long as just running it directly.
This would indicate that only 10^12 ips (a million MIPS) might be enough
if they're the right kind (custom hardware).

Carver Mead's analog neural chips (the artificial "cochlea" and "retina")
use around 1 to 3 transistors per synapse-equivalent and operate many
orders of magnitude faster than biological ones.

> At an absolute minimum we must consider 10 billion neurons, with 
> an average 1000 synapses each, which can fire at rates of a kilohertz.

So either we build a 10 trillion transistor brain which will operate at
least 1000 times faster than a human one; or we figure out ways to use
fewer neurons switching faster to emulate more neurons switching slower
("virtual neurons") which gets us down to perhaps a billion transistors.
Even in today's technology, it's conceivable to pack that much circuitry
in the volume of a tunafish can and easy to fit it into a breadbox.  With
density doubling every 3 to 4 years, we should be able to build a brain
equal in power and size to a human brain within a decade or two.  Assuming
that we knew how to design it, of course.
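The "decade or two" figure follows from the doubling rate and a rough
volume ratio.  Only the 3-to-4-year density doubling time comes from the
post; the breadbox and brain volumes below are my assumptions for the
sake of the arithmetic:

```python
# How long until a breadbox-sized billion-transistor machine shrinks
# to the volume of a human brain, at one density doubling every 3 to
# 4 years?
import math

breadbox_liters = 15.0                           # assumed volume
brain_liters = 1.4                               # assumed volume
shrink_factor = breadbox_liters / brain_liters   # ~10x density increase
doublings = math.log2(shrink_factor)             # ~3.4 doublings needed

for years_per_doubling in (3, 4):
    print(f"{doublings * years_per_doubling:.0f} years")
# prints roughly 10 and 14 years -- "a decade or two"
```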

--
	Howard A. Landman
	landman@eng.sun.com -or- sun!landman

[This is all reasonable but the argument was about whether it was 
 going to be easier to upload or AI.  The assumptions were to separate
 the sides of the argument, not prescriptions about the best way to 
 achieve human-level intelligence in a machine.
 --JoSH]