[sci.nanotech] Down and out in nanoland?

josh@cs.rutgers.edu (01/04/91)

Robin Hanson writes:
 "With nanotechnology, most people may be living near the edge of poverty"
with the intention of stimulating discussion on the issue.  This is 
an interesting thesis, but one with which I happen to disagree, after
a few days' thought, so here's my analysis:

Extremely condensed nutshell version:  This is a new version of Malthus,
and is wrong for the same reasons Malthus is wrong.  However, all the
specifics are different, and need to be examined in detail.

RH:
  "My baseline image of our nanotechnology future splits into three ages,
    "Replication", "Uploading" and "AI"."

I think the "AI" age will come first, and that we are already in the
beginning of it.  The critical issue here is what we want to consider
a person.  If the microprocessor controller in my toaster is considered
a person, it is a person with no belongings and no rights, and under
that definition, Hanson's thesis is almost certainly true.  

I don't consider my toaster a person and I'm sure everyone agrees that
such an appellation is silly.  However, personhood has very fuzzy
boundaries.  It's beginning to be possible to write programs which
have undeniably higher mentalities than babies or some mental patients.
By the year 2000, I'm virtually certain that there will be systems that
will be able to convince the average man on the street that there's
"someone inside" (this is not to say that the system will necessarily
be so indistinguishable from a human as to be able to pass a full-
fledged Turing Test).

More to the point, the mentality of systems that are doing economically
valuable work will increase dramatically--there are tons of applications
that could use expert systems writeable with current technology, much
less that available in a decade.  As time goes on, more and more such
systems will actually be written.  Will these systems be "people" in
the sense of the thesis under consideration?  Again, if so, the thesis
will be true, since they will be numerous and won't own two cents to rub
together.

This all happens with or without nanotechnology.  Nanotech throws in
the wild card in the form of uploading, which leaves you fairly sure
that the thing you've created is "really a person".  However, by the
time that is possible, I believe that purely synthetic systems will
exist that can legitimately aspire to personhood.

We can sum up this part of the argument by saying that it will be 
possible to create mentalities in a very wide range of types, and it
is very much unresolved which ones we should be considering "persons".
Let's adopt the nomenclature from Greg Bear's stories and refer to
a class of less-than-full-fledged mentalities as "partials", whether
they are simply partial copies of your full mind (as in the stories)
or latter-day expert systems at the same level of competence.

Now the question is, is it economically more useful to mine your 
asteroid with a crew of "full-personhood" copies of yourself, or
with partials that embody your mining expertise but don't have 
your taste for expensive Venusian wines?  Obviously you don't have
a choice if your only option is to make a copy; but in the long
run, once you're in the "AI" stage, you want to make a mentality 
appropriate to the job.

Hanson writes further:
"In the AI age, the economically dominant agents, be they human or not,
 may be incrementally growable or reducible.  Different human copies
 might be merged back together when one of them was about to go bankrupt,
 and so not experience "dying".  In general, though, I find it very hard
 to project into this period."

The interesting thing to note is that Darwin conceived the theory
of evolution by variation and selection as he was pondering over
Malthus' "On Population".  Malthus' logic translates into a very 
potent force of nature--the population pressure of self-replicating
organisms.  But it isn't the only force operating:  The full story
is under Darwin's byline, instead.

Why aren't the only living organisms bacteria?  They reproduce a hell
of a sight faster than humans--take two humans, and one bacterium, and
assume the entire earth is made of food.  In less than a month, the 
entire earth is made of bacteria.  The pure ability to replicate is
not sufficient:  the qualities of a successful replicator are more
complex.
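The bacterial arithmetic here holds up under assumed round numbers (the masses and doubling time below are my own ballpark assumptions, not figures from the post): a freely doubling bacterium overtakes the mass of the Earth in a matter of days, far less than a month.

```python
import math

# Assumed ballpark figures: a bacterium masses ~1e-15 kg, doubles every
# ~30 minutes given unlimited food, and the Earth masses ~6e24 kg.
BACTERIUM_KG = 1e-15
EARTH_KG = 6e24
DOUBLING_MINUTES = 30.0

# Number of doublings for one bacterium's lineage to reach Earth's mass.
doublings = math.log2(EARTH_KG / BACTERIUM_KG)
days = doublings * DOUBLING_MINUTES / (60 * 24)

print(f"about {doublings:.0f} doublings, roughly {days:.1f} days")
```

Note that even with a much slower doubling time the total stays well under a month, since the elapsed time grows only linearly in the doubling interval.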

The most successful replicators of the animal world, in terms of 
biomass represented by the species, are the ants, although recently
(in evolutionary terms) the mammals have been giving them some
competition.  The advantage an ant (or a human for that matter) 
has over a bacterium in reproducing is the ability to change its
environment in its own favor to a substantially greater degree than
the bacterium does.

Thus in the post-nanotech world, the human-level mentality which
can barely afford the rent on its simulator time-slice (much less
the luxury of a real matter body) just isn't going to have the
*wherewithal* to replicate itself.  The raw human, as I'm sure 
you've guessed I'm about to claim, is the post-nanotech evolutionary
equivalent of the bacterium.

The logic of evolution seems to favor the organism somewhere in the 
center of the (logarithmic) scale, like the ant.  I have no way
to make this rigorous, of course, but I would suspect that the 
average person (optimal replicator) in the far post-nanotech
world would equal in wherewithal and productive capacity a large
company or small country of the present era.  

...For a second or two anyway;  remember, things will be changing 
much faster then than now!

--JoSH

ps--I would assume the title was originally derived from George
 Orwell's "Down and Out in Paris and London"...

L33QC@cunyvm.bitnet (01/12/91)

> I don't consider my toaster a person and I'm sure everyone agrees that
> such an appellation is silly.  However, personhood has very fuzzy
> boundaries.  It's beginning to be possible to write programs which
> have undeniably higher mentalities than babies or some mental patients.
> By the year 2000, I'm virtually certain that there will be systems that
> will be able to convince the average man on the street that there's
> "someone inside" (this is not to say that the system will necessarily
> be so indistinguishable from a human as to be able to pass a full-
> fledged Turing Test).

What do you mean "undeniably higher mentalities than babies or some mental
patients"??? Important things that both mental patients and babies can do
easily, and computers now cannot:
   Vision, Natural Language Processing... Babies AND mental patients can
learn most things just by being told them... I don't know of ANY existing
machine or program that can do any of those things... Unless you are
referring to programs like Eliza or Racter, which don't have any kind of
mentality, since they are not capable of any real kind of learning.
I would think any kind of mentality would have to be able to learn things
along the same kind of broad spectrum that babies and mental patients can.
(Unless of course you're talking about catatonic patients, who just don't
do anything.) "Learning" in machines, at least to the best of my knowledge,
is always along limited lines, like a machine that may learn strategies for
chess or math theorems... But you cannot expect these machines to learn
altogether new things; you can't expect your math theorem prover to learn
about biology or something like that. I am not one of those people who
accept Searle's idea that it's IMPOSSIBLE to achieve AI; I just don't think
we're half as close as you seem to suggest. If we are, I'd like to see the
programs that can act in a manner more intelligent than a baby...

[I was thinking of newborn infants and catatonics, who exhibit virtually
 no mentality at all, and are yet categorized as "human".  Considering
 that current computers have a million-to-one disadvantage vis-a-vis
 the brain in raw computing power, the rest of what you say is basically
 true but not particularly surprising.
 See the latest message from R. Hanson for more on the AI question...
 --JoSH]
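That million-to-one figure can be sanity-checked with assumed round numbers (the brain estimate and the circa-1990 machine estimate below are my own ballpark assumptions, not figures from the thread):

```python
# Assumed ballpark figures, not measurements:
BRAIN_OPS_PER_SEC = 1e14     # an often-quoted order of magnitude for the brain
MACHINE_OPS_PER_SEC = 1e8    # a fast circa-1990 workstation

ratio = BRAIN_OPS_PER_SEC / MACHINE_OPS_PER_SEC
print(f"disadvantage: about {ratio:.0e} to one")  # about 1e+06 to one
```

Either estimate could easily be off by an order of magnitude or two, but the gap is large under any reasonable choice of figures.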