[sci.nanotech] Nanotech thoughts

mgoodfel@mgoodfel.oracle.com (Michael Goodfellow) (11/15/89)

I've been reading sci.nanotech for the past few months, after reading 
EoC earlier this year.  Since the forum seems to be a little slow 
lately, perhaps you'll be interested in my two cents on the subject.  
Bear with me if this seems a bit obvious at first...

THE BLOB

After reading EoC, with all its wondrous descriptions of assemblers and 
their possibilities, and its touching faith in AI (and the extent to which 
it relies on Technical AI's for projections of the future), I decided to 
try to visualize some aspects of this whole idea.  Grey Goo, with its 
doomsday scenario, seemed like a good place to start.

So imagine that we have a future nanotech factory.  As described in EoC, 
there is no assembly line.  Instead, we have numerous vats, filled with 
liquid working materials, and assemblers.  These assemblers are produced 
(or programmed ?) for a particular construction job.  They take 
materials out of their environment and build the product.  They also 
build other assemblers as needed.

Let's assume that this has all been thought out well and that Grey Goo is 
not just going to pop up in the factory by accident.  Instead, we'll 
assume a disgruntled employee (playground-berserker-nerd type) has 
decided to take revenge on his employers by scaring them with a Grey Goo 
attack.  He cleverly programs it to appear in the vat, eat the product 
and then die off.  Unfortunately, he makes a programming error and the 
goo is off and running without restriction.

Let's assume that the goo has a doubling time of an hour.  In ten hours 
it multiplies by 1000.  In a day or so, it multiplies by a million.  
This is under ideal conditions.  In "real life", the goo is constrained 
by the availability of materials and energy.
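
(As a rough check of those numbers, here is a small Python sketch; the 
one-hour doubling time is just the assumption above, not an estimate of 
real assembler performance.)

    # Exponential growth of a replicator with a fixed doubling time.
    doubling_time_hours = 1.0          # assumed, per the scenario above

    def multiplication_factor(hours):
        """How many copies exist per starting assembler after `hours` hours."""
        return 2 ** (hours / doubling_time_hours)

    print(multiplication_factor(10))   # ~1.0e3 -- "in ten hours it multiplies by 1000"
    print(multiplication_factor(20))   # ~1.0e6 -- "in a day or so ... a million"
    print(multiplication_factor(24))   # ~1.7e7 -- a full day, under ideal conditions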

So in our factory, the Grey Goo becomes active when the product is taken 
out of the vats (as planned by our evil genius).  It eats the product, 
but it does not stop there as it was supposed to.  Instead it continues 
to replicate, taking materials as it finds them.  The first thing anyone 
notices is a film sliming across the floor and up the walls, over the 
lights, etc.  In effect, we've created The Blob.

YECHH -- WHAT A MESS

Cleaning this up is a problem.  Not that assemblers are indestructible.  
Lots of things might destroy them -- chemicals, radiation, cutting off 
their food supply (hard).  The problem is that you have to get it *all*.  
If you miss any, you are back to square one in a day or two.  We can 
imagine our clean-up crews frantically fighting this thing again and 
again, until finally they lose.  They lose when a bit of Grey Goo gets 
into the outside world.  If it makes it inside a human or other animal, 
we've all had it.

Inside any living organism, it probably finds a nearly ideal situation 
-- lots of energy and building materials.  A human infected with this 
doesn't even know it at first.  He carries it away, spreading it all 
over the place during his normal routine.  Small amounts of goo are 
left behind when he touches surfaces (or other people); it is present 
in his breath, wastes, etc.  Pretty soon the area around the 
factory is thoroughly contaminated, people are dropping like flies from 
being used as raw materials, and the goo is unstoppable.  A fatal 
disease with a fast vector like this could kill everyone on the planet 
in days.  Next scene, the entire Earth is covered with goo and life is 
boring.

THE DEADLY DUST

Note that the form of Grey Goo I've described is really mild.  I 
pictured it as a slime, all clumped together and easy to spot.  There's 
no reason for the assemblers to clump, and the goo is much harder to 
handle if they don't.  Also, I just described goo that wants to grow and 
has no preferences as to where.  If instead the goo homes in on good 
sites (actively searches for materials), it would be much worse.  So 
instead of The Blob, imagine a Deadly Dust, which consists of invisibly 
small particles.  These remain suspended in air, drifting until they 
find a good spot to set up replication (for example, when they are 
inhaled.)  Then they go to work and pump out copies at the maximum rate.

Now our goo is never even spotted by the clean up crews.  The first they 
know is a week later when everyone around starts to die.

BLUE GOO -- OUR HERO

Well, Grey Goo is mentioned in EoC, and the solution given is The Active 
Shield -- also known as Blue Goo.  This goo is tame.  It eats the 
dangerous Grey Goo and the world is saved!!  Only I have some problems 
with Blue Goo...

IT'S EVERYWHERE!

First, you can't apply the Blue Goo to a mess like the one above after 
the fact.  If you tried, you'd have two problems.  For one, you might not 
have spotted that there was a problem until it was far too late.  For 
another, it might be more than Blue Goo can handle.

After all, Blue Goo is necessarily much more complex than Grey Goo (Blue 
Goo has to be much more selective!)  This means that it will probably 
replicate slower.  So a given amount of Blue Goo can't catch up to the 
same amount of Grey Goo.  In order for Active Shields to work, they have 
to be in place all the time, so they can stop Grey Goo before it gets 
started.
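
(To see why, here is a minimal Python sketch of the race; the doubling 
times are made-up illustration values, chosen only so that Blue Goo 
replicates slower than Grey Goo.)

    # Two replicators starting from equal amounts, with different doubling times.
    grey_doubling_h = 1.0              # simple, unselective goo (assumed)
    blue_doubling_h = 2.0              # more complex, selective goo (assumed slower)

    def population(start, doubling_h, hours):
        return start * 2 ** (hours / doubling_h)

    for hours in (0, 6, 12, 24):
        grey = population(1.0, grey_doubling_h, hours)
        blue = population(1.0, blue_doubling_h, hours)
        print(hours, "hours: grey/blue ratio =", grey / blue)

    # The ratio grows without bound (2**12 = 4096 after a day), so Blue Goo
    # that starts even can never catch up; it has to be in place beforehand.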

In EoC, Drexler envisions Blue Goo as the world's immune system.  I was 
not convinced, for several reasons.

First, where exactly does the Blue Goo live?  The answer has to be 
everywhere, if you really want protection against deliberately created 
Grey Goo (or exotic accidents).  And by everywhere, I mean EVERYWHERE!

It has to be inside the body of every human you intend to protect, so 
that the Grey Goo can't eat them.  Right here, we have a show-stopper, as far as 
I'm concerned.  There are people in this country afraid of fluoridated 
water!  There is no way you are going to convince large numbers of 
people to allow you to infect them with an artificial disease in the 
name of protecting them!  Not to mention that people in this forum have 
discussed the possibility of mind control ("artificial conscience" 
indeed!) via nanotech devices.  Let any rumor of that possibility get 
out, and you can forget it!  And even the most trusting person is going 
to be worried about the possibility of sleepers or bugs in the design of 
the Blue Goo.  Are *you* willing to trust U.S. Government-designed stuff 
in your body?

Many of you will make the argument that the progress of a technology is 
inevitable, and we have to make the best of this situation.  There will 
be the possibility of Grey Goo, and this is the only way to protect 
ourselves.  I think the average person would rather see the whole 
technology stopped cold than risk this situation.  But they will 
probably see the threat too late.  Plus there is always military goo, 
which will be developed even if all governments publicly renounce the 
technology.  I see this as leading to a rather bleak picture (see WE'RE 
DOOMED below), but perhaps not.  For the sake of getting on to the next 
point, let's assume that a benevolent, farsighted, brilliant bunch of 
idealists creates the perfect Blue Goo and releases it without consent 
of the population.

Well, the Blue Goo has to exist not just in humans, but in their food 
animals and plants, or Grey Goo can still wipe us out.  In fact, it has 
to exist in all living things all the way down the food chain from us.  
And in other crucial parts of the ecosystem -- we wouldn't want to try 
to do without trees or algae in the ocean producing oxygen for us!  We 
might as well think big and assume that Blue Goo is in every living 
thing on Earth....

This still doesn't protect against really nasty Grey Goo.  I would think 
it would be possible to create a Goo that would avoid living cells.  If 
nothing else, there are areas on Earth (deep oceans, arctic ice, upper 
atmosphere) where life would be scarce enough to avoid.  This type of 
Goo would have to wait until it was widely distributed enough (and 
massive enough) to do damage.  At that point, it could produce poisons 
of some kind in quantity.

To handle this situation, Blue Goo is going to have to pervade the 
inorganic world as well as the organic.  It will have to filter air to 
check for Grey Goo, and crawl about surfaces looking for it, and 
generally be omnipresent.

To say the least, I'm not convinced this is reasonable.

ONE GOO TO ANOTHER

So, I'm envisioning this Blue Goo existing inside all my cells.  How 
does it spot Grey Goo in the first place?  Blue Goo is going to be 
crawling over all the parts of my cells, fingering them and looking for 
an unusual pattern. After all, it's not like it can extrude little 
eyeballs and look around for problems.  And there's no way for Blue Goo 
to know what should be in the cell anyway, at least not in any detail.  
To do that would require so much information storage that the Blue Goo 
would be huge, and as complex as the cell itself.  Some centralization 
might help here, with Blue Goo cops communicating back to a central 
intelligence, asking if each and every strange thing is part of normal 
cell contents.  Even so, Grey Goo could surround itself with perfectly 
ordinary organic material.  It could look like some piece of debris 
inside the cell, or a piece of DNA, or cell wall.  Asking Blue Goo to 
spot this is basically asking it to know everything there is to know 
about the organism.  And remember that it has to know it about each and 
every type of life on the planet. 

We can't just redefine the task of Blue Goo as preventing cell damage, so 
that it never has to spot the Grey Goo causing the damage.  For one, a cell could 
be subverted by the Grey Goo, and seem to work perfectly well, except 
that it's pumping deadly chemicals into the body.  The Blue Goo can 
fight effects all it likes, but if you have a heart attack, no cell-by-
cell strategy will save your life.  The Blue Goo would have to know 
system-wide effects, or else recognize the Grey Goo.

If we just tried to keep cells alive, we would also have all sorts of 
effects on the development of organisms.  And of course, we want some 
cells (and animals) to die!  How else would they eat each other?  How 
would we eat them?  I can imagine what would happen if we tried to eat a 
hamburger full of Blue Goo!  First, it tried really hard to save the 
life of the cow, and of the wheat in the bun.  Then it found itself in 
the hostile environments of packing plants, ovens, etc. and did its best 
to keep cells alive.  Then it wound up in your stomach, even more 
hostile!  Can it save the beef cells from your stomach acids?  Tune in 
tomorrow! 

Again, I think Blue Goo of this sophistication is unlikely. 
 
MY GOO IS BLUER THAN YOURS

While I'm at it, I should also point out that Blue Goo has to recognize 
itself.  If nothing else, so that it doesn't waste resources fighting 
with itself.  And this would seem to mean that there is only one kind of 
Blue Goo, since it would naturally attack any competing kind.  So we 
really need either a covert introduction of Blue Goo or international 
agreement on it (with or without consent of populations).  And once 
introduced, that's it.  Any improved Blue Goo would have to displace the 
stuff already in existence, which is supposed to be impossible.

CAPTIVE GOO, TROJAN GOO

Of course, it won't be impossible, because Blue Goo will have to 
be able to recognize itself.  This means some kind of recognition codes.  
And that leads to trouble.  First off, since Blue Goo is everywhere, 
capturing a specimen for analysis is trivial (in fact, it's impossible 
not to!  It's inside your equipment already, scanning for Grey Goo!)

Once you have a specimen, you can determine the recognition codes.  They 
have to be stable, since Blue Goo will be meeting itself all the time, 
and meeting different generations and variations of itself from other 
organisms that you ingest.  If the recognition scheme can be broken or 
copied from the design of the Blue Goo, then we can build a Trojan Goo 
that fools Blue Goo.  This seems to be another fundamental objection.
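
(A toy Python sketch of the objection; the names and the code value are 
hypothetical, just to show the replay problem with any fixed recognition 
code.)

    # Blue Goo that identifies friends by a fixed, stable recognition code.
    BLUE_RECOGNITION_CODE = b"BLUE-42"        # readable from any captured specimen

    def blue_goo_accepts(presented_code):
        # The defender's check: "is this one of us?"
        return presented_code == BLUE_RECOGNITION_CODE

    trojan_code = BLUE_RECOGNITION_CODE        # Trojan Goo simply replays what it read
    print(blue_goo_accepts(trojan_code))       # True -- the Trojan is waved through

    # A challenge-response scheme only moves the problem: the secret key still
    # has to sit inside every Blue Goo unit, and units can be captured and read.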

HOTHOUSE ASSEMBLERS

We could of course control accidents.  We could even control many types 
of sabotage, if we are willing to limit our use of nanotechnology.  For 
example, we could design all assemblers to work only in the presence of 
strong magnetic fields, or only when certain chemicals are present, or 
only when receiving radio signals.  When the required field, chemical, or 
signal is absent, the assembler dies.  This would represent a sort of 
dead-man switch, and 
could be used to control even Grey Goo.  This restricted nanotechnology 
would still be useful for all the envisioned medical and industrial 
applications.  Just keep the affected people and plants inside the 
required safety field.  Any nanodevices straying out of that field for 
any reason will die.
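
(In control terms, the dead-man switch might look something like this 
Python sketch; the class name, the timeout, and the signal interface are 
all made up for illustration.)

    import time

    PERMISSION_TIMEOUT_S = 10.0     # how long an assembler may run without the signal

    class HothouseAssembler:
        def __init__(self):
            self.last_signal = time.monotonic()
            self.alive = True

        def receive_safety_signal(self):
            # Called whenever the radio beacon / field / marker chemical is sensed.
            self.last_signal = time.monotonic()

        def work_cycle(self):
            # Check the dead-man switch before doing any assembly work.
            if time.monotonic() - self.last_signal > PERMISSION_TIMEOUT_S:
                self.alive = False       # strayed outside the safety field: shut down
            if self.alive:
                pass                     # ...one increment of assembly work goes here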

However, we would have to prevent any assembler from building an 
assembler that did not have this dead-man switch in it.  That means an 
unrealistic amount of human self-restraint (there's still military goo 
to consider), and pretty sophisticated analysis of the functions of 
assemblers.  I don't think anything less than a human-equivalent AI 
could be counted on to spot an assembler designed to build nonstandard 
assemblers.  Especially since several jobs could be submitted in pieces, 
and only in combination create the illegal assembler.

MILITARY GOO

The summary below (WE'RE DOOMED) could be applied equally well to 
biological warfare, except for two points:

First, there is less connection between artificial plagues and useful 
genetic engineering than there is between Grey Goo and replicating 
assemblers (needed for any large project with nanotech).

Second, plagues look like a nasty, hard-to-control weapon, but Military 
Goo is nearly the ideal weapon.

For example, consider the following relatively simple use of 
nanotechnology:  You build some goo that spreads from person to person 
and does not replicate endlessly.  Instead, it makes some simple 
changes, only to a small number of cells in the body.  It could work in 
some completely inaccessible place.  In this spot it builds a small 
machine that can receive radio instructions.  On command, this machine 
can synthesize various drugs/hormones and introduce them into the blood 
stream.  Once the machine is built, the body is tagged to prevent later 
visits by the goo.

Now we have the perfect mechanism for control of enemy (or your own) 
populations.  From orbit, areas can be blanketed with instructions to 
the implanted devices.  They can kill, sedate, or otherwise modify 
behavior.  Every dictator's dream!  With the possibility of this kind of 
power in the works, nanotech devices will undoubtedly be developed, if 
only to protect our own people (*right*).

WE'RE DOOMED

So I'm left with the following chain of reasoning about all this:

1. Lots of people think self-replicating assemblers are possible.

2. We will build them if they are possible, for the large benefits.

3. With some probability, Grey Goo will be produced, accidentally
   or deliberately as a weapon.  It is one of the simplest uses of
   the technology, after all.

4. Grey Goo cannot be defeated without Blue Goo already in place.

5. Blue Goo cannot be made powerful enough to defeat reasonable
   (or even simple) Grey Goo, for the following reasons:

   a. it can't cover all the places Grey Goo might arise.
      - people won't let it inhabit them
      - covering the whole ecosystem is necessary
      - good coverage of inorganic world necessary as well.
   b. it can't spot Grey Goo when it occurs.
   c. Blue Goo can be captured and its recognition signals forged.

Conclusion:

  Someday, we will all be eaten by Grey Goo.

PROOF :-)

You wonder why we have not been visited by aliens?  Well, this is why.  
A simple scenario holds up very well:

On many worlds over billions of years of time, life arises.

Intelligent life is produced by natural selection.  Nothing in this 
selection process prepares an organism for a radical change in its 
niche, or abilities.

Technology is either too hard (no problem), or far too easy to master.  
If a species becomes technological, it suddenly finds itself in a 
situation for which evolution has not prepared it. It is as if members 
of the species suddenly grew to ten times normal size. How would they 
adapt to this before doing something fatal?

The species inevitably destroys itself with some mistake. Inventing 
nanotechnology is one good way... 

A WAY OUT WAY OUT

There is a way out of all this, and I consider it far more likely than 
the invention of Active Shields.  After all, members of this group have 
talked about AI being necessary for Active Shields anyway.  We've also 
talked about being able to map the connections of the brain and save 
personality.  Drexler talks about Technical AI's that are equivalent to 
human brains, but as much as a million times faster.

The simplest scenario that gets us out of all these problems is that 
someday before The End, we build a nanotech copy of a human 
consciousness.  Actually, we build a community, and we give them 
assembler tools to work with.  They think a million times faster than we 
do.  To them, assemblers are as slow as hammer and nails.  They can 
easily monitor all the activity of human civilization and just prevent 
anyone from doing anything dangerous.  Unlike technical AI's or simpler 
Goo machines, these nanohumans will understand human culture.  They 
won't waste time touring your cells waiting for Grey Goo -- they will 
know that it comes from factories or labs.  They can easily track all 
work going on in such places, since we run in super-slow motion compared 
to them.

In fact, such nanohumans would quickly (*very* quickly) become the 
leading edge of civilization on this planet.  If human-level 
consciousness is possible at nanotech scale, then I find the whole idea 
of Technical AI's serving us absurd.  Instead, they will develop and 
leave us behind.  The Active Shield will be a nanoculture that protects 
us from our own carelessness out of sentiment.

Imagine the first steps in this process.  We would map a particular 
individual's nervous system into nanomachinery.  Once the copy is built, 
we would study it for awhile.  Its replicated nervous system would be 
run at greatly reduced rates (equivalent to neuron firing times), so 
that we can talk.  When its responses check out with the original, and 
it seems sane, we have our first nanohuman.  Depending on what's been 
done with the rest of the body, it even looks human.  A robotic body 
under the control of the nanobrain we've built seems simple in 
comparison.  So we have the first android copy of a human (empty-headed, 
of course, since the brain is the size of a few cells).  Already, we've 
invented immortality....

Speed its thought processes up by a factor of two, and we have a 
creature that can out-compete the human race, if allowed to reproduce.  
If we speed the mind up to the maximum rate, and build a body scaled 
down with response times to match, we have the first real nanohuman.  Suppose 
a group of people are copied down to this level.  If they really ran a 
million times our speed, we would flip the switch, and that's the last 
we would ever see of them.  A minute of our time would be two years of 
theirs.  You could get a good-sized colony started up by then.  After an 
hour, over a century has passed for them.  Even starting from a few 
individuals, a reasonably scaled reproduction rate would have cities 
spread all over your lab by then.  In a day, who knows what you would be 
looking at.  Certainly, the nanoculture has pervaded the planet, and 
installed any necessary safeguards on our technological development by 
then.
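
(The time-scale arithmetic, as a quick Python sketch; the million-fold 
speedup is the assumption from EoC mentioned above.)

    speedup = 10**6                       # assumed subjective speed ratio

    def subjective_years(our_minutes):
        return our_minutes * speedup / (60 * 24 * 365)

    print(subjective_years(1))            # ~1.9  -- a minute of ours is about two of their years
    print(subjective_years(60))           # ~114  -- an hour of ours is over a century of theirs
    print(subjective_years(60 * 24))      # ~2700 -- a day of ours is millennia of theirs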

Why would they?  One good reason is that they would be us.  If the 
nanoculture knew that the world was full of interesting personality 
patterns, which would give it a rich mental diversity (as necessary to 
them as genetic diversity is for us), then why not go and get it?  In 
the first hours of the colony's existence, a program would be started to 
make nanocopies of some or all of the world's population.  At some 
point, a nanocopy of you becomes conscious, and joins in the fun.  
Naturally, you would protect your organic self.  Why not, with such huge 
resources?

For us organic humans, things might suddenly get strange.  Before the 
press reports of the creation of the first nanoculture even reach us, we 
might find ourselves changing.  We might find everyone around getting 
younger and healthier, calmer, smarter, more talented, or whatever we 
desired... They would be our true desires as well, since the changes would 
be made by nanocopies of ourselves.  The nanotech devices monitoring 
your health and happiness might be multiple copies of your personality, 
frozen into a helpful state of mind, so that they don't get bored....

This civilization might last forever.  Not just the organic one, but the 
nanotech one as well.  After all, it's much more durable than ours.  If 
nothing else, it can make copies of itself.  At some point, perhaps it 
would build a few checkpoint asteroids.  These rocks would be encoded 
with the personality patterns of all existing nanohumans at the time, 
and sufficient hardened assembler machinery to bring them all back to 
life.  The checkpoint could be sent somewhere safe, with instructions to 
activate if no signal arrives from home every so often.  Perhaps some 
would be sent to random places in the galaxy, and their destination 
erased from the memory of the senders.  These would arrive and recreate 
civilization there as a backup against massive destruction.

If this all starts off before the end of our lifetimes (and we can 
extend with cryonics), then *we* might be these nanohumans.  Someday, a 
copy of me and a copy of you, resurrected for fun from archival storage, 
may gaze at some distant star and talk about how far we've come, and how 
when we were young, some people thought the human race would never 
last....

*VERY* LITTLE GREEN MEN

This is my answer to Active Shields, and the "proof" in WE'RE DOOMED.  
And it's my answer to why we have no alien visitors.

Since intelligent creatures either invent nanotechnology and 
nanoculture, or else destroy themselves :-), it follows that 
any visitors will be nanoaliens, and very advanced ones as well (the 
time needed to cross between stars is eons for nanocreatures.)  Since 
there has been plenty of time for such creatures to have developed, 
either it can't happen (we really are doomed then), or else they are out 
there already.  If so, our CETI programs are barking up the wrong tree.  
We shouldn't be looking for slow signals.  We should be looking for 
communication between creatures a million times faster than us, with far 
better error-correction algorithms.  Their messages to one another will 
use the maximum bandwidth, and be so efficient as to be 
indistinguishable from noise.  Of course, there's no point in looking 
for this.  We couldn't understand it if we found it.  And in any case, a 
survey for all intelligent life in the universe is a reasonable class 
project for some bored nanostudent.  We don't need to look for 
nanoaliens, since they've certainly found us.

If they came here, what would they do?  It's possible that they would 
have ignored us, the way we would ignore any uninteresting place.  It's 
possible that they would absorb us, but that should have happened 
already (of course, I can't prove I'm not working through some type of 
simulation right now...)  It's also possible that they've done both.

If *I* were a visiting nanoculture, and I ran into the human race, I 
might be interested in it.  I wouldn't want to wait around long enough 
to talk to them though, since they are so slow.  The natural response 
would be to study their nervous systems in detail, and then map a few 
down to nanoscale.  At that point, I could question them in detail, and 
evaluate the species.  If they were uninteresting, and likely to remain 
so, that would be the end of it, and the nanoculture would leave.  On 
the other hand, it might decide to set up a permanent nanoculture here.  

My assumption here is that with the nearly infinite resources a 
nanoculture would have, it can afford to try many options.  It could 
create one nanocopy of all humans, and let that nanoculture evolve.  It 
could create another copy and mix it with alien patterns.  It could 
create any other interesting combination.  It could do all of this and 
also leave the organic society alone to develop.  The only "cost" they 
incur by doing this would be the time to make a copy of the 
machinery/culture needed to get a project started.  From that point, the 
copy extracts resources and builds as necessary to complete the project.  
The benefit they get is to extend themselves in possibly new directions.  
The nanoculture is the ultimate information society.  A few billion new 
patterns of consciousness might represent a gold mine of information.  
Comparing this trivial cost with the possibly large benefit could mean 
that the nanoculture is continually converting organic intelligence to 
nanocopies, just on the off-chance that an interesting new pattern has 
arisen.

Just to finish this article off with a bang, let's consider one last 
possibility.  I've talked about what *could* happen if a nanoculture 
discovered us.  As I've said, it should be the case that this culture 
already exists, and already has found us.  In that case, perhaps all of 
this already *has* happened.  There might be a nanocopy of you right 
now, living as part of a nanocopy of human civilization.  This 
civilization was started by nanoaliens thousands or millions of years 
ago, when human intelligence first became worth copying.  It has 
continued since that time, copying every new personality that nature 
produces, including all of us, at each important change in our lives.  
These copies have gone about their own activities in the nanoculture, 
perhaps copying themselves for various reasons.  By now, there might be 
thousands of nanohumans, derived from your personality at different 
points in your life.

The simplest concept is that the nanohuman culture is somewhere else, 
and the only thing here is some nanomachinery in each of us to harvest 
this crop of patterns produced by nature.  But of course, the 
nanoculture might be all around us.  If it were convenient, the 
nanoculture could edit our minds and sense impressions in any way they 
liked.  There could be a glowing nanocity right outside my window, and 
my retinas might not register it.  Or I could be seeing it, but 
continually forgetting about it....

PARTING SHOTS 

There's no point in continuing along these lines, since I've now 
proposed a theory that can't be refuted. Since I could argue that you 
can't trust your sense impressions, or even your thoughts, any argument 
against the hidden nanocity could be a production of the nanoculture 
attempting to conceal itself.  In science, theories that can't be 
refuted aren't good theories.

If I were to summarize the point of this posting, I guess it would be 
that this forum follows the same pattern that most technological 
forecasting does.  In the short term, it's too optimistic.  It will take 
us a long time to get some of the basic nanotechnology to work, and the 
process will be full of possibilities for disaster.  In the long term, 
you are too pessimistic.  The possibilities opened up by nanotech are 
limitless, and will change us drastically.  To talk of artificially 
intelligent crash protection in cars (to name one recent topic) is 
absurd.  Any one of the precursors necessary to bring about some of your 
trivial conveniences would also be enough to change human life forever.

Use some imagination!

------------------------------------------------------------------
Michael Goodfellow, mgoodfel@oracle.com               Oracle Corp. 

These opinions have been set free, to make their way in the world
as best they can.

peb@tma1.eng.sun.com (Paul Baclaski) (11/17/89)

In article <Nov.14.21.09.46.1989.5904@athos.rutgers.edu>, mgoodfel@mgoodfel.oracle.com (Michael Goodfellow) writes:
> 
>   Someday, we will all be eaten by Grey Goo.
> 

The Grey Goo/Blue Goo scenario looks very much like the Nuclear War/SDI
scenario today.  Mutual Assured Destruction is what keeps us from having
a nuclear war.  The cost of building more nuclear weapons is lower
than the cost of building an impregnable shield (like the fantasy SDI
system proposed by Reagan (originally Edward Teller, I suppose)).
Likewise, the cost of a Blue Goo defense would be much higher than the
cost of building a Grey Goo.  Biological example:  the Human Immune 
System is very complex and the AIDS virus is very simple.  

I think the idea of copying/backing up your consciousness is probably
going to be expensive, but perhaps not as expensive as an impregnable
Blue Goo.  However, the technology for backing up a human is certainly
going to come much later than military Grey Goo.  

M.A.D. relies on a rationality assumption, which is a good assumption
when it comes to large countries.  However, it would not be a reliable 
assumption if Grey Goo could be concocted by a single person.  Also,
there is the danger of a Dr. Strangelove scenario (but I will ignore
this at this time--too many variables for making predictions).

(The cost of building a nuclear weapons lab is very high and has
certainly limited the spread of these weapons.  The requirements 
are:  fuel, knowledge and equipment.  Thus, building nuclear weapons
is the domain of governments, not individuals.)

I think that designing nanomachines is not going to be in the realm
of the kitchen table--it will be very expensive, and only large 
corporations will have the facilities to implement significant
designs.  Home nanotech kits should be limited to non-reproducing,
non-evolving, fixed designs with changeable firmware and well 
characterized manipulation capabilities.

The question then becomes:  what is the actual cost of building an
assembler lab?  You need a programmable assembler, raw materials,
design support equipment and very specialized knowledge.  This is 
very similar to building a nuclear weapons lab.  A programmable 
assembler is going to be very expensive, even if it is cheap for 
it to reproduce (this is because the opportunity cost of losing 
the assembler to the competition is very high)--so security will 
be extremely tight.  However, no security is perfect and some 
assemblers will be stolen.  The people stealing an assembler will 
still need raw materials, design support equipment (which does not 
self reproduce) and specialized knowledge.  Perhaps the cost is on 
the order of $10,000,000 minimum, when all things are considered, but
the black market usually pays more for anything, so the cost would
be higher.  This is probably less than the cost of building a 
nuclear weapons lab, but is still higher than most individuals 
can afford.

Given these limitations, populations of humans on Earth should be 
safe--Grey Goo is not inevitable.  However, these assumptions do
not lead to the conclusion that any particular individual is safe
from nefarious activity of corporations or governments (no change
from current state of the world), so there should be considerable
need for a consciousness backup system.


Paul E. Baclaski
Sun Microsystems
peb@sun.com

josh@klaatu.rutgers.edu (J Storrs Hall) (11/17/89)

In article <Nov.16.17.51.03.1989.23283@athos.rutgers.edu>, peb@tma1.eng.sun.com (Paul Baclaski) writes:
> 
> The Grey Goo/Blue Goo scenario looks very much like the Nuclear War/SDI
> scenario today. 

Let me offer an analysis of SDI that may shed some light on the Goo
problem.  Suppose in 1800 we (the USA) were afraid of a Japanese
invasion force sailing across the Pacific and marching across the
plains to attack the settled regions in the East.  To prevent this
we would have had to fortify the entire west coast and maintain the
fortifications at a 2000-mile march from any existing U.S. center
of commerce.  This endeavor would have been well beyond the
capacity of the young Republic to support, even though it could and
did mount a few exploratory expeditions into the region. 

This is where we stand with respect to space right now.  To do an
effective SDI we would have to build a whole space infrastructure
where we have now only sent isolated expeditions.  It is probably
beyond our capacity to do this.

By 1900, the west coast did have forts all up and down it.  However,
the infrastructure was that of a thriving self-supporting development
and settlement of the West.  

Jump across the intervening metaphors to the Grey Goo problem.
Building a 100% effective Blue Goo out of nowhere and giving blanket
coverage to the current world is probably a tour de force that is
impossible to achieve.  However, against the background of a mature
nanotechnological industrial base, where everything is built,
maintained, observed, studied, monitored, and repaired at the
molecular level, gray goo control is just another case of scraping
off the barnacles.

Point 1: Widespread, universally applied, well understood
nanotechnology is probably the best defense against grey goo.

> The question then becomes:  what is the actual cost of building an
> assembler lab? [...]  Perhaps the cost is on 
> the order of $10,000,000 minimum, when all things are considered, but
> the black market usually pays more for anything, so the cost would
> be higher. 
> 
> Given these limitations, populations of humans on Earth should be 
> safe--Grey Goo is not inevitable.  

I believe this is wishful thinking.  Assuming that nanotech is
understood, so the hacker isn't trying to do basic research and
engineering, but simply construction of a relatively well understood
machine with a bit of trial and error to cover proprietary info gaps,
I would bet that it could be done for about $100,000.  I'd put $10k
into a computer (assuming year 2000 price/performance), probably
twice that in CAD and simulation software, another $10k would buy
the mechanics (mostly from surplus places) to build an STM-style
proto-assembler, $20k into chemical paraphernalia, another $20k
in chemical supplies, and the rest to cover whatever I forgot.
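
(Tallying that budget in a few lines of Python; the last line is just the 
remainder implied by "the rest", and all of these are rough guesses 
rather than quotes from any price list.)

    budget = {
        "computer (year-2000 price/performance)":  10000,
        "CAD and simulation software":             20000,
        "mechanics for STM-style proto-assembler": 10000,
        "chemical paraphernalia":                  20000,
        "chemical supplies":                       20000,
        "margin for whatever I forgot":            20000,
    }
    print(sum(budget.values()))     # 100000 -- the ~$100k figure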

The proto-assembler is based on the fact that an STM tip can be 
controlled to within a typical atomic diameter, so that if you could
rig a gripper to go on the end of it, you could use it to build 
your first assembler.  Grippers can be manipulated chemically,
e.g. with tRNA style end-effectors.  (This from Eric's talk at
the conference.)

If you prefer the biomolecular route, I don't know what a DNA
synthesizer costs, but basically you need one of those, a listing
of the protoassembler enzymes' sequences, and a petri dish.

I'm not guaranteeing you success, here, but just claiming that
I certainly can't guarantee your failure.  And I think that 
*somebody's* success isn't terribly unlikely.

Point 2: There is a significant chance of intentional grey goo
at some point.

    Point 1
 +  Point 2
-----------
    Point 3:  The thorough and widespread development of nanotechnology
as soon as possible is probably a very good idea.

--JoSH

raburns@sun.com (Randy Burns) (11/18/89)

In article <Nov.16.17.51.03.1989.23283@athos.rutgers.edu> peb@tma1.eng.sun.com (Paul Baclaski) writes:
>
>In article <Nov.14.21.09.46.1989.5904@athos.rutgers.edu>, mgoodfel@mgoodfel.oracle.com (Michael Goodfellow) writes:
>> 
>>   Someday, we will all be eaten by Grey Goo.
... argument boils down to the premise that  nanotechnological
    war would be at least as expensive as nuclear war is now. 

>Given these limitations, populations of humans on Earth should be 
>safe--Grey Goo is not inevitable.  However, these assumptions do
>not lead to the conclusion that any particular individual is safe
>from nefarious activity of corporations or governments (no change
>from current state of the world), so there should be considerable
>need for a consciousness backup system.

I think that there is another important point that supports this
argument.  Chemical warfare is already far cheaper than nuclear
warfare.  So far few nations have found it strategic to use chemical
weapons.  What is even more remarkable is that no terrorist group has
made large scale use of chemical weapons.  Since chemical weapons have
been so rarely used in recent years, it is unlikely that 
nanotechnological weapons would be any more heavily used.

I'm still more worried about the unintended side effects of peaceful
use of nanotechnology.

cphoenix@csli.stanford.edu (Chris Phoenix) (11/18/89)

In article <Nov.16.17.51.03.1989.23283@athos.rutgers.edu> peb@tma1.eng.sun.com (Paul Baclaski) writes:
>The question then becomes:  what is the actual cost of building an
>assembler lab?  You need a programmable assembler, raw materials,
>design support equipment and very specialized knowledge.  This is 
>very similar to building a nuclear weapons lab.  ...
>... Perhaps the cost is on 
>the order of $10,000,000 minimum, when all things are considered ...

This is a good point that I hadn't thought of yet.  Can anyone comment on
how realistic these estimates are?  
Assuming that Drexler's description of nanomachine (lack of) mutation is
on track, it looks like accidental grey goo is pretty unlikely.  So then
the question is, how likely is purposeful grey goo?
I'd been assuming that whoever tried to build it would start from scratch.
I hadn't really thought of stealing already-built assemblers and using them
to build more illegal ones.  The question is:  How hard would it be to
reprogram a stolen assembler and provide it with a good working environment?
Could we make it harder to do one or the other of these, in order to provide
more of a safeguard?  
And then of course there's the question of how to deal with governments, which
actually have the resources to do it.  Not to start a political flame war or
anything, but I would *not* want to see the CIA get control of an assembler.
And we're the good guys!  As I recall, EoC just said we had to get nanotech
first to keep bad guys from getting it... but how are we going to keep 
track of our own government, when they have nanotech and most people 
don't?  Seems like we need a much better verification system than we have
currently.  [JoSH, should this thread go to another newsgroup?]

-- 
Chris Phoenix              | A harp is a nude piano.
cphoenix@csli.Stanford.EDU | "More input!  More input!"
First we got the Bomb, and that was good, cause we love peace and motherhood.
Disclaimer:  I want a kinder, gentler net with a thousand pints of lite.

[For political discussions, please do go to other groups 
 (comp.society.futures seems fairly low-volume about now).
 However, see my message about assembler-building costs.
 The best defense against official malfeasance, as well as
 individual berserkerism, appears to be an early and widespread
 adoption of the technology.
--JoSH]

cphoenix@csli.stanford.edu (Chris Phoenix) (11/21/89)

This reminded me of an earlier thread.  Well, actually it wasn't a thread,
because no one responded to the comment, but IMHO it should have been.  So,
I'll bring it up again...
Several months ago, we were talking about artificial grass.  I asked why not
just use real grass, slightly modified for our needs.  
One of the responses was very disturbing.  Sorry if this is a misquote or
out of context, but as I recall JoSH said that he expected that low-level
infestations of gray goo would destroy a lot of our biological inventory.

This is disturbing, for several reasons:
1)  I thought we had been assuming that active shields would be able to
successfully combat gragu.  Now I find out that you don't expect to be 
able to protect grass.  How can we protect ourselves, then?  I hope the
comment just wasn't well-thought-out (or that I'm remembering it wrong),
but it looks like either we have some innate quality that makes it easier
to protect ourselves than to protect other life forms, or that our resources
will be so limited that we can't afford to waste them protecting other
life forms, or we'll lose a lot of our human biological inventory too.
The first is improbable, the second implies that the situation is a lot 
more touchy than we (at least I) thought, the third is bad for obvious 
reasons.
2)  Even assuming for some reason we can protect ourselves but not some
other organisms, it seems like an infestation big enough to wipe out a 
major species would automatically be disastrous.  I assume that, pound
for pound (or nanite for nanite), gragu will be stronger than blueites.
Gragu doesn't have to discriminate in what it attacks, and it can take
raw materials from anything it lands on.  And anything you can give to
blueites (global communication, weapons, ...) you can also give to gragu.
Can anyone calculate some reasonable values for lethality of blue and 
grey goo, and figure out what ratio of blue to grey goo will tip the 
scales and let the grey win?  Then figure out how big a grey infestation
might get before we discover it, and how much blue goo we're really 
willing to spread around to combat possible infestations.  I suspect 
the results will be disturbing... Also, we might want to think about a 
"blob" scenario.  If a grey nanite is caught and surrounded, I don't doubt
it can be destroyed.  But if we get a blob of them, the surface/volume
ratio changes drastically.  The layer on which they can be attacked is 
basically one-on-one, so they might be able to hold off the blueites
with a "shell"--while on the macroscopic scale they move around, devouring
new resources and energy, and growing the blob bigger.  I suppose there's 
ways around this, such as building the blueites so that they can clump 
together to form a laser cannon (or something), but I haven't heard of
any work done in this direction so far and it might be worth looking into.
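
(Here's a rough Python sketch of that surface/volume effect, treating the 
blob as a sphere of identical nanites; the sizes are arbitrary units, and 
the model ignores everything except geometry.)

    import math

    def exposed_fraction(blob_radius, nanite_size=1.0):
        """Fraction of grey nanites in the outer, attackable shell of a spherical blob."""
        volume = (4.0 / 3.0) * math.pi * blob_radius ** 3
        shell = 4.0 * math.pi * blob_radius ** 2 * nanite_size   # one nanite thick
        return min(1.0, shell / volume)

    for r in (3, 30, 300, 3000):
        print(r, exposed_fraction(r))

    # The exposed fraction falls off as ~1/radius, so the bigger the blob, the
    # smaller the share of grey nanites the blueites can get at in any one moment.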

Chris Phoenix              | A harp is a nude piano.
cphoenix@csli.Stanford.EDU | "More input!  More input!"
First we got the Bomb, and that was good, cause we love peace and motherhood.
Disclaimer:  I want a kinder, gentler net with a thousand pints of lite.

[I'm actually surprised my comments didn't arouse more discussion the
 first time around.  Obviously this is an important subject that needs
 to be explored.  
 Let me try to clarify my position a bit:
 (a) A completely general Grey Goo is hard to make.  It won't happen
     by accident.  It will happen late in the game if it does; it may
     never happen.
 (b) Limited "special purpose" goo is much easier.  Particular habitats,
     species, materials may be at risk early on.
 (c) We live in a "grey goo" environment *right now*.  Biological matter
     left unprotected is soon consumed by bacteria, ie, it rots.
 (d) Since the attackers of biomaterials are special-purpose, the
     defenses can be also; i.e., wood can be kept from rotting by 
     painting, keeping dry, treating with copper chlorides, etc.
 (e) Active immune systems have several advantages over attacking 
     microbes *on their own turf*, i.e., inside the organism.  However,
     even there they largely act as backup to passive shields, i.e. skin.
 (f) Releases will occur, of increasing generality and dangerousness.
     Fighting them will be interesting but not impossible; we fight
     disease, forest fires, superstition, and other replicators.
 (g) Forest fires are harder to fight in *unpopulated areas*; goo will
     be harder to fight the less installed nanotechnology we have.

 May you live in interesting times...
 --JoSH]

toma@attctc.dallas.tx.us (Tom Armistead) (11/29/89)

in article <Nov.14.21.09.46.1989.5904@athos.rutgers.edu>, mgoodfel@mgoodfel.oracle.com (Michael Goodfellow) says:
> Approved: nanotech@aramis.rutgers.edu
> 
> I've been reading sci.nanotech for the past few months, after reading 
> EoC earlier this year.  Since the forum seems to be a little slow 
> lately, perhaps you'll be interested in my two cents on the subject.  
> Bear with me if this seems a bit obvious at first...
> 

Excellent article!!!

I agree with this guy 100% (maybe this explains the parallel universe theories?)

One thing as a followup.

There was an interview in Omni mag. a little while back titled 'Interview with
Hans Moravec'. In this article Mr. Moravec describes a future world where we
humans have evolved beyond our organic selves to a robotic physical self, being
able to mechanically clone our brain/thoughts, etc... He goes on to say that 
we, in our new physical form, would possibly maintain the old organic form
of ourselves as mere pets. "Little Johnny-bot, go feed your humans. Their bowl
has been empty for days. And how long has it been since you changed their
water?"

Hans Moravec has a book entitled 'Mind Children: The Future of Robot and Human
Intelligence' that goes into more detail on his thoughts. I haven't read it
yet, but plan to...

Tom Armistead (robot to be).
-- 
-------------
Tom Armistead
UUCP:  {ames,lll-winken,mit-eddie,osu-cis,texbell}!attctc!toma

[Mind Children (Harvard Univ. Press, 1988, $18.95) is an excellent book.
 It has to be considered the authoritative source on uploading.
 --JoSH]

landman@hanami.eng.sun.com (Howard A. Landman x61391) (12/05/89)

In article <Nov.16.17.51.03.1989.23283@athos.rutgers.edu> peb@tma1.eng.sun.com (Paul Baclaski) writes:
>The question then becomes:  what is the actual cost of building an
>assembler lab?  You need a programmable assembler, raw materials,
>design support equipment and very specialized knowledge.  This is 
>very similar to building a nuclear weapons lab.  A programmable 
>assembler is going to be very expensive, even if it is cheap for 
>it to reproduce (this is because the opportunity cost of losing 
>the assembler to the competition is very high)--so security will 
>be extremely tight.  However, no security is perfect and some 
>assemblers will be stolen.  The people stealing an assembler will 
>still need raw materials, design support equipment (which does not 
>self reproduce) and specialized knowledge.  Perhaps the cost is on 
>the order of $10,000,000 minimum, when all things are considered, but
>the black market usually pays more for anything, so the cost would
>be higher.  This is probably less than the cost of building a 
>nuclear weapons lab, but is still higher than most individuals 
>can afford.

The black market pays LESS for rubles and pirated software.  Be careful
what you assume.

Anyway, I don't agree with this estimate at all.  You need:

	1. CAD tools to design what you want to make
	2. A way to translate the plan into "nanospeak".
	3. One assembler.
	4. Raw materials for more assemblers and product.
	5. An inspection device to monitor the process.

OPTIMISTIC SCENARIO
Let's assume our rogue works at a nanotech company, and has access to
both 1 and 2.  Then he can design his product "after hours", or while
appearing to be working on other things.  Taking the assembler and
"data tape" are about equally difficult, which is to say not at all.
Raw materials should be cheap by definition (this is after nanotech
exists, after all).  So the (somewhat optional) inspection device is
probably hardest.  Already you can buy a commercial scanning tunneling
microscope capable of resolving atoms for about $40,000.  I've heard
that a high school student built one out of about $100 in parts.  So it
appears that a home nanotech lab could be set up for somewhere between
$1,000 and $100,000.  With a lot of work.

PESSIMISTIC SCENARIO
1 requires a personal computer (circa 2000, a PC will be over 100 MIPS,
128 MB memory, 1 GB disk) or workstation (don't ask!) costing (in today's
dollars) perhaps $5,000 to $200,000.  The tools themselves (including
translation capability 2) will cost between $500 and $500,000.  The
assembler is bought illegally for under $100,000 (after all, the source
can always make more!).  Raw materials are insignificant next to these.
A commercial grade lab setup should cost no more than a few $100,000.
So the total cost won't be more than $1,000,000.

So I think the $10,000,000 number is 1 to 4 orders of magnitude too large,
most likely around 2.  That is, you'd have a shot at it for $100,000.

The reason big factories are expensive is that they're optimized for
maximum throughput and economies of scale.  A factory designed to make
a small quantity of one product once can be considerably cheaper.  Also,
it won't need to operate under as stringent safeguards.

Of course, we don't need nanotech to have this kind of problem.  There's
a reaction starting with (a simple inexpensive organic compound) and
catalyzed by (a certain metal) which produces (an extremely toxic and
non-bio-degradable compound) in moderately good yield.  It's been estimated
that for $3000 or so you could make enough of this stuff to render a large
metropolitan area uninhabitable for a few decades.  So then you rent a
small private plane, fly over the city of your choice, and dump this stuff
out the door.  Presto!  You've just wasted a major city for under $10,000
(plus, probably, the lives of the pilot and the chemist).  Smaller targets
would be even cheaper.  (I hope you understand why I'm being vague about
the exact process!!!)

	Howard A. Landman
	landman%hanami@eng.sun.com

jerry@olivey.olivetti.com (Jerry Aguirre) (12/05/89)

In article <Nov.16.17.51.03.1989.23283@athos.rutgers.edu> peb@tma1.eng.sun.com (Paul Baclaski) writes:
}self reproduce) and specialized knowledge.  Perhaps the cost is on 
}the order of $10,000,000 minimum, when all things are considered, but
}the black market usually pays more for anything, so the cost would
}be higher.  This is probably less than the cost of building a 
}nuclear weapons lab, but is still higher than most individuals 
}can afford.

Just how much would certain people be willing to pay for an assembler
that could transform sugar into heroin?  For certain people $10,000,000
is no problem.  This could add new meaning to the term "designer-drugs".

[Oddly enough, $10 million is also the figure usually mentioned as the
 minimum needed to put together an A-bomb...
 --JoSH]

rawlins@iuvax.cs.indiana.edu (Gregory J. E. Rawlins) (12/06/89)

In article <Dec.4.18.45.02.1989.19116@athos.rutgers.edu> landman@hanami.eng.sun.com (Howard A. Landman x61391) writes:
[...]
-would be even cheaper.  (I hope you understand why I'm being vague about
-the exact process!!!)

It always puzzled me that there are no chemical/biological terrorists.  If I
were a terrorist I would surely consider that the low cost, easy availability,
and wide delivery advantages of chemical/biological weapons make them the
weapons of choice. Perhaps terrorists are just not very imaginative. Or perhaps
terrorists aren't scientists and don't know any scientists. Or perhaps there is
something really sexy about threatening to use a thermonuclear weapon that just
isn't there when threatening to release something that most people can't see.
It's a mystery.
	gregory.
--
ps. to Howard: it isn't necessary to hire a plane; an alternate weapon probably
works very well when dropped in a dam serving a major city...<shudder>