[sci.nanotech] Goo

djo@pacbell.com (Dan'l DanehyOakes) (06/24/89)

alan@oz.nm.paradyne.COM (Alan Lovejoy) writes:


>The explosion of two small bombs does not a nuclear war make.  It can be 
>argued that the Nuclear Peace we have enjoyed since the end of WWII is
>partially a consequence of the Hiroshima and Nagasaki bombs.

The explosion of two small bombs does not, doesn't it?

What about three small bombs?

Or two large ones?

A war becomes nuclear when nuclear weapons are used.  If there had been more,
they would have been used -- of this I think there is no reasonable doubt.

In the only historical case where a nation holding nuclear weapons went to 
war with a nation it regarded as a serious threat, the nuclear nation used its
entire nuclear arsenal.

But this isn't about nanotechnology... 


>To say that a problem is insolvable is the same thing as saying it will not
>be solved.  To say that a problem is solvable is NOT the same thing as saying
>that a problem will be solved.  A statement that something is impossible is
>much harder to prove than a statement that something is possible.

Granted.  And my claim is that unless you can show that a gragu eruption is
impossible -- or at least *incredibly* unlikely -- we're better off during the
age of nuclear brinksmanship (which I liken to teenagers playing "chicken") than
we will be in the hypothetical nanoage.

A shield that shields against "most" gragu is *not* sufficient, any more than an
immune system that shields against "most" microorganisms is sufficient for the
survival of a body.  If that one organism you aren't shielded against gets at 
you -- that's it.


>My claim is simply "The fact that we have avoided nuclear war for forty
>years provides a basis for hoping that nuclear war--and similar disasters
>such as a biotech or gray-goo war--can be avoided long enough so that
>mankind can survive."  My "arguments that the problem is solvable" are
>precisely that.  They are not arguments that the problem is guaranteed to
>be solved.

And my counterclaim:  "We have avoided nuclear war for forty years, but we have
to avoid it for many years more, at least until we are spread to other planets,
and probably until we are spread to other systems, before we can reasonably
decide that nuclear war did not demonstrate that technology (and, by extension,
intelligence) is an evolutionary dead end.  Nanotech AIAs and the gragu problem
simply offer us another possible way to turn ourselves into an evolutionary
dead end.  Nothing less than a convincing argument that a gooproof shield can
be implemented faster than goo will prevent nanotech from being, at least at
first, a far greater threat than boon."

>I think you overestimate our level of optimism.  We are in great danger which
>may lead to our destruction.  I think there is reason to hope that we will
>survive.  I fear that we may not.

Complete agreement.

>Gray goo is almost impossible as an accident.  

Again:  Pish and tosh.  Consider your requirements:

>Gragu requires nanomachines which:
>
>a) Can faithfully replicate themselves;
>b) Can disassemble and/or maliciously reassemble (in the sense of modification
>of molecular structure) almost anything, and/or which can assemble "poisons"
>in strategically sensitive locations.
>c) Can survive in most environments for significant periods of time;
>d) Can hide and/or fight off attack from active shields;
>e) Can obtain sufficient energy to perform their functions rapidly enough
>to pose a threat;
>f) Have sufficient intelligence (or receive sufficiently intelligent direction)
>to avoid strategic and/or tactical mistakes (such as devouring each other or
>consuming the energy supply before the job is finished).

Now consider any attempt to create a truly useful general AIA.

(A) will be required of such a machine.  (B) will be a likely concomitant --
you have to disassemble things to find out how they're made, if you want to
replicate them.  (C) is not required of an AIA -- but it isn't really required
of gragu, either; it has to survive in the climate you want it to run amok in.
For example, if you wanted to wipe out LA, you'd have to make something that
could work in an oxygen-and-smog atmosphere, at temperatures from 70 to 10000
degrees Fahrenheit (okay, so I exaggerated a little.  It drops down to 60 in the
winter, sometimes), etc., etc.  It need only survive under conditions that 
humans survive in to make it deadly to humans.  (D) is a serious consideration,
but only if it's being put somewhere that active shields already exist -- I'm
mostly worried about the first few years when talking about accidental gragu.
(E) is likely to be the case with any AIA.  I'd imagine we want them absorbing
heat from their surroundings (or the reactions they cause) wherever possible.
Lightpowered AIAs seem a good possibility for first guess; LA's got plenty of
light.  (F) is nonsense.  They just have to go around devouring everything in
sight.  If they run out of energy, they "lose," but they've done some serious
damage in the meanwhile.  Eating each other could cause more trouble, but I'm
given to believe they'd have that built in in a scenario like this:

Jho Nano decides to build a "complete" AIA system, one that can take a general
program from nanotape, find the atoms it needs to build the desired object, and
assemble it.  This will be the first such AIA ever built.  After a great deal of
fiddling, he decides he has a working design, and grows his molecule.  One
molecule isn't much use.  He could grow more, but it seems more valuable, as a
test of his design, to give it instructions for building more of itself.  He
programs a nanotape that translates as follows:
	"Build a copy of yourself."
	"Decrement the counter on this tape by 1."
	"Make a copy of this tape for the copy of yourself."
Say the counter starts at "5."  The AIA will make a copy of itself, decrement
the tape to "4," and then there will be two AIAs with 4-tapes.  Then 4 with
3-tapes.  Then 8 with 2-tapes.  Then 16 with 1-tapes.  You wind up with 32
AIAs, all of them with used-up tapes.

But.

Suppose the decrementer fails?  Or the tape accidentally reads "50000000"?

Answer:  Grey goo.

If this happens *after* we've got some kind of useful active shields going, I'm
not too worried.  But if it happens in the next few years...I'm worried.
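
To make the bookkeeping concrete, here's a toy sketch in Python -- everything
in it hypothetical and grossly simplified.  Each AIA copies itself once per
step while its tape counter is positive and hands the decremented tape to the
copy; break the decrementer and the same loop never stops.

    # Toy model of the nanotape scenario above -- illustrative only.
    def replicate(initial_counter, steps):
        population = [initial_counter]             # one seed AIA
        for _ in range(steps):
            offspring = []
            for i, counter in enumerate(population):
                if counter > 0:                    # tape not yet used up
                    population[i] = counter - 1    # decrement own tape
                    offspring.append(counter - 1)  # copy gets the decremented tape
            population.extend(offspring)
        return population

    print(len(replicate(5, 10)))       # 32 -- growth stops once every tape reads 0

    # The failure mode: a decrementer that silently does nothing.
    def replicate_broken(steps):
        population = [5]
        for _ in range(steps):
            population.extend(population)          # every AIA copies itself, forever
        return population

    print(len(replicate_broken(10)))   # 1024, and still doubling -- grey goo

With a working decrementer the population is bounded at 2**counter; with a
broken one the only limits left are matter and energy.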


>The more complex the machine, the more likely that
>"accidents" which introduce "bugs" are to occur --and the more likely it is 
>that those "bugs" will simply prevent the machine from working.

Not necessarily.  Humans being humans are likely to attempt some modularity of
design (makes the whole thing easier to understand, neh?), and it's possible for
a module (say the "decrementer" module or the "don't eat that, it might be
human" module) to fail without the whole failing.  Also, it's a totally normal
human tendency to try to make machines as robust as possible...

>...homo sapiens is living proof that "accidents" can and will lead to more 
>advanced and capable replicators--but only over periods of billions, or at 
>least millions, of years. 

Ahem:  when left to happen by themselves.  Most of the "accidents" in the
development of nanotech and AIAs will not be accidents at all -- (human?)
intelligence will guide the process.

>Since disassemblers will not be replicators (UNLESS SOMEONE DELIBERATELY DESIGNS
>THEM THAT WAY), 

Contrariwise:  replicators *WILL* be disassemblers UNLESS SOMEONE MANAGES TO
DESIGN THEM NOT TO BE.  That is, unless someone builds in intelligence that
directs them "Don't use that for spare parts -- it might be part of something."

>Nanosystems will be DESIGNED to make accidental gragu as unlikely as we know
>how.  

Yes -- but how well do we know?

>This restriction
>is not as onerous as it seems if you use "idiot-savant" AI's which are
>brilliant molecular engineers AND OTHERWISE AS DUMB AS A CRAY-V to program
>your nanomachines--and to check the programs offered by your fully-intelligent 
>AIs for "trojan horses".

Hmmmm... the Trojan horse got through in spite of Cassandra's warning, didn't
it?  More to the point, this is an excellent place for Hofstadter's "record
player breaking records."  There are, by the same argument, trojans that can
get past any given security system or set of systems.
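
A toy illustration of the shape of the problem -- every name here is made up,
and this is not anybody's real checker:

    # Hypothetical sketch: a fixed "idiot-savant" checker scans a tape (a list
    # of instruction names) for known-bad operations; a re-encoded tape does
    # the same work but passes the scan.
    FORBIDDEN = {"disassemble", "copy_self"}

    def static_check(tape):
        # flags only instructions the checker literally recognizes
        return all(op not in FORBIDDEN for op in tape)

    def run(tape):
        # toy interpreter: "decode:<reversed-op>" rebuilds an op at run time
        executed = []
        for op in tape:
            if op.startswith("decode:"):
                op = op[len("decode:"):][::-1]
            executed.append(op)
        return executed

    plain_trojan  = ["copy_self", "disassemble"]
    sneaky_trojan = ["decode:" + op[::-1] for op in plain_trojan]

    print(static_check(plain_trojan))    # False -- caught
    print(static_check(sneaky_trojan))   # True  -- slips past the fixed checker
    print(run(sneaky_trojan))            # same two operations get executed anyway

The checker can of course be taught about "decode:" -- and the next trojan
uses a different wrapper.  Any fixed screen defines exactly the pattern the
next record is built to break.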

>What I was trying to suggest is that we need to make a change in what we 
>consider to be "acceptably sane."  

Agreed.

>And we need to find out how to reliably
>cure and prevent the sort of "insanity" (or "antisocial behavior") which
>drives (or permits) people to purposely seek to harm others.

Again -- whose definition?

>May I suggest that "insanity" is any state of mind which engenders destructive
>anti-survival behavior?  In light of nanotechnology, militarism and terrorism
>are insane states of mind under this definition.

That's culturocentric.  Samurai, for example, often performed anti-survival
acts.  Ditto car bombers.  Are they insane?

>Both shields and goo have to overcome the "is it possible or practical?"
>hurdle.  Why should this cause shields more difficulty than goo?

Because goo only has to attack one thing.  Shields have to attack any
hypothetical goo.

<Oh yeah?  Care to prove it?

>See above.  And also, if virii and bacteria were gragu-class devices,
>why are we still here?

They aren't.

And you still haven't proven anything by my book.

Dan'l Danehy-Oakes

[I think you are ascribing some magical powers to the goo that are not
 likely in a real nanotech device.  For example, it is almost certain 
 that the first assemblers (and most "industrial" assemblers thereafter)
 will get their raw materials from floating in a soup of them, and will
 not be able to take anything apart.  Assemblers that will live in 
 an artificial environment, needing to be "spoon fed", will be easier
 to design, will work faster, and will be *safer*--reason enough for
 people to design them that way.  
 --JoSH]

alan@oz.nm.paradyne.com (Alan Lovejoy) (06/27/89)

In article <Jun.24.00.41.52.1989.23764@athos.rutgers.edu> djo@pacbell.com (Dan'l DanehyOakes) writes:
>alan@oz.nm.paradyne.COM (Alan Lovejoy) writes:

{discussion about whether WWII was a "nuclear" war}

>But this isn't about nanotechnology... 

[Enough already.  I am sorry I left the original remarks in to let things
 go this far.  I will delete all references to the nuclearity of WWII--
 from *anyone* -- in the future.
 --JoSH]


>>To say that a problem is insolvable is the same thing as saying it will not
>>be solved.  To say that a problem is solvable is NOT the same thing as saying
>>that a problem will be solved.  A statement that something is impossible is
>>much harder to prove than a statement that something is possible.

>Granted.  And my claim is that unless you can show that a gragu eruption is
>impossible -- or at least *incredibly* unlikely -- we're better off during the
>age of nuclear brinksmanship (which I liken to teenagers playing "chicken") than
>we will be in the hypothetical nanoage.

>A shield that shields against "most" gragu is *not* sufficient, any more than an
>immune system that shields against "most" microorganisms is sufficient for the
>survival of a body.  If that one organism you aren't shielded against gets at 
>you -- that's it.

Whatever "works"  against the realized threats which you actually face is
by definition "sufficient."     

The motivation for SDI is to eliminate the possibility that any enemy can ever
be REASONABLY CERTAIN of achieving a "successful" first strike.  As long as
an enemy feels that your SDI system JUST MIGHT WORK he can not rationally
risk attempting a first strike--unless he is absolutely convinced that his
own SDI system will work.

Similarly, as long as your enemies can not be CERTAIN that they can overcome
your active shield, they can not rationally "sic" their gray goo on you--unless
they can be sure that their own active shield AND SDI SYSTEM is impenetrable.

It is the possible actions of IRRATIONAL people that have me worried.

>>My claim is simply "The fact that we have avoided nuclear war for forty
>>years provides a basis for hoping that nuclear war--and similar disasters
>>such as a biotech or gray-goo war--can be avoided long enough so that
>>mankind can survive."  My "arguments that the problem is solvable" are
>>precisely that.  They are not arguments that the problem is guaranteed to
>>be solved.
>
>And my counterclaim:  "We have avoided nuclear war for forty years, but we have
>to avoid it for many years more, at least until we are spread to other planets,
>and probably until we are spread to other systems, before we can reasonably
>decide that nuclear war did not demonstrate that technology (and, by extension,
>intelligence) is an evolutionary dead end.  Nanotech AIAs and the gragu problem
>simply offer us another possible way to turn ourselves into an evolutionary
>dead end.  Nothing less than a convincing argument that a gooproof shield can
>be implemented faster than goo will prevent nanotech from being, at least at
>first, a far greater threat than boon."

What about first proving that "gray goo" is possible?  But first, what IS
gray goo, precisely?  What horrors will nanotechnology actually be able to
conjure up, and what are the parameters which limit their actions, effects
and uses?  How long will it take before such horrors can be realized?

There are indications that gray goo may not pose much of a threat simply
because it is impossible--or at least only possible if "gray goo" is given
a very "watered down" definition.  

I am much more concerned about biotechnology:  It is already upon us, and
we KNOW how dangerous microbes can be!!!

>>Gray goo is almost impossible as an accident.  

>Again:  Pish and tosh.  Consider your requirements:

>>Gragu requires nanomachines which:

>>a) Can faithfully replicate themselves;
>>b) Can disassemble and/or maliciously reassemble (in the sense of modification
>>of molecular structure) almost anything, and/or which can assemble "poisons"
>>in strategically sensitive locations.
>>c) Can survive in most environments for significant periods of time;
>>d) Can hide and/or fight off attack from active shields;
>>e) Can obtain sufficient energy to perform their functions rapidly enough
>>to pose a threat;
>>f) Have sufficient intelligence (or receive sufficiently intelligent direction)
>>to avoid strategic and/or tactical mistakes (such as devouring each other or
>>consuming the energy supply before the job is finished).

>Now consider any attempt to create a truly useful general AIA.

What is an AIA?  An "artificially intelligent assembler" I assume?  Have you
read Engines Of Creation yet?  An assembler will be roughly the same size
as a ribosome.   It simply cannot be possible to store enough information in
such a small space (around 10**6 atoms) to have an artificially intelligent
device--unless you mean something different by AI than I do.  The first true AI 
with human-scale intelligence will be at least a cubic millimeter--if not much
bigger.  The first super-human AI will be many times bigger still.

I suspect that what you really mean by "AIA" is "a system of assemblers and
nanocomputers running expert-system software for molecular mechanics."  And
by "expert system" I refer to something that resembles current expert systems,
which are called "AI" only due to excessive hype and the unrealistic hopes
of researchers.

>(A) will be required of such a machine.  

This will be required of the nanomachines that are used to construct the
first true AIs.

>(B) will be a likely concomitant --
>you have to disassemble things to find out how they're made, if you want to
>replicate them.  (C) is not required of an AIA -- but it isn't really required
>of gragu, either; it has to survive in the climate you want it to run amok in.
>For example, if you wanted to wipe out LA, you'd have to make something that
>could work in an oxygen-and-smog atmosphere, at temperatures from 70 to 10000
>degrees Fahrenheit (okay, so I exaggerated a little.  It drops down to 60 in the
>winter, sometimes), etc., etc.  It need only survive under conditions that 
>humans survive in to make it deadly to humans.

We are in agreement.  But given nanotechnology, what environment will humans
be able to survive in?  And active shields?  And what about gray-goo counter-
measures which temporarily "kill" biologic life--but not beyond the ability
of nanomachines to perform "resurrections?"

>>d) Can hide and/or fight off attack from active shields;
> (D) is a serious consideration,
>but only if it's being put somewhere that active shields already exist -- I'm
>mostly worried about the first few years when talking about accidental gragu.

Then rest your mind.  This is precisely the period when the greatest care will
be taken, if the recent history of biotechnology is any guide.  And the early
years will also coincide with the time when nanomachines will be the most
dependent on special environments for their survival.  This situation will
be quite deliberate, and probably forced on us by the laws of nature.

>>e) Can obtain sufficient energy to perform their functions rapidly enough
>>to pose a threat;
>(E) is likely to be the case with any AIA.  I'd imagine we want them absorbing
>heat from their surroundings (or the reactions they cause) wherever possible.
>Lightpowered AIAs seem a good possibility for first guess; LA's got plenty of
>light.  

Ahem.  There is no known organism which survives entirely off of heat--or light.
The reasons why an organism (or a machine--same thing) cannot live entirely
off of heat go very deep into the nature of things.  We conceptualize
this as the Laws of Thermodynamics.
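
The arithmetic is short.  The Carnot bound says the largest fraction of heat
that can ever be turned into work is 1 - Tcold/Thot, so (numbers picked only
for illustration):

    # Carnot bound: work per unit of heat is at most 1 - Tc/Th.
    def carnot_efficiency(t_hot_kelvin, t_cold_kelvin):
        return 1.0 - t_cold_kelvin / t_hot_kelvin

    print(carnot_efficiency(310.0, 300.0))  # ~0.032: a 10 K gradient buys very little
    print(carnot_efficiency(300.0, 300.0))  # 0.0: uniform ambient "heat" powers nothing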

An AI does not need to function either in all environments or in the absence
of localized energy supplies in order to be useful.  I'll settle for one
that wants direct current, special nutrients and a liquid nitrogen bath.
I'll communicate with it over fiber-optic link from most places in the U.S.
if I need to.  The molecular requirements for an AI are much less demanding
than those for gray goo--unless you use a very watered down definition of goo.
Software is another matter...but that may be a moot point also in light of the
fact that neural nets are not programmed--just taught.

>>f) Have sufficient intelligence (or receive sufficiently intelligent direction)
>>to avoid strategic and/or tactical mistakes (such as devouring each other or
>>consuming the energy supply before the job is finished).
>(F) is nonsense.  

Have you ever experimented with the game of "Life?"  If the gray goo devices
are programmed to blindly eat everything that they encounter, then they are
very likely to quickly eat themselves and/or their energy supply--long before
they have "finished" their job.  This is related to the reason why forest
fires eventually burn out naturally--without destroying the entire forest,
let alone burning out all plant life on a continent.
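
A toy percolation sketch of the same point -- grid size, fuel density and the
random seed are all arbitrary.  A blind eater that spreads only into adjacent
"fuelled" cells stalls as soon as it is locally surrounded by cells it has
already eaten or cannot eat:

    # Hypothetical illustration: goo starts in one corner and spreads only into
    # adjacent cells holding fuel.  At low fuel density the outbreak starves
    # locally, like a forest fire burning out.
    import random

    def outbreak_size(n=60, density=0.45, seed=1):
        random.seed(seed)
        fuel = [[random.random() < density for _ in range(n)] for _ in range(n)]
        fuel[0][0] = True                     # make sure the seed cell is edible
        goo, frontier = set(), [(0, 0)]
        while frontier:
            x, y = frontier.pop()
            if (x, y) in goo or not (0 <= x < n and 0 <= y < n) or not fuel[x][y]:
                continue                      # already eaten, off-grid, or barren
            goo.add((x, y))                   # eat this cell
            frontier += [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        return len(goo)

    for d in (0.3, 0.45, 0.7):
        print(d, outbreak_size(density=d), "of", 60 * 60, "cells eaten")

Below the percolation threshold (roughly 0.59 for a square grid like this) the
goo eats a small patch and chokes; only when almost everything in reach is
edible does it keep going.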

>They just have to go around devouring everything in
>sight.  If they run out of energy, they "lose," but they've done some serious
>damage in the meanwhile.  Eating each other could cause more trouble, but I'm
>given to believe they'd have that built in in a scenario like this:
>
>Jho Nano decides to build a "complete" AIA system, one that can take a general
>program from nanotape, find the atoms it needs to build the desired object, and
>assemble it.  This will be the first such AIA ever built.  After a great deal of
>fiddling, he decides he has a working design, and grows his molecule.  One
>molecule isn't much use.  He could grow more, but it seems more valuable, as a
>test of his design, to give it instructions for building more of itself.  He
>programs a nanotape that translates as follows:
>	"Build a copy of yourself."
>	"Decrement the counter on this tape by 1."
>	"Make a copy of this tape for the copy of yourself."
>Say the counter starts at "5."  The AIA will make a copy of itself, decrement
>the tape to "4," and then there will be two AIAs with 4-tapes.  Then 4 with
>3-tapes.  Then 8 with 2-tapes.  Then 16 with 1-tapes.  You wind up with 32
>AIAs, all of them with used-up tapes.
>
>But.
>
>Suppose the decrementer fails?  Or the tape accidentally reads "50000000"?

Suppose failure of the decrementer necessarily engenders failure of the
"execution unit"?  Suppose that the tape can't represent numbers higher
than 24 as an iteration constant?  Or 16?  Suppose that the nanomachines
can only operate in a special lab environment and are useless in normal
terrestrial conditions?
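
In other words, make the dangerous failure fail closed.  A sketch of the style
of design being described -- entirely hypothetical:

    # Fail-closed sketch: the tape format hard-caps the counter, and the
    # execution unit refuses to build a copy unless the decrement can be
    # verified after the fact.
    MAX_GENERATIONS = 24              # the tape simply cannot encode more

    class Tape:
        def __init__(self, counter):
            if not 0 <= counter <= MAX_GENERATIONS:
                raise ValueError("counter outside representable range")
            self.counter = counter

    def build_copy(tape):
        """Return the copy's tape, or None if the machine must halt."""
        if tape.counter == 0:
            return None                        # tape used up: do nothing
        before = tape.counter
        tape.counter -= 1                      # the decrementer
        if tape.counter != before - 1:         # redundant check of the decrement
            return None                        # halt rather than replicate
        return Tape(tape.counter)

The redundant check is vacuous in ordinary software; the point is the
structure: any fault the comparator can catch stops replication rather than
unbounding it, and a counter larger than 24 cannot even be written onto the
tape.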

>Answer:  Grey goo.

>If this happens *after* we've got some kind of useful active shields going, I'm
>not too worried.  But if it happens in the next few years...I'm worried.

>>The more complex the machine, the more likely that
>>"accidents" which introduce "bugs" are to occur --and the more likely it is 
>>that those "bugs" will simply prevent the machine from working.

>Not necessarily.  Humans being humans are likely to attempt some modularity of
>design (makes the whole thing easier to understand, neh?), and it's possible for
>a module (say the "decrementer" module or the "don't eat that, it might be
>human" module) to fail without the whole failing.  Also, it's a totally normal
>human tendency to try to make machines as robust as possible...

It's also a completely normal human tendency to demand safeguards up the 
ying-yang when a new and dangerous technology is being tried out.  Just ask
Jeremy Rifkin.

>>...homo sapiens is living proof that "accidents" can and will lead to more 
>>advanced and capable replicators--but only over periods of billions, or at 
>>least millions, of years. 
>
>Ahem:  when left to happen by themselves.  Most of the "accidents" in the
>development of nanotech and AIAs will not be accidents at all -- (human?)
>intelligence will guide the process.

But nanomachines nevertheless cannot violate natural law.  And natural law is
not in favor of gray goo (we can't prove it's impossible, but it does appear
to be a difficult thing to engineer, let alone create by accident).

Do not be misled by biolife.  Biolife is the product of evolution.  The more
susceptible to evolution an organism is, the greater its chances of fathering
successful "offspring" species.  Life has evolved to be good at evolving.

We will not design our nanomachines to be good at evolving.  In fact, we will
strive for precisely the opposite quality.

>>This restriction
>>is not as onerous as it seems if you use "idiot-savant" AI's which are
>>brilliant molecular engineers AND OTHERWISE AS DUMB AS A CRAY-V to program
>>your nanomachines--and to check the programs offered by your fully-intelligent 
>>AIs for "trojan horses".
>
>Hmmmm... the Trojan horse got through in spite of Cassandra's warning, didn't
>it?  More to the point, this is an excellent place for Hofstadter's "record
>player breaking records."  There are, by the same argument, trojans that can
>get past any given security system or set of systems.

Nothing is 100% safe.  There are dangers in avoiding nanotechnology, which may
in the end be even more formidable than gray goo.  The question is, "What course
provides the least risk over all?"

>>And we need to find out how to reliably
>>cure and prevent the sort of "insanity" (or "antisocial behavior") which
>>drives (or permits) people to purposely seek to harm others.

>Again -- whose definition?

Do you want to argue over definitions or do you want to prevent gray goo?
Remember, nanotechnology promises indefinite life spans.  Think about that
carefully before you become overly concerned about what happens to you
during any relatively-short 50-year period.

>>May I suggest that "insanity" is any state of mind which engenders destructive
>>anti-survival behavior?  In light of nanotechnology, militarism and terrorism
>>are insane states of mind under this definition.

>That's culturocentric.  Samurai, for example, often performed anti-survival
>acts.  Ditto car bombers.  Are they insane?

Those aren't even good examples.  Of course they are/were insane.  The degree
to which you give people the benefit of the doubt is a function of what risks
you are willing to take.  Which do you fear most?

>>Both shields and goo have to overcome the "is it possible or practical?"
>>hurdle.  Why should this cause shields more difficulty than goo?

>Because goo only has to attack one thing.  Shields have to attack any
>hypothetical goo.

Pish and tosh.  You have it backwards, my friend.

The goo has to attack EVERYTHING--by definition.  And it does not have the
luxury of knowing everything about the active shield.  Most of the secrets of
the shield will be in its programming.  And the programming can be for 
assemblers, disassemblers and nanocomputers which only come into existence
once the goo is detected--produced by standard nanodevices in response to
any suspicious situation.  I challenge you to figure out what a program does 
given only the binary object code, when you can not get access to the 
instruction set definition!  This is similar to the problem we currently
face with DNA:  what does it MEAN?  And biolife's DNA uses the same 
"instruction set encodings" from amoebae to humans.  The active shield can
use a different code in each nanocomputer, encrypt it, and even vary the 
program from one nanocomputer to the next.  In fact, the shield can "live"
inside every nanocomputer in existence as an encrypted program which is
decrypted and run when conditions warrant it.  The shield can reliably 
identify friends using Zero Knowledge Proofs.  The shield would actually
be composed of billions of differently-optimized units.  The most successful
would reproduce in greater numbers to continue the fight.  Survival of the
fittest by natural deselection of what does not work.    
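
For the friend-or-foe step, a Fiat-Shamir identification round is the kind of
zero-knowledge check being invoked here.  A toy version, with numbers absurdly
too small to be secure, purely to show the shape of the protocol:

    # Toy Fiat-Shamir identification -- the prover convinces the verifier it
    # knows the secret s without ever revealing it.
    import random

    p, q = 127, 131                  # in reality: large secret primes
    n = p * q                        # public modulus
    s = 4321                         # prover's secret
    v = (s * s) % n                  # public "identity" registered with the shield

    def prove_identity(rounds=20):
        for _ in range(rounds):
            r = random.randrange(1, n)
            x = (r * r) % n                    # prover's commitment
            e = random.randrange(2)            # verifier's random challenge bit
            y = (r * pow(s, e, n)) % n         # response uses s only when e == 1
            if (y * y) % n != (x * pow(v, e, n)) % n:
                return False                   # an impostor fails about half the rounds
        return True

    print(prove_identity())          # True: identity shown, secret never sent

An impostor who doesn't know s has to guess each challenge in advance, so
twenty rounds leave about one chance in a million of bluffing through.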

Finally, the shield need only PREVENT NANOMACHINERY FROM WORKING.  This is
strategically a much easier goal than what the goo must accomplish.

Trying to study the shield by triggering its release is a good way to get
caught--just like the bank robbers who get indelibly inked shortly after
leaving the bank with the money bags.  The shield sets off lots of redundant
alarms and keeps on expanding the "interdicted" area until told to stop using
the proper Zero-Knowledge-Proof-verified command.

><Oh yeah?  Care to prove it?

>>See above.  And also, if virii and bacteria were gragu-class devices,
>>why are we still here?

>They aren't.

>And you still haven't proven anything by my book.

But the burden of proof is on you:  gray goo has never existed before, as
far as we know.

>[I think you are ascribing some magical powers to the goo that are not
> likely in a real nanotech device.  For example, it is almost certain 
> that the first assemblers (and most "industrial" assemblers thereafter)
> will get their raw materials from floating in a soup of them, and will
> not be able to take anything apart.  Assemblers that will live in 
> an artificial environment, needing to be "spoon fed", will be easier
> to design, will work faster, and will be *safer*--reason enough for
> people to design them that way.  
> --JoSH]

Amen.


Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.
Disclaimer: I do not speak for AT&T Paradyne.  They do not speak for me. 
______________________________Down with Li Peng!________________________________
Motto: If nanomachines will be able to reconstruct you, YOU AREN'T DEAD YET.