[sci.nanotech] What, me worry?

djo@pacbell.com (Dan'l DanehyOakes) (06/29/89)

This is getting *way* out of hand.

Part of the problem is that we're simply talking about two different things.
Alan is concerned with military gragu; I'm more worried (for now) about the
randomly created voracious molecule.

Since a lot of this dialog is based on that noncommunication, I'm dropping a
*lot* of points from this message (and there was much rejoicing).


>What about first proving that "gray goo" is possible?  But first, what IS
>gray goo, precisely?  

Simply:  an uncontrollably self-replicating nanodevice or group of nanodevices.

>There are indications that gray goo may not pose much of a threat simply
>because it is impossible--or at least only possible if "gray goo" is given
>a very "watered down" definition.  

I don't regard my definition as watered down; and I'd say it allows for the
possibility of real goo:  using Drexler's ever-present analogy to biostuff, the
existence of uncontrollably self-replicating things like bacteria and viruses
is proof of the possibility of goo.

>What is an AIA?  An "artificially intelligent assembler" I assume?  Have you
>read Engines Of Creation yet?  

Yes, I have.  And, for all the validity of its ideas, it remains a patchwork
of guesses and speculation.  By "AIA" I mean a nanodevice or system of 
nanodevices capable of replicating other objects, whether by program (directed
assembly) or by analysis (disassembly) and synthesis (self-directed assembly).
Pretty much by definition, such a device would be capable of self-replication.

>An assembler will be roughly the same size as a ribosome.   

An example of what I call guesses and speculation.  Until you've built the damn
thing -- or at least made a plausible design for it -- you really don't know how
big it will be.  And, no, I don't regard KED's "design-ahead" speculations on
the assembler as anything like plausible; he makes assumption after assumption to
come up with a size he's happy/comfortable with.  We just plain *DO*NOT*KNOW*
what's involved in building a doohickey of such flexibility.  We've never done
it on the macro scale.  Nor do such general assemblers exist in nature; 
ribosomes are *very* limited in what they can build -- they build essentially
linear objects (chains of amino acids) from an extremely small "alphabet" of 
components.  Even if you allow that the assembler is working solely with atoms
and not groups of atoms (such as amino acids), there are more kinds of atoms
than there are types of amino acids used by ribosomes...and atoms with the same
valence will be difficult for an assembler to tell apart by the "purely 
chemical" means so dear to KED.

>It simply cannot be possible to store enough information in
>such a small space (around 10**6 atoms) to have an artificially intelligent
>device

Right; even if each atom carried a full information bit, that would be on
the order of a megabit -- about 125 Kbytes -- total.  Small.
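
A back-of-the-envelope check (Python; the range of bits-per-atom figures
is my own assumption, not anything out of EOC):

# Rough information capacity of a ~10**6-atom device under a few
# assumed storage densities (assumption: 0.5 to 2 bits per atom).
atoms = 10**6
for bits_per_atom in (0.5, 1, 2):
    kbytes = atoms * bits_per_atom / 8 / 1024
    print(f"{bits_per_atom} bit(s)/atom -> about {kbytes:.0f} Kbytes")
# At 1 bit/atom that's ~122 Kbytes -- tiny next to any plausible "AI".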

But I don't buy that assemblers are *necessarily* that small.  And the AIA
is the integrated system, not a single nanodevice.

>I suspect that what you really mean by "AIA" is "a system of assemblers and
>nanocomputers running expert-system software for molecular mechanics."  

Possibly.

>But given nanotechnology, what environment will humans
>be able to survive in?  And active shields?  And what about gray-goo counter-
>measures which temporarily "kill" biologic life--but not beyond the ability
>of nanomachines to perform "resurrections?"

To the first, how should I know?  I imagine it'll be environments with
food and oxygen and a reasonable temperature range.  If nanodevices change
that, I'll be surprised; more likely they'll bring an environment with them.

>Then rest your mind.  This is precisely the period when the greatest care will
>be taken, if the recent history of biotechnology is any guide.  And the early
>years will also coincide with the time when nanomachines will be the most
>dependent on special environments for their survival.  This situation will
>be quite deliberate, and probably forced on us by the laws of nature.

The first point fails to convince me; I think you're underestimating human
stupidity.  The second is somewhat more comforting.

>Software is another matter...but that may be a moot point also in light of the
>fact that neural nets are not programmed--just taught.

...maybe...

>Have you ever experimented with the game of "Life?"  

Yes.

>If the gray goo devices
>are programmed to blindly eat everything that they encounter, then they are
>very likely to quickly eat themselves and/or their energy supply--long before
>they have "finished" their job.  

Possible.  But what counts as "finished"?  Will they wipe themselves out before
they cripple what they've begun destroying?  If it eats my legs and the bottom
two feet of my house and car (and wife and kids!), I don't really care about it
wiping itself out at that point.
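
For what it's worth, here's a toy model (Python; entirely my own
construction -- the rates and rules are made up) of blind replicators on a
finite substrate.  They do wipe themselves out, but only *after* the
substrate is gone, which is exactly my point:

# Toy model: blind replicators eating a finite substrate.  All numbers
# are arbitrary; this only illustrates the order of events.
def run(substrate=10_000, meals_to_divide=3):
    replicators = [0]                    # meals eaten so far, per replicator
    steps = 0
    while replicators and substrate > 0:
        next_gen = []
        for meals in replicators:
            if substrate == 0:
                break                    # latecomers this step starve
            substrate -= 1
            meals += 1
            if meals >= meals_to_divide:
                next_gen += [0, 0]       # divide into two hungry offspring
            else:
                next_gen.append(meals)
        replicators = next_gen
        steps += 1
    return steps, len(replicators)

steps, survivors = run()
print(f"substrate gone after {steps} steps; {survivors} replicators starving")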

>It's also a completely normal human tendency to demand safeguards up the 
>ying-yang when a new and dangerous technology is being tried out.  Just ask
>Jeremy Rifkin.

Please don't mention that name.  I just ate.

Yeh, it *does* seem possible.  Here, where protestors can shut down a lab or
force it to take all reasonable precautions and some not-so-reasonable ones.

But what about a place (the USSR, for example) where lab work is not only not
to be interrupted by the people's unreasonable demands for "safety," but in fact
not even known to the public?

>Nothing is 100% safe.  There are dangers in avoiding nanotechnology, which may
>in the end be even more formidable than gray goo.  The question is, "What course
>provides the least risk over all?"

Agreed; and I think avoiding nanotech is more dangerous than pursuing it.  
(Always assuming it can "really" be done at all.)

>>>And we need to find out how to reliably
>>>cure and prevent the sort of "insanity" (or "antisocial behavior") which
>>>drives (or permits) people to purposely seek to harm others.  <----------
									   |
>>Again -- whose definition?						   |
									   |
>Do you want to argue over definitions or do you want to prevent gray goo? |
									   |
I want to protect human beings.  And I think your scenario above -----------
violates people's right to mental privacy, among other things.

>Remember, nanotechnology promises indefinite life spans.  

Yeah?  I've heard promises before.  Again I say "prove it."

>>That's culturocentric.  Samurai, for example, often performed anti-survival
>>acts.  Ditto car bombers.  Are they insane?
>
>Those aren't even good examples.  Of course they are/were insane.  The degree
>to which you give people the benefit of the doubt is a function of what risks
>you are willing to take.  Which do you fear most?

The car bombers, obviously.  But neither can I call them "insane," and I repeat
that calling them insane is culturocentric.  They're sane, in both cases, by the
standard of their own societies.

>>Because goo only has to attack one thing.  Shields have to attack any 
>>hypothetical goo.
>
>Pish and tosh.  You have it backwards, my friend.

Absolutely right.  I confused myself.  (I'm frequently groggy when doing this
news stuff.)  That should have read:

"Because goo doesn't have to be selective about what it attacks.  Sheilds have
to attack any hypothetical goo and *only* goo."

(Though goo *can* be selective about what it attacks.  Imagine stuff that 
attacks only the vitreous fluids of human eyes... Yuccch...)

>But the burden of proof is on you:  gray goo has never existed before, as
>far as we know.

...but neither has human-made nanotech.  If you allow biolife as proof that
nanotech is workable, then you have to accept highly successful bacteria as
proof that gray goo is workable...


Roach

alan@oz.nm.paradyne.com (Alan Lovejoy) (06/30/89)

In article <Jun.28.16.17.30.1989.11650@athos.rutgers.edu> djo@pacbell.com (Dan'l DanehyOakes) writes:
>This is getting *way* out of hand.

This is a healthy debate.  I wish more people would participate.

>>What about first proving that "gray goo" is possible?  But first, what IS
>>gray goo, precisely?  
>
>Simply:  an uncontrollably self-replicating nanodevice or group of nanodevices.

What does "uncontrollably" mean? Does it mean that the nanomachines can use
almost anything as fuel?  Does it mean that they can disassemble almost    
anything and/or use almost any molecule as building material?  Does it mean
that they can withstand most common forms of radiation (e.g., sunlight, 
background radiation)? Does it mean that they tolerate most common chemical
environments?  Temperatures?  Does it mean they are immune to interference 
from bacteria and immune system cells?

Nanodevices which depend on a special environment, special fuels and special
building materials not randomly available simply cannot get truly "out of
control."  Nanomachines which can operate anywhere, eat anything, and use any 
common building material are probably a relatively HARD design problem compared
to more limited devices.  Until we have a VERY capable active shield technology
that we have great confidence in, it would be foolhardy--and unnecessary--for
us to create nanomachines that could survive and operate in the biosphere.

As I have pointed out before, there already ARE nanomachines that have been
"released" into the biosphere.  They're replicating out of control even as
you read this.  They were designed to evolve quickly, and they do it rather
well.  They're called bacteria and viruses.  

>>There are indications that gray goo may not pose much of a threat simply
>>because it is impossible--or at least only possible if "gray goo" is given
>>a very "watered down" definition.  
>
>I don't regard my definition as watered down; and I'd say it allows for the
>possibility of real goo:  using Drexler's ever-present analogy to biostuff, the
>existence of uncontrollably self-replicating things like bacteria and viruses
>is proof of the possibility of goo.

And the existence of immune systems in flora and fauna is proof of the 
viability of active shields.

Could nanomachines "break" or have "bugs" in their programming?  Of course.
Do they have to be designed so that the probability that such breaks and 
programming bugs will lead to still-functional machines is anywhere near as
high as it is for biolife?  NO!!!!!!!!   Biolife is designed to MAXIMIZE the
probability that breaks and "software bugs" result in still-functional units.
And noticeable evolution STILL takes millennia.  No one can GUARANTEE that 
an accident leading to goo is impossible.  But we can radically reduce the
risk compared to what biolife presents by using known engineering technology.
For instance, multiply-redundant systems with multiply-redundant error-detection
and correction systems could be used--AND REQUIRED.
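
A sketch of the kind of engineering I mean -- triple modular redundancy
with bitwise majority voting, the same trick flight computers use.  (The
16-bit "instruction" and the flipped bit are invented for illustration;
Python.)

# Triple modular redundancy: any single corrupted copy gets outvoted.
def vote(a, b, c):
    return (a & b) | (b & c) | (a & c)    # per-bit majority of three copies

good = 0b1011001011100001                 # a 16-bit "assembly instruction"
copies = [good, good ^ (1 << 10), good]   # one copy takes a bit-flip "hit"
assert vote(*copies) == good              # the error is voted away
print("corrected:", bin(vote(*copies)))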

>>Have you read Engines Of Creation yet?  

>Yes, I have.  And, for all the validity of its ideas, it remains a patchwork
>of guesses and speculation.  

And your gray-goo scenario, in contrast, is a well-documented scientific paper
that proves beyond a shadow of a doubt that we're all doomed to become a snack
for Gray Goo?  

Ahem.  

This is all just speculation.  We have to start somewhere.  It's by speculation
and discussion and research and deep thought and cooperation that we will 
advance from speculation to scientific fact.  We aren't there yet.  Not you, 
not me and not Drexler.  

So do you think, perhaps, that we should just forget about gray goo until we can
cross all the t's and dot all the i's on the definitive scientific description 
of nanotechnology?  I don't.  I think that, in view of our relative ignorance, 
we should neither be overly alarmist nor overly reassuring at this point. 

>>What is an AIA?  An "artificially intelligent assembler" I assume?  

>By "AIA" I mean a nanodevice or system of 
>nanodevices capable of replicating other objects, whether by program (directed
>assembly) or by analysis (disassembly) and synthesis (self-directed assembly).
>Pretty much by definition, such a device would be capable of self-replication.

Why do you call this an "AIA"?  What you describe sounds like a system of
assemblers and nanocomputers.  I don't see what "AI" has to do with it.

>>An assembler will be roughly the same size as a ribosome.   

>An example of what I call guesses and speculation.  Until you've built the damn
>thing--or at least made a plausible design for it -- you really don't know how
>big it will be.  And, no, I don't regard KED's "design-ahead" speculations on
>the assembler as anything like plausible; he makes assumption after assumption to
>come up with a size he's happy/comfortable with.  We just plain *DO*NOT*KNOW*
>what's involved in building a doohickey of such flexibility.  We've never done
>it on the macro scale.  Nor do such general assemblers exist in nature; 
>ribosomes are *very* limited in what they can build -- they build essentially
>linear objects (chains of amino acids) from an extremely small "alphabet" of 
>components.  Even if you allow that the assembler is working solely with atoms
>and not groups of atoms (such as amino acids), there are more kinds of atoms
>than there are types of amino acids used by ribosomes...and atoms with the same
>valence will be difficult for an assembler to tell apart by the "purely 
>chemical" means so dear to KED.

You are correct that we have no examples of true generic assemblers.  It may
be that the first generation(s) of assemblers will NOT be fully generic--nor
even very much more capable than a ribosome.  You are correct that we don't
really know how big they will be.  Therefore, the most JUSTIFIED assumption
is that they will ROUGHLY be the same size as a ribosome--since the ribosome
is the closest thing to an assembler that we have seen.  Whatever their size,
they will probably NOT contain very many more atoms than are absolutely
necessary to perform their function--programmed molecular assembly. Unless
the number of atoms necessary for the molecular assembly function is equal
to or greater than the number necessary for a complete computer system.
And that obviously depends on how you define "complete computer system."
It's not likely to be "complete" enough to deserve to be called "AI." 

Drexler claims he will be ready to PUBLISH his design for an assembler Real
Soon Now.  He thinks he can actually build one by the year 2000. [Source:
January 1989 issue of OMNI Magazine, interview of KED.]  Seems to me that 
Drexler knows FAR more about this subject than you or I do.

Atoms with the same valence tend to have similar chemical properties.  The
more similar the chemical properties, the less the distinction between atoms
matters (for non-nuclear applications).  And if the chemical properties are
not identical, then it is possible in principle to tell the atoms apart.
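
To put some rough numbers on that (values quoted from memory and only
approximate): carbon and silicon both have valence 4, yet they differ
plenty in ways a probe could measure.

# Same valence, different "handles".  Approximate literature values.
properties = {
    #        covalent radius (pm), Pauling electronegativity
    "C":  (77, 2.55),
    "Si": (111, 1.90),
}
for atom, (radius_pm, chi) in properties.items():
    print(f"{atom:2s}: radius ~{radius_pm} pm, electronegativity {chi}")
# A ~40% difference in radius is enormous at the scale of a probe tip.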

>>It's also a completely normal human tendency to demand safeguards up the 
>>ying-yang when a new and dangerous technology is being tried out.  Just ask
>>Jeremy Rifkin.
>
>Please don't mention that name.  I just ate.
>
>Yeh, it *does* seem possible.  Here, where protestors can shut down a lab or
>force it to take all reasonable precautions and some not-so-reasonable ones.
>
>But what about a place (the USSR, for example) where lab work is not only not
>to be interrupted by the people's unreasonable demands for "safety," but in fact
>not even known to the public?

<<Switch to Voice Of Ronald Reagan>>

Well, we could threaten to nuke them if they don't behave... :-) :-)

<<Switch back to default voice>>

Just how are we going to prevent certain countries from doing whatever they
please?  Perhaps if we SCARE them into being safe (make science fiction 
movies with some irresistible-to-the-leadership political message that
depicts in grisly detail the dangers of gray goo).  In a way, this is a
corollary to the terrorism problem.  

The other option is to start an ACCELERATED program to develop AI and
nanotechnology so that we can have active shields in place before the time
certain other countries even start experimenting with their first human-designed
assemblers.  But somehow this option doesn't seem too likely to be tried.  And
it's not certain it would work if it were. 

>>>>And we need to find out how to reliably
>>>>cure and prevent the sort of "insanity" (or "antisocial behavior") which
>>>>drives (or permits) people to purposely seek to harm others.  <----------
>									   |
>>>Again -- whose definition?						   |
>									   |
>>Do you want to argue over definitions or do you want to prevent gray goo? |
>									   |
>I want to protect human beings.  And I think your scenario above -----------
>violates people's right to mental privacy, among other things.

And putting criminals in jail doesn't?  Isn't the prison system supposed to
"rehabilitate" people?  In other words, make different people out of them?
If that's not an attempt to manipulate minds, what is?  Sometimes, you have
to choose the lesser of two evils.  The time for this choice approaches
rapidly...

>>>That's culturocentric.  Samurai, for example, often performed anti-survival
>>>acts.  Ditto car bombers.  Are they insane?
>>
>>Those aren't even good examples.  Of course they are/were insane.  The degree
>>to which you give people the benefit of the doubt is a function of what risks
>>you are willing to take.  Which do you fear most?
>
>The car bombers, obviously.  But neither can I call them "insane," and I repeat
>that calling them insane is culturocentric.  They're sane, in both cases, by the
>standard of their own societies.

All our ideas are culturocentric.  So what?  If someone is breaking into my
house with a gun, I'm going to shoot him.  And I'm not going to worry about
the fact that the right of self-defense--or my desire to keep on living--is
culturocentric.

Our concept of property is culturocentric.  Should you let aborigines camp
out in your back yard, cut down your trees and light fires in your garage
just because they have no concept of property?

Take action to ensure your survival--or die.  We can NOT afford to think of
ourselves as unrelated groups of independent societies, cultures and
nations for very much longer.  What the Soviets do at Chernobyl affects
ALL OF US.

>>>Because goo only has to attack one thing.  Shields have to attack any 
>>>hypothetical goo.
>>
>>Pish and tosh.  You have it backwards, my friend.
>
>Absolutely right.  I confused myself.  (I'm frequently groggy when doing this
>news stuff.)  That should have read:
>
>"Because goo doesn't have to be selective about what it attacks.  Sheilds have
>to attack any hypothetical goo and *only* goo."
>
>(Though goo *can* be selective about what it attacks.  Imagine stuff that 
>attacks only the vitreous fluids of human eyes... Yuccch...)
>
>>But the burden of proof is on you:  gray goo has never existed before, as
>>far as we know.
>
>...but neither has human-made nanotech.  If you allow biolife as proof that
>nanotech is workable, then you have to accept highly successful bacteria as
>proof that gray goo is workable...

And you have to accept immune systems as proof that shields are workable.

Shields do not have to "attack" anything.  They merely have to immobilize 
nanomachinery.  The hard problem for the shield is deciding when to go into 
action--and where.  Bioimmune systems are triggered by "distress signals."
This suggests seeding the world with nanodevices whose only job is to detect 
suspicious behavior--and then to "raise the alarm."  When a nanocomputer
receives a distress signal, it directs the assembly of active shield units
whose job it is to shut down all nanomachinery in the immediate area.

The "distress signal" might be either chemical or electromagnetic--probably
both.  Nanomachinery can be immobilized both by jamming the communications
between nanocomputers and assemblers/disassemblers, and by "gumming up"
the mechanics using molecules roughly analogous to antibodies.  Also, energy
supplies can be pirated, blocked or destroyed.  A shield which simply tried
to eat the energy supplies up as fast as possible would cause the goo 
considerable difficulty.
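
A crude sketch of that detect/alarm/respond loop (Python; every name,
threshold, and data structure here is my own invention, purely to make
the control flow concrete):

# Sentinels watch a neighborhood; a distress signal triggers local
# shield assembly.  Illustrative sketch made runnable, nothing more.
SUSPICIOUS_RATE = 100     # disassembly events/sec that trip the alarm

class Sentinel:
    def __init__(self, region):
        self.region = region
    def check(self, disassembly_rate):
        if disassembly_rate > SUSPICIOUS_RATE:
            return ("DISTRESS", self.region)   # chemical and/or EM broadcast
        return None

def respond(alarm, build_shields):
    _, region = alarm
    for jam, gum in build_shields(region):     # units built on demand, locally
        jam()                                  # jam nanomachine comms
        gum()                                  # gum up the mechanics

def build_shields(region):                     # stand-in for local assembly
    return [(lambda: print(f"jamming comms in {region}"),
             lambda: print(f"gumming up machinery in {region}"))]

alarm = Sentinel("sector 7").check(disassembly_rate=250)
if alarm:
    respond(alarm, build_shields)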

Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.
Disclaimer: I do not speak for AT&T Paradyne.  They do not speak for me. 
______________________________Down with Li Peng!________________________________
Motto: If nanomachines will be able to reconstruct you, YOU AREN'T DEAD YET.

[A shield which tried to eat the energy supplies as fast as possible would
 be awwwwwfully close to being goo itself.  In fact, that probably constitutes
 the strongest case against active shields--Quis custodiet ipsos custodes?
 --JoSH]

trebor@uunet.uu.net (Robert J Woodhead) (06/30/89)

In article <Jun.28.16.17.30.1989.11650@athos.rutgers.edu> djo@pacbell.com (Dan'l DanehyOakes) writes:
>Alan writes:
>>But the burden of proof is on you:  gray goo has never existed before, as
>>far as we know.
>...but neither has human-made nanotech.  If you allow biolife as proof that
>nanotech is workable, then you have to accept highly successful bacteria as
>proof that gray goo is workable...

If anything, the existence of the HIV virus proves that simple ``devices''
can replicate themselves and actively attack specific sites in the target,
eventually causing death.  And, if you think about it, many of the attempts
to cope with AIDS that involve serious bioengineering [e.g., spoofing the
HIV binding sites] can be considered rudimentary shields.

It seems clear to me that the first real ``nanomachines'' that we create are
going to be protein-based devices; in effect, engineered viruses.  This
will allow the designer to borrow the elaborate machinery inside each cell
and bend it to his/her purposes.  I would venture to guess that the first
nanomachines will be built for the purposes of gene therapy.

Consider, for example, the benefits to be gained if we can fix a defective
gene and alleviate diabetes.  Or do it in vitro or in utero (or pre-
conception by treating the parent(s)) and banish Tay-Sachs.

-- 
(^;-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-;^)
Robert J Woodhead, Biar Games, Inc.   !uunet!biar!trebor | trebor@biar.UUCP
  ``I can read your mind - right now, you're thinking I'm full of it...''

alan@oz.nm.paradyne.com (Alan Lovejoy) (07/01/89)

>[A shield which tried to eat the energy supplies as fast as possible would
> be awwwwwfully close to being goo itself.  In fact, that probably constitutes
> the strongest case against active shields--Quis custodiet ipsos custodes?
> --JoSH]

Depends on what the "energy supplies" are precisely.  If they're protein, then
"eating them up as fast as possible" is definitely NOT the strategy we want
the shield to pursue.  However, it may be a perfectly sensible strategy in
some cases.  Another related tactic would be to "lock up" the energy-supplying
molecules by surrounding them with useless molecules that require more energy
to remove than is contained in the molecule(s) they are "protecting".  The 
problem with this tactic is where the supply of "useless molecules" is
supposed to come from.  
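
The break-even condition for that tactic is easy to state (my own
formulation, Python, with made-up energy units):

# Caging a fuel molecule only pays when breaching the cage costs the goo
# more energy than the fuel yields.  Numbers below are invented.
def lockup_pays(fuel_yield, breach_cost):
    return breach_cost > fuel_yield

print(lockup_pays(fuel_yield=30, breach_cost=50))   # True: net loss for the goo
print(lockup_pays(fuel_yield=30, breach_cost=10))   # False: goo still profits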

You do have to be able to trust your shield.  You do your best to design it
so that it can't betray you.  And then you have to decide whether going without
a shield is a greater risk than releasing it.  We DO have options.  It's just
that none of them are surefire bets.  But that's life, ain't it?

Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.
Disclaimer: I do not speak for AT&T Paradyne.  They do not speak for me. 
______________________________Down with Li Peng!________________________________
Motto: If nanomachines will be able to reconstruct you, YOU AREN'T DEAD YET.