[sci.nanotech] Whoo-ha

djo@pacbell.com (Dan'l DanehyOakes) (07/04/89)

Quoth Alan:

>This is a healthy debate.  I wish more people would participate.

Hmmm.  There *is* that, yes.

>>>What about first proving that "gray goo" is possible?  But first, what IS
>>>gray goo, precisely?  
>>
>>Simply:  an uncontrollably self-replicating nanodevice or group of nanodevices.
>
>What does "uncontrollably" mean? Does it mean that the nanomachines can use
>almost anything as fuel?  Does it mean that they can disassemble almost    
>anything and/or use almost any molecule as building material?  Does it mean
>that they can withstand most common forms of radiation (e.g., sunlight, 
>background radiation)? Does it mean that they tolerate most common chemical
>environments?  Temperatures?  Does it mean they are immune to interference 
>from bacteria and immune system cells?

Any of the above.  If you can't control it, for *whatever* reason, it's
uncontrollable.  I'm using a fairly inclusive definition, and for good reason;
as we've both agreed, the stuff doesn't exist yet, so I don't want to limit
the definition before we even get to having something to define:*)

>Nanomachines which can operate anywhere, eat anything, and use any 
>common building material are probably a relatively HARD design problem compared
>to more limited devices.  

In fact, I'd say "eat anything" is by definition an impossible design problem
-- Goedel strikes again.  There's going to be *something* any given disassembler
or system of disassemblers can't get its hooks into.

However, that "*something*" may be something so bizarre/abstruse/whatever that
it simply doesn't exist in the real world in any significant quantity (like
at all...)

W/ regard to bacteria and viruses:  yes, they pretty much fit my definition of
goo.  But (and here's the rub) an active shield system that has had billions of
years of evolution time to fight 'em is still less than 100% effective.  It's
easy for people in the U.S. to forget, but without antibiotics, vaccines, and
other wonders of modern medicine, childhood mortality would still approach 50%.
Further, there are the odd breakdowns such as autoimmune disorders, cancers,
and simple allergies to what should be healthy or harmless substances.  The 
human immune system is *not* what I call an extremely effective active-shield 
system, or proof that active shield systems "work."  (I mean:  would you be
satisfied that we were "protected" if you knew that a goo breakout in your
general area had even a 25% chance of killing you, or your loved ones?)

>And the existence of immune systems in flora and fauna is proof of the 
>viability of active shields.

What I said.

>And your gray-goo scenario, in contrast, is a well-documented scientific paper
>that proves beyond a shadow of a doubt that we're all doomed to become a snack
>for Gray Goo?  

Heavens, no.  It's just another example of what we can come up with by the same
techniques of guesswork and speculation.

>So do you think, perhaps, that we should just forget about gray goo until we can
>cross all the t's and dot all the i's on the definitive scientific description 
>of nanotechnology?  I don't.  I think that, in view of our relative ignorance, 
>we should neither be overly alarmist nor overly reassuring at this point. 

Jeezus, no, I don't either.  To paraphrase Santayana, those who don't foresee
the future are condemned not to be repeated in it.

But I'd rather err on the side of caution.  I don't regard that as "alarmist;"
there is no such thing as "too cautious" when a single drastic mistake could be
as bad as an "accidental first strike" in Global Chicken.

>Why do you call this an "AIA"?  What you describe sounds like a system of
>assemblers and nanocomputers.  I don't see what "AI" has to do with it.

AI is too vaguely defined.  Sigh.  I'm going on the assumption that anything
capable of (a) deciding what to disassemble and replicate, (b) disassembling
it, (c) analyzing the disassembly, (d) storing the analysis, and (e) replicating
is certainly smarter than any computer we have now...
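
Just to make that concrete, here's the bare skeleton of the (a)-(e) loop,
sketched as runnable pseudocode.  Every name in it is mine, not Drexler's or
Alan's, and each one-line stub stands in for what would really be a monstrous
engineering problem:

    def choose_target(environment):
        # (a) Decide what to disassemble and replicate: grab the "richest" object.
        return max(environment, key=lambda obj: obj.get("value", 0))

    def disassemble(target):
        # (b) Take the target apart into its component molecules.
        return list(target.get("molecules", []))

    def analyze(components):
        # (c) Work out how the components were put together.
        return {"parts": components, "count": len(components)}

    def store(memory, blueprint):
        # (d) Record the analysis for later use.
        memory.append(blueprint)

    def replicate(blueprint, feedstock):
        # (e) Build a copy out of whatever feedstock is handy -- the hard part.
        return {"molecules": blueprint["parts"], "value": 0}

    def aia_cycle(environment, memory, feedstock):
        # One pass of the whole (a)-(e) loop.
        target = choose_target(environment)
        parts = disassemble(target)
        blueprint = analyze(parts)
        store(memory, blueprint)
        return replicate(blueprint, feedstock)

Five stubs on paper, five open research problems in the lab -- which is exactly
why I'd call anything that actually does them "smart."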

>You are correct that we don't
>really know how big they will be.  Therefore, the most JUSTIFIED assumption
>is that they will ROUGHLY be the same size as a ribosome--since the ribosome
>is the closest thing to an assembler that we have seen.  

I don't think *any* assumption is justified at this point.  Hell, it may wind
up being much *smaller* than a ribosome -- I understand that the Japanese are
doing wonderful things with miniaturizing atoms:*)

(Or:  "Bell labs is working on faster photons.")

>Drexler claims he will be ready to PUBLISH his design for an assembler Real
>Soon Now.  

Uh-huh.  I'll believe it when I see it.  And I mean the assembler -- not the
design.  (BTW, what level of design does he claim to be working at?  Is he
claiming to have designed this sucker down to the atom level, or just what
general components need to go where?)

>Seems to me that 
>Drexler knows FAR more about this subject than you or I do.

About *that*, I have no doubt whatsowhoever...

>Just how are we going to prevent certain countries from doing whatever they
>please?  Perhaps if we SCARE them into being safe (make science fiction 
>movies with some irresistible-to-the-leadership political message that
>depicts in grisly detail the dangers of gray goo).  In a way, this is a
>corollary to the terrorism problem.  

Hey, it worked with the terrors of atomic mutation, right?  Everybody knows
that the giant bug movies of the '50s are the *real* reason we haven't had a
nukewar to speak of...

"Quick, Henry, the Flit!"

>>I want to protect human beings.  And I think your scenario above
>>violates people's right to mental privacy, among other things.
>
>And putting criminals in jail doesn't?  Isn't the prison system supposed to
>"rehabilitate" people?  In other words, make different people out of them?

(a) Why do you assume I support the penal system as we have it?

(b) No, I don't think it's supposed to do that.  If it is, it certainly doesn't
do a very good job of it.  No, like the rest of the penal justice system, it is
based, not on prevention (the police are *not* an effective active-shield
system, alas), not on rehabilitation, but on revenge.  The penal system is
simply a more "civilized" version of talion law.  (Which is why I laugh-cry at
the attempts of misguided judges to keep prison populations to a nominal
"reasonable" level.  If jail is supposed to be a punishment, why should the
judge worry about whether it's comfortable for the criminal...?)

(c) No, I don't support the penal system as we have it...

>If that's not an attempt to manipulate minds, what is?  Sometimes, you have
>to choose the lesser of two evils.  

Horsepuckey.  The lesser of two evils is *still* an evil.  Find something good
and choose that.

>All our ideas are culturocentric.  So what?  If someone is breaking into my
>house with a gun, I'm going to shoot him.  And I'm not going to worry about
>the fact that the right of self-defense--or my desire to keep on living--is
>culturocentric.
>
>Our concept of property is culturocentric.  Should you let aborigines camp
>out in your back yard, cut down your trees and light fires in your garage
>just because they have no concept of property?

Not necessarily (though if there were actual Alamedan aborigines who were
likely to do that, I might reconsider).  But absolutely I would *not* try to
impose on them my concept of property.  I might make it clear to them that it
was not safe to go certain places and do certain things, but I would *not*
feel that it was necessary for them to agree that I had a right to do so.  They
would be (to my way of thinking) perfectly justified in hating me, or thinking
I was crazy, for keeping them from burning down my garage; and even justified
in trying to kill me for it (and myself justified in trying to prevent them from
killing me).

I regard culturocentrism as dead wrong.  Fortunately for us all, I also
recognize that attitude as cultural and won't try to impose my cultural
relativism on anyone:  but don't expect your culturocentrism to carry any
weight with me.

>Take action to insure your survival--or die.  We can NOT afford to think of
>ourselves as unrelated groups of independent societies, cultures and
>nations for very much longer.   What the Soviets do in Chernobyl affects
>ALL OF US.

As noted above, cultural relativism doesn't mean sitting back and saying grace
while the cannibals are heating the stew pot.  It *does* mean not trying to
force them out of cannibalism because *your* ethos regards cannibalism as wrong.

>This suggests seeding the world with nanodevices whose only job is to detect 
>suspicious behavior--and then to "raise the alarm."  When a nanocomputer
>receives a distress signal, it directs the assembly of active shield units
>whose job it is to shut down all nanomachinery in the immediate area.

Including biolife, which you keep citing as a form of nanomachinery?

Thanks, but no thanks.  I'm not happy with an active shield that kills me,
even "temporarily".  (Culturocentric note:  I'm a soulist and don't think
that a reconstructed/revivified "me" would necessarily be me.)
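
Just so it's clear where my objection bites, here's the sentinel-and-shield
scheme as I read it, boiled down to a toy sketch.  Every name and threshold in
it is mine (Alan didn't specify any), and the part I can't live with is the
shutdown step, which has no way of telling a rogue assembler from a
mitochondrion:

    SUSPICIOUS_RATE = 1000  # replications per hour; an arbitrary number I made up

    def looks_suspicious(device):
        # The sentinel's whole job: flag anything replicating "too fast."
        return device["replication_rate"] > SUSPICIOUS_RATE

    def raise_alarm(nanocomputer, device):
        # The distress signal: tell the local nanocomputer where the trouble is.
        nanocomputer["alarms"].append(device["location"])

    def shield_response(local_machinery):
        # Shut down *all* nanomachinery in the area.  Note there is no test
        # here for "machine or mitochondrion" -- which is exactly my complaint.
        for machine in local_machinery:
            machine["active"] = False
        return len(local_machinery)

    def patrol(sentinel_reports, nanocomputer, local_machinery):
        # One sweep: sentinels report, alarms accumulate, the shield fires if any.
        for device in sentinel_reports:
            if looks_suspicious(device):
                raise_alarm(nanocomputer, device)
        if nanocomputer["alarms"]:
            return shield_response(local_machinery)
        return 0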




Roach, roach, roach yer boat