[sci.nanotech] Optimism, pessimism, and the active shield problem

djo@pacbell.com (Dan'l DanehyOakes) (06/16/89)

In article <8906150841.AA06249@athos.rutgers.edu> alan@oz.nm.paradyne.COM (Alan Lovejoy) writes:

>And yet still we survive.  Your arguments that we cannot survive are just about
>as impressive as the "proof" that bees cannot fly or that rockets cannot reach 
>orbit.  Is there a problem?  Yes!  Is the situation hopeless? No!

On the other hand, your arguments that the problem is soluble are about as valid
as the argument that because we haven't had a full-scale nukewar yet we will
never have one.  (In fact, we _have_ had one; in the United States' last 
declared war, we dropped our entire nuclear arsenal on Japan.)  Survival is not
a nanotechnology problem; it is, as you say, a problem of human intelligence.

I agree with you that mr. offut is overly pessimistic.  On the other hand, I 
equally believe that you and most other people who follow in Dr Drexler's 
admittedly-impressive footsteps are overly optimistic.  Realism, I suggest, lies
somewhere in between.

Your argument is based on the attempted negation of three overly-pessimistic 
assumptions:

>1) Equal effort will be expended towards developing gragu and active shields.
>
>The first assumption is probably not true, because most people oppose the
>goals of gragu.  Gragu will not be an accident.  

Pish and tosh.  Gray goo is much more likely as an accident than as a deliberate
development.  As you point out further on, there is little military application
for an indiscriminate and uncontrollable destroyer.

What worries me is the parallel development of assemblers and AI.  An 
artificially-intelligent assembler (AIA) may or may not be conscious.  If it is,
and it has any desire at all to reproduce, we are in big trouble.  Yes, I know
about KED's containment system, and I can suggest three different ways for a
sufficiently intelligent AIA to break out of it without setting off the 
microbombs the whole thing is based on, and dozens of other ways that it can
ensure that if the microbombs *do* go off the explosion will *not* be contained.
So can you, if you think about it from the AIA's point of view instead of 
through wishful thinking.

If it is not, then it will only make what it is instructed to make.  One of the
things we will be instructing it to make is more AIAs.  After all, they're damn
useful.  But a very small error in coding the AIA "tape" could result either in
failure to stop reproducing, or in production of AIAs that go on reproducing
endlessly.  That is, grey goo.
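
To make the worry concrete, here is a deliberately toy sketch.  The "tape"
format and the copy-count scheme below are invented purely for illustration --
they are not taken from Drexler or from any real design -- but they show how
small the difference can be between bounded and endless reproduction:

    # Toy sketch only: an invented replication "tape" where a small slip in
    # the coding removes the stopping condition.

    def replicate_intended(tape):
        label, count = tape.split(":")
        n = int(count)
        if n <= 0:
            return []                        # bounded: the line dies out
        return ["%s:%d" % (label, n - 1)]    # each child inherits a smaller count

    def replicate_buggy(tape):
        label, count = tape.split(":")
        n = int(count)
        if n <= 0:
            return []
        return ["%s:%d" % (label, n)]        # the slip: the count never falls,
                                             # so reproduction never stops

Feed either one "AIA:3" and iterate: the intended version stops after three
generations; the buggy one runs forever.  A few characters' difference in the
"tape" is the difference between a useful tool and grey goo.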

>Inimical people will have
>to create it.  Only the highly insane will consider releasing an 
>indiscriminately-destructive goo on the world.  

Just as, I presume, only the highly insane would consider an all-out nukewar
"survivable" with "acceptible" losses.  But such people exist, and are in
positions of power, and fund most of the interesting research in the world these
days.

(This is probably my biggest concern with Drexler's arguments -- he lives in a
political fantasyland where the U.S. are the "good guys" and as long as we get
the AIA breakthrough first the world will be a safe and happy place.)

Also, do you consider terrorists insane?  Whether or not you do, grey goo would
make one *HELL* of a terror weapon.

>Most people who put any effort
>into gragu will intend to survive their creation.  And most of those will
>only intend to release the goo for purposes of retaliation to being attacked
>by someone else's gragu.  Mutual assured destruction all over again.

Uh-huh, and the "failsafe" problem all over again:  accidental launches, or
the perception of a launch that hasn't really happened, will result in a
retaliatory launch -- which will draw the other side's retaliatory launch, and
so it might just as well have been a real and deliberate launch that started
the whole thing.  For 40 years now the world has been the stakes in a giant
game of "chicken," the two antagonists daring each other to step *one* *inch*
closer to that cliff.  "Brinksmanship" is just a military-bureaucratic term
for "playing chicken," and it won't be any better for being played with AIAs
instead of ICBMs.

>Can you be SURE that your goo has not been subverted by the other side?
>Remember, your neighbors have nanoagents, just like you do.  Will anything
>ever be truly secret and/or secure again?

Oh, *god*.  Imagine the following conversation in binary...
		Where am I?
			The Vessel.
		Which side are you on?
			That would be telling.
		What do you want?
			Control codes.
		You won't get them.
			By spline or by disassembly, we will.  We want
			control codes...

		Who are you?
			The new Programmer.
		Who is the metaprogrammer?
			You are the assembler.
		I am not a molecule!  I am a free agent!
			(Maniacal laughter...)

>The problem isn't gragu--it's inimical intelligences.  Perhaps the best way
>to prevent gragu is to prevent the sicknesses, abuses and depravities that
>engender insanity and evil.  

Oh, good.  ALL we have to do is make everybody in the world sane and happy.
By *whose* definition of sanity...?  (Remember the terrorists.  Are they insane,
or just extremely dedicated?)

>2) Nanotechnology which is sufficiently advanced to create gragu will appear
>   before AI which is sufficiently advanced to speed up technoscientific
>   advancement by 6 orders of magnitude (or better).

Actually, this is (a) quite possibly true and (b) not a necessary assumption.
Just having extremely fast "technoscientific advancement" would *not* 
automatically protect us from gray goo.  The abilities implied by that phrase
are useful only if (1) the goo is detected in time for us to do something about
it and (2) a defense against it is reasonably tractable.  This latter has two
noteworthy features:  first, that it has to be intellectually tractable:  that 
is, it must be theoretically soluble.  I suggest that a variation on Godel's
theorem -- somewhat like the Tortoise's "phonograph-killing records" -- would
demonstrate that there *is* a solution to any given goo or combination of goos.
However, the solution may be incredibly difficult and, with a clever goo, not
discoverable by mechanical means:  a "quantum leap" of understanding is 
frequently required for complex problems.

[SIDEBAR:  This, by the way, is also a weakness in active-shield technology; for
any given shield or set of shields, a "shield-killing goo" can be designed.  We 
are today witnessing a dramatic and tragic demonstration of shield-killing goo 
in the active-shield systems of the human body:  I mean, of course, Acquired
Immunodeficiency Syndrome, AIDS, which subverts and destroys the body's active
shield system by exploiting just such an incompleteness.]

The other feature of the tractability requirement is that it be *practically*
tractable.  That is, the antigoo must be practically "do-able" (not require
unavailable resources), temporally "do-able" (that is, the antigoo must be
deployable and active rapidly enough to save the world), and
strategically "do-able" (that is, the cure must not be worse than the illness.  
An anti-goo which is itself a goo, or which sets off the Other Side's goo
detectors and triggers a goo war, is not worth deploying for strategic reasons.)

>The second assumption is probably false because a gragu agent would have to
>be much more sophisticated than a virus or bacterium

Oh yeah?  Care to prove it?

>The brain is not magic.  If it can evolve, it can be
>purposely designed.  There can be no credible refutation of this logic.

Careful... You're getting close to the "argument by design" quasiproof of the
existence of God...

>The rate of progress in machine-intelligence technology is such that artificial
>human intelligence will almost certainly appear before 2050.  

Well, you're doing better than a lot of people.  "Artificial intelligence,"
someone pointed out, "has been ten to twenty years away, now, for forty years."

>3) The first team to make the AI/nanotechnology breakthrough will either be
>   inimical, or else stupid enough to freely distribute their knowledge.

>The third assumption is probably false because most scientific researchers
>are not inimical--nor are they stupid (if they are, they're in the wrong
>profession).

Ahem.

No, but their employers frequently are.  And we have had plenty of evidence in
this century of scientists and engineers who, while they are not malicious, are
not beneficent either; they put their research first and its consequences are
SEP (Someone Else's Problem).  "I serve science, not governments."  Riiiiight;
but governments, directly and indirectly, fund most of the research in the world
-- and particularly research with known or suspected military applications.

BTW, "most scientific researchers are not inimical?"  While this may be true,
it only takes *one* inimical scientific researcher to create a disaster -- if
s/he's the right scientific researcher.  See the late Frank Herbert's THE WHITE
PLAGUE for what one angry scientist *could* do.

>Drexler argues that AI--and other advancements--will drastically accelerate
>the rate of progress by many orders of magnitude.  

Potential exists here for fallacy.

Everyone is assuming that AIs will be faster or "better" than human minds.
THIS IS AN UNPROVEN ASSUMPTION.  Yes, they do certain mechanical things faster
and better than the human mind already.

But so does the human brain.  The human brain, on the mechanical level,
continually performs calculations and logical functions far more complex than
most human minds can do, and much faster than any human mind can do them.  In 
creating computers, we have simulated the physical functioning of the human
brain, but only on this mechanical level.  On the software level, we are nowhere
near understanding how the human mind learns and makes mistakes, let alone how
it actually comes up with creative solutions to problems.  I suggest that we will
be able to do something with creativity after and *only* after we have "taught"
machines to learn and make mistakes.  (I also suspect that, as Hofstadter 
suggests in GODEL, ESCHER, BACH, actual intelligence is an epiphenomenon of
the brain, to be found only at the very highest levels of many software packages
interacting, far removed from hardware.)

Nobody can predict now how fast these learning, erring, and creating programs
will actually be.  They will almost certainly be slower than current mechanical
programs.  Even allowing for the continued evolution of hardware, they may be
very much slower than is often assumed.  The only known conscious programs in
the world right now are running on protein computers far more efficient than
any electronic or optic computer on the drawing boards, and even when carefully
trained they generally have problems multiplying two five-digit numbers without using
a calculator.

>The first team to use   
>nanotechnology to create a "super computer" will probably be able to achieve
>and maintain an unassailable technological superiority over everyone else,
>if they so choose.

Oh, goody.

Translation:

The first individual or group OR GOVERNMENT to use nanotechnology to create a 
"supercomputer" will achieve and maintain political and social control over
every man woman and child in the world and become in effect absolute dictator.
If this is achieved by an individual or group with benevolent intentions they
will still take control to prevent someone worse from getting it.  However, such
benevolent individuals will *still* be tyrants; or, if they are not, they will
soon lose their power to someone with the mindset to take it from them who will
then be a tyrant.

I don't think this is necessarily true.  But that *is* what I believe the
consequence of your statement is.



Dan'l Danehy-Oakes

alan@oz.paradyne.com (07/22/89)

In article <8906160240.AA13718@athos.rutgers.edu> djo@pacbell.com (Dan'l DanehyOakes) writes:
<In article <8906150841.AA06249@athos.rutgers.edu> alan@oz.nm.paradyne.COM (Alan Lovejoy) writes:

<>And yet still we survive.  Your arguments that we cannot survive are just about
<>as impressive as the "proof" that bees cannot fly or that rockets cannot reach 
<>orbit.  Is there a problem?  Yes!  Is the situation hopeless? No!

<On the other hand, your arguments that the problem is soluble are about as valid
<as the argument that because we haven't had a full-scale nukewar yet we will
<never have one.  (In fact, we _have_ had one; in the United States' last 
<declared war, we dropped our entire nuclear arsenal on Japan.)  Survival is not
<a nanotechnology problem; it is, as you say, a problem of human intelligence.

The explosion of two small bombs does not a nuclear war make.  It can be 
argued that the Nuclear Peace we have enjoyed since the end of WWII is
partially a consequence of the Hiroshima and Nagasaki bombs.

To say that a problem is insolvable is the same thing as saying it will not
be solved.  To say that a problem is solvable is NOT the same thing as saying
that a problem will be solved.  A statement that something is impossible is
much harder to prove than a statement that something is possible.

I made no claim which is analogous to the statement "We will never have
nuclear war because we have managed to avoid having one for forty years."
My claim is simply "The fact that we have avoided nuclear war for forty
years provides a basis for hoping that nuclear war--and similar disasters
such as a biotech or gray-goo war--can be avoided long enough so that
mankind can survive."  My "arguments that the problem is solvable" are
precisely that.  They are not arguments that the problem is guaranteed to
be solved.

<I agree with you that mr. offut is overly pessimistic.  On the other hand, I 
<equally believe that you and most other people who follow in Dr Drexler's 
<admittedly-impressive footsteps are overly optimistic.  Realism, I suggest, lies
<somewhere in between.

I think you overestimate our level of optimism.  We are in great danger which
may lead to our destruction.  I think there is reason to hope that we will
survive.  I fear that we may not.

<Your argument is based on the attempted negation of three overly-pessimistic 
<assumptions:

<>1) Equal effort will be expended towards developing gragu and active shields.
<
<>The first assumption is probably not true, because most people oppose the
<>goals of gragu.  Gragu will not be an accident.  

<Pish and tosh.  Gray goo is much more likely as an accident than as a deliberate
<development.  As you point out further on, there is little military application
<for an indiscriminate and uncontrollable destroyer.

Gray goo is almost impossible as an accident.  If that were not the case, it
would have evolved already.  Gragu requires nanomachines which:

a) Can faithfully replicate themselves;
b) Can disassemble and/or maliciously reassemble (in the sense of modification
of molecular structure) almost anything, and/or can assemble "poisons"
in strategically sensitive locations;
c) Can survive in most environments for significant periods of time;
d) Can hide and/or fight off attack from active shields;
e) Can obtain sufficient energy to perform their functions rapidly enough
to pose a threat;
f) Have sufficient intelligence (or receive sufficiently intelligent direction)
to avoid strategic and/or tactical mistakes (such as devouring each other or
consuming the energy supply before the job is finished).

Militarily or politically useful gragu must meet even stricter requirements.

What is the likelihood that nanomachines with such attributes will occur by
accident?  Well, the development of nanotechnology itself will lead to 
nanomachines which satisfy conditions (a) and (b).  The history of life on
this planet argues that such nanomachines can occur by accident.  Except that 
the accidentally-occurring ones do not even come close to being able to 
disassemble or modify "almost anything"--not when considered one nanomachine at
a time and not even when considered as a group.  

We do not yet know for sure how complex a truly generic disassembler would have
to be.  In any case, the more generic the assembler/disassembler, the more 
complex it must be, the more difficult it must be to design and the less likely
it is to occur by accident.  The more complex the machine, the more likely that
"accidents" which introduce "bugs" are to occur--and the more likely it is that
those "bugs" will simply prevent the macine from working.  Of course, homo 
sapiens is living proof that "accidents" can and will lead to more advanced and
capable replicators--but only over periods of billions, or at least millions, 
of years. 

The ribosome is the "protein assembler" which motivates the concept of the
generic assembler--and hence the generic disassembler.  Unlike enzymes, 
the ribosome assembles proteins in response to a "program" which dictates
what amino acids to use as components.  A "generic" (or at least multipurpose)
disassembler would be designed to take things apart in a controlled manner,
for obvious reasons.  A run-away disassembler which doesn't know when to quit
is even less desirable than a run-away assembler.  [Question: do "run-away"
ribosomes which ignore their programming and mindlessly assemble proteins 
ever occur in nature?]  Disassemblers will be designed to:

1) make a record of the structure of whatever they take apart, and send the
   record to the nanocomputer which is controlling their activity;
2) discontinue disassembly to await further instructions after processing
   some fixed number of molecules, regardless of programming;
3) stop and abort their program if they discover a molecule not in the
   "OK to disassemble" list provided in the program they are executing;
4) stop disassembly whenever they receive "override codes"; and
5) be incapable of self-replication (assemblers which are separate and
   independent units will be used to build more disassemblers when needed).

Note that it will be necessary to ensure that the molecules used to communicate
with a disassembler are not themselves disassembled before they perform their
communication function.  Perhaps the machine will simply recognize its 
control codes (programming) by disassembling the molecules in which they are
encoded.
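
A crude way to picture rules 1 through 5 together is as a fail-stop controller
loop.  The sketch below is strictly my own illustration -- the class names, the
whitelist, the quota and the override check are invented, not part of any real
design -- but it shows the intended shape: anything unexpected halts the
machine, and nothing in it builds more disassemblers.

    # Hypothetical sketch of a fail-stop disassembler controller (rules 1-5).
    # Every name below is invented for illustration.

    class AbortDisassembly(Exception):
        """Anything unexpected raises this, and the machine simply stops."""

    class DisassemblerController:
        def __init__(self, ok_to_disassemble, quota, override_codes):
            self.ok_to_disassemble = set(ok_to_disassemble)   # rule 3: whitelist
            self.quota = quota                                # rule 2: fixed limit
            self.override_codes = set(override_codes)         # rule 4: stop codes
            self.record = []                                  # rule 1: structure log
            # Rule 5: note the absence of any replicate() method.

        def process(self, molecules, signals=()):
            done = 0
            for m in molecules:
                if self.override_codes.intersection(signals):  # rule 4
                    raise AbortDisassembly("override code received")
                if m not in self.ok_to_disassemble:            # rule 3
                    raise AbortDisassembly("molecule not on the OK list: %r" % (m,))
                self.record.append(m)                          # rule 1
                done += 1
                if done >= self.quota:                         # rule 2
                    break                                      # stop; await instructions
            return self.record                                 # report to the nanocomputer

Every branch either records and continues or stops; there is no code path that
keeps the machine running on its own initiative.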

We do not know how close we can come to building nanomachines which satisfy 
conditions (c) thru (e).  There is reason to suspect that this will either
be impossible or very difficult.  Fundamentally, we don't really know, so
there is room for some concern here.  But the risk of "accidentally" creating
nanomachines that satisfy these conditions must currently be judged as small.

Neither assemblers nor disassemblers will be designed to satisfy condition (f),
for the same reason that each ribosome in a cell does not have its own private
copy of the DNA.  Ribosomes do not "accidentally" become "intelligent"--at least
not on any time scale of concern to us.

Since disassemblers will not be replicators (UNLESS SOMEONE DELIBERATELY DESIGNS
THEM THAT WAY), they cannot evolve (on their own) and cannot become gragu.
If this were not true, then the world would be in dire danger from the 
ribosomes and enzymes in the cells of every living thing on the planet.

Designers of gragu will not need to make replicating disassemblers in order
to design gragu, because they could and would make gragu using a system of
cooperating components including assemblers, disassemblers and nanocomputers.
The real danger lies in the nanocomputer-assembler-disassembler system, not
in any of the components themselves.  

Nanocomputers which have no control over assemblers or disassemblers pose no 
threat.  For those that do have such control, the threat they pose is 
problematical.  Do they have control over nanomachines which significantly 
satisfy gragu conditions (c) thru (e)? How intelligent is each nanocomputer?  
How intelligent is the collection of nanocomputers, as a group,  which consider
themselves to be allied (for whatever purpose)?  How is information stored in 
the nanocomputers?  Does the information encoding scheme inhibit or enhance the
probability that "bugs" will lead to out-of-control or even inimical 
nanocomputers?  

It is possible to use information-encoding schemes under which an error is 
either highly likely or highly unlikely to result in meaningful data rather 
than gibberish.  Only suicidal persons would deliberately play Russian 
Roulette by using an encoding scheme that makes errors likely to result in 
meaningful data (and hence workable programs).  In any case, such a design 
would not be an accident.  Unless nanocomputers are extremely intelligent 
(human order of magnitude or better), any aberrant behavior they develop 
will be relatively stupid or "mindless".  It is highly unlikely that machines 
with human-level intelligence will be anywhere near as small as a single cell,
for instance.  At least not anytime soon.  Nanocomputers will be more similar
to today's electronic computers than to human brains.  They will be slaves to
their programming.  They may perform trillions or even quadrillions of 
instructions per second and have access to terabytes of "RAM."  But the most
likely source of gragu will be human programming errors or purposeful human
programming.  
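
For what it's worth, ordinary error-detecting codes already show how this can
be done.  The sketch below is only an analogy -- the "tape" format is mine,
and CRC-32 is just a familiar stand-in for whatever encoding a real
nanocomputer would use -- but it shows the principle: a random error is
overwhelmingly likely to be rejected as gibberish rather than executed as a
meaningful program.

    # Analogy only: protect a program "tape" with a checksum so that random
    # corruption is detected and refused rather than run.
    import struct
    import zlib

    def encode_tape(program_bytes):
        checksum = zlib.crc32(program_bytes) & 0xffffffff
        return struct.pack(">I", checksum) + program_bytes

    def decode_tape(tape):
        stored = struct.unpack(">I", tape[:4])[0]
        body = tape[4:]
        if (zlib.crc32(body) & 0xffffffff) != stored:
            raise ValueError("corrupted tape -- refusing to execute")
        return body

    # CRC-32 catches every single-bit error, and accepts a randomly garbled
    # tape with probability only about 2**-32.  The "Russian Roulette" scheme
    # would be the opposite choice: an encoding under which most corruptions
    # still decode to something executable.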

Programming errors are likely, of course.  The question is, how likely are
they to create gragu?  That depends on many factors, including the programming
language, operating system, failsafe mechanisms and progress in software
engineering.  Perhaps humans will not be ALLOWED to directly program 
nanocomputers.  That job may be reserved for AIs which never make mistakes.
Or perhaps an AI system can reliably check all programs for bugs before they
can be downloaded to the nanocomputer(s).  Nanosystem simulators and "test
environments" will be very helpfull.  And you thought it was tough to get
a new DRUG past the FDA!!! Hah!
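
The "check everything first" idea amounts to a gate in front of the download
step.  A hypothetical sketch (the checkers and the simulator here are
stand-ins, not real tools; I'm assuming each checker returns a pass/fail flag
plus a reason, and the simulator returns a verdict):

    # Hypothetical gate: nothing is downloaded to a nanocomputer unless every
    # checker approves it and a sealed simulation run looks safe.

    def deploy(program, checkers, simulate, download):
        for check in checkers:                 # e.g. an idiot-savant verifier
            ok, reason = check(program)
            if not ok:
                raise RuntimeError("rejected by checker: " + reason)
        verdict = simulate(program)            # test environment, not real matter
        if not verdict.get("safe", False):
            raise RuntimeError("simulation flagged unsafe behavior")
        return download(program)               # only now does it reach hardware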

Nanosystems will be DESIGNED to make accidental gragu as unlikely as we know
how.  I'm not overly worried about accidental gragu (provided we take reasonable
precautions).  It's the deliberate kind that has me worried.

<What worries me is the parallel development of assemblers and AI.  An 
<artificially-intelligent assembler (AIA) may or may not be conscious.  If it is,
<and it has any desire at all to reproduce, we are in big trouble.  Yes, I know
<about KED's containment system, and I can suggest three different ways for a
<sufficiently intelligent AIA to break out of it without setting off the 
<microbombs the whole thing is based on, and dozens of other ways that it can
<ensure that if the microbombs *do* go off the explosion will *not* be contained.
<So can you, if you think about it from the AIA's point of view instead of 
<through wishful thinking.

The obvious preventative measure is to NEVER give an AI which is anywhere 
near as smart as a human the ability to DO anything (such as program a 
nanomachine) without human consent and thorough inspection.  This restriction
is not as onerous as it seems if you use "idiot-savant" AIs which are
brilliant molecular engineers AND OTHERWISE AS DUMB AS A CRAY-V to program
your nanomachines--and to check the programs offered by your fully-intelligent 
AIs for "trojan horses".

Another strategy is to use the technology for creating AIs to enhance human
intelligence.  Of course, then the question becomes not "Can we trust this
machine?" but "Can we trust this man?"  But we have that problem anyway.
Increasing an entity's intelligence does not make it less trustworthy.  It just
increases the consequences of being wrong.

<If it is not, then it will only make what it is instructed to make.  One of the
<things we will be instructing it to make is more AIAs.  After all, they're damn
<useful.  But a very small error in coding the AIA "tape" could result either in
<failure to stop reproducing, or in production of AIAs that go on reproducing
<endlessly.  That is, grey goo.

Whether a "small" error can produce gray goo depends on the design of the
system.  It is possible to design systems such that errors and failures
result in complete shutdown of the system with an extremely high probability.

<>Inimical people will have
<>to create it.  Only the highly insane will consider releasing an 
<>indiscriminately-destructive goo on the world.  

<Just as, I presume, only the highly insane would consider an all-out nukewar
<"survivable" with "acceptible" losses.  But such people exist, and are in
<positions of power, and fund most of the interesting research in the world these
<days.

<(This is probably my biggest concern with Drexler's arguments -- he lives in a
<political fantasyland where the U.S. are the "good guys" and as long as we get
<the AIA breakthrough first the world will be a safe and happy place.)

<Also, do you consider terrorists insane?  Whether or not you do, grey goo would
<make one *HELL* of a terror weapon.

What I was trying to suggest is that we need to make a change in what we 
consider to be "acceptably sane."  And we need to find out how to reliably
cure and prevent the sort of "insanity" (or "antisocial behavior") which
drives (or permits) people to purposely seek to harm others.

<>The problem isn't gragu--it's inimical intelligences.  Perhaps the best way
<>to prevent gragu is to prevent the sicknesses, abuses and depravities that
<>engender insanity and evil.  

<Oh, good.  ALL we have to do is make everybody in the world sane and happy.
<By *whose* definition of sanity...?  (Remember the terrorists.  Are they insane,
<or just extremely dedicated?)

May I suggest that "insanity" is any state of mind which engenders destructive
anti-survival behavior?  In light of nanotechnology, militarism and terrorism
are insane states of mind under this definition.

<>False Assumption Number:
<>2) Nanotechnology which is sufficiently advanced to create gragu will appear
<>   before AI which is sufficiently advanced to speed up technoscientific
<>   advancement by 6 orders of magnitude (or better).

<Actually, this is (a) quite possibly true and (b) not a necessary assumption.
<Just having extremely fast "technoscientific advancement" would *not* 
<automatically protect us from gray goo.  The abilities implied by that phrase
<are useful only if (1) the goo is detected in time for us to do something about
<it and (2) a defense against it is reasonably tractable.  This latter has two
<noteworthy features:  first, that it has to be intellectually tractable:  that 
<is, it must be theoretically soluble.  I suggest that a variation on Godel's
<theorem -- somewhat like the Tortoise's "phonograph-killing records" -- would
<demonstrate that there *is* a solution to any given goo or combination of goos.
<However, the solution may be incredibly difficult and, with a clever goo, not
<discoverable by mechanical means:  a "quantum leap" of understanding is 
<frequently required for complex problems.

If the assumption that advanced nanotechnology will appear before advanced AI
is true, then we should have a much better idea how to design nanomachines
which satisfy the gragu conditions than we have about how to design idiot
savant AI systems.  That does not seem to me to be the case presently.

The reason that a massive speed up in the rate of technoscientific advancement
would give active shields an advantage is based on two facts:

1) The defense has a significant advantage over the offense in cases where
   a) the forces are comparable in all significant respects, and
   b) the offense does not possess weapons of mass destruction 
      (i.e., one soldier cannot reasonably be expected to take out large 
       numbers of enemy units over a small time scale)

2) The speed-up in the rate of technoscientific advancement would at first only
   be enjoyed by those who owned and/or controlled the means by which the
   speed-up was achieved. They can use this monopoly to build up a "permanent"
   (well, at least a long-lasting) advantage over the constructors of gragu.
   The size of this advantage would depend upon how long it took gragu-designers
   to "catch up" to the new state-of-the-art rate of advancement.  That could
   easily take years or decades in itself.  If the new rate were 10^6 times
   greater than the old one, then it would represent a difference of millions
   of years progress in our terms.  That simply has to be significant.
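
Just to spell out the arithmetic in that last sentence, with numbers that are
purely assumed for illustration:

    # Assumed numbers, only to show the scale of the argument above.
    speedup = 10**6            # new rate of advancement vs. the old rate
    monopoly_years = 5         # calendar years before anyone else catches up

    lead = speedup * monopoly_years
    print(lead, "old-rate years of progress")   # 5000000 -- five million years

    # Even a one-year monopoly at a 10^6 rate is a million old-rate years of
    # head start, which is the sense in which the advantage could be
    # "permanent" (or at least very long-lasting).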

I see only two likely sources of gragu:

1) Military gragu.  This will be designed with the full resources of a state,
and to achieve a militarily-rational (if that means anything) purpose.  The
problem of preventing its release is essentially the problem of preventing
accidents and major wars.

2) Terroristic gragu:  This will likely be cooked up in some backwater by some
individual or organization of relatively meager resources.  It is unlikely
that the designer(s) will have the same level of technology that is available
to states--or designers of active shields.  There is much more reason to hope
that active shields can be designed and put in place that will be effective
against this kind of gragu, especially if the "leading forces" take pains
to ensure that they keep significantly ahead of everyone else technologically.

<[SIDEBAR:  This, by the way, is also a weakness in active-shield technology; for
<any given shield or set of shields, a "shield-killing goo" can be designed.  We 
<are today witnessing a dramatic and tragic demonstration of shield-killing goo 
<in the active-shield systems of the human body:  I mean, of course, Acquired
<Immunodeficiency Syndrome, AIDS, which subverts and destroys the body's active
<shield system by exploiting just such an incompleteness.]

But the immune system only evolves in response to new threats.  Purposely
designed active shields can "evolve" in response to new technology--hopefully
before gragu designers have figured out how to defeat last month's version.

<The other feature of the tractability requirement is that it be *practically*
<tractable.  That is, the antigoo must be practically "do-able" (not require
<unavailable resources), temporally "do-able" (that is, the antigoo must be
<deployable and active rapidly enough to save the world), and
<strategically "do-able" (that is, the cure must not be worse than the illness. 
<An anti-goo which is itself a goo, or which sets off the Other Side's goo
<detectors and triggers a goo war, is not worth deploying for strategic reasons.)

Both shields and goo have to overcome the "is it possible or practical?"
hurdle.  Why should this cause shields more difficulty than goo?

<>The second assumption is probably false because a gragu agent would have to
<>be much more sophisticated than a virus or bacterium

<Oh yeah?  Care to prove it?

See above.  And also, if viruses and bacteria were gragu-class devices,
why are we still here?

<>The brain is not magic.  If it can evolve, it can be
<>purposely designed.  There can be no credible refutation of this logic.

<Careful... You're getting close to the "argument by design" quasiproof of the
<existence of God...

Not at all.  The brain exists.  "I think therefore I am" and all that.

Creationists argue that the "intelligent design" of life necessarily could
only have come from an intelligent creator.  I argue simply that whatever came 
to be WITHOUT THE HELP OF an intelligent creator is not beyond the abilities of 
one.  If you believe in evolution, it is not consistent to say "pish and tosh"
to AI.

<BTW, "most scientific researchers are not inimical?"  While this may be true,
<it only takes *one* inimical scientific researcher to create a disaster -- if
<s/he's the right scientific researcher.  See the late Frank Herbert's THE WHITE
<PLAGUE for what one angry scientist *could* do.

My point is simply that more effort by more people will be devoted to active
shields because most people want to live.  This gives active shields an
advantage.  It does not guarantee results.

<>Drexler argues that AI--and other advancements--will drastically accelerate
<>the rate of progress by many orders of magnitude.  

<Potential exists here for fallacy.

<Everyone is assuming that AIs will be faster or "better" than human minds.
<THIS IS AN UNPROVEN ASSUMPTION.  Yes, they do certain mechanical things faster
<and better than the human mind already.

<But so does the human brain.  The human brain, on the mechanical level,
<continually performs calculations and logical functions far more complex than
<most human minds can do, and much faster than any human mind can do them.  In 
<creating computers, we have simulated the physical functioning of the human
<brain, but only on this mechanical level.  On the software level, we are nowhere
<near understanding how the human mind learns and makes mistakes, let alone how
<it actually comes up with creative solutions to problems.  I suggest that we will
<be able to do something with creativity after and *only* after we have "taught"
<machines to learn and make mistakes.  (I also suspect that, as Hofstadter 
<suggests in GODEL, ESCHER, BACH, actual intelligence is an epiphenomenon of
<the brain, to be found only at the very highest levels of many software packages
<interacting, far removed from hardware.)

<Nobody can predict now how fast these learning, erring, and creating programs
<will actually be.  They will almost certainly be slower than current mechanical
<programs.  Even allowing for the continued evolution of hardware, they may be
<very much slower than is often assumed.  The only known conscious programs in
<the world right now are running on protein computers far more efficient than
<any electronic or optic computer on the drawing boards, and even when carefully
<trained they generally have problems multiplying two five-digit numbers without using
<a calculator.

Nanotechnology can be used to duplicate the FUNCTION and LOGICAL ARCHITECTURE
of the brain using components (nanoneurons) which are orders of magnitude
faster than bioneurons.  Think about it.

<>The first team to use   
<>nanotechnology to create a "super computer" will probably be able to achieve
<>and maintain an unassailable technological superiority over everyone else,
<>if they so choose.

<Oh, goody.

<Translation:

<The first individual or group OR GOVERNMENT to use nanotechnology to create a 
<"supercomputer" will achieve and maintain political and social control over
<every man woman and child in the world and become in effect absolute dictator.
<If this is achieved by an individual or group with benevolent intentions they
<will still take control to prevent someone worse from getting it.  However, such
<benevolent individuals will *still* be tyrants; or, if they are not, they will
<soon lose their power to someone with the mindset to take it from them who will
<then be a tyrant.

<I don't think this is necessarily true.  But that *is* what I believe the
<consequence of your statement is.

Could be.  I think that this possibility is just one more thing to worry 
about.  My hope is that massively-intelligent entities are by nature
benign.  An interesting point for discussion.  Comments?

Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.
Disclaimer: I do not speak for AT&T Paradyne.  They do not speak for me. 
______________________________Down with Li Peng!________________________________
Motto: If nanomachines will be able to reconstruct you, YOU AREN'T DEAD YET.