[sci.nanotech] Is this stuff for real?

ms@pogo.ai.mit.edu (Morgan Schweers) (02/27/91)

Greetings,
    How much of nanotechnology is vaporware/dreaming?  Can anyone point me
to a solid and *REALISTIC* exploration of PRESENT DAY research on this topic?
 
    I've gotten a *LOT* of $#!t from people, when I try to explain nanotech
to them.  The things that got me the most strange looks were:
 
    *  Nano-Disassemblers  -    The idea that something can actually be
                           programmed at that size, and then ACTUALLY HAVE
                           AN INFLUENCE on other items seems to be a sticking
                           point for a lot of people.  What sort of materials
                           are REALLY disassemblable?
 
    *  Nano-Assemblers     -    The same problem, really.  Even when people
                           manage to accept the idea of disassembly, they
                           rarely accept the idea of reassembly.
 
    *  NanoProgramming     -    Is it REALLY possible to actually *PROGRAM*
                           something that small?  What *IS* the size that we
                           are talking about?
 
    *  Movement            -    How does something that small MOVE?
 
    *  Power source        -    Obvious.  What's their power source?
 
    *  ETA                 -    What are the optimistic assessments of when
                           this technology will be available?  The pessimistic?
                           Or is all this just a joke?
 
    Any other information on the *REALITY* of nano-hacking would be greatly
appreciated.
 
                                                         --  Morgan
 P.S.  I've read Blood Music, and consider it nonsense.  I've also read
     a book named something like Down The Sea Of Stars (or something similar)
     and its nano-techs seem to make a *LITTLE* more sense.  (but not much)
 
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|  I DON'T UNDERSTAND!!!!!!!!!!!!!!   |  I understand perfectly,    |
|  This makes *NO* SENSE!             |  You simply don't comprehend|
|  I'm *SCARED*!!!!                   |  my genius.                 |
|                      --  Morgan     |                --  Nagrom   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
ms@ai.mit.edu OR ms@albert.ai.mit.edu(preferred) OR ms@woodowl OR ...

[Read Engines of Creation by Eric Drexler (Anchor Doubleday), now out
 in a second printing.  Look on the /nanotech/papers FTP directory on 
 planchet.rutgers.edu.

 I could wish there were more discussion of these "lower-level"
 more technically oriented questions on this newsgroup.  Please,
 anyone with anything to add to these questions, don't hold back
 just because I've given them a "lick and a promise". 

 With that caveat, the short answers:

 Assembly/disassembly:  Cells (both bacteria and cells that are part
 of larger organisms) do this all the time.  Nanotechnology simply
 presumes that we can translate mechanical design concepts to the
 same scale.

 Programmability: Similarly, cells have a "program" in their DNA.
 We are assuming that the structures of formal computation can be
 reproduced at a molecular scale.  The nanocomputer, in a crude 
 mechanical version, is actually the most complete actual nanotech
 design so far.

 Movement: In many envisioned applications, the nano-robot floats
 around at random in some solution, making desired contacts with
 raw materials and other nanobots stochastically.  Otherwise it
 could crawl or have propellers.

 Power:  Most commonly suggested is chemical fuels in solution. 
 Other schemes include tuned antennas converting some wavelength of
 light to electricity.

 ETA:  Optimistically, 200x.  It does depend on the amount of effort
 expended to that end.  Could we land a man on the moon before 2000
 (starting from where we are right now)?  Will we?

--JoSH]

bill@braille.uwo.ca (W.B. Carss (519) 438-0344) (03/01/91)

Let me preface what follows by saying that my ignorance on this topic could 
fill volumes. I am neither a "scientist" nor a "theoretician" (nor probably, as
you can tell, a speller of any great note).  Even so, I have a few ideas
which I thought I would add to the discussion.  Probably the best thing I have
going for me is a good imagination which, I have often been told, is more than
a little over-active.

As JoSH mentioned, there are several examples of
nanotechnology in existence in the world around us right now
- bacteria, cells in our bodies, the processes involved in
digestion and many many more.

The big question, as far as I can tell, is whether we will
be able to get enough of a handle on EXACTLY how things are
done to CAUSE THEM TO BE DONE OURSELVES.  Certainly, many
people would say that we are already doing that with genetic
engineering.  This is only partly true.  From the little I
know of the topic, it seems to me that what we are doing is
CAUSING existing systems to make changes for us.  There is a
big difference between telling a calculator to find the nth
root of a number and knowing how to do it yourself.  I think
we are still pretty much at the 'calculator' stage of our
nanotech development.  That isn't to say that we won't
eventually get there, only that we have an awful lot to
learn before we are even generally knowledgeable enough to
make any SERIOUS or MEANINGFUL attempts.

I think one of the major dangers (without wishing to
squelch any dreams or rain on anyone's parade) is that we
may try to do more than we are ready for too soon and botch
it.  From my own experience, I have done this several times
in all kinds of situations.  Certainly, the only real result
in my own personal case is a failure to accomplish what I
have set out to do.  In the case of nanotechnology, however,
it isn't inconceivable that something may be created which we
can't control.  I don't wish to be an alarmist nor anything
like that, but I do believe that GREAT PAINS SHOULD BE TAKEN
TO ENSURE THAT BEFORE ANY NEW "MACHINES" GET CREATED BY US
WE KNOW WHAT THE HECK WE ARE DOING!!!

Certainly the ever-popular trial and error method is just
about the only way we will know for sure whether something
works, and I have no problem with that except to say that
when the trial takes place let's make darn sure it is in a
situation where we can control WHATEVER happens!!  AND I
MEAN CONTROL!!!!!!!!!!!!!!!!!  All we need is some "rogue"
machine running loose, doing who knows what as a result of
who knows what mutation.

I am thinking specifically of machines that are able to
self-replicate.  In any such situation millions if not
billions (thanks Karl) of these machines would be necessary
to really accomplish anything of significance on our size
scale.  Therefore, the machines would have to be
self-replicating and the "offspring" would be prone to
"mutation" (for lack of a better term).


What would these mutations be "designed" to do?

Would the mutations increase the rate at which the
    self-replications occur?

Would this be a perpetually compounding problem?


It could be argued that there could well be "guard" machines
to oversee the machines that are self-replicating.  In such
a manner some "control" could be exerted over the situation.
To do this would require millions or billions of "guards"
and the problem recurs.

A related concern is indeed the "fuel" that these machines
use.  What if the mutations "require" a different fuel than
we planned?  What if the mutations "took a liking" to
organic matter?  Never mind us: plants, useful bacteria... the
list isn't endless, but I am sure you get the point.

Without wishing to sound too foolish, this is the stuff of
which "The Attack of the Killer Tomatoes" is made.  And
whether or not anyone wants to actually discuss these ideas,
they ARE within the realm of possibility.

All I am saying is that we should go slowly and carefully.
What we are really talking about here, in essence, is the
creation of life from lifelessness.  At least I believe that
nano-machines could be considered "alive".  We don't have
any real idea what the release or even exposure of "beings"
such as these into our environment would do.  We don't know
how the other organisms in our environment would react to
these newly arrived "intruders".  Could we end up with a
situation similar to rabbits in Australia, where no
"natural" predators (or, in our case, controls) exist?


-- 
Bill Carss
bill@braille.uwo.ca

[This is essentially what is referred to as the gray goo problem, from
 the concept that unchecked replication could lead to the entire
 biosphere being consumed by nanobots and there would be nothing left
 but a "goo" consisting of them.  This scenario is considered quite 
 unlikely and overdramatic by most who have studied it seriously.

 The reasons are several.  First, the only reason we have to believe
 that we can build a nanobot more efficient than a bacterium, for
 example, is that it would be built like a machine:  it would be
 specialized, it would have precise components built to atomic 
 precision, it would have a highly sophisticated design.  By this
 very assumption, it COULD NOT MUTATE.  The inefficiencies in cells
 are the very thing that allow mutation, and for lifeforms, that's
 good.  But you couldn't build a nanobot to mutate unless you tried
 very very hard to achieve that specific goal.
 On the other side of the assumption, of course, is that if your 
 nanobots are not more efficient than say, bacteria, they won't
 win out over bacteria when taken out of the laboratory environment.
 
 Fears of accidental gray goo scenarios are less comparable to 
 rabbits in Australia than to a story where feral automobiles
 run wild, mutating into herds of grass-eating vans hunted by
 carnivorous pickup trucks.  I would worry instead about what 
 people do with them on purpose; I was in Australia recently and
 I saw a hell of a lot more sheep than rabbits.

 --JoSH]

barryf@rpi.edu (Barry B. Floyd) (03/01/91)

I am no expert, though my interests are more than passing...
 
In my quest for relevant information I find Science News (weekly)
and Scientific American (monthly) to be accessible (intellectually).
 
Each has run one or more stories in recent months describing 
independent efforts to "move" individual atoms using normal-scale machines.
Researchers have successfully "written" names and designs by manipulating
surface and sub-surface atoms (details and references are not on hand).
 
I am less attuned to the biological approach (e.g. protein machines, 
enzyme machines, etc.) though I have read "Blood Music" et al and find
it plausible.  Several genetic engineering firms written up in financial
newspapers seem to be positioned for advances and commercial applications
along these lines.  To the extent that such companies (vs. universities)
exist, prospects seem positive.
 
barry

-- 
+--------------------------------------------------------------------+ 
| Barry B. Floyd                   \\\       barry_floyd@mts.rpi.edu |
| Manager Information Systems - HR    \\\          usere9w9@rpitsmts |
+-Rensselaer Polytechnic Institute--------------------troy, ny 12180-+

erich@eecs.cs.pdx.edu (Erich Stefan Boleyn) (03/01/91)

ms@pogo.ai.mit.edu (Morgan Schweers) writes:

>    How much of nanotechnology is vaporware/dreaming?  Can anyone point me
>to a solid and *REALISTIC* exploration of PRESENT DAY research on this topic?
[...questions deleted...]
>    Any other information on the *REALITY* of nano-hacking would be greatly
>appreciated.

   What do you mean by "nanotech"?  I've seen some quite fascinating ideas
spun on this group, some with considerable technical sophistication to them,
but I wonder how realistic people are about bringing it about (are
they too caught up in the dreams?).

   There are some of us who are at least interested in working on the
*REALITY*, and are preparing.  IMHO, there seems to be a surprising number
of people on this group interested in interdisciplinary studies, which seem
to me to be the best way to get there.

   I have few illusions as to my own ability to understand all the
complexities involved.  Just a little molecular genetics can dampen one's
spirits fast unless one is willing to stick it out.

> P.S.  I've read Blood Music, and consider it nonsense.  I've also read
>     a book named something like Down The Sea Of Stars (or something similar)
>     and its nano-techs seem to make a *LITTLE* more sense.  (but not much)

   David Brin gives a rather pessimistic, but plausible (given other
precedents set in history) view of nanotechnology in "Earth".  In short, it
says that they are very specific and need to have absolutely *pure*
nutrient baths...  and even then can only produce repeating units of some
sort, like a crystal.

Apparently the rest is from the editor [JoSH...], I guess:

[...encouraging comment deleted...  ;-)]
> With that caveat, the short answers:

> Assembly/disassembly:  Cells (both bacteria and cells that are part
> of larger organisms) do this all the time.  Nanotechnology simply
> presumes that we can translate mechanical design concepts to the
> same scale.

   Ack!  Have you studied any molecular genetics?  (well, you probably have ;-)
IMHO, there seems to be a definite lack of mention of just how radical a
*transformation* of concepts would be necessary to achieve such a translation
of scale.  What I know of molecular genetics seems to clearly indicate that
the mechanisms involved are many orders of magnitude more complex, even in
the prokaryotic case (single-celled organisms).  Many people resist the notion
of parallel computing (at least doing the parallelizing work themselves),
much less having to work with complex automata-like systems.  Now, of course,
this doesn't stop some of us die-hards from trying anyway ;-).

> Programmability: Similarly, cells have a "program" in their DNA.
> We are assuming that the structures of formal computation can be
> reproduced at a molecular scale.  The nanocomputer, in a crude 
> mechanical version, is actually the most complete actual nanotech
> design so far.

   A *program*?!?  Arghh...  although I will grant you that it *can* be called
a "program" per se, this says nothing about the encoding of this program.
Needless to say, it is neither linear, nor easily decodable.  This doesn't
account for the fact that these programs are perhaps meant to have more (and
different) long-term functions than anything we currently have.  Lately, I have
been considering the concept of what I call "minimal encodings", sort of
like packing the most information possible into a set of instructions.  It
seems that an information-theoretic attack on this problem might have
some interesting leads.  Again, the concept has to undergo radical revision.

> Movement: In many envisioned applications, the nano-robot floats
> around at random in some solution, making desired contacts with
> raw materials and other nanobots stochastically.  Otherwise it
>could crawl or have propellers.

   This seems reasonable.  There is a question of what general forms the
"raw materials" would take...  remember that our own assembly systems
are easily fooled by look-alikes in building proteins...  stochastic systems,
even in their most specific forms, can still in many cases be fooled,
especially as they get sufficiently small.

> ETA:  Optimistically, 200x.  It does depend on the amount of effort
> expended to that end.  Could we land a man on the moon before 2000
> (starting from where we are right now)?  Will we?

   This very much depends on what we want to work for...  and how much we do
about it.

   Discussion is, of course, encouraged.  I also am very interested in
getting the discussion into details/ideas of substance.

   Erich

             "I haven't lost my mind; I know exactly where it is."
     / --  Erich Stefan Boleyn  -- \       --=> *Mad Genius wanna-be* <=--
    { Honorary Grad. Student (Math) }--> Internet E-mail: <erich@cs.pdx.edu>
     \  Portland State University  /  >%WARNING: INTERESTED AND EXCITABLE%<

toms@fcs260c2.ncifcrf.gov (Tom Schneider) (03/01/91)

In article <Feb.26.17.12.27.1991.23759@athos.rutgers.edu> ms@pogo.ai.mit.edu
(Morgan Schweers) writes:

>    How much of nanotechnology is vaporware/dreaming?  Can anyone point me
>to a solid and *REALISTIC* exploration of PRESENT DAY research on this topic?

>    *  Nano-Disassemblers  -    The idea that something can actually be
>                           programmed at that size, and then ACTUALLY HAVE
>                           AN INFLUENCE on other items seems to be a sticking
>                           point for a lot of people.  What sort of materials
>                           are REALLY disassemblable?

Enzymatic digestion is an example.  The amino acids in proteins are joined
by peptide linkages; add water and they come apart.

So the rebuttal is:
Eat your dinner: it gets disassembled at the molecular level!

>    *  Nano-Assemblers     -    The same problem, really.  Even when people
>                           manage to accept the idea of disassembly, they
>                           rarely accept the idea of reassembly.

DNA is copied into RNA in the cell.  That is, for the set of chemical
"letters" in DNA (the bases a,c,g,t separated by deoxyribose sugars and
phosphates), a corresponding set of RNA "letters" (the bases a,c,g,u separated
by ribose sugars and phosphates) is created.  Then sets of three RNA bases are
read to insert amino acids into a growing chain in the process of
"translation".  The string of amino acids folds up into proteins that are
involved in all these steps plus many other wonderful things (like sensors
and feedback control systems!)

So the answer to your friends is:  what you eat makes you grow!

>    *  NanoProgramming     -    Is it REALLY possible to actually *PROGRAM*
>                           something that small?  What *IS* the size that we
>                           are talking about?

This one is harder to give an example of, because 'program' has the specific
meaning of using a high-level language such as C or Pascal on a general-purpose
computer.  But consider all the instinctive behavior that animals do.  It is
'programmed' by the genes.

Temporary Rebuttal: teach Fido to fetch the stick.

>    *  Movement            -    How does something that small MOVE?

Nobody knows in full detail how muscles work, but they do work at the
molecular level!

Rebuttal: a good punch in the nose should do the trick!

>    *  Power source        -    Obvious.  What's their power source?

In biology, the triphosphate nucleotides, mostly ATP, are the energy sources.
But to get the ATP, lots of other tricks are used, sunlight being the main one.

Rebuttal:  dab their bloody nose with cotton balls.  The cotton was grown using
solar power and constructed by molecular machines.

>    *  ETA                 -    What are the optimistic assessments of when
>                           this technology will be available?  The pessimistic?
>                           Or is all this just a joke?

Let's see - I forget.  Just how long has life been around on the planet?  At
least a billion years.  So the optimistic assessment of when we can use
nanotechnology is about a billion years ago!  Of course I understand you mean:
when can we begin to direct it for our own use.  Well, we've made bread since
ancient times; we make drugs, we now modify enzymes...  It's here, folks!  Of
course the full general assembler idea of Drexler is not here yet, but I
wouldn't bet on more than 50 to 100 years at the rate we are going.

>    Any other information on the *REALITY* of nano-hacking would be greatly
>appreciated.

Read lots and lots of molecular biology.  A good source is:

@book{Watson1987,
author = "J. D. Watson
 and N. H. Hopkins
 and J. W. Roberts
 and J. A. Steitz
 and A. M. Weiner",
title = "Molecular Biology of the Gene",
edition = "fourth",
year = "1987",
publisher = "The Benjamin/Cummings Publishing Co., Inc.",
address = "Menlo Park, California"}

Well, I was planning on waiting to see when people would notice my latest two
papers, but it looks like people aren't willing to post references or don't
look where I put them...  Anyway, all of you budding nanotechnologists
(molecular machinists) will have a fun time reading:

@article{Schneider.ccmm,
author = "T. D. Schneider",
title = "Theory of Molecular Machines.
{I. Channel} Capacity of Molecular Machines",
journal = "J. Theor. Biol.",
volume = "148",
number = "1",
pages = "83-123",
year = 1991}

@article{Schneider.edmm,
author = "T. D. Schneider",
title = "Theory of Molecular Machines.
{II. Energy} Dissipation from Molecular Machines",
journal = "J. Theor. Biol.",
volume = "148",
number = "1",
pages = "125-137",
year = 1991}

In these papers you will find plenty of references.  (Note: figure 1 is on page
97, but should be placed just after page 84.)  The second paper proves that it
will be possible to create ACCURATE computers built out of molecular parts.

>                                                         --  Morgan
> P.S.  I've read Blood Music, and consider it nonsense.

Perhaps you should read it again.

  Tom Schneider
  National Cancer Institute
  Laboratory of Mathematical Biology
  Frederick, Maryland  21702-1201
  toms@ncifcrf.gov

bill@braille.uwo.ca (W.B. Carss) (03/03/91)

[The arguments in this message are, fortunately for the future of life
 on Earth, flawed.  See below. -j]

>[This is essentially what is referred to as the gray goo problem, from

{...lines deleted}
> 
> Fears of accidental gray goo scenarios are less comparable to 
> rabbits in Australia, than to a story where feral automobiles
> run wild, mutating into herds of grass-eating vans hunted by
> carnivorous pickup trucks.  I would worry instead about what 
> people do with them on purpose; I was in Australia recently and
> I saw a hell of a lot more sheep than rabbits.
>

To suggest that we can build machines that work perfectly EVERY time forever
is (IMHO) just about as silly as the paragraph I have reproduced.  All I said,
essentially, was that considering our record so far, i.e. the things we think
we KNOW, I would be willing to bet that there will be a lot more POTENTIALLY 
serious screw-ups than successes.

In your GRAY GOO summation you neglected to discuss our (i.e. human) capacity
for error, our frequent assumption, or at least attitude, that we know 
everything, and the frequent occasions when we find out that we don't.  We are
a long long way from being anywhere near "programming" these nanobots, or 
even doing much more than day-dreaming about them.  For you to suggest that
sometime between now and whenever we are actually able to build or design 
them we as human beings will exhibit a lot less arrogance and a lot more
sense is just about as silly as your trite dismissal of a potential problem
that you didn't seriously address or even consider.  The problem isn't
nanotechnology, it is US: our inefficiency and stupidity, which have been
shown time and time again.

Let me take this a step further ...

We design nanobots that are either self-replicating or are built by other
nanobots.  Assuming that every replication from whatever source is not
perfect, some of the machines will be flawed.  To check for flawed
machines we design self-test abilities into the replicators.  Assuming that
the flaws don't develop in the self-test portion of these machines, 
everything works fine.  If, however, the flaws develop in the self-test
portion of the machine, you will either have a machine which is scrapping
good nanobots and/or passing flawed nanobots.

What will these flawed nanobots create?  We have no real way of knowing.
What type of end-product would be built by flawed nanobots?  Again, we have no
real way of knowing.
Could we test the testers with testers with testers...?  We would end up with
so many testers that nothing constructive could be built, because all of the 
available resources would be consumed by testing the testers...

So now I state the ultimate question, and I mean this very seriously,  

CAN SOMEONE SHOW ME WITH SERIOUS SCIENTIFIC REFERENCES I.E. SOME RESULT FROM 
EXPERIMENTATION OR SOME OTHER TESTABLE MEANS WHY THIS CRITICISM IS FLAWED?

I truly believe that it requires a more serious answer than the 
ever-popular gray goo catch-all which you have used.  I challenge you, in a
friendly way, to show me that I am wrong rather than attempting to sweep my
criticisms under the proverbial rug.

-- 
Bill Carss
bill@braille.uwo.ca

[The major flaw in the above argument is that it doesn't take any account
 of the difference between a machine and an animal.  Bill has taken a
 major (and completely unsupportable) leap of faith:  since we are 
 quite likely to make mistakes (true) our machines will suddenly become
 supermachines able to take over the world in spite of all we can do to 
 resist them.  
 A machine built for some useful purpose will tend to be as efficient as
 we can make it for that purpose.  This means that it will tend to be
 highly specialized, run on special fuels or power sources, require
 inputs preprocessed for their special purpose, etc.
 Consider a car.  It runs on gasoline.  For a car to turn feral, it would
 have to convert to some naturally occurring fuel, say wood.  You could
 *design* a car to live off the land; it would come with saws and chippers
 for harvesting trees, some low-efficiency but highly robust motor able
 to burn anything in some broad range; it would trade speed for off-road
 capability, etc, etc.  Spend a little time actually trying to design
 a self-fueling, self-repairing car.
 Now come back and tell me how you are accidentally going to build this
 amazing vehicle, without intending to, by making a MISTAKE in the process
 of building an ordinary regular car that runs on gasoline, and is fixed
 at service stations with parts manufactured by factories.

 If you don't understand why I'm talking about feral cars you've missed
 some important point and need to go back and try to explain why you 
 think gray goo could happen in the first place.  Early nanotech thinkers
 realized that nanomachines could be much more efficient than natural
 organisms and it is easy to jump to the conclusion that they could thus
 outperform them in an evolutionary struggle and take over the biosphere.
 That's wrong.  It's based on completely ignoring what it is that makes
 natural organisms less efficient than machines.  The answer turns out 
 to be flexibility, adaptability, self-repairability, evolvability.

 The lesson to be learned from this is that using nanotechnology for some
 given task might be much safer than the alternative if that alternative
 was to use BIOtechnology.  Modifying actual organisms does *not* have
 the safeguards of specialization, inflexibility, brittleness of design,
 and so forth that a mechanistic approach to nanotechnology has.  With
 biotechnology you're talking rabbits instead of cars.
--JoSH]

cphoenix@csli.stanford.edu (Chris Phoenix) (03/03/91)

Josh writes:
> The inefficiencies in cells
> are the very thing that allow mutation, and for lifeforms, that's
> good.  But you couldn't build a nanobot to mutate unless you tried
> very very hard to achieve that specific goal.

From what I've heard, it is true that a nanomachine can be easily designed
to avoid mutation.  But I don't believe it would be that hard to build one
that mutated.  All you'd need is some encoding of the specification in a 
format such that a relatively high fraction (say, .01%) of random changes to
the spec produce something meaningful.  Then program it to change 3 bits of
the spec before it replicates itself.  "Genetic" algorithms work in finding
good solutions to problems, and while I don't know much about them, it seems
that there should be a way to code a machine spec so that it could be
optimized in this way.
I can't see why anyone would want to, though.  Seems like once we get 
nanocomputers it would be easier to do a top-down design and simulation, and
get a machine that does exactly what we want (we hope) rather than relying on
chance.

[Genetic algorithms are a good example of what I'm talking about.  
 As an experiment, try writing a self-reproducing program in C that
 introduces random changes in itself, and still works.  Genetic 
 algorithms use highly inefficient production system mechanisms
 for the same reason cells do--because they are the only way (we
 know of) to make evolution actually work.
 --JoSH]
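
Purely as an illustration of the scheme Chris describes and the encoding
property JoSH is pointing at, here is a toy "genetic algorithm" in C (a
sketch under artificial assumptions: a 16-bit genotype and a made-up
fitness function, nobody's actual design).  The deliberately engineered
property is that the encoding is dense: every bit pattern is a viable, if
different, phenotype, so the 3-bit mutations have something to select among.

/* Toy "genetic algorithm": dense genotype encoding, 3-bit mutation
   per replication, selection of the fittest.  Compile: cc ga.c -o ga */
#include <stdio.h>
#include <stdlib.h>

#define POP    20
#define BITS   16
#define TARGET 12345u
#define GENS   200

/* Fitness: closeness of the 16-bit genotype to a target value.
   Every bit pattern is "viable" -- this is the dense design space. */
static long fitness(unsigned g) {
    long d = (long)g - (long)TARGET;
    return -(d < 0 ? -d : d);          /* higher is better */
}

/* Replication with mutation: copy the parent, then flip 3 randomly
   chosen bits, as in the scheme described above. */
static unsigned replicate(unsigned parent) {
    int i;
    for (i = 0; i < 3; i++)
        parent ^= 1u << (rand() % BITS);
    return parent;
}

int main(void) {
    unsigned pop[POP];
    int i, g, best = 0;
    srand(1);
    for (i = 0; i < POP; i++) pop[i] = rand() & 0xFFFFu;
    for (g = 0; g < GENS; g++) {
        best = 0;
        for (i = 1; i < POP; i++)
            if (fitness(pop[i]) > fitness(pop[best])) best = i;
        /* selection: the fittest individual replicates over the rest */
        for (i = 0; i < POP; i++)
            if (i != best) pop[i] = replicate(pop[best]);
    }
    printf("best genotype %u (target %u, fitness %ld)\n",
           pop[best], TARGET, fitness(pop[best]));
    return 0;
}

Replace the fitness function with "compiles and sorts correctly" and the
density vanishes: almost every mutation is then fatal, which is JoSH's
point about C programs.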

bill@braille.uwo.ca (W.B. Carss) (03/09/91)

In article <Mar.2.22.52.40.1991.21778@athos.rutgers.edu> bill@braille.uwo.ca (W.B. Carss) writes:
>[The arguments in this message are, fortunately for the future of life
> on Earth, flawed.  See below. -j]

>[The major flaw in the above argument is that it doesn't take any account
> of the difference between a machine and an animal.  Bill has taken a
> major (and completely unsupportable) leap of faith:  since we are 
> quite likely to make mistakes (true) our machines will suddenly become
> supermachines able to take over the world in spite of all we can do to 
> resist them.  

No, I don't believe I have taken that leap at all.  The point that you have
apparently missed or refuse to acknowledge is a little aspect of 
industry called "quality control".  If your machine building is so unflawed, 
why do we need quality control?  If our production methods were so 
perfect, you wouldn't ever see people returning things that don't work
properly.

Computers (at least ours) make errors perhaps once every few million
operations.  Why is that?  What is the result of the error?

In the case of nanobots, it would take billions of them to create
anything of an appreciable size i.e. something that is large enough
for us to get any real use out of it.  If you have billions of
machines each making one mistake every ten million operations, that
makes a lot of mistakes.  Certainly, in self-replicating nanobots most
of the mistakes would result in nanobots that are not viable.  But in
those cases where the mistake has not led to what we will call a fatal
error what will be the result?  We don't know.

Let's say we have built machines that respond to the colour navy blue.
We have designed these machines to replicate themselves, and in the
process an error occurs so that some of the machines now respond to
robin's egg blue, not navy blue.  The machine would still "work", it
would just be responding to a different shade of blue.  Whether this
is a serious aberration or not depends on what the machines "do" when
they come into contact with the activating colour.  Suppose they are
designed to break down the navy blue item, whatever it is.  Those
machines that are now responding to robin's egg blue would be
breaking down robin's egg blue items, not navy blue items.  What would
be the result of all of that?

As far as killer nanobots are concerned, certainly that may be the
stuff of which science fiction is made.  My point still is (and was) a
question.  How do we control what would essentially be mechanical
errors?

-- 
Bill Carss
bill@braille.uwo.ca

[Again: You are talking about two distinct phenomena:
 (a) [shades of blue] a machine with a specific, designed, function,
 performs that function on something slightly different than intended;
 (b) [gray goo] due to an error in copying, a plan for a ten-thousand
 part machine which does one specific function and runs on one specific
 fuel, becomes a plan for a 100 million part machine, able to perform
 hundreds of functions, recognize the circumstances under which each
 should be performed, run on a wide variety of naturally occurring 
 energy sources, and survive the chemical attacks of the natural,
 highly adaptable microorganisms it will compete with.

 Think of a car again for a moment.  Suppose we have a working car, 
 and we come up with some improvement that consists of a new design
 for one of the mechanical parts of the engine.  Can you design a car
 so that to incorporate the new part, I simply open the hood and throw 
 it in?  Well, guess what: cells work that way.  If a copying mistake
 produces a better part, anywhere, it works, automatically.  In a mechanical
 design, you have to change the whole design in a highly coordinated
 way to incorporate improvements.  Almost all the copying errors in 
 a cell are detrimental, i.e. they make it work less well.  A tiny fraction
 improve, or simply change, its function.  Almost all of even that
 tiny fraction, in a mechanical design, would simply cause it not
 to work at all.  Try changing a bubblesort program into a heapsort,
 one character at a time, with the constraint that each intermediate
 form not only sorts correctly but does so at least as well as bubblesort.
 --JoSH]

mmt@client2.DRETOR (Martin Taylor) (03/09/91)

I think that much of the argument as to whether nanobots might be likely
to mutate into grey goo hinges on a difference of opinion about the
underlying structure of a successful nanobot.  One designed to do
"exactly what we want (we hope) rather than relying on chance" is
likely to be built, shall we say, symbolically. and rely on rules with
truth values near 1 or 0.  The design space is large, and viable designs
few and far between.  JoSH's arguments apply pretty well to such machines
(even though theoretically the chance of an error leading to a new viable
design can never be reduced to zero).

But it is unlikely that we will know "exactly what we want" the machine
to do, and even if we did, we would probably want it to do something
quite similar if it was confronted with circumstances very like those
we envisaged in our designing.  A machine with these desirable abilities
would be in a design space that (at least locally) was rather dense
with viable machines, and the probability of a mutation leading to
a viable design could be appreciably different from zero.  If a mutated
machine propagated its design better than the original did, then it
has at least made the first step toward grey goo.

I'm not sure that the argument is as clear-cut as either the worriers
or JoSH make it out to be, but I am sure that it is better to err on
the side of prudence, and think very carefully about all the trade-offs
between behavioural flexibility (topological neighbourhoods likely to
contain viable points), design rigidity (empty neighbourhoods but
probably an ineffective machine), and mutability.  I know that there is
not a LOGICAL connection here, but there is a probable linkage if
designs are not well thought out.  For example, with what we know now,
behavioural flexibility is likely to be attained through the use of
distributed representations for the perceptual-behavioural knowledge
and the incorporation of trainability.  But a design of this kind
which replicated itself would be very likely to produce a working
descendant if some mutation altered the form of the network.  It would
just do something a little different.  I think here we have a situation
much closer to that of natural evolution than is envisaged by the
"clean design" school.
-- 
Martin Taylor (mmt@ben.dciem.dnd.ca ...!uunet!dciem!mmt) (416) 635-2048
To be a fundamentalist takes considerable flexibility of mind.

forbis@milton.u.washington.edu (Gary Forbis) (03/13/91)

The moderator writes:

>[Again: You are talking about two distinct phenomena:
> (a) [shades of blue] a machine with a specific, designed, function,
> performs that function on something slightly different than intended;
> (b) [gray goo] due to an error in copying, a plan for a ten-thousand
> part machine which does one specific function and runs on one specific
> fuel, becomes a plan for a 100 million part machine, able to perform
> hundreds of functions, recognize the circumstances under which each
> should be performed, run on a wide variety of naturally occurring 
> energy sources, and survive the chemical attacks of the natural,
> highly adaptable microorganisms it will compete with.

I'm not sure I see that much difference between the two cases.  Be that as
it may, I have deleted most of the following paragraph so I can focus on
the specific assertions which cause me trouble.

> Almost all the copying errors in 
> a cell are detrimental, i.e. they make it work less well.

I'm not so sure about this.  I seem to remember that from parent to child
there is usually a cross-over error on at least one chromosome.  I don't
remember how many transcription errors exist.  I doubt there would be any
life if these errors were that detrimental.  Cells get around this
problem by redundant code and huge portions of garbage code.  The garbage
code is probably created by ill-placed cross-overs.

> A tiny fraction
> improve, or simply change, its function.  Almost all of even that
> tiny fraction, in a mechanical design, would simply cause it not
> to work at all.

This may be true with today's designs when thought of at some levels but
is not necessarily true.  One of the nifty ideas of the 19th century was
the mass-produced replaceable part.  This was accomplished by tolerance
specifications.  We are allowed to have slop during the manufacturing
process as long as it is kept within limits.  Some replacement car parts
will work in several places (because their design takes advantage of
tolerances) where the originals are not interchangeable.

> Try changing a bubblesort program into a heapsort,
> one character at a time, with the constraint that each intermediate
> form not only sorts correctly but does so at least as well as bubblesort.

This is easy provided comment lines are indicated by a single character.
Simply grow the replacement procedure after the existing procedure, with a
comment prior to the existing procedure which will cause a branch when
activated; then activate the branch.  There are other ways of doing this,
but this is the easiest.
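
The recipe is easy to render in C, using the preprocessor where a
single-character comment marker is posited (a hypothetical sketch, not the
only way to do it).  The heapsort grows line by line inside the dead #if
region, the program sorting correctly by bubblesort at every intermediate
step; flipping the single character 0 to 1 then switches the live path.

/* Grow-under-cover illustration: every intermediate version compiles
   and sorts; only the final one-character flip changes behavior. */
#include <stdio.h>

#define USE_NEW 0   /* flip this one character to 1 to activate heapsort */

static void bubblesort(int *a, int n) {
    int i, j, t;
    for (i = 0; i < n - 1; i++)
        for (j = 0; j < n - 1 - i; j++)
            if (a[j] > a[j + 1]) { t = a[j]; a[j] = a[j + 1]; a[j + 1] = t; }
}

#if USE_NEW
/* The "replacement procedure", grown in dead code. */
static void sift(int *a, int n, int i) {
    int c, t;
    while ((c = 2 * i + 1) < n) {
        if (c + 1 < n && a[c + 1] > a[c]) c++;
        if (a[i] >= a[c]) break;
        t = a[i]; a[i] = a[c]; a[c] = t;
        i = c;
    }
}
static void heapsort_(int *a, int n) {
    int i, t;
    for (i = n / 2 - 1; i >= 0; i--) sift(a, n, i);
    for (i = n - 1; i > 0; i--) {
        t = a[0]; a[0] = a[i]; a[i] = t;
        sift(a, i, 0);
    }
}
#endif

static void sort(int *a, int n) {
#if USE_NEW
    heapsort_(a, n);       /* the "activated branch" */
#else
    bubblesort(a, n);
#endif
}

int main(void) {
    int a[] = { 5, 2, 9, 1, 7 }, n = 5, i;
    sort(a, n);
    for (i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}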

Burroughs B300 assembler has labels which can be addressed by relative
position, that is, +a will take you to the next a label and -a will take 
you to the last a label.  I wouldn't be surprised to learn more recent
languages have this feature.  I have had many programs malfunction in
wondrous ways yet be syntactically correct.  I leave it to you to see
how a single bit error or card misread could cause this problem.

Machine language is usually quite densely coded.  A single bit error might
turn the IBM PC machine code for jz into jnz, or mov cl,dl into mov cl,bl.
The MS-DOS loader does not contain error detection/correction.  It is not
clear to me that any particular bit error will even be executed, let alone
cause a hard failure.

I am a systems analyst/applications programmer.  On occasion I will encounter
program bugs in production programs which have existed for a decade or more.
Some system changes will cause existing programs to malfunction because the
data values and abstractions move beyond the original scope.  As systems age
they become more complex and their behavior becomes harder to predict.  I
would hate to be the person who has to claim any specific system cannot
mutate through random processes.

I hope it is clear that I am not as comfortable as you are on this issue.

> --JoSH]

--gary forbis

[Far be it from me to comfort you against your will... I'll just point
 out a couple of things: the human genome is in fact full of "comments"
 in which errors can occur and have no effect, and it is also quite
 redundant.  It is easy to design our nanobots without either feature.
 The other observation is that "evolving" a whole new sorting program
 under cover of comment and "switching" to it at the last minute has no
 feedback to guide the evolutionary process--which means that it has the
 same chance of happening as the program changing wholesale, in a single
 random event, to the given end state.  If this were the way evolution
 actually worked, I would be a fundamentalist.
 --JoSH]

markb@agora.rain.com (Mark Biggar) (03/13/91)

I'm not very worried about a nano-machine mutating into grey goo.  I'm
much more worried about a nano-machine designed to destroy dioxins in
the water supply mutating into a machine that goes after some very
similar chemical in my cells that I need to stay alive.  Small organic
toxins can be very similar to other necessary chemicals, but just
different enough to foul up the works.  In fact that is usually why
they are toxic.

Note that all you need is a possibly simple mutation in the sensory
part of the nano-machine to get this problem.

--
Mark Biggar
markb@agora.rain.com

[I think this is a much more well-founded concern than some others
 we've heard.  It points out the *extreme* dangers in trying to do
 large-scale environmental engineering with nanotechnology.
 --JoSH]

toms@fcs260c2.ncifcrf.gov (Tom Schneider) (03/13/91)

In article <Mar.8.16.32.40.1991.12453@athos.rutgers.edu> bill@braille.uwo.ca
(W.B. Carss) writes:

> How do we control what would essentially be mechanical errors?

The same way that we reduce errors in communications systems and computers:
error checking and correcting codes.  See:

@article{Schneider.ccmm,
author = "T. D. Schneider",
title = "Theory of Molecular Machines.
{I. Channel} Capacity of Molecular Machines",
journal = "J. Theor. Biol.",
volume = "148",
number = "1",
pages = "83-123",
year = 1991}

@article{Schneider.edmm,
author = "T. D. Schneider",
title = "Theory of Molecular Machines.
{II. Energy} Dissipation from Molecular Machines",
journal = "J. Theor. Biol.",
volume = "148",
number = "1",
pages = "125-137",
year = 1991}

>Bill Carss
>bill@braille.uwo.ca

(Kirk Reiser may be reading these in, ask him please.)
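
As a concrete toy instance of such a code (textbook material, not anything
from the papers cited above): a Hamming (7,4) code in C, which encodes four
data bits as seven and lets the receiver locate and repair any single
flipped bit.

/* Hamming (7,4): parity bits at positions 1, 2, 4; data at 3, 5, 6, 7. */
#include <stdio.h>

/* Encode 4 data bits into a 7-bit codeword (position 1 is the MSB). */
static unsigned encode(unsigned d) {
    unsigned b[8] = {0};
    b[3] = (d >> 3) & 1; b[5] = (d >> 2) & 1;
    b[6] = (d >> 1) & 1; b[7] = d & 1;
    b[1] = b[3] ^ b[5] ^ b[7];         /* covers positions 1,3,5,7 */
    b[2] = b[3] ^ b[6] ^ b[7];         /* covers positions 2,3,6,7 */
    b[4] = b[5] ^ b[6] ^ b[7];         /* covers positions 4,5,6,7 */
    return b[1]<<6 | b[2]<<5 | b[3]<<4 | b[4]<<3 | b[5]<<2 | b[6]<<1 | b[7];
}

/* Decode: the recomputed parity pattern ("syndrome") is the position
   of the flipped bit, or 0 if none.  Correct it, extract the data. */
static unsigned decode(unsigned c) {
    unsigned b[8], syn, i;
    for (i = 1; i <= 7; i++) b[i] = (c >> (7 - i)) & 1;
    syn = (b[4]^b[5]^b[6]^b[7])<<2 | (b[2]^b[3]^b[6]^b[7])<<1
        | (b[1]^b[3]^b[5]^b[7]);
    if (syn) b[syn] ^= 1;              /* fix the single-bit error */
    return b[3]<<3 | b[5]<<2 | b[6]<<1 | b[7];
}

int main(void) {
    unsigned d = 0xB;
    unsigned c = encode(d);
    unsigned i;
    for (i = 0; i < 7; i++)            /* flip each bit in turn */
        printf("flip bit %u: decode -> 0x%X\n", i, decode(c ^ (1u << i)));
    return 0;
}

Every line of output recovers the original 0xB; the redundancy buys back
the errors, at the cost Shannon's theory says we must pay in extra bits.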

  Tom Schneider
  National Cancer Institute
  Laboratory of Mathematical Biology
  Frederick, Maryland  21702-1201
  toms@ncifcrf.gov

peb@uunet.uu.net (Paul Baclaski) (03/13/91)

In article <Mar.8.16.35.15.1991.12515@athos.rutgers.edu>, mmt@client2.DRETOR (Martin Taylor) writes:
>...A machine with these desirable abilities
> would be in a design space that (at least locally) was rather dense
> with viable machines, and the probability of a mutation leading to
> a viable design could be appreciably different from zero. 

Not necessarily.

I can see two types of errors that can be controlled using standard
engineering:

1.  Development errors.  This can include errors from reading the genotype,
	identifying parts and installing parts.  The number of degrees of
	freedom incorrect parts have is directly proportional to the
	density of the "construction space".  Errors such as these
	can be mitigated through self testing and through validation--
	using multiple nanomachines that test each other such that
	reproduction is not possible if a machine is not validated.

	Such validation would certainly slow reproduction, but it 
	creates a nice fail-safe link such that two or more machines
	must fail before a flawed copy can slip through.  Further, since
	self-test is difficult, tests using multiple machines would
	have more flexibility.

2.  Genotype transcribing errors.  This corresponds to the "dense 
	design space" in the quoted message above.  It is tempting
	to make this a dense space with continuously varying genes--
	this is a design for evolvability.  On the other hand, the
	genotype "Turing Machine Tape" does not need to have this
	characteristic, and it can have checksums to ensure that
	it never mutates successfully.


Developmental errors must be avoided by using design discipline 
to test created machines.  Genotype error rates can be made arbitrarily
small by checksums.  Genotypes that are designed for evolvability
(continuously varying genes that map to continuums in the phenotype)
are to be avoided or at least used very carefully (this is what Martin
Taylor probably means by "locally dense").
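
A toy sketch of that checksum discipline in C (hypothetical throughout;
the Adler-style sum is just a stand-in for whatever code a real design
would use).  The control logic is the point: a genotype that fails its own
checksum is simply never replicated.

#include <stdio.h>
#include <string.h>

/* Simple 32-bit checksum (Adler-32 style). */
static unsigned long checksum(const unsigned char *p, int n) {
    unsigned long a = 1, b = 0;
    int i;
    for (i = 0; i < n; i++) { a = (a + p[i]) % 65521; b = (b + a) % 65521; }
    return (b << 16) | a;
}

struct genotype {
    unsigned char code[64];   /* the "tape" */
    unsigned long sum;        /* checksum computed at design time */
};

/* Replicate only if the genotype still matches its checksum. */
static int replicate(const struct genotype *g, struct genotype *child) {
    if (checksum(g->code, sizeof g->code) != g->sum)
        return 0;                      /* validation failed: refuse */
    memcpy(child, g, sizeof *child);   /* copy tape and checksum */
    return 1;
}

int main(void) {
    struct genotype parent, child;
    memset(parent.code, 0x42, sizeof parent.code);
    parent.sum = checksum(parent.code, sizeof parent.code);

    printf("clean parent replicates: %d\n", replicate(&parent, &child));
    parent.code[7] ^= 0x01;            /* a one-bit "mutation" */
    printf("mutated parent replicates: %d\n", replicate(&parent, &child));
    return 0;
}

A corrupted tape halts its own line of descent rather than propagating,
which is exactly the anti-evolvability property argued for above.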

In previous Gray Goo discussions, the conclusion has often been
that gray goo will probably not occur accidentally--it requires a 
(malicious) designer.  

For background, there are two relevant articles in _Artificial Life_, 
Chris Langton, ed., Addison-Wesley, 1989:  The Evolution of 
Evolvability by Richard Dawkins and Biological and Nanomechanical 
Systems: Contrasts in Evolutionary Capacity by Eric Drexler.



Paul E. Baclaski
Autodesk, Inc.
peb@autodesk.com

cphoenix@csli.stanford.edu (Chris Phoenix) (03/14/91)

Seems to me that some people have a basic confusion here, perhaps caused 
in part by the nanomachine/cell parallels that are drawn to show nanotech
is possible.  People see cells replicating, and cells mutating.  People
are told that nanomachines will replicate, and so they wonder if they
also will mutate.

The replication process will be totally different, and this is the key
to preventing mutations.  When I started to write this I thought I
could prove that it was possible to prevent mutations, but now I
realize I don't know enough to prove it.  I've even managed to
unconvince myself.  But at any rate, I hope this article will remove
some of the red herring of cellular-type mutation.

In a cell, chemicals float around in water, bump into each other
randomly, and cause changes.  For example, producing other chemicals.
For example, copying DNA.  The process is essentially highly parallel,
with no controls except feedback caused by chemicals changing some
parameter in the cell.  For example, a chemical may "turn on" a
section of DNA which produces another chemical which catalyzes a
reaction which ... and the desired end product has the ability to
deactivate the first chemical, so it's a self-limiting process.  Since
the cell has so many feedback loops and so many things happening in
parallel, if something is changed it has a good chance of leaving the
cell alive.  If a mutation to the DNA does not kill the cell outright,
there are error correction processes; but the error-correction, like
everything else, is dependent on chemicals bumping into the DNA at the
right time.  So it's possible for a change in the DNA to occur, the
cell to remain viable, and the change to go uncorrected.  This is
mutation.  (If this is wrong, please correct me; if it's
oversimplified, please don't bother.)

Picture the following nanomachine, designed to prevent mutation:
*everything* will be under the control of one or more computers.  If
these computers don't like what they see, they can shut down the
machine permanently.  If they choose not to copy the machine, they
won't.  And the copying process will also be under their direct
control.  The nanomachine can't turn itself on--that has to be done
from outside.  When a nanomachine is copied, it sends the contents of
all its computer programs back to the original.  If the original
verifies that the program is correct, it can turn the copy on.
Otherwise, it won't.  A machine under total computer control is
probably the easiest kind to build, anyway.
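
That readback-before-enable protocol is simple enough to sketch in C
(hypothetical names and structure throughout; the injected fault stands in
for a copying error in the hardware):

#include <stdio.h>
#include <string.h>

#define PROG_LEN 32

struct machine {
    unsigned char program[PROG_LEN];
    int enabled;              /* a machine starts powered off */
};

/* The copy step, with an optional injected fault modeling a
   transcription error in the copying hardware. */
static void copy_program(const struct machine *from, struct machine *to,
                         int inject_fault) {
    memcpy(to->program, from->program, PROG_LEN);
    if (inject_fault) to->program[5] ^= 0x10;
    to->enabled = 0;
}

/* Readback verification: the original compares the copy's program with
   its own master copy, and enables the copy only on an exact match. */
static int verify_and_enable(struct machine *parent, struct machine *child) {
    if (memcmp(parent->program, child->program, PROG_LEN) != 0)
        return 0;             /* mismatch: child stays off */
    child->enabled = 1;
    return 1;
}

int main(void) {
    struct machine parent = { "EXAMPLE PROGRAM BYTES", 1 }, child;

    copy_program(&parent, &child, 0);
    printf("clean copy enabled:  %d\n", verify_and_enable(&parent, &child));

    copy_program(&parent, &child, 1);
    printf("faulty copy enabled: %d\n", verify_and_enable(&parent, &child));
    return 0;
}

Note this catches errors in the copied *program*; as discussed below, it
says nothing about drift in the copying hardware itself.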

Now let's consider two kinds of mutation: A change in computer memory,
and any other "hardware" change.  There is one parallel between cells
and nanomachines that I'm willing to leave in: computer memory
corresponds to DNA.  It contains all the instructions for running and
replicating the machine.  However, computer memory has one feature
that the DNA doesn't: it has a computer.  The computer can manipulate
the memory far more easily and reliably than the cell can manipulate
the DNA.  It can store many copies of it, can do calculations on it,
and can compare large chunks of it.  As far as I know, cells can't do
any of these.  I know DNA is "redundant", but there are still only two
copies of any given chromosome, and the copies are different, and
there is no way to compare the chromosomes anyway.  A computer can
store enough information about its memory, and do enough checking of
its memory, that virtually any error in the memory can be detected.  I
don't know enough theory to do the calculations, but I think it should
be relatively easy to ensure that the probability of any undetected
memory error in any nanomachine that will ever be created is less than
the probability of <insert catastrophic event here>.  Can any
information-theory people confirm this?  If it's true, then mutation
of the kind that cells do is impossible.
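
The crudest version of this is worth seeing in code: hold several copies of
each memory word and vote (a toy sketch only; a real design would use proper
error-correcting codes, as discussed elsewhere in this issue).  Assuming at
most one of the three copies of a word goes bad between scrub passes, every
error is caught and repaired:

#include <stdio.h>

#define WORDS 8

static unsigned mem[3][WORDS];         /* three redundant banks */

/* Majority vote on one word; returns 1 if an error was repaired.
   Assumes at most one bank is bad for any given word. */
static int scrub_word(int i) {
    unsigned a = mem[0][i], b = mem[1][i], c = mem[2][i], m;
    if (a == b && b == c) return 0;    /* all agree: nothing to do */
    m = (a == b || a == c) ? a : b;    /* majority value */
    mem[0][i] = mem[1][i] = mem[2][i] = m;
    return 1;
}

int main(void) {
    int i, fixed = 0;
    for (i = 0; i < WORDS; i++)
        mem[0][i] = mem[1][i] = mem[2][i] = 0x1000u + i;

    mem[1][3] ^= 0x0004u;              /* a single-bank bit flip */

    for (i = 0; i < WORDS; i++) fixed += scrub_word(i);
    printf("words repaired: %d, word 3 = 0x%X\n", fixed, mem[0][3]);
    return 0;
}

An undetected error now requires two of the three copies of the same word
to be corrupted identically between scrubs, and that probability can be
pushed down by scrubbing more often or holding more copies.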

Now, let's consider non-memory changes.  This is where I started to
wonder if nanomachines could mutate after all.  A hardware change
should *not* be duplicated when the machine copies itself.  It will
change the behavior of the original, but will not be transmitted to
the copy.  This is the theory, anyway.  But this is where I started to
wonder.  It may be possible to change the copying hardware in a way
that causes mistakes in the copy, but the change itself is
undetectable to the original.  In this case, the mistakes might be
missed.  I think this is unlikely, because the "copying" hardware will
probably be a large part of the machine and will probably be used for
many other things as well.  But consider: a change in the precision of
a manipulator arm might cause very few errors, and it's possible that
the only error made in copying would be to reduce the precision of the
copy's manipulator arm...

Although computers can "see" all their internal state, they are
dependent on sensors to "see" the outside world.  How does a computer
know if its dioxin-detector is detecting the right molecule?  Well, it
has to look at the molecule.  With what?  With a dioxin-detector...
Now suppose the dioxin-detector uses the same arm that is used in
duplicating the machine.  One mistake in the hardware could be both
self-perpetuating and dangerous, and the computer wouldn't have
anything to do with it, and couldn't detect it.

Now we're getting into nanomachine engineering.  The problem, it
seems, has come down to this: Is it possible to build a copying
mechanism which will have detectable errors whenever it is broken
enough to make even slightly imperfect copies?  When I put it that
way, I get worried...

toms@fcs260c2.ncifcrf.gov (Tom Schneider) (03/16/91)

In article <Mar.13.19.09.22.1991.10983@athos.rutgers.edu>
cphoenix@csli.stanford.edu (Chris Phoenix) writes:

>Picture the following nanomachine, designed to prevent mutation:
>*everything* will be under the control of one or more computers. ...

What you are constructing is a way to make error correcting codes.  Shannon
showed many years ago that it is possible to construct codes that reduce the
error rate to as low as you may desire.  This stunning result is still not well
appreciated by communications engineers.  (I have a recent book in which it is
incorrectly stated.)

Basically it goes like this.  If you want a communications line which runs at
(say) 10^6 bits per second with one error in 10^5, it can be built.  If instead
you insist on 1 in 10^10 with the same data rate, sure, that can be built.
Well!  You need 1 in 10^20?  Sure!  And so on!  HOWEVER, the price you must pay
is that you must encode the signal before transmission and decode it
afterward.  There will be delays in these operations.

Actually, you can do this only so long as the data rate is below a certain
level called the channel capacity which depends on the power absorbed by the
receiver, the thermal noise and the bandwidth.  If you go above the channel
capacity, you'll get lots of errors that force you back (at least) to the
channel capacity.

How does this apply to nanotech?  What we need to do is make a correlation
between the little molecular machines and Shannon's mathematics.  I did that in
the papers I mentioned previously (JTB 148:83-123,125-137,1991).  The
translation is a bit bizarre from a biologist's viewpoint: so long as the
machine capacity is not exceeded, the error rate may be as low as is necessary
for survival of the organism the machine is part of.  ("desire" has no
meaning in evolutionary biology.)

Shannon's theorem shows that you can get the mutation rate as low as you might
want, but you can't make it zero.  (This is based on the assumption that there
is white gaussian noise affecting the machine, so if you can get around that,
you could beat the capacity.)

So what it will come down to in the end is we will have to decide how likely we
want errors to be, and then pay the design and material costs to get there.
Encoders and decoders are not free.
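
For a feel for the numbers, here is the standard Gaussian-channel form of
Shannon's formula, C = W log2(1 + P/N) with N = kTW, worked through in C.
The power, temperature, and bandwidth figures below are arbitrary
illustrations, not values from the cited papers:

/* Shannon capacity of a thermal-noise channel.  Compile: cc cap.c -lm */
#include <stdio.h>
#include <math.h>

int main(void) {
    double k = 1.380649e-23;      /* Boltzmann's constant, J/K */
    double T = 300.0;             /* room temperature, K */
    double W = 1.0e6;             /* bandwidth: 1 MHz */
    double P = 1.0e-12;           /* received power: 1 picowatt */
    double N = k * T * W;         /* thermal noise power in bandwidth W */
    double C = W * log2(1.0 + P / N);

    printf("noise power = %.3e W\n", N);
    printf("capacity    = %.3e bits/s\n", C);
    printf("a 10^6 bits/s signal is %s capacity\n",
           1.0e6 < C ? "below" : "above");
    return 0;
}

With these figures the capacity comes out near 8 megabits per second, so
the 10^6 bits per second line of the example sits comfortably below it,
and coding can drive its error rate as low as we are willing to pay for.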

  Tom Schneider
  National Cancer Institute
  Laboratory of Mathematical Biology
  Frederick, Maryland  21702-1201
  toms@ncifcrf.gov

john@granada.mit.edu (John Olson) (03/16/91)

In article 992,  Chris Phoenix writes:

>that the DNA doesn't: it has a computer.  The computer can manipulate
>the memory far more easily and reliably than the cell can manipulate
>the DNA.

I am not convinced of this.  My understanding is that DNA replication is
very reliable.  Keep in mind the vast information content of DNA, and the
rarity of mutations.

The August 1988 Scientific American had an article by Radman and 
Wagner, "The High Fidelity of DNA Duplication," on this topic.  They
say that DNA is duplicated with an error rate of about 1 error per ten
billion (10^10) base pairs.  A comparable number for the exabyte tape
backup system here would be 1 error per trillion (10^12) bits.  That's
only a factor of 100 (really 50, since each base carries two bits worth
of data: four states for a base, vs. two states for a bit).  How do
these error rates compare to, say, the rates for reading RAM or ROM?
Someone out there can probably tell us.

John Olson.

bsmart@bsmart.tti.com (Bsmart) (03/16/91)

In article <Mar.13.19.09.22.1991.10983@athos.rutgers.edu>,
cphoenix@csli.stanford.edu (Chris Phoenix) writes:

> replicating the machine.  However, computer memory has one feature
> that the DNA doesn't: it has a computer.

I think part of the problem is that the point of nanomachines is that
they operate at the molecular (or even submolecular) level, and are
presumably implemented on a similar scale.  Nanocomputers probably won't
work like the digital electronic computers we're familiar with today;
they'll be mechanical devices that represent logic states by chemical
composition, physical arrangement of their components, or some such
trick.  Now, there's no reason why a mechanical computer can't be
digital in its operation (every now and then Martin Gardner presents
some delightful macro-scale mechanical computers in Scientific American;
one of my favorites was an enormous hypothetical contraption composed of
ropes and pulleys and operated by grunting, sweating teams of slaves)
but the fact remains that computers (or any other kind of gadgetry)
implemented on a molecular scale will be subject to chemical and
radiation interference at the same scale.

Just as old-fashioned core memory (does anybody still remember?)
represented information by imposing different magnetic polarities on iron
doughnuts, nanocomputers will most likely represent bits by moving an
oxygen atom (or something) from HERE to THERE in their structure, and
"execute" their "programs" by folding and unfolding and cleaving and
binding -- kinda the way natural proteins do.  If a mutagenic chemical
or other influence came along and moved the atom elsewhere, the result
would be pretty similar to what happens in an electronic computer in
response to a voltage surge or a stray cosmic-ray hit.  Perhaps the
error could be corrected by some kind of detection scheme (we do it with
memory errors all the time on the macro scale) but if the error were
severe enough, or if it happened to hit the error-correcting part of the
nanocomputer, then on come the red lights and it no longer behaves like
a computer.

Even a computer has to be made out of something, and when you're working
with a device composed of only a few hundreds or even a few thousands of
atoms, you have to live with some constraints and failure modes that
just don't apply on larger scales.  My concept of a nanocomputer is that
it's basically some exotic and complicated chemical compound -- or maybe
a soup containing several such carefully constructed compounds.  Perhaps
structures on a somewhat larger scale (bigger than molecular, but not
bigger than cellular) would fall into the "nanotechnology" realm as
well, but even these would be dependent upon very small effects for
their operations.

Discussion, gentlebeings?

[The current tentative designs for assemblers and nano-robots with
 nanocomputer "brains" are on the order of billions, even up to a trillion,
 of atoms.  Even proteins, which are very special-purpose machines,
 run into the thousands of amino acids, at a handful of atoms each.
 If an alpha particle came whiffling through a nanocomputer of the
 design Eric Drexler has talked about with the rod logic, I would
 imagine it would seize up and not work at all, so many random bonds
 would form between closely-spaced moving parts.  This is one more
 reason to expect solid-state, i.e. no moving parts, designs to be
 used wherever possible--alphas will still give you transient errors
 (a la DRAM) but wouldn't trash the whole machine.  Your nanocomputer
 would have to reboot, though.  Hopefully we can make necessarily-moving
 parts (arms) with tolerances and geometries such that the bonds we
 put there re-form after ionization. 
 --JoSH]

cphoenix@csli.stanford.edu (Chris Phoenix) (03/25/91)

In article <Mar.15.22.38.00.1991.16995@athos.rutgers.edu> john@granada.mit.edu (John Olson) writes:
>In article 992,  Chris Phoenix writes:
>>that the DNA doesn't: it has a computer.  The computer can manipulate
>>the memory far more easily and reliably than the cell can manipulate
>>the DNA.
>
>I am not convinced of this.  My understanding is that DNA replication is
>very reliable.  Keep in mind the vast information content of DNA, and the
>rarity of mutations.

Perhaps I shouldn't have said "reliably".  However, my main point was 
"manipulate".  Computers can do things to their memory that cells simply
can't do to their DNA.  For example, perform arbitrary Turing-computable
computations on it, and compare arbitrary different parts of it.  This is
what allows computers to be designed with "mutation" rates as low as you
want.  And this was the main thrust of half of my previous article--that 
we don't have to worry about software mutation in a well-designed nanomachine.
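
To make the "manipulate" point concrete, here is a minimal sketch
(mine, not from the thread; the program bytes and the choice of hash
are illustrative assumptions) of the comparison I have in mind: the
original machine keeps a digest of the known-good program and refuses
to enable any copy whose program doesn't match it bit for bit.

    import hashlib

    # Sketch: the original verifies a copy's program before enabling
    # it, so a mutated program is detected instead of propagated.
    MASTER_PROGRAM = b"fetch; bond; advance; repeat"    # stand-in "DNA"
    MASTER_DIGEST  = hashlib.sha256(MASTER_PROGRAM).digest()

    def may_enable(copy_program):
        """True only if the copy's program is bit-for-bit intact."""
        return hashlib.sha256(copy_program).digest() == MASTER_DIGEST

    print(may_enable(MASTER_PROGRAM))                   # True: turn it on
    print(may_enable(b"fetch; bond; advanqe; repeat"))  # False: leave it off

No cell can run a computation like this over its own DNA; a computer
with a distinct memory and processor can.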

cphoenix@csli.stanford.edu (Chris Phoenix) (03/25/91)

In article <Mar.15.23.15.59.1991.17434@athos.rutgers.edu> bsmart@bsmart.tti.com (Bsmart) writes:
>
>In article <Mar.13.19.09.22.1991.10983@athos.rutgers.edu>,
>cphoenix@csli.stanford.edu (Chris Phoenix) writes:
>> replicating the machine.  However, computer memory has one feature
>> that the DNA doesn't: it has a computer.
>
>I think part of the problem is that nanomachines, by definition, operate
>at the molecular (or even submolecular) level, and are presumably
>implemented on a similar scale.  Nanocomputers probably won't
>work like the digital electronic computers we're familiar with today; ...
> Now, there's no reason why a mechanical computer can't be
>digital in its operation ...

I guess I should have thought more about the computer I was talking about.
Yes, I was assuming a digital computer, with distinct memory and processor.
I think Drexler's rod-logic computer qualifies.  I wasn't saying 
"nanomachines will not mutate," but rather "It's possible to build a 
nanomachine which will not mutate as a cell does"--that is, the "genetic
material" (computer memory) will not make undetected changes.  But maybe I 
should have thought more about the processor...

> ... but the fact remains that computers (or any other kind of gadgetry)
>implemented on a molecular scale will be subject to chemical and
>radiation interference at the same scale.

Notice, I said *undetected* changes.  I don't deny errors will happen--what
I was trying to point out is that if they happen, you can almost certainly
detect them.  And you can make that "almost" as close to "always" as you 
want.

>Perhaps the
>error could be corrected by some kind of detection scheme (we do it with
>memory errors all the time on the macro scale) but if the error were
>severe enough, or if it happened to hit the error-correcting part of the
>nanocomputer, then on come the red lights and it no longer behaves like
>a computer.

In my scheme, "on come the red lights" means "the nanomachine turns itself
off."  This is the best scenario.  The one we're trying to avoid is where
there's an error and the red lights don't come on.

We both need to think about what "the error-correcting part of the
nanocomputer" means.  I can see two possibilities: 1) a certain area 
of memory; 2) a certain part of the processor.  I don't think 1) needs
any special consideration.  Consider a correcting scheme in which 
everything in memory is stored three times.  The computer reads all three
locations each time it wants a byte, and if they disagree it turns off the
machine.  (A sketch of this scheme appears at the end of this article.)
Now, which of the three copies is the "error-correcting part"?
In other words, there needn't be a "critical" part of memory such that if
it's damaged the scheme won't detect the error.
2) may be more worrisome.  I would almost class this under the "hardware
errors" that I talked about in my first post--the ones that don't involve
"mutation" of the "information" but can still cause problems.  Here we're
getting into areas I don't know about, like circuit testing.  The question
is whether it's possible for a computer's hardware to fail so that it 
makes mistakes, but it doesn't "know" it's faulty, and the fault can't be 
detected from outside.  CPUs are currently hitting the market with
errors in them--I think I read about one that would crash if it tried to
read the last 36 bytes of a segment!  How do you test for an error like that?
And what if the only code that filled a segment was the machine-copying code?
Again, anyone who knows enough, please fill in!
I find it really unlikely that such an error would be self-propagating, i.e.,
that a hardware bug in the computer could cause copies to be built with 
exactly the same hardware bug in the computer.  (The case of the sloppy
arm building another sloppy arm is probably more likely, because there 
the cause and effect are both mechanical, whereas with the computer error
the effect is still mechanical but the cause is a logical error in running
a program--the only way such a bug can propagate is by changing the execution
of a program.)  But I'm starting to handwave here, so I'll stop.
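
Here is the sketch promised above for the triple-storage scheme in
point 1) (the class and names are mine, purely illustrative): every
byte is stored three times, and any disagreement on readback shuts
the machine off rather than risk an undetected "mutation".

    # Sketch of triple storage with shutdown on disagreement.
    class TripleMemory:
        def __init__(self, data):
            self.copies = [bytearray(data) for _ in range(3)]

        def read(self, addr):
            a, b, c = (copy[addr] for copy in self.copies)
            if a == b == c:
                return a
            raise RuntimeError("memory disagreement -- machine shut off")

    mem = TripleMemory(b"nanoprogram")
    assert mem.read(0) == ord("n")    # normal, agreeing read
    mem.copies[1][0] ^= 0x20          # corrupt one of the three copies
    try:
        mem.read(0)
    except RuntimeError as err:
        print(err)                    # the error is detected, not missed

To slip past this scheme, an error would have to corrupt all three
copies identically before a read; adding more copies makes that as
improbable as you like, which is the "as close to 'always' as you
want" claim above.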

opus@triton.unm.edu (UseCondomsFight AIDS) (03/25/91)

In article <Mar.13.19.09.22.1991.10983@athos.rutgers.edu> cphoenix@csli.stanford.edu (Chris Phoenix) writes:
>
>
>Picture the following nanomachine, designed to prevent mutation:
>*everything* will be under the control of one or more computers.  If
>these computers don't like what they see, they can shut down the
>machine permanently.  If they choose not to copy the machine, they
>won't.  And the copying process will also be under their direct
>control.  The nanomachine can't turn itself on--that has to be done
>from outside.  When a nanomachine is copied, it sends the contents of
>all its computer programs back to the original.  If the original
>verifies that the program is correct, it can turn the copy on.
>Otherwise, it won't.  A machine under total computer control is
>probably the easiest kind to build, anyway.


        This brings up an interesting point...  Most likely a privately owned
company will be producing a particular nanobot for a particular application.
Now say that said company finds that the memory (whether it is organic,
mechanical, or electrical is irrelevant) it will be using works fine for
about a week, but starts to show errors after that time.  The company
has already made contractual agreements for other companies to produce this
memory.  (Assume that the nanobot is too complicated to be self-replicating,
or that the FDA has not yet approved self-replicating nanotechnology.)
Suppose further that the nanobots therefore have to stop operating after a
week, but that the application requires them to work for at least two weeks.
Assume that the company has already made public statements about its new
nanobot.  Instead of splitting the nanobot's job into two stages and having
to create two versions of the nanobot (greatly increasing the price), the
company goes ahead with production.

        This scenario has happened in other technologies too many times to
count.  If this fictional nanobot were used to clean the teeth of dentists'
patients, could it possibly have its memory corrupted and try to
clean the brain?

-------------------------------------------------------------------------------
Institute for Combat Arms and Tactics - System programmer
MIDCO - Stereotactic Neurosurgery - System programmer
opus@triton.unm.edu
jkray@bootes.unm.edu
-------------------------------------------------------------------------------

[Most computers use memory that starts to show errors after about a 
 millisecond.  So they "refresh" the memory periodically.  Any well
 understood phenomenon can simply be taken into account in an engineering
 design.  What you have described above would constitute incompetent
 engineering, and the company would be out of business very soon.
 Unless propped up by the government...
--JoSH]

mike@everexn.com (Mike Higgins) (03/25/91)

In <Mar.15.22.38.00.1991.16995@athos.rutgers.edu> john@granada.mit.edu (John Olson) writes:

> . . .  My understanding is that DNA replication is
>very reliable.  Keep in mind the vast information content of DNA, and the
>rarity of mutations.
> . . . DNA is duplicated with an error rate of about 1 error per ten
>billion (10^10) base pairs.
> . . .   How do these error rates
>compare to, say, the rates for reading RAM or ROM?  Someone out
>there can probably tell us.

  I'm told that hard disk drives make errors at about 1 per 10^10 to
10^12 bits.  So a hard disk is as good as or better than DNA, and RAM
is MUCH BETTER!  Consider: on a typical IBM AT computer, you have 
one megabyte, or about 10^7 bits.  These bits are destructively read and
re-written (inside the chip) once every 4 milliseconds for the normal
dynamic RAM refresh!  That's 10^7 bits refreshed about 10^10 times a year.
You would be very upset if your PC gave you 1 parity error a year, but
that represents an error rate of only 1 error in every 10^17 bit
operations!  DNA can't hold a candle to the box on my desk...
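
As a sanity check, a back-of-the-envelope script (illustrative only;
the megabyte and 4-millisecond figures are the ones quoted above):

    # Rough comparison of DNA vs. DRAM per-bit error rates.
    SECONDS_PER_YEAR = 365 * 24 * 3600        # ~3.15e7 seconds

    bits         = 8 * 2**20                  # one megabyte, ~8.4e6 bits
    refreshes_yr = SECONDS_PER_YEAR / 0.004   # a refresh per 4 ms: ~7.9e9
    bit_ops_yr   = bits * refreshes_yr        # ~6.6e16 bit operations/year

    dram_rate = 1 / bit_ops_yr                # one parity error a year
    dna_rate  = 1 / 1e10                      # ~1 error per 10^10 base pairs

    print(f"DRAM: one error per {1/dram_rate:.1e} bit operations")
    print(f"DNA:  one error per {1/dna_rate:.1e} base pairs copied")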
	Mike Higgins
	mike@everexn.com

landman@eng.sun.com (Howard A. Landman) (03/25/91)

In article <Mar.13.19.09.22.1991.10983@athos.rutgers.edu> cphoenix@csli.stanford.edu (Chris Phoenix) writes:
>Picture the following nanomachine, designed to prevent mutation:

I don't think anyone can have any serious argument with the notion
that it is physically possible to design machines which can build
useful items but have no chance of reproducing themselves.  Your
average high-school wood shop, if you imagine it being run by a
computer, is as good an example as any.  QED.

However, people seem to be assuming that this completely settles
the question.  I don't think so.  It doesn't address the "argument
from practicality".

Consider: we know how to build computers which are pretty much
impervious to attacks by viruses.  Yet not only don't we do so,
we often don't even build systems that make use of the built-in
protection available in their own microprocessors (e.g., in the
Macintosh OS, every program is run in privileged mode).  Why?
Because it's "faster" or "cheaper".

So (for the sake of argument) I claim that when people first
start doing nanotech it will be hard enough just to get the damn
stuff to work at all, and few if any will be concerned about
making absolutely sure that nothing can go wrong.  Some of the
design problems will be so hard that we will use evolution (the 
physical equivalent of "genetic algorithms") to solve them.  If
you don't believe this, consider that there are already people
developing artificial antibodies this way.  And some of the
methods for creating large systems via tiny operators might
more profitably make use of reproduction and morphogenesis based
on simple local rules, than of global direction by a Master
Control Program with all the massive communication and coordination
that implies.  In the first case (evolution as a design technique),
you can't (by definition) turn off evolution.  In the second, since
the end product is based on a few simple rules (which it is cheaper
NOT to place under strict error correction control), a mutation in
the rules could lead to a VERY different end product, which has some
potential for altering the reproductive viability of the replicators.
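
For those who haven't met genetic algorithms, a toy sketch (entirely
made up -- the target, fitness function, and rates are arbitrary) of
why mutation is the search mechanism itself, not a removable feature:

    import random

    # Toy genetic algorithm: evolve a bit-string toward a target.
    # Set the mutation rate to zero and the search goes nowhere.
    TARGET = [1] * 20

    def fitness(genome):
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.05):
        return [g ^ 1 if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(20)]
                  for _ in range(30)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]                       # selection
        population = [mutate(random.choice(parents)) for _ in range(30)]

    print("best fitness:", fitness(max(population, key=fitness)))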

None of these things are very likely to create a problem.  But as
nanotech spreads, it will be used more and more often, by a broader
variety of people in a more motley collection of settings, and the
cumulative probability of SOME disaster happening SOMEWHERE will
eventually approach unity.
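
That last claim is just the arithmetic of independent trials.  With
made-up numbers (a per-deployment disaster probability p and n
independent deployments):

    # P(some disaster somewhere) = 1 - (1 - p)^n -> 1 as n grows.
    p = 1e-6              # per-deployment odds: made up for illustration
    for n in (10**3, 10**6, 10**9):
        print(f"n = {n:>10}: P(some disaster) = {1 - (1 - p)**n:.6f}")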

Note that it is wholly inadequate to counterargue that we "can" do
this or that to prevent problems.  You must argue that people "will"
do this or that, even under pressures of schedule, budget, politics,
war, etc.  This is a much harder argument and I haven't seen anyone
attempt it yet.

--
	Howard A. Landman
	landman@eng.sun.com -or- sun!landman

lovejoy@alc.com (Alan Lovejoy) (03/27/91)

In article <Mar.24.18.57.41.1991.897@athos.rutgers.edu> landman@eng.sun.com (Howard A. Landman) writes:
>Note that it is wholly inadequate to counterargue that we "can" do
>this or that to prevent problems.  You must argue that people "will"
>do this or that, even under pressures of schedule, budget, politics,
>war, etc.  This is a much harder argument and I haven't seen anyone
>attempt it yet.

An excellent argument that hits an unarguable bull's-eye at precisely the right
target.  Before one learns to run, one must first master walking.  Before
one learns to walk, one must first master crawling.  And so on.

This argues that we should strive for extremely tight safety procedures and
secure environments for nanotech experimentation and practical usage,
procedures that the FDA or its equivalent will obsessively require until
such time as we have mastered a practical (i.e., usable, cheap, fast, simple,
reliable and accepted) method (methodology?) for keeping our nanotech fire
under control.

Safe nanotechnology is like safe sex.  Until the safety technology is so 
user-friendly that no one would consider doing things the unsafe way, we had 
better assume that people will be tempted to do things the unsafe way.

Given the fact that the nanotech genie cannot be kept in its bottle 
indefinitely--no matter what precautions we take--it behooves us to make
viable safety methodology and technology our number one development priority.

Another priority issue is this:  at what point does the current situation
with respect to nanotech research and experimentation--where there are 
essentially no controls or safeguards--become significantly dangerous?  That
time is arguably decades away.  But perhaps there are those who feel an even
greater urgency?  (Genetic engineering is admittedly already a potential
problem in the case of malicious or intentionally-inimical activities).


-- 
 %%%% Alan Lovejoy %%%% | "Do not go gentle into that good night,
 % Ascent Logic Corp. % | Old age should burn and rave at the close of the day;
 UUCP:  lovejoy@alc.com | Rage, rage at the dying of the light!" -- Dylan Thomas
__Disclaimer: I do not speak for Ascent Logic Corp.; they do not speak for me!

[There is one very serious danger to this approach:  If responsible people
 inhibit themselves too much in developing nanotechnology, irresponsible
 people are certain to beat them to Breakthrough, and all hell will be
 out for noon.  There are no simple black-and-white issues here.
 --JoSH]