[sci.nanotech] Hardware Error Correction

71450.1773@compuserve.com (Steven B. Harris) (03/25/91)

   Chris Phoenix reminds us that a spontaneous error in the con-
struction of a self-replicating machine might conceivably:

1) Cause changes in construction of the next (third) generation
that cause the error to be exactly perpetuated "extragenetically"
(i.e., without a software change).  

2) Impair the ability of the mutant machine to sense the error in
next (third) generation machines.

3) Along the way, screw up the function of the machine so as to
make it dangerous.

  Mutations that meet all these criteria are the nasty ones. How
does the *initial* mutant machine escape the censorship of the
previous generation of purebreds, we wonder?  Perhaps the
mutation is of such a subtle kind that it can only be found by
destructive testing and comparison, and then only when compared
with and tested by a "clean" unmutated line of machines.

   If so, I think the problem can still be dealt with.  It seems
to me that if necessary each new generation of machines can be
held and scrutinized by and against _several_ previous genera-
tions, and required to pass comparison tests against all of them
(selected machines can be taken off the line for random destruc-
tive comparison, as is done on modern assembly lines).  If a
generation passes comparison tests in this fashion, this does
_not_ rule out a few 1st generation mutations that occur in
machines that didn't happen to get taken apart, but it *does*
rule out mutations in the parent machines of the form which meet
the first criterion, or ALL the daughter machines would be
affected.   So if a generation passes the comparison test, you
can release the parents (or if very conservative, the grand-
parents), and allow the daughters to proceed with the next
generation, which you then iteratively test.  If, on the other
hand, you find consistent mutations in a given generation, you
destroy both it AND the parents, in much the same way that you
may destroy a whole line of animals in stockbreeding when an
unwanted genetic trait appears.  Even mutations that do nothing
but increase the mutation rate can be ferreted out (to an
arbitrary degree) and destroyed by this method.
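
   To make the bookkeeping concrete, here is a rough sketch of the
scheme in Python.  Every name and number in it (the sample rate, the
mutation rate, the dissection test) is invented for illustration; a
real design would hinge on what "destructive comparison" actually
measures.

  import random

  SAMPLE_RATE = 0.05    # fraction of each generation pulled off the line

  class Machine:
      def __init__(self, mutated=False):
          self.mutated = mutated
      def replicate(self, mutation_rate=1e-6):
          # a daughter inherits a heritable flaw or picks up a fresh one
          return Machine(self.mutated or random.random() < mutation_rate)

  def dissect_and_compare(machine):
      # stand-in for destructive comparison against several clean lines;
      # True means a discrepancy was found
      return machine.mutated

  def screen(daughters):
      # dissect a random sample; decide the fate of daughters AND parents
      sample = random.sample(daughters,
                             max(1, int(len(daughters) * SAMPLE_RATE)))
      flawed = sum(dissect_and_compare(m) for m in sample)
      if flawed == len(sample):
          # a consistent flaw implicates the parents (criterion 1),
          # since ALL daughters would carry it: destroy both generations
          return "destroy parents AND daughters"
      if flawed:
          return "destroy flawed daughters; retest the parent line"
      # clean sample: release the parents (or grandparents, if cautious)
      return "release parents; daughters breed the next generation"

  line = [Machine() for _ in range(1000)]
  brood = [m.replicate() for m in line for _ in range(4)]
  print(screen(brood))

   The flawed == len(sample) branch is the stockbreeding argument in
miniature: a criterion-1 flaw in a parent shows up in every daughter,
so a consistent discrepancy condemns the parent line as well.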

   All this, of course, is analogous to routines that check one
program off against another, save that things are a bit more
circuitous due to the fact that finished machines presumably
can't be compared except by inference from a sample of their
dissected progeny.  Still, anything that can be done with
software ought to be able to be done with hardware-- they're the
same thing, are they not?

   The lesson, I suppose, is that you can't necessarily catch all
mutations if you have immediate-release free-floating repli-
cators, like E. coli.  Thus, we may need tiny nanomachine "fac-
tories" or at least storage facilities so that we can do our
comparisons and go back and nip mutant lines in the bud before
they go out into the world (one thinks of the thymus, with its
tiered generations of cells in different phases of maturation). 
If we MUST do it all with only one generic variety of replicator, it
might be possible to have a lot of them interlock arms and build
such a holding and checking facility out of themselves.  Allow me
to christen this a "breeding ball" (you heard it here first,
folks).  The term is suggestive of snake biology, but my visual
metaphor for this is actually the ball of thousands of living
workers which forms a nest of army ants.  

                                  Steve Harris

cphoenix@csli.stanford.edu (Chris Phoenix) (03/27/91)

In article <Mar.24.18.23.59.1991.678@athos.rutgers.edu> 71450.1773@compuserve.com (Steven B. Harris) writes:
>   Chris Phoenix reminds us that a spontaneous error in the con-
>struction of a self-replicating machine might conceivably:
>
>1) Cause changes in construction of the next (third) generation
>that cause the error to be exactly perpetuated "extragenetically"
>(i.e., without a software change).  
>
>2) Impair the ability of the mutant machine to sense the error in
>next (third) generation machines.
>
>3) Along the way, screw up the function of the machine so as to
>make it dangerous.
>
>  Mutations that meet all these criteria are the nasty ones. How
>does the *initial* mutant machine escape the censorship of the
>previous generation of purebreds, we wonder?  Perhaps the
>mutation is of such a subtle kind that it can only be found by
>destructive testing and comparison, and then only when compared
>with and tested by a "clean" unmutated line of machines.

Why does it need to specifically meet criterion 2?  If a faulty machine
escapes the scrutiny of its maker, surely a copy of it would escape its
own scrutiny.  Another point is that a machine may mutate any time after
it is constructed and turned loose.  So it never has to escape the 
scrutiny of a healthy machine, just its own self-checking... and at the
same time, a copy will only have to escape the scrutiny of an already-
damaged machine.
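
A toy calculation shows the asymmetry.  Suppose (numbers invented out
of thin air) a healthy external checker catches a given flaw 99.9% of
the time, while a machine whose checking apparatus shares the flaw
catches it only 10% of the time:

  # invented detection rates, purely for illustration
  p_external = 0.999   # healthy machine inspecting a flawed copy
  p_self     = 0.10    # flawed machine inspecting its own flawed copy

  print(f"escape past external check: {1 - p_external:.1%}")   # 0.1%
  print(f"escape past self-check:     {1 - p_self:.1%}")       # 90.0%

Nearly three orders of magnitude, just from losing the independent
checker.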

> If so, I think the problem can still be dealt with.  It seems to me
>that if necessary each new generation of machines can be held and
>scrutinized by and against _several_ previous generations .... this
>does _not_ rule out a few 1st generation mutations that occur in
>machines that didn't happen to get taken apart, but it *does* rule
>out mutations in the parent machines of the form which meet the first
>criterion, or ALL the daughter machines would be affected.

I don't think it's that simple, though I like your basic idea.  A
hardware flaw may be intermittent, and may have virtually random
effects.  Consider a flaw that manifests 1% of the time.  Even a
multi-generation test might not catch it for several generations.  At
that point, you'd have some damaged machines loose... and no way to
know how damaged they are.  A hardware flaw that copies itself is
threatening precisely because it propagates itself, but there are
lots of other flaws out there... I suspect we'll have to
answer this question for each nano-design individually.  How to tell
what the possible flaws are, and what effects they might have?  And
remember, you have a risk of any random subset of the machine being
changed at any time due to radiation damage.  (This may be overstating
it a little, but in any case I think fault analysis will be a
nightmare.  I'm not sure we can afford mistakes, if it turns out
there's a reasonable chance of one and we're relying on simulation
to catch every possibility in each design.)
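
The 1% figure is worth putting numbers on.  If a flaw shows up
independently in 1% of inspections, the chance it survives k
destructive tests is 0.99^k:

  # probability an intermittent flaw (manifesting in 1% of inspections)
  # survives k independent destructive tests
  for k in (10, 50, 100, 300):
      print(f"{k:3d} dissections: escape probability {0.99**k:.1%}")

  #  10 dissections: escape probability 90.4%
  #  50 dissections: escape probability 60.5%
  # 100 dissections: escape probability 36.6%
  # 300 dissections: escape probability 4.9%

So even a few hundred dissections per generation leave a few-percent
chance the flaw gets loose -- and that's for a flaw we already know
how to look for.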

>Still, anything that can be done with
>software ought to be able to be done with hardware-- they're the
>same thing, are they not?

This is one for the nano-engineers, but I'd suspect the answer is no.
Try pulling one carbon atom out of a thin sheet of diamond without
straining the sheet.  Disassembly may be harder than assembly, if
you want to analyze the structure.  (I'm talking about diamond
machines here, not biological soups.  I have no idea how easy it
would be to pick apart a protein molecule.)

>If we MUST do it all with only one generic variety of replicator, it
>might be possible to have a lot of them interlock arms and build
>such a holding and checking facility out of themselves.  Allow me
>to christen this a "breeding ball" (you heard it here first,
>folks). 

I like it!  Can we have some feedback from others on whether it'll work?

Now to clear up a couple of confusions that other people seem to have had:

Mike Higgins quotes Steve Harris...
>>   Chris Phoenix reminds us that a spontaneous error in the con-
>>struction of a self-replicating machine might conceivably:
>>1) Cause changes in construction of the next (third) generation . . .
>
>I keep reading all these postings of people afraid of nanomachine replicators
>getting out of hand, and I keep expecting one of the replies to have
>the solution suggested by Ralph Merkle (didn't I read it here?).  Since
>nobody else seems to remember it, I'll submit it:  Encrypt the genes of
>your nanomachine replicators.

But without the elision, what Steve wrote was:
>1) Cause changes in construction of the next (third) generation
>that cause the error to be exactly perpetuated "extragenetically"
>(i.e., without a software change).  
        ^^^^^^^^^^^^^^^^^^^^^^^^^
Which is a very good summary of one of my points.  I think we agree,
Mike, that we don't have to worry about software "mutations"
in a well-designed nanomachine.  Thanks for supplying a reference for 
the encoding... I knew it could be done, but didn't know where to 
find it.
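
For anyone who hasn't seen the idea, here is its flavor in a few
lines of Python -- a toy using a keyed checksum, emphatically NOT
Merkle's actual scheme:

  import hmac, hashlib

  KEY = b"factory secret"   # invented; real key management is another story

  def sealed(genome):
      return hmac.new(KEY, genome, hashlib.sha256).digest() + genome

  def replicate(package):
      tag, genome = package[:32], package[32:]
      # refuse to copy a genome whose tag doesn't verify; a single
      # flipped bit turns the whole package into unusable garbage
      if not hmac.compare_digest(
              tag, hmac.new(KEY, genome, hashlib.sha256).digest()):
          raise ValueError("genome failed verification -- do not replicate")
      return sealed(genome)

  pkg = sealed(b"...assembly instructions...")
  replicate(pkg)                            # fine
  damaged = bytes([pkg[0] ^ 1]) + pkg[1:]   # flip one bit
  replicate(damaged)                        # raises ValueError

The point is that a random change to the "genes" yields a machine
that halts rather than a viable mutant.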
   But that was only half my point.  The other half is that there might
be NON-SOFTWARE errors that can also cause dangerous and self-perpetuating
changes to a nanomachine designed for replication.  I haven't seen many
suggestions for dealing with them, and I suspect there won't be many
until we actually get nanomachines built... at which point, in a rush
of victory, we may start using them anyway.  As far as I can see, any 
given solution for dealing with a mechanical problem will probably be 
impossible to integrate into some nanomachines we will want to build.  
Ideas, anyone? ...

Howard Landman writes:

>In article <Mar.13.19.09.22.1991.10983@athos.rutgers.edu> cphoenix@csli.stanford.edu (Chris Phoenix) writes:
>>Picture the following nanomachine, designed to prevent mutation:
>
>I don't think anyone can have any serious argument with the notion
>that it is physically possible to design machines which can build
>useful items but have no chance of reproducing themselves.  Your
>average high-school wood shop, if you imagine it being run by a
>computer, is as good an example as any.  QED.

Oops, I just assumed we were all talking about the same problem here.
I'm talking about nanomachines that are *designed* to replicate 
themselves.  This is a very useful skill, because it provides for
exponential growth of your manufacturing base.  The question is 
whether it's possible to design a nanomachine to *safely* replicate
itself.  (If you haven't read Engines of Creation, I recommend it...
it gives some idea of what might be accomplished--and how big a role
self-replicating assemblers might play.)
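
The growth claim is just arithmetic.  Assuming -- my assumption, for
illustration -- that the population doubles once per generation:

  import math

  target = 6.022e23                     # say, a mole of assemblers
  print(math.ceil(math.log2(target)))   # 79 doublings from one machine

At an hour per generation (again invented), 79 doublings is a bit
over three days, which is why getting the error correction right
matters so much.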

>Some of the
>design problems will be so hard that we will use evolution (the 
>physical equivalent of "genetic algorithms") to solve them.  If
>you don't believe this, consider that there are already people
>developing artificial antibodies this way.  

But will we use evolution of physical nanomachines, or of simulations
of nanomachines?  The latter technique might be workable; the former is
surely not worth the trouble.  Antibodies are made that way because 
we're working from a biological base, constructing tools to deal with
very constrained biological problems.  I don't believe this is typical
of the settings, problems, or solution methods that most nanomachine
applications will involve.
Of course, I could be wrong... I've read a little nano-fiction that 
implies some people think nanotech will be much more biology-based,
at least at first.  I'm going mainly from a techie _Engines of Creation_
viewpoint.  Comments, anyone?
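
To be concrete about what evolving simulations might look like,
below is a bare-bones genetic algorithm on a toy bit-string fitness.
Everything in it is a placeholder; a real effort would swap a
nanomachine simulator in for fitness():

  import random

  TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # stand-in for a desired property

  def fitness(genome):
      # toy objective; a real run would score a simulated design
      return sum(g == t for g, t in zip(genome, TARGET))

  def mutate(genome, rate=0.05):
      return [g ^ 1 if random.random() < rate else g for g in genome]

  pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
  for generation in range(100):
      pop.sort(key=fitness, reverse=True)
      survivors = pop[:10]                      # selection
      pop = [mutate(random.choice(survivors))   # variation, no crossover
             for _ in range(50)]

  best = max(pop, key=fitness)
  print(best, fitness(best))

Whether a simulator faithful enough to trust is also fast enough to
evolve against is, I suspect, the real question.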

>Note that it is wholly inadequate to counterargue that we "can" do
>this or that to prevent problems.  You must argue that people "will"
>do this or that, even under pressures of schedule, budget, politics,
>war, etc.  This is a much harder argument and I haven't seen anyone
>attempt it yet.

Here, we agree... There exist people who, if they think it will
help, will do almost anything.  I'm not fully convinced that EoC's
prescription for dealing with them (mainly hypertext and total
disclosure, as I understand it) is adequate.  But that's a whole 'nother
topic...