71450.1773@compuserve.com (Steven B. Harris) (03/16/91)
Chris Phoenix reminds us that a spontaneous error in the construction of a self-replicating machine might conceivably:

1) Cause changes in construction of the next (third) generation that cause the error to be exactly perpetuated "extragenetically" (i.e., without a software change).
2) Impair the ability of the mutant machine to sense the error in next-(third-)generation machines.
3) Along the way, screw up the function of the machine so as to make it dangerous.

Mutations that meet all three criteria are the nasty ones. How does the *initial* mutant machine escape the censorship of the previous generation of purebreds, we wonder? Perhaps the mutation is of such a subtle kind that it can only be found by destructive testing and comparison, and then only when compared with and tested by a "clean" unmutated line of machines.

If so, I think the problem can still be dealt with. It seems to me that, if necessary, each new generation of machines can be held and scrutinized by and against _several_ previous generations, and required to pass comparison tests against all of them (selected machines can be taken off the line for random destructive comparison, as is done on modern assembly lines). If a generation passes comparison tests in this fashion, this does _not_ rule out a few first-generation mutations in machines that didn't happen to get taken apart, but it *does* rule out mutations in the parent machines of the form that meets the first criterion, or ALL the daughter machines would be affected. So if a generation passes the comparison test, you can release the parents (or, if very conservative, the grandparents), and allow the daughters to proceed with the next generation, which you then iteratively test. If, on the other hand, you find consistent mutations in a given generation, you destroy both it AND the parents, in much the same way that you might destroy a whole line of animals in stockbreeding when an unwanted genetic trait appears.
Even mutations that do nothing but increase the mutation rate can be ferreted out (to an arbitrary degree) and destroyed by this method. All this, of course, is analogous to routines that check one program off against another, save that things are a bit more circuitous, since finished machines presumably can't be compared except by inference from a sample of their dissected progeny. Still, anything that can be done with software ought to be doable with hardware -- they're the same thing, are they not?

The lesson, I suppose, is that you can't necessarily catch all mutations if you have immediate-release free-floating replicators, like E. coli. Thus, we may need tiny nanomachine "factories," or at least storage facilities, so that we can do our comparisons and go back and nip mutant lines in the bud before they go out into the world (one thinks of the thymus, with its tiered generations of cells in different phases of maturation). If we MUST do it all with only one generic variety of replicator, it might be possible to have a lot of them interlock arms and build such a holding and checking facility out of themselves. Allow me to christen this a "breeding ball" (you heard it here first, folks). The term is suggestive of snake biology, but my visual metaphor for this is actually the ball of thousands of living workers that forms the nest of an army ant colony.

Steve Harris
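The generational screening scheme Harris describes can be sketched as a toy simulation. Everything here is illustrative -- the blueprint string, the per-bit error rate, and the sample size are invented for the example; a real check would compare physical structure, not byte strings:

```python
import random

REFERENCE = b"NANOMACHINE-BLUEPRINT-v1"  # hypothetical "clean" genome

def replicate(blueprint, error_rate, rng):
    """Copy a blueprint, flipping each bit independently with
    probability error_rate (a crude model of construction errors)."""
    out = bytearray(blueprint)
    for i in range(len(out)):
        for bit in range(8):
            if rng.random() < error_rate:
                out[i] ^= 1 << bit
    return bytes(out)

def screen_generation(daughters, ancestor_blueprints, sample_size, rng):
    """Pull a random sample of daughters off the line for 'destructive'
    comparison against several held previous generations.  A consistent
    mutation in the parents would show in ALL daughters, so a failed
    screen condemns the daughters AND their parents."""
    sample = rng.sample(daughters, min(sample_size, len(daughters)))
    return all(d == a for d in sample for a in ancestor_blueprints)

rng = random.Random(0)
history = [REFERENCE]                                  # held ancestor line
clean = [replicate(REFERENCE, 0.0, rng) for _ in range(8)]
assert screen_generation(clean, history, 4, rng)       # purebreds pass

mutants = [replicate(REFERENCE, 0.05, rng) for _ in range(8)]
assert not screen_generation(mutants, history, 4, rng) # line is destroyed
```

Note the limitation Harris concedes: an unsampled daughter can still carry a fresh mutation; the screen only guarantees the *parents* were clean.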
mike@everexn.com (Mike Higgins) (03/25/91)
In <Mar.15.22.31.35.1991.16915@athos.rutgers.edu> 71450.1773@compuserve.com (Steven B. Harris) writes:

> Chris Phoenix reminds us that a spontaneous error in the con-
>struction of a self-replicating machine might conceivably:
>1) Cause changes in construction of the next (third) generation . . .
>2) Impair the ability of the mutant machine to sense the error in . . .
>3) Along the way, screw up the function of the machine so as to
>make it dangerous.

... lots of stuff deleted ...

I keep reading all these postings from people afraid of nanomachine replicators getting out of hand, and I keep expecting one of the replies to offer the solution suggested by Ralph Merkle (didn't I read it here?). Since nobody else seems to remember it, I'll submit it: encrypt the genes of your nanomachine replicators.

There are data encoding schemes that are "self-healing," meaning that if a single bit error occurs in the encoded data you can still decode most of the data (Huffman coding works this way). But lots of other encoding schemes become TOTALLY UNREADABLE GARBAGE if you make a single bit error in transmission. So this is the solution: encode the genes of your nanomachine when you design it. The replicating machines are always making copies of their encoded genes, so if a SINGLE BIT error is made in duplication, the offspring is TOTALLY NON-VIABLE. Build your nanomachines so they don't have the equipment to encode data. The gene has to have an unencoded decoder built into it, or the decoding code could be in symbiotic machines duplicated at replication time, like mitochondria in our cells.

If you think there is a chance that the TOTAL GARBAGE resulting from this type of error in decoding can result in meaningful data, I invite you to search for the text of Hamlet in a dump of all the binary files on my PC. (Or, if you prefer biological systems, there's this troupe of monkeys typing on typewriters... ;-)

Mike Higgins
mike@everexn.com
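Higgins' scheme can be sketched in a few lines. Merkle's suggestion is true encryption, where one flipped ciphertext bit garbles the whole decoded genome; the stand-in below uses a SHA-256 digest instead, so the decoder simply refuses any copy whose digest doesn't match -- a deliberate simplification, but for the purpose of making every single-bit mutant non-viable the effect is the same. Names and the genome string are invented for the example:

```python
import hashlib

def encode_genome(genome: bytes) -> bytes:
    """Append a SHA-256 digest so the decoder can tell a perfect copy
    from a mutated one.  The replicator copies this tape blindly and
    carries no equipment to re-encode (i.e., to recompute the digest)."""
    return genome + hashlib.sha256(genome).digest()

def decode_genome(encoded: bytes):
    """Return the genome if intact, else None: a non-viable offspring."""
    genome, digest = encoded[:-32], encoded[-32:]
    if hashlib.sha256(genome).digest() != digest:
        return None
    return genome

tape = encode_genome(b"NANOMACHINE-GENES")
assert decode_genome(tape) == b"NANOMACHINE-GENES"

mutant = bytearray(tape)
mutant[3] ^= 0x01                            # a single-bit copying error
assert decode_genome(bytes(mutant)) is None  # offspring is non-viable
```

The key design point survives the simplification: the decoder is fixed at design time, and no descendant carries the means to produce a fresh valid encoding of a mutated genome.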
toms@fcs260c2.ncifcrf.gov (Tom Schneider) (03/27/91)
>In article <Mar.13.19.09.22.1991.10983@athos.rutgers.edu>
> cphoenix@csli.stanford.edu (Chris Phoenix) writes:
>Picture the following nanomachine, designed to prevent mutation:

Natural radiation, radioactive isotope decay, and thermal noise will always cause you problems at some rate. Checking against multiple copies is not the best solution; the best one is the encoding method:

>I keep reading all these postings of people afraid of nanomachine replicators
>getting out of hand, and I keep expecting one of the replies to have
>the solution suggested by Ralph Merkle (didn't I read it here?). Since
>nobody else seems to remember it, I'll submit it: Encrypt the genes of
>your nanomachine replicators.
> Mike Higgins
> mike@everexn.com

Encrypting (encoding) the entire genome is an elegant solution. The originator should write a paper on it and publish it. Shannon's theorem shows that one can make the error rate as low as you desire. Read:

@article{Shannon1949,
  author  = "C. E. Shannon",
  title   = "Communication in the Presence of Noise",
  year    = "1949",
  journal = "Proc. IRE",
  volume  = "37",
  pages   = "10-21"}

and the papers I mentioned before.

Tom Schneider
National Cancer Institute
Laboratory of Mathematical Biology
Frederick, Maryland 21702-1201
toms@ncifcrf.gov
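Shannon's noisy-channel theorem is usually realized in practice with error-correcting codes, which drive the residual error rate down by adding redundancy. A tiny classical illustration (not from the posts above) is the Hamming(7,4) code, which protects 4 data bits with 3 parity bits and corrects any single-bit error in the 7-bit codeword:

```python
def hamming74_encode(nibble):
    """Encode 4 data bits into a 7-bit Hamming codeword.
    Parity bits sit at positions 1, 2, 4 (list indices 0, 1, 3)."""
    d = [(nibble >> i) & 1 for i in range(4)]  # d0..d3
    p1 = d[0] ^ d[1] ^ d[3]   # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]   # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]   # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(bits):
    """Correct up to one flipped bit, then return the 4 data bits.
    The syndrome, read as a binary number, names the flipped position."""
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:                      # nonzero -> position of the error
        bits = list(bits)
        bits[syndrome - 1] ^= 1
    d = [bits[2], bits[4], bits[5], bits[6]]
    return sum(b << i for i, b in enumerate(d))

# Every single-bit error in every codeword is corrected:
for n in range(16):
    cw = hamming74_encode(n)
    assert hamming74_decode(cw) == n
    for j in range(7):
        damaged = list(cw)
        damaged[j] ^= 1
        assert hamming74_decode(damaged) == n
```

Note the contrast with Higgins' proposal: Shannon-style coding *repairs* errors so the machine keeps working, while the "garbage on any error" encoding *kills* the mutant. Which property you want depends on whether you fear dead offspring or live mutants more.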