[sci.nanotech] Ye olde matter duplicator

vorth%sybil@GATECH.EDU (Scott Vorthmann, ttyXX) (06/14/89)

"Ron_Fischer.mvenvos"@XEROX.COM writes:

> Indeed, one of the interesting things about assemblers is that they make
> the value of unique physical objects as ephemeral as information.
> Currently the difference between information and physical objects is that
> when an object is sold it's gone.  When information is sold you still "have"
> it.  With nanotech this no longer holds true for objects, since you can
> easily keep the information needed to reproduce a perfect copy.

I disagree, in a way.  You can keep a perfect copy (easily?), but you cannot
easily keep the information needed to produce it.  Many (or most) objects
_are their own most compact representation_.  Take a statue by Michelangelo,
for example.  The atoms in the marble are in no particular pattern; how could
you hope to represent their relative positions more succinctly than they
themselves represent this information?  Even with a molecular representation,
possibly compressed much as video images are compressed today, the "program"
to produce the statue should be at least 10 times as massive as the statue
itself!

There are several issues to consider here.  First, what do we mean by
a "perfect copy"?  If we mean "faithful to the original, down to the atoms",
we will have the problem outlined above.  Even if we say "exactly the same
within 5cm of the surface", we still have an unworkable amount of information.
Naturally there are other compromises we can make.  Many substances have some
microcrystalline structure:  we can exploit this regularity using the
equivalent of "FOR" loops ("DO" loops for FORTRANers) in our "program".
However, the boundaries of the microcrystals will typically still be
irregular.  In general, it is an extremely difficult problem to 'derive'
a succinct "program" from an artifact.
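
A toy sketch of the contrast, in Python (the region representations and
names are hypothetical, just to make the point concrete):

    # A loop-compressed spec works only where the lattice is regular;
    # irregular microcrystal boundaries fall back to an explicit
    # atom-by-atom list, and that is where the compression wins evaporate.

    def regular_region(element, nx, ny, nz):
        """FOR-loop style spec: a constant-size description of nx*ny*nz atoms."""
        return ("repeat", element, (nx, ny, nz))

    def boundary_region(atoms):
        """Explicit spec: one entry per atom -- no compression to be had."""
        return ("explicit", list(atoms))   # atoms: [(element, (x, y, z)), ...]

    # A grain interior compresses to a few words...
    grain = regular_region("C", 1000, 1000, 1000)   # 10**9 atoms described

    # ...but its irregular rim does not.
    rim = boundary_region([("C", (0, 0, 0)), ("Fe", (1, 0, 0))])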

Assuming the last statement is correct, it is clear we won't be 'digitizing'
objects, but rather making copies.  Here the problem is that we will usually
have to dis-assemble and re-assemble the object to make a "perfect" copy.
Even if the atoms are removed, recognized, duplicated, and replaced in
"bites" of a hundred, is the Louvre going to allow this?  We'd better have
some pretty strong guarantees about the robustness of the process.  I have
seen some mention of duplicating human beings in this fashion... hmmmmm, I
think I'll clone myself the good old-fashioned way, and cryosleep for 20 years.

The issue of making perfect copies of objects is only a problem of this
magnitude for objects for which we don't have a program; that is, objects not
initially constructed by assemblers.  Presumably, our wonderful
nano-engineers can write VERY concise specifications of the molecular
structure of artifacts containing gazillions of atoms.  If this is the case,
then Mr. Fischer's observations hold, but ONLY for artifacts for which programs
exist.

But how small can we expect an assembler program to be?  Remember, we can
expect to be building artifacts ranging in size over some 12 orders of
magnitude (nanometer to kilometer).  Naturally, these programs
will have to be hierarchical in nature, to take advantage of large regions
of homogeneity (at various scales).  For instance, Drexler's rocket engine
(EoC, 2nd chapter or so) may be made of 'active' materials, containing
fibers that are really nanomachines.  The program might say "make a region,
in this shape, of Von Hoozitz fibers (parameters W, L, 3.459e8, 39'12''),
in such-and-such a pattern".  The "Von Hoozitz fiber" subroutine would call
other, lower-level subroutines.  Good old block-structured design of
software, where "lower-level" takes on a new strength of meaning.
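
A minimal sketch of what that block structure might look like, written as
Python functions (the part names and parameters below are stand-ins, not
anything from EoC):

    # Each level only names the level below it, so the top-level
    # "program" for the engine stays short.

    def drive_unit():
        return {"part": "drive-unit"}      # eventually bottoms out in molecular detail

    def sheath(length):
        return {"part": "sheath", "length": length}

    def von_hoozitz_fiber(width, length):
        """Low-level subroutine: one fiber, itself built of nanomachine parts."""
        return {"part": "fiber", "width": width, "length": length,
                "children": [drive_unit(), sheath(length)]}

    def engine_wall_region(shape, fiber_count):
        """Mid-level subroutine: 'make a region, in this shape, of
        Von Hoozitz fibers' -- the count is stored, never expanded."""
        return {"part": "wall", "shape": shape, "count": fiber_count,
                "prototype": von_hoozitz_fiber(1.0, 5.0)}

    engine = {"part": "engine",
              "children": [engine_wall_region("bell", 10**12)]}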

...Except that we may want all interfaces between different sub-assemblies to
be specified at the molecular scale, to guarantee structural integrity.
If we really want to specify the position and bonding of every atom in the
artifact, we will have to pay the piper at some point.  I would be
unsurprised if the rocket engine program, in its most compact representation,
weighed about a pound.  Of course, this may still be workable, if
"subroutines" are actually armies of sub-assemblers, each with its own
short "tape loop".  A centralized coordinator, containing all the code,
with communications channels to vast hordes of assemblers, does not seem to
be a reasonable approach.

Enough rambling.  I hope you see my point.  Furthermore, I hope there's at
least some measure of sense in what I've said.  If not, I apologize.

	Scott Vorthmann
	Ph.D. candidate, School of ICS, Ga. Tech
	vorth@gatech.edu

"Ron_Fischer.mvenvos"@xerox.com (06/16/89)

>Even with a molecular representation,
>possibly compressed much as video images are compressed today, the "program"
>to produce the statue should be at least 10 times as massive as the statue
>itself!

A one order of magnitude increase in mass for an encoding would not be
troublesome unless one planned the encoding of something "large."  A planet
perhaps?  The possible advantage of the encoding over the actual object is
its stability w.r.t. the original.

Could you send your assumptions and derivation for the order of magnitude
mass increase?

>Presumably, our wonderful
>nano-engineers can write VERY concise specifications...

This was the case I was most concerned with: new engineered objects.  As
usual, the software engineer has little concern for backward compatibility
;-)

>Except that we may want all interfaces between different sub-assemblies to
>be specified at the molecular scale...

Using your previous statement regarding hierarchical design, I don't think
this is an issue, since assemblers operating at the interfaces could use
their knowledge of proper construction techniques to do this without an
explicit encoding.

I agree that some objects will tend to be valuable because of their
history, and that at some level of valuation this may not allow encoding.
This holds only when encoding is invasive enough to cause perceived risk to
the value of the object.  In the case of our bodies the argument will (no
doubt) continue indefinitely.

(ron)

vorth%sybil@gatech.edu (Scott Vorthmann) (06/21/89)

"Ron_Fischer.mvenvos"@Xerox.COM writes:
>Could you send your assumptions and derivation for the order of magnitude
>mass increase?

I was making a rough guess, but let me see if I can make an argument...
Assume, for simplicity, that all atoms lie on lattice-points of a 3D lattice
with 1nm spacing.  Assume also that we are duplicating an object whose
composition is a random mixture of atoms of only 10 elements.  Both of these
assumptions are very optimistic, so we should be able to get some sort of
lower bound.  (The "random" assumption is actually quite pessimistic, so
the lower bound will be applicable only to such random compositions.)

Now our program could use a "global" specification ("put an atom of element
X at position (x,y,z)"), or a "local" one ("the next atom is element X").
Since the x, y, and z values in the former will be VERY large integers,
requiring many bits to encode, I think the latter will be more compact.
Also, it's easier to see how a local encoding might be "executed", using
something like nested for-loops building the object in "row-major order".
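
To put rough numbers on the comparison (a back-of-the-envelope sketch,
assuming a meter-scale object and a handful of control symbols; the exact
counts aren't critical):

    import math

    lattice_points_per_axis = 10**9    # a meter-scale object on the 1nm lattice
    elements = 10
    control_symbols = 6                # "new row", "new face", etc.

    # Global spec: a full (x, y, z) plus the element, for every atom.
    bits_global = 3 * math.ceil(math.log2(lattice_points_per_axis)) \
                  + math.ceil(math.log2(elements))

    # Local spec: just the next symbol out of (elements + controls).
    bits_local = math.ceil(math.log2(elements + control_symbols))

    print(bits_global, bits_local)     # 94 4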

So our encoding will have 10 different "data" values, with a few "control"
values (like "start new row/face"), in a serial encoding.  Each position
in the program "tape" will need to encode 4 bits of information.  Using a
brute-force encoding, where bits are represented by the presence of either
of two possible atoms as the links in the chain, we now have a 4-to-1
mass increase of program over object (assuming all atoms are of a single,
"average" mass).

This is a safe lower bound.  In actual fact, the program "tape" will likely
require at least 4 atoms to encode a single bit (carbon backbone, extra
hydrogens, etc.).  Note that bits can also be encoded structurally, via
stereoisomers, etc.... this may help put a ceiling on the number of atoms
required to encode larger symbol sets.
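
The arithmetic, spelled out (the atoms-per-bit figures are the rough
guesses from above, nothing more):

    bits_per_object_atom = 4     # 10 elements plus a few control symbols
    atoms_per_bit_brute = 1      # one of two possible link atoms per bit
    atoms_per_bit_likely = 4     # carbon backbone, hydrogens, etc.

    # Mass ratio of program "tape" to object, taking all atoms as equal mass:
    print(bits_per_object_atom * atoms_per_bit_brute)    # 4  -- the lower bound
    print(bits_per_object_atom * atoms_per_bit_likely)   # 16 -- more likely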
 

>>Except that we may want all interfaces between different sub-assemblies to
>>be specified at the molecular scale...

>Using your previous statement regarding hierarchical design, I don't think
>this is an issue, since assemblers operating at the interfaces could use
>their knowledge of proper construction techniques to do this without an
>explicit encoding.
 
That "knowledge" may prove vast.  However, we could reduce the size of the
problem by having a set of standard "termination" compositions.  The
interface assemblers would then need only know how to connect the various
types of termination regions at plane, line, or point interfaces.
The assemblers for the sub-assemblies would have specified "boundary"
subprograms, perhaps with coordination between levels of the hierarchy, to
terminate sub-assemblies in standard ways.
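
As a toy sketch of the idea (all the termination names and joining rules
below are invented for illustration):

    TERMINATIONS = {"T1": "diamondoid cap",
                    "T2": "graphitic edge",
                    "T3": "passivated surface"}

    JOIN_RULES = {("T1", "T1"): "covalent bridge",
                  ("T1", "T2"): "interlocking lap joint",
                  ("T2", "T3"): "anchor lattice"}

    def join_procedure(term_a, term_b):
        """An interface assembler need only know how to connect standard
        terminations, not the interiors of the sub-assemblies behind them."""
        key = tuple(sorted((term_a, term_b)))
        return JOIN_RULES.get(key, "no standard joint: coordinate with parent level")

    print(join_procedure("T2", "T1"))    # interlocking lap joint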


		Scott Vorthmann
		School of ICS, Georgia Tech
		vorth@gatech.edu


[This analysis seems correct as far as it goes.  However, there seems
 to be a way to "cheat" it at higher levels.  If we consider ordinary
 natural or bulk-technology objects, one can use adaptive grid, octree,
 run length encoding, hierarchical layering, and similar techniques to
 reduce the amount of information needed to a very tiny amount.  For
 microscopic living organisms or the products of nanotechnology, it
 may be necessary to specify on the level of "a place for every atom 
 and every atom in its place";  but it would be the rare macroscopic
 sized object that would need this detail.  (Macroscopic living 
 organisms *do* have a tremendously compact encoding for their structure...)
 --JoSH]
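
A toy illustration of the run-length idea, for a single lattice row of
mostly homogeneous material (the element choice is arbitrary):

    # A long homogeneous run collapses to a single (element, count) pair.
    def rle(symbols):
        out, prev, count = [], None, 0
        for s in symbols:
            if s == prev:
                count += 1
            else:
                if prev is not None:
                    out.append((prev, count))
                prev, count = s, 1
        if prev is not None:
            out.append((prev, count))
        return out

    row = ["Fe"] * 10**6 + ["C"] * 3 + ["Fe"] * 10**6
    print(rle(row))    # [('Fe', 1000000), ('C', 3), ('Fe', 1000000)]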