[sci.nanotech] Barriers and Bottlenecks

doom@PORTIA.STANFORD.EDU (Joseph Brenner) (03/31/89)

Most of us here have largely been convinced by Drexler's arguments.
I'd like to pose a question: What if we're wrong?  

Drexler typically argues:
(1) If life can do something, then it is physically possible
    to do the same thing artificially. 
(2) Human capability is quickly expanding to the limits of the 
    physically possible. 

I'm tentatively suggesting that point (2) may be wrong.  It's at least
conceivable that there could be a barrier we can't break, a human limit
that humans can't overcome.  For example, it *could* be that we're not
smart enough to write programs as smart as we are.  NOTE: I HAVE NO 
INTEREST IN DEFENDING THIS EXAMPLE.  I don't think this is a great
place to debate the feasibility of artificial intelligence. 

There is another way in which (2) could be wrong: there could be
one or more bottlenecks in the development of nanotech that will
slow things down radically (diamond is thermodynamically unstable
relative to graphite, but the transformation is so slow that no one
cares).  The future may not be described by a smoothly rising curve:
instead, imagine a series of steep curves "punctuated" by plateaus.

I propose that it may be worthwhile to try to identify possible
hard barriers or slow bottlenecks to various nanotech applications.

For example, consider genetic proofreading.  Drexler considers it a
fairly simple process: each assembler wanders around until it finds
a DNA strand, moves back and forth along the strand until it has the
entire sequence in memory, compares that sequence to the expected
sequence, and kills the cell if they differ (due to mutation or
infection).  There's an important caveat: the assembler must be able
to distinguish the host's damaged cells from the cells of another
human being (or of any other plant or animal); otherwise, if it
escaped the host and wound up in someone else's body, it would act as
a deadly virus.  Exactly how tough is this pattern-recognition job?
I don't think we know enough biology to say for sure.  Note that this
is a very simple application compared to a full-featured cell repair
machine.
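
To make the pattern-recognition question concrete, here is a rough
sketch of the proofreading loop in Python.  It is only my own
illustration of the process described above; the function names
(find_dna_strand, read_sequence, is_host_cell, proofread) are
hypothetical placeholders, and the hard step - deciding whether a
strand belongs to the (possibly damaged) host rather than to someone
or something else - is exactly the part that is faked here.

  # Hypothetical sketch of Drexler-style genetic proofreading.
  # Nothing here corresponds to a real design; the point is that
  # self/non-self discrimination is a single, unsolved test.

  EXPECTED_GENOME = "ACGTACGTACGT"   # toy stand-in for the reference copy

  def find_dna_strand(cell):
      # Wander through the cell until a DNA strand is found (stub).
      return cell["dna"]

  def read_sequence(strand):
      # Move back and forth along the strand, reading it into memory.
      return strand

  def is_host_cell(sequence):
      # The hard pattern-recognition step: is this a (possibly damaged)
      # cell of the host, or a cell of another person, plant, or animal?
      # Faked here; how expensive the real test is, nobody knows.
      return True

  def proofread(cell, expected=EXPECTED_GENOME):
      sequence = read_sequence(find_dna_strand(cell))
      if not is_host_cell(sequence):
          return "leave alone"      # never touch foreign cells
      if sequence == expected:
          return "cell ok"
      return "kill cell"            # mutation or infection detected

  print(proofread({"dna": EXPECTED_GENOME}))     # -> cell ok
  print(proofread({"dna": "ACGTTCGTACGT"}))      # -> kill cell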

If there are problems in the development of atomic-scale manipulation,
nanocomputers, self-replication, or a number of other areas, it would
be devastating for the prospects of Drexler-style nanotech.  If you
assume such problems do exist, what would they be like?

-- Joe Brenner 

(J.JBRENNER@MACBETH.STANFORD.EDU  Materials Science Dept/Stanford, CA 94306)

craig@GPU.UTCS.UTORONTO.CA (Craig Hubley) (04/22/89)

In article <8904180627.AA19496@athos.rutgers.edu> doom@PORTIA.STANFORD.EDU (Joseph Brenner) writes:
>that humans can't overcome.  For example, it *could* be that we're not
>smart enough to write programs as smart as we are.  NOTE: I HAVE NO 
>INTEREST IN DEFENDING THIS EXAMPLE.  I don't think this is a great
>place to debate the feasibility of artificial intelligence. 

One important thing to point out is that no *one* human is smart enough...
it has taken the collective effort of millions in order to get to the
present... no one person could have leaped over those thousands of years
to present a working brain program, although writing was invented in 
5000 BC or so, and no new technology is required to write the program...
all development in between has been conceptual bootstrapping, providing
theories, mirrors and hardware against which to test all such ideas.
Nor can one mammalian cell reproduce itself - that cell is part of a larger
mechanism that can, however, reproduce.  So far as I can see, this negates
any idea that humans can't construct something smarter than a single human
brain - it's simply not a Goedelian affair, since a single human brain is
not what's constructing the 'smarter' thing.  The same applies to any argument
that collectively-developed capabilities are in some way limited by the
capabilities of their smallest distinguishable part... as intelligence is
emergent from neuron activity, so is AI emergent from research activity...
maybe.  But there's no barrier here... it seems to me that there is simply
no way to judge complexity until we run right into it.

>There is another way in which (2) could be wrong: there could be
>one or more bottlenecks in the development of nanotech that will
>slow things down radically (diamond is thermodynamically unstable
>relative to graphite, but the transformation is so slow that no one
>cares).  The future may not be described by a smoothly rising curve:
>instead, imagine a series of steep curves "punctuated" by plateaus.

The curve has always been an approximation... in fact 'punctuated
equilibrium', which is what physical anthropologists call what you
describe, is the dominant form of evolutionary theory... this week.

>For example, consider genetic proofreading.  Drexler considers it a
>fairly simple process: each assembler wanders around until it finds
>a DNA strand, moves back and forth along the strand until it has the
>entire sequence in memory, compares that sequence to the expected
>sequence, and kills the cell if they differ (due to mutation or
>infection).  There's an important caveat: the assembler must be able
>to distinguish the host's damaged cells from the cells of another
>human being (or of any other plant or animal); otherwise, if it
>escaped the host and wound up in someone else's body, it would act as
>a deadly virus.  Exactly how tough is this pattern-recognition job?
>I don't think we know enough biology to say for sure.  Note that this
>is a very simple application compared to a full-featured cell repair
>machine.
>
>If there are problems in the development of atomic-scale manipulation,
>nanocomputers, self-replication, or a number of other areas, it would
>be devastating for the prospects of Drexler-style nanotech.  If you
>assume such problems do exist, what would they be like?

Programming the little beasties is indeed a difficult problem...
thankfully the biological mechanism is not based on long linear
programs, but on short segments that generate multiple mechanisms
intended to work in parallel - work begins only when all the parts are
assembled (a toy software sketch of this follows the list below).
Short instruction sequences, the requirement that every part be
present before work begins, and the (usually) complete failure of the
process should one part be absent or defective all serve to:
	1. cut losses incurred by errors (only one part need be regenerated)
	2. reduce errors through short command sequences
	3. reduce the chance of not-obviously-defective parts by keeping
		them simple, so that failures are easier to detect
	4. promote efficiency through parallelism
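
To make the all-parts-present idea concrete, here is a toy sketch in
Python.  It is only my own illustration, not anything from Drexler or
from real molecular biology, and every name in it (Part, build_part,
assemble) is a made-up placeholder: each "part" is built from a short
instruction sequence and checked on its own, and assembly refuses to
start unless every required part is present and passes its check.

  # Toy model of "short segments, checked parts, all-or-nothing assembly".
  # All names here are hypothetical; nothing corresponds to a real design.
  from typing import Dict, List, Optional

  class Part:
      def __init__(self, name: str, ok: bool):
          self.name = name
          self.ok = ok            # did this part pass its simple self-check?

  def build_part(name: str, instructions: str) -> Part:
      # Build one part from a short instruction sequence.  Short sequences
      # are easy to check, so defects are caught at the part level.
      ok = len(instructions) < 40 and "ERROR" not in instructions
      return Part(name, ok)

  def assemble(parts: Dict[str, Part], required: List[str]) -> Optional[str]:
      # Work begins only if every required part is present and passes its
      # check.  One missing or defective part aborts the whole job, so only
      # that one part needs to be rebuilt - losses stay small.
      for name in required:
          part = parts.get(name)
          if part is None or not part.ok:
              return None         # complete failure, cheap to retry
      return "machine assembled from: " + ", ".join(required)

  required = ["sensor", "motor", "controller"]
  parts = {n: build_part(n, "short program for " + n) for n in required}
  print(assemble(parts, required))  # all parts present and checked -> assembled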

Note that nature discovered ICs and object-oriented programming long
before we did!  Anyone who intends to write Fortran in DNA pairs had
better be locked up now, before they destroy the world.  0.1 :-)

Craig Hubley

-- 
	Craig Hubley			-------------------------------------
	craig@gpu.utcs.toronto.edu	"Lead, follow, or get out of the way"
	mnetor!utgpu!craig@uunet.UU.NET -------------------------------------
	{allegra,bnr-vpa,cbosgd,decvax,ihnp4,mnetor,utzoo,utcsri}!utgpu!craig