[comp.society.futures] Bioproduced nanocomputers

nickyt@agent99.UUCP (Nick Turner) (10/29/87)

Thanks, Mike Scholtes, for helping to revive the group.

I think biologically produced nanocomputers might be doable, but there will
likely be far more efficient ways to build them.  The kinds of nanocomputers
that could be produced in living cells are bound to be far more limited than
the ones we'll be able to make with general-purpose nanomachines specially
designed to handle all sorts of atoms and atomic clusters.

I refer all of you who are interested in nanotechnology to Eric Drexler's
masterpiece called "Engines of Creation."  It is >the< book to read on the
topic, and it just came out in paperback.  Sorry I don't have pub info on
hand right now.

I recall one image that might be worth some discussion.  In the book Drexler
describes a system where rocket engines (and virtually anything else you
could imagine) would be grown in vats.  One "seed" machine the size of a very
large virus is dropped into the nutrient mixture, and it in turn assembles
machines that then assemble other machines, and so on, until eventually this
wondrously detailed macroscopic structure begins to emerge.  The potential
kinds of things we could build with such a system are unlimited.

If you could design such a system, how would you make it work?  What sorts of
stuff would you build/grow?  What kinds of atoms and molecules would you use
in your structures?  How would the nutrients be circulated?  How would you deal
with the inevitable waste products?  How would you supply energy to the nano-
machinery?  These are important questions.  Any ideas out there?
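One way to get a feel for the vat-growth idea is the arithmetic of self-replication. Here is a minimal sketch, assuming (these numbers are invented for illustration, not taken from Drexler) a one-hour doubling time and a virus-sized seed of about 10^-19 kg:

```python
# Hypothetical sketch: exponential growth of self-replicating assemblers
# in a nutrient vat.  The doubling time and seed mass are assumptions
# made up for illustration, not figures from _Engines of Creation_.

def assemblers_after(hours, doubling_time_hours=1.0):
    """Population starting from one seed, doubling each cycle."""
    cycles = int(hours / doubling_time_hours)
    return 2 ** cycles

seed_mass_kg = 1e-19   # assumed mass of one virus-sized assembler

# How long until the assembler population masses one kilogram?
hours = 0
while assemblers_after(hours) * seed_mass_kg < 1.0:
    hours += 1
print(hours)  # 64 -- about 64 doublings gets from one seed to ~1.8 kg
```

The striking point is how flat the curve looks until the very end: at hour 54 the vat holds about two milligrams of machinery, and ten hours later, nearly two kilograms. That is why the macroscopic structure seems to "emerge" all at once.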

NickyT

oster@dewey.soe.berkeley.edu (David Phillip Oster) (10/30/87)

I've been spending a lot of time listening to Eric Drexler recently, and
he has been talking a lot about:

How do you measure bogosity in sciences that don't exist yet? How does a 
quiet truth get heard in an explosion of data?

Drexler himself says that the examples in his book are not of what we
WILL do, but what we COULD do. He chose them not to be optimal, but to
be comprehensible.

For example, he spends a lot of time in his book on miniature mechanical
computers, not because he necessarily believes we will build them, but
because the math is easier than for electronic computers at that size
scale.  Since the math is easier, it is easier for the scientific community
to check him.  (The bogosity issue again.)

He feels the answer to the bogosity question is: hypertext authoring tools.
Imagine a system as easy to use and as complete as the average university
library, but where any document can be annotated by anyone at any time.
(And any reader can choose to turn off any subset of annotations he does
not wish to see.)
In a system where everyone has his say, but most people spend most of their
reading time reading material that has been edited by editors they trust,
in a system where each document has fast links to every document it references
and each document that references it, flaky theories get their rebuttals
attached strongly and quickly.
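The core of the scheme is a simple data structure: bidirectional links plus reader-side annotation filters. A minimal sketch (all names are invented, this is not any real system's API):

```python
# A toy model of the hypertext scheme above: every document keeps links
# both to what it references and to what references it, and any reader
# can mute any subset of annotators.  All names here are hypothetical.

class Document:
    def __init__(self, doc_id, text):
        self.doc_id = doc_id
        self.text = text
        self.references = set()      # documents this one cites
        self.referenced_by = set()   # backlinks: documents citing this one
        self.annotations = []        # (author, note) pairs, open to anyone

def link(src, dst):
    """Record a citation in both directions, so rebuttals attach fast."""
    src.references.add(dst.doc_id)
    dst.referenced_by.add(src.doc_id)

def visible_annotations(doc, muted_authors):
    """A reader turns off annotations from sources he does not trust."""
    return [(a, n) for a, n in doc.annotations if a not in muted_authors]

flaky = Document("entropy", "All processes increase disorder, so...")
rebuttal = Document("rebuttal", "This misreads the second law...")
link(rebuttal, flaky)
flaky.annotations.append(("physicist", "See the attached rebuttal."))
flaky.annotations.append(("crank", "The author is right!"))

print(flaky.referenced_by)                    # {'rebuttal'}
print(visible_annotations(flaky, {"crank"}))  # only the physicist's note
```

The backlink set is what distinguishes this from an ordinary library catalog: a reader landing on the flaky document sees the rebuttal immediately, without the rebuttal's author needing anyone's permission.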

Drexler likes to talk about one of Jeremy Rifkin's books, "Entropy".
"Entropy"'s arguments hinge on a single, damning misunderstanding of
thermodynamics.  Yet the book was almost used as a textbook in a
sociology course at M.I.T., because the rebuttal wasn't as widely
publicised as the book itself.

Conclusion:
How can we evolve usenet news into such a system?
(There Nick, I've tied nanotechnology to Stuart II.)

--- David Phillip Oster            --A Sun 3/60 makes a poor Macintosh II.
Arpa: oster@dewey.soe.berkeley.edu --A Macintosh II makes a poor Sun 3/60.
Uucp: {uwvax,decvax,ihnp4}!ucbvax!oster%dewey.soe.berkeley.edu

leech@unc.cs.unc.edu (Jonathan Leech) (10/30/87)

In article <8710290154.AA09880@agent99.wedge.com> nickyt@agent99.UUCP (Nick Turner) writes:
>If you could design such a system, how would you make it work?  What sorts of
>stuff would you build/grow?  What kinds of atoms and molecules would you use
>in your structures?  How would the nutrients be circulated?  How would you deal
>with the inevitable waste products?  How would you supply energy to the nano-
>machinery?  These are important questions.  Any ideas out there?

    People who have read _Engines_of_Creation_ will remember that
Drexler made numerous predictions of fantastic things we could do with
nanotechnology (NT): life extension, interactive design systems orders
of magnitude more productive than today's, nanocomputers, etc. The
problem I have with this idea is that while NT is probably feasible,
it is simply an enabling technology, not a magic panacea.
For example, life extension requires that we understand deeply how
aging affects the body. NT will help researchers, but it won't
suddenly make them so much smarter that they understand in detail how
life 'works'. And the additional precautions needed to work with NT
will slow things down remarkably (ordinary viruses are bad enough;
imagine sophisticated AI systems gone wild in your body, a la Greg
Bear's _Blood_Music_!). I feel that Drexler's envisioned explosion of
NT suddenly remaking the world is simply not going to happen. An
analogy is the old 'nuclear power too cheap to meter' idea. Comments?
-- 
    Jon Leech (leech@cs.unc.edu)    __@/
"The idea of ``picking up where Apollo left off'' in lunar exploration
is a chimera. There is nothing to pick up; when we dropped it, it broke."
    John & Ruth Lewis in 'Space Resources: Breaking the Bonds of Earth'

fiddler%concertina@Sun.COM (Steve Hix) (11/03/87)

FYI, the mid-December (!) 1987 issue of Analog has an
article on nanotechnology by C. Peterson and K.E. Drexler.

Haven't read it yet...but lunchtime looms.

	seh

tower@BU-IT.BU.EDU.UUCP (11/06/87)

In article <15300@bu-cs.BU.EDU> you write:
 > From: unc!leech@mcnc.org  (Jonathan Leech)
 > 
 >     People who have read _Engines_of_Creation_ will remember that
 > Drexler made numerous predictions of fantastic things we could do with
 > nanotechnology (NT): life extension, interactive design systems
 > orders of magnitude more productive than today, nanocomputers, etc,
 > etc. The problem I have with this idea is that while NT is probably
 > feasible, it is simply an enabling technology, not a magic panacea.
 > For example, life extension requires that we understand deeply how
 > aging affects the body. NT will help researchers, but it won't
 > suddenly make them so much smarter that they understand in detail how
 > life 'works'.    ...     I feel that Drexler's envisioned explosion of
 > NT remaking the world suddenly is simply not going to happen. An
 > analogy is the old 'nuclear power too cheap to meter' idea. Comments?
 > 
 >     Jon Leech (leech@cs.unc.edu)

Drexler is predicting that both the nanotechnology and AI researchers
will succeed, and that it will be artificially intelligent nano-machine
researchers, not human ones, who will make many of the breakthroughs.
He makes interesting arguments that these nano-AIs will think about a
million times faster than humans, and be at least as intelligent.

BTW, EoC is out in paperback now.  It's a book I recommend highly to
both the curious and the concerned citizen types.

-len

glg@sfsup.UUCP (11/12/87)

In article <8711052113.AA01020@bu-it.bu.edu> tower@bu-cs.bu.edu writes:
>In article <15300@bu-cs.BU.EDU> you write:
> > From: unc!leech@mcnc.org  (Jonathan Leech)
> >     People who have read _Engines_of_Creation_ will remember that
> > Drexler made numerous predictions of fantastic things we could do with
> > nanotechnology (NT): life extension, interactive design systems
	. . .
> > life 'works'.    ...     I feel that Drexler's envisioned explosion of
> > NT remaking the world suddenly is simply not going to happen. An
> > analogy is the old 'nuclear power too cheap to meter' idea. Comments?

>BTW, EoC is out in paperback now.  It's a book I recommend highly to
>both the curious and the concerned citizen types.

I recently bought the paperback, and I agree with your recommendation;
in fact I would place it in the "must read" category.

His arguments are very well thought out.  I agree to some extent with
Jonathan; I remain skeptical, but then so is Drexler.  He points out
(in the introduction I think) that it is impossible to predict the
direction of scientific discovery, because if we could, we would
already have the answer, and there are some aspects of his predictions
that rest on future scientific discoveries.

Often when reading EoC, I would feel that some point he makes is
pretty far out and hard to accept, but I would also have to admit
that it was plausible.  He often tempers these points by bringing
the discussion back to the more solid ones, pointing out that even
these carry many of the radical implications he is discussing.
The most significant points rest on well-understood science.

I have not finished the book yet, but I can see no gaping holes in
his reasoning.  Does anyone else who has read it have any major
objections?

Gerry Gleason

robertj@yale-zoo-suned..arpa (Rob Jellinghaus) (11/16/87)

In article <21526@ucbvax.BERKELEY.EDU> oster@dewey.soe.berkeley.edu.UUCP (David Phillip Oster) writes:
>Imagine a system as easy to use and as complete as the average university
>library, but where any document can be annotated by anyone at any time.
>(And any reader can choose to turn off any subset of annotations he does
>not wish to see.)
>In a system where everyone has his say, but most people spend most of their
>reading time reading material that has been edited by editors they trust,
>in a system where each document has fast links to every document it references
>and each document that references it, flaky theories get their rebuttals
>attached strongly and quickly.
...
>Conclusion:
>How can we evolve usenet news into such a system?
>(There Nick, I've tied nanotechnology to Stuart II.)
>
>--- David Phillip Oster            --A Sun 3/60 makes a poor Macintosh II.
>Arpa: oster@dewey.soe.berkeley.edu --A Macintosh II makes a poor Sun 3/60.
>Uucp: {uwvax,decvax,ihnp4}!ucbvax!oster%dewey.soe.berkeley.edu

In Stewart Brand's book about MIT's Media Lab, there is a passage
in which the director of the Lab is telling an audience that AT&T
currently has the capability to put fiber-optic cabling into every
home in America.  "What would be the bandwidth of such a system?"
he is asked.  "500 gigabits per second," he calmly replies.

This is the only way that a hypertext authoring system such as the
one above could become real.  (The supercomputers to hold all the
works themselves don't exist yet, but that's not my point.)  Nearly
universal access to this hyperlibrary would be essential to a full
implementation of this hyper-authoring scheme.

My response to David's question above (in case you were wondering
where this is all leading) is that Usenet in its current form is
simply not suited to the kind of real-time interaction that this
authoring system would require.  If every household in America
got Usenet, that would be a start.  If Usenet was a real-time mode
of communication, rather than a delayed-propagation article relay
network (is that just a bunch of buzzwords?), that would help,
too.  But until we have the kind of omnipresent massive bandwidth
that the Media Lab's director mentions above, such a system will
remain largely imaginary.
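It is worth checking what that 500 gigabits/second figure actually buys. A back-of-the-envelope sketch (the household count and document size are my own assumptions, not from Brand's book):

```python
# Rough arithmetic on the Media Lab figure.  Only the 500 Gb/s number
# comes from the quoted passage; the household count and book size are
# assumptions made up for this sketch.

trunk_bps = 500e9            # 500 gigabits per second, aggregate
households = 90e6            # assumed number of U.S. households, late 1980s
per_household_bps = trunk_bps / households

book_bits = 1e6 * 8          # assumed ~1 megabyte for a long text document
seconds = book_bits / per_household_bps

print(round(per_household_bps))  # ~5556 bits/s each if everyone shares at once
print(round(seconds))            # ~1440 s, about 24 minutes per book
```

So even the Media Lab's headline number, divided among every household at once, lands near dial-up modem speeds; the real win only appears because not everyone reads simultaneously. That supports Rob's point that omnipresent massive bandwidth, not just one fat trunk, is the prerequisite.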

(By the way, Stewart Brand's book about the Lab should be required
reading for anyone subscribing to this newsgroup.  You will all be
fascinated by it.  I unfortunately don't have my copy here, but
mail to me for publisher's information, as well as for references
to the material cited above.)

Rob Jellinghaus                | "Lemme graze in your veldt,
jellinghaus@yale.edu.UUCP      |  Lemme trample your albino,
ROBERTJ@{yalecs,yalevm}.BITNET |  Lemme nibble on your buds,
!..!ihnp4!hsi!yale!jellinghaus |  I'm your... Love Rhino" -- Bloom County

glg@sfsup.UUCP (11/18/87)

In article <18947@yale-celray.yale.UUCP> robertj@yale.UUCP writes:
>In article <21526@ucbvax.BERKELEY.EDU> oster@dewey.soe.berkeley.edu.UUCP (David Phillip Oster) writes:
>>Imagine a system as easy to use and as complete as the average university
>>library, but where any document can be annotated by anyone at any time.
>>(And any reader can choose to turn off any subset of annotations he does
>...
>>Conclusion:
>>How can we evolve usenet news into such a system?
>>(There Nick, I've tied nanotechnology to Stuart II.)

>In Stewart Brand's book about MIT's Media Lab, there is a passage
>in which the director of the Lab is telling an audience that AT&T
>currently has the capability to put fiber-optic cabling into every
>home in America.  "What would be the bandwidth of such a system?"
>he is asked.  "500 gigabits per second," he calmly replies.

It would be interesting if this were to happen, and it would have
implications far beyond hypertext.  It is not as far-out as it sounds:
I can easily imagine stringing fiber becoming as cheap as the coax for
cable (is it already?), and although 500 Gbps is not current technology,
it is likely to arrive sooner than you think.  The major stumbling block
that I see is that AT&T (and the rest) would want to make too much money
on it (think of what this kind of bandwidth would do to voice telephone
rates), so it would take a long time to become widespread.

>This is the only way that a hypertext authoring system such as the
>one above could become real.  (The supercomputers to hold all the
>works themselves don't exist yet, but that's not my point.)  Nearly
>universal access to this hyperlibrary would be essential to a full
>implementation of this hyper-authoring scheme.

I don't think real time is all that important.  Sure, it is frustrating
to wait, and there are major benefits if following links can be done
in real time, but think about how much better it would be than trying
to hunt down references in a library.  Many people do not even have
access to a large enough library to do any real research.  Even if it
takes overnight to download a reference to your local system, I think
it is worthwhile.

Also, I can think of other ways to get a global hypertext system linked
into your home with existing technology.  The upload link does not have
to be that fast, since an individual contributor can only type so fast;
phone lines and modems can handle this.  Downloading of large files
could be done over broadcast media such as cable-TV channels.  Your
computer would dial in a request with your modem, and the file would
be captured and loaded into the computer from a box hooked on to your
cable TV.  News-type stuff could be broadcast continually, so it could
be waiting for you (if you told your computer to capture it, that is).
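The asymmetric scheme above can be sketched as a toy protocol: a slow uplink carries only file names, the shared broadcast channel carries everything requested, and each home box filters for its own items. All names here are invented for illustration:

```python
# Toy model of the modem-up / cable-broadcast-down scheme.  The channel,
# server, and file names are all hypothetical, not any real protocol.

BROADCAST = []          # stands in for the shared cable-TV data channel

def modem_request(server_queue, file_id):
    """Slow uplink: a few bytes is enough to name the file wanted."""
    server_queue.append(file_id)

def server_broadcast(server_queue):
    """The head end puts every requested file onto the shared channel."""
    for file_id in server_queue:
        BROADCAST.append((file_id, f"<contents of {file_id}>"))
    server_queue.clear()

def capture(wanted):
    """The home box filters the broadcast for only the items it asked for."""
    return {fid: data for fid, data in BROADCAST if fid in wanted}

queue = []
modem_request(queue, "hypertext-spec.txt")   # a neighbor's request
modem_request(queue, "eoc-review.txt")       # our request
server_broadcast(queue)

mine = capture({"eoc-review.txt"})
print(sorted(mine))   # ['eoc-review.txt'] -- the neighbor's file is ignored
```

The design point is that the expensive direction (downstream bulk data) is shared by everyone on the cable, while the cheap direction (a request naming a file) fits comfortably on an ordinary phone line.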

Yes, current machines are not quite up to the most ambitious hypertext
projects, but they are more than enough to make a very useful system.
Besides, by the time a system is designed and built, the machines will
probably be ready.  In _EoC_ Drexler talks about "designing ahead",
that is, doing the groundwork in anticipation of future technology.
Laserdisks already have enough capacity for storage of large archives;
communications bandwidth is more of a problem, but even for this,
technical solutions exist, and we only have to wait for economic feasibility.

There is a danger of forming standards too early, before enough is known
about the problems, but lack of standards slows things down and isolates
communities of users within their local standard.  Enough is probably
known now to set standards for storage and transmission of text and
links, and for the interactions between archive servers and terminal
machines.  The existence of these standards would clear the way to
set up some initial servers, to prompt development of user interfaces
for terminal machines, and to start dealing with issues like copyright
protection and reimbursement of authors (probably the most significant
issues to encourage publication of quality material in this medium).

Gerry Gleason

lindsay@kelpie.newcastle.ac.UK ("Lindsay F. Marshall") (11/23/87)

In article <2375@sfsup.UUCP> you write:
>I can easily imagine stringing fiber could be as cheap as the coax for
>cable (is it already?),

Sadly, stringing fibre is one of the most expensive things that you can
think of.  Fibre simply isn't robust enough yet, nor easy enough to join,
and so has to be buried in trenches that are much deeper than those for
other services, so that when the people from the power/gas/water
companies come to do something they don't rip right through the fibre.
It may be that the USA is better set up for this kind of thing (i.e.
services in ducts rather than just holes in the ground), but in Europe
the cost of such global cabling is absolutely prohibitive - hence the
lack of success of cable TV in the UK, let alone fibre data lines!!

Lindsay