[sci.nanotech] Update #5

josh@planchet.rutgers.edu (06/29/89)

[Well, this is the first Update I've sent with the new software.
 Let's see if it makes it intact. 
 As always, this is patched together from a MAC binary file,
 but they've switched from Ready-Set-Go to Pagemaker, which has
 a nasty habit of knocking hunks out of words.  How'd *you* like
 to have to guess "meristematic" from "mer^%#@atic"? (I kid you not...)
 Anyway, any typos or other idiocy herein is almost surely mine.
 --JoSH]

+---------------------------------------------------------------------+
|  The following material is reprinted *with permission* from the     |
|  Foresight Update No 5, 01 Mar 89.                                  |
|  Copyright (c) 1988 The Foresight Institute.  All rights reserved.  |
+---------------------------------------------------------------------+
FORESIGHT UPDATE   No. 5
A publication of the Foresight Institute
Preparing for future technologies
 Board of Advisors
Stewart Brand
Gerald Feinberg
Arthur Kantrowitz 
Marvin Minsky
Board of Directors
K. Eric Drexler, President
Chris Peterson, Secretary-Treasurer
James C. Bennett
Editor   Chris Peterson
Publisher   Fred A. Stitt
Assembler   Russell Mills
Publication date  1 Mar 89

 Copyright 1989 The Foresight Institute.  All Rights Reserved.  If you
find information and clippings of relevance to FI's goal of preparing
for future technologies, please forward them to us for possible
coverage in FI Update. Letters and opinion pieces will also be
considered; submissions may be edited. Write to the Foresight
Institute, Box 61058, Palo Alto, CA 94306; Telephone 415-948-5830.

****************************************************************

NANOTECHNOLOGY POLICY 

A team of graduate students and faculty at the Lyndon B. Johnson
School of Public Affairs at the University of Texas at Austin has been
asked to conduct a study of the political and economic ramifications
of nanotechnology. Headed by Dr. Susan Hadden, this research project
is in the format of a two-term course. The fall 1988 term started off
with an introductory lecture by Eric Drexler; the students went on to
study the technology itself and the effects of other formerly new
technologies such as biotechnology. This spring they will attempt to
predict the kinds of social, economic, and political changes inherent
in widespread adoption of nanotechnology, and will review various
policy responses and their possible effects.  The effort is funded by
Futuretrends, a nonprofit educational group. Roger Duncan, president
of Futuretrends and longtime Foresight supporter, initiated the
project, which is expected to release its report in mid-summer 1989.

Thanks 

Once again there are too many people deserving thanks for all to be
listed here, but the following is a representative group: Michael
Schrage for pointing the Rockefeller Foundation in our direction,
Peter C. Goldmark, Jr., for investigating nanotechnology for the
Rockefeller Foundation, Ray and John Alden for continuing useful
advice, Peter Schwartz and Stewart Brand of the Global Business
Network for help with the planned technical conference, David Gagliano
for looking into research funding sources, the Seattle NSG for putting
on Nanocon, Time-Life Books for covering nanotechnology, and Ed
Niehaus for public relations help. Others are mentioned throughout FI
publications.

FI Policy 

The Foresight Institute aims to help society prepare for new and
future technologies, such as nanotechnology, artificial intelligence,
and large-scale space development, by:
 promoting understanding of these technologies and their consequences,
 formulating sound policies for gaining their benefits while avoiding 
their dangers,
 informing the public and decision makers regarding these technologies 
and policies,
 developing an organizational base for implementing these policies, and
 ensuring their implementation.

FI has a special interest in nanotechnology: at this early stage, it
receives relatively little attention (considering its importance),
giving even a small effort great leverage. We believe certain basic
considerations must guide policy:

Nanotechnology will let us control the structure of matter, but who
will control nanotechnology? The chief danger isn't a great accident,
but a great abuse of power. In a competitive world, nanotechnology
will surely be developed; if we are to guide its use, it must be
developed by groups within our political reach. To keep it from being
developed in military secrecy, either here or abroad, we must
emphasize its value in medicine, in the economy, and in restoring the
environment. Nanotechnology must be developed openly to serve the
general welfare.

The Foresight Institute welcomes both advocates and critics of new
technologies.

****************************************************************

Nanotechnology Online

There is a nanotechnology Netnews group, sci.nanotech, on the USENET
system. The USENET newsgroups form a large, distributed, hierarchical
electronic bulletin board; formerly available only to those with UNIX
machines, it is now accessible to anyone through services such as the
WELL at 415-332-6106 (data), 415-332-4335 (voice) and the Portal at
408-725-0561 (data), 408-973-9111 (voice).

In cooperation with FI, sci.nanotech carries most FI publications. The
moderator is Josh Hall (josh@klaatu.rutgers.edu or
rutgers!klaatu.rutgers.edu!josh), who can answer specific questions
about the group by electronic mail.

****************************************************************

Meeting News

January and February saw a number of nanotechnology-related events:
MIT's annual symposium (see report elsewhere in this issue), a lecture
at Bell Communications Research (a spinoff of Bell Labs) on Jan. 13,
major coverage of protein folding and design at the AAAS meeting in
San Francisco, a lecture at Silicon Valley's Software Entrepreneurs
Forum on Feb. 17, a nanotechnology Physics Colloquium at the
University of Seattle, and Nanocon, a regional meeting sponsored by
the Seattle Nanotechnology Study Group. The last three have yet to
occur as we finish this issue, so will be reported on next time.

In addition to the items listed in the "Upcoming Events" column, both
Hewlett-Packard and Union Carbide are planning meetings to discuss
nanotechnology.

"Nanotechnology: Prospects for Molecular Engineering" was the title of
a symposium held at MIT on January 11-12, sponsored this year by both
the MIT Nanotechnology Study Group (MIT NSG) and the Foresight
Institute. A nanotechnology event has been held at MIT annually since
1986.

Eric Drexler gave the introductory lecture, laying out the technical
foundations of the case for nanotechnology and describing basic
designs. Next, David Pritchard of MIT's Physics
Department described his work on laser trapping and the use of optical
standing waves to diffract beams of sodium atoms.  In conversation
after his talk, he noted that it is possible to use optical trapping
to confine atoms to a space small compared to a wavelength of light,
but that positioning is quite inaccurate on an atomic scale.  This
inaccuracy (with today's technology, at least) precludes "optical
assemblers" for molecular structures.

Adam Bell of the Technical University of Nova Scotia described
computer-aided design and its role in design for nanotechnology.  He
emphasized the usefulness of designing a uniform language for
describing systems, and the need to develop new engineering
methodologies in this new domain.

Ray Solomonoff of the MIT NSG spoke on "Managing Innovation" with
particular emphasis on the prospects for managing nanotechnology as it
arrives.  His talk highlighted the practical parallels between
self-replicating molecular machinery and self-improving artificial
intelligence.  He expects that the latter, in particular, is apt to
bring an abrupt transition in knowledge, technology, and world
affairs.

Jeff MacGillivray, also of MIT NSG, looked at the economics to be
expected in a world with nanotechnology.  He asked "what will be of
value?"  His answers included land, resources, and human services.

On the second day, several of the previous speakers were joined by
Marvin Minsky (of the MIT Artificial Intelligence Lab and Media Lab)
and Paul Saia (of Digital Equipment Corporation) in panel discussions
of the technical basis of nanotechnology and the timeframe for its
arrival, and of the social impact and implications expected from
nanotechnology.

The second day's events were capped off by a more informal meeting for
those interested in further pursuing the issues raised.

MIT NSG and FI would like to thank the groups whose funding made the
symposium possible: MIT's Departments of Electrical Engineering and
Computer Science, Materials Science and Engineering, Mechanical
Engineering, and Physics; MIT's Alumni Association, IAP Funding
Committee, and Media Laboratory; and the Digital Equipment
Corporation.

****************************************************************

Nanotechnology & Microengineering Progress

by Chris Peterson

This quarterly newsletter covers a wide range of topics, arranged
under five headings:

 Microchips: semiconductor technology

 Micromachines: miniature and micromachines

 Molecular Engineering: genetic engineering and biotechnology

 Sensors: a subset of micromachines

 Microstructures and Micromechanics: includes new materials, computer
advances, brain theory, math, and semiconductor fabrication, as well
as microstructures and micromechanics

The items consist of abstracts describing research news, technical
papers, company announcements, and patents.  Company profiles and
brief market analysis comments also appear.  The publication's
commercial slant will be useful for investors.

The enabling technologies leading toward nanotech--protein and other
polymer design, supramolecular (and biomimetic) chemistry, and STM/AFM
based micromanipulation--were not covered in the premiere issue we saw, so
the publication's name seems a bit of a misnomer.  However, this has
the advantage for potential subscribers that N&MP should have little
if any overlap with Update.

The newsletter does a good job at summarizing progress in various
micron-scale technologies.  For the technically literate reader who
wants to keep up with these, for business or other reasons, this
publication could easily be worth the subscription price.

N&MP is available from STICS, Inc., 9714 South Rice Ave., Houston, TX
77096, (713) 723-3949.  It is edited by Donald Saxman and costs $200
per year, with a 25% discount for libraries, universities, and medical
schools, and an extra $20 charge for overseas airmail. A sample copy
of the first issue costs $20.

****************************************************************

The Foresight Institute, in cooperation with the Global Business
Network, is planning a small technical colloquium on nanotechnology,
to be held in Palo Alto in fall 1989. This invitational meeting will
help researchers in enabling technologies make contact and communicate
their goals and concerns. Potential attendees will be asked to submit
position papers describing their interests. Additional information
will be announced as it becomes available.

Since last issue we've received news of a Hypermedia Design Workshop
held last October, organized by Jan Walker of DEC and John Leggett of
Texas A&M and funded by DEC.  It was the first of two invited
hypermedia meetings. The goal of the first was to bring together
representatives of as many of the major hypertext media systems as
possible, have them compare designs, and design a hypermedia storage
substrate that would support the various systems. The second meeting
is planned for Texas in early 1989 and will look at user interface
issues and standards.

The following are the participants, their organizations, and the systems
they've worked on: Rob Akscyn (Knowledge Systems: KMS), Doug Engelbart
(McDonnell-Douglas: NLS, Augment), Steve Feiner (Columbia: FRESS,
Interactive Graphical Documents), Frank Halasz (leader of a new
hypertext team at MCC; also Xerox's NoteCards), John Leggett (Texas
A&M: teaches graduate course on hypertext), Don McCracken (Knowledge
Systems: ZOG, KMS), Norm Meyrowitz (Brown: Intermedia), Tim Oren
(Apple: HyperCard), Amy Pearl (Sun: Sun Link Service), Mayer Schwartz
(Tektronix: Neptune/HAM), Randy Trigg (Xerox PARC: TEXTNET,
NoteCards), Jan Walker (DEC: Concordia, Symbolics Document Examiner),
Bill Weiland (U of Maryland: Hyperties).

The only (known) major hypermedia systems not represented were Xanadu
and Guide. Marc Stiegler, Xanadu's Director of Product Development,
reports that all team members were locked in their offices, creating
software and therefore unable to attend.

Nanotechnology Education

A Foresight Institute Briefing paper is available for students and
others who want to learn the basics underlying nanotechnology. Send a
stamped, self-addressed envelope to FI and ask for Briefing #1,
"Studying nanotechnology."

****************************************************************

by Chris Peterson

The goal of nanotechnology and the engineering approach needed to
reach it are receiving increasing attention within the biotechnology
community, particularly among protein designers. Drawn from a pure
science background, these researchers are being pulled increasingly in
the direction of designing and building new structures, a task for
which creative engineering skills are needed.

This interest has shown up at two meetings: At the First Carolina
Conference on Protein Engineering (held last October) the subject was
raised by researcher Bruce Erickson of the University of North
Carolina's chemistry department. As the chair of the session on
Nongenetic Engineering, he led off with a reading from the book
Engines of Creation and recommended it to the audience.

This January's American Association for the Advancement of Science
conference in San Francisco, which included substantial coverage of
protein engineering, featured a plenary lecture given by Frederic
Richards of Yale's Department of Molecular Biophysics and
Biochemistry. In it, he highlighted one paper in particular: the 1981
PNAS paper describing a path from protein engineering to control of
the structure of matter(1).

While it is too soon to tell whether the protein path to
nanotechnology will be the fastest, the goal is becoming clearer to
researchers in that field.

1. Drexler, K.E., Proceedings of the National Academy of Sciences,
78:5275-5278, 1981.

****************************************************************

Upcoming Events

Nanotechnology Keynote address, April 21 evening, American Humanist
Assoc., LeBaron Hotel, San Jose, CA. Part of a weekend-long
conference. Contact 408-251-3030.

Human Genome Project Conference, April 23-25, Alliance for Aging
Research and AMA, J.W. Marriott Hotel, Washington, DC. Dinner lecture
on nanotechnology on 24th. Contact 800-621-8335.

HyperExpo, June 27-29, Moscone Center, San Francisco. Trade show
covering hypermedia and related topics. Contact American Expositions,
212-226-4141.

Second Conference on Molecular Electronics and Biocomputers, Sept.
11-18, Moscow, USSR, $150. Contact P.I. Lazarev, Institute of
Biophysics of the Academy of Sciences of the USSR, Pushchino, Moscow
Region, 142292, USSR.

First Foresight Conference on Nanotechnology, during October or
November, 1989, Foresight Institute and Global Business Network, Palo
Alto, CA. Small technical meeting; see writeup in this issue.

****************************************************************

The Foresight Institute receives hundreds of letters requesting
information and sending ideas. Herewith some excerpts:

I am enclosing an article, "Microscopic Motor is a First Step," by
Robert Pool, published in the Oct. 21 Science. The article discusses
recent developments in micron-scale mechanics. I have seen similar
discussions elsewhere.

A problem in this and other stories on micromachines is that they
often confuse information about nanomachines with information about
micromachines. Expecting micromachinery (which is developing sooner)
to accomplish the tasks of nanomachinery could disillusion proponents of
micromachinery, and mistakenly discredit claims for nanomachinery. The
practical distinction between these two technologies needs to be
clarified.

Tom McKendree

Garden Grove, CA

I agree that progress toward nanotechnology can't be prevented by any
sort of organized suppression. If any one group, country, or group of
countries tries, someone else will make breakthroughs eventually. And
I believe that once it's developed, any attempt to make the technology
proprietary will be short-lived. The secrets of nanotechnology will be
simply too important to not have attempts made at espionage, midnight
computer hacking, and theft... "The Problem of Nonsense in Nanotech"
points out the problems of spreading misconceptions about
nanotechnology and their interference with foresight. If
nanotechnology has the capacity to utterly change society for better
or worse, then the general public needs to get prepared. But I don't
think people will be as interested in the details of the technology as
much as the kinds of change that will be brought about. Engines of
Creation points out a future, perhaps as close as the first half of
the 21st century, with fantastic possibilities. To spark curiosity and
interest it's necessary to discuss things like remaking the physical
shape of humanity. But dwelling on the fantastic would be an open
invitation to bogosity. To make best use of the next few decades, the
public needs to focus on issues such as population control, active
shields, environmental renewal, and space exploration.

John Papiewski

Elgin, IL

Currently I am a student at USC, interested in pursuing a career in
the field of nanotechnology...How should I structure my curriculum in
order to pursue this goal? It seems that this field is
interdisciplinary in nature, consisting of physics, electrical
engineering, molecular biology, and chemistry. Is there a common
denominator?

Shawn Whitlow

Manhattan Beach, CA

Students and others interested should request a free copy of FI's
Briefing #1, "Studying nanotechnology." Send a stamped, self-addressed
envelope.

****************************************************************

Will the BioArchive Work?

by David Brin

David Brin and Eric Drexler discuss here the challenge of using
nanotechnology to restore species from preserved tissue samples, for
species where habitat protection and captive breeding have failed. Dr.
Brin holds a doctorate in astrophysics, works as a consultant to NASA
and the California Space Institute, and teaches graduate-level physics
and writing. He also writes award-winning science fiction.

Dr. Brin leads off:

I have one minor cavil with the notion of gene banks being sufficient
to preserve the information inherent in the gene pools of species.
Certainly I use this concept extensively in my own latest novel
(titled Earth, it includes brief mentions of nanotech). But we should
not be so blithe in assuming the Gene Library contains all. There is
also Process Implementation: getting the initial genetic mix to initiate
and maintain the processes leading to a complete organism.

Take the mitochondria and other purported "guest" genomes within
eukaryotic cells. These symbionts are different in each species. They
must be included. Then take the nurturing environment of the womb/egg.
These are programmed in, all right, but at the other end of the
library, where the shelves read "this is how to be a mother," not
"this is how to be an embryo." This becomes incredibly complicated in
placental mammals, in which certain genes are apparently turned on or
off depending on whether they were delivered by the sperm or by the
egg.

So the concept of recreating lost species from their recorded
information, while worthy and desirable, is not going to be any
trivial undertaking. Even when the day comes that we can read an
entire genome of, say, a blue whale, that'll be a far cry from making
one, even with nanomachines.

The last fly in the ointment is the apparent language of development,
in which one gene doesn't express directly into one macroscopic trait.
Rather, it's the pull and tug of a thousand enzyme selection sites,
all playing against each other, that results in a cell here deciding
to become a neuron and another over there deciding to become a bit of
alveoli. Each enzyme site may take part in hundreds of cuing
operations simultaneously.

Indeed, it may turn out simpler just to disassemble a blue whale, cell
type by cell type, and store that information, using nanomachines to
build an adult from scratch!

Eric Drexler responds:

Some thoughts on restoring species, given frozen tissue samples and
advanced nanotechnology: It is indeed important to save more than just
nuclear genes, especially in the minimal sequence-of-bases sense.
There are also mitochondrial genes, patterns of DNA methylation,
obscurely-encoded states of genetic activation, and who knows what.
Freezing entire tissue samples (and, for insects, entire organisms)
answers several of these concerns, because it saves numerous cell
types with essentially full information.  For plants, which can
typically regenerate from any meristematic cell, samples will clearly
be adequate.

Restoring animal species will be more challenging.  Setting aside
several separate problems (such as genetic diversity, habitat
restoration, and lost cultural information), it would be adequate to
reconstruct fertilized eggs, and to raise the organisms to adulthood.
Starting with only somatic cells and a thorough knowledge of a
hypothetical martian biology, this might be impossible.  But we will
enjoy the advantage (one hopes!) of having closely related species
available. To restore a beetle species, for example, one would
integrate genetic (and other) information gathered from frozen cells
into an egg having a general structure derived from one or more
related species of beetle.  This would be done after studying the
relationship between somatic cell information and egg structure for a
number of existing, normally-reproducing species; note that early
embryology is terribly conservative, in an evolutionary sense.  The
resulting first generation might nonetheless have a somewhat atypical
phenotype, but one would expect the offspring of that generation to be
typical members of the original species.  Mammals require more than
just fertilized eggs, but embryos from endangered species have already
been brought to term by host mothers of related species, even where
the relationship between the species is not terribly close.

By the way, I agree with your evaluation of the relative difficulty of
(1) projecting an adult organism from its genes (etc.) and (2)
constructing tissues or organisms from scratch after a
molecule-by-molecule study of the original.  The first involves a
recipe, the second a blueprint; only blueprints describe products and
leave a choice of implementation strategies.

****************************************************************

by Chris Peterson

Government decisions being made now and over the next few years will
influence these critical points:

  Will nanotechnology be developed openly, or will it be classified
and developed secretly in government labs?

  Will individuals be permitted to publish freely on hypertext
publishing systems, or will the system owners be forced to censor
their writings?

A recent report addresses these questions without ever using the terms
"nanotechnology" and "hypertext publishing." Surprisingly, the report
is published not by a private high-tech think tank, but by the US
Congress's Office of Technology Assessment. FI participants who care
about these issues will want to order this remarkable work. It's the
best short introduction we've seen to the new challenges to freedom of
speech and the press, and to how these challenges affect developing
technologies. Well-written, the report alternately horrifies and
encourages the reader with its review of past and possible future
government actions likely to affect nanotechnology and hypertext
publishing.

Did you know, for instance, that the US government can classify
technical work done by independent, private researchers who take no
public money? Or that it can block a patent or any disclosure by an
inventor, even if the government has no right to the invention in
question? Although these powers might not withstand a Supreme Court
challenge, they appear to be current government policy.

While the report does not take an advocacy role per se, it does
clearly present arguments against such restrictive policies, some of
which date back to the 1940s. Readers who pursue this topic further
will find that FI Advisor Arthur Kantrowitz is a major proponent of a
policy of openness. (Surprisingly to some, Edward Teller, often termed
the father of the H-bomb by the media, also advocates openness.) The
basic argument is that an open society, in addition to being more
free, will progress faster technically, and be better able to defend
itself, than one which binds its minds with over-classification.
According to this theory, only critical short-term military
information, such as codes and troop movements during a war, should be
classifiable.

Although nanotechnology is still far too theoretical for
classification to be likely now, the issue will eventually arise. We
need to understand current policies and, if necessary, make
improvements before problems become acute.

The OTA report's section on electronic publishing gives a remarkably
clear view of such systems, which are described as the future crucible
of cultural change.

 While it misses the basic concept of linked hypertext, other key
features of hypertext publishing are described: the system is seen as
decentralized, more as a clearinghouse for exchange of news and
information than as a gatherer itself, in which the users themselves
can be reporters and publishers. Information is preprocessed or
screened to each individual's taste, either by a host computer sending
the data or by the user's personal computer, perhaps by an
artificially intelligent front-end.

The report tackles the critical issue of how such a system will be
regulated. Legally, publishers are responsible for their output and
can be sued for libel or for publishing false, damaging information.
However, in an open hypertext publishing system individuals will be free
to publish their own material; it will not be prescreened by the
system's owner. But if regulators or the courts regard the system's
owner as the publisher, then the owner would be forced to verify and
police all information on the system--a crippling, impossible task.

In contrast, under today's law, common carriers such as the phone
system can't be held liable for what goes over their lines, since they
obviously have no control over it. Newspapers also are exempt from
liability for material they are legally required to publish. Obviously
hypertext publishing systems, lacking control over what is published,
should likewise be exempt from liability. The report goes so far as to
say that holding electronic publishers liable may conflict with First
Amendment rights. As the report makes clear, whether a new system is
treated as a common carrier in this way is a political decision rather
than a technical one.

The report makes the obvious suggestion that in such a system the
actual publisher of each piece of information should be held liable.
It gets confused on this point, however, by implying that for some
information it may be impossible to identify a responsible party. We
would disagree: a system can be set up such that every item it
contains is linked to a responsible party, typically whoever paid for
its publication on the system. Authorship could be kept anonymous in
some cases, yet made available by a court order when necessary.

Again the national security issue is raised: it has been suggested
that security concerns would force the screening of database entries
for militarily-valuable information, or require database subscriber
lists to be turned over to the government. The former would render the
system uneconomic; the latter violates the right to privacy. Those
wishing to pursue the issues raised by electronic publishing will want
to see Ithiel de Sola Pool's excellent book, Technologies of Freedom,
which is quoted in the report.

But for those desiring a compact introduction to these critical
issues, we suggest ordering a copy of the report:

Science, Technology, and the First Amendment, GPO stock number
052-003-01090-9. It can be obtained by mail from the Superintendent of
Documents, Government Printing Office, Washington, DC 20402-9325, or
by calling 202-783-3238. Checks in US currency, Visa, Mastercard, and
Choice cards are accepted. The cost is $3.50 within the US, $4.40
outside.

****************************************************************

Books

Signal: Communication Tools for the Information Age, ed. Kevin Kelly,
Harmony Books, 1988, paperback, $16.95. A Whole Earth Catalog focusing
on high tech subjects, mixes serious items (e.g., FI) with lighter
ones. Foreword by FI advisor Stewart Brand.

Filters Against Folly, by Garrett Hardin, Penguin Books, 1985,
paperback, $7.95. A respected environmentalist looks at the relationship between
ecology and economics over time, pointing out the problems of
commonization and the error of thinking every worldwide problem is
global. A systems approach to a difficult problem; highly recommended.
One jarring note: Hardin's seeming belief that economics is a zero-sum
game.

Molecules, by P.W. Atkins, Scientific American Library Series #21
(distributed by W.H. Freeman), 1987, hardcover, $32.95. Lavishly
illustrated and elegantly written in nontechnical language, it makes the
molecular world understandable. Requires no prior knowledge of
chemistry.

Computer-Supported Cooperative Work, ed. Irene Greif, Morgan Kaufmann,
1988, hardcover, $36.95. A collection of papers on groupware and
hypertext. Includes classic visionary papers by Vannevar Bush and
Douglas Engelbart, interesting work by Thomas Malone, Robert Johansen,
Xerox PARC, others.

Text, Context, and Hypertext, ed. Edward Barrett, MIT Press, 1988,
hardcover, $35. Diverse set of papers on how computers and hypertext
have changed the way people write using computers. Strong emphasis on
computer documentation. Quality is uneven, with some overlap, but
includes some noteworthy papers.

The Ecology of Computation, ed. Bernardo Huberman, Elsevier Science
Publishers, 1988, paperback, $39.50. Now available in a somewhat more
affordable edition. Open-systems perspective on advanced computing.
Includes a set of three papers on agoric market-based computation. For
the computer literate.

Proteins: Structures and Molecular Properties, by Thomas E. Creighton,
W.H. Freeman, 1984, hardcover, $37.95. Invaluable reference for
protein designers and nanotechnologists thinking about molecular
self-assembly.

Quanta, by P.W. Atkins, Clarendon, 1974 (reprint 1985), paperback,
$29.95. Qualitative explanations of quantum theory concepts with a
bare minimum of mathematics, in dictionary format. A reference rather
than a beginner's text.

****************************************************************

+---------------------------------------------------------------------+
|  This material is based on and builds on the case made in the book  |
|  "Engines of Creation" by K. Eric Drexler.                          |
|  It is reprinted with the additional permission of the author.      |
+---------------------------------------------------------------------+

by K. Eric Drexler

Artificial intelligence, like nanotechnology, will reshape our future.
Nanotechnology means thorough, inexpensive control of the structure of
matter, and early assemblers will enable us to build better
assemblers: this will make it a powerful and self-applicable
technology. Artificial intelligence (that is, genuine, general-purpose
artificial intelligence) will eventually bring millionfold-faster
problem solving ability, and, like nanotechnology, it will be
self-applicable: early AI systems will help solve the problem of
building better, faster AI systems.

AI differs from nanotechnology in that its basic principles are not
yet well understood. Although we have the example of human brains to
show that physical systems can be (at least somewhat) intelligent, we
don't understand how brains work or how their principles might be
generalized. In contrast, we do understand how machines and molecules
work and how to design many kinds of molecular machines. In
nanotechnology, the chief challenge is developing tools so that we can
build things; in AI, the chief challenge is knowing what to build with
the tools we have.

To get some sense of the possible future of AI--where research may go,
and how fast--one needs a broad view of where AI research is today.
This article gives a cursory survey of some major areas of activity,
giving a rough picture of the nature of the ideas being explored and
of what has been accomplished. It will inevitably be superficial and
fragmentary. For descriptive purposes, most current work can be
clumped into three broad areas: classical AI, evolutionary AI, and
neural networks.

Classical AI

Since its inception, mainstream artificial intelligence work has tried
to model thought as symbol manipulation based on programmed rules.
This field has a huge literature; good sources of information include
a textbook (Artificial Intelligence by Patrick Winston,
Addison-Wesley, 1984) and two compilations of papers (Readings in
Artificial Intelligence, Bonnie Lynn Webber, Nils J. Nilsson, eds.,
Morgan Kaufmann, 1981, and Readings in Knowledge Representation,
Ronald J. Brachman, Hector J. Levesque, eds., Morgan Kaufmann, 1985).

The standard criticism of AI systems of this sort is that they are
brittle, rather than flexible. One would like a system that can
generalize from its knowledge, know its limits, and learn from
experience. Existing systems lack this flexibility: they break down
when confronted with problems outside a narrow domain, and they must
be programmed in painful detail. Work continues on alternative ways to
represent knowledge and action, seeking systems with greater
flexibility and a measure of common sense. (A learning program called
Soar, developed by Allen Newell of Carnegie Mellon University in
collaboration with John Laird and Paul Rosenbloom, is prominent in
this regard.) In the meantime, systems have been built that can
provide expert-level advice (diagnosis, etc.) within certain narrow
domains. Though not general and flexible, they represent achievements
of real value. Many of these so-called expert systems are in
commercial use, and many more are under construction.
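To give the flavor of this rule-based style, here is a minimal
forward-chaining sketch in Python. The rules, facts, and symbols are
invented for illustration; real expert systems add conflict
resolution, uncertainty handling, and far larger knowledge bases.

    # Minimal forward-chaining sketch (hypothetical rules and facts).
    # A rule fires when all of its condition symbols are among the known facts.
    rules = [
        ({"fever", "stiff_neck"}, "suspect_meningitis"),
        ({"suspect_meningitis"},  "recommend_lumbar_puncture"),
    ]
    facts = {"fever", "stiff_neck"}

    changed = True
    while changed:                      # keep firing rules until nothing new is inferred
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # the rule "fires", asserting its conclusion
                changed = True

    print(sorted(facts))

The brittleness criticized above is easy to see here: a fact outside
the rule vocabulary simply matches nothing.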

Evolutionary AI

When one reads "artificial intelligence" in the media, the term
typically refers to expert systems. If this were the whole of AI, it
would still be important, but not potentially revolutionary. The great
potential of AI lies in systems that can learn, going beyond the
knowledge spoon-fed to them by human experts.

The most flexible and promising learning schemes are based on
evolutionary processes, on the variation and selection of patterns.
Doug Lenat's EURISKO program used this principle, applying heuristics
(rules of thumb) to solve problems and to vary and select heuristics.
It achieved significant successes, but Lenat concluded that it lacked
sufficient initial knowledge. He has since turned to a different
project, CYC, which aims to encode the contents of a single-volume
encyclopedia, along with the commonsense knowledge needed to make
sense of it, in representations of the sort used in classical AI work.

Another approach to evolutionary AI, pioneered by John Holland,
involves classifier systems modified by genetic algorithms. A
classifier system uses a large collection of rules, each defined by a
sequence of ones, zeroes, and don't-care symbols. A rule "fires"
(produces an output sequence) when its sensor-sequence matches the
output of a previous rule; a collection of rules can support complex
behavior. Rules can be made to evolve through genetic algorithms,
which make use of mutation and recombination (like chromosome crossover in
biology) to generate new rules from old. This work, together with a
broad theoretical framework, is described in the book Induction:
Processes of Inference, Learning, and Discovery (by John H. Holland,
Keith J. Holyoak, Richard E. Nisbett, and Paul R. Thagard, MIT Press,
1986). So far as I know, these systems are still limited to research
use.
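As a rough illustration of the matching and genetic operators
described above (our own sketch in Python, not drawn from Holland's
systems), the following shows a classifier condition with don't-care
symbols, plus one-point crossover and mutation; the rule strings and
rates are invented.

    import random

    def matches(condition, message):
        # '1' and '0' must match exactly; '#' is a don't-care symbol.
        return all(c == '#' or c == m for c, m in zip(condition, message))

    def crossover(a, b):
        # One-point crossover, loosely analogous to chromosome crossover.
        point = random.randrange(1, len(a))
        return a[:point] + b[point:]

    def mutate(rule, rate=0.1):
        return "".join(random.choice("01#") if random.random() < rate else c
                       for c in rule)

    print(matches("1#0#", "1101"))   # True: positions 2 and 4 are don't-cares
    print(matches("1#0#", "0101"))   # False: the first bit must be 1
    print(mutate(crossover("1#0#", "01##")))

A full classifier system would also assign each rule a strength,
reward rules whose firings lead to payoff, and breed new rules
preferentially from strong parents.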

Mark S. Miller and I have proposed an agoric approach to evolving
software, including AI software. If one views complex, active systems
as being composed of a network of active parts, the problem of
obtaining intelligent behavior from the system can be recast as the
problem of coordinating and guiding the evolution of those parts. The
agoric approach views this as analogous to the problem of coordinating
economic activity and rewarding valuable information; accordingly, it
proposes the thorough application of market mechanisms to computation.
The broader agoric open systems approach would invite and reward human
involvement in these computational markets, which distinguishes it from
the "look Ma--no hands!" approach to machine intelligence. These ideas
are described in three papers (Comparative Ecology: A Computational
Perspective, Markets and Computation: Agoric Open Systems, and
Incentive Engineering for Computational Resource Management) included
in a book on the broader issues of open computational systems (The
Ecology of Computation, B. A. Huberman, ed., in Studies in Computer
Science and Artificial Intelligence, North-Holland, 1988).
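The following toy auction (our own illustration, not taken from the
papers cited above) conveys the basic agoric idea: computational
tasks bid for a scarce resource, and price signals decide who runs.
Task names and bid values are invented.

    # Toy "agoric" allocation: tasks bid for processor-time slices, and the
    # slices go to the highest bidders.  Everything here is hypothetical.
    bids = {"render": 5.0, "search": 12.0, "backup": 1.5, "learn": 8.0}
    slices_available = 2

    winners = sorted(bids, key=bids.get, reverse=True)[:slices_available]
    for task in winners:
        print(f"{task} wins a slice at bid {bids[task]}")
    # A real agoric system would let tasks earn and spend currency, charge
    # rent for memory, and reward subcontractors that provide useful results.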

Ted Kaehler of Apple Computer has used agoric concepts in an
experimental learning system initially intended to predict future
characters in a stream of text (including written dates, arithmetic
problems, and the like). Called "Derby," in part because it
incorporates a parimutuel betting system, this system also makes use
of neural network principles.

Neural nets

Classical AI systems work with symbols and cannot solve problems
unless they have been reduced to symbols. This can be a serious
limitation.

For a machine to perceive things in the real world, it must interpret
messy information streams--taking information representing a sequence
of sounds and finding words, taking information representing a pattern
of light and color and finding objects, and so forth. To do this, it
must work at a pre-symbolic or sub-symbolic level; vision systems, for
example, start their work by seeking edges and textures in patterns of
dots of light that individually symbolize nothing.

The computations required for such tasks typically require a huge mass
of simple, repetitive operations before patterns can be seen in the
input data. Conventional computers simply do one operation at a time,
but these operations can be done by many simpler devices operating
simultaneously. Indeed, these operations can be done as they are in
the brain--by neurons (or neuron-like devices), each responding in a
simple way to inputs from many neighbors, and providing outputs in
turn.
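A single such unit is simple enough to write down. The sketch below
(in Python, with invented weights and inputs) computes a weighted sum
of its neighbors' outputs and passes it through a smooth threshold;
networks of thousands of these units, with weights adjusted by a
learning rule, are what the projects described below study.

    import math

    def unit(inputs, weights, bias):
        # Weighted sum of neighboring units' outputs, then a sigmoid "threshold".
        activation = sum(i * w for i, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-activation))

    neighbor_outputs = [0.9, 0.1, 0.4]    # outputs of three neighboring units
    weights = [1.5, -2.0, 0.7]            # connection strengths (invented)
    print(unit(neighbor_outputs, weights, bias=-0.5))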

Recent years have seen a boom in neural network research. Different
projects follow diverse approaches, but all share a connectionist
style in which significant patterns and actions stem not from symbols
and rules, but from the collective behavior of large numbers of
simple, interconnected units. These units roughly resemble neurons,
though they are typically simulated on conventional computers, and the
resemblance in behavior is often very rough indeed. Neural networks
have shown many brain-like properties, performing pattern recognition,
recovering complete memories from fragmentary hints, tolerating noisy
signals or internal damage, and learning--all within limits, and
subject to qualification. A variety of neural network models are
described in the two volumes of Parallel Distributed Processing:
Explorations in the Microstructure of Cognition (edited by David E.
Rumelhart and James L. McClelland, MIT Press, 1986). Neural network
systems are beginning to enter commercial use. Some characteristics of
neural networks have been captured in more conventional computer
programs (Efficient Algorithms with Neural Network Behavior, by
Stephen M. Omohundro, Report UIUCDCS-R-87-1331, Department of Computer
Science, University of Illinois at Urbana-Champaign, 1987).

A major strength of the neural-network approach is that it is
patterned on something known to work--the brain. From this perspective
a major weakness of most current systems is that they don't very
closely resemble real neuronal networks. Computational models inspired
by brain research are described in a broad, readable book on AI,
philosophy, and the neurosciences (Neurophilosophy, by Patricia Smith
Churchland, MIT Press, 1986) and in a more difficult work presenting a
specific theory (Neural Darwinism, by Gerald Edelman, Basic Books,
1987). A bundle of insights based on AI and the neurosciences appears
in The Society of Mind (by Marvin Minsky, Simon and Schuster, 1986).

Some observations

For all its promise and successes, AI has hardly revolutionized the
world. Machines have done surprising things, but they still don't
think in a flexible, open-ended way. Why has success been so limited?

One reason is elementary: as robotics researcher Hans Moravec of
Carnegie-Mellon University has noted, for most of its history, AI
research has attempted to embody human-like intelligence in computers
with no more raw computational power than the brain of an insect.
Knowing as little as we do about the requirements for intelligence, it
makes sense to try to embody it in novel and efficient ways. But if
one fails to make an insect's worth of computer behave with human
intelligence--well, it's certainly no surprise.

Machine capacity has increased exponentially for several decades, and
if trends continue, it will match the human brain (in terms of raw
capacity, not necessarily of intelligence!) in a few more decades.
Meanwhile, researchers work with machines that are typically in the
sub-microbrain range. What are the prospects for getting intelligent
behavior from near-term machines?

If machine intelligence should require slavish imitation of brain
activity at the neural level, then machine intelligence will be a long
time coming. Since brains are the only known systems with general
intelligence, this is the proper conservative assumption, which I made
for the sake of argument at one point in Engines of Creation.
Nonetheless, just as assemblers will enable construction of many
materials and devices that biological evolution never stumbled across,
so human programmers may be able to build novel kinds of intelligent
systems. Here we cannot be so sure as in nanotechnology, since here we do
not know what to build, yet novel systems seem plausible. It is, I
believe, reasonable to speculate that there exist forms of spontaneous
order in neural-style systems that were never tested by
evolution--indeed, that may make little biological sense--and that
some of these are orders of magnitude better (in speed of learning,
efficiency of computation, or similar measures) than today's
biological systems. Stepping outside the neural realm for a moment,
Steve Omohundro (see above) has found algorithms that outperform
conventional neural networks in certain learning and mapping tasks by
factors of millions or trillions.

Thus, although there is good reason to explore brain-like neural
networks, there is also good reason to explore novel systems. Indeed,
some of the greater successes in current neural network research
involve multi-level versions of back-propagation learning schemes that
seem rather nonbiological (and Omohundro's algorithms seem entirely
nonbiological).
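For readers who have not met back-propagation, the following minimal
sketch (in Python with NumPy) trains a tiny two-layer network on the
XOR function by propagating output errors backward to adjust earlier
weights. The network size, learning rate, and task are our own
choices for illustration, not taken from the research described
above.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

    W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)       # input -> hidden
    W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)       # hidden -> output
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for step in range(20000):
        h = sigmoid(X @ W1 + b1)                 # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)      # error signal at the output
        d_h = (d_out @ W2.T) * h * (1 - h)       # error propagated back to hidden layer
        W2 -= h.T @ d_out;  b2 -= d_out.sum(axis=0)
        W1 -= X.T @ d_h;    b1 -= d_h.sum(axis=0)

    print(np.round(out, 2))       # typically approaches [[0],[1],[1],[0]]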

In summary, AI research is rich in diverse, promising approaches. Our
ignorance of our degree of ignorance precludes any accurate estimate
of how long it will take to develop genuine, flexible artificial
intelligence (of the sort that could build better AI systems and
design novel computers and nanomechanisms). If genuine AI requires
understanding the brain and developing computers a million times more
powerful than today's, then it is likely to take a long time. If
genuine AI can emerge through the discovery of more efficient
spontaneous-order processes (or through the synergistic coupling of
those already being studied separately) then it might emerge next
month, and shake the world to its foundations the month after.

In this, as in so many areas of the future, it will not do to form a
single expectation and pretend that it is likely ("We will certainly
have genuine AI in about 20 years." "Poppycock!"). Rather, we must
recognize our uncertainty and keep in mind a range of expectations, a
range of scenarios for how the future may unfold. Genuine AI may come
very soon, or very late; it is more likely to come sometime in
between. Since we don't know what we're doing, it's hard to guess the
rate of advance. Sound foresight in this area means planning for
multiple contingencies.

****************************************************************

Ultimate Computing

by Ralph Merkle

Stuart R. Hameroff's Ultimate Computing: Biomolecular Consciousness
and Nanotechnology (Elsevier, 1987, $78) is an uncritical mix of
fact, fancy, and fallacy. Hameroff says "...this book flings metaphors
at the truth. Perhaps one or more will land on target..."  Perhaps, but
the reader must sort the hits from the misses. One miss is his central
premise, that "...the cytoskeleton is the cell's nervous system, the
biological controller/computer. In the brain this implies that the
basic levels of cognition are within nerve cells, that cytoskeletal
filaments are the roots of consciousness." (Italics in original.)
Unfortunately, there is every reason to believe this is completely
wrong. This casts something of a pall over the book.

Hameroff's chapter on nanotechnology is better than his average,
although it adopts the curious perspective that nanotechnology really
began with Schneiker in 1986, with Drexler mentioned only in passing.
(Readers can check Drexler's 1981 PNAS paper and decide for
themselves.) This is explained by the acknowledgements which say that
"Conrad Schneiker [Hameroff's research assistant] supplied most of the
material on nanotechnology and replicators for Chapter 10..."

Hameroff covers a lot of ground.  He has chapters on the philosophy of the
mind, the origin of life, the cytoskeleton, protein dynamics,
anesthesia (a good chapter--Hameroff is an anesthesiologist), viruses,
and nanotechnology.  He gives his own qualifications in a dozen fields
as "...an expert in none, but a dabbler in all..."  He's mostly right.
There are better books written by more qualified people--the reader is
advised to select from among them.

Dr. Merkle's interests range from neurophysiology to computer
security; he also lectures on nanotechnology.

****************************************************************

Dr. Mills has a degree in Biophysics and assists in the production of
Update.

by Russell Mills

Electronics

Researchers at Caltech, JPL, and the Univ. of Sao Paulo, Brazil have designed
(but not built) a molecular-sized shift register -- a memory storage
device with 1000 times the density and a ten-thousandth the energy
consumption of its VLSI equivalent. Bits are stored by bumping
individual electrons into the energy levels of a polymer, where they
are moved along the polymer as more bits are written. The design has
been worked out in some detail, and specifies the orbital energy
levels of the molecules, the rates of competing (error-producing)
electron transitions, spacing of the polymers, and the timing of the
read/write cycle. The authors state that the register and associated
read/write devices could be implemented with current technology; they
provide chemical formulas of candidate molecules. [Science 241:817-820
(12Aug88)]
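As a behavioral illustration only (nothing here models the electron
physics or the energy levels the designers specify), the following
Python snippet mimics what any shift register does: writing a new bit
at one end pushes the stored bits one position along the chain.

    from collections import deque

    length = 8
    register = deque([0] * length, maxlen=length)   # all positions start empty

    def write_bit(bit):
        register.appendleft(bit)       # new bit enters; the oldest bit is shifted out
        return register[-1]            # the bit now available at the read end

    for bit in [1, 0, 1, 1]:
        write_bit(bit)
    print(list(register))              # [1, 1, 0, 1, 0, 0, 0, 0]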

Molecular-sized conducting wires of lengths down to 3 nanometers have
been made at the Univ. of Minnesota from polyacenenone and imide
subunits. The researchers hope to generate 3-D networks under 10 nm in
size using similar chemical techniques. [New Scientist (19May88)]

Chemistry

Herschel Rabitz at Princeton Univ. proposes using femtosecond laser
pulses to excite molecules in solution, measuring their response, and
using the data to craft another pulse--thus homing in on the pulse
structure needed to produce a desired chemical reaction. Once the
correct pulse structure is known, it could be used routinely to carry
out the reaction while dispensing with the elaborate techniques now
required to protect one part of the molecule while another part is
being modified. If Rabitz's method works, it may shorten many of the
paths to nanotechnology by drastically simplifying the assembly of
complicated molecules. [Science News 134:6 (2Jul88)]

Micromanipulation

A technique has been developed at Bell Labs for trapping and
manipulating micro-organisms without damaging them. A lens is used to
focus a laser on the organism; light refraction results in a force
that pushes it toward the focal point of the beam. Viruses and
bacteria can be trapped and immobilized by the technique; larger
cells, such as yeast or protozoa, can be dragged around by moving the
beam. The investigators even found that they could reach inside a cell
with the laser beam, grasp internal organelles and move them around.
One wonders whether a similar technique could be used to assemble
components of micromachines like those discussed elsewhere in this
article. [Science 241:1042 (26Aug88)]

Physicists at the National Bureau of Standards are now able to confine
groups of sodium atoms between a set of laser beams and then slow down
their motions to under 20 cm/sec. Under these conditions the
properties of atoms can be studied with very high precision; such
information will someday be needed for the design of nanomachines and
zero-tolerance materials. [Science 241:1041-1042 (26Aug88)]

A step forward in our ability to handle individual molecules has been
made by Japanese researchers at Osaka Univ. who have directly measured
the tensile strength of an intermolecular bond--by pulling on it until
it broke. The bond is that between protein subunits in a skeletal
muscle filament. The filaments are chains of "actin" molecules held
together by non-covalent bonds; two such chains wind around each
other to form an actin filament. Another protein, "myosin", contains
the motor apparatus of the muscle. The researchers obtained a value of
108 piconewtons for the tensile strength of actin filaments. They
proceeded to measure the force exerted by each myosin "motor" as it
pulls on an actin filament--about 1 pN. Since each actin filament is
pulled on by roughly 50 myosin molecules, there would seem to be a
safety factor of 2 built into our muscles. [Nature 334:74-76 (Jul88)]
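The safety-factor arithmetic is easy to check:

    tensile_strength_pN = 108          # measured breaking force of an actin filament
    force_per_myosin_pN = 1            # approximate pull of one myosin motor
    myosins_per_filament = 50
    total_load_pN = force_per_myosin_pN * myosins_per_filament   # about 50 pN
    print(tensile_strength_pN / total_load_pN)                   # roughly 2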

Viewing

Biochemists at Cornell Univ. are now able to take 120 picosecond x-ray
diffraction exposures of organic molecules and enzymes. This
breakthrough is made possible by a magnetic undulator that produces an
intense x-ray beam. Until now, x-ray diffraction analysis has required
long exposures, especially for large molecules. Molecular motion would
cause the images to blur, thus limiting the resolution obtained. With
exposure times now reduced by a million-fold, it should be possible to
watch enzymes change shape as they catalyze reactions and to
troubleshoot nanomachines by observing them in action. [Science
241:295 (15Jul88)]

Micromechanics

An electric motor less than half a millimeter across, miniature
air-driven turbines, and gear trains--these are among the various
micromachines recently fabricated at the Univ. of Calif. at Berkeley,
Cornell Univ., and Bell Labs using the techniques of integrated
circuit manufacture. Intended to provide measurements of friction,
wear, viscosity, lubrication, stress, deformation, fatigue and other
factors at the scale of microtechnology, they may be forerunners of
practical devices: tiny fans for cooling integrated circuits,
drug-dispensing mechanisms for smart pills, cutting tools for
unblocking blood vessels, cell sorters for diagnostic tests. Similar
methods might be used to make even smaller machines, but true
nanomachines are probably beyond the range of these techniques.
[Science 242:379-380 (21Oct88)]

Public attitudes

Victim of numerous court-ordered delays inspired by unfounded fears,
the U.S. biotechnology industry has finally realized that it can no
longer take public awareness for granted. Some companies have dealt
with the problem by hiring public relations firms to promote positive
attitudes toward them; often this approach has led to
company-sponsored public meetings in communities where the testing of
genetically modified organisms is being planned. The effectiveness of
the effort is already evident--more than a dozen field tests have been
conducted recently without controversy. [Science 242:503-505
(28Oct88)] Nanotechnology proponents: take note! Technophobia is an
easy nut to crack when moderate resources are devoted to the effort.

Protein engineering

Wm. DeGrado's group at the duPont Co. has continued to make remarkable
progress in protein design and production. Having designed a
four-helix protein that self-assembles into a stable bundle, they
proceeded to synthesize the gene for this protein, insert the gene
into a bacterium, and show that the bacterium produces the desired
protein. Although this effort aimed at studying the relationship
between amino-acid sequence and 3-dimensional structure of proteins,
the designed protein will probably be used as a platform for adding
functional features. [Science 241:976-978 (19Aug88)]

The molecules responsible for photon-capture in photosynthesis were
mapped in detail several years ago. To find out how they work,
scientists at MIT and Washington Univ. (St. Louis) are making
amino-acid substitutions in the reaction center of photosynthetic
bacteria. When they altered an important amino acid linking a
chlorophyll molecule with its protein support, one of the chlorophyll
subunits lost its magnesium atom--yet the system still functioned at
about 50% efficiency. This suggests that photosynthesis does not
depend critically on the molecular structures arrived at through
traditional evolution, and that better and simpler molecules may be
developed for powering some kinds of nanomachinery. [Science News
134:292]

Biological membranes are equipped with a variety of channels
connecting the inside and outside of cells or organelles. These
channels, made of protein, can be opened and closed; when open they
allow certain ions to pass through the cell membrane. Wm. DeGrado's
group at duPont has designed and synthesized a number of simple ion
channel proteins and tested their ability to form functional ion
channels in a phospholipid membrane. The proteins were chains of 14 to
21 serine and leucine residues, arranged into helical structures with
the polar serines running down one side and the apolar leucines along
the opposite side. A number of these helices would then aggregate in
parallel to form a cylindrical bundle around a central channel. The
researchers determined that 21-residue proteins spanned the membrane
and created a conductive path for ions. The amino-acid sequence of the
proteins determined the number of helices in a bundle, and this in
turn determined the size of ions that could pass through the channel.
[Science 240:1177-1181 (27May88)]
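One way to see the "one side polar, one side apolar" arrangement is a
helical-wheel calculation: in an alpha helix each successive residue
advances roughly 100 degrees around the helix axis. The Python sketch
below uses a hypothetical leucine/serine repeat (not the published
duPont sequence) to show how the serines cluster within a narrow arc
of the wheel.

    # Hypothetical 21-residue Leu/Ser repeat; L = leucine (apolar), S = serine (polar).
    sequence = "LSLLLSL" * 3
    for i, residue in enumerate(sequence, start=1):
        angle = ((i - 1) * 100) % 360      # ~100 degrees per residue in an alpha helix
        kind = "polar (Ser)" if residue == "S" else "apolar (Leu)"
        print(f"residue {i:2d}  {residue}  {angle:3d} deg  {kind}")
    # With this repeat the serines all fall between 60 and 140 degrees,
    # i.e. along one face of the helix; the leucines cover the rest.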

Protein engineering advances swiftly. In each of the following three
summaries, researchers have programmed Escherichia coli bacteria to
produce and secrete redesigned antibody molecules. Bacteria are far
easier to program and grow than eukaryotic (nucleated) cells, but in
earlier experiments bacteria would not output functional proteins. In
the latest work the bacteria have been persuaded to produce
"antigen-binding fragments" (Fabs) with the same specificity and
affinity for their substrates as the original antibodies.

Researchers at Max Planck Institute developed a bacterial expression
system mimicking the one eukaryotes use. In eukaryotic cells, an
antibody's protein chains are synthesized in the cell's cytoplasm,
then transported into an organelle called the "endoplasmic reticulum,"
where they are trimmed, folded, bonded, and paired into a functioning
configuration. The researchers first examined the 3-dimensional
structure of the antibody MCPC603 and decided which portions of it to
keep. They next constructed a custom plasmid (mini-chromosome)
consisting of: the DNA sequences coding for the antigen-binding
portions of the antibody's protein chains, two bacterial "signal
sequences" coding for protein appendages that tell the bacterial cell
membrane to secrete the proteins, and several other sequences required
for replication and translation of the DNA via RNA into protein. When
this plasmid was introduced into Escherichia coli, the bacteria used
the new DNA to make and secrete the Fab protein chains. The chains
then folded and bonded themselves correctly. [Science 240:1038-1041
(20May88)]

A group at International Genetic Engineering, Inc. used essentially
the same technique to produce a chimeric Fab consisting of antigen
recognition domains taken from a mouse antibody, and the remainder
taken from a human antibody (presumably to forestall an immune attack on
the Fab if it should be used therapeutically in humans). This
particular Fab was chosen because it attacks human colon cancer cells.
[Science 240:1041-1043 (20May88)]

Genex Corp. researchers have gone a step further in simplifying
antibody molecules. Traditional antibodies are composed of four
polypeptide chains. In the Genex design, two of these chains are
eliminated and the other two are joined by a short chain of amino
acids. The result is called a "single-chain antigen-binding protein."
Genes encoding several such proteins were constructed and expressed
in E. coli. The proteins produced by the bacterium proved to have the
same specificity and affinity for the substrates as the original
antibodies. Single-chain antigen-binding proteins are expected to
replace monoclonal antibodies in such areas as cancer and
cardiovascular therapy, assays, separations, and biosensors. [Science
242:423-426 (21Oct88)]

Amidases are enzymes that catalyze the hydrolysis of amide bonds. Of
particular interest to biotechnologists are amidases specific for the
amide bonds connecting amino acids together in proteins; what is
needed is a set of tools for cutting a protein at any desired place along its
amino acid sequence. Researchers at Scripps Clinic and Penn State
Univ. have overcome a major hurdle by developing a Fab that catalyzes
the hydrolysis of a somewhat different amide bond joining two aryl
components. Mice were immunized with a compound resembling the
transition state of amide hydrolysis; whole antibodies collected from
the mice were then enzymatically trimmed. The resulting Fabs sped up
the hydrolysis reaction by a factor of 250,000. [Science 241:1188-1191
(2Sep88)]

****************************************************************

by James C. Bennett

Looking over Stewart Brand's bio, the verb that jumps out at one is
"founded": he founded the Whole Earth Catalog, Point Foundation,
Coevolution Quarterly (now the Whole Earth Review), the WELL (a
regional computer teleconferencing system), and co-founded the Global
Business Network. He has also taught at U. Cal. Berkeley and the
Western Behavioral Sciences Institute, serves on the Board of Trustees
of the Santa Fe Institute, has been a Visiting Scientist at MIT's Media
Lab, and has written books including The Media Lab: Inventing the Future
at MIT. The following is a discussion between Stewart and Jim Bennett,
cofounder and Vice President of the American Rocket Company. Jim
serves on FI's Board of Directors and will be profiled in a later
issue.

Editor

FI: Foresight's ambition is to begin the debate about nanotechnology
on a more reasonable, less polarized basis than previous debates about
technology, such as that about nuclear power. How reasonable do you
think this ambition is?

SB: I don't know; it will be an interesting experiment. The only previous
attempt at anything like this that I can think of is the Asilomar
conference on genetic engineering, where they got a lot of
professionals together and tried to predict what the negative
consequences of recombinant DNA experiments might be, and what measures
would be reasonable to take to prevent such consequences. It's not
clear how beneficial that conference was. A lot of opponents of
genetic engineering took the statements made there, and, in effect, said
"See, even the scientists had some doubts about this, so we should
really be worried."

FI: There you had a situation where a number of people were already
polarized, so they essentially took advantage of the situation?

SB: Yes, but now that I think of it, that was a first attempt. The
second attempt at anything is usually quite different from the first
time around.

FI: If you were going to give us "Stewart Brand's Rules for Productive
Debate," what would they be?

SB: Don't know yet. What's important is to get very smart people, who
have ears as well as mouths. Some very smart people can't listen.

FI: So one thing to do would be to be selective as to who to invite?

SB: Word gets around as to who's good at conferences. Most people who
are high up in science and technology spend a lot of time in
conferences, and it's fairly easy to tell who are listeners as well as
talkers. You can also tell a lot by how people talk on the telephone:
some people just preach at you.

FI: To what extent is it useful to get people who don't have
scientific and technical backgrounds involved in the debate, and at
what point is it useful to do that?

SB: I think it's worth having people who are politically active
involved at all stages of the process. You want to have both people
who are astute technically and people who are sophisticated
politically. Some who are competent in science
also have a practical knowledge of politics. Especially in fields like
conservation biology, you need to have a comprehensive view of things
so that the Costa Rican farmer doesn't get left out of the campaign,
for example. You need to get people who know what it takes to
negotiate agreement. And to negotiate disagreement, by the way.

FI: I can see a lot of cases where you're not going to reach quick
agreement among people. You at least want the disagreements to be
productive rather than destructive.

SB: You need to have people come in and say, "Yeah, we agree on 80% of
this stuff," and then identify the items they disagree on, so that as
further evidence or information becomes available those items can be
resolved.

FI: I think that the open-minded people you're describing here are the
sort of people who would be interested in seeing the new information
come in to resolve such points, rather than fearing being proven wrong
by it...

SB: Yes. Edward O. Wilson, the biologist, is an example. When he
first came out with his theories on sociobiology, based on his work on
insect behavior, a lot of the liberals attacked him, because he
contradicted their current beliefs. And he was willing to change and
modify his views on the basis of argument and new information. A Noam
Chomsky, on the other hand, tends to be more overbearing and has hurt
his field of linguistics with heavy-handedness. In the sociobiology
instance,
by the way, the liberals were probably as much wrong as right, not
that they're likely to admit it.

FI: Speaking of trying to bring in new information, to what extent do
you think that new information technology such as hypertext, or other
things such as you describe in The Media Lab, can improve the quality
of debate?

SB: It would be interesting to do an article on what is sometimes
called "grey literature"--papers informally passed among
scientists--discussing how that's progressed over the years. First it
was just the exchange of letters among scientists, eventually
formalized by the Royal Society, then it took a jump in the level of
traffic with the arrival of the typewriter and carbon paper. The
arrival of Xerox copying caused another major jump in traffic.
Computer networks, starting with the ARPANET, caused another major
jump. Maybe we're at a virtual hypertext level now.

FI: I think the difference between what we have now and what
hypermedia is intended to be is the ability to screen the material.
With the mass of material we are now beginning to have, there has to
be some way of indicating which material is worthwhile.

SB: A lot of that'll be automatable, and I also expect that there will
be a lot more humans editing material. A library is far more useful
with a good librarian.

FI: There has been some material on nanotechnology printed in Whole
Earth Review. What kind of interest have you noticed from the WER
readers?

SB: The two populations which have shown interest are computer
enthusiasts and the major corporations. I have spent some time in the
past few years among major corporations, and they have a lot of
interest in what the future has in store for them. Computer
enthusiasts have a strong interest in it as wish fulfillment, while
the corporate person is asking "what will this mean to my company?"

FI: What about the people primarily interested in environmental
issues?

SB: They've been blind, deaf and dumb on the issue, as far as I can
see.

FI: When you look at the degree to which an anti-technological
viewpoint is entrenched in some people, I don't see it going away
quickly.

SB: I'm not sure you want it to go away quickly. Nanotechnology is the
sort of thing which could take off exponentially, and could result in
a lot of change happening very rapidly, things changing more rapidly
than people can adapt. The no-sayers can help flatten that curve, make
it arithmetical rather than exponential; of course, they want to see
it stopped altogether. No-sayers have their place. I wouldn't want to
see them go away.

The Alaska pipeline is an example--the first proposal was strongly
criticized by environmentalists; they said that it would wipe out the
caribou, and so on. They were right in that it was a lousy pipeline
design. But it was a bad pipeline design that was improved by delay,
and by the pressure to go back and re-think the proposal. It's useful
to have no-sayers, to slow up the process. But at the end you did have
a pipeline, and it didn't do the terrible things they thought it
would. So no-sayers have a role, even if they aren't always
reasonable. Sometimes it's useful to have unreasonable people.

Also remember that a good many environmentalists are highly
reasonable, and can be extremely astute on technical issues. Beware of
characterizing comments; they invite reply in kind, such as that all
nanotechnologists are unstable Mensoid nerds. Anyway, both reason and
unreason have value in the big picture.

FI: Doesn't that depend on whether you have a political and social
system that can take people who are hard-over nos and have the result
be a compromise, rather than giving them a veto over things?

SB: There's a danger of change increasing exponentially. I don't think
it's a matter of vetoes; I think that they end up just acting as a
kind of brake.

As far as how the U.S. political system works, I think it's worth
reading Jerome Wiesner's article in the January Scientific American.
Look at what's happened with the Science Advisory Council, which was
set up by Eisenhower as a response to Sputnik, and gave good advice to
him and to Kennedy, but was reduced to ineffectiveness under Nixon and
since then. The Challenger accident showed that correct technical
information was not filtering up through the Cabinet agencies to the
President. Perhaps if the Science Adviser's office had been functioning
properly, that information might have gotten through.

FI: In fact, NASA has been waging a very strong and usually
successful war against any other independent source of thinking on
space in the Executive Branch. That's what's happened to the White
House Office of Science and Technology Policy, for example.

SB: Yes, but they couldn't control what was being said or done in the
Soviet Union, or Europe, or elsewhere.

FI: Or even in the American private sector.

SB: Or even there.

****************************************************************

by Mark Gubrud

One of the goals of the Foresight Institute is to stimulate debate on
the public policy consequences of advanced technologies such as
nanotechnology.  This essay will start off the discussion on military
applications of nanotechnology.  The essays in this series are the opinions
of the authors and not necessarily those of FI.

Editor

When we contemplate the application of nanotechnology to weapons we
find virtually unlimited room for fantasy. A number of clichés have
arisen in the nanotech community: omnivorous robot locusts,
omnipresent surveillance gnats, microbes targeted for genocide, mind
control devices, and so on. But what makes good science fiction does
not necessarily make an effective tool of combat.

Will nanotechnology make nuclear weapons obsolete? Perhaps in peace,
but not in war. Nuclear energy will remain preeminent in total war,
for at least three reasons. First, it is infinitely lethal; chemical
bonds cannot resist nuclear energy. Second, it is cheap, and
nanotechnology will make it cheaper. Third, and most important, it is
quick; the bomb goes bang and that's it, end of discussion.

Nanotechnology might seem to make SDI's Rube Goldberg schemes
workable, but space weapons will only create a final front. The
principle of preemption--getting in the first blow, and aiming for a
knockout--is an ancient and essentially unalterable fact of military
life. Missiles are now targeted on missiles. And in a war involving
space weapons, the first strike will be in space.

Battles with first-generation, bulk technology space weapons will
already be so swift that we will have to trust a machine to decide
when to start shooting. Nanotechnology could produce huge numbers of
such weapons, and also nuclear and chemical explosive-driven
directed-energy weapons that will reduce the decision time practically
to zero, below even what a computer can cope with.

We see it most clearly in space, but on every front the speed and
numbers of today's high-tech and tomorrow's nanotech weaponry collapse
decision time and undermine the basis of mutual deterrence. One does
not have to calculate that a first strike will succeed, one has only to
fear that the other side may try it, perhaps as some conflict
escalates or as some situation gets out of control. Preparing to
attack is not generally distinct from preparing to defend or deter;
defenses are needed against retaliation, and second strikes may aim at
the same targets as first strikes. As in World War I, motion may be a
slippery slope leading inexorably to war. Today that instability is
mitigated by the gap between the time scale of crisis and combat and
that of production and deployment. Nanotechnology will reduce and
eventually eliminate this margin of safety.

Replicating assemblers could be used at any time to initiate an arms
buildup, one that could reach fantastic proportions in the time frame
of historical military crises. The buildup would be exponential, and
traditional order-of-battle correlations would still apply, so it
would seem that whoever initiated the buildup (assuming equal
technologies) would have supremacy--not falling behind would be a
security imperative. Finally, the strike time compression of massively
proliferated and lightspeed weaponry would undermine mutual deterrence at
the brink. These are the basic characteristics of the nanotechnic era
that combine to make it militarily as different from the present as
the present is from the pre-nuclear era. The difference is that no level
of armament will be even metastable, not even complete disarmament.
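
The arithmetic behind "whoever initiated the buildup would have
supremacy" is worth spelling out. The sketch below is only an
illustration with made-up round numbers (the one-hour doubling time and
twelve-hour head start are assumptions, not estimates), but it shows the
essential point: under equal technology, an exponential buildup turns
any head start into a constant multiplicative advantage that the later
starter can never close.

    # Illustrative arithmetic only; doubling_time and head_start are
    # hypothetical round numbers, not estimates of real capabilities.
    doubling_time = 1.0   # hours per doubling of assembler-based output
    head_start = 12.0     # hours by which side A begins its buildup earlier

    def capacity(hours_since_b_started, extra=0.0):
        # Relative arsenal size after a given time, for a side that began
        # its buildup 'extra' hours before side B did.
        return 2.0 ** ((hours_since_b_started + extra) / doubling_time)

    for t in (0, 6, 12, 24, 48):
        ratio = capacity(t, head_start) / capacity(t)
        print("after %2d hours: side A ahead by a factor of %.0f" % (t, ratio))

    # The ratio is always 2 ** (head_start / doubling_time), about 4096
    # with these numbers, no matter how long the buildup runs--which is
    # why not falling behind at the outset becomes a security imperative.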

Perhaps nuclear disarmament and major conventional disarmament will be
achieved, but each proud, independent nation would still retain its
vestigial military--including one nano-supercomputer, busily planning
rearmament and war. Then one day a dispute could arise, and quickly
develop into an awesome, nuclear-powered, nanotechnic struggle for the
control of territory and matter. Large-scale space deployment would
not change the essence of this situation.

We cannot depend on the balance of terror to hold the peace, for even
if there is ultimately no defense against nuclear weapons, especially
not in space, there may still be temporary shelter in dispersal and/or
underground. Deep tunnels and closed-cycle life support systems can
provide a redoubt for entire populations, while their machines
struggle for control of the open land, sea, air, and space and to
penetrate the enemy's shelters.

Nano/nuclear war could be a drawn-out struggle, and the victor would
have means to clean up the mess and to remake the world. Or so it
might seem. But in practice, hot war would probably break out before
anyone was ready for it. There would be no assurance of destruction to
hold back the first strike; rather, there would be great pressure to
preempt, since the outcome might be decided in the first few
microseconds. One could not afford to concede land, sea, air and space
without a fight, despite the inevitable vulnerability of
predeployments in these environments. On the other hand, a
well-prepared, long war of attrition, with decentralized and versatile
assembler-based production, might kill everyone before one regime
could neutralize all the others.

The challenge of the nuclear era has been to limit arms and to resolve
disputes between armed sovereign states without recourse to war. The
challenge of the nanotechnic era will be to abolish the armed
sovereign state system altogether; otherwise military logic will
always point toward fast rearmament and then to war. In the near term,
the challenge will be to avoid star wars and a new Cold War. To
governments, nanotechnology will suggest power, and power is dangerous
in a divided and militarized world. For the world as a whole,
nanotechnology will mean change, and even slow change has often been
amplified by the world's complex and discontinuous system to produce
violent results.

To prevent such results, our development of nanotechnology must be
fully open, international, and accompanied by a rising worldwide
awareness of its significance and earnest planning for swift,
necessary, and unavoidable change in economic and security
arrangements. Any leading force must include all potential
nanotechnology powers, which does include the USSR at least! And it
must lead, not force. In answer to the question of the military uses
of nanotechnology: it must never have any at all.

Mark Gubrud is a policy intern at the Federation of American
Scientists. Your responses to his comments are welcome.

+---------------------------------------------------------------------+
|  Copyright (c) 1988 The Foresight Institute.  All rights reserved.  |
|  The Foresight Institute is a non-profit organization:  Donations   |
|  are tax-deductible in the United States as permitted by law.       |
|  To receive the Update and Background publications in paper form,   |
|  send a donation of twenty-five dollars or more to:                 |
|    The Foresight Institute, Department U                            |
|    P.O. Box 61058                                                   |
|    Palo Alto, CA 94306 USA                                          |
+---------------------------------------------------------------------+

jac@paul.rutgers.edu (Jonathan A. Chandross) (07/01/89)

josh@planchet.rutgers.edu [from Update #5]

> January and February saw a number of nanotechnology-related events:
> MIT's annual symposium (see report elsewhere in this issue), a lecture
> at Bell Communications Research (a spinoff of Bell Labs) on Jan. 13,

Bell Communications Research (BCR) is not a ``spinoff of Bell Labs.''
When the Bell System was broken up by the Justice Department, BCR was
created to perform functions for the operating companies that had
previously been handled by Bell Labs.  It is a totally separate
organization.  The
chart now looks like:

Before:          AT&T + Bell Labs
                        /  \
                       /    \
                      /      \
                     /        \
After:           AT&T          RBOCs (Regional Bell Operating Companies)
                   +             +
                Bell Labs      Bellcore


Jonathan A. Chandross
Internet: jac@paul.rutgers.edu
UUCP: rutgers!paul.rutgers.edu!jac

[Indeed, in a recent speech I heard the head of Bellcore claim it was bigger
 (now) than Bell Labs.  However, I'm sure the reference was intended merely
 to identify what kind of organization Bellcore is, in as few words as possible.
 --JoSH]