[sci.nanotech] But are they safe?

blowfish@triton.unm.edu (rON.) (03/13/91)

Imagine a scene in the near future:
A congressional committee room:

Chairman: Sen. P.K. Barrell now has the floor.
Sen. Barrell: Thank you, Mr. Chairman. Now, Mr. Fish, you say these little
nanorobits...
Me: Nanobots, sir.
Sen. Barrell: Er, ahm, yes, whatever, these little nanobots, will reproduce
themselves in the process of creating a larger object?
Me: Yes, sir, that is essentially it.
Sen. Barrell: Well, then, Mr. Fish, you also say that there is a mutation
level connected with these nanobits, er nanobots.
Me: Yes, sir.
Sen. Barrell: Can you explain, please?
Me: Yes. Essentially, mutations are small changes in the structure, and
sometimes the performance, of the units in question. Mutation is a necessary
fact of life, Senator; even your own cells mutate.
Sen. Barrell: Can these mutations be controlled?
Me: Not really. The level can be increased by outside influences, but it
can never be eliminated completely.
Sen. Barrell: Then what you are saying, Mr. Fish, is that you have no control
over what these things become?
Me: Well, not exactly, sir, we can stop processes known to be mutated beyond
useful applications.
Sen. Barrell: So if a situation arises where you are unable to detect a change
in the behavior of the nanobots, you would be unable to control them.
Me: Mutations are a necessary part of the growth/evolution cycle.
Sen. Barrell: Like cancer is, Mr. Fish? I'm sorry, but you have not convinced
me that these nanobots are controllable, and what cannot be controlled cannot
be considered safe. I am going to vote against continued funding of your
nanotech research.....

While the above scenario might seem fanciful, anyone out there who has done
any serious research knows how tenuous funding can be, and how deadly serious
a business it is trying to convince people who do not understand a technology
that its advances are indeed safe and useful.
The whole thing brings up a few questions in the areas of mutation and safety:

1) Can nanotechnology, once started, be safely controlled, or is there
a danger of 'missing nanobots', sort of like hidden viruses, ready to
flare up into life when the right conditions exist? (I know, viruses are
non >self< replicating beasties, but there are examples of organisms that
are self-replicating only under certain conditions.)

2) Can a nanobot be designed to 'self-destruct' if it mutates beyond a certain
set of parameters? And how do you guard against these parameters being
mutated themselves?

3) Given that a bunch of nanobots has the potential to evolve into a sort of
self-awareness (the grey goo scenario, I believe), can you come up with
a plausible argument that such a created 'life-form' is safe? Just look
at the flak the biological engineering people are taking over the
creation of new strains of bacteria. And you want to possibly create a
self-aware form of life?

Sorry if these questions are a re-hash of old issues, but I've just started
reading this group with some interest in the field, and these questions
came to mind....


rON. (blowfish@triton.unm.edu!ariel.unm.edu)
"It is only with the heart that one can see rightly;
 what is essential is invisible to the eye."

[Having nanotech research controlled by senators of the IQ you have
 depicted means it's sure to be screwed up.  I hope a better way (such
 as idea futures) can be found.
 As has hopefully been explained in recent articles here, it is not 
 terribly hard to design machines that just can't mutate in the commonly
 used sense of the word.  Whether mutable nanobots will be built *on
 purpose* is another matter entirely.
 --JoSH]

opus@triton.unm.edu (UseCondomsFight AIDS) (03/25/91)

In article <Mar.12.18.34.19.1991.2545@athos.rutgers.edu> blowfish@triton.unm.edu (rON.) writes:
[Discussion of governmental funding of nanotechnology research]

More and more funding is coming out of the private sector for technologies
which may become money producing ventures... Does anyone reading know the
break down for where most of the research money is coming from?

--------------------------------------------------------------------------------
Institute for Combat Arms and Tactics - System programmer
MIDCO - Stereotactic Neurosurgery - System programmer
opus@triton.unm.edu
jkray@bootes.unm.edu
--------------------------------------------------------------------------------

forbis@milton.u.washington.edu (Gary Forbis) (03/27/91)

I'm wondering what error rate is being hypothesised here.  How many units
are people thinking about?  Might the codes necessary to keep the probability
of undetected error low enough slow down the machines so much as to make
them unviable solutions to the problems for which they are being considered?
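For a rough sense of scale: the expected count of errors slipping past the
code is just (machines) x (per-machine-hour slip probability) x (hours). The
probabilities below are made up for illustration, not measurements.

```python
# Back-of-envelope sketch with assumed numbers.  p is the hypothetical
# per-machine-hour probability of an error slipping past the detection
# code; the expected count of undetected errors is then just N * p * t.

def expected_undetected(n_machines, p_per_hour, hours):
    return n_machines * p_per_hour * hours

fleet = 5e15    # machines (a million per person; the figure used below)
year = 8766     # hours in a year
for p in (1e-20, 1e-25, 1e-30):
    n = expected_undetected(fleet, p, year)
    print(f"p = {p:g}: about {n:.2e} undetected errors per year")
```

At 5*10^15 machines, even a slip probability of 10^-20 per machine-hour
leaves an expected undetected error every couple of years; the open question
is whether a code cheap enough to check at nanomachine speeds can be that
strong.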

I see that CDs are now being advertised with eight-times oversampling, and
this is just for audio applications.  The space shuttle uses five flight
computers: four run the same program, with semaphores to indicate proper
functioning, and the fifth runs an independently created program and serves
as an arbitrator.  Even with this, at least one flight was delayed in the
final seconds because of a program malfunction.  I think that in some cases
some of the computers were shut off during reentry.  The SpaceLab had (I
believe) seven synchronized gyros, yet it had hardly been placed in orbit
before they started to fail.  One would think that if there were a way to
prevent these errors it would have been taken in these cases.
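The shuttle's four-computers-plus-arbiter arrangement is a form of N-modular
redundancy. A toy majority-vote sketch (a simplification for illustration,
not the shuttle's actual voting protocol):

```python
from collections import Counter

# Minimal sketch of N-modular redundancy: run the same computation on
# several (simulated) units and accept the majority answer.  A single
# faulty unit is outvoted; enough identical faults defeat the vote.

def voted_result(replicas, x):
    """Return the strict-majority output of the replicas, else None."""
    outputs = [f(x) for f in replicas]
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(replicas) // 2 else None

good = lambda x: x * x
bad = lambda x: x * x + 1    # a unit with a stuck-at fault

print(voted_result([good, good, bad], 7))   # 49: the faulty unit is outvoted
print(voted_result([good, bad, bad], 7))    # 50: the majority is now wrong
```

The second call shows the catch Gary is pointing at: voting only helps while
failures stay independent and rare.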

My first home computer was an Altair 8800; my second, a TRS-80 Model I.  The
first had a problem with the CPU, the second a memory problem.  The CPU failure
was pretty straightforward: it had a single-bit error in one of the registers.
The memory problem wasn't so easy; it had two cells welded together.  All of
the standard memory tests showed that the memory was good, yet some programs
failed in strange ways.  I found the problem by setting each memory location
in turn, then testing all of the other locations in the bank.  Please note
that any number of bits could have been welded together by a single short
or open in the address lines.

I don't get the feeling everybody is considering this with open eyes.
Assuming that everyone alive today had a million nanocomputers working
for them, this would be about 5*10^15 machines.  Is this number too high?
Even if most failures are hard failures, this is still about 10^19 machine
hours per year.  Even if each person only has a hundred computers, this is
still 10^15 machine hours per year.  I'm having a hard time believing this
many machine hours per year could be achieved without some soft failures.
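The arithmetic above checks out, roughly:

```python
# Checking the figures in the paragraph above (rounded, as in the post).
population = 5e9                 # roughly everyone alive today
machines = population * 1e6      # a million nanocomputers each
hours = 24 * 365                 # hours per year

print(f"machines:         {machines:.0e}")                    # 5e+15
print(f"machine hours/yr: {machines * hours:.1e}")            # 4.4e+19
print(f"at 100 each:      {population * 100 * hours:.1e}")    # 4.4e+15
```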

--gary forbis@u.washington.edu

[I had an Altair too; the memory didn't even have parity.  I imagine that
 nanocomputers will have oodles and gobs of soft errors; this comes with the 
 territory of sizes and speeds we're talking about, and from working in
 a domain where quantum effects are sure to raise their ugly heads. :^)
 That's why nanocomputers are virtually certain to have a major
 error-correcting component, and why coping with the probabilistic nature
 of the data will be a central design issue.
 --JoSH]
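(A toy illustration of the kind of error-correcting machinery JoSH mentions:
a Hamming(7,4) code, which corrects any single flipped bit in a 7-bit word.
This is a standard textbook construction, nothing nanotech-specific.)

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits, laid out as
# [p1, p2, d1, p3, d2, d3, d4] (parity bits at positions 1, 2, 4).

def encode(d):
    """d: list of 4 data bits -> 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4    # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4    # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4    # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit
    if syndrome:
        c = list(c)
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = encode(word)
code[4] ^= 1                  # a "soft error" flips one bit in flight
assert decode(code) == word   # the code recovers the original data
```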

news@elroy.jpl.nasa.gov (Usenet) (04/04/91)

In article <Mar.26.18.10.50.1991.3124@athos.rutgers.edu>, forbis@milton.u.washington.edu (Gary Forbis) writes:
> 
> I'm wondering what error rate is being hypothesised here.  How many units
> are people thinking about?  Might the codes necessary to keep the probability
> of undetected error low enough slow down the machines so much as to make
> them unviable solutions to the problems for which they are being considered?
[deleted]

But there are fault-tolerant models for us to study: for instance,
when was the last time your brain core dumped from a parity error?
Even on those occasions when the human brain shuts itself down as a
last resort (fainting), it usually reboots without tech support :-)

But you don't design something to work perfectly; you design it so
it has benign failure modes and fallback positions that you can
detect.  [Visions of a nanomachine-serviced human with a little
tiny voice inside screaming, "Let me out!"]  We never expected
Voyager to last as long as it has; it was built to be extremely
flexible and reprogrammable, and those factors have saved it from
numerous problems.

-- 
This is news.  This is your       |    Peter Scott, NASA/JPL/Caltech
brain on news.  Any questions?    |    (pjs@euclid.jpl.nasa.gov)