[sci.nanotech] viruses, computer & bio

mckinney@cs.uiuc.edu (C.R. Mckinney) (07/22/89)

This is not a topic of current discussion, but I am curious
to see what people on this notesfile have to say about it.
I am not a computer virus expert or a biology expert, 
so please no flames...

I was having a discussion with a friend about computer viruses,
and he argued that they aren't really like biological viruses.
Several other people have expressed this view, that the term
"virus" is misleading.  Seems to me that it captures quite a few
of the critical qualities of computer viruses, and the analogy
holds up quite well, for several reasons:

* First, a bio-virus is one of the simplest ways that DNA has
  of replicating itself.  That is, if you view organisms as merely
  vehicles which DNA uses to replicate itself, then viruses
  represent the minimal means of doing so.  Likewise, computer
  viruses are programs whose primary task is to replicate
  themselves by attaching to other programs, just as bio-viruses
  attach to cells (see the toy sketch after this list).  Some
  computer viruses have code that helps to camouflage them, or
  keeps them dormant until a specified time.  Likewise, some
  bio-viruses have DNA that codes for traits that hide them or
  keep them dormant until conditions are right...

* Second, in most cases bio-viruses harm their hosts, and often
  kill them, but they allow them to live long enough to infect
  other hosts.  The same is true of computer viruses, which may or
  may not be intended to bring down the machine, or may do so as a
  "byproduct" of their replication, just as a host may die as a
  "byproduct" of the over-replication of a bio-virus.

* Third, there are "vaccines" for computer viruses, just as there
  are vaccines for bio-viruses.  Once the vaccine is administered,
  the virus is no longer a threat to that host (the sketch below
  includes a toy "vaccine" as well).
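
To make the analogy concrete, here is a toy simulation, written in
Python as a sketch only.  Everything in it is my own invention (the
marker string, the data layout, all the names); it is not modeled on
any real virus or any real anti-virus product.  Each "program" is a
record that may carry a signature marker; each infected program copies
the marker into one clean program per step; and the "vaccine" does
what scanner/inoculator programs do, i.e., searches for a known
signature and strips it out:

    import random

    SIGNATURE = "XYZZY"    # hypothetical marker standing in for viral code

    def make_programs(n):
        # a "program" is just a name plus a payload string
        return [{"name": "prog%d" % i, "payload": ""} for i in range(n)]

    def step(programs):
        # each infected program "attaches" the marker to one clean
        # program: the minimal replicate-by-attachment behavior
        clean = [p for p in programs if SIGNATURE not in p["payload"]]
        for host in [p for p in programs if SIGNATURE in p["payload"]]:
            if not clean:
                break
            target = random.choice(clean)
            target["payload"] += SIGNATURE
            clean.remove(target)

    def vaccinate(programs):
        # the "vaccine": scan for the known signature and strip it out
        cured = 0
        for p in programs:
            if SIGNATURE in p["payload"]:
                p["payload"] = p["payload"].replace(SIGNATURE, "")
                cured += 1
        return cured

    programs = make_programs(16)
    programs[0]["payload"] = SIGNATURE        # patient zero
    for t in range(5):
        step(programs)
        infected = sum(SIGNATURE in p["payload"] for p in programs)
        print("step %d: %d/16 infected" % (t, infected))
    print("vaccine cured %d programs" % vaccinate(programs))

Note how little machinery the replicator itself needs (the "minimal
means of replication" point above), and that the defense exploits the
same economy: a short, fixed signature is easy to scan for.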

In summary, the computer virus metaphor is very apt, and I don't know
why people want to criticize it.  I welcome your replies and comments.

--Randy McKinney
  Urbana, IL
  mckinney@m.cs.uiuc.edu

[exercise for the reader:  Why is it unlikely that nanotech will
 produce "bio" viruses that are worse than those that could be produced
 by existing (ie, gene-splicing) techniques?
 --JoSH]

ems%nanotech@princeton.edu (07/25/89)

[ C.R. Mckinney draws apt analogies between computer and
  biological viruses.]

>....In summary, the computer virus metaphor is very apt, and I don't know
>why people want to criticize it.  I welcome your replies and comments.
>
>--Randy McKinney
>  Urbana, IL
>  mckinney@m.cs.uiuc.edu

Probably the analogies are criticized to guard against a natural
human tendency to lump together things that are called by the same
name.  As parallelism makes further inroads into computer technology,
and as "parallel" computer viruses appear, it's likely that even more
analogies will be drawn.

Even so, it is good to remember that these viruses are covered by
two different scientific disciplines that we are used to thinking
of today as widely separated.  (No flames from biological chip
designers, please, but other mail is welcome :-)

>[exercise for the reader:  Why is it unlikely that nanotech will
> produce "bio" viruses that are worse than those that could be produced
> by existing (ie, gene-splicing) techniques?
> --JoSH]

May I answer this question?  I assume that by "worse" you meant
"more lethal".  One reason may be that every means of producing
illness or death in an organism has already been ferreted out by
chance during the long evolutionary battle.  Hence there is some gene
complex, already coded for each ill, available for gene splicing.
Nanoviri could, at best, only equal this lethality.  After all,
dead is dead.

But I don't think this is quite the entire picture.  One promise
of nanotechnology is the ability to make that nanovirus vastly
more *selective* in its targets, hence a better weapon.  One might
build an AI-based nanovirus that would spare only ardent capitalists,
for instance.  (Thereby giving new meaning to the phrase
"never volunteer" :-)

I'd hope that neither means of producing such a virus is ever
attempted.  It's more realistic, though, to assume that both
techniques *will* be tried by various groups of unethical
technologists.  After all, germ warfare labs do exist.  It's also
likely that gene-splicing, as the much more mature technology, has
already been used to create some new disease.  By a careful choice
of vector, even a splicing-derived virus could be made more
selective, although never to the degree of a nanotech virus.  If
someone told you that a blood-borne disease, lethal to drug addicts
and promiscuous persons, but *unable* to use the mosquito vector,
just arose naturally, would you believe them?  And lest we believe
we're safe just because most of us fall into neither category,
remember that a virus may mutate.  (OK, I finally admit I'm being a
little paranoid here :-)

This is part of the whole category of questions relating to the
unethical misuse of technology. Let me now suggest a "fix" that one
day, just *might* be possible thru nanotechnology. The leading force
might use their time advantage to design an artificial conscience,
and apply it to *everyone*, to modify behavior. The artificial
conscience would make it impossible for anyone to attempt to 
injure others using technology, sort of like an enforceable
Hippocratic Oath.  Sounds like an abhorrent restriction of freedom?
Well, just keep in mind that it may be the only practical means
of permitting us to explore powerful new technologies without
courting world disaster. Perhaps the artificial conscience 
would only need to be applied to those persons who desired to
actually learn the dangerous technologies. This would result in
a future where everyone would be forced (at their majority?) to 
choose between complete knowledge and complete freedom.

Ed Strong princeton!nanotech!ems

[The major advantage of a virus is that it hijacks the "construction
 machinery" of the host's cells.  Thus it must consist in large part 
 of host-compatible DNA.  Thus the putative advantages of novel 
 construction and/or coding methods would be inapplicable.  This
 is what I meant by my conundrum...

 I think your "conscience" mechanism has a great similarity to 
 Asimov's 3 Laws of Robotics.  A good starting place (Asimov is
 no dummy) but with some unsettling ultimate implications--read
 Asimov's later works where he follows some of them up (he's still
 no dummy...).
 --JoSH]

yamauchi@cs.rochester.edu (07/26/89)

In article <Jul.24.23.27.40.1989.19198@athos.rutgers.edu> ems%nanotech@princeton.edu writes:
>But I don't think this is quite the entire picture.  One promise
>of nanotechnology is the ability to make that nanovirus vastly
>more *selective* in its targets, hence a better weapon.  One might
>build an AI-based nanovirus that would spare only ardent capitalists,
>for instance.

I find the idea of a nanovirus that could read personalities very
unlikely, due to the inherent complexity of mapping from neural
activity in the brain to even abstract thoughts, much less political
inclinations.  Before we have the ability to design such a nanoagent,
we will probably need the ability to design minds from scratch -- and
*that* would have far greater implications (both promises and
problems) than mere biowarfare.

On the other hand, a much more tractable nanoweapon would be one which
could scan a person's genetic code.  The simplest variant might be
something like Frank Herbert's "The White Plague" -- a virus which
searches for XX or XY chromosomes and is lethal to only one sex.  A
more sophisticated version might scan gene patterns for race-specific
genotypes such as skin pigmentation and kill people of a certain
color.

The good news is that genocide is not really in the best interests of
any major power.  Even South Africa, arguably one of the most racist
high-tech nations, would not benefit from having all of the blacks in
the country die, since that would eliminate much of its manual labor
force.  On the other hand, the Holocaust wasn't particularly useful
for Nazi Germany in military terms, and if an anti-Jewish nanovirus
were developed, Syria or the PLO might not hesitate to use it.

>If someone told you that a blood-borne
>disease, lethal to drug addicts and promiscuous persons,
>but *unable* to use the mosquito vector, just arose naturally, 
>would you believe them?

How about the reverse?  What kinds of research could be done (and
probably are being done -- possibly in the US, probably in the USSR) to
turn HIV into a weapon?  You would need rapid onset and a highly
contagious vector (contact/water/air).  Perhaps a recombinant DNA
splice between HIV and some type of flu virus?  Is this or something
similar possible?  (If so, it's probably been done.)

>This is part of the whole category of questions relating to the
>unethical misuse of technology. Let me now suggest a "fix" that one
>day, just *might* be possible thru nanotechnology. The leading force
>might use their time advantage to design an artificial conscience,
>and apply it to *everyone*, to modify behavior. The artificial
>conscience would make it impossible for anyone to attempt to 
>injure others using technology, sort of like an enforceable
>Hippocratic Oath.

For the reasons I stated above, I find this extremely unlikely short
of a complete solution to both psychology and AI.  This would require
knowledge of how extremely high-level concepts such as "other people"
and "harm" are stored in extremely low-level neurological processes.
Furthermore, it requires knowing how to modify neurological structures
to achieve a very complex high-level behavior.  If we can do this, I
feel we will be able to design minds to our own specifications, and
when this happens we will need to deal with much more complex issues.

_______________________________________________________________________________

Brian Yamauchi				University of Rochester
yamauchi@cs.rochester.edu		Computer Science Department
_______________________________________________________________________________

ems%nanotech@princeton.edu (07/28/89)

>In article <Jul.24.23.27.40.1989.19198@athos.rutgers.edu> ems%nanotech@princeton.edu writes:
>>But I don't think this is quite the entire picture.  One promise
>>of nanotechnology is the ability to make that nanovirus vastly
>>more *selective* in its targets, hence a better weapon.  One might
>>build an AI-based nanovirus that would spare only ardent capitalists,
>>for instance.
>
>I find the idea of a nanovirus that could read personalities very
>unlikely, due to the inherent complexity of mapping from neural
>activity in the brain to even abstract thoughts, much less political
>inclinations.  Before we have the ability to design such a nanoagent,
>we will probably need the ability to design minds from scratch -- and
>*that* would have far greater implications (both promises and
>problems) than mere biowarfare.
>
Au contraire, mon frere :-)  You're trying to do the job by the most
direct route, which is probably also the toughest.  There is a much
simpler way the nanovirus can get a pretty good picture of how good
a capitalist you are (as well as much else).  It simply contacts its
compatriots, which have infiltrated your financial records (already
in convenient electronic form).  Now, if you fall outside certain
preset parameters, zzzzt!  No mind reading is required.

In fact, variations of this technique can yield a great deal of
selectivity without requiring "true" AI at all.  (Hmmm, should this
meme be discouraged?)

[ Much interesting thought on genocide, HIV elided...]

>>This is part of the whole category of questions relating to the
>>unethical misuse of technology. Let me now suggest a "fix" that one
>>day, just *might* be possible thru nanotechnology. The leading force
>>might use their time advantage to design an artificial conscience,
>>and apply it to *everyone*, to modify behavior....

>For the reasons I stated above, I find this extremely unlikely short
>of a complete solution to both psychology and AI.  This would require
>knowledge of how extremely high-level concepts such as "other people"
>and "harm" are stored in extremely low-level neurological processes.
>Furthermore, it requires knowing how to modify neurological structures
>to achieve a very complex high-level behavior.  If we can do this, I
>feel we will be able to design minds to our own specifications, and
>when this happens we will need to deal with much more complex issues.

I admit that designing an artificial conscience is much tougher than
the capitalist nanovirus described earlier. I would attempt it this
way: First, map the neuron structure of a human brain using nanotech.
Next, use the virtually unlimited amounts of CPU available thru
nanotech to simulate the interaction of a computer model of this
brain with a (simplified) model of an "outside world". (You can run
this simulation very fast, and also run many copies in parallel).
Finally, "all" you do is treat this brain model as a black box, 
and determine the set of outputs you want to avoid. Deciding when
the "outside world" has been harmed (by the brain model outputs)
could possibly be determined by examining increases in the
disorder of the "outside world" part of the simulation.

Sounds simple? It's not, but it does outline at least one
approach to the job that is more tractable than creating
complete theories of psychology & AI. (I started with CPUs having 
a whole 8K. After a while, I learned to never make my machines
do anything that they don't have to. :-)
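
Here is a toy sketch of that screening loop, in Python.  Every piece
is a stand-in of my own invention: the "world" is a string of cells,
"disorder" is the Shannon entropy of the cell distribution, and the
"brain model" is a random stub sitting where the nanotech-derived
neural simulation would go.  Only the shape of the loop matters:
propose, simulate on a copy, measure disorder, veto.

    import math
    import random
    from collections import Counter

    def disorder(world):
        # "disorder" = Shannon entropy of the distribution of cell values
        counts = Counter(world)
        n = len(world)
        h = -sum((c / n) * math.log2(c / n) for c in counts.values())
        return h + 0.0    # +0.0 normalizes IEEE negative zero for printing

    def apply_action(world, action):
        # an "action" scribbles one character into one cell of the world
        i, ch = action
        return world[:i] + ch + world[i + 1:]

    def brain_model(world):
        # stand-in for the simulated brain: proposes a random action
        return (random.randrange(len(world)), random.choice("ab"))

    def screened_step(world, threshold=0.1):
        # propose, simulate on a copy, measure, veto if disorder jumps
        action = brain_model(world)
        candidate = apply_action(world, action)
        if disorder(candidate) - disorder(world) > threshold:
            return world, "vetoed"      # output falls in the avoid-set
        return candidate, "allowed"

    world = "a" * 32                    # a perfectly ordered toy world
    for t in range(10):
        world, verdict = screened_step(world)
        print("step %d: %s  disorder=%.3f" % (t, verdict, disorder(world)))

The veto test looks only at outputs and their simulated consequences,
never at the model's internals.  Scaling the 32-cell toy up to a
molecular-level model of the real world is, of course, where all the
hard work hides.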

>Brian Yamauchi				University of Rochester
>yamauchi@cs.rochester.edu		Computer Science Department

Ed Strong	    			AT&T, Bell Labs
princeton!nanotech!ems			
		 And in < 1 month  ->	Princeton University
					Computer Science Department

[The simulation method for implementing the "conscience" seems unlikely
 to work, primarily because (remember) the thing we're trying to protect
 against is mischief done by complex nanosystems.  I find it difficult
 to believe that an nth-generation system could simulate (a) the designer's
 thoughts, (b) the nth-generation CAD system, (c) the n+1st generation
 system being built, and (d) enough of the real world, almost necessarily
 at the molecular level, to detect craftily laid schemes.
 Though offhand, I can't think of a better one...
 --JoSH]

ems%nanotech@princeton.edu (08/01/89)

>I admit that designing an artificial conscience is much tougher than
>the capitalist nanovirus described earlier. I would attempt it this
>way: First, map the neuron structure of a human brain using nanotech.
>Next, use the virtually unlimited amounts of CPU available thru
>nanotech to simulate the interaction of a computer model of this
>brain with a (simplified) model of an "outside world". (You can run
>this simulation very fast, and also run many copies in parallel).
>Finally, "all" you do is treat this brain model as a black box, 
>and determine the set of outputs you want to avoid. Deciding when
>the "outside world" has been harmed (by the brain model outputs)
>could possibly be determined by examining increases in the
>disorder of the "outside world" part of the simulation.
>
>Sounds simple? It's not, but it does outline at least one
>approach to the job that is more tractable than creating
>complete theories of psychology & AI. (I started with CPUs having 
>a whole 8K. After a while, I learned to never make my machines
>do anything that they don't have to. :-)

[ Some signatures elided...]

>[The simulation method for implementing the "conscience" seems unlikely
> to work, primarily because (remember) the thing we're trying to protect
> against is mischief done by complex nanosystems.  I find it difficult
> to believe that an nth-generation system could simulate (a) the designer's
> thoughts, (b) the nth-generation CAD system, (c) the n+1st generation
> system being built, and (d) enough of the real world, almost necessarily
> at the molecular level, to detect craftily laid schemes.
> Though offhand, I can't think of a better one...
> --JoSH]

(Brace yourself! More facile hand-waving "explanations" ahead ...:-)

I think that by focusing the "conscience" on a person's *intent* to
commit technicide (killing/immoral acts via technology?), we could
come up with something to do the job.

Remember Burgess's "A Clockwork Orange"?  Alex, the protagonist, was
conditioned to become sick whenever he *thought* about violence.
(In actuality, his emotional state was monitored.  This prevented Alex
from outsmarting the conditioning just by dreaming up a new form of
violence.)  The techniques used were crude but effective.  (Of course,
the rest of the society he was in was *not* conditioned, but that's
another story.)

Using our nanosimulations, we could do this in a vastly improved
form, winding up with exactly the behavior controls that we want.  By
controlling those negative impulses from the onset of nanotechnology,
we could ensure that those killer nanoviri never get built.  (This type
of conditioning might not work on someone who was insane from the
start, but I think that serious insanity would probably be apparent
anyway.)

By the way, I'm not advocating making people sick whenever they
think the dangerous thoughts.  Something gentler could be just as
effective.  Perhaps the subjects would be conditioned to undergo
a pseudo-"religious" experience, controlled hallucinations along
with emotional overtones, that would effectively steer them off
the wrong track.  The simulations would help the leading force
design the best techniques.

However much I've sugar-coated it, I'm still talking about some
form of "mind control". I've outlined a reasonable approach to
the technical question of "how". A more important question is
whether the leading force should undertake this task at all. 
If, for instance, the leading force is not completely honest, and
leaves back doors for themselves in the conditioning, the world 
might be worse off than before.  

Ed Strong   	email: att!mtuxo!ems1 
		or {princeton,mccc,attmail}!nanotech!ems

[Remember that in Clockwork Orange, "violence" is defined to the
 victim by example, i.e., by showing him violent films.  However, when
 any more complex chain of reasoning is involved, people exhibit
 an amazing capacity to deceive themselves about the reasons and
 consequences of their actions.  I think you are going to wind
 up needing something so complex that you may as well discard the
 people and let your machine do whatever it is you were going to
 control them into doing.  As Eric wrote, a nanotech totalitarian
 state would probably discard us rather than enslaving us.

 I feel that if people are to remain anything like recognizably
 human, the answer lies more along the path of increasing their
 ability to withstand accidents, than of decreasing their ability
 to have them.  
--JoSH]

jwi@lzfme.att.com (Jim Winer @ AT&T, Middletown, NJ) (08/02/89)

The reference lines are a mess, but somebody writes:
| |
| ||But I don't think this is quite the entire picture.  One promise
| ||of nanotechnology is the ability to make that nanovirus vastly
| ||more *selective* in its targets, hence a better weapon.  One might
| ||build an AI-based nanovirus that would spare only ardent capitalists,
| ||for instance.
| |
| |I find the idea of a nanovirus that could read personalities very
| |unlikely,...

| Au contraire, mon frere :-) ... There is a much
| simpler way the nanovirus can get a pretty good picture of how good
| a capitalist you are (as well as much else).  It simply contacts its
| compatriots, which have infiltrated your financial records, ...

It's actually far easier to create a virus that spares only ardent
capitalists, and far more likely -- just create something extremely
deadly, extremely contagious, and extremely expensive to cure. Then
let it infect everyone. Only the ardent capitalists who can afford
the cure will survive.

In fact, this seems a likely scenario in the near future. (It's
certainly one effective way for the wealthy to solve the
overpopulation and pollution problems.)

Jim Winer ..!lzfme!jwi (Please don't email, unable to reply outside AT&T)

Those persons who advocate censorship offend my religion.

Upuaut:	a wolf-headed Egyptian deity | Voodoo: the art of sticking ideas
	assigned as Guidance System  |         into people and watching
	for the Barque of Ra.        |         them bleed.

The opinions expressed here are not necessarily  

[If I were wealthy, capitalist or not, and I wanted to wipe out any section
 of the population, the *last* thing I would do would be something that
 required me to shell out for an expensive cure.  Besides the expense, 
 people might catch on, become envious, and take my money by force.
 The obvious way to go about killing off the human race is to do it 
 selectively, a small unpopular group at a time, so that most of the 
 people spend most of their effort saying "thank god I'm not one of
 *them*" until it's too late.
 Of course, it's not the rich who want to upset the applecart anyway--
 after all, they're on top now.  It's usually the poor who feel they
 would benefit from major upheavals (not that they do...).
 --JoSH]

pmb@swituc.uucp (Pat Berry) (08/05/89)

> In article <Jul.24.23.27.40.1989.19198@athos.rutgers.edu> ems%nanotech@princeton.edu writes:
> >day, just *might* be possible thru nanotechnology. The leading force
> >might use their time advantage to design an artificial conscience,
> >and apply it to *everyone*, to modify behavior. The artificial

And who is going to play God and decide what my conscience is to consider
right and wrong?  What if this decision-maker happens to admire Hitler?
(or any of an innumerable list of individual-specific "evils")

No, leave me out of your mass conscience... I prefer to find my own way
to Nirvana.

Pat Berry

-- 
Pat Berry KN7B
pmb%swituc.uucp@arizona.edu
KN7B @ WB7TLS.AZ packet radio