[sci.nanotech] Update 11: Viruses

josh@cs.rutgers.edu (04/20/91)

+---------------------------------------------------------------------+
|  The following material is reprinted *with permission* from the     |
|  Foresight Update No 11, 4/15/91.                                   |
|  Copyright (c) 1991 The Foresight Institute.  All rights reserved.  |
+---------------------------------------------------------------------+

Are Viruses Inevitable?
by Norman Hardy

Computers are increasingly important in our daily lives: more and more
products and activities throughout society depend on computers working
as programmed. A major question arises: how reliable is the software
running on these computers, and how much can its reliability be
improved? Is it possible to protect computer operations from outside
tampering, or are they intrinsically vulnerable to attack by software
'viruses,' which copy themselves from machine to machine?

A paper by William Dowling (ref. 1) published last fall touched off a
flurry of media coverage on this question, in which the answer seemed
to be "SorryQdamage by computer viruses canUt be prevented, even in
theory." Under the headline "Eternal Plague: Computer Viruses," the
paper was summarized by the prestigious journal Science: "Short of
total isolation, there is no way to protect a computer against all
possible viral attacks" (ref. 2). The popular press gave even stronger
interpretations.

In fact, what Dowling showed was more limited and does not rule out
the possibility of secure systems. As Science pointed out later in the
same article: "What is futile, Dowling's work shows, is to look for a
single 'magic bullet' that will eradicate all conceivable computer
viruses." This does not warrant pessimism, because there are other
approaches to dealing with the problem.

Stupid, Brute-Force Methods

Dowling shows that no single program can correctly identify all
viruses unless the operating system is unalterable (ref. 2). Operating
systems can, of course, be made unalterable. A simple but effective
approach would be to store the operating system in read-only memory,
which no software can alter. Indeed, one could store not just the
operating system but all programs in read-only memory. Such a computer
could process incoming data without becoming infected. It could be
reprogrammed only by physically swapping memory chips, but it would be
secure from viruses entering over data transmission lines.
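
As a concrete illustration, here is a minimal sketch of this
brute-force idea: a toy machine in Python, not any real architecture,
with all names invented for the example. The code store is immutable,
so no instruction, however malicious, can alter the program:

    class ReadOnlyFault(Exception):
        """Raised when software attempts to write the code region."""

    class ToyMachine:
        def __init__(self, rom_image, ram_size=32):
            self.rom = tuple(rom_image)  # the "memory chip": immutable code
            self.ram = [0] * ram_size    # ordinary writable data memory

        def store(self, addr, value):
            # Writes below the ROM boundary are refused outright, so
            # incoming data can be processed but can never change the
            # program; reprogramming means building a new machine.
            if addr < len(self.rom):
                raise ReadOnlyFault("write to ROM address %d" % addr)
            self.ram[addr - len(self.rom)] = value

    m = ToyMachine(rom_image=[7, 7, 7])
    m.store(10, 42)                      # fine: address 10 is in RAM
    try:
        m.store(1, 0)                    # attempts to overwrite the program
    except ReadOnlyFault as fault:
        print("blocked:", fault)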

Filtering Out Risk

Programs are available today to search for viruses, but these programs
can only recognize members of some specific set of known viruses.
While Dowling showed that it is not possible to classify every
program as definitely safe or definitely unsafe, such a classification
is not required for the task of accepting only safe programs. One need
only sort programs into two categories: (1) definitely safe, and (2)
possibly unsafe. A program which could reject all viruses, while
accepting some (or even most) safe programs, has not been ruled out.

About twenty years ago L. Peter Deutsch sent me a program that would
examine another program and accept it or reject it. An accepted
program was sure to terminate in a known time and to store data only
within a pre-specified area of memory. Not all programs that met these
restrictions would be accepted. Indeed, accepted programs had to
conform to rigid rules, but these rules allowed certain useful
programs.
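
To show what such a checker might look like, here is a hedged sketch
in Python (not Deutsch's actual program, whose rules are not
reproduced here; the instruction set is invented). Because the toy
language has no jumps at all, an accepted program must halt after one
pass, and a simple scan verifies that every write lands inside the
permitted region:

    # Toy instructions: ("set", addr, value) or ("add", addr, value).
    def accepts(program, allowed_addresses):
        """Accept a program only if it is provably safe.

        Deliberately overly strict: any loop, jump, or unknown
        instruction is rejected, so some safe programs fail the
        test, but no accepted program can run long or stray.
        """
        for instruction in program:
            if len(instruction) != 3:
                return False
            op, addr, value = instruction
            if op not in ("set", "add"):
                return False              # unknown operation: reject
            if addr not in allowed_addresses:
                return False              # write outside the fence: reject
        return True   # straight-line code: halts in len(program) steps

    print(accepts([("set", 0, 7), ("add", 1, 3)], range(4)))  # True
    print(accepts([("set", 99, 1)], range(4)))                # False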

This early work shows the basic point: by being overly
strict--rejecting some safe programs as well as the risky ones--we
could in principle filter out all risky programs. That this is true is
easily seen by taking an extreme example: suppose the filter screened
out all risky programs by accepting only those exactly matching a
short list of known safe programs. This would be very crude, but
effective. Dowling's work shows that even the optimal screening
algorithm would still screen out some safe programs, but this may be a
small price to pay for a secure system.
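
The extreme filter just described is easy to state in code. A minimal
sketch follows; the list of safe programs is illustrative, and a
cryptographic hash merely stands in for byte-for-byte comparison:

    import hashlib

    # The short list of known safe programs, stored as hashes of
    # their exact bytes; anything not on the list is treated as
    # possibly unsafe and rejected.
    KNOWN_SAFE = {
        hashlib.sha256(b"print('hello')\n").hexdigest(),
    }

    def definitely_safe(program_bytes):
        return hashlib.sha256(program_bytes).hexdigest() in KNOWN_SAFE

    print(definitely_safe(b"print('hello')\n"))    # True: exact match
    print(definitely_safe(b"do_anything_else()"))  # False: rejected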

Today's Methods

Dowling goes on to argue that most real operating systems are
necessarily vulnerable to some virus because they reside in writable
memory. Indeed, most popular personal computers suffer this weakness
today. More fully developed operating systems, however, use hardware
memory protection features that have been widely available since 1965
(note 3).
 
Such hardware distinguishes two modes: privileged and user. The
hardware limits which memory can be modified while in user mode. A
program may change these limits only when in privileged mode (note 4).
When the machine is initially turned on, it is in privileged mode, and
the first program the machine begins to obey is in a position, with
these modes, to protect itself and its data while it allows other,
untrusted programs to run in user mode. The machine reverts to
privileged mode and resumes obeying the original program upon any of
several events called interrupts. Attempts to violate the memory
limits cause an interrupt. Exceeding a time limit established in
privileged mode likewise causes an interrupt.

Operating systems (or kernels thereof) are designed to run in this
manner, as privileged code. An untrusted program can run efficiently
under the restraint of the operating system, with the nearly undivided
attention of the CPU (central processing unit), subject only to the
limitations that user mode imposes.

With memory limits, the operating system reserves to itself the memory
for its code and some more memory in which to remember its agenda. By
enforcing time limits, the operating system reserves some time for
itself to execute its policies.
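
The division of labor described above can be sketched in a few lines.
This toy supervisor (Python, with an invented instruction set; real
systems enforce these checks in hardware) runs untrusted code under a
memory limit and an instruction budget, and any violation raises an
"interrupt" that returns control to privileged code:

    class Interrupt(Exception):
        """Control returns to the supervisor on any of these events."""

    def run_user(program, memory, mem_limit, time_limit):
        # "User mode": every store is checked against the memory
        # limit, and an instruction counter enforces the time limit.
        for count, (op, addr, value) in enumerate(program, start=1):
            if count > time_limit:
                raise Interrupt("time limit exceeded")
            if not 0 <= addr < mem_limit:
                raise Interrupt("memory violation at address %d" % addr)
            if op == "set":
                memory[addr] = value

    memory = [0] * 64                   # supervisor data lives at 32..63
    hostile = [("set", 0, 1), ("set", 40, 1)]  # second store is illegal
    try:
        run_user(hostile, memory, mem_limit=32, time_limit=1000)
    except Interrupt as trap:
        # Back in "privileged mode": the supervisor's own memory was
        # never reachable, and it now decides what to do next.
        print("trapped:", trap)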

Not all operating systems have used these safety features, and not all
systems that did use them have maintained sufficient care to retain
control against clever attack. Even if the privileged code remains in
control, a virus has other points of attack. Nearly all operating
systems run a program at the request of a user with all of that
user's authority: the program automatically has as much authority as
the person running it. There may be ways for a user to run a program
while limiting its reach, but this is seldom convenient or known to
casual users. A virus in such a program is thus in a position to
modify the program in any file that the user could modify, and so to
propagate itself. Some users seldom run programs that can modify such
files. But in Unix there are several other kinds of files, such as
shell scripts, that are enough like programs to serve as hosts for
active viruses.

In most systems a program learns what input it is to process by first
learning the name of the file and then asking the operating system to
copy data from the file to its memory. The authority it uses to read
the file is the same authority the virus uses to infect other files.
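
In code, the conventional pattern looks like this (a Python sketch;
the file name is illustrative). Note that nothing about the interface
confines the program to the one file it was asked to read:

    def process(filename):
        # The program opens the file itself, using the full ambient
        # authority of the user who ran it; that same authority would
        # let a virus hidden in this code open and rewrite any other
        # file the user can modify.
        with open(filename) as f:
            return f.read().upper()

    # e.g. process("quarterly.txt")  # any name the user can read works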

A Better Approach

A newer type of operating system is the capability system. It uses the
principle behind the old saying "Good fences make good neighbors": if
you don't want an untrusted program messing up other programs, make
sure it doesn't have access to them. Rather than giving a program the
same level of authority as its user, such a system gives it only
enough to get its job done. This detailed, exact allocation can be
described as fine-grained authority: it separates functions with
stronger walls (i.e., fences) than earlier methods do.

When a program is initially set up, the user indicates which tools and
inputs it is permitted to access; it then has the required
capabilities with respect to these items. It has no ability to modify
other material, and so any associated virus is unable to spread.
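
A contrasting sketch follows (Python again, and only an analogy:
Python itself does not enforce the confinement, while a capability
system does so in hardware and kernel, and this is not KeyKOS code).
The program is handed an already-open file, a capability to that one
object, instead of a name it resolves with the user's full authority:

    import io

    def process(input_file):
        # This function holds a capability to exactly one readable
        # object. It has been handed no way to name or open anything
        # else, so a virus lodged here would find nothing to infect.
        return input_file.read().upper()

    # The caller, not the program, decides what it may reach; here
    # the "file" is an in-memory stand-in so the sketch runs as-is.
    print(process(io.StringIO("quarterly figures\n")))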

Very few operating systems today use the capability approach. One of
them, KeyKOS by Key Logic, is being evaluated by the U.S. government
for general environments requiring high levels of military security,
and has never been cracked.

Why Security Matters

Powerful future technologies, such as nanotechnology, will be
controlled by increasingly complex computational systems. Whether and
how they can be made secure from tampering is of critical importance.
For the reasons above, it appears that security is possible, with
sufficient care. We will need to understand what is possible in this
field if we are to cope successfully with the problems ahead.
Assertions that secure systems are impossible are false and
misleading.

Norman Hardy has been involved both with secure operating systems used
in commercial timesharing systems and with computer network security.
He cofounded and is a senior scientist at Key Logic, a company that
builds secure operating systems.

Notes:
1. Dowling, William F., "Computer Viruses: Diagonalization and Fixed
   Points," Notices of the American Mathematical Society, Vol. 37
   (1990), pp. 858-861.
2. Cipra, Barry, "Eternal Plague: Computer Viruses," Science, Vol.
   249, 21 September 1990, p. 1381.
3. The Motorola 68030 and Intel 80386 chips and their successors have
   memory protection suitable to these ends.
4. Control of I/O is also typically limited to privileged mode.

+---------------------------------------------------------------------+
|  Copyright (c) 1991 The Foresight Institute.  All rights reserved.  |
|  The Foresight Institute is a non-profit organization:  Donations   |
|  are tax-deductible in the United States as permitted by law.       |
|  To receive the Update and Background publications in paper form,   |
|  send a donation of twenty-five dollars or more to:                 |
|    The Foresight Institute, Department U                            |
|    P.O. Box 61058                                                   |
|    Palo Alto, CA 94306 USA                                          |
+---------------------------------------------------------------------+