[comp.virus] Universal Virus Detector

Leichter-Jerry@CS.YALE.EDU (02/01/90)

David Chess asks how my hardware timestamp forms a universal virus
detector.  He misses the point of my message.  I wasn't trying to
define a good user interface to such a system; I was only sketching
out how the hardware might work.

Any creation or modification of executable code on a system is either
desired or undesired.  You first of all have to be able to distinguish
between those two possibilities.  The distinction is based on the
intent of the operator, so is not amenable to mathematical argument as
such.

While it may sometimes be difficult to decide exactly what category
some transitions fall into, in many cases I can be definitive.  In
particular, it is almost always the case that no existing
executable should be modified, ever.  All my existing executables can
be checked by comparing their timestamps with known-correct values.
Think of this as a very cheap, absolutely unforgeable checksum.

More generally, any time I am certain my system is "clean" I can
generate and save on a secure medium a list of all timestamps on my
disk.  Any time later, I can generate a new list and compare.  It is
then up to me to decide whether any differences that show up are
legitimate - but I have the absolute assurance that I WILL get an
indication of any changes.
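In modern terms, the snapshot-and-compare scheme looks roughly like
this (a minimal Python sketch, purely illustrative; ordinary file
metadata stands in for the unforgeable hardware timestamps, which a
normal file system cannot actually provide):

```python
import os

def snapshot(paths):
    """Record (size, mtime) for each file -- a software stand-in for
    the hardware timestamp described above."""
    return {p: (os.path.getsize(p), os.path.getmtime(p)) for p in paths}

def compare(baseline, current):
    """Return the files whose recorded state differs from the
    baseline, plus any files that have disappeared."""
    changed = [p for p in current if baseline.get(p) != current[p]]
    missing = [p for p in baseline if p not in current]
    return changed + missing
```

The baseline dictionary plays the role of the list saved on a secure
medium; judging whether a reported difference is legitimate remains,
as stated, up to the operator.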

BTW, if you really want to build such hardware you can easily go
further, in several ways.  For example, you can add a
hardware-enforced switch which when in the OFF position makes it
impossible to set the "is executable" bit at all.  In this mode, you
can't do program development, install new executables, or even copy
executable files - but you absolutely can't be infected either.  The
vast majority of systems could probably spend most of their time with
the switch in this position.

Another alternative is to add another bit, the "may create
executables" bit.  Only code running from a block marked with this bit
may turn on the "executable" bit for another block.  Normally, only
the linker and an image copier would have this bit set.  A virus could
still be written - but it couldn't modify existing code directly, it
would have to produce object code and pass it through the linker.
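The rule can be modeled abstractly as follows (an illustrative Python
sketch; the names `Block` and `set_executable` are my inventions, and
real enforcement would of course live in hardware, not in code):

```python
class Block:
    """A memory/disk block with the two proposed hardware bits."""
    def __init__(self, executable=False, may_create=False):
        self.executable = executable
        self.may_create = may_create  # the "may create executables" bit

def set_executable(caller: Block, target: Block) -> None:
    """Only code running from a block with the may-create bit may
    turn on the executable bit for another block."""
    if not caller.may_create:
        raise PermissionError("caller lacks the 'may create executables' bit")
    target.executable = True
```

Here only something like the linker would be given `may_create=True`;
any other program attempting the operation is refused.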

There are certainly some fundamental issues in dealing with viruses,
but most of the PRACTICAL issues are the direct result of PRACTICAL
hardware and software design decisions.
							-- Jerry

CHESS@YKTVMV.BITNET (David.M..Chess) (02/01/90)

>        Any time later, I can generate a new list and compare.  It is
> then up to me to decide whether any differences that show up are
> legitimate - but I have the absolute assurance that I WILL get an
> indication of any changes.

Sure, and that's certainly a good first step.  But I still claim that
it isn't by any means a universal virus detector, and would not solve
the virus problem, because the thing that is "up to you" is just too
hard.  The system can tell you that only files that you expect to have
changed have changed, but it *can't* tell you that they've changed
only in innocent ways.  That's one of the largest problems of virus
protection; the system can't in general tell, and certainly can't tell
down below the "which file was changed" level, which modifications to
the executable-set were intended by the user, and which were not.  A
system like this might catch any viruses that we know of today; on the
other hand, if it became widespread, viruses that it would not catch
(or, more accurately, that a human using it would not catch) would
shortly appear.

> Another alternative is to add another bit, the "may create
> executables" bit.  Only code running from a block marked with this bit
> may turn on the "executable" bit for another block.  Normally, only
> the linker and an image copier would have this bit set.  A virus could
> still be written - but it couldn't modify existing code directly, it
> would have to produce object code and pass it through the linker.

Or it could create the object that it wanted, and call the copy
utility.  Or is it impossible for a program to copy a non-executable
thing to an executable thing?  That would help a little, but would
also make the system less convenient to use.  How do you get a new
copy of the linker?  How do you write a patch program?

Don't get me wrong: I think these are all good ideas for future,
more virus-hardened systems.   I just want to point out that,
even if implemented perfectly, they don't make the problem go
away...

DC

peter@ficc.uu.net (Peter da Silva) (02/03/90)

> For example, you can add a
> hardware-enforced switch which when in the OFF position makes it
> impossible to set the "is executable" bit at all.

So far so good.

> In this mode, you
> can't do program development, install new executables, or even copy
> executable files -

Pretty much so.

> but you absolutely can't be infected either.

Not true. What constitutes an "executable file"? Is a BASIC program one?
You can write a virus in BASIC. How about PostScript? You can hide a
virus in PostScript. You can't turn off your BASIC or PostScript
interpreters...

This is the basic sort of protection used by old Burroughs computers: only
the compilers could create executable files, and they were trusted programs.
They had no memory protection hardware at all.
--
 _--_|\  Peter da Silva. +1 713 274 5180. <peter@ficc.uu.net>.
/      \
\_.--._/ Xenix Support -- it's not just a job, it's an adventure!
      v  "Have you hugged your wolf today?" `-_-'

crocker@TIS.COM (02/11/90)

Robert Eachus explained quite lucidly why there is no possibility of
building a universal virus checker WHICH PERFECTLY DISTINGUISHES
BETWEEN VIRUSES AND NON-VIRUSES [emphasis mine].  As with most
theoretically intractable problems, a slight change in the question
leads to remarkably different results.  For example, it's entirely
feasible to build a virus checker which errs on the safe side and
throws out some good programs as well as all bad programs.  Whether
this is useful depends on how many good programs it throws out.  At
the extreme, you can postulate it throwing out ALL programs.  This is,
of course, the easiest filter to build, but also the least useful,
i.e. completely useless.  A more interesting challenge is whether you
can build a checker that permits a usefully large set of good
programs to be executed while excluding all bad programs.   A related
question is whether it's possible to define programming standards which
facilitate the checking process.  If such standards existed, the
burden of proving that a program is virus-free would fall back on the
writer of the program.  Programs not meeting the criteria would be
treated the same as virus-laden programs and prohibited from execution.

Maria Pozzo is working in this area, and she and I published a paper
at the IEEE Symposium on Privacy and Security last year.  I also
posted a description of the basic ideas some time ago.  (Perhaps the
editor would be kind enough to supply the volume and number?)

woody@chinacat.Unicom.COM (Woody Baker @ Eagle Signal) (04/09/90)

Don't forget to check for RAM shadowed BIOS and modifications to the
bios.

Cheers
Woody

CHESS@YKTVMV.BITNET (David.M..Chess) (04/09/90)

jmolini@nasamail.nasa.gov (JAMES E. MOLINI) writes:
> If you have questions, or see a flaw in the process, please let us
> know.  We are building a virus detector, which could be placed into the
> public domain, that uses the techniques below to detect virus
> infections.  Our initial tests have shown encouraging results. ...

These comments are based on the abstract only, not the paper
(I'll eventually figure out how to FTP from here...).

Modification detectors seem like a promising way of detecting at
least some "new" (not seen before) viruses.   The usual problems
faced by a modification detector include:

 1) A virus that knows about the detector (or about a class
    of them to which it belongs) might make changes that the
    detector won't detect.
 2) A similarly detector-aware virus might update the detector's
    database to reflect the changes it makes when it infects.
 3) A virus might (by luck or design) modify files in such a way
    that the user, presented with a list of files that have
    changed, will not notice anything wrong.
 4) If the virus is active in the system when the detector runs,
    it could lie to the detector about the state of the system.

A simple CRC approach runs into point (1): if lots of people start
using your detector, and it always uses the same CRC polynomial,
it's not all that hard for the virus to include code that patches
infected objects so that the CRC is the same as it was before
infection; a CRC isn't hard to invert.   My favorite solution to
this is to allow the user to specify his own polynomial (through
the use of a "key phrase" or whatever); other solutions also
exist (crypto-based MDC's and such).  I gather from the abstract
that the exact scheme used isn't fixed by the proposal; that's
a reasonable approach.
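A user-keyed CRC of the kind suggested might look like this (an
illustrative Python sketch; deriving the polynomial from the key
phrase via a hash is my assumption, not something the abstract
specifies):

```python
import hashlib

def keyed_crc32(data: bytes, keyphrase: str) -> int:
    """Bitwise (reflected) CRC-32 whose polynomial is derived from a
    user-chosen key phrase, so there is no single well-known
    polynomial for a virus author to target.  The SHA-256 derivation
    is an illustrative assumption."""
    poly = int.from_bytes(hashlib.sha256(keyphrase.encode()).digest()[:4], "big")
    poly |= 0x80000001  # force a full-degree, invertible polynomial
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (poly if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF
```

Two users with different key phrases get different check values for
the same file, so a patch that preserves one user's CRC will, in
general, not preserve another's.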

I gather that your point (f)
> f.  In order to prevent a virus from attacking the CRC table, we will
>     add a set of dynamic "State Vectors" for the machine, which define
>     the run time environment for the detector.  This creates an
>     unforgeable "fingerprint" of the detector as it exists in memory
>     and can be prepended to each file prior to computing the CRC.

is supposed to deal with my point (2), but I don't really
understand it.  If it's possible for the detector to update the
database (and it must be, when the user gets new pieces of
software and so on), then it's possible for a virus to as well,
if the database is ever r/w to the system while the virus is
active.

(3) is one of the harder problems, I think; in some of the
environments that are most important to protect (program
development environments, for instance), many executables
will be expected to change.   Helping the user figure out
which changes are OK and which are not is something that
needs considerable thinking about, I think.  Doing it
perfectly is probably impossible (a good reason to avoid
calling anything a "universal" virus detector...).

Most of the abstract seems to be devoted to (4); making sure
the virus isn't lurking anywhere when the detector runs.
This is the general computer-security problem of getting the
system into a trusted state; I tend to think that the
problem needs to be solved at the system level rather than
the application level (that is, there should be a good
wired-in procedure for getting the system into a trusted state,
rather than making every security application program do
it itself).   I doubt that any piece of software in DOS can
really determine that the system is trustworthy; checking
interrupt vectors doesn't tell you anything about the code
they're pointing to, for instance.  Painful as it is, the
only method I know of that I trust is booting cold from a
trusted floppy disk.

Sounds like an interesting project, though, and I -will- try
to get the full paper...

DC

rwallace@vax1.tcd.ie (04/09/90)

jmolini@nasamail.nasa.gov (JAMES E. MOLINI) writes:
> I am working with a colleague on defining a robust virus detection
> utility.  The following is an extended abstract of a paper which
> discusses an approach we are investigating.  The work was undertaken as
> part of a research project sponsored by the National Aeronautics &
> Space Administration at the Johnson Space Center.  Please look it over
> and tell us (or Virus-L) what you think.

This is, I think, the fourth serious attempt on this newsgroup to propose a
universal virus detector. Unfortunately, like all the rest, it won't work.

        (theoretical UVD discussion)

> So to put our theoretical UVD into practice, on, for example, an IBM
> PC, we would do the following:
>
> a.  Begin by validating the integrity of the detector code.  This has
>     been discussed above. [not included in abstract]

How? I haven't copied your entire posting in this followup because it was too
long but I couldn't see any proposed method for validating the detector code.
And an obvious way to defeat your mechanism is to overwrite the detector
program with code that always says "OK".

        ...

> f.  In order to prevent a virus from attacking the CRC table, we will
>     add a set of dynamic "State Vectors" for the machine, which define
>     the run time environment for the detector.  This creates an
>     unforgeable "fingerprint" of the detector as it exists in memory
>     and can be prepended to each file prior to computing the CRC.

What do you mean? Another obvious way to defeat the detector is to recalculate
CRCs for infected programs and put the new CRC value into the table. I don't
see any way to prevent this other than storing the table offline (which would
create what most users would consider unacceptable hassle).

Also, because your detector checks the system call vectors, it would flag
most resident programs, as well as multiuser systems and upgraded versions
of the operating system, as viruses.

"To summarize the summary of the summary: people are a problem"
Russell Wallace, Trinity College, Dublin
rwallace@vax1.tcd.ie

alpope@skids.Eng.Sun.COM (Alan L. Pope) (04/11/90)

A Universal Virus Detector?  Go reread Goedel's Incompleteness Theorem.
				Alan Pope <alpope@Sun.COM>