[comp.virus] Testing viruses - was Re: Network World Article

PHYS169@csc.canterbury.ac.nz (Mark Aitchison) (06/06/91)

padgett%tccslr.dnet@mmc.com (A. Padgett Peterson) writes:
>>From:    rtravsky@CORRAL.UWYO.EDU (Richard W Travsky)
>
>>An accompanying chart shows the percentage of detection by the packages
>>against 921 viruses...
>
> Just to provide "apples vs apples" tests, possibly in conjunction with
> the public domain viral list, we should make a stab at a weighted test
> (e.g. Jerusalem 1000 pts for detection, Pentagon 1 pt.) if we can come
> up with a probability function for infection it would certainly be
> better than "We can detect 900 viruses".

I agree that many virus tests are a bit irrelevant, and could be
improved by weightings such as that suggested. But the tests also need
a component measuring the product's ability to spot new viruses (and
the scanners should carry out at least some simple generic checks for
the presence of a virus - e.g. top of memory reduced, interrupt 21h
redirected, etc.)
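The weighted scheme Padgett suggests could be sketched like this (a
toy example in Python; the virus names are real enough, but the point
values are entirely made up, in the spirit of "Jerusalem 1000 pts,
Pentagon 1 pt"):

```python
# Prevalence-weighted detection score (all point values hypothetical).
# Detecting one common virus counts for more than detecting hundreds
# of lab-only ones.
WEIGHTS = {
    "Jerusalem": 1000,
    "Stoned": 800,
    "Cascade": 400,
    "Pentagon": 1,
}

def weighted_score(detected):
    """Score as a percentage: weights of the detected viruses
    divided by the total possible weight."""
    total = sum(WEIGHTS.values())
    got = sum(w for name, w in WEIGHTS.items() if name in detected)
    return 100.0 * got / total

# A scanner that misses only the rare Pentagon virus still scores
# nearly 100%, while one detecting only Pentagon scores almost nothing.
print(weighted_score({"Jerusalem", "Stoned", "Cascade"}))
print(weighted_score({"Pentagon"}))
```

The point is that a raw "we detect 900 viruses" count treats both of
the scanners above alike, while the weighted score does not.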

Therefore I suggest that whoever is collecting new viruses hold some
back (ones that have not been seen in the wild, and probably won't
be), and make them available to a-v testers, but not a-v writers.
This is because, in the lifetime of an anti-virus product, there are
bound to be some new viruses released that aren't in its scan list.
And my definition of the lifetime of the a-v software isn't "until the
next version is made", but "until the average user gets around to
updating their copy". So those that make updated scan lists available
conveniently, cheaply and often should get a better score.

Also, the convenience of using the product should be taken into
account: programs that take a long time to run, or in other ways
discourage the user from running them often, should get lower scores.
In effect, what we should see is the probability that, with this
product, you will stay free from viruses.
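One rough way to get at such a probability (a toy model in Python;
every encounter and detection figure below is hypothetical, and real
infection probabilities would be much harder to estimate):

```python
# Toy model: you stay clean unless you meet a virus AND the scanner
# misses it, so the overall chance of staying virus-free is a product
# over the viruses you might encounter.

def p_virus_free(viruses):
    """viruses: list of (p_encounter, p_detect) pairs.
    Returns the probability of remaining uninfected."""
    p = 1.0
    for p_enc, p_det in viruses:
        p *= 1.0 - p_enc * (1.0 - p_det)
    return p

# Hypothetical figures: a 30%-likely common virus caught 99% of the
# time, plus a 0.1%-likely new virus the scanner misses entirely.
print(p_virus_free([(0.30, 0.99), (0.001, 0.0)]))
```

Under a model like this, slow or inconvenient products could be
penalised simply by raising the encounter probabilities, since a
scanner that isn't run often effectively misses more.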

(Personally, I'd like to see a more detailed analysis, including the
effect of using several products together - which, I think, most
sensible people do - but the majority of potential users of a-v
software haven't got the time to go into those details, and I doubt
many a-v testers have the time/resources to produce such detailed
reports, unfortunately.)

Mark Aitchison.