[news.admin] Author's Reliability

pete@octopus.UUCP (Pete Holzmann) (07/03/88)

In article <11589@agate.BERKELEY.EDU> weemba writes:
>>>One can, however, scan source code for inordinately complicated monkey-
>>>shines, comments that don't appear to match code, etc.
>>>I cannot do this with *any* "short little" binaries.
>>
>I wrote:
>>I certainly can! The equivalent to quickly scanning a source program,
>>is to try out a binary in a controlled environment.

>That's not much of an equivalent. [He can do non-trivial quick scans of
>short <10K source files]

Well, this discussion could go on for some time. There are plenty of
PC-based tools for binary analysis that can be quickly run over
reasonable-sized programs (and slowly run over big programs): string
searches, automatic disassemblers that produce comments for anything
that touches the external environment (system memory, I/O ports,
interrupts, system calls), etc. One of the nice things about having
millions of hardware-compatible systems is that we've got automatic
tools to do what must be done by 'hand' in Unix. [BIG disclaimer: there
are many things that Unix has that DOS can't touch. I don't want to
imply that I'm knocking Unix!]
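
As a crude illustration of the simplest kind of scan I mean (intscan.c
is a made-up example, not any of the real tools), something like the
following walks a DOS .COM image, reports every INT instruction, and
flags the DOS system calls:

/* intscan.c -- toy sketch, not a real analysis tool: report every
 * INT opcode (0xCD) in a DOS .COM image and the interrupt number it
 * invokes.  Real disassemblers do far more, and 0xCD bytes that are
 * really data will show up here as false positives.
 */
#include <stdio.h>

int main(int argc, char *argv[])
{
    FILE *fp;
    int c, n;
    long off = 0L;

    if (argc != 2 || (fp = fopen(argv[1], "rb")) == NULL) {
        fprintf(stderr, "usage: intscan file.com\n");
        return 1;
    }
    while ((c = getc(fp)) != EOF) {
        if (c == 0xCD && (n = getc(fp)) != EOF) {
            /* a .COM program loads at offset 100h in its segment */
            printf("INT %02Xh at %04lXh%s\n", n, off + 0x100L,
                   n == 0x21 ? "  (DOS system call)" : "");
            off++;                  /* account for the operand byte */
        }
        off++;
    }
    fclose(fp);
    return 0;
}

A real tool would disassemble properly instead of pattern-matching raw
bytes, but the idea is the same: anything that reaches for the outside
world gets flagged for a human to look at.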

>No!!  If Billy Binary compiles his binary with an infected compiler, a
>virus could spread via Usenet, without Billy's knowledge or intent.  En-
>crypting won't protect him.  What happens with source is that the risk of
>infection from bad compilers rests squarely with the end user.

If we follow a rule of "Author's reliability", then it is Billy Binary's
worry whether or not his development tools are reliable. If we must go
so far as to have real worries about our development tools, then yes, a
reliable binary posting is more worrisome than a reliable source
posting.

On the other hand, the most probable problem a user is going to see is a
*bug* in the program. Bugs tend to be independent of the distribution
method (source or binary) [actually, binary would tend to be better here,
since a tested binary produced with known-good tools may be better than
one produced with lesser tools in the hands of the user]. And bugs
point back to the author.

I think that, when all is said and done, what we need to settle on is
some kind of author's reliability/responsibility policy. Such a policy
is practical, workable, and reasonable, and it should free us from a lot
of worries!

Pete
-- 
  OOO   __| ___      Peter Holzmann, Octopus Enterprises
 OOOOOOO___/ _______ USPS: 19611 La Mar Court, Cupertino, CA 95014
  OOOOO \___/        UUCP: {hpda,pyramid}!octopus!pete
___| \_____          Phone: 408/996-7746

karl@sugar.UUCP (Karl Lehenbauer) (07/09/88)

In article <279@octopus.UUCP>, pete@octopus.UUCP (Pete Holzmann) writes:
> ...There are plenty of
> PC-based tools for binary analysis that can be quickly run over
> reasonable-sized programs (and slowly run over big programs): string
> searches, automatic disassemblers that produce comments for anything
> that touches the external environment (system memory, I/O ports,
> interrupts, system calls), etc. One of the nice things about having
> millions of hardware-compatible systems is that we've got automatic
> tools to do what must be done by 'hand' in Unix.

Well, I assume automatic disassemblers blow off stuff they don't
understand and just .DATA it or whatever as binary data.  It would be no
problem for a Trojan Horse to decrypt the portion of itself that actually
trashes your system when it has decided that the time is right.  That
way, string searches and code that looks for anything that touches system
memory, I/O ports, interrupts, system calls, etc., will fail to locate
the Trojan.  More clever variations can be envisioned in which the
encrypted part, or the code that generates the code to do the trashing,
appears to be something useful, or at least seems too complicated to
bother deciphering.
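
To make the point concrete, here is a toy sketch (hideme.c is a made-up
name, and the "payload" is nothing worse than a string that gets
printed).  The interesting bytes sit in the binary XORed with a key, so
neither a string search nor a dump of the file on disk will show them,
and a two-line loop at run time is all it takes to get them back:

/* hideme.c -- toy sketch of the evasion described above; the only
 * "payload" is a string that gets printed.  Because the bytes are
 * stored XORed with KEY, the plaintext never appears in the binary
 * on disk, so a string search or a naive scan finds nothing.
 */
#include <stdio.h>

#define KEY 0x5A

/* "DEL *.*" encoded byte-by-byte -- looks like any other data */
static unsigned char hidden[] = {
    'D' ^ KEY, 'E' ^ KEY, 'L' ^ KEY, ' ' ^ KEY,
    '*' ^ KEY, '.' ^ KEY, '*' ^ KEY, '\0' ^ KEY
};

int main(void)
{
    unsigned char buf[sizeof hidden];
    size_t i;

    for (i = 0; i < sizeof hidden; i++)   /* trivial run-time decode */
        buf[i] = hidden[i] ^ KEY;

    printf("decoded at run time: %s\n", (char *) buf);  /* only print it */
    return 0;
}

The decoding loop itself looks perfectly innocent, which is exactly the
problem: a scanner would have to understand what the data *becomes*, not
just what it *is*.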

-- 
-- uunet!sugar!karl
-- These may be the official opinions of Hackercorp -- I'll have to ask Peter.