peter@ficc.uu.net (Peter da Silva) (06/08/89)
In article <19930@adm.BRL.MIL>, bzs@bu-cs.bu.edu (Barry Shein) writes:
> Will someone explain to me exactly how usernames and passwords and
> file protections (a not unknown form of security) will protect against
> computer viruses??

Thirty-fifteen.

I guess it's time for this again. I originally posted this before the
Internet Worm scare.

The Usenet virus: a case history. A cautionary tale.

The Usenet virus was detected when a user discovered that a program he
had received from the net seemed to have two versions of malloc included
with the source. One version of malloc might be odd, but people have
never tired of reinventing the wheel. Two versions were suspicious,
particularly since they led to a name conflict when the program was
linked.

The first, lmalloc.c, seemed to be identical to the malloc listed in
Kernighan and Ritchie. The second, bmalloc.c, was rather strange, so we
concentrated our efforts on it... this time was later found to have been
wasted. After a little work during spare moments over the course of a
week, we decided it was actually a clumsy implementation of the buddy
system (a fast but space-inefficient method of memory allocation). It
might make a good example of how not to write readable code in some
textbook, but it wasn't anything to get worried about.

Back to the first. It made use of a routine named speedhack() that was
called before sbrk() the first time malloc() was called. There was a
file speedhack.c, but it didn't contain any code at all, just a comment
saying that it would be implemented in a future version. After some
further digging, speedhack was found at the end of main.c. The name was
disguised by some clever #defines, so it never showed up in tags and
couldn't be found just by grepping the source.

This program turned out to be a slow virus. When it was run, it looked
for the file lmalloc.c. If it found it, or if it didn't find a Makefile,
it returned, and from then on malloc ran normally.
If it didn't find lmalloc.c, it reconstructed the file using a series of
other routines with innocuous names tagged onto the ends of other files,
apparently an attempt to avoid overly increasing the size of any one
file in the directory. Then it went into Makefile or makefile (it looked
for both) and added lmalloc.o onto the end of the first list of '.o'
files it found. It then reconstructed each of the extra routines, and
speedhack itself, using techniques familiar to any reader of the
obfuscated 'C' contest. These were tagged onto the ends of the '.c'
files that corresponded to the '.o' files in that same list. The program
was now primed to reconstruct the virus.

On inspection, we discovered that about 40% of the sources on our system
were infected by the speedhack virus. We also found it in one set of
shell archives that we'd received but never unpacked or used, which we
took as evidence that it had spread to a number of other systems. We
have no idea how our system was infected. Given the frequency with which
we make modifications and updates, it's likely that the original
speedhacked code is no longer on the system.

We urge you to inspect your programs for this virus in an attempt to
track it to its source. It almost slipped by us... if the author had
actually put a dummy speedhack in speedhack.c, we would merely have
taken lmalloc.o out of the Makefile and defused *this* copy of the virus
without being any the wiser.

There are other failings in this program that we have thought of. We
have decided not to describe them, to avoid giving the author of this
program ideas we might regret. Some ways that programs like this can be
defeated include CRC checks of source files and, of course, careful
examination of sources received from insecure sites.

-----

Now I have to make a confession. This whole document is a hoax, intended
to dramatize the problems involved with viruses and Usenet. I suspect
that most of you were clued in to this by the Keywords line.
While playing with the idea and writing this article, several things
occurred to me.

First of all, this virus is a much more complex program than any of the
viruses that have been spotted on personal computers. I think it has to
be, based on the design goals that a *real* UNIX virus must satisfy:

  - It must be small, to avoid detection.
  - It must not cause files to grow without bound.
  - It must infect foreign files, otherwise it's not a virus... just a
    Trojan horse (like the bogus ARC and FLAG programs on the PC).
    Trojan horses are a dime a dozen.
  - It must infect source files, since this is the primary software
    distribution channel for UNIX. A virus stuck on one machine is a
    boring one.
  - It must not break the infected program (other than whatever it might
    care to do deliberately).
  - It must not be obvious from a simple examination of the source
    (like changing main to Main and having a virus-main call Main).

I believe that, given these goals (which are, of course, subject to
debate), no simpler program would be successful in infecting more than a
small fraction of the machines that (say) comp.sources.misc reaches.

There are systems immune to this particular attack, of course: ones not
running UNIX, so sbrk() doesn't work; ones with radically different
versions of malloc(); ones with no C compiler. They are in the minority,
though. On the other hand, a virus of this type could infest a large
proportion of the net before it was found.

The virus I described does not cause any direct damage, except for using
up a relatively small amount of disk space. A more vicious virus is
possible.

Other variations of this virus are obviously possible. For example, it
could be tagged onto any standard C library routine... I chose malloc
merely because source was available and because it's something people
complain about, so they wouldn't be likely to find an extra copy
suspicious. Another good candidate would be perror(), for the same
reason.
This would have the additional benefit of making the spread of the
infection dependent on an additional random factor, making the virus
harder to detect.

Do I think something like this is likely? Well, I'm sure that eventually
someone will try something like it. I suspect their virus would get
caught much sooner than in this story, because I think more people look
at the source than conventional wisdom would lead you to believe.
-- 
Peter da Silva, Xenix Support, Ferranti International Controls Corporation.
Business: uunet.uu.net!ficc!peter, peter@ficc.uu.net, +1 713 274 5180.
Personal: ...!texbell!sugar!peter, peter@sugar.hackercorp.com.
jfh@rpp386.Dallas.TX.US (John F. Haugh II) (06/10/89)
In article <4457@ficc.uu.net> peter@ficc.uu.net (Peter da Silva) writes:
>In article <19930@adm.BRL.MIL>, bzs@bu-cs.bu.edu (Barry Shein) writes:
>> Will someone explain to me exactly how usernames and passwords and
>> file protections (a not unknown form of security) will protect against
>> computer viruses??
>
>Thirty-fifteen.
>
>I guess it's time for this again. I originally posted this before the
>Internet Worm Scare.

Anyone interested in a really good paper on trojan horses and trust
should read Ken Thompson's Turing Award lecture, "Reflections on
Trusting Trust". Ken creates a scenario in which the C compiler and
login are in cahoots to create a security hole which only he [ and
dmr ;-) ] are aware of.

It ends with some very sound advice - eventually a secure OS comes down
to trusting the people who wrote the code. I don't think GNU will ever
produce a trusted OS for exactly this reason - who is going to trust
people such as Stallman, who believes security is something big
companies use to steal from the average Joe?
-- 
John F. Haugh II                       +-Button of the Week Club:-------------
VoiceNet: (512) 832-8832  Data: -8835  | "AIX is a three letter word,
InterNet: jfh@rpp386.Cactus.Org        |  and it's BLUE."
UucpNet : <backbone>!bigtex!rpp386!jfh +--------------------------------------
alo@kampi.hut.fi (Antti Louko) (06/10/89)
In article <16655@rpp386.Dallas.TX.US> jfh@rpp386.cactus.org (John F. Haugh II) writes:
>Anyone interested in a really good paper on trojan horses and trust
>should read Ken Thompson's Turing Award presentation.
>Ken creates a scenario in which the C compiler and login are in
>cahoots to create this security hole which only he [ and dmr ;-) ]
>are aware of.
>It ends with some very sound advice - eventually a secure OS comes
>down to trusting the people who wrote the code. I don't think GNU
>will ever produce a trusted OS for exactly this reason - who is
>going to trust people such as Stallman who believes security is
>something big companies use to steal from the average Joe?

Actually, it comes down to trusting the people who COMPILED the code. If
you don't use the bootstrapping binaries that come with sources, you are
much safer.

Can we trust any of those big companies either? Or be sure that they
have never had any saboteur programmers working on the OS you are
buying? Besides, big companies usually don't give you the source code
for their systems; at least some of the pieces are missing. With GNU you
can compile everything from sources. First you compile GCC with a
different compiler, of course.

With GNU you will have sources without any license agreements. You don't
even have to tell anyone that you desperately NEED the sources! I
believe many high-security facilities will find GNU more suitable than
proprietary systems.

Antti Louko (alo@hut.fi)
Helsinki University of Technology
mike@thor.acc.stolaf.edu (Mike Haertel) (06/11/89)
In article <16655@rpp386.Dallas.TX.US> jfh@rpp386.cactus.org (John F. Haugh II) writes:
>It ends with some very sound advice - eventually a secure OS comes
>down to trusting the people who wrote the code. I don't think GNU
>will ever produce a trusted OS for exactly this reason - who is
>going to trust people such as Stallman who believes security is
>something big companies use to steal from the average Joe?

You do Richard a great disservice in this assumption. It is doubtful
that he will want to do anything beyond traditional UNIX protection
mechanisms in GNU. However, if he were to announce that he intended to,
say, produce a secure system, I would have a great deal more faith in
him than I would have in software companies.

Who is going to trust big companies, which are interested in getting a
product to market sooner than the competition? Who is going to trust big
companies, which are likely to keep problems secret to avoid marketing
losses rather than making fixes available in a timely and public
fashion? Who is going to trust organizations like the NSA, which just
*might* want to see people using systems with holes that only they know
about? Remember the DES controversy.

The only system you can trust is the one you design, build, and program
yourself, from the chips on up. (And then only if you really know what
you are doing - there are many nonobvious traps for the unwary; just
look at all the dumb things done by authors of setuid programs in UNIX.)

Incidentally, does anyone know if Ken Thompson's proposed compiler hack
was ever implemented?
-- 
Mike Haertel <mike@stolaf.edu>
``There's nothing remarkable about it.  All one has to do is hit the
right keys at the right time and the instrument plays itself.''
-- J. S. Bach
leo@philmds.UUCP (Leo de Wit) (06/14/89)
In article <22709@santra.UUCP> alo@kampi.hut.fi (Antti Louko) writes:
[]
|Can we trust any of those big companies either? Or that they have
|never had any saboteur programmers working with the OS you are buying.
|Besides, big companies usually don't give you the source code for
|their systems. At least some of the pieces are missing. With GNU you
|can compile everything from sources. First you compile the GCC with a
|different compiler, of course.

Which compiler may well create a backdoor in your new GCC compiler...

    Leo.
flint@gistdev.UUCP (06/14/89)
Having the sources to the compiler won't help much: the person who wrote
the backdoor can have it sitting right there in the code and you
probably won't know it. (Yes, you could take the time to figure out what
the code is doing, for every line of the code - but who is going to do
that?) If the author didn't comment the code, even when it was written
with no intent to hide what it is doing, it can take days to figure out
what something is really doing. If someone really wanted to put in a
backdoor and hide it, it would likely go unnoticed for a long, long
time. The people who get that code are just going to use it until they
bump into a bug, and only then will they go poking around in the code to
figure out what the bug is: if the bug isn't in the same place as the
backdoor, the backdoor won't be found.

If you really want security, you need to pay somebody (not the code
author) to actually look at every line of code and figure out what it
does, and let them know there is a big bonus in it for finding a
security problem. Of course, you'll also have to make sure that the
person who wrote the assembler didn't put in a backdoor, and that the
person who built the hardware didn't either.

Flint Pellett, Global Information Systems Technology, Inc.
1800 Woodfield Drive, Savoy, IL  61874     (217) 352-1165
INTERNET: flint%gistdev@uxc.cso.uiuc.edu
UUCP:     {uunet,pur-ee,convex}!uiucuxc!gistdev!flint
kempf@tci.UUCP (Cory Kempf) (06/17/89)
In article <8800020@gistdev> flint@gistdev.UUCP writes:
>
>Having the sources to the compiler won't help much: the person who wrote
>the backdoor can have it sitting right there in the code and you probably
>won't know it. (Yes, if you take the time to figure out what the code is
>doing, for every line of the code, but who is going to do that?

There is a paper titled "Reflections on Trusting Trust" - Ken Thompson's
Turing Award lecture (the award went jointly to Thompson and Dennis
Ritchie). He brings up some interesting ideas about how to insert a bug
into a compiler... As a quick summary: even going through the source
line by line won't help, because the bug can live only in the compiler
binary, which reinserts it every time the clean source is recompiled.

I highly recommend the paper.

+C
-- 
Cory Kempf				Technology Concepts
40 Tall Pine Dr.	uucp: {anywhere}!uunet!tci!kempf, kempf@tci.uu.net
Sudbury MA 01776	phone: (508) 443-7311 x341
DISCLAIMER: TCI is not responsible for my opinions, nor I for theirs