Human-Nets-Request%rutgers@brl-bmd.UUCP (Human-Nets-Request@rutgers) (11/15/83)
HUMAN-NETS Digest        Monday, 14 Nov 1983      Volume 6 : Issue 71

Today's Topics:
           Response to Query - Digesting Standards (4 msgs) &
                     Archiving Ephemeral Information,
              Computers and the Law - Computer Crime
----------------------------------------------------------------------

Date: Wed 9 Nov 83 01:10:45-PST
From: Mabry Tyson <Tyson@SRI-AI.ARPA>
Subject: Re: Mail Digests

When I was at the Univ. of Texas, I started undigestifying the digests
for posting on our local bboard.  Since coming out to SRI, I have kept
my code running at UT and had it mail the undigestified digests (and
some bboards from other sites) to SRI-AI so we can keep abreast of
what's going on.  (I would be willing to remail these undigestified
digests (in TOPS-20 type format) to one (semi-)official address per
site.)

Yes, there is a semi-standard for the digests.  I have code that is
usually quite accurate at breaking a digest into messages.  It almost
never misses a break it should have noticed, but it occasionally
separates one message into two.  I seem to remember that the MIT mail
reading program Babyl has a command for undigestifying digests.

Whether the standard should be changed to make undigestifying easier
may be argued.  But who is going to change the code that creates the
digest?  Are YOU (generic, not specifically the person whose message
I'm replying to) willing to?  No matter how you specify message
separators, I can send you a message with that separator in it.

I asked Ken Laws, who runs AILIST, if he'd be willing to send out
separate messages to sites that run some sort of bboard system (i.e.,
only one mailbox gets the messages and everyone reads that mailbox).
He pointed out several problems.  One was that he sometimes adds
comments to messages, which would be difficult to do if he remailed
each message directly.  Another was that he gets lots of rejected
messages back; if he sent the messages individually, he would get many
more.  There were some other problems too, but I have forgotten the
specifics.

(Actually, I guess you can send out a digest with an unambiguous way
to separate individual messages.  You can't do it with separators the
way it is currently done.  You need something like character counts to
specify how long each message is.  That will occasionally lose,
because some mail programs and FTP protocols will change your message
(remove nulls, add a LF after a CR, occasionally even duplicate or
lose a period in column 1).)

------------------------------
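The character-count scheme Mabry describes can be made concrete.  What
follows is a minimal sketch in Python (a modern notation; the format
and the function names are invented for illustration, not any existing
digest standard): each message is preceded by a line giving its length
in characters, so the body can contain anything at all, including
lines of dashes.

    import io

    def write_counted_digest(messages, out):
        # Hypothetical length-prefixed framing: a count line, then
        # exactly that many characters of message text.
        for msg in messages:
            out.write("%d\n" % len(msg))
            out.write(msg)

    def read_counted_digest(inp):
        # Inverse of the above.  As Mabry notes, this only works if
        # the transport delivers the text byte-for-byte; mailers that
        # rewrite line endings will throw the counts off.
        messages = []
        while True:
            count = inp.readline()
            if not count:
                break
            messages.append(inp.read(int(count)))
        return messages

    buf = io.StringIO()
    write_counted_digest(["first\n", "second\n------\nstill second\n"],
                         buf)
    buf.seek(0)
    assert read_counted_digest(buf) == ["first\n",
                                        "second\n------\nstill second\n"]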
Date: Wed, 9 Nov 83 4:45:47 EST
From: Stephen Wolff <steve@brl-bmd>
Subject: [Andy Adler: Mail Digests]

    From: Andy Adler <andya@BBN-UNIX>
    Subject: Mail Digests

    If we could come to some sort of agreement on the form of these
    digests, ....... then it would be possible to write filters to
    process them, .......  Currently, one must resort to heuristic
    approaches.

    Andy Adler

On all the digests I know about, every contribution begins with a
legal, parsable header.  The version of MSG (p/o MMDF) in use here has
an "undigestify" command which produces from the digest a collection
of rock-stock messages, each of which can be squirreled away in its
subject file, or answered, or forwarded, or what-have-you.  That is,
after the "undigestification", rfc822 gives you all you need to know
(and all you CAN know) to build an automatic post-processor.

-steve

------------------------------

Date: 9 November 1983 18:29 EST
From: Gail Zacharias <GZ @ MIT-MC>
Subject: Mail Digests

Babyl has an Undigestify command which uses the following conventions:
a digest starts with an introductory section, which is terminated by a
line of about 70 dashes.  The first word of this section is the list
name.  After this section come the messages, separated by a blank line
followed by a line of about 30 dashes.  (The reason for the blank line
is to allow the use of dashes to "underline" text within messages.)
Each message consists of any number of blank lines, followed by a
regular mail message, header and all.

One gross hack which I found necessary is for the undigestifier to
parse the header of each message and check whether the list name is
among the recipients.  If not, it adds the line "To: ListName@MC" to
the header.  This is because some digests, like Human-nets and
SF-lovers, perversely remove the To: line from the individual
messages, making it difficult to include the list in replies.  This
ad hoc method is better than nothing, and most of the lists which do
this gratuitous pruning have forwarding pointers at MC...

------------------------------
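Gail's conventions are concrete enough to code directly.  Here is a
minimal sketch in Python (again a modern notation; it is re-derived
from the description above, not the actual Babyl code):

    import re

    def undigestify(text, list_name):
        # The introductory section ends at a line of about 70 dashes.
        parts = re.split(r"\n-{55,}\n", text, maxsplit=1)
        if len(parts) != 2:
            return []
        messages = []
        # Messages are separated by a blank line followed by a line
        # of about 30 dashes; requiring the blank line keeps dashed
        # "underlines" inside a message from looking like separators.
        for chunk in re.split(r"\n\n-{25,40}\n", parts[1]):
            msg = chunk.lstrip("\n")    # any number of leading blanks
            if not msg or msg.startswith("End of "):
                continue                # skip the digest trailer
            # The "gross hack": if the list is not among the header's
            # recipients, add a To: line so replies can reach it.
            header = msg.split("\n\n", 1)[0]
            if list_name.lower() not in header.lower():
                msg = "To: %s@MC\n%s" % (list_name, msg)
            messages.append(msg)
        return messages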
Date: 9 November 1983 23:29 EST
From: Alyson L. Abramowitz <ALA @ MIT-ML>
Subject: Digest Standards

Andy asked in the latest issue of Human-Nets whether there are any
standards for formatting digests.  The answer is a definite yes.  For
example, there is a standard number of dashes and blank lines between
messages, and the Today's Topics section of each digest provides a
real (albeit simple) index into each issue.

This format is reasonably consistent across quite a number of digest
lists carried over, and originating from, a multitude of networks.
It's consistent enough that programs can be (and have been) written to
help moderators create digests and to help readers "undigestify" them
(split a digest into individual messages).  As a matter of fact, I
believe our Human-Nets moderator uses one of them, written by a former
HNT moderator, Mike Peeler, and a former SFL moderator, Jim McGrath.
If you wanted to write a tool to help you read parts of a digest, you
would find it a very reasonable task.

That's the good news.  Here's the bad news: the exact format is not
written down in one complete document anywhere -- not formally, as an
RFC or other network standard, and not even informally all in one
place.  It's passed down from one moderator, tools writer,
redistributor, etc., to another.  At various times a few of us who
have held these "positions" have said we would document the
"standard".  Alas, those who take these tasks on tend to be
over-committed (generally doing these tasks), and it has never
happened.  And that can create "interesting" problems when these jobs
change hands, as those who have lived through moderator changes, for
example, have seen.

Meanwhile, however, I suspect, Andy, you'd find no problem in getting
those of us who understand the format to answer enough questions to
allow you to write whatever software you have in mind.

Enjoy,
Alyson

------------------------------

Date: 9 November 1983 23:00 EST
From: Keith F. Lynch <KFL @ MIT-MC>
Subject: Why save everything?

Why not?  The cost of storage is dropping rapidly.  So is the cost of
processor time, making compression more practical.  I have all my
netmail for the past 15 months stored (compressed) on my VAX.  It is
about 20 million characters (when not compressed) and is currently
growing by about a character per second on the average.  I hope to
keep it indefinitely and to continue to accumulate netmail
indefinitely.  I also have all my VAX mail (work related) online, in
non-compressed form.

Whenever a question comes up in reference to anything I have done in
the past, I get into my favorite editor and search the mail file (my
VAX mail file is currently only about 2 million characters); I can
find any character string in about 2 minutes of processor time.
Better search algorithms are needed.  I CC every outgoing message to
myself and make sure that the topic is clearly mentioned: if the
message is about a Printronix printer called LPB0:, I make sure the
words "Printronix", "printer", and "LPB0:" appear in the message, to
facilitate future searches.  Most messages from other people are not
so clearly labelled, and it is very hard to search for some things,
such as a phone number someone mailed me a few months ago if I get
many messages from that person every week.

I think that programs that parse sentences and can "understand" what
is being talked about will be of great help in this, once they become
more widely available (distributed as system software on every new
word processor, perhaps) and as soon as processor speeds become fast
enough to let them run in a reasonable amount of time (one page per
second would be good).

We have no idea what future generations may consider to be of great
value!  Who would have predicted that old comic books would sell for
thousands of dollars?  Consider the Renaissance painters who painted
over masterpieces because they couldn't afford new canvas.
Fortunately it is sometimes possible to peel off the newer paint and
restore what was underneath.  I doubt we will be so lucky with
magnetic disks!

If only we had more information about the past!  So little was
considered worth recording that much of what we know about what really
happened was discovered through archaeology, even for periods as late
as parts of the last century.

I don't know about you, but I want to be remembered.  For the first
time, computer technology is making it possible to save almost
everything.  Even if it is not presently possible to index it properly
and to rapidly locate what one is looking for, someday it will be.
Perhaps someday, maybe in a hundred years, maybe in a thousand, there
will be a system so "intelligent" that it will be able to "know" all
accumulated knowledge and be able to visualize it all at once and see
all the interrelationships, rather as a human being can now do with a
single paragraph.  Perhaps this system will be able to know all the
people then living as well as or better than a person knows their best
friends.  I wonder if this issue of Human-Nets will be a part of its
memory.

...Keith

------------------------------
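The brute-force search Keith describes is easy to reproduce.  A
minimal sketch in Python (modern notation; the archive name and the
choice of gzip compression are assumptions for illustration, not
Keith's actual setup):

    import gzip

    def search_mail(archive, pattern):
        # Scan a compressed mail archive and print every line that
        # contains the pattern, ignoring case.
        pattern = pattern.lower()
        with gzip.open(archive, "rt", errors="replace") as f:
            for number, line in enumerate(f, 1):
                if pattern in line.lower():
                    print("%d: %s" % (number, line.rstrip()))

    # Keith's keyword discipline pays off here: a search for any one
    # of the terms he plants will turn up the message.
    search_mail("netmail.gz", "Printronix")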
Date: 10 November 1983 04:02 est
From: SSteinberg.SoftArts at MIT-MULTICS
Subject: historical curiosities

Anyone who reads any history (or history of science) quickly realizes
that there is IMMENSE value in keeping track of all the random
ephemera of a particular period.  You might not want a copy of a 1927
issue of Popular Radio for its clever circuit diagrams, but if you are
interested in the impact and perception of hobby radio then this issue
is priceless.  If you've ever read any of Steve Gould's essays on
paleontology, biology, or baseball, you will learn a great deal about
how human thought and science progress.  When scientists suggested
that each human contains miniature versions of every possible
descendant, they weren't being ludicrous; they had come up with a
reasonable solution in perfect concord with a world a few thousand
years old with a few thousand to go.  In any event it was a much
better solution than ANY of the alternatives.

There has been a tremendous amount of stuff written and an even more
incredible amount of stuff lost.  If you want to understand Tudor
England you might want to wade through the Lisle letters (all six
volumes -- not the one-volume condensation), and that will give you
some idea of one family.  If you want to understand how to make a
Hungarian roast pork dish, you might want to read up on Hungarian and
European folk cooking before deciding how "authentic" you want your
dish to be.  The modern recipes are distant descendants, with modern
techniques appended, of the original dishes, which in turn record the
various ethnic groups, external perceptions, and economic geography of
the area.  It is quite easy to have a 200-book + 500-magazine cooking
library and still not have a recipe for a single Burmese dish.  A
friend of mine had to rederive the recipe for Sar Moo Sar since it's
not clear there is a written one.

In science there is a bias toward current information, but anyone who
wants to understand what they are doing and step back in the hope of a
new perspective needs to understand the past with all its random
short-lived "facts".  As the biologists always point out, it isn't
really fair to call a living organism primitive, since it has several
billion years of evolution behind it just as we do.

Bring back the computer - (SAVe THE WHALE) - Seth

------------------------------

Date: Sun, 6 Nov 83 23:35:20 pst
From: unisoft!pertec!bytebug@Berkeley
Subject: kids and computer crime

    Date: 28 October 1983 23:33 edt
    From: Dehn at MIT-MULTICS (Joseph W. Dehn III)
    Subject: kids and computer crime

    It is indeed an unfortunate situation when computer crime has to
    be dealt with by having FBI agents break into kids' rooms, but
    this is just another symptom of the tremendous ignorance and
    confusion that still exists among the general public regarding
    computers and their use.
    . . .
    Where are those kids supposed to have learned what is right and
    wrong in computer access?  Where are kids supposed to learn about
    any kind of right and wrong?  Presumably the process begins in
    the home...

Certainly the process begins at home, and most likely these days in
front of a TV set that doubles as a display for a handy-dandy
home-computer/video game!  You and I may chuckle to ourselves at the
exploits of the "Whiz Kids" TV series, but just how many kids can
distinguish the reality of life from the fiction of a television
script?  In this past week's episode, as in those I've seen before it,
we see the stars of this series doing things WHICH ARE CLEARLY
ILLEGAL.  Do you want to see how far I'd get if I were caught breaking
into the DMV computer to query who has a particular license plate?  A
show such as this is just what every kid needs to convince him that
cracking into the local university computer system is just the latest
video game created for his/her amusement.

So, will the FBI busting a few kids do anything?  Perhaps, but I
wouldn't count on it.  A lot more needs to be done to educate the
public that the box of electronic chips in their living room can be
just as dangerous as a loaded gun, and that they (parents) need to
take an active role in ensuring that their kids are taught its proper
use.  We need to teach the public (system administrators) that proper
security is just as important as not leaving your keys in the car.

-roger

------------------------------

Date: 7 Nov 1983 at 2058-PST
Subject: Computer Crime
From: zaumen@sri-tsc

Just saw an article in the local paper: it's about a "19-year-old UCLA
student" who used a home computer to break into a "Defense Department"
communications system.  What's interesting is the official reaction:

    "This is not some childish prank," said District Attorney Robert
    Philibosian.  "We're talking about an individual who has cost the
    federal government, private organizations, and universities
    literally hundreds of thousands of dollars in reprogramming
    costs." ...  Philibosian said that all the computer systems
    reached by Austin had to be reprogrammed. ...  "Some of the
    information was very sensitive," he said.  We can't give a more
    complete description at this time.

There was no mention of deleted or modified files, so I presume the
"reprogramming" meant that passwords had to be changed.  Does this
*really* cost $$$$$, or is the DA technically confused (or running for
office)?  One wonders at talk of "very sensitive" information with
*no* other details (did he see reports, personnel records, source
code, or what?).

Seems to me that if this becomes a serious problem, a computer's
modems could be rigged to call back the user to establish a
connection.  This might be awkward while travelling (especially if
there is a restriction on phone numbers that the computer will call),
but it is certainly reasonable for system accounts.

------------------------------
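The call-back arrangement suggested above is simple to state in code.
A minimal sketch in Python (modern notation; the Modem class is a
stand-in for real dial-out hardware, and the accounts and numbers are
invented):

    CALLBACK_NUMBERS = {          # registered in advance, offline
        "operator": "415-555-0100",
        "backup":   "415-555-0101",
    }

    class Modem:
        def hangup(self):
            print("[modem] hanging up incoming call")
        def dial(self, number):
            print("[modem] dialing %s" % number)
            return True           # pretend the remote end answered

    def callback_login(modem, username):
        # Never trust the inbound connection: drop it, then dial the
        # number registered for the account.  An unknown account is
        # simply refused.
        number = CALLBACK_NUMBERS.get(username)
        modem.hangup()
        if number is None:
            return False
        return modem.dial(number)

    callback_login(Modem(), "operator")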
End of HUMAN-NETS Digest
************************