nut@wet.UUCP (adam tilghman) (07/18/90)
does not require MTCSH to be in operation for it to run. I would also be
_very_ interested in any type of news package for the ST - the kludge
system that I have been using for the past 6 months is not able to handle
the kind of traffic that I am now handling.

Thanks!

Adam Tilghman
-- 
== Adam G. Tilghman - Trendy Quote: "Beware of Greeks bearing Trojans!" ==
============ Bang-Path UUCP: {uunet | ucbvax}!unisoft!wet!nut ============
=============== Disclaimer? My employer? What, me work? ================
steve@thelake.mn.org (Steve Yelvington) (07/18/90)
[In article <19896.26a3ddf7@oregon.uoregon.edu>, nut@wet.UUCP (adam
tilghman) writes ... ]

 > I am looking for a full implementation of UUCP for the ST - I have been
   (garble deleted)
 > does not require MTCSH to be in operation for it to run.

I guess we're close enough to releasing this to give a status report:

A couple of years ago, Dale Schumacher modified UUSLAVE into UUMASTER,
then evolved it into UUMAIL, a single program that included a simple UUCP
transport mechanism and a simple user interface. Various other folks added
functionality -- Kent Schumacher wrote a file pager, John Stanley wrote a
superb reader that organized the messages visually, and I wrote basic news
software.

UUMAIL had problems, though, and because of its kitchen-sink design and
its history, it was just about impossible to figure out what was going on
inside it. And it was slow.

So Dale decided that the solution was to junk UUMAIL and build a new mail
system based on various freely available components from Un*x, plus
original tools to fill in the gaps. He asked for volunteers from the
Minnesota Atari ST (MAST) user group. Eventually it boiled down to Dale,
Tom Cook, John Stanley and me.

We've been working on this for what seems like forever. Every time we
think we're close to releasing it, we either find one more bug or notice
one little problem (docs? You mean people would want docs?). The software
runs, though, at eight or nine sites that I know of. We have set up a
mailing list for discussion among folks who are testing it and folks who
want it. If you would like to be added, send e-mail to
st-mail-request@thelake.mn.org.

The package includes:

  * A uucico program based on dcp with Peter Housel's 7-packet windowing
    'G' driver. Translated into English, that means it's fast -- the
    protocol does not wait for each individual packet to be ACKed before
    sending the next one. Tom Cook is running it with a Telebit T2500 and
    getting excellent throughput.
  * Smail 2.5, modified somewhat for single-tasking systems. Most of the
    functionality of smail remains -- you can resolve complicated domain
    addresses, @, % and !. You can shelter unregistered systems behind
    your FQDN, as Tom Cook has done:
    user@your_favorite_bbs.citadel.moundst.mn.org.

  * lmail as the local delivery agent, allowing for even more flexibility
    in aliasing and the maintenance of mailing lists. lmail also can
    deliver to pipes, which means you could write an archive-server if
    you're sufficiently nuts. I wrote a mail-to-news gateway. lmail also
    generates intelligent error messages and bounces bad mail properly.

  * Many original programs, including

      cron, a timed-event scheduler
      email, a user interface for writing messages
      readmail, a user interface for reading messages
      various utilities

 > I would also be _very_ interested in any type of news package for the
 > ST - the kludge system that I have been using for the past 6 months is
 > not able to handle the kind of traffic that I am now handling.

Our package includes news support, but not B or C News.

I wrote an rnews program for UUMAIL that unbatched news and wrote single
files to a spool directory. It worked well but slowly -- GEMDOS is
miserably slow in creating new files on a drive that's three-quarters
full. (In my experience, all disk drives are three-quarters full, except
the ones that are completely full.)

After Tom shipped me the new mail package, I wrote a quick-and-dirty rnews
that delivered news to Unix-style mailbox files, as determined by a file
that aliases newsgroups to mailboxes. This avoids the slow GEMDOS
file-creation problem, but it unfortunately renders John Stanley's
excellent UUREADER for UUMAIL useless. As a result, John is still using
the old rnews with the new mail package. This is OK, since his
dynasoft.UUCP is a leaf node.

Dale heavily rewrote and improved rnews.
I added a couple more features, he fixed them :-), and the result is a
very fast news processor that delivers local copies, forwards batches to
upstream and downstream sites, and understands complicated sys file
entries with '!' negation and ".all" or ".*". The program does not yet
handle compression.

Dale wrote a sendbatch utility to call uux remote!rnews. I wrote a
postnews program that seems to be working well. I'm using it now.

The weak spot is the newsreader -- there isn't one. I'm using a shell
script to call readmail -f<newsgroup> for about 35 newsgroups. Since
readmail doesn't have any idea what the highest-message-read might have
been, this is not optimal.

This whole mail/news package is not 100 percent ready, the programs are
not 100 percent reliable, and the documentation is not 100 percent
accurate (or even finished). It will never be bug-free, since TOS has
internal problems and these programs hammer the daylights out of TOS. I
still get error 65 (TOS internal error) and error 35 (out of file handles)
occasionally while sending mail.

Nevertheless, anybody who seriously wants to set up a UUCP mail/news
system can ask to be put on the discussion list in the hopes that we'll
eventually mail out some software. Again, the address is:

    st-mail-request@thelake.mn.org

-- 
Steve Yelvington at the lake in Minnesota           steve@thelake.mn.org
nut@wet.UUCP (adam tilghman) (07/19/90)
Message-ID: <1344@wet.UUCP>
Date: 18 Jul 90 02:49:30 GMT
Reply-To: nut@wet.UUCP (adam tilghman)
Organization: Wetware Diversions, San Francisco
Lines: 14

I am looking for a full implementation of UUCP for the ST - I have been

does not require MTCSH to be in operation for it to run. I would also be
_very_ interested in any type of news package for the ST - the kludge
system that I have been using for the past 6 months is not able to handle
the kind of traffic that I am now handling.

Thanks!

Adam Tilghman
-- 
== Adam G. Tilghman - Trendy Quote: "Beware of Greeks bearing Trojans!" ==
============ Bang-Path UUCP: {uunet | ucbvax}!unisoft!wet!nut ============
=============== Disclaimer? My employer? What, me work? ================
gl8f@astsun.astro.Virginia.EDU (Greg Lindahl) (07/20/90)
In article <A1542853874@thelake.mn.org> steve@thelake.mn.org (Steve
Yelvington) writes:

>I wrote an rnews program for UUMAIL that unbatched news and wrote
>single files to a spool directory. It worked well but slowly -- GEMDOS
>is miserably slow in creating new files on a drive that's
>three-quarters full. (In my experience, all disk drives are
>three-quarters full, except the ones that are completely full.)

This is a known and fixed problem in TOS 1.0 and TOS 1.2 that has been
discussed here repeatedly. Just like MS-DOS 2.XX, these TOS versions pick
a new cluster by searching from the start of the FAT, every time. When
your drive is mostly full, this takes quite a while. TOS 1.4 has a
next-fit algorithm instead, which starts from the last cluster it
allocated.

If you are running 1.0 or 1.2, you can use FATSPEED to fix this problem.
FATSPEED also accelerates finding the number of free bytes on a drive. I
have been using FATSPEED for several years and it seems very reliable.

-- 
"In fact you should not be involved in IRC." -- Phil Howard
steve@thelake.mn.org (Steve Yelvington) (07/20/90)
[In article <1990Jul19.173152.1647@murdoch.acc.Virginia.EDU>,
gl8f@astsun.astro.Virginia.EDU (Greg Lindahl) writes ... ]

> In article <A1542853874@thelake.mn.org> steve@thelake.mn.org (Steve
> Yelvington) writes:
>
>>I wrote an rnews program for UUMAIL that unbatched news and wrote
>>single files to a spool directory. It worked well but slowly -- GEMDOS
>>is miserably slow in creating new files on a drive that's
>>three-quarters full. (In my experience, all disk drives are
>>three-quarters full, except the ones that are completely full.)
>
> This is a known and fixed problem in TOS 1.0 and TOS 1.2 that has been
> discussed here repeatedly. Just like MS-DOS 2.XX, these TOS versions
> pick a new cluster by searching from the start of the FAT, every time.
> When your drive is mostly full, this takes quite a while. TOS 1.4 has
> a next-fit algorithm instead, which starts from the last cluster it
> allocated.
>
> If you are running 1.0 or 1.2, you can use FATSPEED to fix this
> problem. FATSPEED also accelerates finding the number of free bytes
> on a drive. I have been using FATSPEED for several years and it seems
> very reliable.

Yeah, I'm still stuck back in the dark ages with TOS 1.0. I heartily
recommend FATSPEED -- along with a few other programs such as PINHEAD (or
NULLFILL) and a disk cache, it helps make TOS a lot less painful.

Even with that help, though, we found that creating new files involved
significant overhead, especially when the directories got big (sometimes
300 or 400 files) and the drive got fragmented. Unbatching large
quantities of Usenet news was taking more time than receiving it at
1200bps. If the ST were a multitasking system it wouldn't have been such a
problem, but it's not -- and having to wait 45 minutes while the machine
processes news isn't my idea of fun.

Ditching the whole idea of "one message, one file" was just one of the
speed enhancements. We also pull an entire Usenet message into RAM before
processing it.
The header is scanned using pointers and offsets; the "Path:" line is
examined and updated; the "Message-ID:" line is extracted for comparison
with a history file, and the "Newsgroups:" line is extracted for delivery
and forwarding purposes. Disk writes then take place using single Fwrite()
calls.

Dale Schumacher's most recent speed enhancement is crunching the
message-IDs to a 32-bit CRC, which allows a simple comparison of long
integers to replace a function call to strcmp() and also cuts the RAM
usage. This rnews is about as fast as it can get.

-- 
Steve Yelvington at the lake in Minnesota           steve@thelake.mn.org
gl8f@astsun9.astro.Virginia.EDU (Greg Lindahl) (07/21/90)
In article <A339547380@thelake.mn.org> steve@thelake.mn.org (Steve
Yelvington) writes:

>Even with that help, though, we found that creating new files involved
>significant overhead, especially when the directories got big (sometimes
>300 or 400 files) and the drive got fragmented. Unbatching large
>quantities of Usenet news was taking more time than receiving it at
>1200bps.

This is a problem on many operating systems -- they use a poor algorithm
to search in directories, so performance goes to heck, or the directory
gets fragmented on disk, which means even efficient search algorithms can
be slow. There was a discussion about this on comp.unix.wizards recently.

The traditional work-around is to grab one character from the filename and
put the file in a subdirectory with that name, i.e. "fred" would be
"f\fred". Then you end up with 36x fewer files on average per subdirectory
and things run at a traditional speed.

Of course, the best solution is to change the algorithm, and it sounds
like you guys have made the right trade-offs between memory and disk to
speed things up substantially. Is it faster than C News yet? ;-)

-- 
"In fact you should not be involved in IRC." -- Phil Howard